LLM robustness reference: WFGY 1.0 self-healing framework + ProblemMap #2806

@onestardao

Description

Hi, and thanks for maintaining ART; it's one of the key libraries in the robustness / ML security ecosystem.

I’m working on WFGY 1.0, an open-source framework focused on LLM robustness, self-healing, and RAG debugging.

At a high level:

  • WFGY defines a problem-oriented view of LLM failures, with 16 categories covering RAG drift, reasoning collapse, entropy collapse, deployment / infra ordering problems, etc.
  • The WFGY 1.0 tech report also includes adversarial attack testing (PGD) on LLM tasks, with robustness numbers reported under extreme conditions (per the abstract).
  • Everything is released under MIT, with enough detail to be reproducible.

I wondered if there might be room for one of the following:

  1. Documentation cross-reference
    E.g. linking to WFGY 1.0 as an example of an LLM-centric self-healing / adversarial evaluation framework that builds on robustness ideas, perhaps in a “Related projects / LLM resources” paragraph.

  2. Future LLM example
    If you expand the LLM coverage in ART, WFGY’s ProblemMap and adversarial prompts could be a candidate dataset / scenario for a tutorial notebook.

If this doesn’t match ART’s scope or roadmap, feel free to close; I understand you need to keep the project focused. I just wanted to put it on your radar as a potential LLM-side complement to the robustness work you already do.
