Implement LLM fallbacks #352

@bracesproul

Description

We should implement a fallback system for the LLM .invoke calls. It should abstract the .invoke calls into a wrapper which catches any errors thrown. If an error is thrown, it switches to a different LLM provider and retries the request. It should keep falling back to different providers until we've exhausted all providers (OpenAI, Google, Anthropic). Don't just switch models; actually switch entire providers. Ensure this is applied to every LLM call inside the open-swe app (inside the graphs/ directory).

Just pick one model as a fallback for each provider.
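A minimal sketch of what such a wrapper could look like. The provider names, invoke signature, and stub models below are illustrative stand-ins, not open-swe's actual API (in practice this would likely wrap LangChain chat models rather than plain functions):

```typescript
// Hypothetical sketch of a provider-level fallback wrapper.
// Each entry represents one provider with a single chosen fallback model.
type Invoke = (prompt: string) => Promise<string>;

interface Provider {
  name: string; // e.g. "openai", "google", "anthropic" (assumed labels)
  invoke: Invoke;
}

// Try each provider in order; on error, fall through to the next one.
// Only throws after every provider has been exhausted.
async function invokeWithFallbacks(
  providers: Provider[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider.invoke(prompt);
    } catch (err) {
      // A real implementation would log the failure before moving on.
      lastError = err;
    }
  }
  throw new Error(`All LLM providers exhausted. Last error: ${String(lastError)}`);
}

// Demo with stub providers: the first one fails, so the wrapper
// transparently retries with the second.
const providers: Provider[] = [
  {
    name: "openai",
    invoke: async (): Promise<string> => {
      throw new Error("rate limited");
    },
  },
  { name: "google", invoke: async (p) => `google: ${p}` },
  { name: "anthropic", invoke: async (p) => `anthropic: ${p}` },
];

invokeWithFallbacks(providers, "hello").then((r) => console.log(r));
// → "google: hello"
```

Every .invoke call site in graphs/ would then go through invokeWithFallbacks (or an equivalent wrapper) instead of calling a single model directly.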

Metadata


    Labels

    open-swe-auto: Create a new Open SWE run on the selected issue, auto-accept the plan
