fix(openrouter): strip 'openrouter/' prefix in chat transform_request#24275
NIK-TIGER-BILL wants to merge 1 commit into `BerriAI:main`
Conversation
Fixes BerriAI#24234

The embedding transformer already strips the `openrouter/` prefix before sending the model name to the API (see `litellm/llms/openrouter/embedding/transformation.py:112-113`). The chat transformer was missing the same guard: when litellm receives a model string like `openrouter/mistralai/mistral-7b-instruct`, it internally preserves the `openrouter/` prefix (`get_llm_provider_logic.py` returns early at L166), so the full string was forwarded to the OpenRouter API, which does not recognise it and returns a 404/invalid-model error.

Fix: strip the `openrouter/` prefix at the top of `OpenrouterConfig.transform_request()`, consistent with the approach already used in the embedding transformer.
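The guard described above can be sketched as a small helper (illustrative only; in the actual PR the check lives inline at the top of `OpenrouterConfig.transform_request()`):

```python
def strip_openrouter_prefix(model: str) -> str:
    """Drop a leading 'openrouter/' so the bare model id is sent upstream."""
    prefix = "openrouter/"
    if model.startswith(prefix):
        return model[len(prefix):]
    return model

print(strip_openrouter_prefix("openrouter/mistralai/mistral-7b-instruct"))
# mistralai/mistral-7b-instruct

# A model string without the prefix passes through unchanged.
print(strip_openrouter_prefix("mistralai/mistral-7b-instruct"))
# mistralai/mistral-7b-instruct
```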
NIK-TIGER-BILL seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Have you signed the CLA already but the status is still pending? Let us recheck it.
Greptile Summary

This PR fixes a bug in the OpenRouter chat request transformation: the `openrouter/` prefix was not stripped from the model string before the request was sent to the API.
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| litellm/llms/openrouter/chat/transformation.py | Strips the openrouter/ prefix from the model string at the start of transform_request(), mirroring the same logic already in the embedding transformation. The fix is correct and minimal, though no new test explicitly asserts the stripped model value in the resulting request dict. |
Sequence Diagram
```mermaid
sequenceDiagram
    participant Caller as Caller (litellm)
    participant Logic as get_llm_provider_logic
    participant Chat as OpenrouterConfig.transform_request
    participant API as OpenRouter API
    Caller->>Logic: model="openrouter/mistralai/mistral-7b-instruct"
    Logic-->>Caller: returns early, model unchanged
    Note over Caller,Chat: Before fix — prefix is forwarded as-is
    Caller->>Chat: model="openrouter/mistralai/mistral-7b-instruct"
    Chat->>API: { "model": "openrouter/mistralai/mistral-7b-instruct" }
    API-->>Chat: ❌ 400 — model not recognised
    Note over Caller,Chat: After fix — prefix is stripped in transform_request
    Caller->>Chat: model="openrouter/mistralai/mistral-7b-instruct"
    Chat->>Chat: model.startswith("openrouter/") → strip prefix
    Chat->>API: { "model": "mistralai/mistral-7b-instruct" }
    API-->>Chat: ✅ 200 OK
```
Last reviewed commit: "fix(openrouter): str..."
```python
if model.startswith("openrouter/"):
    model = model[len("openrouter/"):]
```
Minor style inconsistency with embedding transformation
The analogous logic in the embedding transformation (litellm/llms/openrouter/embedding/transformation.py, line 113) uses model.replace("openrouter/", "", 1), while this PR uses model[len("openrouter/"):]. Both produce identical results given the startswith guard, but keeping the same idiom makes the codebase easier to maintain.
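Under the `startswith` guard the two idioms are interchangeable, which a quick check confirms (example value only):

```python
model = "openrouter/mistralai/mistral-7b-instruct"
prefix = "openrouter/"

if model.startswith(prefix):
    by_slice = model[len(prefix):]             # idiom used in this PR
    by_replace = model.replace(prefix, "", 1)  # idiom used in the embedding transformer

# Both strip exactly one leading occurrence of the prefix.
assert by_slice == by_replace == "mistralai/mistral-7b-instruct"
```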
```diff
 if model.startswith("openrouter/"):
-    model = model[len("openrouter/"):]
+    model = model.replace("openrouter/", "", 1)
```
```python
if model.startswith("openrouter/"):
    model = model[len("openrouter/"):]
```
Missing regression test for the bug fix
The existing tests in tests/test_litellm/llms/openrouter/chat/test_openrouter_chat_transformation.py pass model="openrouter/..." to transform_request() but none of them assert that the resulting transformed_request["model"] has the prefix stripped. A dedicated test case would directly verify the fix and guard against regression:
```python
def test_openrouter_transform_request_strips_provider_prefix():
    """Model field sent to the API must not contain the 'openrouter/' prefix."""
    config = OpenrouterConfig()
    transformed_request = config.transform_request(
        model="openrouter/mistralai/mistral-7b-instruct",
        messages=[{"role": "user", "content": "Hello"}],
        optional_params={},
        litellm_params={},
        headers={},
    )
    assert transformed_request["model"] == "mistralai/mistral-7b-instruct"
```

Rule Used: What: Ensure that any PR claiming to fix an issue ...
Summary
Fixes #24234
Problem
OpenRouter chat completions are broken when the model is passed with the `openrouter/` provider prefix (e.g. `openrouter/mistralai/mistral-7b-instruct`). The full string, including `openrouter/`, was being sent to the OpenRouter API, which does not recognise it and returns an error.

Root cause

`get_llm_provider_logic.py` returns early at L166 when it detects `custom_llm_provider == "openrouter"` and `model.startswith("openrouter/")`, preserving the prefix in the model string. `OpenrouterConfig.transform_request()` then forwarded this unparsed string directly to the upstream API.

Fix

Strip the `openrouter/` prefix at the start of `OpenrouterConfig.transform_request()`, consistent with the approach already used in `OpenrouterConfig` for embeddings (`litellm/llms/openrouter/embedding/transformation.py`, lines 112–113).

Testing
Manually verified that the transform now produces the correct model string. Unit tests pass.
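As a rough before/after illustration of the effect on the outgoing request body (values are examples, not taken from the PR's test suite):

```python
prefix = "openrouter/"
model = "openrouter/mistralai/mistral-7b-instruct"

# Before the fix: the prefixed string was forwarded verbatim.
before = {"model": model}

# After the fix: the prefix is stripped before the request is built.
after = {"model": model[len(prefix):] if model.startswith(prefix) else model}

assert before["model"] == "openrouter/mistralai/mistral-7b-instruct"
assert after["model"] == "mistralai/mistral-7b-instruct"
```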