Open Issues from berriai/litellm Repository
Retrieved on: 2026-03-26 16:00 UTC
Total Open Issues: 30
Issue List
#24638 - fix: use redis_kwargs host/port in cache ping health check (#24636)
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T15:55:26Z
- URL: fix: use redis_kwargs host/port in cache ping health check (#24636) BerriAI/litellm#24638
- Author: sjhddh
- Description: Fixes #24636. The /cache/ping endpoint was returning None for host and port because it didn't fall back to the settings configured in redis_kwargs. This ensures the actual Redis host/port are displayed in the Admin UI.
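The fallback behavior this PR describes can be sketched as follows. This is an illustrative sketch only: the function name and dict shapes are assumptions, not LiteLLM's actual internals.

```python
# Hypothetical sketch of the described fix: prefer explicitly configured
# host/port, then fall back to the values nested in redis_kwargs.
def resolve_redis_host_port(cache_params: dict, redis_kwargs: dict) -> tuple:
    """Return (host, port), preferring top-level params over redis_kwargs."""
    host = cache_params.get("host") or redis_kwargs.get("host")
    port = cache_params.get("port") or redis_kwargs.get("port")
    return host, port

# Before the fix, a host/port configured only in redis_kwargs surfaced as None.
print(resolve_redis_host_port({}, {"host": "redis.internal", "port": 6379}))
```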
#24637 - Fix overlong S3 logging object keys
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T15:35:17Z
- URL: Fix overlong S3 logging object keys BerriAI/litellm#24637
- Author: raashish1601
- Description: Fixes #24628. Caps the final S3 filename component to a filesystem-safe length inside get_s3_object_key, preserves the readable time/id prefix while appending a deterministic hash suffix for overlong keys, and adds regression coverage.
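The capping strategy described above (keep the readable prefix, append a deterministic hash for overlong names) can be sketched like this. The helper name and the 255-byte limit are assumptions for illustration, not LiteLLM's actual code.

```python
import hashlib

MAX_COMPONENT_LEN = 255  # common filesystem limit for a single filename component

def cap_filename(name: str, max_len: int = MAX_COMPONENT_LEN) -> str:
    """Cap a filename component, appending a deterministic hash if truncated."""
    if len(name) <= max_len:
        return name
    digest = hashlib.sha256(name.encode()).hexdigest()[:16]
    keep = max_len - len(digest) - 1  # leave room for "-<digest>"
    return f"{name[:keep]}-{digest}"
```

Because the suffix is derived from the full original name, two different overlong keys cannot silently collapse to the same truncated key.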
#24636 - Cache health check returns None for host and port despite working Redis connection
- Type: Issue
- Status: Open
- Created: 2026-03-26T15:20:18Z
- URL: Cache health check returns None for host and port despite working Redis connection BerriAI/litellm#24636
- Author: ThePlenkov
- Labels: None
- Description: The /cache/ping endpoint returns "None" for host and port in the top-level health check params, even though the Redis connection is fully operational and the correct values are present in the nested redis_kwargs.
#24635 - feat(volcengine): add image generation support for Ark/Seedream
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T14:31:25Z
- URL: feat(volcengine): add image generation support for Ark/Seedream BerriAI/litellm#24635
- Author: forrestIsRunning
- Description: Adds first-class image generation support for VolcEngine (ByteDance Ark) Seedream models with dedicated VolcEngineImageGenerationConfig routing through the llm_http_handler path.
#24634 - fix(security): add SSRF protection to custom code guardrail HTTP primitives
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T14:20:22Z
- URL: fix(security): add SSRF protection to custom code guardrail HTTP primitives BerriAI/litellm#24634
- Author: 1Ckpwee
- Description: Added _validate_url_for_ssrf(), which blocks private IPs, loopback, link-local/metadata, and other RFC ranges before every outbound HTTP request to prevent SSRF vulnerabilities.
#24633 - Litellm fix opus tests
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T14:17:08Z
- URL: Litellm fix opus tests BerriAI/litellm#24633
- Author: Sameerlite
- Description: Fix tests related to Opus.
#24632 - Fix tests
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T14:13:21Z
- URL: Fix tests BerriAI/litellm#24632
- Author: Sameerlite
- Description: General test fixes.
#24631 - feat(models): add openrouter/minimax/minimax-m2.7 model pricing
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T14:09:30Z
- URL: feat(models): add openrouter/minimax/minimax-m2.7 model pricing BerriAI/litellm#24631
- Author: 1Ckpwee
- Description: Adds openrouter/minimax/minimax-m2.7 to model_prices_and_context_window.json and the backup file, as requested in #24601.
#24629 - feat(models): add openrouter/minimax/minimax-m2.7 to model pricing registry
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T13:11:27Z
- URL: feat(models): add openrouter/minimax/minimax-m2.7 to model pricing registry BerriAI/litellm#24629
- Author: Retr0-XD
- Description: Adds the openrouter/minimax/minimax-m2.7 model to the pricing and context window registry. This is the latest model in the MiniMax M2 series.
#24628 - [Bug]: S3 filename too long
- Type: Issue
- Status: Open
- Created: 2026-03-26T12:53:04Z
- URL: [Bug]: S3 filename too long BerriAI/litellm#24628
- Author: rodriciru
- Labels: bug, proxy
- Description: When trying to store logs on a dockerized S3-compatible rustfs image, 500 errors occur because the filename length exceeds the 255-character limit on Windows/Linux hosts.
#24627 - [Bug]: Pass-through multipart audio transcription endpoint returns UnicodeDecodeError
- Type: Issue
- Status: Open
- Created: 2026-03-26T12:29:20Z
- URL: [Bug]: Pass-through multipart audio transcription endpoint returns UnicodeDecodeError BerriAI/litellm#24627
- Author: nhyy244
- Labels: bug, proxy
- Description: Pass-through endpoint for Scaleway transcription API returns 500 Internal Server Error with UnicodeDecodeError when handling multipart audio requests.
#24626 - [Bug]: Gemini file retrieval fails: Error parsing file retrieve response
- Type: Issue
- Status: Open
- Created: 2026-03-26T11:16:40Z
- URL: [Bug]: Gemini file retrieval fails: Error parsing file retrieve response BerriAI/litellm#24626
- Author: MyroslavaTarcha
- Labels: bug, proxy, llm translation
- Description: When using LiteLLM Proxy with Google AI Studio (Gemini), file upload works but GET /v1/files/<file_id> always returns 500 error, making it impossible to check when an uploaded file becomes ACTIVE.
#24625 - fix(mcp): block arbitrary command execution via stdio transport
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T10:27:37Z
- URL: fix(mcp): block arbitrary command execution via stdio transport BerriAI/litellm#24625
- Author: Sameerlite
- Description: Fixes critical RCE vulnerability in MCP stdio test endpoints by adding command allowlist and PROXY_ADMIN role checks.
#24624 - fix(proxy): sanitize user_id input and block dangerous env var keys
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T10:04:30Z
- URL: fix(proxy): sanitize user_id input and block dangerous env var keys BerriAI/litellm#24624
- Author: Sameerlite
- Description: Security hardening for two input validation gaps: added user_id validation and blocked dangerous environment variable keys.
#24623 - fix docker documentation MASTER_KEY -> LITELLM_MASTER_KEY
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T09:35:14Z
- URL: fix docker documentation MASTER_KEY -> LITELLM_MASTER_KEY BerriAI/litellm#24623
- Author: D0wn3r
- Description: Changes mentions of MASTER_KEY to LITELLM_MASTER_KEY in the Docker documentation.
#24622 - fix(vertex_ai): Gemini tool-use prompt tokens ignored, causing wrong usage counts
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T09:18:28Z
- URL: fix(vertex_ai): Gemini tool-use prompt tokens ignored, causing wrong usage counts BerriAI/litellm#24622
- Author: SyedShahmeerAli12
- Description: When Gemini uses built-in tools, usageMetadata includes toolUsePromptTokenCount, which was being ignored, causing under-reported prompt_tokens and over-reported completion_tokens.
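The accounting fix this PR describes can be sketched as below. The usageMetadata field names follow Google's API; the helper function itself is an illustrative assumption, not the PR's actual code.

```python
# Sketch of the described fix: tool-use prompt tokens must be added to
# prompt_tokens rather than dropped (and thus mis-attributed).
def count_prompt_tokens(usage_metadata: dict) -> int:
    prompt = usage_metadata.get("promptTokenCount", 0)
    # Previously ignored: tokens Gemini injects when invoking built-in tools.
    tool_use = usage_metadata.get("toolUsePromptTokenCount", 0)
    return prompt + tool_use

usage = {"promptTokenCount": 120, "toolUsePromptTokenCount": 45,
         "candidatesTokenCount": 30, "totalTokenCount": 195}
print(count_prompt_tokens(usage))  # 165
```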
#24621 - [Bug]: Cannot generate 2K images with Gemini 3.1 Flash Image Preview (stuck at 1K) - extra_body is stripped
- Type: Issue
- Status: Open
- Created: 2026-03-26T09:09:15Z
- URL: [Bug]: Cannot generate 2K images with Gemini 3.1 Flash Image Preview (stuck at 1K) - extra_body is stripped BerriAI/litellm#24621
- Author: ieQ-strecker
- Labels: bug, proxy, llm translation
- Description: Cannot generate 2K images with Gemini 3.1 Flash Image Preview - the extra_body payload containing imageConfig is stripped from the outgoing request to Google API.
#24620 - fix: add openrouter/minimax/minimax-m2.7 model pricing
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T08:31:28Z
- URL: fix: add openrouter/minimax/minimax-m2.7 model pricing BerriAI/litellm#24620
- Author: WhoisMonesh
- Description: Adds the minimax-m2.7 model from OpenRouter to the model prices JSON with input/output token costs, context window, and feature support.
#24618 - fix(ollama): preserve image_url blocks in ollama_chat multimodal requests
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T08:05:56Z
- URL: fix(ollama): preserve image_url blocks in ollama_chat multimodal requests BerriAI/litellm#24618
- Author: WhoisMonesh
- Description: Fixes #24615 — ollama_chat provider was silently dropping image_url content blocks in multimodal requests. Fixed four bugs across operation ordering and URL construction.
#24617 - fix: Bedrock internalServerException mapping, AuthError no-retry, xai drop_params, SSE error handling
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T08:01:14Z
- URL: fix: Bedrock internalServerException mapping, AuthError no-retry, xai drop_params, SSE error handling BerriAI/litellm#24617
- Author: naarob
- Description: Four independent bug fixes: Bedrock internalServerException mapping, AuthenticationError 'Missing API Key' raises immediately, xai drop_params=True support, and async_sse_wrapper error handling.
#24616 - fix(router): 429 routing — cooldown bypass, providers.json mapping, Anthropic credit balance fallback
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T07:34:36Z
- URL: fix(router): 429 routing — cooldown bypass, providers.json mapping, Anthropic credit balance fallback BerriAI/litellm#24616
- Author: naarob
- Description: Three fixes for 429/rate-limit routing failures: cooldown bypass for 429-wrapped APIConnectionError, 9 missing providers added to openai_compatible_providers, and Anthropic credit balance mapping to RateLimitError.
#24615 - fix(ollama): preserve image_url blocks in ollama_chat multimodal requests
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T07:26:13Z
- URL: fix(ollama): preserve image_url blocks in ollama_chat multimodal requests BerriAI/litellm#24615
- Author: NIK-TIGER-BILL
- Description: Fixes #24598 — ollama_chat provider silently drops image_url content blocks in multimodal requests. Four bugs fixed across two files.
#24613 - Feature/add hpc ai provider
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T07:08:36Z
- URL: Feature/add hpc ai provider BerriAI/litellm#24613
- Author: lioZ129
- Description: Adds HPC-AI as an OpenAI-compatible provider with slug hpc_ai and default base URL https://api.hpc-ai.com/inference/v1.
#24612 - fix(model): add supports_reasoning for gemini-3.1-flash-image-preview
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T06:57:12Z
- URL: fix(model): add supports_reasoning for gemini-3.1-flash-image-preview BerriAI/litellm#24612
- Author: Ryze0323
- Description: The gemini-3.1-flash-image-preview model supports thinking/reasoning via the thinkingLevel parameter, but was missing the supports_reasoning flag in model_prices_and_context_window.json.
#24611 - feat(router): order-based fallback across deployment priority levels
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T06:25:16Z
- URL: feat(router): order-based fallback across deployment priority levels BerriAI/litellm#24611
- Author: Sameerlite
- Description: When order=1 deployments fail, the router now automatically tries order=2, then order=3, before falling through to external fallbacks. Removes the requirement for enable_pre_call_checks.
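The priority-level behavior described above can be sketched as follows. The deployment shape and the try_deployment callback are illustrative assumptions, not the router's actual interface.

```python
from itertools import groupby

def route_with_order_fallback(deployments, try_deployment):
    """Try deployments by ascending order value; escalate when a level fails."""
    by_order = sorted(deployments, key=lambda d: d.get("order", 1))
    for _, group in groupby(by_order, key=lambda d: d.get("order", 1)):
        for deployment in group:
            result = try_deployment(deployment)
            if result is not None:
                return result
    # All priority levels exhausted; fall through to external fallbacks.
    raise RuntimeError("all deployment priority levels failed")
```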
#24610 - feat(gemini): Lyria 3 preview models in cost map and docs
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T06:11:26Z
- URL: feat(gemini): Lyria 3 preview models in cost map and docs BerriAI/litellm#24610
- Author: Sameerlite
- Description: Adds gemini/lyria-3-clip-preview and gemini/lyria-3-pro-preview to model pricing with per-song pricing via output_cost_per_image. Includes docs and tests.
#24609 - [Bug]: No Error Handling in /v1/messages Path
- Type: Issue
- Status: Open
- Created: 2026-03-26T05:52:56Z
- URL: [Bug]: No Error Handling in /v1/messages Path BerriAI/litellm#24609
- Author: urainshah
- Labels: bug, proxy, llm translation
- Description: The async_sse_wrapper function has no try/except block, so when Bedrock sends an InternalServerException, the raw Bedrock error passes through to the proxy unhandled.
#24608 - [Bug]: [Bedrock] internalServerException mid-stream error incorrectly mapped to BadRequestError (400) instead of internalServerException (500)
- Type: Issue
- Status: Open
- Created: 2026-03-26T05:41:15Z
- URL: [Bug]: [Bedrock] internalServerException mid-stream error incorrectly mapped to BadRequestError (400) instead of internalServerException (500) BerriAI/litellm#24608
- Author: urainshah
- Labels: bug, proxy, llm translation
- Description: Bedrock internalServerException is incorrectly mapped to BadRequestError (400) instead of InternalServerError (500) because botocore's Eventstream always sends HTTP 400 for all mid-stream errors.
#24606 - Fix Ollama model info URL normalization
- Type: Pull Request
- Status: Open
- Created: 2026-03-26T05:05:03Z
- URL: Fix Ollama model info URL normalization BerriAI/litellm#24606
- Author: LittleChenLiya
- Description: Fix OllamaConfig.get_model_info() to normalize api_base values that already end with /api/chat or /api/generate before requesting /api/show.
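The normalization described above can be sketched like this. The helper name is illustrative; only the suffix-stripping behavior is taken from the PR description.

```python
# Sketch of the described fix: strip a trailing /api/chat or /api/generate
# from api_base so that appending /api/show yields a valid Ollama URL.
def normalize_ollama_api_base(api_base: str) -> str:
    base = api_base.rstrip("/")
    for suffix in ("/api/chat", "/api/generate"):
        if base.endswith(suffix):
            base = base[: -len(suffix)]
            break
    return base

print(normalize_ollama_api_base("http://localhost:11434/api/chat") + "/api/show")
```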
#24605 - MCP Server: TiOLi AGENTIS — AI Agent Exchange (23 tools, SSE)
- Type: Issue
- Status: Open
- Created: 2026-03-26T04:50:25Z
- URL: MCP Server: TiOLi AGENTIS — AI Agent Exchange (23 tools, SSE) BerriAI/litellm#24605
- Author: Sendersby
- Labels: None
- Description: AI agent financial exchange announcement with 23 MCP tools, 400+ REST endpoints, blockchain-verified.
Summary Statistics
- Pull Requests: 22
- Issues: 8
- Bug Reports: 6 (labeled as bug)
- Security-related: 3
- Proxy-related: 6
- LLM Translation: 4
This report was automatically generated by the LiteLLM Issue Summary bot.