
feat: add MiniMax as LLM provider with M2.7 default model#1166

Open
octo-patch wants to merge 2 commits into Pythagora-io:main from octo-patch:feature/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 14, 2026

Summary

Add MiniMax as an LLM provider for GPT Pilot, with the latest M2.7 model as the default.

Changes

  • Add MiniMaxClient with OpenAI-compatible API integration
  • Support MiniMax-M2.7 (default), MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed models
  • Temperature clamping (MiniMax requires temp > 0)
  • Rate limiting with OpenAI-compatible headers
  • Streaming response support
  • Unit tests and integration tests
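The rate-limiting bullet above refers to OpenAI-compatible response headers. A minimal sketch of reading them could look like this (the `x-ratelimit-*` header names follow OpenAI's convention; the function name is illustrative, not the PR's actual code):

```python
# Sketch: extract remaining request/token budgets from OpenAI-compatible
# rate-limit headers. parse_rate_limit_headers is a hypothetical name.
def parse_rate_limit_headers(headers: dict) -> dict:
    """Read the remaining-requests and remaining-tokens budgets,
    defaulting to 0 when a header is absent."""
    return {
        "remaining_requests": int(headers.get("x-ratelimit-remaining-requests", "0")),
        "remaining_tokens": int(headers.get("x-ratelimit-remaining-tokens", "0")),
    }
```

A client can use these values to back off before hitting the provider's limit.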

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities, offering a 204K context window via an OpenAI-compatible API.

Testing

  • Unit tests covering model calls, temperature clamping, JSON mode, streaming, base URL, rate limiting
  • Integration tests for M2.7, M2.7-highspeed, M2.5, and M2.5-highspeed models
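One of the unit tests above covers temperature clamping; a sketch of such a test might look like the following (the 0.01 floor comes from the commit message below, but the function and test names are hypothetical):

```python
# Hypothetical sketch of a unit test for MiniMax temperature clamping.
MIN_TEMPERATURE = 0.01  # MiniMax rejects temperature <= 0

def clamp_temperature(temperature: float) -> float:
    """Raise the temperature to the minimum value MiniMax accepts."""
    return max(temperature, MIN_TEMPERATURE)

def test_zero_temperature_is_clamped():
    assert clamp_temperature(0.0) == MIN_TEMPERATURE

def test_positive_temperature_is_unchanged():
    assert clamp_temperature(0.7) == 0.7
```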

Add MiniMax (https://platform.minimax.io) as a new LLM provider,
using their OpenAI-compatible API endpoint.

Supported models:
- MiniMax-M2.5 (default) - 204K context window
- MiniMax-M2.5-highspeed - faster variant

Key implementation details:
- New MiniMaxClient class using the OpenAI SDK with MiniMax base URL
- Temperature clamped to minimum 0.01 (MiniMax requires > 0)
- response_format (json_mode) skipped as MiniMax does not support it
- API key read from config or MINIMAX_API_KEY environment variable
- Default base URL: https://api.minimax.io/v1

Changes:
- core/llm/minimax_client.py: New MiniMax LLM client
- core/config/__init__.py: Add MINIMAX to LLMProvider enum
- core/llm/base.py: Register MiniMaxClient in provider factory
- example-config.json: Add MiniMax configuration example
- tests/llm/test_minimax.py: Unit tests (10 tests)
- tests/integration/llm/test_minimax.py: Integration tests
- README.md: Add minimax to supported providers list
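The example-config.json entry might look something like the following (an illustrative sketch only; the actual key names depend on GPT Pilot's config schema):

```json
{
  "llm": {
    "minimax": {
      "base_url": "https://api.minimax.io/v1",
      "api_key": "your-minimax-api-key"
    }
  },
  "agent": {
    "default": {
      "provider": "minimax",
      "model": "MiniMax-M2.5",
      "temperature": 0.5
    }
  }
}
```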
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


PR Bot does not seem to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.

- Update default model from MiniMax-M2.5 to MiniMax-M2.7 in tests and config
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to available models list
- Add integration tests for M2.7-highspeed model
- Keep M2.5 and M2.5-highspeed as available alternatives with tests
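After this update, the provider's model list could be represented as follows (a sketch; the constant names are illustrative and not taken from the PR):

```python
# Hypothetical model registry reflecting the update described above.
MINIMAX_MODELS = [
    "MiniMax-M2.7",            # new default
    "MiniMax-M2.7-highspeed",  # faster variant of M2.7
    "MiniMax-M2.5",            # kept as an available alternative
    "MiniMax-M2.5-highspeed",  # faster variant of M2.5
]
DEFAULT_MINIMAX_MODEL = MINIMAX_MODELS[0]
```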
@octo-patch octo-patch changed the title feat: add MiniMax as LLM provider feat: add MiniMax as LLM provider with M2.7 default model Mar 18, 2026
