
feat: add MiniMax as alternative LLM provider #212

Open
octo-patch wants to merge 1 commit into litanlitudan:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax M2.7 and M2.7-highspeed as alternative LLM providers via OpenAI-compatible API
  • Full integration across Python CLI package (settings, config, CLI command) and TypeScript web frontend (provider template)
  • 21 unit tests + 3 integration tests covering settings registry, config management, token verification, and live API calls

Changes

Python package (7 files, 141 additions)

| File | Change |
| --- | --- |
| `settings.py` | `MiniMaxM27Settings` and `MiniMaxM27HighspeedSettings` presets with `temperature=0.7` and `openai_api_base` |
| `config.py` | `set_minimax_token()` / `load_minimax_token()` with env var (`MINIMAX_API_KEY`) and config file support |
| `cli.py` | `skyagi config minimax` subcommand + runtime API key injection when a MiniMax model is selected |
| `util.py` | `verify_minimax_token()` validation |
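The env-var/config-file precedence described for `config.py` can be sketched roughly as follows. Function and env var names follow the PR, and the config path comes from the test plan, but the JSON key layout inside `~/.skyagi/config.json` is an assumption:

```python
import json
import os
from pathlib import Path
from typing import Optional

# Path taken from the PR's test plan.
CONFIG_PATH = Path.home() / ".skyagi" / "config.json"


def load_minimax_token(env: Optional[dict] = None) -> Optional[str]:
    """Return the MiniMax API key, preferring the env var over the config file.

    `env` is injectable for testing; it defaults to os.environ.
    """
    env = os.environ if env is None else env
    token = env.get("MINIMAX_API_KEY")  # env var takes precedence
    if token:
        return token
    if CONFIG_PATH.exists():
        # Assumed config layout: {"minimax_token": "..."}
        return json.loads(CONFIG_PATH.read_text()).get("minimax_token")
    return None
```

This mirrors the precedence order the unit tests exercise: the environment variable wins, and the config file is only consulted as a fallback.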

TypeScript web (2 files)

| File | Change |
| --- | --- |
| `model.ts` | `MiniMax` in the `ModelProvider` enum + `providerTemplates` entry with M2.7 and M2.7-highspeed |
| `.env.example` | `MINIMAX_API_KEY` variable |

Tests (3 files, 248 additions)

  • 21 unit tests: settings registry, preset values, config load/save, env var precedence, token verification, factory pattern
  • 3 integration tests: live M2.7 and M2.7-highspeed chat completions, settings-based model loading
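The "settings registry, factory pattern" the unit tests cover suggests a shape like the following. The class, registry, and preset names here are illustrative, not the PR's actual code:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class LLMSettings:
    """Minimal stand-in for a provider settings preset."""
    provider: str
    model: str
    temperature: float = 0.7
    openai_api_base: Optional[str] = None


# Hypothetical registry mapping preset names to settings factories.
_REGISTRY: Dict[str, Callable[[], LLMSettings]] = {}


def register(name: str):
    def deco(factory: Callable[[], LLMSettings]):
        _REGISTRY[name] = factory
        return factory
    return deco


@register("minimax-m2.7")
def _minimax_m27() -> LLMSettings:
    # Placeholder base URL -- not MiniMax's real endpoint.
    return LLMSettings(provider="minimax", model="M2.7",
                       openai_api_base="https://api.minimax.example/v1")


def get_settings(name: str) -> LLMSettings:
    """Factory entry point: look up a preset by name and build it."""
    return _REGISTRY[name]()
```

A test can then assert on preset values (`get_settings("minimax-m2.7").temperature == 0.7`) without touching the network, which is presumably how the 21 unit tests stay offline.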

How it works

MiniMax provides an OpenAI-compatible API, so the integration reuses LangChain's `ChatOpenAI` class with a custom `openai_api_base`; no new LLM wrapper class is needed.
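As a rough sketch of that reuse (the base URL below is a placeholder, not MiniMax's real endpoint, and `minimax_chat_kwargs` is a hypothetical helper, not the PR's code):

```python
# Placeholder -- substitute MiniMax's real OpenAI-compatible endpoint.
MINIMAX_API_BASE = "https://api.minimax.example/v1"


def minimax_chat_kwargs(model_name: str, api_key: str) -> dict:
    """Build kwargs that would be splatted into LangChain's ChatOpenAI(...).

    Because the API is OpenAI-compatible, only the base URL and key differ
    from a stock OpenAI configuration -- hence no new wrapper class.
    """
    return {
        "model_name": model_name,            # e.g. "M2.7" or "M2.7-highspeed"
        "openai_api_key": api_key,
        "openai_api_base": MINIMAX_API_BASE,
        "temperature": 0.7,                  # preset value from settings.py
    }


# Usage (requires langchain installed):
#   llm = ChatOpenAI(**minimax_chat_kwargs("M2.7", token))
```

Pointing an existing OpenAI client class at a different base URL is a common pattern for OpenAI-compatible providers; it keeps the diff small and lets the rest of the stack treat MiniMax like any other chat model.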

Test plan

  • All 21 unit tests pass
  • All 3 integration tests pass with live MiniMax API
  • MiniMax models appear in skyagi model list output
  • skyagi config minimax saves token to ~/.skyagi/config.json
  • Manual test: run full simulation with MiniMax M2.7 model

Add MiniMax M2.7 and M2.7-highspeed models as alternative LLM providers
via the OpenAI-compatible API. This gives users a choice beyond OpenAI
for running generative agent simulations.

Python package changes:
- settings.py: MiniMaxM27Settings and MiniMaxM27HighspeedSettings presets
  with temperature preset at 0.7 and OpenAI-compatible base URL
- model.py: reuses existing ChatOpenAI with custom openai_api_base
- config.py: set_minimax_token / load_minimax_token with env var
  (MINIMAX_API_KEY) and config file support
- cli.py: skyagi config minimax command + runtime API key injection
- util.py: verify_minimax_token validation

TypeScript web changes:
- model.ts: MiniMax provider in ModelProvider enum and providerTemplates
  with M2.7 and M2.7-highspeed model presets
- .env.example: MINIMAX_API_KEY variable

Tests: 21 unit tests + 3 integration tests
@octo-patch octo-patch requested a review from qizheng7 as a code owner March 28, 2026 20:57
