Add first-class ChatGPT subscription provider support #881
Spherrrical wants to merge 3 commits into main from
Conversation
Testing out now

Can confirm this is working for me! Lovely stuff
adilhafeez left a comment
The refresh-token flow is not automatic. The way it works right now is: 1. load the token from disk, 2. refresh the token if needed, and then 3. inject the token. The issue is that if the token expires while plano is running, there is no way to refresh it and we'll have to restart the service to refresh the token. This doesn't seem right.
```rust
const CHATGPT_BASE_INSTRUCTIONS: &str =
    "You are Codex, based on GPT-5. You are running as a coding agent in the Codex CLI on a user's computer.";

match &req.instructions {
    Some(existing) if existing.contains(CHATGPT_BASE_INSTRUCTIONS) => {}
    Some(existing) => {
        req.instructions =
            Some(format!("{}\n\n{}", CHATGPT_BASE_INSTRUCTIONS, existing));
    }
    None => {
        req.instructions = Some(CHATGPT_BASE_INSTRUCTIONS.to_string());
    }
}
req.store = Some(false);
req.stream = Some(true);
```
Why are we hard-coding the system prompt? And what if the user has set stream=false?
The system will not work without this prompt, and stream=false does not work for this provider.
```
CHATGPT_DEFAULT_ORIGINATOR = "codex_cli_rs"
CHATGPT_DEFAULT_USER_AGENT = "codex_cli_rs/0.0.0 (Unknown 0; unknown) unknown"
```
Why hardcode the user agent and originator? This will overwrite the user's user_agent if it was set.
The provider will not work without these headers.
I found an issue with using OpenClaw and this branch, when posting with tools as OpenClaw does: we were getting `Unsupported Field: max_output_tokens`. I have created a PR to address this here: #884
Adds support for routing LLM traffic through a ChatGPT Plus/Pro subscription using OpenAI's device-code OAuth flow (the same one as the Codex CLI). Users authenticate once via `planoai chatgpt login`, and tokens are auto-refreshed from `~/.plano/chatgpt/auth.json`. The `chatgpt` provider is wired end-to-end: CLI commands, config auto-injection, schema updates, Rust `ProviderId`/`LlmProviderType` variants, request normalization for the Codex backend, and custom header forwarding in the WASM gateway. A self-contained demo with `config.yaml`, `chat.py`, and a curl test script is included under `demos/llm_routing/chatgpt_subscription/`.