Adding new GPT and Claude models #17999
Conversation
Walkthrough

This update centralizes OpenAI model option constants into a new shared module, updates imports across actions to use these shared constants, and modifies model selection/filtering logic in the OpenAI app, including extending support for new models. The Anthropic component receives a new model entry and a version bump, while package versions are incremented accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant OpenAI Action
    participant Models Module
    User->>OpenAI Action: Configure action (e.g., select model)
    OpenAI Action->>Models Module: Import model options (e.g., IMAGE_MODELS, TTS_MODELS)
    OpenAI Action-->>User: Present model options from Models Module
```

```mermaid
sequenceDiagram
    participant User
    participant OpenAI App
    participant OpenAI API
    participant Models Module
    User->>OpenAI App: Request models (e.g., for assistants)
    OpenAI App->>OpenAI API: Fetch available models
    OpenAI App->>Models Module: Import include/exclude lists
    OpenAI App->>OpenAI App: Filter models using include/exclude lists
    OpenAI App-->>User: Return filtered models
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~15–20 minutes
Actionable comments posted: 3
🔭 Outside diff range comments (2)
components/openai/actions/create-image/create-image.mjs (2)
108-116: Bug: the same image is written for all files when N > 1

Inside the tmp branch, the loop always uses response.data[0].b64_json, so multiple files will contain identical content. Use the i-th image instead:

- await fs.writeFileSync(outputFilePath, Buffer.from(response.data[0].b64_json.toString(), "base64"));
+ const img = response.data[i] ?? response.data[0];
+ const b64 = img?.b64_json;
+ if (!b64) {
+   throw new Error("Expected base64 image in response.data but none was found");
+ }
+ await fs.writeFileSync(outputFilePath, Buffer.from(b64, "base64"));

Also consider iterating up to Math.min(n || response.data.length, response.data.length) to avoid out-of-bounds indexing.
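As a standalone illustration of the per-index selection suggested above (the helper name and the fallback-to-index-0 behavior are illustrative choices, not part of the action or the OpenAI SDK):

```javascript
// Pick the base64 payload for the i-th generated image, falling back to the
// first entry when the API returned fewer images than requested.
function selectImageB64(data, i) {
  const img = data[i] ?? data[0];
  const b64 = img?.b64_json;
  if (!b64) {
    throw new Error("Expected base64 image in response.data but none was found");
  }
  return b64;
}
```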
113-116: Harden output path handling to avoid path traversal outside the stash dir

Currently, if filename includes "tmp/", it's used verbatim. A user-controlled filename like "../../tmp/evil" could still be problematic. Normalize the path and ensure the final result remains inside the stash directory:

- const outputFilePath = filename.includes("tmp/")
-   ? filename
-   : `${process.env.STASH_DIR || "/tmp"}/$(unknown)`;
+ const baseDir = process.env.STASH_DIR || "/tmp";
+ const rawPath = filename.includes("tmp/") ? filename : `${baseDir}/$(unknown)`;
+ const path = await import("path");
+ const normalized = path.default.normalize(rawPath);
+ if (!normalized.startsWith(path.default.normalize(baseDir + "/"))) {
+   throw new Error("Invalid filename: must resolve inside the tmp directory");
+ }
+ const outputFilePath = normalized;
🧹 Nitpick comments (4)
components/anthropic/actions/common/constants.mjs (1)

3-6: Verify default bump to Claude Opus 4.1 and confirm model slug

The slug claude-opus-4-1-20250805 matches Anthropic's official ID as of August 2025. Since MESSAGE_MODELS[0] drives the default in chat actions, inserting Opus 4.1 at index 0 changes the default model. Please confirm:

- Opus 4.1 is generally available to all users (rollout readiness).
- The default bump to Opus 4.1 is intentional and acceptable.

Consider adding an inline note to highlight that index 0 must hold the latest model:

  export default {
    MESSAGE_MODELS: [
+     // NOTE: Keep the latest generally-available Claude model at index 0.
+     // Chat actions default to MESSAGE_MODELS[0].value.
      {
        label: "Claude Opus 4.1",
        value: "claude-opus-4-1-20250805",
      },

components/openai/actions/create-transcription/create-transcription.mjs (1)
4-5: Looks consistent with the refactor; small UX improvement suggestion

The import/use of TRANSCRIPTION_MODELS and the version bump are good. Consider validating response_format based on the model to fail fast (gpt-4o(-mini)-transcribe supports only json):

  async run({ $ }) {
+   // Guard: enforce response_format=json for 4o-transcribe models
+   if (
+     ["gpt-4o-transcribe", "gpt-4o-mini-transcribe"].includes(this.model) &&
+     this.response_format && this.response_format !== "json"
+   ) {
+     throw new Error("For gpt-4o-transcribe models, response_format must be 'json'.");
+   }

Also applies to: 10-10, 24-25, 70-75
components/openai/actions/create-thread/create-thread.mjs (1)

91-93: Optional: return label/value pairs for consistency

Other props often surface options as { label, value }. You can keep strings, but to be consistent and future-proof, consider:

- return models.map(({ id }) => id);
+ return models.map(({ id }) => ({ label: id, value: id }));

components/openai/openai.app.mjs (1)
360-361: Chat completion filter updated to include GPT-5 — good; consider centralizing

The regex covers 4o, o*, 4.1, and gpt-5. To avoid future drift with assistants filtering, consider consolidating chat filtering substrings/regex into the shared models module, similar to assistants.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)

- components/anthropic/actions/chat/chat.mjs (1 hunks)
- components/anthropic/actions/common/constants.mjs (1 hunks)
- components/anthropic/package.json (1 hunks)
- components/openai/actions/chat-using-web-search/chat-using-web-search.mjs (2 hunks)
- components/openai/actions/create-image/create-image.mjs (2 hunks)
- components/openai/actions/create-moderation/create-moderation.mjs (2 hunks)
- components/openai/actions/create-thread/create-thread.mjs (2 hunks)
- components/openai/actions/create-transcription/create-transcription.mjs (2 hunks)
- components/openai/common/constants.mjs (0 hunks)
- components/openai/common/models.mjs (1 hunks)
- components/openai/openai.app.mjs (5 hunks)
💤 Files with no reviewable changes (1)
- components/openai/common/constants.mjs
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2024-12-12T19:23:09.039Z
Learnt from: jcortes
PR: PipedreamHQ/pipedream#14935
File: components/sailpoint/package.json:15-18
Timestamp: 2024-12-12T19:23:09.039Z
Learning: When developing Pipedream components, do not add built-in Node.js modules like `fs` to `package.json` dependencies, as they are native modules provided by the Node.js runtime.
Applied to files:
components/anthropic/package.json
📚 Learning: 2025-05-27T01:47:08.604Z
Learnt from: GTFalcao
PR: PipedreamHQ/pipedream#16831
File: components/anthropic/actions/common/constants.mjs:3-10
Timestamp: 2025-05-27T01:47:08.604Z
Learning: The correct Claude 4 model identifiers in Anthropic's API are `claude-opus-4-20250514` for Claude Opus 4 and `claude-sonnet-4-20250514` for Claude Sonnet 4, as confirmed by official Anthropic documentation and direct consultation with Claude.
Applied to files:
components/anthropic/actions/common/constants.mjs
📚 Learning: 2025-07-01T17:01:46.327Z
Learnt from: js07
PR: PipedreamHQ/pipedream#17375
File: components/tinypng/actions/compress-image/compress-image.mjs:18-23
Timestamp: 2025-07-01T17:01:46.327Z
Learning: In TinyPNG compress-image action (components/tinypng/actions/compress-image/compress-image.mjs), the syncDir property uses accessMode: "read" because this action only reads input files and returns API responses without writing files to /tmp, unlike other TinyPNG actions that save processed files to disk.
Applied to files:
components/openai/actions/create-image/create-image.mjs
components/openai/actions/create-transcription/create-transcription.mjs
📚 Learning: 2024-10-30T15:24:39.294Z
Learnt from: jcortes
PR: PipedreamHQ/pipedream#14467
File: components/gainsight_px/actions/create-account/create-account.mjs:4-6
Timestamp: 2024-10-30T15:24:39.294Z
Learning: In `components/gainsight_px/actions/create-account/create-account.mjs`, the action name should be "Create Account" instead of "Create Memory".
Applied to files:
components/openai/actions/create-image/create-image.mjs
components/openai/actions/create-transcription/create-transcription.mjs
🧬 Code Graph Analysis (6)
components/openai/actions/chat-using-web-search/chat-using-web-search.mjs (1)
- components/openai/common/models.mjs (2): WEB_SEARCH_CHAT_MODELS (64-67), WEB_SEARCH_CHAT_MODELS (64-67)

components/openai/actions/create-moderation/create-moderation.mjs (1)
- components/openai/common/models.mjs (2): MODERATION_MODELS (42-45), MODERATION_MODELS (42-45)

components/openai/actions/create-image/create-image.mjs (1)
- components/openai/common/models.mjs (2): IMAGE_MODELS (37-40), IMAGE_MODELS (37-40)

components/openai/actions/create-transcription/create-transcription.mjs (1)
- components/openai/common/models.mjs (2): TRANSCRIPTION_MODELS (70-74), TRANSCRIPTION_MODELS (70-74)

components/openai/actions/create-thread/create-thread.mjs (1)
- components/openai/openai.app.mjs (6): models (45-45), models (350-353), models (357-359), models (363-365), models (374-376), models (385-387)

components/openai/openai.app.mjs (1)
- components/openai/common/models.mjs (8): TTS_MODELS (32-35), TTS_MODELS (32-35), FINE_TUNING_MODEL_OPTIONS (1-30), FINE_TUNING_MODEL_OPTIONS (1-30), ASSISTANTS_MODEL_INCLUDE_SUBSTRINGS (48-56), ASSISTANTS_MODEL_INCLUDE_SUBSTRINGS (48-56), ASSISTANTS_MODEL_EXCLUDED (59-61), ASSISTANTS_MODEL_EXCLUDED (59-61)
🔇 Additional comments (14)
components/anthropic/package.json (1)

3-3: All Anthropic versions updated to 0.2.0

- components/anthropic/package.json: version set to 0.2.0
- components/anthropic/actions/chat/chat.mjs: version field matches 0.2.0
- No remaining "0.1.0" references found under components/anthropic

components/anthropic/actions/chat/chat.mjs (1)

6-6: Version bump aligned; heads-up on implicit default model change

Bumping to 0.2.0 looks good. Since the default model relies on constants.MESSAGE_MODELS[0], adding Opus 4.1 at index 0 changes the default model for this action. Confirm that this implicit default switch is intentional and that tenants have access to 4.1 to avoid failures.

components/openai/common/models.mjs (3)
63-67: Centralization looks good

WEB_SEARCH_CHAT_MODELS centralization is consistent with the refactor and improves maintainability.

70-74: Transcription models list LGTM

Matches usage constraints referenced in the action docs. No issues spotted.

19-29: Confirm GPT-4.1 date-stamped models support fine-tuning

The entries gpt-4.1-2025-04-14 and gpt-4.1-mini-2025-04-14 correspond to the April 14, 2025 releases of GPT-4.1 and its mini variant, both of which are supported for fine-tuning as of August 2025. No changes are needed here — those IDs are valid for your fine-tuning workflows.
components/openai/actions/create-moderation/create-moderation.mjs (1)

2-2: Import swap and version bump are correct

Using MODERATION_MODELS from models.mjs aligns with the refactor; the version bump is appropriate.

Also applies to: 8-8, 21-21

components/openai/actions/create-image/create-image.mjs (1)

4-4: Model options centralization looks good

Importing IMAGE_MODELS and wiring it to the model prop is consistent with the refactor; the version bump is appropriate.

Also applies to: 8-8, 18-19

components/openai/actions/chat-using-web-search/chat-using-web-search.mjs (1)

4-5: Model list centralization + default selection LGTM

Using WEB_SEARCH_CHAT_MODELS and defaulting to its first entry reduces drift across the codebase. The version bump is appropriate.

Also applies to: 9-10, 19-21

components/openai/actions/create-thread/create-thread.mjs (2)

9-9: Version bump looks good

No issues with the version increment to 0.0.16.

91-93: Good move to centralized assistants model source

Switching to this.openai.getAssistantsModels({}) aligns this action with the new shared filtering logic and reduces local duplication.

components/openai/openai.app.mjs (4)
3-7: Centralized model constants import — LGTM

Importing from ./common/models.mjs removes hard-coded arrays from constants and improves maintainability.

276-277: Switch to shared TTS and fine-tuning options — LGTM

Using TTS_MODELS and FINE_TUNING_MODEL_OPTIONS from the centralized models module is consistent and reduces drift. Shapes (string[] vs. {label, value}[]) match prop expectations.

Also applies to: 282-283

388-391: Assistants model filter via include/exclude lists — LGTM

Filtering by ASSISTANTS_MODEL_INCLUDE_SUBSTRINGS with explicit exclusions is clear and maintainable. A nice improvement over scattered substring checks.

29-29: Default model "gpt-5-mini" availability verified

OpenAI's documentation confirms that "gpt-5-mini" is generally available to all API users, so keeping it as the default poses no access issues.
    default: WEB_SEARCH_CHAT_MODELS[0],
    options: WEB_SEARCH_CHAT_MODELS,
  },
💡 Verification agent
🧩 Analysis chain
Optional: verify supported models for the web search tool
If OpenAI adjusts tool support per model, this list may drift. Consider verifying tool compatibility during CI.
🌐 Web query:
Which OpenAI models currently support the "web_search_preview" tool for the Responses API?
💡 Result:
The "web_search_preview" tool for the OpenAI Responses API is currently supported by the following models:
- GPT-4o and GPT-4o-mini are both fully supported and officially documented as compatible with the "web_search_preview" tool when using the Responses API[3][5][7].
- GPT-4.1 and GPT-4.1-mini are also supported, but with some limitations (such as context window size and excluding specific variants—see below)[1][9].
Key limitations and additional notes:
- GPT-4.1-nano does not support the web_search_preview tool[1].
- The tool is not available for o3-mini and, in some contexts, other smaller or non-primary variants[10].
- The web_search functionality in the API provides responses with inline citations from web sources and can be paired with other tools for agentic applications[3][7].
- Chat Completions API uses dedicated search models (gpt-4o-search-preview, gpt-4o-mini-search-preview), which have different support parameters—refer to individual model documentation for details[1][3][10].
Summary Table:
Model name | Supports "web_search_preview" (Responses API) | Notes |
---|---|---|
gpt-4o | ✔ | Full support, recommended |
gpt-4o-mini | ✔ | Full support |
gpt-4.1 | ✔ | Full support, but context window capped at 128,000 tokens |
gpt-4.1-mini | ✔ | Full support, same context window cap |
gpt-4.1-nano | ✘ | Not supported |
For the most robust and up-to-date web search capability via the Responses API, use gpt-4o or gpt-4o-mini[3][5][7].
Citations:
- 1: https://platform.openai.com/docs/guides/tools-web-search
- 2: https://python.langchain.com/docs/integrations/chat/openai/
- 3: https://openai.com/index/new-tools-for-building-agents/
- 4: https://openai.github.io/openai-agents-python/ref/tool/
- 5: https://ai-sdk.dev/cookbook/guides/openai-responses
- 6: https://ai-sdk.dev/providers/ai-sdk-providers/openai
- 7: https://openai.com/index/new-tools-and-features-in-the-responses-api/
- 8: [Enhancement]: Support for OpenAI's new Responses API danny-avila/LibreChat#6364
- 9: https://platform.openai.com/docs/guides/tools
- 10: https://github.com/simonw/llm/issues/831
Ensure only supported models are listed for web_search_preview and automate compatibility checks

Based on the latest OpenAI docs, only the following models support the web_search_preview tool in the Responses API (excluding gpt-4.1-nano and other non-primary variants):

- gpt-4o
- gpt-4o-mini
- gpt-4.1
- gpt-4.1-mini

Please:

- Update WEB_SEARCH_CHAT_MODELS in components/openai/actions/chat-using-web-search/chat-using-web-search.mjs to exactly these four supported model names.
- Add a CI step (e.g., a simple script against https://api.openai.com/v1/tools) that verifies web_search_preview support hasn't drifted whenever this list is changed.
🤖 Prompt for AI Agents
In components/openai/actions/chat-using-web-search/chat-using-web-search.mjs
around lines 19 to 21, update the WEB_SEARCH_CHAT_MODELS array to include only
the four supported models: gpt-4o, gpt-4o-mini, gpt-4.1, and gpt-4.1-mini,
removing any others like gpt-4.1-nano. Additionally, create a new CI script that
queries the OpenAI tools API endpoint to verify that these models still support
the web_search_preview tool, ensuring the list remains accurate over time.
export const ASSISTANTS_MODEL_INCLUDE_SUBSTRINGS = [
  "gpt-3.5-turbo",
  "gpt-4-turbo",
  "gpt-4o",
  "gpt-4.1",
  "gpt-5",
  "gpt-5-mini",
  "gpt-5-nano",
];
🛠️ Refactor suggestion
Prevent accidental inclusion of non-Assistants-safe models
Filtering by substrings can pull in models like transcribe/realtime variants (e.g., "gpt-4o-transcribe") into Assistants model options. Add explicit exclusions to the excluded list to be safe.
export const ASSISTANTS_MODEL_EXCLUDED = [
"gpt-3.5-turbo-0301",
+ // Not valid for Assistants model selection
+ "gpt-4o-transcribe",
+ "gpt-4o-mini-transcribe",
+ // Common non-chat families you may want to exclude proactively if surfaced elsewhere
+ "gpt-4o-realtime-preview", // example; adjust to actual IDs if present
];
If filtering logic exists in openai.app.mjs, consider enforcing a predicate that excludes IDs ending with "-transcribe" and "-realtime" instead of maintaining a manual list. Want me to propose that change?
Also applies to: 59-61
🤖 Prompt for AI Agents
In components/openai/common/models.mjs around lines 48 to 56 and 59 to 61, the
current substring filtering for Assistants models may unintentionally include
models with suffixes like "-transcribe" or "-realtime". To fix this, update the
filtering logic to explicitly exclude model IDs ending with these suffixes by
adding a predicate that filters out any model strings ending with "-transcribe"
or "-realtime" instead of relying solely on substring inclusion. This will
prevent accidental inclusion of non-Assistants-safe models.
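A sketch of the suffix-based predicate mentioned above (the include list and suffixes here are illustrative samples, not the shared module's actual contents):

```javascript
// Model-ID suffixes that mark variants unsuitable for Assistants.
const EXCLUDED_SUFFIXES = ["-transcribe", "-realtime"];

// Keep a model only if it matches an include substring and does not
// end with an excluded suffix.
function isAssistantsModel(id, includeSubstrings) {
  return (
    includeSubstrings.some((s) => id.includes(s)) &&
    !EXCLUDED_SUFFIXES.some((suffix) => id.endsWith(suffix))
  );
}
```

A predicate like this replaces the manual exclusion list with a rule, so newly released "-transcribe" or "-realtime" variants are filtered out automatically.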
  return models.sort((a, b) => b?.created - a?.created);
},
🛠️ Refactor suggestion
Handle undefined creation timestamps in sort comparator

Some model objects may not expose created. A NaN comparator can lead to unstable ordering. Add a fallback and a stable tie-breaker:
- return models.sort((a, b) => b?.created - a?.created);
+ return models.sort((a, b) => {
+ const bc = typeof b?.created === "number" ? b.created : -Infinity;
+ const ac = typeof a?.created === "number" ? a.created : -Infinity;
+ if (bc !== ac) return bc - ac; // newer first
+ return String(a?.id).localeCompare(String(b?.id)); // stable tie-break
+ });
📝 Committable suggestion
  return models.sort((a, b) => {
    const bc = typeof b?.created === "number" ? b.created : -Infinity;
    const ac = typeof a?.created === "number" ? a.created : -Infinity;
    if (bc !== ac) return bc - ac; // newer first
    return String(a?.id).localeCompare(String(b?.id)); // stable tie-break
  });
},
🤖 Prompt for AI Agents
In components/openai/openai.app.mjs around lines 354 to 355, the sort comparator
does not handle cases where the created timestamp is undefined, which can cause
NaN results and unstable sorting. Update the comparator to provide a fallback
value (e.g., 0) for undefined created timestamps and add a stable tie-breaker
such as comparing model IDs or names to ensure consistent ordering.
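To sanity-check the suggested comparator in isolation, it can be wrapped in a standalone helper (the wrapper function and the sample model objects are made up for illustration):

```javascript
// Sort models newest-first by `created`, treating a missing timestamp as
// oldest and breaking ties by id so the ordering is deterministic.
function sortModels(models) {
  return [...models].sort((a, b) => {
    const bc = typeof b?.created === "number" ? b.created : -Infinity;
    const ac = typeof a?.created === "number" ? a.created : -Infinity;
    if (bc !== ac) return bc - ac; // newer first
    return String(a?.id).localeCompare(String(b?.id)); // stable tie-break
  });
}
```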
Hi @GTFalcao, LGTM! Ready for QA!
Hi everyone, all test cases passed! Ready for release! Test report
Adding support for the newly released GPT-5 and Claude Opus 4.1 models
I also moved the model constants in the OpenAI app to a separate file with just model constants, and moved some hard-coded arrays into constants for better maintainability