Fixes #3008 Responses API: multi-turn conversations 400 on turn 2 when passing response.output back as input #3053
Kilros0817 wants to merge 4 commits into openai:main from
Conversation
… 2 when passing response.output back as input
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0faec06edd
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
    if hasattr(item, "as_input"):
        result.append(item.as_input())
    else:
        result.append(item.to_dict())
Strip output-only fields in output_as_input fallback
The new fallback path serializes unhandled output variants with item.to_dict() unchanged, which can still emit output-only keys and trigger the same 400 errors this change is trying to prevent. For example, ResponseToolSearchCall includes created_by (response_tool_search_call.py:30), but the corresponding input shape ToolSearchCall does not accept that field (response_input_item_param.py:153-170), so response.output_as_input() can produce invalid follow-up input whenever such items appear in response.output.
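A minimal sketch of the sanitization this comment suggests, operating on plain dicts rather than the SDK's model classes; the helper name is hypothetical, and the set of output-only keys is assumed from the fields named in the review (created_by, status):

```python
# Hypothetical helper sketching the review's suggestion: strip known
# output-only keys from the to_dict() fallback before reusing the item
# as next-turn input. Not the PR's actual code.
OUTPUT_ONLY_KEYS = {"created_by", "status"}  # assumed set, per the comment

def sanitize_fallback(item_dict: dict) -> dict:
    """Return a copy of an output item dict without output-only keys."""
    return {k: v for k, v in item_dict.items() if k not in OUTPUT_ONLY_KEYS}

# Example: a tool-search call shaped like it might appear in response.output
tool_call = {"type": "tool_search_call", "id": "ts_123", "created_by": "assistant"}
print(sanitize_fallback(tool_call))
# {'type': 'tool_search_call', 'id': 'ts_123'}
```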
    if hasattr(item, "as_input"):
        result.append(item.as_input())
    else:
        result.append(item.to_dict())
Sanitize fallback items before returning input-ready output
output_as_input() currently uses item.to_dict() for every output variant that does not implement as_input(), which leaks output-only fields for some union members. For example, ResponseFunctionToolCallOutputItem can include created_by, but the corresponding input shape (FunctionCallOutput) does not define that field, so replaying response.output_as_input() into responses.create(input=...) can still produce 400 unknown-parameter failures for non-covered item types. This makes the new helper unreliable outside the three explicitly handled classes.
… 2 when passing response.output back as input
💡 Codex Review
Reviewed commit: 57d56ff892
    if hasattr(item, "as_input"):
        result.append(item.as_input())
    else:
        result.append(item.to_dict())
Strip output-only fields before using fallback serialization
output_as_input() still appends raw item.to_dict() for any output item without as_input(), which means output-only fields leak back into next-turn input. This is still reproducible for items like ResponseToolSearchOutputItem, ResponseFunctionShellToolCallOutput, and ResponseApplyPatchToolCallOutput that include created_by, while their input counterparts (ResponseToolSearchOutputItemParam, ShellCallOutput, ApplyPatchCallOutput) do not define that field; in those tool-enabled conversations, conversation.extend(response.output_as_input()) can still trigger 400 unknown-parameter errors despite this fix.
Problem
Reusing response.output items as input via model_dump() caused 400 errors in the OpenAI API:
- ResponseOutputMessage included status (required field, not valid input)
- ResponseReasoningItem included status: None and encrypted_content: None
- ResponseFunctionToolCall also included status

This made the API reject requests with:
400: Unknown parameter: input[i].status

Solution
This PR introduces:
- as_input() on:
  - ResponseOutputMessage (strips status)
  - ResponseReasoningItem (strips status and encrypted_content)
  - ResponseFunctionToolCall (strips status)
- output_as_input() on Response, a convenience method that:
  - calls as_input() on supported items
  - falls back to to_dict() for others

Example Usage
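The PR's own usage example is not reproduced above; the following is a hedged, plain-dict stand-in for the field-stripping behavior the description attributes to as_input(), with the stripped field names taken from the PR text (status, encrypted_content). It is illustrative only, not the PR's actual implementation:

```python
# Illustrative stand-in for as_input(): drop the output-only fields the
# PR description says are stripped, so an output item is valid as input.
STRIPPED_FIELDS = {"status", "encrypted_content"}  # per the PR description

def as_input(item: dict) -> dict:
    """Return a copy of an output item dict without output-only fields."""
    return {k: v for k, v in item.items() if k not in STRIPPED_FIELDS}

# An assistant message as it might appear in response.output
message = {
    "type": "message",
    "role": "assistant",
    "content": [{"type": "output_text", "text": "4"}],
    "status": "completed",  # output-only: rejected when replayed as input
}
print(as_input(message))
# {'type': 'message', 'role': 'assistant', 'content': [{'type': 'output_text', 'text': '4'}]}
```

In a multi-turn loop, the sanitized items can be appended to the running conversation list before the next responses.create(input=...) call, which is the pattern this PR's output_as_input() convenience method wraps.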