[Bugfix]: Fix structured output in multi-turn gpt-oss #34454
bbrowning wants to merge 5 commits into vllm-project:main
Conversation
The logic in the gptoss_reasoning_parser to detect when the model has finished outputting reasoning content and is starting to output content to the final channel was inadvertently matching on final channel markers from previous messages in multi-turn scenarios. In practice this meant that vLLM started applying the grammar bitmasks to the entirety of the model's output in these multi-turn conversations prematurely, causing the model to deviate from its trained Harmony format and produce empty or invalid outputs.

This PR fixes things by never looking for the final channel marker in any message prior to the one the model is currently generating, so that we don't falsely believe the model is starting generation of the final channel unless it's actually doing so during this turn of the conversation.

Prior to vLLM v0.13.0 this bug existed but we didn't actually trip over it, because the way we handled multi-turn conversation state with gpt-oss models was missing important tokens that coincidentally caused those prior conversations to not match these token id checks. Once we fixed multi-turn conversation state, structured output usage with things like `json_object` response formats then hit this bug in the reasoning parser.

Fixes vllm-project#32791

Signed-off-by: Ben Browning <bbrownin@redhat.com>
Force-pushed from 65c163e to c851d60
Code Review
This pull request addresses a critical bug in the gptoss_reasoning_parser that caused premature termination of reasoning in multi-turn conversations, leading to incorrect structured outputs. The fix, which involves stopping the backward search for the end-of-reasoning marker upon encountering a message boundary from a previous turn, is logical and well-implemented. The inclusion of a specific unit test to cover this multi-turn scenario is a great addition and significantly improves the robustness of the parser. Overall, the changes are excellent and effectively resolve the described issue. I have one suggestion to further improve the robustness of the code.
Signed-off-by: Ben Browning <bbrownin@redhat.com>
…asoning-structured
Instead of .encode followed by taking the first token, it's cleaner to just directly use model_tokenizer.vocab to fetch single token ids. Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com> Signed-off-by: Ben Browning <bbrownin@redhat.com>
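A minimal sketch of the difference, assuming a Harmony special token string such as `<|channel|>` (the exact token strings and variable names here are illustrative):

```python
# Before: encode the special token string and take the first id. Indirect,
# and it quietly assumes the string encodes to exactly one token.
channel_token_id = model_tokenizer.encode("<|channel|>")[0]

# After: look the id up directly in the tokenizer's vocabulary.
channel_token_id = model_tokenizer.vocab["<|channel|>"]
```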
CI discovered some additional tests that use gptoss_reasoning_parser but with a mocked tokenizer. So, this adds a mocked `vocab` to that mock tokenizer so that these tests also pass. Signed-off-by: Ben Browning <bbrownin@redhat.com>
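For example, a mock along these lines (the names and token ids are purely illustrative) now needs a `vocab` attribute so the parser's direct lookups succeed:

```python
from unittest.mock import MagicMock

mock_tokenizer = MagicMock()
# Minimal vocab so gptoss_reasoning_parser can resolve the Harmony special
# tokens it cares about; the ids here are placeholders, not the real ones.
mock_tokenizer.vocab = {
    "<|start|>": 1,
    "<|channel|>": 2,
    "<|message|>": 3,
    "final": 4,
}
```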
CI picked up some additional tests that use gptoss_reasoning_parser with a mocked tokenizer and that failed after adjusting the parser to use `model_tokenizer.vocab`. Those failures were caught by CI, but they are passing locally now.
Purpose
The logic in the gptoss_reasoning_parser to detect when the model has finished outputting reasoning content and is starting to output content to the final channel was inadvertently matching on final channel markers from previous messages in multi-turn scenarios. In practice this meant that vLLM started applying the grammar bitmasks to the entirety of the model's output in these multi-turn conversations prematurely, causing the model to deviate from its trained Harmony format and produce empty or invalid outputs.
This PR fixes things by never looking for the final channel marker in any message prior to the current one the model is generating so that we don't falsely believe the model is starting generation of the final channel unless it's actually doing so during this turn of the conversation.
Prior to vLLM v0.13.0 this bug existed but we didn't actually trip over it, because the way we handled multi-turn conversation state with gpt-oss models was missing important tokens that coincidentally caused those prior conversations to not match these token id checks. Once we fixed multi-turn conversation state, structured output usage with things like `json_object` response formats then hit this bug in the reasoning parser.

Fixes #32791
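As a rough sketch of the idea (this is not the parser's actual code; the token ids, names, and exact marker sequence are assumptions for illustration), the backward search for the final-channel marker now stops at the boundary of the current message:

```python
def final_channel_started(token_ids, start_id, channel_id, final_id, message_id):
    """Return True only if the message currently being generated has opened
    the final channel, i.e. contains <|channel|>final<|message|>.

    Scan backwards from the end of the output and stop as soon as a
    start-of-message token is reached, so final-channel markers from
    previous turns can no longer match.
    """
    marker = [channel_id, final_id, message_id]
    for i in range(len(token_ids) - 1, -1, -1):
        if token_ids[i] == start_id:
            # Crossed into a previous message without finding the marker.
            return False
        if token_ids[i : i + len(marker)] == marker:
            return True
    return False
```

Previously the search effectively ran over the whole token sequence, so a final channel opened in an earlier assistant turn was enough to trigger the grammar bitmask prematurely.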
Test Plan
I added a unit test specifically to cover this case, following test-driven development: I confirmed the test failed initially, applied my fix, and then confirmed the test passed.
The existing and new gptoss_reasoning_parser unit tests were run via:
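(The exact command didn't survive in this capture; it was presumably a targeted pytest run along these lines, with the test file path being an assumption:)

```bash
pytest -v tests/reasoning/test_gptoss_reasoning_parser.py
```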
Additionally, I ran the manual reproducer (labeled as case 3) in #32791:
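The reproducer itself is not reproduced here; it is a multi-turn chat completion request with a `json_object` response format against a running gpt-oss server, roughly of this shape (the model name, port, and message contents below are illustrative, not the exact request from the issue):

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-oss-20b",
    "response_format": {"type": "json_object"},
    "messages": [
      {"role": "user", "content": "Describe a cat as JSON."},
      {"role": "assistant", "content": "{\"animal\": \"cat\", \"sound\": \"meow\"}"},
      {"role": "user", "content": "Now describe a dog as JSON."}
    ]
  }'
```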
Test Result
All the unit tests passed.
For the manual curl test, prior to this change it gave a response with empty content:
After this change, the model gives the expected response: