Reduce finish_reason reliance #96
Conversation
Force-pushed from 31d7e68 to e2bc89f
Non-auto tool choices will result in a finish_reason of "stop" rather than "tool_calls". The current code assumes "stop" means there are no tool calls to be made when this isn't true. Instead, it should simply check for any available tool calls. Alongside, the last message in a chat completion stream isn't guaranteed to have the finish_reason. Instead we should check for any reasons to stop as we collect chunks and then use that at the end.
Force-pushed from e2bc89f to 1c88b33
Pull Request Overview
This PR addresses issues with how tool calls are detected in OpenAI chat completions by reducing reliance on the finish_reason field. The main issue is that non-auto tool choices return a finish_reason of "stop" even when tool calls are present, causing the current logic to incorrectly assume no tool calls exist.
- Replace finish_reason-based tool call detection with direct checking of tool_calls presence
- Track tool call state during streaming to handle finish_reason inconsistencies
- Simplify logic to only stop processing for a "length" finish_reason or the absence of tool calls (see the sketch after this list)
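
Roughly, that simplified stop condition might look like the following sketch; the names `finish_reasons` and `tool_call_seen` are illustrative here, not necessarily the identifiers used in this PR:

```python
import logging

logger = logging.getLogger(__name__)

def should_stop(finish_reasons: set[str], tool_call_seen: bool) -> bool:
    """Illustrative check: stop on token exhaustion or when no tool calls appeared."""
    if "length" in finish_reasons or not tool_call_seen:
        logger.debug("no tool calls found")
        return True
    return False
```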
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| streamChatCompletion.py | Replaces finish_reason logic with tool_call tracking and direct tool_calls checking |
| chatCompletion.py | Updates condition to check for tool_calls presence instead of relying on finish_reason |
```diff
-assert last.choices[0].finish_reason is not None
-
-if len(last.choices) > 0 and last.choices[0].finish_reason.value in ["stop", "length"]:
+if fully_done or tool_call == False:
     logger.debug("no tool calls found")
     fully_done = True
     continue
```
Use `not tool_call` instead of `tool_call == False` for boolean comparison. This is more Pythonic and consistent with PEP 8 style guidelines.
Suggested change:

```python
if fully_done or not tool_call:
    logger.debug("no tool calls found")
    fully_done = True
    continue
```
Non-auto tool choices will result in a finish_reason of "stop" rather than "tool_calls". The current code assumes "stop" means there are no tool calls to be made when this isn't true. Instead, it should simply check for any available tool calls.
OpenAI staff talk about this in this thread: https://community.openai.com/t/new-api-feature-forcing-function-calling-via-tool-choice-required/731488/13
Interestingly, these recent tests show different results: https://community.openai.com/t/function-call-with-finish-reason-of-stop/437226/45
I couldn't find a conclusive reference for this in the docs/API reference, but regardless, it should be safer to rely on the presence or absence of tool_calls.
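
For illustration, here's a rough sketch of what relying on the presence or absence of tool_calls can look like with the openai Python client; the model name, tool definition, and prompt below are placeholders, not this repo's code:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder tool definition for illustration only
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",        # placeholder model name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="required",     # non-auto choice; finish_reason may come back as "stop"
)

message = response.choices[0].message
if message.tool_calls:
    # Tool calls can be present even when finish_reason == "stop"
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    # No tool calls to run; treat the reply as final
    print(message.content)
```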
Additionally, the last message in a chat completion stream isn't guaranteed to carry a finish_reason. Instead, we should record any reason to stop as we collect chunks and then use it at the end. The OpenAI Python library follows a similar approach when processing chunks:
https://github.com/openai/openai-python/blob/e6c6757553bbdb777c31d0daf5916fb9e2b47ff8/src/openai/lib/streaming/chat/_completions.py#L361
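
A minimal sketch of that chunk-collection idea, assuming the same openai client and reusing the placeholder tools list from the sketch above; the accumulation variables are illustrative, not the names used in this PR:

```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,           # placeholder tool definitions from the sketch above
    stream=True,
)

finish_reason = None
saw_tool_call = False

for chunk in stream:
    if not chunk.choices:
        continue  # some chunks (e.g. usage-only) carry an empty choices list
    choice = chunk.choices[0]
    # Record finish_reason from whichever chunk carries it; the final
    # chunk is not guaranteed to repeat it.
    if choice.finish_reason is not None:
        finish_reason = choice.finish_reason
    if choice.delta and choice.delta.tool_calls:
        saw_tool_call = True

# Decide from the accumulated state, not from the last chunk alone.
if saw_tool_call and finish_reason != "length":
    ...  # assemble and execute the collected tool calls
else:
    ...  # nothing to execute; the response is final
```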
Fixes #88