support for native reasoning in CoT for reasoning models #8764
Adds support for `ChainOfThought` to use a model's built-in reasoning when used with reasoning models.
The previous behavior did not make use of this native reasoning content and still created a separate `reasoning` field during generation. Now, CoT's on-the-fly reasoning field is turned off when the model is detected to generate native reasoning (using `litellm.supports_reasoning`) and the behavior is flagged in `dspy.settings` as `use_native_reasoning`.
Examples:
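A minimal sketch of the new path. The model name and the exact way the flag is passed through `dspy.settings`/`dspy.configure` are illustrative assumptions; the detection call mirrors the `litellm.supports_reasoning` check described above:

```python
import dspy
import litellm

# Illustrative reasoning-capable model; native reasoning support is
# detected via litellm.supports_reasoning.
model = "anthropic/claude-3-7-sonnet-20250219"
print(litellm.supports_reasoning(model=model))  # True for reasoning models

lm = dspy.LM(model)
# Opt in to the new behavior with the dspy.settings flag from this PR
# (the exact configuration call shown here is an assumption).
dspy.configure(lm=lm, use_native_reasoning=True)

qa = dspy.ChainOfThought("question -> answer")
pred = qa(question="What is 12 * 7?")
# ChainOfThought no longer injects a separate reasoning OutputField here;
# the reasoning content comes from the model's built-in reasoning instead.
print(pred.answer)
```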
Backward compatible with non-reasoning models, which default to the existing CoT logic of adding the `reasoning` OutputField on the fly.
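For comparison, a sketch of the unchanged fallback path with a non-reasoning model (model name illustrative):

```python
import dspy

# Non-reasoning model: the existing CoT logic still adds the `reasoning`
# OutputField on the fly, exactly as before this change.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

qa = dspy.ChainOfThought("question -> answer")
pred = qa(question="What is 12 * 7?")
print(pred.reasoning)  # produced by the CoT prompt, not native reasoning
print(pred.answer)
```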
(Note that OpenAI reasoning models do not expose the reasoning content and require setting the `summary` parameter to get a reasoning summary trace.)