15 changes: 15 additions & 0 deletions .changeset/real-rules-kiss.md
@@ -0,0 +1,15 @@
---
'ai': patch
'@ai-sdk/provider': patch
'@ai-sdk/openai': patch
'@ai-sdk/openai-compatible': patch
'@ai-sdk/anthropic': patch
---

Add top-level `thinking` call settings to the V3 language model spec and map them across providers.

- Added `thinking` to `LanguageModelV3CallOptions` and AI SDK call settings with runtime validation.
- Forwarded `thinking` through `generateText`, `streamText`, `generateObject`, `streamObject`, and `ToolLoopAgent`.
- Mapped top-level `thinking` to OpenAI chat/responses and OpenAI-compatible `reasoning_effort`.
- Mapped top-level `thinking` to Anthropic thinking configuration and effort output settings.
- Added regression tests for forwarding, precedence, and unsupported budget warnings.
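
The option shape implied by this changeset can be sketched in TypeScript. This is an illustrative sketch only: the `ThinkingOption` and `validateThinking` names are hypothetical, and the exact runtime checks the SDK performs are not shown in this diff.

```typescript
// Hypothetical sketch of the top-level `thinking` union described above,
// with the kind of runtime validation the changeset mentions.
type ThinkingOption =
  | { type: 'enabled'; effort?: 'low' | 'medium' | 'high'; budgetTokens?: number }
  | { type: 'disabled' };

function validateThinking(value: ThinkingOption): ThinkingOption {
  // Assumption: a budget, when given, must be a positive integer.
  if (value.type === 'enabled' && value.budgetTokens !== undefined) {
    if (!Number.isInteger(value.budgetTokens) || value.budgetTokens <= 0) {
      throw new Error('thinking.budgetTokens must be a positive integer');
    }
  }
  return value;
}
```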
7 changes: 7 additions & 0 deletions content/docs/07-reference/01-ai-sdk-core/01-generate-text.mdx
@@ -421,6 +421,13 @@ To see `generateText` in action, check out [these examples](#examples).
description:
'The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.',
},
{
name: 'thinking',
type: '{ type: "enabled"; effort?: "low" | "medium" | "high"; budgetTokens?: number } | { type: "disabled" }',
isOptional: true,
description:
'Top-level thinking / reasoning configuration. Providers may map this to provider-specific reasoning settings.',
},
{
name: 'maxRetries',
type: 'number',
7 changes: 7 additions & 0 deletions content/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx
@@ -422,6 +422,13 @@ To see `streamText` in action, check out [these examples](#examples).
description:
'The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.',
},
{
name: 'thinking',
type: '{ type: "enabled"; effort?: "low" | "medium" | "high"; budgetTokens?: number } | { type: "disabled" }',
isOptional: true,
description:
'Top-level thinking / reasoning configuration. Providers may map this to provider-specific reasoning settings.',
},
{
name: 'maxRetries',
type: 'number',
@@ -459,6 +459,13 @@ To see `generateObject` in action, check out the [additional examples](#more-exa
description:
'The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.',
},
{
name: 'thinking',
type: '{ type: "enabled"; effort?: "low" | "medium" | "high"; budgetTokens?: number } | { type: "disabled" }',
isOptional: true,
description:
'Top-level thinking / reasoning configuration. Providers may map this to provider-specific reasoning settings.',
},
{
name: 'maxRetries',
type: 'number',
7 changes: 7 additions & 0 deletions content/docs/07-reference/01-ai-sdk-core/04-stream-object.mdx
@@ -465,6 +465,13 @@ To see `streamObject` in action, check out the [additional examples](#more-examp
description:
'The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.',
},
{
name: 'thinking',
type: '{ type: "enabled"; effort?: "low" | "medium" | "high"; budgetTokens?: number } | { type: "disabled" }',
isOptional: true,
description:
'Top-level thinking / reasoning configuration. Providers may map this to provider-specific reasoning settings.',
},
{
name: 'maxRetries',
type: 'number',
4 changes: 3 additions & 1 deletion content/providers/01-ai-sdk-providers/03-openai.mdx
@@ -193,7 +193,9 @@ The following provider options are available:
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Defaults to `undefined`.

  - **reasoningEffort** _'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh'_
-   Reasoning effort for reasoning models. Defaults to `medium`. If you use `providerOptions` to set the `reasoningEffort` option, this model setting will be ignored.
+   Reasoning effort for reasoning models. Defaults to `medium`.
+   You can also set top-level `thinking` in `generateText`, `streamText`, `generateObject`, or `streamObject`.
+   If both are set, top-level `thinking` takes precedence.
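
The precedence rule above can be sketched as a small resolution function. This is not the provider's actual code: the function name is hypothetical, and how the provider treats `{ type: 'disabled' }` is an assumption here.

```typescript
// Illustrative sketch: resolve OpenAI's reasoning effort from the top-level
// `thinking` setting and the provider-level `reasoningEffort` option, with
// top-level `thinking` taking precedence when both are set.
type Thinking =
  | { type: 'enabled'; effort?: 'low' | 'medium' | 'high'; budgetTokens?: number }
  | { type: 'disabled' };

function resolveReasoningEffort(
  thinking: Thinking | undefined,
  providerReasoningEffort?: string,
): string | undefined {
  if (thinking?.type === 'enabled' && thinking.effort !== undefined) {
    return thinking.effort; // top-level thinking wins
  }
  if (thinking === undefined) {
    return providerReasoningEffort; // fall back to the provider option
  }
  // Enabled without an effort, or explicitly disabled: assumption — leave
  // the effort unset and let the model default apply.
  return undefined;
}
```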

<Note>
The 'none' type for `reasoningEffort` is only available for OpenAI's GPT-5.1
14 changes: 6 additions & 8 deletions content/providers/01-ai-sdk-providers/05-anthropic.mdx
@@ -221,20 +221,18 @@ The `speed` option accepts `'fast'` or `'standard'` (default behavior).

Anthropic has reasoning support for `claude-opus-4-20250514`, `claude-sonnet-4-20250514`, and `claude-3-7-sonnet-20250219` models.

- You can enable it using the `thinking` provider option
- and specifying a thinking budget in tokens.
+ You can enable it with top-level `thinking` and optionally specify a token budget.

- ```ts highlight="4,8-10"
- import { anthropic, AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
+ ```ts highlight="4,7-10"
+ import { anthropic } from '@ai-sdk/anthropic';
  import { generateText } from 'ai';

  const { text, reasoningText, reasoning } = await generateText({
    model: anthropic('claude-opus-4-20250514'),
    prompt: 'How many people will live in the world in 2040?',
-   providerOptions: {
-     anthropic: {
-       thinking: { type: 'enabled', budgetTokens: 12000 },
-     } satisfies AnthropicLanguageModelOptions,
+   thinking: {
+     type: 'enabled',
+     budgetTokens: 12000,
    },
  });
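
The Anthropic-side mapping of the top-level option can be sketched as follows. The names (`toAnthropicThinking`, `AnthropicThinkingConfig`) are hypothetical, and the omission of `budget_tokens` when no budget is given is an assumption, not confirmed provider behavior.

```typescript
// Hypothetical sketch: map the top-level `thinking` setting to an
// Anthropic-style thinking configuration object.
type Thinking =
  | { type: 'enabled'; effort?: 'low' | 'medium' | 'high'; budgetTokens?: number }
  | { type: 'disabled' };

interface AnthropicThinkingConfig {
  type: 'enabled' | 'disabled';
  budget_tokens?: number;
}

function toAnthropicThinking(thinking: Thinking): AnthropicThinkingConfig {
  if (thinking.type === 'disabled') {
    return { type: 'disabled' };
  }
  // Forward the budget when present; otherwise leave it to provider defaults.
  return thinking.budgetTokens !== undefined
    ? { type: 'enabled', budget_tokens: thinking.budgetTokens }
    : { type: 'enabled' };
}
```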

2 changes: 2 additions & 0 deletions packages/ai/src/agent/tool-loop-agent-settings.ts
@@ -149,6 +149,7 @@ export type ToolLoopAgentSettings<
| 'frequencyPenalty'
| 'stopSequences'
| 'seed'
| 'thinking'
| 'headers'
| 'instructions'
| 'stopWhen'
@@ -171,6 +172,7 @@
| 'frequencyPenalty'
| 'stopSequences'
| 'seed'
| 'thinking'
| 'headers'
| 'instructions'
| 'stopWhen'
34 changes: 34 additions & 0 deletions packages/ai/src/agent/tool-loop-agent.test.ts
@@ -77,6 +77,22 @@ describe('ToolLoopAgent', () => {
expect(doGenerateOptions?.abortSignal).toBe(abortController.signal);
});

it('should pass thinking to generateText', async () => {
const agent = new ToolLoopAgent({
model: mockModel,
thinking: { type: 'enabled', effort: 'high' },
});

await agent.generate({
prompt: 'Hello, world!',
});

expect(doGenerateOptions?.thinking).toEqual({
type: 'enabled',
effort: 'high',
});
});

it('should pass timeout to generateText', async () => {
const agent = new ToolLoopAgent({ model: mockModel });

@@ -350,6 +366,24 @@ describe('ToolLoopAgent', () => {
expect(doStreamOptions?.abortSignal).toBe(abortController.signal);
});

it('should pass thinking to streamText', async () => {
const agent = new ToolLoopAgent({
model: mockModel,
thinking: { type: 'enabled', effort: 'medium' },
});

const result = await agent.stream({
prompt: 'Hello, world!',
});

await result.consumeStream();

expect(doStreamOptions?.thinking).toEqual({
type: 'enabled',
effort: 'medium',
});
});

it('should pass timeout to streamText', async () => {
const agent = new ToolLoopAgent({
model: mockModel,
35 changes: 35 additions & 0 deletions packages/ai/src/generate-object/generate-object.test.ts
@@ -701,6 +701,41 @@ describe('generateObject', () => {
});
});

describe('options.thinking', () => {
it('should pass thinking settings to model', async () => {
const result = await generateObject({
model: new MockLanguageModelV3({
doGenerate: async ({ thinking }) => {
expect(thinking).toStrictEqual({
type: 'enabled',
effort: 'low',
});

return {
...dummyResponseValues,
content: [
{
type: 'text',
text: '{ "content": "thinking settings test" }',
},
],
};
},
}),
schema: z.object({ content: z.string() }),
prompt: 'prompt',
thinking: {
type: 'enabled',
effort: 'low',
},
});

expect(result.object).toStrictEqual({
content: 'thinking settings test',
});
});
});

describe('error handling', () => {
function verifyNoObjectGeneratedError(
error: unknown,
1 change: 1 addition & 0 deletions packages/ai/src/generate-object/generate-object.ts
@@ -76,6 +76,7 @@ const originalGenerateId = createIdGenerator({ prefix: 'aiobj', size: 24 });
* If set, the model will stop generating text when one of the stop sequences is generated.
* @param seed - The seed (integer) to use for random sampling.
* If set and supported by the model, calls will generate deterministic results.
* @param thinking - Top-level thinking / reasoning configuration.
*
* @param maxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.
* @param abortSignal - An optional abort signal that can be used to cancel the call.
42 changes: 42 additions & 0 deletions packages/ai/src/generate-object/stream-object.test.ts
@@ -802,6 +802,48 @@ describe('streamObject', () => {
});
});

describe('options.thinking', () => {
it('should pass thinking settings to model', async () => {
const result = streamObject({
model: new MockLanguageModelV3({
doStream: async ({ thinking }) => {
expect(thinking).toStrictEqual({
type: 'enabled',
effort: 'high',
});

return {
stream: convertArrayToReadableStream([
{ type: 'text-start', id: '1' },
{
type: 'text-delta',
id: '1',
delta: `{ "content": "thinking settings test" }`,
},
{ type: 'text-end', id: '1' },
{
type: 'finish',
finishReason: { unified: 'stop', raw: 'stop' },
usage: testUsage,
},
]),
};
},
}),
schema: z.object({ content: z.string() }),
prompt: 'prompt',
thinking: {
type: 'enabled',
effort: 'high',
},
});

expect(
await convertAsyncIterableToArray(result.partialObjectStream),
).toStrictEqual([{ content: 'thinking settings test' }]);
});
});

describe('custom schema', () => {
it('should send object deltas', async () => {
const mockModel = createTestModel();
1 change: 1 addition & 0 deletions packages/ai/src/generate-object/stream-object.ts
@@ -140,6 +140,7 @@ export type StreamObjectOnFinishCallback<RESULT> = (event: {
* If set, the model will stop generating text when one of the stop sequences is generated.
* @param seed - The seed (integer) to use for random sampling.
* If set and supported by the model, calls will generate deterministic results.
* @param thinking - Top-level thinking / reasoning configuration.
*
* @param maxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.
* @param abortSignal - An optional abort signal that can be used to cancel the call.
27 changes: 27 additions & 0 deletions packages/ai/src/generate-text/generate-text.test.ts
@@ -2235,6 +2235,33 @@ describe('generateText', () => {
});
});

describe('options.thinking', () => {
it('should pass thinking settings to model', async () => {
const result = await generateText({
model: new MockLanguageModelV3({
doGenerate: async ({ thinking }) => {
expect(thinking).toStrictEqual({
type: 'enabled',
effort: 'high',
});

return {
...dummyResponseValues,
content: [{ type: 'text', text: 'thinking settings test' }],
};
},
}),
prompt: 'test-input',
thinking: {
type: 'enabled',
effort: 'high',
},
});

expect(result.text).toStrictEqual('thinking settings test');
});
});

describe('options.abortSignal', () => {
it('should forward abort signal to tool execution', async () => {
const abortController = new AbortController();
1 change: 1 addition & 0 deletions packages/ai/src/generate-text/generate-text.ts
@@ -156,6 +156,7 @@ export type GenerateTextOnFinishCallback<TOOLS extends ToolSet> = (
* If set, the model will stop generating text when one of the stop sequences is generated.
* @param seed - The seed (integer) to use for random sampling.
* If set and supported by the model, calls will generate deterministic results.
* @param thinking - Top-level thinking / reasoning configuration.
*
* @param maxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.
* @param abortSignal - An optional abort signal that can be used to cancel the call.
43 changes: 43 additions & 0 deletions packages/ai/src/generate-text/stream-text.test.ts
@@ -10059,6 +10059,49 @@ describe('streamText', () => {
});
});

describe('options.thinking', () => {
it('should pass thinking settings to model', async () => {
const result = streamText({
model: new MockLanguageModelV3({
doStream: async ({ thinking }) => {
expect(thinking).toStrictEqual({
type: 'enabled',
effort: 'medium',
});

return {
stream: convertArrayToReadableStream([
{ type: 'text-start', id: '1' },
{
type: 'text-delta',
id: '1',
delta: 'thinking settings test',
},
{ type: 'text-end', id: '1' },
{
type: 'finish',
finishReason: { unified: 'stop', raw: 'stop' },
usage: testUsage,
},
]),
};
},
}),
prompt: 'test-input',
thinking: {
type: 'enabled',
effort: 'medium',
},
onError: () => {},
});

assert.deepStrictEqual(
await convertAsyncIterableToArray(result.textStream),
['thinking settings test'],
);
});
});

describe('options.abortSignal', () => {
it('should forward abort signal to tool execution during streaming', async () => {
const abortController = new AbortController();
1 change: 1 addition & 0 deletions packages/ai/src/generate-text/stream-text.ts
@@ -238,6 +238,7 @@ export type StreamTextOnAbortCallback<TOOLS extends ToolSet> = (event: {
* If set, the model will stop generating text when one of the stop sequences is generated.
* @param seed - The seed (integer) to use for random sampling.
* If set and supported by the model, calls will generate deterministic results.
* @param thinking - Top-level thinking / reasoning configuration.
*
* @param maxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.
* @param abortSignal - An optional abort signal that can be used to cancel the call.