diff --git a/content/docs/02-getting-started/04-svelte.mdx b/content/docs/02-getting-started/04-svelte.mdx index 889c33e9ce0c..bc45268a4c1c 100644 --- a/content/docs/02-getting-started/04-svelte.mdx +++ b/content/docs/02-getting-started/04-svelte.mdx @@ -1,13 +1,13 @@ --- title: Svelte -description: Welcome to the AI SDK quickstart guide for Svelte! +description: Learn how to build your first agent with the AI SDK and Svelte. --- # Svelte Quickstart The AI SDK is a powerful Typescript library designed to help developers build AI-powered applications. -In this quickstart tutorial, you'll build a simple AI-chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects. +In this quickstart tutorial, you'll build a simple agent with a streaming chat user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects. If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first. @@ -16,9 +16,9 @@ If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/p To follow this quickstart, you'll need: - Node.js 18+ and pnpm installed on your local development machine. -- An OpenAI API key. +- A [ Vercel AI Gateway ](https://vercel.com/ai-gateway) API key. -If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website. +If you haven't obtained your Vercel AI Gateway API key, you can do so by [signing up](https://vercel.com/d?to=%2F%5Bteam%5D%2F%7E%2Fai&title=Go+to+AI+Gateway) on the Vercel website. ## Set Up Your Application @@ -32,53 +32,50 @@ Navigate to the newly created directory: ### Install Dependencies -Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider. +Install `ai` and `@ai-sdk/svelte`, the AI package and AI SDK's Svelte bindings. The AI SDK's [ Vercel AI Gateway provider ](/providers/ai-sdk-providers/ai-gateway) ships with the `ai` package. You'll also install `zod`, a schema validation library used for defining tool inputs. - The AI SDK is designed to be a unified interface to interact with any large - language model. This means that you can change model and providers with just - one line of code! Learn more about [available providers](/providers) and - [building custom providers](/providers/community-providers/custom-providers) - in the [providers](/providers) section. + This guide uses the Vercel AI Gateway provider so you can access hundreds of + models from different providers with one API key, but you can switch to any + provider or model by installing its package. Check out available [AI SDK + providers](/providers/ai-sdk-providers) for more information.
[package-manager install command tabs, updated for the new dependencies]
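The tab markup itself isn't recoverable from this diff, but given the prose above, the updated install command is presumably along these lines (pnpm tab shown; exact version tags may differ):

```bash
# install the AI SDK, its Svelte bindings, and zod for tool input schemas
pnpm add ai @ai-sdk/svelte zod
```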
-### Configure OpenAI API Key +### Configure your AI Gateway API key -Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service. +Create a `.env.local` file in your project root and add your AI Gateway API key. This key authenticates your application with the Vercel AI Gateway. Edit the `.env.local` file: ```env filename=".env.local" -OPENAI_API_KEY=xxxxxxxxx +AI_GATEWAY_API_KEY=xxxxxxxxx ``` -Replace `xxxxxxxxx` with your actual OpenAI API key. +Replace `xxxxxxxxx` with your actual Vercel AI Gateway API key. - Vite does not automatically load environment variables onto `process.env`, so - you'll need to import `OPENAI_API_KEY` from `$env/static/private` in your code - (see below). + The AI SDK's Vercel AI Gateway Provider will default to using the + `AI_GATEWAY_API_KEY` environment variable. Vite does not automatically load + environment variables onto `process.env`, so you'll need to import + `AI_GATEWAY_API_KEY` from `$env/static/private` in your code (see below). ## Create an API route @@ -86,20 +83,24 @@ Replace `xxxxxxxxx` with your actual OpenAI API key. Create a SvelteKit Endpoint, `src/routes/api/chat/+server.ts` and add the following code: ```tsx filename="src/routes/api/chat/+server.ts" -import { createOpenAI } from '@ai-sdk/openai'; -import { streamText, type UIMessage, convertToModelMessages } from 'ai'; +import { + streamText, + type UIMessage, + convertToModelMessages, + createGateway, +} from 'ai'; -import { OPENAI_API_KEY } from '$env/static/private'; +import { AI_GATEWAY_API_KEY } from '$env/static/private'; -const openai = createOpenAI({ - apiKey: OPENAI_API_KEY, +const gateway = createGateway({ + apiKey: AI_GATEWAY_API_KEY, }); export async function POST({ request }) { const { messages }: { messages: UIMessage[] } = await request.json(); const result = streamText({ - model: openai('gpt-4o'), + model: gateway('openai/gpt-5.1'), messages: convertToModelMessages(messages), }); @@ -108,18 +109,75 @@ export async function POST({ request }) { ``` - If you see type errors with `OPENAI_API_KEY` or your `POST` function, run the - dev server. + If you see type errors with `AI_GATEWAY_API_KEY` or your `POST` function, run + the dev server. Let's take a look at what is happening in this code: -1. Create an OpenAI provider instance with the `createOpenAI` function from the `@ai-sdk/openai` package. +1. Create a gateway provider instance with the `createGateway` function from the `ai` package. 2. Define a `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation. The `messages` are of UIMessage type, which are designed for use in application UI - they contain the entire message history and associated metadata like timestamps. 3. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (defined in step 1) and `messages` (defined in step 2). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour. The `messages` key expects a `ModelMessage[]` array. This type is different from `UIMessage` in that it does not include metadata, such as timestamps or sender information. 
To convert between these types, we use the `convertToModelMessages` function, which strips the UI-specific metadata and transforms the `UIMessage[]` array into the `ModelMessage[]` format that the model expects. 4. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [ `toUIMessageStreamResponse` ](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object. 5. Return the result to the client to stream the response. +## Choosing a Provider + +The AI SDK supports dozens of model providers through [first-party](/providers/ai-sdk-providers), [OpenAI-compatible](/providers/openai-compatible-providers), and [ community ](/providers/community-providers) packages. + +This quickstart uses the [Vercel AI Gateway](https://vercel.com/ai-gateway) provider, which is the default [global provider](/docs/ai-sdk-core/provider-management#global-provider-configuration). This means you can access models using a simple string in the model configuration: + +```ts +model: 'openai/gpt-5.1'; +``` + +You can also explicitly import and use the gateway provider in two other equivalent ways: + +```ts +// Option 1: Import from 'ai' package (included by default) +import { gateway } from 'ai'; +model: gateway('openai/gpt-5.1'); + +// Option 2: Install and import from '@ai-sdk/gateway' package +import { gateway } from '@ai-sdk/gateway'; +model: gateway('openai/gpt-5.1'); +``` + +### Using other providers + +To use a different provider, install its package and create a provider instance. For example, to use OpenAI directly: + +
[package-manager install command tabs]
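Judging by the `import { openai } from '@ai-sdk/openai'` example that follows, the added tabs presumably install that provider package, e.g.:

```bash
# install the standalone OpenAI provider package
pnpm add @ai-sdk/openai
```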
+ +```ts +import { openai } from '@ai-sdk/openai'; + +model: openai('gpt-5.1'); +``` + +#### Updating the global provider + +You can change the default global provider so string model references use your preferred provider everywhere in your application. Learn more about [provider management](/docs/ai-sdk-core/provider-management#global-provider-configuration). + +Pick the approach that best matches how you want to manage providers across your application. + ## Wire up the UI Now that you have an API route that can query an LLM, it's time to set up your frontend. The AI SDK's [UI](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one class, `Chat`. @@ -195,8 +253,8 @@ Let's enhance your chatbot by adding a simple weather tool. Modify your `src/routes/api/chat/+server.ts` file to include the new weather tool: ```tsx filename="src/routes/api/chat/+server.ts" highlight="2,3,17-31" -import { createOpenAI } from '@ai-sdk/openai'; import { + createGateway, streamText, type UIMessage, convertToModelMessages, @@ -205,17 +263,17 @@ import { } from 'ai'; import { z } from 'zod'; -import { OPENAI_API_KEY } from '$env/static/private'; +import { AI_GATEWAY_API_KEY } from '$env/static/private'; -const openai = createOpenAI({ - apiKey: OPENAI_API_KEY, +const gateway = createGateway({ + apiKey: AI_GATEWAY_API_KEY, }); export async function POST({ request }) { const { messages }: { messages: UIMessage[] } = await request.json(); const result = streamText({ - model: openai('gpt-4o'), + model: gateway('openai/gpt-5.1'), messages: convertToModelMessages(messages), tools: { weather: tool({ @@ -316,8 +374,8 @@ To solve this, you can enable multi-step tool calls using `stopWhen`. By default Modify your `src/routes/api/chat/+server.ts` file to include the `stopWhen` condition: ```ts filename="src/routes/api/chat/+server.ts" highlight="15" -import { createOpenAI } from '@ai-sdk/openai'; import { + createGateway, streamText, type UIMessage, convertToModelMessages, @@ -326,17 +384,17 @@ import { } from 'ai'; import { z } from 'zod'; -import { OPENAI_API_KEY } from '$env/static/private'; +import { AI_GATEWAY_API_KEY } from '$env/static/private'; -const openai = createOpenAI({ - apiKey: OPENAI_API_KEY, +const gateway = createGateway({ + apiKey: AI_GATEWAY_API_KEY, }); export async function POST({ request }) { const { messages }: { messages: UIMessage[] } = await request.json(); const result = streamText({ - model: openai('gpt-4o'), + model: gateway('openai/gpt-5.1'), messages: convertToModelMessages(messages), stopWhen: stepCountIs(5), tools: { @@ -369,8 +427,8 @@ By setting `stopWhen: stepCountIs(5)`, you're allowing the model to use up to 5 Update your `src/routes/api/chat/+server.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius: ```tsx filename="src/routes/api/chat/+server.ts" highlight="32-45" -import { createOpenAI } from '@ai-sdk/openai'; import { + createGateway, streamText, type UIMessage, convertToModelMessages, @@ -379,17 +437,17 @@ import { } from 'ai'; import { z } from 'zod'; -import { OPENAI_API_KEY } from '$env/static/private'; +import { AI_GATEWAY_API_KEY } from '$env/static/private'; -const openai = createOpenAI({ - apiKey: OPENAI_API_KEY, +const gateway = createGateway({ + apiKey: AI_GATEWAY_API_KEY, }); export async function POST({ request }) { const { messages }: { messages: UIMessage[] } = await request.json(); const result = streamText({ - model: openai('gpt-4o'), + model: gateway('openai/gpt-5.1'), messages: 
convertToModelMessages(messages), stopWhen: stepCountIs(5), tools: { diff --git a/content/docs/02-getting-started/05-nuxt.mdx b/content/docs/02-getting-started/05-nuxt.mdx index 75b33bc3858f..410880f28c33 100644 --- a/content/docs/02-getting-started/05-nuxt.mdx +++ b/content/docs/02-getting-started/05-nuxt.mdx @@ -1,13 +1,13 @@ --- title: Vue.js (Nuxt) -description: Welcome to the AI SDK quickstart guide for Vue.js (Nuxt)! +description: Learn how to build your first agent with the AI SDK and Vue.js (Nuxt). --- # Vue.js (Nuxt) Quickstart The AI SDK is a powerful Typescript library designed to help developers build AI-powered applications. -In this quickstart tutorial, you'll build a simple AI-chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects. +In this quickstart tutorial, you'll build a simple agent with a streaming chat user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects. If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first. @@ -16,9 +16,9 @@ If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/p To follow this quickstart, you'll need: - Node.js 18+ and pnpm installed on your local development machine. -- An OpenAI API key. +- A [ Vercel AI Gateway ](https://vercel.com/ai-gateway) API key. -If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website. +If you haven't obtained your Vercel AI Gateway API key, you can do so by [signing up](https://vercel.com/d?to=%2F%5Bteam%5D%2F%7E%2Fai&title=Go+to+AI+Gateway) on the Vercel website. ## Setup Your Application @@ -32,7 +32,7 @@ Navigate to the newly created directory: ### Install dependencies -Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider. +Install `ai` and `@ai-sdk/vue`. The Vercel AI Gateway provider ships with the `ai` package. The AI SDK is designed to be a unified interface to interact with any large @@ -44,48 +44,51 @@ Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider.
[package-manager install command tabs, updated for the new dependencies]
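Based on the surrounding prose, the updated command presumably installs the AI package and the Vue bindings (pnpm tab shown; `zod` is an assumption here, since the tool examples later import it):

```bash
# install the AI SDK, its Vue bindings, and zod for tool input schemas
pnpm add ai @ai-sdk/vue zod
```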
-### Configure OpenAI API key +### Configure Vercel AI Gateway API key -Create a `.env` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service. +Create a `.env` file in your project root and add your Vercel AI Gateway API Key. This key is used to authenticate your application with the Vercel AI Gateway service. Edit the `.env` file: ```env filename=".env" -NUXT_OPENAI_API_KEY=xxxxxxxxx +NUXT_AI_GATEWAY_API_KEY=xxxxxxxxx ``` -Replace `xxxxxxxxx` with your actual OpenAI API key and configure the environment variable in `nuxt.config.ts`: +Replace `xxxxxxxxx` with your actual Vercel AI Gateway API key and configure the environment variable in `nuxt.config.ts`: ```ts filename="nuxt.config.ts" export default defineNuxtConfig({ // rest of your nuxt config runtimeConfig: { - openaiApiKey: '', + aiGatewayApiKey: '', }, }); ``` - The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` - environment variable. + This guide uses Nuxt's runtime config to manage the API key. The `NUXT_` + prefix in the environment variable allows Nuxt to automatically load it into + the runtime config. While the AI Gateway Provider also supports a default + `AI_GATEWAY_API_KEY` environment variable, this approach provides better + integration with Nuxt's configuration system. ## Create an API route @@ -93,13 +96,17 @@ export default defineNuxtConfig({ Create an API route, `server/api/chat.ts` and add the following code: ```typescript filename="server/api/chat.ts" -import { streamText, UIMessage, convertToModelMessages } from 'ai'; -import { createOpenAI } from '@ai-sdk/openai'; +import { + streamText, + UIMessage, + convertToModelMessages, + createGateway, +} from 'ai'; export default defineLazyEventHandler(async () => { - const apiKey = useRuntimeConfig().openaiApiKey; - if (!apiKey) throw new Error('Missing OpenAI API key'); - const openai = createOpenAI({ + const apiKey = useRuntimeConfig().aiGatewayApiKey; + if (!apiKey) throw new Error('Missing AI Gateway API key'); + const gateway = createGateway({ apiKey: apiKey, }); @@ -107,7 +114,7 @@ export default defineLazyEventHandler(async () => { const { messages }: { messages: UIMessage[] } = await readBody(event); const result = streamText({ - model: openai('gpt-4o'), + model: gateway('openai/gpt-5.1'), messages: convertToModelMessages(messages), }); @@ -118,12 +125,63 @@ export default defineLazyEventHandler(async () => { Let's take a look at what is happening in this code: -1. Create an OpenAI provider instance with the `createOpenAI` function from the `@ai-sdk/openai` package. +1. Create a gateway provider instance with the `createGateway` function from the `ai` package. 2. Define an Event Handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation. The `messages` are of UIMessage type, which are designed for use in application UI - they contain the entire message history and associated metadata like timestamps. 3. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (defined in step 1) and `messages` (defined in step 2). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour. The `messages` key expects a `ModelMessage[]` array. 
This type is different from `UIMessage` in that it does not include metadata, such as timestamps or sender information. To convert between these types, we use the `convertToModelMessages` function, which strips the UI-specific metadata and transforms the `UIMessage[]` array into the `ModelMessage[]` format that the model expects. -4. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result). This result object contains the [ `toDataStreamResponse` ](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object. +4. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result). This result object contains the [ `toUIMessageStreamResponse` ](/docs/reference/ai-sdk-core/stream-text#to-ui-message-stream-response) function which converts the result to a streamed response object. 5. Return the result to the client to stream the response. +## Choosing a Provider + +The AI SDK supports dozens of model providers through [first-party](/providers/ai-sdk-providers), [OpenAI-compatible](/providers/openai-compatible-providers), and [ community ](/providers/community-providers) packages. + +This quickstart uses the [Vercel AI Gateway](https://vercel.com/ai-gateway) provider, which is the default [global provider](/docs/ai-sdk-core/provider-management#global-provider-configuration). This means you can access models using a simple string in the model configuration: + +```ts +model: 'openai/gpt-5.1'; +``` + +You can also explicitly import and use the gateway provider in two other equivalent ways: + +```ts +// Option 1: Import from 'ai' package (included by default) +import { gateway } from 'ai'; +model: gateway('openai/gpt-5.1'); + +// Option 2: Install and import from '@ai-sdk/gateway' package +import { gateway } from '@ai-sdk/gateway'; +model: gateway('openai/gpt-5.1'); +``` + +### Using other providers + +To use a different provider, install its package and create a provider instance. For example, to use OpenAI directly: + +
[package-manager install command tabs]
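As above, the added tabs presumably install the provider package used in the example that follows:

```bash
# install the standalone OpenAI provider package
pnpm add @ai-sdk/openai
```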
+ +```ts +import { openai } from '@ai-sdk/openai'; + +model: openai('gpt-5.1'); +``` + ## Wire up the UI Now that you have an API route that can query an LLM, it's time to setup your frontend. The AI SDK's [ UI ](/docs/ai-sdk-ui/overview) package abstract the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat). @@ -200,15 +258,20 @@ Let's enhance your chatbot by adding a simple weather tool. Modify your `server/api/chat.ts` file to include the new weather tool: -```typescript filename="server/api/chat.ts" highlight="1,18-34" -import { streamText, UIMessage, convertToModelMessages, tool } from 'ai'; -import { createOpenAI } from '@ai-sdk/openai'; +```typescript filename="server/api/chat.ts" highlight="1,16-32" +import { + createGateway, + streamText, + UIMessage, + convertToModelMessages, + tool, +} from 'ai'; import { z } from 'zod'; export default defineLazyEventHandler(async () => { - const apiKey = useRuntimeConfig().openaiApiKey; - if (!apiKey) throw new Error('Missing OpenAI API key'); - const openai = createOpenAI({ + const apiKey = useRuntimeConfig().aiGatewayApiKey; + if (!apiKey) throw new Error('Missing AI Gateway API key'); + const gateway = createGateway({ apiKey: apiKey, }); @@ -216,7 +279,7 @@ export default defineLazyEventHandler(async () => { const { messages }: { messages: UIMessage[] } = await readBody(event); const result = streamText({ - model: openai('gpt-4o'), + model: gateway('openai/gpt-5.1'), messages: convertToModelMessages(messages), tools: { weather: tool({ @@ -316,21 +379,21 @@ To solve this, you can enable multi-step tool calls using `stopWhen`. By default Modify your `server/api/chat.ts` file to include the `stopWhen` condition: -```typescript filename="server/api/chat.ts" highlight="24" +```typescript filename="server/api/chat.ts" highlight="22" import { + createGateway, streamText, UIMessage, convertToModelMessages, tool, stepCountIs, } from 'ai'; -import { createOpenAI } from '@ai-sdk/openai'; import { z } from 'zod'; export default defineLazyEventHandler(async () => { - const apiKey = useRuntimeConfig().openaiApiKey; - if (!apiKey) throw new Error('Missing OpenAI API key'); - const openai = createOpenAI({ + const apiKey = useRuntimeConfig().aiGatewayApiKey; + if (!apiKey) throw new Error('Missing AI Gateway API key'); + const gateway = createGateway({ apiKey: apiKey, }); @@ -338,7 +401,7 @@ export default defineLazyEventHandler(async () => { const { messages }: { messages: UIMessage[] } = await readBody(event); const result = streamText({ - model: openai('gpt-4o'), + model: gateway('openai/gpt-5.1'), messages: convertToModelMessages(messages), stopWhen: stepCountIs(5), tools: { @@ -373,21 +436,21 @@ By setting `stopWhen: stepCountIs(5)`, you're allowing the model to use up to 5 Update your `server/api/chat.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius: -```typescript filename="server/api/chat.ts" highlight="34-47" +```typescript filename="server/api/chat.ts" highlight="32-45" import { + createGateway, streamText, UIMessage, convertToModelMessages, tool, stepCountIs, } from 'ai'; -import { createOpenAI } from '@ai-sdk/openai'; import { z } from 'zod'; export default defineLazyEventHandler(async () => { - const apiKey = useRuntimeConfig().openaiApiKey; - if (!apiKey) throw new Error('Missing OpenAI API key'); - const openai = createOpenAI({ + const apiKey = useRuntimeConfig().aiGatewayApiKey; + if (!apiKey) throw new Error('Missing AI Gateway API key'); + const gateway = 
createGateway({ apiKey: apiKey, }); @@ -395,7 +458,7 @@ export default defineLazyEventHandler(async () => { const { messages }: { messages: UIMessage[] } = await readBody(event); const result = streamText({ - model: openai('gpt-4o'), + model: gateway('openai/gpt-5.1'), messages: convertToModelMessages(messages), stopWhen: stepCountIs(5), tools: { diff --git a/content/docs/02-getting-started/06-nodejs.mdx b/content/docs/02-getting-started/06-nodejs.mdx index 5ae9bd78ebe0..bb57d0327cf0 100644 --- a/content/docs/02-getting-started/06-nodejs.mdx +++ b/content/docs/02-getting-started/06-nodejs.mdx @@ -1,13 +1,13 @@ --- title: Node.js -description: Welcome to the AI SDK quickstart guide for Node.js! +description: Learn how to build your first agent with the AI SDK and Node.js. --- # Node.js Quickstart The AI SDK is a powerful Typescript library designed to help developers build AI-powered applications. -In this quickstart tutorial, you'll build a simple AI-chatbot with a streaming user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects. +In this quickstart tutorial, you'll build a simple agent with a streaming chat user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects. If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first. @@ -16,9 +16,9 @@ If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/p To follow this quickstart, you'll need: - Node.js 18+ and pnpm installed on your local development machine. -- An OpenAI API key. +- A [ Vercel AI Gateway ](https://vercel.com/ai-gateway) API key. -If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website. +If you haven't obtained your Vercel AI Gateway API key, you can do so by [signing up](https://vercel.com/d?to=%2F%5Bteam%5D%2F%7E%2Fai&title=Go+to+AI+Gateway) on the Vercel website. ## Setup Your Application @@ -32,7 +32,7 @@ pnpm init ### Install Dependencies -Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider, along with other necessary dependencies. +Install `ai`, the AI SDK, along with other necessary dependencies. The AI SDK is designed to be a unified interface to interact with any large @@ -43,29 +43,29 @@ Install `ai` and `@ai-sdk/openai`, the AI SDK's OpenAI provider, along with othe ```bash -pnpm add ai@beta @ai-sdk/openai@beta zod dotenv +pnpm add ai@beta zod dotenv pnpm add -D @types/node tsx typescript ``` -The `ai` and `@ai-sdk/openai` packages contain the AI SDK and the [ AI SDK OpenAI provider](/providers/ai-sdk-providers/openai), respectively. You will use `zod` to define type-safe schemas that you will pass to the large language model (LLM). You will use `dotenv` to access environment variables (your OpenAI key) within your application. There are also three development dependencies, installed with the `-D` flag, that are necessary to run your Typescript code. +The `ai` package contains the AI SDK. You will use `zod` to define type-safe schemas that you will pass to the large language model (LLM). You will use `dotenv` to access environment variables (your Vercel AI Gateway key) within your application. There are also three development dependencies, installed with the `-D` flag, that are necessary to run your Typescript code. 
-### Configure OpenAI API key +### Configure Vercel AI Gateway API key -Create a `.env` file in your project's root directory and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service. +Create a `.env` file in your project's root directory and add your Vercel AI Gateway API Key. This key is used to authenticate your application with the Vercel AI Gateway service. Edit the `.env` file: ```env filename=".env" -OPENAI_API_KEY=xxxxxxxxx +AI_GATEWAY_API_KEY=xxxxxxxxx ``` -Replace `xxxxxxxxx` with your actual OpenAI API key. +Replace `xxxxxxxxx` with your actual Vercel AI Gateway API key. - The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` - environment variable. + The AI SDK will use the `AI_GATEWAY_API_KEY` environment variable to + authenticate with Vercel AI Gateway. ## Create Your Application @@ -73,7 +73,6 @@ Replace `xxxxxxxxx` with your actual OpenAI API key. Create an `index.ts` file in the root of your project and add the following code: ```ts filename="index.ts" -import { openai } from '@ai-sdk/openai'; import { ModelMessage, streamText } from 'ai'; import 'dotenv/config'; import * as readline from 'node:readline/promises'; @@ -92,7 +91,7 @@ async function main() { messages.push({ role: 'user', content: userInput }); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages, }); @@ -114,7 +113,7 @@ main().catch(console.error); Let's take a look at what is happening in this code: 1. Set up a readline interface to take input from the terminal, enabling interactive sessions directly from the command line. -2. Initialize an array called `messages` to store the history of your conversation. This history allows the model to maintain context in ongoing dialogues. +2. Initialize an array called `messages` to store the history of your conversation. This history allows the agent to maintain context in ongoing dialogues. 3. In the `main` function: - Prompt for and capture user input, storing it in `userInput`. @@ -125,28 +124,78 @@ Let's take a look at what is happening in this code: ## Running Your Application -With that, you have built everything you need for your chatbot! To start your application, use the command: +With that, you have built everything you need for your agent! To start your application, use the command: -You should see a prompt in your terminal. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Node.js. +You should see a prompt in your terminal. Test it out by entering a message and see the AI agent respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with Node.js. + +## Choosing a Provider + +The AI SDK supports dozens of model providers through [first-party](/providers/ai-sdk-providers), [OpenAI-compatible](/providers/openai-compatible-providers), and [ community ](/providers/community-providers) packages. + +This quickstart uses the [Vercel AI Gateway](https://vercel.com/ai-gateway) provider, which is the default [global provider](/docs/ai-sdk-core/provider-management#global-provider-configuration). 
This means you can access models using a simple string in the model configuration: + +```ts +model: 'openai/gpt-5.1'; +``` + +You can also explicitly import and use the gateway provider in two other equivalent ways: + +```ts +// Option 1: Import from 'ai' package (included by default) +import { gateway } from 'ai'; +model: gateway('openai/gpt-5.1'); + +// Option 2: Install and import from '@ai-sdk/gateway' package +import { gateway } from '@ai-sdk/gateway'; +model: gateway('openai/gpt-5.1'); +``` + +### Using other providers + +To use a different provider, install its package and create a provider instance. For example, to use OpenAI directly: -## Enhance Your Chatbot with Tools +
[package-manager install command tabs]
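The added tabs presumably install the provider package imported in the example below, e.g.:

```bash
# install the standalone OpenAI provider package
pnpm add @ai-sdk/openai
```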
+ +```ts +import { openai } from '@ai-sdk/openai'; + +model: openai('gpt-5.1'); +``` + +## Enhance Your Agent with Tools While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where [tools](/docs/ai-sdk-core/tools-and-tool-calling) come in. Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response. -For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information. +For example, if a user asks about the current weather, without tools, the agent would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information. -Let's enhance your chatbot by adding a simple weather tool. +Let's enhance your agent by adding a simple weather tool. ### Update Your Application Modify your `index.ts` file to include the new weather tool: -```ts filename="index.ts" highlight="2,4,25-38" -import { openai } from '@ai-sdk/openai'; +```ts filename="index.ts" highlight="2,4,24-37" import { ModelMessage, streamText, tool } from 'ai'; import 'dotenv/config'; import { z } from 'zod'; @@ -166,7 +215,7 @@ async function main() { messages.push({ role: 'user', content: userInput }); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages, tools: { weather: tool({ @@ -207,18 +256,17 @@ In this updated code: 1. You import the `tool` function from the `ai` package. 2. You define a `tools` object with a `weather` tool. This tool: - - Has a description that helps the model understand when to use it. - - Defines `inputSchema` using a Zod schema, specifying that it requires a `location` string to execute this tool. The model will attempt to extract this input from the context of the conversation. If it can't, it will ask the user for the missing information. + - Has a description that helps the agent understand when to use it. + - Defines `inputSchema` using a Zod schema, specifying that it requires a `location` string to execute this tool. The agent will attempt to extract this input from the context of the conversation. If it can't, it will ask the user for the missing information. - Defines an `execute` function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server so you can fetch real data from an external API. -Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and the results will be used by the model to generate its response. +Now your agent can "fetch" weather information for any location the user asks about. When the agent determines it needs to use the weather tool, it will generate a tool call with the necessary parameters. The `execute` function will then be automatically run, and the results will be used by the agent to generate its response. -Try asking something like "What's the weather in New York?" and see how the model uses the new tool. 
+Try asking something like "What's the weather in New York?" and see how the agent uses the new tool. -Notice the blank "assistant" response? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result in the `toolCall` and `toolResult` keys of the result object. +Notice the blank "assistant" response? This is because instead of generating a text response, the agent generated a tool call. You can access the tool call and subsequent tool result in the `toolCall` and `toolResult` keys of the result object. -```typescript highlight="47-48" -import { openai } from '@ai-sdk/openai'; +```typescript highlight="46-47" import { ModelMessage, streamText, tool } from 'ai'; import 'dotenv/config'; import { z } from 'zod'; @@ -238,7 +286,7 @@ async function main() { messages.push({ role: 'user', content: userInput }); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages, tools: { weather: tool({ @@ -280,16 +328,15 @@ Now, when you ask about the weather, you'll see the tool call and its result dis ## Enabling Multi-Step Tool Calls -You may have noticed that while the tool results are visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation. +You may have noticed that while the tool results are visible in the chat interface, the agent isn't using this information to answer your original query. This is because once the agent generates a tool call, it has technically completed its generation. -To solve this, you can enable multi-step tool calls using `stopWhen`. This feature will automatically send tool results back to the model to trigger an additional generation until the stopping condition you define is met. In this case, you want the model to answer your question using the results from the weather tool. +To solve this, you can enable multi-step tool calls using `stopWhen`. This feature will automatically send tool results back to the agent to trigger an additional generation until the stopping condition you define is met. In this case, you want the agent to answer your question using the results from the weather tool. ### Update Your Application Modify your `index.ts` file to configure stopping conditions with `stopWhen`: -```ts filename="index.ts" highlight="39-42" -import { openai } from '@ai-sdk/openai'; +```ts filename="index.ts" highlight="38-41" import { ModelMessage, streamText, tool, stepCountIs } from 'ai'; import 'dotenv/config'; import { z } from 'zod'; @@ -309,7 +356,7 @@ async function main() { messages.push({ role: 'user', content: userInput }); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages, tools: { weather: tool({ @@ -353,19 +400,18 @@ main().catch(console.error); In this updated code: -1. You set `stopWhen` to be when `stepCountIs` 5, allowing the model to use up to 5 "steps" for any given generation. -2. You add an `onStepFinish` callback to log any `toolResults` from each step of the interaction, helping you understand the model's tool usage. This means we can also delete the `toolCall` and `toolResult` `console.log` statements from the previous example. +1. You set `stopWhen` to be when `stepCountIs` 5, allowing the agent to use up to 5 "steps" for any given generation. +2. 
You add an `onStepFinish` callback to log any `toolResults` from each step of the interaction, helping you understand the agent's tool usage. This means we can also delete the `toolCall` and `toolResult` `console.log` statements from the previous example. -Now, when you ask about the weather in a location, you should see the model using the weather tool results to answer your question. +Now, when you ask about the weather in a location, you should see the agent using the weather tool results to answer your question. -By setting `stopWhen: stepCountIs(5)`, you're allowing the model to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the model to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Celsius to Fahrenheit. +By setting `stopWhen: stepCountIs(5)`, you're allowing the agent to use up to 5 "steps" for any given generation. This enables more complex interactions and allows the agent to gather and process information over several steps if needed. You can see this in action by adding another tool to convert the temperature from Celsius to Fahrenheit. ### Adding a second tool Update your `index.ts` file to add a new tool to convert the temperature from Celsius to Fahrenheit: -```ts filename="index.ts" highlight="38-49" -import { openai } from '@ai-sdk/openai'; +```ts filename="index.ts" highlight="37-48" import { ModelMessage, streamText, tool, stepCountIs } from 'ai'; import 'dotenv/config'; import { z } from 'zod'; @@ -385,7 +431,7 @@ async function main() { messages.push({ role: 'user', content: userInput }); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages, tools: { weather: tool({ @@ -443,18 +489,18 @@ main().catch(console.error); Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction: -1. The model will call the weather tool for New York. +1. The agent will call the weather tool for New York. 2. You'll see the tool result logged. 3. It will then call the temperature conversion tool to convert the temperature from Fahrenheit to Celsius. -4. The model will then use that information to provide a natural language response about the weather in New York. +4. The agent will then use that information to provide a natural language response about the weather in New York. -This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful. +This multi-step approach allows the agent to gather information and use it to provide more accurate and contextual responses, making your agent considerably more useful. -This example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time and perform actions that interact with the outside world. Tools bridge the gap between the model's knowledge cutoff and current information, while also enabling it to take meaningful actions beyond just generating text responses. +This example demonstrates how tools can expand your agent's capabilities. 
You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the agent to access and process real-world data in real-time and perform actions that interact with the outside world. Tools bridge the gap between the agent's knowledge cutoff and current information, while also enabling it to take meaningful actions beyond just generating text responses. ## Where to Next? -You've built an AI chatbot using the AI SDK! From here, you have several paths to explore: +You've built an AI agent using the AI SDK! From here, you have several paths to explore: - To learn more about the AI SDK, read through the [documentation](/docs). - If you're interested in diving deeper with guides, check out the [RAG (retrieval-augmented generation)](/docs/guides/rag-chatbot) and [multi-modal chatbot](/docs/guides/multi-modal-chatbot) guides. diff --git a/content/docs/02-getting-started/07-expo.mdx b/content/docs/02-getting-started/07-expo.mdx index 3b8afeaff6d2..6f09d4db5eab 100644 --- a/content/docs/02-getting-started/07-expo.mdx +++ b/content/docs/02-getting-started/07-expo.mdx @@ -1,11 +1,11 @@ --- title: Expo -description: Welcome to the AI SDK quickstart guide for Expo! +description: Learn how to build your first agent with the AI SDK and Expo. --- # Expo Quickstart -In this quickstart tutorial, you'll build a simple AI-chatbot with a streaming user interface with [Expo](https://expo.dev/). Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects. +In this quickstart tutorial, you'll build a simple agent with a streaming chat user interface with [Expo](https://expo.dev/). Along the way, you'll learn key concepts and techniques that are fundamental to using the SDK in your own projects. If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/prompt-engineering) and [HTTP Streaming](/docs/advanced/why-streaming), you can optionally read these documents first. @@ -14,9 +14,9 @@ If you are unfamiliar with the concepts of [Prompt Engineering](/docs/advanced/p To follow this quickstart, you'll need: - Node.js 18+ and pnpm installed on your local development machine. -- An OpenAI API key. +- A [ Vercel AI Gateway ](https://vercel.com/ai-gateway) API key. -If you haven't obtained your OpenAI API key, you can do so by [signing up](https://platform.openai.com/signup/) on the OpenAI website. +If you haven't obtained your Vercel AI Gateway API key, you can do so by [signing up](https://vercel.com/d?to=%2F%5Bteam%5D%2F%7E%2Fai&title=Go+to+AI+Gateway) on the Vercel website. ## Create Your Application @@ -32,49 +32,49 @@ Navigate to the newly created directory: ### Install dependencies -Install `ai`, `@ai-sdk/react` and `@ai-sdk/openai`, the AI package, the AI React package and AI SDK's [ OpenAI provider ](/providers/ai-sdk-providers/openai) respectively. +Install `ai` and `@ai-sdk/react`, the AI package and AI SDK's React hooks. The AI SDK's [ Vercel AI Gateway provider ](/providers/ai-sdk-providers/ai-gateway) ships with the `ai` package. You'll also install `zod`, a schema validation library used for defining tool inputs. - The AI SDK is designed to be a unified interface to interact with any large - language model. This means that you can change model and providers with just - one line of code! Learn more about [available providers](/providers) and - [building custom providers](/providers/community-providers/custom-providers) - in the [providers](/providers) section. 
+ This guide uses the Vercel AI Gateway provider so you can access hundreds of + models from different providers with one API key, but you can switch to any + provider or model by installing its package. Check out available [AI SDK + providers](/providers/ai-sdk-providers) for more information. +
[package-manager install command tabs, updated for the new dependencies]
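Per the prose above, the updated pnpm tab presumably reads as follows (exact version tags not recoverable from this diff):

```bash
# install the AI SDK, its React hooks, and zod for tool input schemas
pnpm add ai @ai-sdk/react zod
```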
-### Configure OpenAI API key +### Configure your AI Gateway API key -Create a `.env.local` file in your project root and add your OpenAI API Key. This key is used to authenticate your application with the OpenAI service. +Create a `.env.local` file in your project root and add your AI Gateway API key. This key authenticates your application with the Vercel AI Gateway. Edit the `.env.local` file: ```env filename=".env.local" -OPENAI_API_KEY=xxxxxxxxx +AI_GATEWAY_API_KEY=xxxxxxxxx ``` -Replace `xxxxxxxxx` with your actual OpenAI API key. +Replace `xxxxxxxxx` with your actual Vercel AI Gateway API key. - The AI SDK's OpenAI Provider will default to using the `OPENAI_API_KEY` - environment variable. + The AI SDK's Vercel AI Gateway Provider will default to using the + `AI_GATEWAY_API_KEY` environment variable. ## Create an API Route @@ -82,14 +82,13 @@ Replace `xxxxxxxxx` with your actual OpenAI API key. Create a route handler, `app/api/chat+api.ts` and add the following code: ```tsx filename="app/api/chat+api.ts" -import { openai } from '@ai-sdk/openai'; import { streamText, UIMessage, convertToModelMessages } from 'ai'; export async function POST(req: Request) { const { messages }: { messages: UIMessage[] } = await req.json(); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages: convertToModelMessages(messages), }); @@ -105,12 +104,69 @@ export async function POST(req: Request) { Let's take a look at what is happening in this code: 1. Define an asynchronous `POST` request handler and extract `messages` from the body of the request. The `messages` variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation. -2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `@ai-sdk/openai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour. -3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [ `toDataStreamResponse` ](/docs/reference/ai-sdk-core/stream-text#to-data-stream-response) function which converts the result to a streamed response object. +2. Call [`streamText`](/docs/reference/ai-sdk-core/stream-text), which is imported from the `ai` package. This function accepts a configuration object that contains a `model` provider (imported from `ai`) and `messages` (defined in step 1). You can pass additional [settings](/docs/ai-sdk-core/settings) to further customise the model's behaviour. +3. The `streamText` function returns a [`StreamTextResult`](/docs/reference/ai-sdk-core/stream-text#result-object). This result object contains the [ `toUIMessageStreamResponse` ](/docs/reference/ai-sdk-core/stream-text#to-ui-message-stream-response) function which converts the result to a streamed response object. 4. Finally, return the result to the client to stream the response. This API route creates a POST request endpoint at `/api/chat`. +## Choosing a Provider + +The AI SDK supports dozens of model providers through [first-party](/providers/ai-sdk-providers), [OpenAI-compatible](/providers/openai-compatible-providers), and [ community ](/providers/community-providers) packages. 
+ +This quickstart uses the [Vercel AI Gateway](https://vercel.com/ai-gateway) provider, which is the default [global provider](/docs/ai-sdk-core/provider-management#global-provider-configuration). This means you can access models using a simple string in the model configuration: + +```ts +model: 'openai/gpt-5.1'; +``` + +You can also explicitly import and use the gateway provider in two other equivalent ways: + +```ts +// Option 1: Import from 'ai' package (included by default) +import { gateway } from 'ai'; +model: gateway('openai/gpt-5.1'); + +// Option 2: Install and import from '@ai-sdk/gateway' package +import { gateway } from '@ai-sdk/gateway'; +model: gateway('openai/gpt-5.1'); +``` + +### Using other providers + +To use a different provider, install its package and create a provider instance. For example, to use OpenAI directly: + +
[package-manager install command tabs]
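As in the other quickstarts, the added tabs presumably install the provider package imported in the example below:

```bash
# install the standalone OpenAI provider package
pnpm add @ai-sdk/openai
```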
+ +```ts +import { openai } from '@ai-sdk/openai'; + +model: openai('gpt-5.1'); +``` + +#### Updating the global provider + +You can change the default global provider so string model references use your preferred provider everywhere in your application. Learn more about [provider management](/docs/ai-sdk-core/provider-management#global-provider-configuration). + +Pick the approach that best matches how you want to manage providers across your application. + ## Wire up the UI Now that you have an API route that can query an LLM, it's time to setup your frontend. The AI SDK's [ UI ](/docs/ai-sdk-ui) package abstracts the complexity of a chat interface into one hook, [`useChat`](/docs/reference/ai-sdk-ui/use-chat). @@ -258,8 +314,7 @@ Let's enhance your chatbot by adding a simple weather tool. Modify your `app/api/chat+api.ts` file to include the new weather tool: -```tsx filename="app/api/chat+api.ts" highlight="2,8,11,13-27" -import { openai } from '@ai-sdk/openai'; +```tsx filename="app/api/chat+api.ts" highlight="2,11-25" import { streamText, UIMessage, convertToModelMessages, tool } from 'ai'; import { z } from 'zod'; @@ -267,7 +322,7 @@ export async function POST(req: Request) { const { messages }: { messages: UIMessage[] } = await req.json(); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages: convertToModelMessages(messages), tools: { weather: tool({ @@ -416,8 +471,7 @@ To solve this, you can enable multi-step tool calls using `stopWhen`. By default Modify your `app/api/chat+api.ts` file to include the `stopWhen` condition: -```tsx filename="app/api/chat+api.ts" highlight="11" -import { openai } from '@ai-sdk/openai'; +```tsx filename="app/api/chat+api.ts" highlight="10" import { streamText, UIMessage, @@ -431,7 +485,7 @@ export async function POST(req: Request) { const { messages }: { messages: UIMessage[] } = await req.json(); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages: convertToModelMessages(messages), stopWhen: stepCountIs(5), tools: { @@ -473,8 +527,7 @@ By setting `stopWhen: stepCountIs(5)`, you're allowing the model to use up to 5 Update your `app/api/chat+api.ts` file to add a new tool to convert the temperature from Fahrenheit to Celsius: -```tsx filename="app/api/chat+api.ts" highlight="29-42" -import { openai } from '@ai-sdk/openai'; +```tsx filename="app/api/chat+api.ts" highlight="28-41" import { streamText, UIMessage, @@ -488,7 +541,7 @@ export async function POST(req: Request) { const { messages }: { messages: UIMessage[] } = await req.json(); const result = streamText({ - model: openai('gpt-4o'), + model: 'openai/gpt-5.1', messages: convertToModelMessages(messages), stopWhen: stepCountIs(5), tools: {