The Quick Version#
The Vercel AI SDK gives you React hooks and server utilities for building AI-powered interfaces. It handles streaming, provider abstraction, and UI state so you don’t have to manage SSE connections, token buffering, or loading states yourself.
```bash
npx create-next-app@latest my-ai-app --typescript --tailwind
cd my-ai-app
npm install ai @ai-sdk/openai
```
Create the API route:
```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    system: "You are a helpful coding assistant. Keep answers concise.",
  });

  return result.toDataStreamResponse();
}
```
Create the chat UI:
```tsx
// app/page.tsx
"use client";
import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map((m) => (
          <div key={m.id} className={m.role === "user" ? "text-right" : "text-left"}>
            <span className="inline-block p-3 rounded-lg bg-gray-100 dark:bg-gray-800">
              {m.content}
            </span>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask something..."
          className="flex-1 p-2 border rounded"
          disabled={isLoading}
        />
        <button type="submit" className="px-4 py-2 bg-blue-600 text-white rounded">
          Send
        </button>
      </form>
    </div>
  );
}
```
```bash
OPENAI_API_KEY=sk-xxx npm run dev
```
That gives you a streaming chat interface. Tokens appear as they’re generated, the UI shows loading state, and message history is managed automatically by the useChat hook.
Switching Between Providers#
The AI SDK abstracts the provider layer. Swap OpenAI for Anthropic, Google, or any other supported provider by changing one import:
```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { google } from "@ai-sdk/google";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages, provider } = await req.json();

  // Pick the model based on user selection or routing logic
  const models: Record<string, any> = {
    openai: openai("gpt-4o"),
    anthropic: anthropic("claude-sonnet-4-5-20250929"),
    google: google("gemini-1.5-pro"),
  };
  const model = models[provider] || models.openai;

  const result = streamText({ model, messages });
  return result.toDataStreamResponse();
}
```
```bash
npm install @ai-sdk/anthropic @ai-sdk/google
```
Set the API keys as environment variables:
```bash
OPENAI_API_KEY=sk-xxx
ANTHROPIC_API_KEY=sk-ant-xxx
GOOGLE_GENERATIVE_AI_API_KEY=xxx
```
Tool Calling#
Give the AI tools it can call — search a database, check the weather, run calculations. The SDK handles the tool call loop automatically.
```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    tools: {
      getWeather: tool({
        description: "Get current weather for a city",
        parameters: z.object({
          city: z.string().describe("City name"),
        }),
        execute: async ({ city }) => {
          // Call your actual weather API here
          return { city, temperature: 22, condition: "sunny" };
        },
      }),
      calculate: tool({
        description: "Evaluate a math expression",
        parameters: z.object({
          expression: z.string().describe("Math expression to evaluate"),
        }),
        execute: async ({ expression }) => {
          // Demo only: evaluating model-supplied strings with Function()
          // is unsafe in production; use a real expression parser instead.
          return { result: Function(`return ${expression}`)() };
        },
      }),
    },
    maxSteps: 5, // allow up to 5 tool call rounds
  });

  return result.toDataStreamResponse();
}
```
On the client side, tool results are handled automatically. The useChat hook shows tool invocations and their results in the message stream:
```tsx
// app/page.tsx
"use client";
import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 5,
  });

  return (
    <div className="max-w-2xl mx-auto p-4">
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong>
          {m.content}
          {m.toolInvocations?.map((tool, i) => (
            <pre key={i} className="bg-gray-100 p-2 rounded mt-1 text-sm">
              {/* result only exists once the invocation reaches the "result" state */}
              {tool.toolName}: {tool.state === "result" ? JSON.stringify(tool.result, null, 2) : "running..."}
            </pre>
          ))}
        </div>
      ))}
      <form onSubmit={handleSubmit} className="mt-4 flex gap-2">
        <input value={input} onChange={handleInputChange} className="flex-1 p-2 border rounded" />
        <button type="submit" className="px-4 py-2 bg-blue-600 text-white rounded">Send</button>
      </form>
    </div>
  );
}
```
Generating Structured Data#
Use generateObject when you need structured output instead of chat — for data extraction, form filling, or API responses:
```typescript
// app/api/extract/route.ts
import { openai } from "@ai-sdk/openai";
import { generateObject } from "ai";
import { z } from "zod";

export async function POST(req: Request) {
  const { text } = await req.json();

  const result = await generateObject({
    model: openai("gpt-4o"),
    schema: z.object({
      people: z.array(z.object({
        name: z.string(),
        role: z.string(),
        company: z.string().optional(),
      })),
      dates: z.array(z.string()),
      topics: z.array(z.string()),
    }),
    prompt: `Extract structured data from this text:\n\n${text}`,
  });

  return Response.json(result.object);
}
```
The Zod schema enforces the output structure — the SDK handles retries if the model’s output doesn’t match the schema.
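Since the route returns plain JSON rather than a stream, the client can call it with an ordinary `fetch`. A minimal sketch (the `extractFromText` helper and `Extraction` type are illustrative names, not SDK exports):

```typescript
// Shape mirrors the Zod schema in the route above
type Extraction = {
  people: { name: string; role: string; company?: string }[];
  dates: string[];
  topics: string[];
};

// Illustrative client helper for the /api/extract route
async function extractFromText(text: string): Promise<Extraction> {
  const res = await fetch("/api/extract", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  if (!res.ok) throw new Error(`Extraction failed: ${res.status}`);
  return res.json();
}
```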
useCompletion for Non-Chat Use Cases#
Not everything is a chatbot. Use useCompletion for single-prompt interactions like text generation, summarization, or translation:
```tsx
"use client";
import { useCompletion } from "ai/react";

export default function Summarizer() {
  const { completion, input, handleInputChange, handleSubmit, isLoading } = useCompletion({
    api: "/api/summarize",
  });

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <textarea
          value={input}
          onChange={handleInputChange}
          placeholder="Paste text to summarize..."
          rows={6}
          className="w-full p-2 border rounded"
        />
        <button type="submit" disabled={isLoading}>
          {isLoading ? "Summarizing..." : "Summarize"}
        </button>
      </form>
      {completion && <div className="mt-4 p-4 bg-gray-50 rounded">{completion}</div>}
    </div>
  );
}
```
Common Errors and Fixes#
**Streaming stops after 10 seconds on Vercel**
Vercel’s free tier has a 10-second function timeout. Set `export const maxDuration = 60;` in your route file (requires a Pro plan for >10s). Or use the Edge runtime: `export const runtime = "edge";`.
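Both fixes are one-line exports at the top of the route file, for example:

```typescript
// app/api/chat/route.ts
// Raise the function timeout (in seconds); longer durations may need a paid plan.
export const maxDuration = 60;

// Alternatively, run on the Edge runtime, which streams without that limit:
// export const runtime = "edge";
```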
**useChat doesn’t update in real time**
Make sure the API route returns `result.toDataStreamResponse()`, not a regular JSON response. The data stream format is required for the React hooks to process streaming tokens.
**Type errors with tool parameters**
Install and import `zod` for parameter schemas. The AI SDK uses Zod for runtime validation and TypeScript inference. Without it, tool parameters won’t type-check.
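The quick-start install at the top only adds `ai` and `@ai-sdk/openai`, so install `zod` explicitly:

```bash
npm install zod
```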
**API key exposed in client-side code**
Never import provider SDKs in client components. All LLM calls must go through server-side API routes (`app/api/*/route.ts`). The `useChat` hook calls your API route, which then calls the LLM provider securely.
**Multiple providers have different response formats**
That’s exactly what the AI SDK solves: `streamText` and `generateObject` normalize the response format across all providers, so you don’t need provider-specific parsing.