Vercel AI SDK Basic Chat Setup
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Local Multi-LLM Chat Agent
Format: Code-aware · Lines: 40 · Sections: 11 · No variables · 2 code blocks

Prompt source
Original prompt text with formatting preserved for inspection.
Initialize a new Next.js project and integrate the Vercel AI SDK to create a basic streaming chat interface. Configure it to use OpenAI's `gpt-3.5-turbo` (or `o3`). Provide the necessary `api/chat/route.ts` and client-side component code.
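Before the code below can run, the prompt assumes a scaffolded Next.js project with the AI SDK packages installed. A minimal setup sketch, assuming npm and the current `create-next-app` defaults (the project name is illustrative):

```shell
# Scaffold a Next.js app (accept the App Router default when prompted),
# then install the Vercel AI SDK core and its OpenAI provider.
npx create-next-app@latest my-chat-app
cd my-chat-app
npm install ai @ai-sdk/openai
```

The route also expects `OPENAI_API_KEY` to be present in the environment, typically via an `.env.local` file at the project root.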
```typescript
// api/chat/route.ts
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Reads the API key from the OPENAI_API_KEY environment variable.
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-3.5-turbo'),
    messages,
  });

  // Stream the model output back to the useChat client.
  return result.toDataStreamResponse();
}
```
```typescript
// app/page.tsx (or similar client component)
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}><b>{m.role}:</b> {m.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
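For the linked local-LLM challenge, one likely tuning step is pointing the same route at an OpenAI-compatible local server instead of the OpenAI API. A sketch of that swap, assuming Ollama's compatibility endpoint on its default port and an already-pulled `llama3` model (both assumptions, not part of the original prompt):

```typescript
// api/chat/route.ts variant — targets an OpenAI-compatible local server
// (assumed: Ollama serving at http://localhost:11434/v1).
import { createOpenAI } from '@ai-sdk/openai';

const localProvider = createOpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama', // Ollama ignores the key, but the client requires a value.
});

// Drop-in replacement for openai('gpt-3.5-turbo') in the streamText call:
const model = localProvider('llama3');
```

Because only the provider instance changes, the client component and the rest of the route stay identical, which keeps runs comparable across stacks.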
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
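One concrete item to verify is how the route handles malformed request bodies, since the generated code calls `req.json()` and destructures `messages` without any checks. A hypothetical guard for that edge case (the function and type names are illustrative, not part of the AI SDK):

```typescript
// Hypothetical request-body guard for the chat route. Returns the messages
// array when the body has the expected shape, or null otherwise.
type ChatMessage = { role: string; content: string };

function parseMessages(body: unknown): ChatMessage[] | null {
  if (typeof body !== 'object' || body === null) return null;
  const messages = (body as { messages?: unknown }).messages;
  if (!Array.isArray(messages)) return null;
  for (const m of messages) {
    if (
      typeof m !== 'object' || m === null ||
      typeof (m as ChatMessage).role !== 'string' ||
      typeof (m as ChatMessage).content !== 'string'
    ) {
      return null;
    }
  }
  return messages as ChatMessage[];
}
```

In the route, a `null` result would map to a 400 response before `streamText` is ever called, which keeps provider errors and client errors distinguishable.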