implementation
Initial AI SDK Setup and Gemini Integration
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Ethical Ad Personalization Agent
Format: Code-aware
Lines: 18
Sections: 7
Prompt source
Original prompt text with formatting preserved for inspection.
No variables
1 code block
Your first task is to initialize your AI SDK project and integrate Gemini 2.5 Pro for multimodal content generation. Set up a basic streaming chat endpoint.
```typescript
import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { streamText } from 'ai';

const google = createGoogleGenerativeAI({ apiKey: process.env.GOOGLE_API_KEY });
const model = google('gemini-2.5-pro');

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model,
    messages,
    // You'll need to add tools for ad generation here later
  });
  return result.toDataStreamResponse();
}
```
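The route reads its credentials from the environment. A minimal local setup might look like the following; the `.env.local` filename is an assumption (it matches Next.js conventions) and should be adjusted for your runtime:

```bash
# .env.local (assumed filename; adjust for your framework)
GOOGLE_API_KEY=your-gemini-api-key
```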
Expand this to include a simple tool for 'generateAdContent' that takes user preferences and a conversational context, and returns a multimodal ad draft (text, image URL).

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable: Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next: Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Verify after: Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
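For the verification step, the 'generateAdContent' expansion the prompt asks for can be sketched first as a pure draft builder, which gives you something concrete to unit-test offline. Everything below is an assumption: the type names, the fields, the tone values, and the placeholder image URL are hypothetical. In the real route, this function would sit behind `streamText`'s `tools` option (for example as the `execute` callback of the AI SDK `tool()` helper, with a zod schema describing the input):

```typescript
// Hypothetical input/output shapes; the real schema would be declared
// with zod inside the AI SDK tool() helper rather than as interfaces.
interface AdPreferences {
  product: string;
  tone: 'playful' | 'formal';
}

interface AdDraft {
  text: string;
  imageUrl: string;
}

// Pure, deterministic stand-in for the model call, so the tool's
// contract (input -> { text, imageUrl }) can be asserted on directly.
function generateAdContent(
  prefs: AdPreferences,
  conversationContext: string,
): AdDraft {
  const opener = prefs.tone === 'playful' ? 'Hey there!' : 'Introducing';
  return {
    text: `${opener} ${prefs.product}: picked for you based on ${conversationContext}`,
    // Placeholder URL scheme; a real implementation would call an
    // image-generation service here.
    imageUrl: `https://example.com/ads/${encodeURIComponent(prefs.product)}.png`,
  };
}
```

Keeping the draft logic pure like this means the edge cases called out above (missing preferences, odd product names in the URL) can be tested without an API key or network access.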