
Implementing Multi-LLM Provider Switching

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Local Multi-LLM Chat Agent

Format: Code-aware
Lines: 31
Sections: 10

Prompt source

Original prompt text with formatting preserved for inspection.

31 lines · 10 sections · no variables · 1 code block
Modify `api/chat/route.ts` to switch dynamically between OpenAI's `gpt-3.5-turbo` and Claude Sonnet 4 based on a keyword prefix in the user's message (e.g. 'Use Claude:'). Ensure the client-side UI reflects which model is currently in use. You will need to import `createAnthropic` from `@ai-sdk/anthropic` and set up its API key.

```typescript
// api/chat/route.ts
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic'; // Add this
import { streamText } from 'ai';

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY, // Add this
});

export async function POST(req: Request) {
  const { messages } = await req.json();
  const lastUserMessage = messages[messages.length - 1]?.content || '';

  const useClaude = lastUserMessage.startsWith('Use Claude:');

  // Pick the model; the id below is the Claude Sonnet 4 model string the prompt asks for.
  const modelToUse = useClaude
    ? anthropic('claude-sonnet-4-20250514')
    : openai('gpt-3.5-turbo');
  const modelName = useClaude ? 'Claude' : 'OpenAI';

  if (useClaude) {
    // Strip the directive prefix before sending the message to the model.
    messages[messages.length - 1].content = lastUserMessage.replace('Use Claude:', '').trim();
  }

  const result = await streamText({
    model: modelToUse,
    messages,
  });

  // One way to pass modelName back for display: attach it as a response header
  // that the client can read from the fetch Response.
  return result.toDataStreamResponse({
    headers: { 'x-model-name': modelName },
  });
}
```
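The keyword routing above can be isolated into a small pure helper, which makes the switching rule easy to unit-test without touching the route handler. The names below are illustrative, not part of the original prompt:

```typescript
// Illustrative helper: decide which provider a message is routed to
// and strip the directive prefix before it is sent to the model.
type Route = { provider: 'openai' | 'anthropic'; text: string };

function parseDirective(message: string): Route {
  const prefix = 'Use Claude:';
  if (message.startsWith(prefix)) {
    return { provider: 'anthropic', text: message.slice(prefix.length).trim() };
  }
  return { provider: 'openai', text: message };
}
```

In the route handler, a single `parseDirective(lastUserMessage)` call would replace both the `startsWith` check and the in-place message mutation.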

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.
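Part of matching environment assumptions is failing fast when a provider key is absent, rather than surfacing an opaque error mid-request. A minimal sketch, assuming keys are read from `process.env` (the helper name is hypothetical):

```typescript
// Hypothetical helper: return the names of required variables missing from an env map.
// An empty string counts as missing, since an empty API key is never valid.
function missingEnv(
  env: Record<string, string | undefined>,
  required: string[],
): string[] {
  return required.filter((name) => !env[name]);
}

// At startup, for example:
// const missing = missingEnv(process.env, ['OPENAI_API_KEY', 'ANTHROPIC_API_KEY']);
// if (missing.length > 0) throw new Error(`Missing env vars: ${missing.join(', ')}`);
```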

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
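One edge case worth exercising is a provider outage. A sketch of ordered fallback, assuming each provider is wrapped as an async function that throws on failure (the wrapper shape is an assumption for illustration, not part of the AI SDK):

```typescript
// Try each provider in order; return the first success, rethrow the last failure.
type Generate = (prompt: string) => Promise<string>;

async function generateWithFallback(
  providers: Generate[],
  prompt: string,
): Promise<string> {
  let lastError: unknown = new Error('no providers configured');
  for (const generate of providers) {
    try {
      return await generate(prompt);
    } catch (err) {
      lastError = err; // remember the failure and move on to the next provider
    }
  }
  throw lastError;
}
```

A test can then simulate an outage by passing a provider stub that always throws, followed by one that succeeds.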