implementation
Integrating Local Tool Functionality
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Local Multi-LLM Chat Agent
Format
Code-aware
Lines
39
Sections
8
Prompt source
Original prompt text with formatting preserved for inspection.
No variables
1 code block
Create a client-side tool (e.g., `getLocalTime` which returns the current local time) and integrate it with your AI SDK agent. The agent should be able to call this tool when appropriate (e.g., if the user asks 'What time is it?'). Show how to define the tool and pass it to `useChat` or `streamText`.
```typescript
// lib/tools.ts (example tool definition)
export async function getLocalTime() {
  return new Date().toLocaleTimeString();
}

export const tools = {
  getLocalTime: {
    description: 'Gets the current local time.',
    parameters: { type: 'object', properties: {} },
    execute: getLocalTime,
  },
};

// app/page.tsx (client component, using useChat with tools)
// ...
import { useChat } from 'ai/react';
import { tools } from '../lib/tools';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: '/api/chat', // Your API route
    tools: tools, // Pass your tools here
  });
  // ... rest of your component
}

// api/chat/route.ts (server-side, to enable tool calling on the model)
// ...
import { experimental_streamText } from 'ai'; // Use experimental for tool calling
// ...
const result = await experimental_streamText({
  model: modelToUse,
  messages,
  tools: {
    getLocalTime: {
      description: 'Gets the current local time.',
      parameters: { type: 'object', properties: {} },
    },
  },
});
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
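One way to hold the output shape stable is to write it down as an explicit type plus a runtime guard, so every generated implementation can be checked against the same contract. A minimal sketch; the `ToolResult` shape here is an illustrative assumption, not part of the original prompt:

```typescript
// A fixed contract for what every tool call must return, so different
// generated implementations stay comparable across runs.
export interface ToolResult {
  tool: string;   // name of the tool that ran
  ok: boolean;    // whether execution succeeded
  output: string; // stringified result or error message
}

// Runtime guard: verifies an unknown value conforms to the contract.
export function isToolResult(value: unknown): value is ToolResult {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.tool === 'string' &&
    typeof v.ok === 'boolean' &&
    typeof v.output === 'string'
  );
}
```

Checking each run's output against one guard like this makes regressions visible even when the surrounding prompt changes.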
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
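Library churn stays contained if the tool definitions live in a plain, SDK-agnostic shape and each SDK gets a thin adapter. A sketch under that assumption (the `LocalTool` interface and `toModelTools` helper are illustrative names, not part of any SDK):

```typescript
// SDK-agnostic tool definition: plain data plus an execute function.
export interface LocalTool {
  description: string;
  parameters: { type: 'object'; properties: Record<string, unknown> };
  execute: () => Promise<string> | string;
}

export const tools: Record<string, LocalTool> = {
  getLocalTime: {
    description: 'Gets the current local time.',
    parameters: { type: 'object', properties: {} },
    execute: () => new Date().toLocaleTimeString(),
  },
};

// Adapter: strip `execute` to get the schema-only shape that a
// server-side model call expects.
export function toModelTools(defs: Record<string, LocalTool>) {
  return Object.fromEntries(
    Object.entries(defs).map(([name, t]) => [
      name,
      { description: t.description, parameters: t.parameters },
    ]),
  );
}
```

When the stack changes, only the adapter function needs rewriting; the tool registry itself is untouched.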
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
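Failure handling is easiest to test when tool dispatch is a small pure function. A hedged sketch of one such dispatcher (the `runTool` name and result shape are assumptions for illustration): it turns unknown tool names and thrown errors into plain error results instead of crashing the chat loop.

```typescript
// Dispatch a tool call by name, converting unknown tools and thrown
// errors into error results rather than uncaught exceptions.
type Tool = { execute: () => Promise<string> | string };

export async function runTool(
  registry: Record<string, Tool>,
  name: string,
): Promise<{ ok: boolean; output: string }> {
  const tool = registry[name];
  if (!tool) {
    return { ok: false, output: `Unknown tool: ${name}` };
  }
  try {
    return { ok: true, output: await tool.execute() };
  } catch (err) {
    return { ok: false, output: `Tool failed: ${String(err)}` };
  }
}
```

The two failure branches (missing tool, throwing `execute`) are exactly the edge cases worth exercising after each adaptation.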