Implement Observation Agent with Claude Sonnet 4 and BentoML Hook
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Multi-Agent Warehouse Optimization
Format: Code-aware · Lines: 30 · Sections: 5 · Variables: none · Code blocks: 1

Prompt source
Original prompt text with formatting preserved for inspection.
Implement the 'Observation Agent' using Mastra AI. This agent will simulate receiving input from a 'curious AI' camera/drone, which you will model as an inference endpoint deployed on BentoML Cloud (for this prompt, a simple function call will suffice, but conceptualize it as a BentoML service). The agent should use Claude Sonnet 4 to interpret observation data (e.g., 'item detected at location X') and update its internal memory. Integrate a simple tool to 'report_observation' to the Inventory Agent.
```typescript
import { createAgent } from '@mastra-ai/core';

// Simulate a BentoML inference call.
async function callBentoMLInference(imageData: string): Promise<{ detected_item: string; location: string }> {
  // In a real scenario, this would be an HTTP call to your BentoML service.
  console.log(`Simulating BentoML inference for image data: ${imageData}`);
  return { detected_item: 'SKU12345', location: 'Aisle 3, Shelf 5' };
}

const observationAgent = createAgent({
  name: 'observationAgent',
  llm: 'claude-sonnet-4',
  actions: {
    processObservation: async (ctx, imageData: string) => {
      const inferenceResult = await callBentoMLInference(imageData); // Call the simulated BentoML service
      ctx.memory.set('last_observation', inferenceResult); // Update the agent's memory
      // Use the LLM to interpret the observation and decide on a follow-up action.
      const response = await ctx.llm.chat([{
        role: 'user',
        content: `Interpret this observation: Item ${inferenceResult.detected_item} detected at ${inferenceResult.location}. What should I do?`
      }]);
      // Example tool call or message to another agent, based on the LLM's response:
      // await ctx.send('inventoryAgent', { type: 'item_detected', payload: inferenceResult });
      return response.content;
    }
  },
  // ... other configurations like memory providers, etc.
});

// Example usage:
// observationAgent.actions.processObservation('drone_feed_001');
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
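For instance, the simulated `callBentoMLInference` is the main environment assumption to tune: in a deployed setup it would become an HTTP call to your BentoML service. The sketch below shows one way to write that call; the `/detect` endpoint path, request body, and response shape are assumptions, not part of any real BentoML service, so match them to the service you actually deploy. Accepting a fetch-like function as a parameter keeps the client testable with a stub.

```typescript
interface Detection {
  detected_item: string;
  location: string;
}

// A minimal fetch-like signature, so a stub can stand in for the real fetch in tests.
type Fetcher = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string },
) => Promise<{ ok: boolean; status: number; json(): Promise<unknown> }>;

// Hypothetical HTTP client for a BentoML inference service. The endpoint path
// ('/detect') and JSON shapes are assumptions for illustration only.
async function callBentoMLInference(
  imageData: string,
  baseUrl: string,
  fetcher: Fetcher = fetch as unknown as Fetcher,
): Promise<Detection> {
  const res = await fetcher(`${baseUrl}/detect`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: imageData }),
  });
  if (!res.ok) {
    throw new Error(`BentoML inference failed: HTTP ${res.status}`);
  }
  return (await res.json()) as Detection;
}
```

In the agent, only the function body changes; `processObservation` keeps the same call site, which is what keeps the generated implementations comparable across runs.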
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
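As a starting point for that verification, the inference call can be wrapped in a bounded retry so a transient failure of the BentoML endpoint does not silently drop an observation. This is a minimal sketch under assumed policy choices (three attempts, no backoff, rethrow the last error); none of it comes from Mastra or BentoML APIs.

```typescript
// Hypothetical retry wrapper: runs an async operation up to maxAttempts times,
// rethrowing the last error if every attempt fails.
async function withRetries<T>(
  op: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err; // e.g. a network timeout from the inference endpoint
    }
  }
  throw lastError;
}
```

Inside `processObservation`, the call site would become `await withRetries(() => callBentoMLInference(imageData))`, and a test can then exercise both the recovery path and the exhausted-retries path.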