implementation
Develop Cloud Optimizer Agent with Mock Cloud API and Zapier Integration
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: AI Patent Analysis & Cloud Optimization Agents
Format
Code-aware
Lines
19
Sections
5
Prompt source
Original prompt text with formatting preserved for inspection.
No variables
1 code block
Implement the 'Cloud Optimizer Agent' using the Claude Agents SDK. This agent should:
1. Accept user requests for cloud cost optimization for an AI/ML workload.
2. Use Claude Opus 4.1 to analyze a simulated cloud cost report (provide a sample in the prompt context).
3. Propose concrete cost-saving recommendations (e.g., spot instances, reserved instances, S3 lifecycle policies).
4. Include a custom tool, `send_optimization_report_via_zapier(report_summary: str)`, that when invoked simulates triggering an external Zapier workflow to send an email notification or create a task. Implement the mock `send_optimization_report_via_zapier` function.
```python
from anthropic.agents import AnthropicAgent, Tool

def send_optimization_report_via_zapier(report_summary: str) -> str:
    # Simulate Zapier webhook call or API interaction
    print(f"Triggering Zapier with report: {report_summary[:50]}...")
    return "Optimization report sent via Zapier."

zapier_tool = Tool(
    name="send_optimization_report",
    description="Sends an optimization report via a Zapier workflow.",
    input_schema={
        "type": "object",
        "properties": {"report_summary": {"type": "string"}},
        "required": ["report_summary"],
    },
    function=send_optimization_report_via_zapier,
)

# Your agent definition will incorporate this tool and logic for analysis.
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
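One way to hold the output shape steady is to fix the simulated cost report and the recommendation format up front. A runnable sketch, with all resource names, prices, and thresholds invented for illustration:

```python
# Hypothetical sample cost report; resource names, prices, and
# utilization thresholds are invented for illustration only.
SAMPLE_COST_REPORT = {
    "compute": [
        {"name": "ml-training-gpu", "type": "on_demand",
         "monthly_usd": 4200, "avg_utilization": 0.35},
        {"name": "inference-api", "type": "on_demand",
         "monthly_usd": 1800, "avg_utilization": 0.85},
    ],
    "storage": [
        {"bucket": "training-datasets", "monthly_usd": 650,
         "days_since_last_access": 120},
    ],
}

def recommend(report: dict) -> list[str]:
    """Rule-of-thumb recommendations mirroring the prompt's examples."""
    recs = []
    for inst in report["compute"]:
        if inst["avg_utilization"] < 0.5:
            recs.append(f"Move {inst['name']} to spot instances (low utilization).")
        else:
            recs.append(f"Buy reserved instances for {inst['name']} (steady load).")
    for bucket in report["storage"]:
        if bucket["days_since_last_access"] > 90:
            recs.append(f"Add an S3 lifecycle policy to archive {bucket['bucket']}.")
    return recs
```

Pinning a fixture like this lets you diff generated recommendations across runs instead of judging each one from scratch.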
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
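For example, the starter snippet imports from a hypothetical `anthropic.agents` module; with the official Anthropic Python SDK you would instead pass a plain JSON tool schema to the Messages API and dispatch `tool_use` blocks yourself. A simplified sketch of that dispatch layer (no API call is made here):

```python
# Tool schema in the shape the Anthropic Messages API expects;
# dispatch is handled manually rather than via a Tool/AnthropicAgent class.
def send_optimization_report_via_zapier(report_summary: str) -> str:
    # Simulate a Zapier webhook call
    print(f"Triggering Zapier with report: {report_summary[:50]}...")
    return "Optimization report sent via Zapier."

ZAPIER_TOOL = {
    "name": "send_optimization_report",
    "description": "Sends an optimization report via a Zapier workflow.",
    "input_schema": {
        "type": "object",
        "properties": {"report_summary": {"type": "string"}},
        "required": ["report_summary"],
    },
}

TOOL_FUNCTIONS = {"send_optimization_report": send_optimization_report_via_zapier}

def dispatch(tool_name: str, tool_input: dict) -> str:
    """Route a tool_use block from the model to its Python implementation."""
    return TOOL_FUNCTIONS[tool_name](**tool_input)

result = dispatch("send_optimization_report",
                  {"report_summary": "Switch idle GPU nodes to spot."})
```

Keeping the schema and the Python function in one mapping makes it cheap to swap the mock for a real Zapier webhook later.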
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
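The mock Zapier function is a natural place to start: a hypothetical guard (the empty-summary check is invented here) makes its failure path testable without any external service:

```python
def send_optimization_report_via_zapier(report_summary: str) -> str:
    # Mock Zapier trigger; reject empty summaries so the failure path is testable.
    if not report_summary.strip():
        raise ValueError("report_summary must not be empty")
    return "Optimization report sent via Zapier."

# Happy path
assert send_optimization_report_via_zapier(
    "Move training jobs to spot instances."
) == "Optimization report sent via Zapier."

# Failure path: an empty summary should be rejected, not silently "sent"
try:
    send_optimization_report_via_zapier("   ")
except ValueError as exc:
    assert "empty" in str(exc)
```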