
Build Multi-LLM Data Analysis and Reporting Agents

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: MCP Server for Enterprise Sustainability Reporting

Format: Code-aware · Lines: 28 · Sections: 5

Prompt source

Original prompt text with formatting preserved for inspection.

28 lines · 5 sections · no variables · 1 code block
Create the 'DataAnalyzer' and 'ReportGenerator' nodes. The 'DataAnalyzer' should use Claude Opus 4.1 for complex interpretation of raw data and potentially call the TorchServe tool. The 'ReportGenerator' should use GPT-4o for structuring and summarizing findings, risks, and recommendations into the final report format. Design clear communication protocols between these agents in your LangGraph workflow.

```python
from typing import List, Tuple, TypedDict

from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

# Shared state schema passed between nodes in the LangGraph workflow
class AgentState(TypedDict, total=False):
    raw_data: str
    metrics: dict
    risks: List[str]
    recommendations: List[str]
    compliance_report: dict
    messages: List[Tuple[str, str]]

# Initialize LLMs (replace the placeholder keys, or set the provider env vars instead)
claude = ChatAnthropic(model="claude-opus-4-1-20250805", temperature=0.2, anthropic_api_key="YOUR_ANTHROPIC_API_KEY")
gpt_4o = ChatOpenAI(model="gpt-4o", temperature=0.2, openai_api_key="YOUR_OPENAI_API_KEY")

# Data Analyzer node (simplified)
def data_analyzer_node(state: AgentState):
    print("---DATA ANALYZER---")
    raw_data = state["raw_data"]
    # Use Claude Opus 4.1 to analyze raw_data and identify patterns/metrics
    analysis_prompt = f"Analyze this raw operational data for sustainability metrics: {raw_data}. Identify key consumption figures (water, energy)."
    analysis_result = claude.invoke(analysis_prompt).content
    # Call the TorchServe tool here if needed, e.g., for anomaly detection
    return {"metrics": {"water": 1150.5, "energy": 9500.2}, "messages": [("ai", analysis_result)]}

# Report Generator node (simplified)
def report_generator_node(state: AgentState):
    print("---REPORT GENERATOR---")
    metrics = state["metrics"]
    risks = state["risks"]
    recs = state["recommendations"]
    # Use GPT-4o to compile a structured report
    report_prompt = f"Generate a detailed sustainability report summary based on metrics: {metrics}, identified risks: {risks}, and recommendations: {recs}."
    report_summary = gpt_4o.invoke(report_prompt).content
    return {"compliance_report": {"report_summary": report_summary}, "messages": [("ai", report_summary)]}
```
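The prompt asks for clear communication protocols between the two agents, but the snippet stops at the node functions. The protocol reduces to a shared-state contract: each node reads and writes agreed-upon keys. A minimal dependency-free sketch of that contract, using stand-in node functions instead of live LLM calls (`PipelineState`, `analyzer_stub`, `reporter_stub`, and `run_pipeline` are illustrative names, not from the original):

```python
from typing import List, Tuple, TypedDict

class PipelineState(TypedDict, total=False):
    raw_data: str
    metrics: dict
    compliance_report: dict
    messages: List[Tuple[str, str]]

def analyzer_stub(state: PipelineState) -> PipelineState:
    # Stand-in for the Claude-backed DataAnalyzer: writes metrics into shared state.
    state["metrics"] = {"water": 1150.5, "energy": 9500.2}
    state.setdefault("messages", []).append(("ai", "analysis complete"))
    return state

def reporter_stub(state: PipelineState) -> PipelineState:
    # Stand-in for the GPT-4o-backed ReportGenerator: reads metrics, writes the report.
    summary = f"Water: {state['metrics']['water']}, Energy: {state['metrics']['energy']}"
    state["compliance_report"] = {"report_summary": summary}
    return state

def run_pipeline(raw_data: str) -> PipelineState:
    # The "protocol" is simply the agreed key set each node reads and writes.
    state: PipelineState = {"raw_data": raw_data}
    return reporter_stub(analyzer_stub(state))
```

In a LangGraph deployment, these stubs become the graph's nodes and the key set becomes the graph's state schema; the same contract makes either version testable.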

Adaptation plan

Keep the source prompt stable at first, then change it in a predictable order so each new run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.
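One way to hold the output shape stable is a small contract check run against each generated implementation's final state. A sketch, assuming the key names from the snippet above; `check_output_shape` and the required-key sets are illustrative:

```python
# Hypothetical contract: required keys mirror the node outputs in the prompt source.
REQUIRED_STATE_KEYS = {"metrics", "compliance_report", "messages"}
REQUIRED_REPORT_KEYS = {"report_summary"}

def check_output_shape(state: dict) -> list:
    """Return a list of contract violations; an empty list means the shape is stable."""
    problems = []
    for key in REQUIRED_STATE_KEYS:
        if key not in state:
            problems.append(f"missing state key: {key}")
    report = state.get("compliance_report", {})
    for key in REQUIRED_REPORT_KEYS:
        if key not in report:
            problems.append(f"missing report key: {key}")
    return problems
```

Running this check after every adaptation keeps runs comparable even as the prompt wording drifts.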

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.
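For environment assumptions, the usual first change is moving credentials out of source and into the environment. A standard-library-only sketch; `get_required_env` is an illustrative helper, and the variable name follows the common provider convention (verify against your stack):

```python
import os

def get_required_env(name: str) -> str:
    """Fetch a required setting, failing fast with a clear message if it is absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Usage (set the variable outside source control, e.g. in your shell or secret manager):
# claude = ChatAnthropic(..., anthropic_api_key=get_required_env("ANTHROPIC_API_KEY"))
```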

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
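Failure handling can be exercised without live model calls by stubbing a node and asserting on its behavior when the shared state is incomplete. A pytest-style sketch; the stub and test names are illustrative:

```python
def report_generator_stub(state: dict) -> dict:
    """Mimics the report node's state reads, so missing keys surface as KeyError."""
    metrics = state["metrics"]
    risks = state["risks"]
    recs = state["recommendations"]
    return {"compliance_report": {"report_summary": f"{metrics} {risks} {recs}"}}

def test_missing_risks_raises():
    # An upstream node that forgot to populate "risks" should fail loudly here.
    incomplete = {"metrics": {"water": 1.0}, "recommendations": []}
    try:
        report_generator_stub(incomplete)
    except KeyError as exc:
        assert "risks" in str(exc)
    else:
        raise AssertionError("expected KeyError for missing 'risks'")
```

The same pattern extends to tool failures: wrap the TorchServe call in a stub that raises, and assert the workflow surfaces or retries the error as intended.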