
LangGraph State and Node Definition

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: MCP Server for Enterprise Sustainability Reporting


Prompt source

Original prompt text with formatting preserved for inspection.

Define the LangGraph state for your sustainability reporting system, including fields for raw data, processed metrics, compliance status, risks, and recommendations. Then, define the initial nodes for your graph: a 'DataReader' (to interface with MCP), a 'DataAnalyzer' (using Claude Opus 4.1), and a 'RiskAssessor'.

```python
from typing import TypedDict, List, Dict, Any
from langchain_core.messages import AIMessage, BaseMessage
from langgraph.graph import StateGraph, START, END

# Define the graph state
class AgentState(TypedDict):
    raw_data: List[Dict[str, Any]]
    metrics: Dict[str, float]
    compliance_report: Dict[str, Any]
    risks: List[str]
    recommendations: List[str]
    messages: List[BaseMessage]

# Define nodes (functions)
def data_reader(state: AgentState):
    print("---DATA READER---")
    # Simulate MCP data access.
    # In a real scenario, this would involve calling an MCP client with 'mcp_token'.
    simulated_raw_data = [
        {"source": "iot_sensors_factoryA", "timestamp": "2024-03-01", "water_usage": 120.5, "energy_usage": 1000},
        {"source": "iot_sensors_factoryA", "timestamp": "2024-03-05", "water_usage": 130.0, "energy_usage": 1100},
        # ... more simulated data
    ]
    # Return a proper BaseMessage so the update matches the AgentState annotation.
    return {
        "raw_data": simulated_raw_data,
        "messages": [AIMessage(content="Data fetched via MCP simulation.")],
    }

# Define other nodes like data_analyzer, risk_assessor, report_generator
# and wire them into the graph.
```

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.

Tune next

Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.

Verify after

Check whether the prompt asks for the right evidence, confidence signal, and escalation path.