Gemini 2.5 Pro-powered Remediation Planning Node
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Cyberthreat Orchestrator Agent
Format
Code-aware
Lines
13
Sections
6
Prompt source
Original prompt text with formatting preserved for inspection.
No variables
1 code block
Implement the 'Remediation Planning' node within your LangGraph. This node should receive the classified threat type and severity from the 'Threat Analysis' node's state. Use Gemini 2.5 Pro to generate a detailed, actionable remediation plan. Your prompt to Gemini 2.5 Pro should instruct it to output the plan as a list of sequential steps. Include the Python code for this LangGraph node and its interaction with Gemini 2.5 Pro.
```python
from typing import List, TypedDict
from langchain_google_genai import ChatGoogleGenerativeAI

class AgentState(TypedDict):
    threat_type: str
    severity: str
    remediation_plan: List[str]

def remediation_planner(state: AgentState) -> AgentState:
    threat_type = state["threat_type"]
    severity = state["severity"]
    llm = ChatGoogleGenerativeAI(model="gemini-2.5-pro", temperature=0.7)
    prompt = f"Given a {severity} level {threat_type} threat, generate a sequential, actionable remediation plan. Output as a numbered list of steps.\nThreat: {threat_type}\nSeverity: {severity}\nRemediation Plan:"
    response = llm.invoke(prompt)
    # Parse the numbered list into one step per entry, dropping blank lines.
    steps = [line.strip() for line in response.content.splitlines() if line.strip()]
    state["remediation_plan"] = steps
    return state
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Hold the task contract and output shape stable so generated implementations remain comparable.
Tune next
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Verify after
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
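One way to exercise the node's parsing and state handling without API keys or network access is to stub the LLM. The `FakeLLM` class and its hardcoded plan text below are illustrative assumptions, not part of the original prompt; they only mimic the `.invoke()` → `.content` shape the node depends on.

```python
class FakeResponse:
    """Minimal stand-in for the message object returned by llm.invoke()."""
    def __init__(self, content):
        self.content = content

class FakeLLM:
    """Deterministic stub so the node logic can be tested offline."""
    def invoke(self, prompt):
        return FakeResponse("1. Isolate the affected host.\n\n2. Rotate exposed credentials.")

def remediation_planner(state, llm):
    response = llm.invoke(f"Threat: {state['threat_type']}\nSeverity: {state['severity']}")
    # Same parsing as the real node: one step per non-blank line.
    state["remediation_plan"] = [ln.strip() for ln in response.content.splitlines() if ln.strip()]
    return state

result = remediation_planner({"threat_type": "phishing", "severity": "high"}, FakeLLM())
```

With the stubbed response, `result["remediation_plan"]` holds two steps and the blank line is dropped, which is exactly the edge case the "Verify after" step is meant to catch.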