Design LangGraph Agent Workflow for Content Review
Category: planning
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Editorial Compliance & Content Neutrality System Agent
Format: Code-aware
Lines: 24
Sections: 1
Prompt source
Original prompt text with formatting preserved for inspection.
24 lines, 1 section, no variables, 1 code block
Using LangGraph, design a stateful multi-agent workflow for autonomous content review. Define four key nodes: 'Content Ingester', 'Bias Analyzer', 'Fact Checker', and 'Compliance Reporter'. The 'Bias Analyzer' node should primarily use GPT-5 Pro for nuanced language interpretation and bias detection. The 'Fact Checker' node should leverage Gemini 3 Flash for quick factual verification, potentially using external tool calls. The 'Compliance Reporter' aggregates findings and generates a final compliance report. Define the edges for sequential processing and conditional routing based on detected issues (e.g., if high bias is detected, route to a human review node; if factual errors, re-route for a second check).
```python
from langgraph.graph import StateGraph, START, END
from langchain_core.messages import BaseMessage
from typing import TypedDict, Annotated, List, Union

# Define the state for the LangGraph workflow
class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], lambda x, y: x + y]  # Conversation history
    content_to_review: str          # The article text being reviewed
    bias_analysis: str              # Output from Bias Analyzer
    fact_check_results: List[dict]  # Output from Fact Checker
    compliance_report: str          # Final report
    has_issues: bool                # Flag for conditional routing

# Initialize the LangGraph
graph_builder = StateGraph(AgentState)

# Define placeholder nodes (these will be implemented as functions or LangChain Runnables)
# def content_ingester_node(state: AgentState): ...
# def bias_analyzer_node(state: AgentState): ...   # Will use GPT-5 Pro
# def fact_checker_node(state: AgentState): ...    # Will use Gemini 3 Flash
# def compliance_reporter_node(state: AgentState): ...

# Add nodes to the graph
# graph_builder.add_node("ingester", content_ingester_node)
# graph_builder.add_node("bias_analyzer", bias_analyzer_node)
# graph_builder.add_node("fact_checker", fact_checker_node)
# graph_builder.add_node("reporter", compliance_reporter_node)

# Set entry point
# graph_builder.set_entry_point("ingester")

# Add edges
# graph_builder.add_edge("ingester", "bias_analyzer")

# Define conditional edge (example: based on 'has_issues' flag from bias analyzer)
# graph_builder.add_conditional_edges(
#     "bias_analyzer",
#     lambda state: "fact_checker" if not state.get("has_issues", False) else "human_review_escalation",
#     {"fact_checker": "fact_checker", "human_review_escalation": "human_review_node"},
# )
# ... continue defining edges ...
# graph_builder.set_finish_point("reporter")

# Compile the graph
# app = graph_builder.compile()
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable: Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.
Tune next: Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
Verify after: Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
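Before wiring real models into the graph, it can help to exercise the routing logic on its own. The sketch below simulates the four nodes and the conditional edge in plain Python, with no LangGraph dependency; the node functions, the keyword-based bias heuristic, and the report strings are all illustrative assumptions standing in for the GPT-5 Pro and Gemini 3 Flash calls the prompt describes.

```python
from typing import Callable, Dict, List, TypedDict


class AgentState(TypedDict, total=False):
    content_to_review: str
    bias_analysis: str
    fact_check_results: List[dict]
    compliance_report: str
    has_issues: bool


def content_ingester(state: AgentState) -> AgentState:
    # Normalize the raw article text before analysis.
    state["content_to_review"] = state["content_to_review"].strip()
    return state


def bias_analyzer(state: AgentState) -> AgentState:
    # Toy heuristic standing in for a GPT-5 Pro call: flag absolutist wording.
    flagged = "always" in state["content_to_review"].lower()
    state["bias_analysis"] = "high bias" if flagged else "neutral"
    state["has_issues"] = flagged
    return state


def fact_checker(state: AgentState) -> AgentState:
    # Stub standing in for a Gemini 3 Flash call with external tool use.
    state["fact_check_results"] = [{"claim": state["content_to_review"], "verified": True}]
    return state


def compliance_reporter(state: AgentState) -> AgentState:
    # Aggregate findings into the final report string.
    state["compliance_report"] = f"bias={state['bias_analysis']}"
    return state


def route_after_bias(state: AgentState) -> str:
    # Mirrors the conditional edge: escalate to a human on high bias.
    return "human_review" if state.get("has_issues", False) else "fact_checker"


NODES: Dict[str, Callable[[AgentState], AgentState]] = {
    "ingester": content_ingester,
    "bias_analyzer": bias_analyzer,
    "fact_checker": fact_checker,
    "reporter": compliance_reporter,
}


def run(text: str) -> AgentState:
    # Hand-rolled traversal: ingest, analyze, then branch on the routing decision.
    state: AgentState = {"content_to_review": text}
    state = NODES["ingester"](state)
    state = NODES["bias_analyzer"](state)
    if route_after_bias(state) == "human_review":
        state["compliance_report"] = "escalated to human review"
        return state
    state = NODES["fact_checker"](state)
    return NODES["reporter"](state)
```

Swapping each stub for the corresponding model call, and the `run` loop for a compiled `StateGraph`, recovers the structure the prompt asks for while keeping the branch condition testable in isolation.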