
Define LangGraph State and Agents

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: AI Fluency Index Evaluator with LangGraph and OpenAI o4-mini


Prompt source

Original prompt text with formatting preserved for inspection.

Initialize a LangGraph state with variables for `interaction_log`, `fluency_behaviors_identified`, `coaching_history`, and `user_feedback`. Define three LangChain agents: an `InteractionAgent` using OpenAI o4-mini to receive user input, a `BehaviorAnalyst` agent using Llama 4 Maverick to interpret interaction logs against the AI Fluency Index, and a `FluencyCoach` agent using OpenAI o4-mini to generate feedback. Ensure each agent has appropriate tools defined for its tasks.

```python
from typing import TypedDict, Annotated, List
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langchain_community.llms import LlamaCpp
from langgraph.graph import StateGraph, END

# Define the shared state; the Annotated reducers append new entries
# to each list instead of overwriting it on every node update.
class AgentState(TypedDict):
    interaction_log: Annotated[List[str], lambda x, y: x + y]
    fluency_behaviors_identified: Annotated[List[str], lambda x, y: x + y]
    coaching_history: Annotated[List[str], lambda x, y: x + y]
    user_feedback: str
    fluency_score: float

# Initialize LLMs
o4_mini_llm = ChatOpenAI(model='o4-mini', temperature=0.2)

# Placeholder for Llama 4 Maverick; replace with actual loading if available.
# This LlamaCpp example assumes a local .gguf file.
llama_maverick_llm = LlamaCpp(model_path='/path/to/llama-4-maverick.gguf',
                              temperature=0.1, n_ctx=2048)

# Define agents (simplified for the prompt; a real implementation would be more complex)
def interaction_agent(state: AgentState):
    print(f"Interaction Agent received: {state['user_feedback']}")
    new_log_entry = f"User: {state['user_feedback']}"
    # Logic to process the input and potentially generate an AI response for the next turn
    return {'interaction_log': [new_log_entry]}

def behavior_analyst(state: AgentState):
    current_interaction = state['interaction_log'][-1]
    print(f"Behavior Analyst analyzing: {current_interaction}")
    # Simulate analysis against the AI Fluency Index using llama_maverick_llm
    identified_behaviors = ['Context-setting', 'Iterative clarification']  # Example
    fluency_score = 7.5  # Example
    return {'fluency_behaviors_identified': identified_behaviors,
            'fluency_score': fluency_score}

# Build the graph (simplified)
workflow = StateGraph(AgentState)
workflow.add_node('interaction', interaction_agent)
workflow.add_node('analyze', behavior_analyst)
workflow.add_edge('interaction', 'analyze')
# ... more nodes and edges ...

# Example of adding a conditional edge. Every value the router can return,
# including END, must appear as a key in the mapping:
# workflow.add_conditional_edges(
#     'analyze',
#     lambda state: 'coach' if state['fluency_score'] < 8 else END,
#     {'coach': 'coach_node_name', END: END}
# )
```
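The `Annotated` reducers in the state definition control how LangGraph merges each node's partial return into the shared state: list fields concatenate, plain fields are overwritten. A minimal stdlib sketch of that merge semantics, with a hypothetical `apply_update` helper standing in for LangGraph's internal machinery:

```python
from typing import Annotated, List, TypedDict, get_type_hints

# Same reducer shape as AgentState: lists concatenate, plain fields overwrite.
class DemoState(TypedDict):
    interaction_log: Annotated[List[str], lambda x, y: x + y]
    user_feedback: str

def apply_update(state: dict, update: dict) -> dict:
    """Hypothetical illustration of how a reducer merge behaves."""
    hints = get_type_hints(DemoState, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        metadata = getattr(hints.get(key), '__metadata__', ())
        if metadata:      # a reducer is attached: combine old and new values
            merged[key] = metadata[0](state[key], value)
        else:             # no reducer: last write wins
            merged[key] = value
    return merged

state = {'interaction_log': ['User: hi'], 'user_feedback': ''}
state = apply_update(state, {'interaction_log': ['User: score me'],
                             'user_feedback': 'score me'})
print(state['interaction_log'])  # both log entries are kept
```

This is why `interaction_agent` can return only `{'interaction_log': [new_log_entry]}`: the reducer appends the entry to the existing log rather than replacing it.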

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.

Tune next

Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.

Verify after

Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
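One way to make that verification concrete is a small structural check on each run's output. The field names below are assumptions chosen to mirror the evidence/confidence/escalation framing above, not part of the original prompt:

```python
# Hypothetical post-run check; the required field names are illustrative assumptions.
REQUIRED_FIELDS = ('evidence', 'confidence', 'escalation_path')

def verify_run(response: dict) -> list:
    """Return the required fields that are missing or empty in a run's output."""
    return [f for f in REQUIRED_FIELDS if not response.get(f)]

run = {'evidence': ['log excerpt'], 'confidence': 0.7, 'escalation_path': 'coach'}
missing = verify_run(run)
print(missing)  # an empty list means all fields are present
```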