
Define LangChain Agents and LangGraph Workflow

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Multi-Model Creative Brief Generation with LangChain and GPT-5 Pro


Prompt source

Original prompt text with formatting preserved for inspection.

Initialize your LangChain environment. Define two primary agents: a 'Creative Director' agent powered by GPT-5 Pro and a 'Creative Specialist' agent powered by Claude 4 Sonnet. Construct a LangGraph workflow that orchestrates the collaboration between these agents to generate a music video creative brief. The workflow starts with the Creative Director, delegates detailed sections to the Specialist, and finally consolidates the results.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langgraph.graph import StateGraph, END
from pydantic import BaseModel, Field

# Initialize LLMs
gpt_pro = ChatOpenAI(model="gpt-5-pro", temperature=0.7, openai_api_key="YOUR_OPENAI_API_KEY")
claude_sonnet = ChatAnthropic(model="claude-4-sonnet", temperature=0.7, anthropic_api_key="YOUR_ANTHROPIC_API_KEY")

# Define agent prompts (simplified)
creative_director_prompt = ChatPromptTemplate.from_template("You are a Creative Director. Your task is to conceptualize a music video brief based on the user's input. Delegate detailed content generation to a specialist. User input: {input}")
creative_specialist_prompt = ChatPromptTemplate.from_template("You are a Creative Specialist. Your task is to elaborate on specific sections of a music video brief provided by the Creative Director. Section to elaborate: {section}, Context: {context}")

# Define agent runnables (simplified)
creative_director_agent = creative_director_prompt | gpt_pro
creative_specialist_agent = creative_specialist_prompt | claude_sonnet

# Define LangGraph state
class AgentState(BaseModel):
    input: str
    brief_sections: dict = Field(default_factory=dict)
    current_task: str = ""
    next_node: str = ""

# Build the graph (initial nodes)
graph_builder = StateGraph(AgentState)
graph_builder.add_node("creative_director", creative_director_agent)
graph_builder.add_node("creative_specialist", creative_specialist_agent)

# Define entry point
graph_builder.set_entry_point("creative_director")
# Add conditional edges later
```
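The routing that the trailing comment defers can be sketched as a plain function handed to `add_conditional_edges`. This is a minimal sketch, not part of the original prompt: the section names (`concept`, `visual_style`, `shot_list`) and the `route_after_director` helper are hypothetical placeholders you would replace with your own brief structure.

```python
# Hypothetical section list; the brief's real sections depend on your prompt.
EXPECTED_SECTIONS = ("concept", "visual_style", "shot_list")

def route_after_director(state: dict) -> str:
    """Route to the specialist while sections remain, else finish."""
    done = state.get("brief_sections", {})
    pending = [s for s in EXPECTED_SECTIONS if s not in done]
    return "creative_specialist" if pending else "end"

# Wiring it into the graph would look roughly like:
# graph_builder.add_conditional_edges(
#     "creative_director",
#     route_after_director,
#     {"creative_specialist": "creative_specialist", "end": END},
# )
```

Keeping the router as a pure function over the state dict makes it easy to unit-test the delegation logic before any model call is made.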

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.

Tune next

Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.

Verify after

Check whether the prompt asks for the right evidence, confidence signal, and escalation path.