Configure AutoGen Multi-Agent Team for Vehicle Simulation
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: AutoGen Multi-Agent System for Autonomous Vehicle Simulation & Planning
Prompt source
Original prompt text with formatting preserved for inspection.
Your first task is to set up an AutoGen multi-agent team for autonomous vehicle simulation. Define at least three agent roles: `SensorInterpreter`, `PathPlanner`, and `DecisionMaker`. Configure them to use GPT-5 Pro and Claude 4 Sonnet for their respective reasoning tasks. The `SensorInterpreter` should relay raw sensor data to the `PathPlanner`.

```python
import autogen
from autogen import AssistantAgent, UserProxyAgent

# Configure models for AutoGen
config_list_autogen = [
    {"model": "gpt-5-pro", "api_key": "YOUR_OPENAI_KEY"},
    {"model": "claude-4-sonnet", "api_key": "YOUR_ANTHROPIC_KEY"},
]

# Filter the config list per agent so each one binds to a single model
# (a top-level "model" key in llm_config would override every entry,
# sending the wrong model name to the other provider's key).
claude_config = [c for c in config_list_autogen if c["model"] == "claude-4-sonnet"]
gpt_config = [c for c in config_list_autogen if c["model"] == "gpt-5-pro"]

# Initialize agents
sensor_interpreter = AssistantAgent(
    name="SensorInterpreter",
    llm_config={"config_list": claude_config, "temperature": 0.3},
    system_message="You interpret raw sensor data (e.g., lidar, camera) into structured observations about the environment.",
)
path_planner = AssistantAgent(
    name="PathPlanner",
    llm_config={"config_list": gpt_config, "temperature": 0.7},
    system_message="You receive environmental observations and generate optimal, safe driving paths to a destination.",
)
decision_maker = AssistantAgent(
    name="DecisionMaker",
    llm_config={"config_list": claude_config, "temperature": 0.5},
    system_message="You receive planned paths and sensor data, making real-time tactical driving decisions like braking, accelerating, or steering.",
)

# User proxy agent to initiate conversations (simulating human input or environment).
# The (x.get("content") or "") guard keeps the check safe when content is None,
# as it can be for tool-call messages.
user_proxy = UserProxyAgent(
    name="HumanDriver",
    human_input_mode="NEVER",
    is_termination_msg=lambda x: "TERMINATE" in (x.get("content") or ""),
    code_execution_config={"last_n_messages": 3, "work_dir": "coding"},
)

# Define group chat and start conversation
# groupchat = autogen.GroupChat(
#     agents=[sensor_interpreter, path_planner, decision_maker, user_proxy],
#     messages=[],
#     max_round=20,
# )
# manager = autogen.GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list_autogen})
# user_proxy.initiate_chat(manager, message="Initial sensor data: traffic light red at intersection X. Destination: Y.")
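# The relay order SensorInterpreter -> PathPlanner -> DecisionMaker can be made
# explicit rather than left to the manager's speaker selection. One way, assuming
# a pyautogen version that supports constrained speaker transitions (illustrative
# sketch, kept commented out like the group-chat setup above):
# allowed_transitions = {
#     user_proxy: [sensor_interpreter],
#     sensor_interpreter: [path_planner],
#     path_planner: [decision_maker],
#     decision_maker: [user_proxy],
# }
# groupchat = autogen.GroupChat(
#     agents=[sensor_interpreter, path_planner, decision_maker, user_proxy],
#     messages=[],
#     max_round=20,
#     allowed_or_disallowed_speaker_transitions=allowed_transitions,
#     speaker_transitions_type="allowed",
# )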
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable
Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.
Tune next
Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
Verify after
Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
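The prompt's only built-in escalation hook is the `is_termination_msg` predicate on the `HumanDriver` proxy, so that check is worth exercising before a run. A minimal plain-Python sketch (no AutoGen dependency; the sample messages are illustrative) of how you might unit-test it:

```python
# Mirror of the termination predicate from the prompt; message dicts
# mimic AutoGen's {"content": ...} shape.
def is_termination_msg(msg: dict) -> bool:
    """Terminate when 'TERMINATE' appears in the message content."""
    return "TERMINATE" in (msg.get("content") or "")

# A few representative messages a run might produce.
samples = [
    {"content": "Path clear. Proceeding to destination Y."},
    {"content": "Obstacle detected. TERMINATE"},
    {"content": None},  # tool-call messages can carry content=None
]

flags = [is_termination_msg(m) for m in samples]
print(flags)  # expected: [False, True, False]
```

Note the `or ""` guard: without it, a bare `x.get("content", "")` raises a `TypeError` on messages whose `content` key is explicitly `None`, which would abort the run instead of escalating cleanly.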