
Deploy Models on Modal/Cerebrium and Integrate with AutoGen Tools

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: AutoGen Multi-Agent System for Autonomous Vehicle Simulation & Planning

Format: Code-aware
Lines: 7
Sections: 1

Prompt source

Original prompt text with formatting preserved for inspection.

7 lines
1 section
No variables
1 code block
To ensure low-latency inference for the autonomous vehicle agents, deploy your GPT-5 Pro and Claude 4 Sonnet instances (or custom fine-tuned models) on Modal and Cerebrium respectively. Then, create custom AutoGen tools that allow your `PathPlanner` and `DecisionMaker` agents to make API calls to these deployed models for their specific reasoning tasks, rather than relying solely on direct `llm_config`.

```python
# Conceptual Modal deployment client
class ModalGPT5ProClient:
    def plan_path(self, sensor_data: str, current_location: dict, destination: dict) -> str:
        # Simulate API call to Modal-deployed GPT-5 Pro endpoint
        return f"Path from Modal GPT-5 Pro for {sensor_data}"

# Conceptual Cerebrium deployment client
class CerebriumClaude4SonnetClient:
    def make_tactical_decision(self, current_situation: str, path_segment: str) -> str:
        # Simulate API call to Cerebrium-deployed Claude 4 Sonnet endpoint
        return f"Decision from Cerebrium Claude 4 Sonnet for {current_situation}"

# Define AutoGen tools for agents
def modal_path_planning_tool(sensor_data: str, current_location: dict, destination: dict) -> str:
    """Tool for PathPlanner to query Modal-deployed GPT-5 Pro for path plans."""
    return ModalGPT5ProClient().plan_path(sensor_data, current_location, destination)

def cerebrium_tactical_decision_tool(current_situation: str, path_segment: str) -> str:
    """Tool for DecisionMaker to query Cerebrium-deployed Claude 4 Sonnet for tactical decisions."""
    return CerebriumClaude4SonnetClient().make_tactical_decision(current_situation, path_segment)

# Register tools with agents (example for path_planner)
# path_planner.register_for_llm(name="modal_path_planning", description="...")(modal_path_planning_tool)
```
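Because AutoGen ultimately invokes registered tools as plain Python functions with JSON-decoded keyword arguments, the tool layer can be smoke-tested in isolation before any agents or deployed endpoints are wired up. A minimal sketch, re-declaring the prompt's simulated Modal client as a stand-in; the class name, argument values, and return format mirror the conceptual code above, not a real Modal SDK:

```python
# Stand-in for the Modal-deployed client from the conceptual code above;
# a real deployment would issue an HTTPS request to the Modal endpoint.
class ModalGPT5ProClient:
    def plan_path(self, sensor_data: str, current_location: dict, destination: dict) -> str:
        return f"Path from Modal GPT-5 Pro for {sensor_data}"

def modal_path_planning_tool(sensor_data: str, current_location: dict, destination: dict) -> str:
    """Tool for PathPlanner to query the Modal-deployed model for path plans."""
    return ModalGPT5ProClient().plan_path(sensor_data, current_location, destination)

# Exercise the tool exactly as AutoGen's executor would after decoding
# the tool-call arguments from the model's response (values are illustrative).
tool_call_args = {
    "sensor_data": "lidar-frame-42",
    "current_location": {"lat": 37.77, "lon": -122.42},
    "destination": {"lat": 37.79, "lon": -122.40},
}
result = modal_path_planning_tool(**tool_call_args)
# result == "Path from Modal GPT-5 Pro for lidar-frame-42"
```

Testing at this seam keeps the tool contract (parameter names, types, string output) stable even as the client body is swapped from a simulation to a real endpoint call.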

Adaptation plan

Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
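One way to make that failure handling testable is to wrap the deployed-model calls so transient endpoint errors degrade into a retry or a deterministic fallback instead of crashing an agent turn. A minimal standard-library sketch; `call_with_retry` and the flaky stand-in endpoint are hypothetical helpers, not part of AutoGen, Modal, or Cerebrium:

```python
import time

def call_with_retry(fn, *args, retries=3, backoff_s=0.0, fallback=None, **kwargs):
    """Hypothetical wrapper: retry a tool call, then fall back or re-raise.

    In practice the except clause should target the deployment client's
    transport errors rather than bare Exception.
    """
    last_exc = None
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:
            last_exc = exc
            time.sleep(backoff_s * (2 ** attempt))  # simple exponential backoff
    if fallback is not None:
        return fallback
    raise last_exc

# Flaky stand-in for a deployed endpoint: fails twice, then succeeds.
calls = {"n": 0}
def flaky_plan(sensor_data):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient endpoint error")
    return f"Path for {sensor_data}"

result = call_with_retry(flaky_plan, "lidar-frame-42", retries=3)
# result == "Path for lidar-frame-42" after two retried failures
```

Wrapping at this layer also gives you a single place to assert on edge cases (timeouts, malformed responses) without standing up the real Modal or Cerebrium deployments.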