
Orchestrate Agent Collaboration

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: AI Code Audit & Optimization Agent

Format: Text-first
Lines: 1
Sections: 1

Prompt source

Original prompt text with formatting preserved for inspection.

1 line, 1 section, no variables, 0 checklist items
Using the OpenAI Agents SDK (specifically `client.beta.threads.runs.create`), orchestrate a multi-turn conversation where your primary 'Code Lead Agent' calls the `analyze_python_code` tool. Based on the tool's output, design the agent's response to either suggest fixes directly or delegate further analysis to a specialized 'Security Reviewer Agent' or 'Performance Optimizer Agent'. Demonstrate how agents share context and refine their findings based on subsequent tool calls or internal reasoning. Ensure proper `tool_output` handling.
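
One wrinkle worth flagging before adapting: `client.beta.threads.runs.create` is an Assistants API (beta) endpoint in the `openai` Python package, not part of the newer Agents SDK, so the prompt's framing mixes two stacks. The sketch below follows the Assistants API reading, since that is the call the prompt cites. It is a minimal sketch, not a reference implementation: the assistant IDs, the routing heuristic, and the `analyze_python_code` stub are all placeholder assumptions.

```python
import json
import time

from openai import OpenAI

client = OpenAI()

# Hypothetical assistant IDs; create each with client.beta.assistants.create(),
# registering analyze_python_code as a function tool on the Code Lead.
CODE_LEAD_ID = "asst_..."
SECURITY_REVIEWER_ID = "asst_..."
PERFORMANCE_OPTIMIZER_ID = "asst_..."


def analyze_python_code(source: str) -> dict:
    """Placeholder static analysis; a real tool would run linters/profilers."""
    findings = []
    if "eval(" in source:
        findings.append({"kind": "security", "detail": "eval() on untrusted input"})
    if ".append(" in source:
        findings.append({"kind": "performance", "detail": "loop-append; consider a comprehension"})
    return {"findings": findings}


def run_agent(assistant_id: str, thread_id: str):
    """Drive one run to completion, answering tool calls as they arrive."""
    run = client.beta.threads.runs.create(thread_id=thread_id, assistant_id=assistant_id)
    while run.status in ("queued", "in_progress", "requires_action"):
        if run.status == "requires_action":
            outputs = []
            for call in run.required_action.submit_tool_outputs.tool_calls:
                args = json.loads(call.function.arguments)
                result = analyze_python_code(args["source"])
                outputs.append({"tool_call_id": call.id, "output": json.dumps(result)})
            # Proper tool_output handling: every tool_call_id must be answered,
            # or the run stays stuck in requires_action until it expires.
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread_id, run_id=run.id, tool_outputs=outputs
            )
        else:
            time.sleep(1)
            run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run.id)
    return run


# A single thread is the shared context: every agent that runs on it sees
# the lead's findings and each specialist's refinements.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="Audit this snippet:\n\neval(user_input)"
)

run_agent(CODE_LEAD_ID, thread.id)  # the lead calls analyze_python_code here

# Delegate based on the lead's latest reply (a deliberately naive router).
latest = client.beta.threads.messages.list(thread_id=thread.id).data[0]
verdict = latest.content[0].text.value.lower()
if "security" in verdict:
    run_agent(SECURITY_REVIEWER_ID, thread.id)
elif "performance" in verdict:
    run_agent(PERFORMANCE_OPTIMIZER_ID, thread.id)
```

Routing on the message text keeps the example small; in practice you would have the lead emit a structured verdict (for example a JSON field naming the specialist) and dispatch on that.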

Adaptation plan

Keep the source prompt stable as a reference point, then change your working copy in a predictable order so each new run is easy to evaluate against the last.

Keep stable

Hold the task contract and output shape stable so generated implementations remain comparable.

Tune next

Update libraries, interfaces, and environment assumptions to match the stack you actually run.
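
For example, if the stack you actually run is the Agents SDK proper (the `openai-agents` package) rather than the beta Assistants API the prompt's endpoint points at, the lead/specialist split maps onto agents with handoffs. A minimal sketch, assuming that package's `Agent`, `Runner`, and `function_tool` primitives and a toy tool body:

```python
from agents import Agent, Runner, function_tool


@function_tool
def analyze_python_code(source: str) -> str:
    """Toy stand-in for real static analysis."""
    return "security finding: eval() on untrusted input" if "eval(" in source else "no findings"


security_reviewer = Agent(
    name="Security Reviewer Agent",
    instructions="Deep-dive on security findings handed to you and propose fixes.",
)
performance_optimizer = Agent(
    name="Performance Optimizer Agent",
    instructions="Deep-dive on performance findings handed to you and propose fixes.",
)
code_lead = Agent(
    name="Code Lead Agent",
    instructions=(
        "Call analyze_python_code first. Fix trivial issues yourself; "
        "hand off security or performance findings to the matching specialist."
    ),
    tools=[analyze_python_code],
    handoffs=[security_reviewer, performance_optimizer],
)

result = Runner.run_sync(code_lead, "Audit this snippet: eval(user_input)")
print(result.final_output)
```

Handoffs replace the manual thread-routing loop: the SDK carries the conversation state to the specialist for you.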

Verify after

Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
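
One failure path worth scripting a test for, under the Assistants-style loop sketched above (reusing its `json` import and `analyze_python_code` stub): a tool that raises must still answer its `tool_call_id`, otherwise the run stalls in `requires_action`. A hedged sketch:

```python
def safe_tool_output(call) -> dict:
    """Answer a tool call even when the tool itself fails.

    Returning the error as the tool output lets the agent reason about
    the failure instead of leaving the run stuck in requires_action.
    """
    try:
        args = json.loads(call.function.arguments)
        result = analyze_python_code(args["source"])
    except Exception as exc:
        result = {"error": type(exc).__name__, "detail": str(exc)}
    return {"tool_call_id": call.id, "output": json.dumps(result)}
```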