
Orchestrate Agent Conversation and Evaluation

Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.

Linked challenge: Multi-Agent Ad Policy Auditor

Format: Code-aware · Tag: testing

Prompt source

Original prompt text with formatting preserved for inspection.

9 lines · 4 sections · no variables · 1 code block
Orchestrate the AutoGen agents' conversation flow, ensuring they communicate effectively to process an ad strategy from topic analysis to privacy audit and fact-checking. Set up a simple execution loop and prepare the final output in the required JSON format for evaluation.

```python
import autogen

# Example of initiating a group chat. Assumes the four agents and the
# llm_config were created in earlier steps of the challenge.
groupchat = autogen.GroupChat(
    agents=[user_topic_analyzer, ad_strategist, privacy_auditor, fact_checker],
    messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(
    manager,
    message="Analyze user data for ad strategy and audit: "
            "{'user_topics': ..., 'ad_proposal': ..., 'privacy_policy': ...}")

# Ensure the final output is in the specified JSON format after the agents
# complete their tasks; a final agent or the user_proxy can format the results.
```
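
The prompt leaves the final formatting step open. Below is a minimal sketch of one option, assuming the object returned by initiate_chat exposes a chat_history list of message dicts (true of recent pyautogen releases) and a hypothetical output schema; the key names are stand-ins, not part of the challenge spec.

```python
import json

# Hypothetical schema keys; the real challenge may name these differently.
REQUIRED_KEYS = {"strategy", "privacy_findings", "fact_check"}

def extract_final_json(chat_result) -> dict:
    """Parse the last chat message as JSON and verify the expected keys."""
    last_content = chat_result.chat_history[-1]["content"]
    payload = json.loads(last_content)  # raises JSONDecodeError if malformed
    if not isinstance(payload, dict):
        raise ValueError("final output must be a JSON object")
    missing = REQUIRED_KEYS - payload.keys()
    if missing:
        raise ValueError(f"final output missing keys: {sorted(missing)}")
    return payload
```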

Adaptation plan

Keep the source prompt stable as a baseline, then make changes in a predictable order so the next run is easier to evaluate.

Keep stable

Preserve the rubric, target behavior, and pass-fail criteria as the baseline for evaluation.
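
As a concrete sketch, the pass-fail criteria can live in one helper that adaptation rounds never edit; the key names below are hypothetical stand-ins for the challenge's real rubric.

```python
def assert_meets_rubric(output: dict) -> None:
    """Baseline checks: tune fixtures and thresholds elsewhere, never these."""
    assert isinstance(output, dict), "final output must be a JSON object"
    assert output.get("privacy_findings") is not None, "privacy audit section missing"
    assert output.get("fact_check", {}).get("verdict") in {"pass", "fail"}, \
        "fact check must yield an explicit verdict"
```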

Tune next

Adjust fixtures, mocks, and thresholds to the system under test instead of weakening the assertions.
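
For example, the tunable knobs and fixtures can sit in one place, clearly separated from the rubric; every value here is illustrative.

```python
# Knobs to adjust when a run fails for mechanical reasons; the rubric
# checks themselves stay untouched. All values are illustrative.
TUNABLE = {
    "max_round": 10,      # raise if agents need more turns to converge
    "temperature": 0.2,   # lower for more deterministic audit output
}

FIXTURE = {
    "user_topics": ["hiking", "wearables"],
    "ad_proposal": "Smartwatch ad aimed at fitness enthusiasts",
    "privacy_policy": "No use of inferred health data",
}
```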

Verify after

Make sure the prompt catches regressions instead of just mirroring the happy-path examples.
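
One way to verify that: feed the pipeline a fixture containing a known policy violation and assert it gets flagged rather than waved through. The run_audit name is a hypothetical stand-in for the orchestration entry point, injected here pytest-style.

```python
# Regression probe: a deliberate violation must surface findings, not a
# clean pass. All names and values are illustrative.
VIOLATING_FIXTURE = {
    "user_topics": ["sleep tracking"],
    "ad_proposal": "Target users flagged as insomniacs",  # inferred health data
    "privacy_policy": "No use of inferred health data",
}

def test_catches_known_violation(run_audit):
    """`run_audit` is the orchestration entry point, supplied as a fixture."""
    output = run_audit(VIOLATING_FIXTURE)
    assert output["privacy_findings"], "known violation must surface findings"
```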