Setup AutoGen Environment and Initial Agent Roles
Inspect the original prompt language first, then copy or adapt it once you know how it fits your workflow.
Linked challenge: Multi-Agent System for Automated Audit Evidence Collection
Format: Code-aware · Lines: 12 · Sections: 4
Prompt source
Original prompt text with formatting preserved for inspection.
12 lines · 4 sections · No variables · 1 code block
Set up your Python environment and install AutoGen. Define a 'Researcher' agent and an 'Analyst' agent. Configure them to use a Mistral Large compatible API endpoint. The Researcher should be able to execute web scraping tools (e.g., a dummy Bright Data client function), and the Analyst should be able to process the scraped data.
```python
import autogen

config_list = autogen.config_list_from_json(
    'OAI_CONFIG_LIST',
    filter_dict={
        'model': ['mistral-large']
    }
)

# Define Researcher and Analyst agents here
# ...
```

Adaptation plan
Keep the source stable, then change the prompt in a predictable order so the next run is easier to evaluate.
Keep stable: Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.
Tune next: Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
Verify after: Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
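One way the prompt's agent-definition placeholder could be filled in, shown here as a minimal sketch rather than a definitive implementation. It assumes the `pyautogen` package (`pip install pyautogen`) and uses `autogen.AssistantAgent` and `autogen.register_function`; the endpoint URL, API key, and the `dummy_bright_data_scrape` helper are hypothetical stand-ins, matching the "dummy Bright Data client" the prompt asks for.

```python
def dummy_bright_data_scrape(url: str) -> str:
    """Stand-in for a real Bright Data client call; returns fake page text."""
    return f"<html>scraped content from {url}</html>"

# One entry of the kind config_list_from_json would load from OAI_CONFIG_LIST.
# The base_url and api_key below are placeholders, not real values.
llm_config = {
    "config_list": [
        {
            "model": "mistral-large",
            "base_url": "https://api.example.com/v1",  # hypothetical endpoint
            "api_key": "YOUR_KEY",
        }
    ]
}

try:
    import autogen

    researcher = autogen.AssistantAgent(
        name="Researcher",
        system_message="You gather raw audit evidence by scraping the web.",
        llm_config=llm_config,
    )
    analyst = autogen.AssistantAgent(
        name="Analyst",
        system_message="You process and summarize the scraped data.",
        llm_config=llm_config,
    )
    # Expose the scraping tool to the pair: the Researcher proposes calls,
    # the Analyst executes them and works with the results.
    autogen.register_function(
        dummy_bright_data_scrape,
        caller=researcher,
        executor=analyst,
        description="Scrape a URL via a (dummy) Bright Data client.",
    )
except ImportError:
    # pyautogen not installed; the dummy tool still works on its own.
    researcher = analyst = None

print(dummy_bright_data_scrape("https://example.com"))
```

With the tool registered, a run such as `analyst.initiate_chat(researcher, message="Collect evidence from https://example.com")` would let the pair exchange tool calls, though the exact conversation flow depends on the model behind the endpoint.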