Operator-ready prompt for reuse, tuning, and workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Use it for implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Copy the prompt for quick reuse, or open it in Workspace when you want a live iteration loop that keeps prompt variants, model settings, and prompt-history changes in one place.
The prompt is already linked to a challenge workflow, described at the end of this page.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Develop custom Python tools that AutoGen agents can use to simulate access to internal logs (e.g., reading from a JSON file representing logs), fetch external news articles (e.g., a mock news API), and analyze code (e.g., CodeRabbit-inspired static analysis for unusual commits). Ensure these tools are robust and return structured outputs that o4-mini agents can interpret. Describe how you will integrate these tools into your AutoGen agents. Consider how FLAML could optimize agent workflows that depend on tool outputs.

```python
# Example tool function
def get_access_logs(user_id: str, date_range: tuple) -> str:
    # Simulate fetching logs
    return f"Simulated logs for {user_id} on {date_range}"

# How to register a tool with an agent in AutoGen
# agent_instance.register_for_llm(name="get_access_logs", description="Get access logs for a user.")(get_access_logs)
# Or for UserProxyAgent
# agent_instance.register_function(function_map={"get_access_logs": get_access_logs})
```
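The requirement that tools "return structured outputs that o4-mini agents can interpret" is easiest to satisfy with a JSON envelope. Here is a minimal sketch of that idea; the status/user_id/events schema is an assumption for illustration, not something the prompt prescribes:

```python
import json

def get_access_logs_structured(user_id: str, date_range: tuple) -> str:
    # Hypothetical events; in the real tool these would come from the
    # simulated log source. The schema here is an assumption.
    events = [
        {"timestamp": f"{date_range[0]}T03:12:00Z", "action": "login"},
        {"timestamp": f"{date_range[0]}T03:14:10Z", "action": "bulk_download"},
    ]
    # A JSON string gives the consuming agent a stable shape to parse.
    return json.dumps({"status": "ok", "user_id": user_id, "events": events})
```

Keeping errors in the same envelope (a "status": "error" payload instead of a raised exception) means every agent that consumes the tool has a single parsing path.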
Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Hold the task contract and output shape stable so generated implementations remain comparable.
Update libraries, interfaces, and environment assumptions to match the stack you actually run.
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets; a sketch of that kind of hardening follows this list.
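A minimal hardening sketch, assuming the simulated logs live in a local JSON file (the path, schema, and helper name here are all hypothetical):

```python
import json

LOG_FILE = "simulated_access_logs.json"  # hypothetical path

def get_access_logs_safe(user_id: str, date_range: tuple) -> str:
    # Return structured errors instead of raising, so agent-side parsing
    # has one code path for both success and failure.
    try:
        with open(LOG_FILE) as f:
            logs = json.load(f)
    except FileNotFoundError:
        return json.dumps({"status": "error", "reason": "log file missing"})
    except json.JSONDecodeError:
        return json.dumps({"status": "error", "reason": "log file malformed"})
    start, end = date_range
    events = [
        e for e in logs.get(user_id, [])
        if start <= e.get("timestamp", "") <= end
    ]
    return json.dumps({"status": "ok", "user_id": user_id, "events": events})
```

Edge cases worth testing first: an unknown user_id (should yield empty events, not an error), a reversed date_range, and a log entry missing its timestamp field.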
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt already mixes executable detail with instructions, so the safest path is to tune examples and interfaces before you rewrite the overall scaffold.
Multi-Agent System for Internal Security Anomaly Detection
This challenge focuses on building a sophisticated multi-agent system using AutoGen to detect potential data leaks or anomalous behavior. Participants will design and implement a collaborative team of AI agents capable of monitoring internal communication logs, system access records, and cross-referencing this data with external news feeds or public information. The system will identify patterns and anomalies that might indicate security incidents or insider threats. The core of the challenge involves orchestrating diverse agents, each with specific roles like 'Log Monitor', 'News Analyst', 'Incident Investigator', and 'Reporting Agent'. These agents will communicate and collaborate autonomously, using o4-mini for reasoning and specific tools to interact with simulated data sources. The goal is to build an intelligent, proactive security monitoring system that can identify subtle indicators of risk and present a consolidated, actionable report.
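A minimal orchestration sketch for the four named roles, assuming the classic pyautogen GroupChat API; the llm_config contents and system messages are placeholders, not part of the challenge spec:

```python
import autogen

# Placeholder model config; swap in your own credentials and settings.
llm_config = {"config_list": [{"model": "o4-mini", "api_key": "YOUR_KEY"}]}

def make_agent(name: str, system_message: str) -> autogen.AssistantAgent:
    return autogen.AssistantAgent(
        name=name, system_message=system_message, llm_config=llm_config
    )

log_monitor = make_agent("Log_Monitor", "Watch access logs and flag unusual patterns.")
news_analyst = make_agent("News_Analyst", "Cross-reference flagged activity with external news.")
investigator = make_agent("Incident_Investigator", "Correlate evidence and assess severity.")
reporter = make_agent("Reporting_Agent", "Produce a consolidated, actionable report.")

operator = autogen.UserProxyAgent(
    name="Operator", human_input_mode="NEVER", code_execution_config=False
)

groupchat = autogen.GroupChat(
    agents=[operator, log_monitor, news_analyst, investigator, reporter],
    messages=[],
    max_round=12,
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

operator.initiate_chat(manager, message="Review yesterday's access logs for anomalies.")
```

Tool functions like get_access_logs would be attached exactly as the prompt's commented lines show: register_for_llm on the assistant that should propose the call, and register_function on the UserProxyAgent that executes it.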
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.