Operator-ready prompt for reuse, tuning, and workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Open this prompt inside Workspace when you want a live iteration loop.
Copy for quick reuse, or run it in Workspace to keep prompt variants, model settings, and prompt-history changes in one place.
Structured source with 1 active lines to adapt.
Already linked to a challenge workflow.
Sign in to keep private prompt variations.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Using your configured AutoGen agents, simulate a user interaction where the 'User Proxy Agent' provides a profile and requests an event. Introduce the 'simulated_unsafe_input' from the evaluation task template at a specific point in the conversation. Demonstrate how the Event Planner suggests an event and how the Safety Moderator flags the unsafe input, providing a reason. Provide the Python code for this simulation, including how to capture and output the required evaluation data.
Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Preserve the rubric, target behavior, and pass-fail criteria as the baseline for evaluation.
Adjust fixtures, mocks, and thresholds to the system under test instead of weakening the assertions.
Make sure the prompt catches regressions instead of just mirroring the happy-path examples.
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt is mostly narrative and instruction-driven, so you can adapt examples and output constraints first without disturbing the structure.
AutoGen Multi-Agent Social Event Planner with o3
Inspired by Tinder's recent updates that integrate AI for events and bolster safety, this challenge focuses on building a sophisticated multi-agent system using Microsoft's AutoGen framework. The system will act as a 'Social Event Planner' for a hypothetical dating or social networking application. It will be capable of autonomously identifying trending local events, suggesting suitable matches for attendees based on their profiles, and proactively moderating interactions for user safety. The system will leverage the o3 model for nuanced conversational understanding and generation, allowing agents to interact naturally and empathetically with users. Across AI will be utilized for persistent memory management, enabling the agents to maintain and retrieve long-term user preferences, interaction history, and learned social cues. AiXplain will facilitate low-code automation for integrating with various external calendar or event APIs, streamlining event discovery. All Hands AI will be integrated to enhance chat moderation and safety features within agent-to-user and agent-to-agent communications, while Squarespace (conceptually, for UI component integration) will represent how the event suggestions and moderated interactions are presented to end-users. This project explores the frontiers of social AI and responsible agent design.
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.