Operator-ready prompt for reuse, tuning, and workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Best suited to implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Open this prompt inside Workspace when you want a live iteration loop.
Copy for quick reuse, or run it in Workspace to keep prompt variants, model settings, and prompt-history changes in one place.
Structured source with 1 active line to adapt.
Already linked to a challenge workflow.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Develop the multi-agent debate mechanism within AutoGen. Simulate a scenario where the Infrastructure Planner Agent proposes an initial plan, and the Risk Assessor Agent critiques it based on extreme weather vulnerabilities, while the Policy & Ethics Agent considers long-term social and environmental impacts. The debate should lead to a refined, more resilient infrastructure plan. Provide a detailed log of agent interactions and the final consensus.
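A minimal sketch of how that debate loop could be wired with AutoGen's GroupChat is below, assuming pyautogen ~0.2; the llm_config placeholder, system messages, and round limit are illustrative assumptions, not part of the original prompt.

    # Minimal AutoGen debate sketch (assumes pyautogen ~0.2; llm_config is a placeholder).
    import autogen

    llm_config = {"config_list": [{"model": "your-model-here"}]}  # swap in your provider config

    planner = autogen.AssistantAgent(
        name="Infrastructure_Planner",
        system_message="Propose an initial infrastructure plan for the scenario.",
        llm_config=llm_config,
    )
    risk_assessor = autogen.AssistantAgent(
        name="Risk_Assessor",
        system_message="Critique the current plan against extreme weather vulnerabilities.",
        llm_config=llm_config,
    )
    policy_ethics = autogen.AssistantAgent(
        name="Policy_and_Ethics",
        system_message="Weigh long-term social and environmental impacts and push for revisions.",
        llm_config=llm_config,
    )
    moderator = autogen.UserProxyAgent(
        name="Moderator",
        human_input_mode="NEVER",
        code_execution_config=False,
    )

    groupchat = autogen.GroupChat(
        agents=[moderator, planner, risk_assessor, policy_ethics],
        messages=[],
        max_round=8,  # bounded debate so the run terminates with a consensus summary
    )
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    # groupchat.messages accumulates the detailed log of agent interactions the prompt asks for.
    moderator.initiate_chat(
        manager,
        message="Debate a resilient infrastructure plan and end with a final consensus.",
    )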
Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Hold the task contract and output shape stable so generated implementations remain comparable.
Update libraries, interfaces, and environment assumptions to match the stack you actually run (a Bedrock smoke-test sketch follows this list).
Test failure handling, edge cases, and any code paths that depend on hidden context or secrets.
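If your stack is Command R+ on Amazon Bedrock as the challenge describes, a quick smoke test of the model endpoint might look like this sketch; it assumes boto3 and AWS credentials with Bedrock access, and the region and model ID should be verified against your account.

    # Smoke-test sketch for Command R+ on Amazon Bedrock via the Converse API.
    # Assumes boto3 is installed and AWS credentials with Bedrock access are configured.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # use your region

    response = bedrock.converse(
        modelId="cohere.command-r-plus-v1:0",  # verify the exact model ID enabled in your account
        messages=[{"role": "user", "content": [{"text": "List key flood-resilience measures for coastal cities."}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.3},
    )
    print(response["output"]["message"]["content"][0]["text"])

Once the endpoint responds, point the same credentials and model ID at whatever client layer your agents call, so every prompt variant is evaluated against the stack you intend to run.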
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt is mostly narrative and instruction-driven, so you can adapt examples and output constraints first without disturbing the structure.
Multi-Agent Foresight for Resilient Urban Infrastructure Planning with Command R+ & AutoGen
Design and implement a multi-agent system capable of performing 'foresight' for urban infrastructure planning, with a specific focus on resilience against extreme weather events. The system should leverage Command R+ via Amazon Bedrock for advanced reasoning, integrate a grow-and-refine multimodal semantic memory, and employ a multi-agent debate mechanism for risk-aware decision-making. Developers will simulate a scenario where agents analyze diverse data sources (e.g., historical weather patterns, spatial infrastructure maps, urban planning documents) to propose resilient infrastructure solutions, identify potential failure points, and debate the optimal strategies, mimicking the principles of agentic learning and risk-aware planning.
The core of this challenge involves orchestrating specialized agents – a Data Analyst Agent, an Infrastructure Planner Agent, a Risk Assessment Agent, and a Policy & Ethics Agent – using AutoGen. These agents will communicate, share insights from their multimodal memories, and engage in structured debates to arrive at comprehensive, foresight-driven recommendations.
The multimodal semantic memory, backed by a vector database, will store and retrieve information in various formats (text, images, geospatial data), enabling the agents to build a rich understanding of the urban environment and potential future scenarios, thereby bridging the gap from simple prediction to proactive foresight.
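The grow-and-refine memory can be prototyped against any vector store before committing to a full multimodal pipeline. A minimal sketch using chromadb (an illustrative choice, not mandated by the challenge) with text entries only; images and geospatial layers would need their own embedding and metadata handling.

    # Sketch of a grow-and-refine semantic memory backed by a vector store.
    # chromadb is an illustrative choice; any vector database with add/query works.
    import chromadb

    client = chromadb.Client()  # in-memory; use a persistent client for real runs
    memory = client.get_or_create_collection(name="urban_foresight_memory")

    # "Grow": add new observations as the agents ingest documents and analyses.
    memory.add(
        ids=["obs-001", "obs-002"],
        documents=[
            "The 1998 and 2011 floods overtopped the eastern levee by roughly 0.4 m.",
            "The 2023 zoning plan allows dense housing inside the mapped floodplain.",
        ],
        metadatas=[{"source": "historical_weather"}, {"source": "planning_docs"}],
    )

    # "Refine": retrieve the most relevant memories before each debate round,
    # then have agents supersede stale entries by re-adding updated documents under the same ids.
    hits = memory.query(query_texts=["flood risk for the eastern district"], n_results=2)
    print(hits["documents"][0])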
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.