Operator-ready prompt for reuse, tuning, and Workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Best for implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context; a templating sketch follows this list.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
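One way to handle the swap step is to carve the domain-specific pieces out of the prompt as template placeholders while the pristine text stays in its own snapshot. This is a minimal sketch using Python's standard-library string.Template; the placeholder names (domain, example_case) and the shortened prompt text are illustrative assumptions, not part of the source prompt.

```python
from string import Template

# Hypothetical placeholders; the full source prompt stays untouched
# in a separate snapshot file so you can always diff back to it.
adapted = Template(
    "Now let's create a concrete implementation plan for building "
    "your $domain AI solution. Ground the MVP definition in $example_case."
)

print(adapted.substitute(
    domain="medical-diagnostics",
    example_case="a triage assistant that routes symptom descriptions",
))
```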
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Open this prompt inside Workspace when you want a live iteration loop.
Copy for quick reuse, or run it in Workspace to keep prompt variants, model settings, and revision history in one place.
Structured source with 39 active lines to adapt.
Already linked to a challenge workflow.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Now let's create a concrete implementation plan for building your AI solution.

1. **MVP Definition**:
   - Define the Minimum Viable Product (MVP) features
   - What core functionality must be included?
   - What can be deferred to later versions?
   - Create user stories for MVP features

2. **Development Phases**:

   Phase 1 - Foundation (Week 1-2):
   - Set up development environment
   - Implement basic infrastructure
   - Create initial data pipelines
   - Build basic UI/API structure

   Phase 2 - AI Implementation (Week 3-4):
   - Integrate AI models
   - Implement core AI functionality
   - Create training/fine-tuning pipelines
   - Build evaluation frameworks

   Phase 3 - Integration & Testing (Week 5-6):
   - Complete system integration
   - Implement user interfaces
   - Conduct thorough testing
   - Performance optimization

3. **Technical Implementation Details**:
   - Choose specific technologies and frameworks
   - Define coding standards and best practices
   - Create testing strategies (unit, integration, E2E)
   - Plan for CI/CD implementation

4. **Risk Mitigation**:
   - Identify potential technical risks
   - Create contingency plans
   - Define fallback options for critical components
   - Plan for scalability challenges

5. **Deliverables**:
   - Working MVP with core features
   - Documentation (API, user guides, technical specs)
   - Test suite with >80% coverage
   - Performance benchmarks
   - Deployment guide

Create a detailed implementation plan with specific tasks, timelines, and deliverables.
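If you want to exercise the prompt programmatically before tuning it, a minimal harness can look like the sketch below. The call_model function is a stand-in for whatever client your stack provides, and the file names are assumptions; nothing here is prescribed by the source prompt.

```python
from pathlib import Path

SOURCE = Path("implementation_plan_prompt.txt")  # assumed snapshot filename

def call_model(prompt: str) -> str:
    """Stand-in for your model client; wire in your own SDK or HTTP call here."""
    raise NotImplementedError

def run_once(variant_name: str, prompt: str) -> None:
    output = call_model(prompt)
    Path("runs").mkdir(exist_ok=True)
    # One file per variant keeps later side-by-side comparison trivial.
    Path("runs", f"{variant_name}.txt").write_text(output, encoding="utf-8")

if __name__ == "__main__":
    run_once("baseline", SOURCE.read_text(encoding="utf-8"))
```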
Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Preserve the source structure until you know which part of the prompt is actually driving the result quality.
Change domain facts, examples, and tool context before you rewrite the instruction scaffold.
Validate one failure mode at a time so prompt changes stay attributable instead of getting noisy.
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
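One way to keep that pristine snapshot honest is to diff every variant against it before a run. This sketch uses only the standard library; the two file names are assumptions about how you store snapshots.

```python
import difflib
from pathlib import Path

source = Path("prompt_source.txt").read_text(encoding="utf-8").splitlines()
variant = Path("prompt_variant_a.txt").read_text(encoding="utf-8").splitlines()

# A unified diff makes it obvious whether an edit touched domain facts
# only, or drifted into the instruction scaffold.
for line in difflib.unified_diff(source, variant, "source", "variant_a", lineterm=""):
    print(line)
```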
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt is mostly narrative and instruction-driven, so you can adapt examples and output constraints first without disturbing the structure.
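Rough diagnostics like these can be reproduced with a few lines of standard-library Python. The heuristics below (numbered headers, bullet counts) are assumptions about what "structured" means here, not the tool's actual scoring.

```python
import re
from pathlib import Path

def prompt_diagnostics(text: str) -> dict:
    lines = [ln for ln in text.splitlines() if ln.strip()]
    return {
        "active_lines": len(lines),
        "numbered_sections": sum(bool(re.match(r"\s*\d+\.", ln)) for ln in lines),
        "bullets": sum(ln.lstrip().startswith("-") for ln in lines),
    }

print(prompt_diagnostics(Path("prompt_source.txt").read_text(encoding="utf-8")))
```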
Medical Diagnostics: Design an AI system to assist in preliminary diagnosis from symptom descriptions
Build a healthcare-focused AI system that assists in preliminary diagnosis from symptom descriptions. Ensure compliance with medical standards while leveraging AI to improve patient outcomes and healthcare efficiency.
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.
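One way to pin that evaluation target down is a small case file tagged by failure mode, exercised one mode per run. Everything named below (the cases, the expected phrases, the run_prompt callable) is illustrative, not taken from the challenge page.

```python
# Hypothetical cases; in practice, derive them from the challenge's task boundaries.
CASES = [
    {"mode": "overconfident_diagnosis",
     "symptoms": "mild headache for one day",
     "must_include": "not a diagnosis"},
    {"mode": "missed_escalation",
     "symptoms": "chest pain radiating to the left arm",
     "must_include": "seek emergency care"},
]

def score_one_mode(run_prompt, mode: str) -> float:
    """Score a single failure mode so a regression points at one prompt change.

    run_prompt is assumed to take a symptom string and return model output.
    """
    relevant = [c for c in CASES if c["mode"] == mode]
    hits = sum(
        c["must_include"] in run_prompt(c["symptoms"]).lower()
        for c in relevant
    )
    return hits / len(relevant)
```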