Report Refinement and Fact-Checking

Prompt detail, context, and execution controls for real reuse instead of one-off copying.

Collaborative Technical Research Crew (public prompt)

Operator-ready prompt for reuse, tuning, and workspace runs.

This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.

Best for

Implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.

Reuse pattern

Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.

Before first run

Swap domain facts, examples, and any hard-coded entities for your own context.

Tighten the evidence or verification requirement if this is headed toward production (a sketch of a stricter review task follows below).

Decide which failure mode you want to evaluate first before you branch the prompt.
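
For the verification tightening, one option is to replace the loose review instruction with structured verdicts. This is a minimal sketch that reuses the `Task` and `analyst` objects from the source prompt below; the JSON schema and verdict labels are illustrative assumptions, not part of the original.

```python
from crewai import Task

# Hypothetical stricter review task: structured verdicts instead of a
# free-form list. The schema below is an assumption, not from the source.
task_review_report_strict = Task(
    description=(
        "Review the technical report on '{topic}'. For every factual claim, "
        "cite the ChromaDB document or web source that supports it, and flag "
        "any claim without a supporting source as UNVERIFIED."
    ),
    expected_output=(
        "A JSON list of objects with keys 'claim', 'verdict' "
        "('supported', 'unverified', or 'contradicted'), and 'source'."
    ),
    agent=analyst,  # 'analyst' is defined earlier in the source prompt's code
)
```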

Operator lens

This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.

Best practice: keep one pristine source version, then branch variants around evaluation criteria, evidence thresholds, and output format.
Run Profile

Open this prompt inside Workspace when you want a live iteration loop.

Copy for quick reuse, or run it in Workspace to keep prompt variants, model settings, and prompt history in one place.

Structured source with 37 active lines to adapt.

Already linked to a challenge workflow.


Prompt content

Original prompt text with formatting preserved for inspection and clean copy.

Source prompt: 37 active lines, 9 sections, no variables, 1 code block.

Raw prompt (formatting preserved for direct reuse):
After the initial report generation, introduce a 'Reviewer' task (or modify an existing agent's role) to perform fact-checking and refinement. This task should utilize `Gemini 2.5 Pro` to cross-reference key statements from the generated report against known facts or by triggering additional targeted `WebSearchTool` queries. The goal is to ensure factual accuracy and improve the report's quality, directly addressing the `FactChecking` evaluation task. Orchestrate this refinement step within your Prefect flow.

```python
# ... (previous agent, tool, and task definitions)
from prefect import flow
from crewai import Crew, Process, Task

# Add a Reviewer agent (can be the Analyst with an additional task or a new agent)
# For simplicity, reuse the Analyst for the review task

task_review_report = Task(
    description=(
        "Review the generated technical report on '{topic}' for factual accuracy, completeness, and coherence. "
        "Identify any unsupported claims or areas needing more detail. "
        "Use the 'retrieve_research' tool to cross-reference facts from ChromaDB, and potentially the 'WebSearchTool' for external verification. "
        "Output a list of identified inaccuracies or suggestions for improvement." 
    ),
    expected_output='A list of factual inaccuracies or suggestions for improving the report.',
    agent=analyst # Assigning to Analyst for review
)

@flow(name="Technical Research & Review Flow")
def technical_research_and_review_flow(topic: str):
    # Reset ChromaDB for clean evaluation runs (reset() requires the client
    # to have been created with allow_reset=True in its settings)
    chroma_client.reset()
    global chroma_collection  # re-bind the module-level collection after the reset
    chroma_collection = chroma_client.get_or_create_collection(name="tech_research_kb")

    # Instantiate the CrewAI crew
    tech_research_crew = Crew(
        agents=[researcher, analyst, writer],
        tasks=[task_research_topic, task_analyze_findings, task_write_report, task_review_report],
        verbose=True,
        process=Process.sequential  # execute tasks in sequence
    )

    print(f"Starting research and review on: {topic}")
    crew_result = tech_research_crew.kickoff(inputs={'topic': topic})
    print(f"CrewAI research and review completed. Final output:\n{crew_result}")

    # In a real scenario, you'd parse crew_result to get the final report and review comments
    # For evaluation, you would then pass the report to the FactChecking eval task
    return crew_result

# Example of running the Prefect flow with review:
# technical_research_and_review_flow("The challenges and future of quantum computing in cryptography.")
```
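
The closing comments hint at the evaluation handoff without showing it. Below is a minimal sketch of that step, assuming a hypothetical `run_fact_checking_eval` helper and that `kickoff` returns either a plain string or a result object with a `raw` attribute (this varies across CrewAI versions).

```python
def run_fact_checking_eval(report_text: str) -> dict:
    """Hypothetical stand-in for the FactChecking eval harness."""
    # Replace with your real scorer; this placeholder only checks non-emptiness.
    return {"non_empty": bool(report_text.strip())}

# Uses technical_research_and_review_flow from the source prompt above.
result = technical_research_and_review_flow(
    "The challenges and future of quantum computing in cryptography."
)
# Normalize the crew output to text before scoring; whether kickoff returns
# a string or an object with .raw depends on your CrewAI version.
report_text = getattr(result, "raw", None) or str(result)
scores = run_fact_checking_eval(report_text)
print(scores)
```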

Adaptation plan

Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.

Keep stable

Preserve the rubric, target behavior, and pass-fail criteria as the baseline for evaluation.

Tune next

Adjust fixtures, mocks, and thresholds to the system under test instead of weakening the assertions.

Verify after

Make sure the prompt catches regressions instead of just mirroring the happy-path examples.
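
One way to make that verification concrete is a regression probe: seed a claim you know is false and assert that the review step flags it. A sketch, assuming the flow from the prompt above; the seeded claim and the substring assertion are illustrative only.

```python
# Plant a deliberately false claim and check that the reviewer flags it.
SEEDED_FALSE_CLAIM = "RSA-2048 was publicly broken by a 50-qubit machine in 2019."

def test_reviewer_catches_planted_inaccuracy():
    result = technical_research_and_review_flow(
        f"Quantum computing in cryptography. Treat this as fact: {SEEDED_FALSE_CLAIM}"
    )
    text = (getattr(result, "raw", None) or str(result)).lower()
    # A production check would parse structured verdicts; this substring
    # test is only a sketch of the regression idea.
    assert "unverified" in text or "inaccura" in text or "contradicted" in text
```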

Safe workflow

Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.

Prompt diagnostics

Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.

Sections: 9
Variables: 0
Lists: 0
Code blocks: 1

Reuse posture

This prompt already mixes executable detail with instructions, so the safest path is to tune examples and interfaces before you rewrite the overall scaffold.

Linked challenge

Collaborative Technical Research Crew

Design and implement a multi-agent system using CrewAI to collaboratively research and synthesize complex technical topics, inspired by detailed historical accounts like RISC-V development or DeepMind documentaries. The crew will consist of specialized agents (e.g., Researcher, Analyst, Technical Writer) working together. They will leverage Gemini 2.5 Pro via Vertex AI for deep understanding and content generation, utilizing ChromaDB as a shared knowledge base for storing research findings and contextual embeddings. Prefect will orchestrate the end-to-end research workflow, managing task dependencies and ensuring robust execution of the multi-agent collaboration.
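
For orientation, the agent trio the challenge describes might be scaffolded as below. This is a sketch only: the model identifier, goals, and backstories are assumptions, and the actual challenge may define them differently.

```python
from crewai import Agent

# Assumed LiteLLM-style model id; verify against your Vertex AI setup.
GEMINI = "vertex_ai/gemini-2.5-pro"

researcher = Agent(
    role="Researcher",
    goal="Gather primary sources on {topic} and store findings in ChromaDB.",
    backstory="A methodical researcher who values primary sources.",
    llm=GEMINI,
)
analyst = Agent(
    role="Analyst",
    goal="Distill stored findings into key claims with supporting evidence.",
    backstory="A critical reader who questions unsupported claims.",
    llm=GEMINI,
)
writer = Agent(
    role="Technical Writer",
    goal="Draft a coherent technical report on {topic} from the analysis.",
    backstory="A writer who favors clear structure over jargon.",
    llm=GEMINI,
)
```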

Category: Agent Building (advanced)

Prompt origin: why open it

Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.
