Report Refinement and Fact-Checking


Prompt Content

After the initial report is generated, introduce a 'Reviewer' task (or extend an existing agent's role) to perform fact-checking and refinement. This task should use `Gemini 2.5 Pro` to cross-reference key statements from the generated report against known facts, triggering additional targeted `WebSearchTool` queries where external verification is needed. The goal is to ensure factual accuracy and improve the report's quality, directly addressing the `FactChecking` evaluation task. Orchestrate this refinement step within your Prefect flow.

```python
# ... (previous agent, tool, and task definitions)

# Add a Reviewer agent (can be the Analyst with an additional task or a new agent)
# For simplicity, let's reuse the Analyst for a review task

task_review_report = Task(
    description=(
        "Review the generated technical report on '{topic}' for factual accuracy, completeness, and coherence. "
        "Identify any unsupported claims or areas needing more detail. "
        "Use the 'retrieve_research' tool to cross-reference facts from ChromaDB, and the 'WebSearchTool' for external verification where needed. "
        "Output a list of identified inaccuracies or suggestions for improvement."
    ),
    expected_output='A list of factual inaccuracies or suggestions for improving the report.',
    agent=analyst,  # reuse the Analyst as the reviewer
    context=[task_write_report]  # explicitly hand the drafted report to the review step
)

@flow(name="Technical Research & Review Flow")
def technical_research_and_review_flow(topic: str):
    # Reset ChromaDB for a clean run during evaluation
    # (note: chroma_client.reset() requires allow_reset=True in the client settings)
    chroma_client.reset()
    global chroma_collection  # re-bind the module-level handle after the reset
    chroma_collection = chroma_client.get_or_create_collection(name="tech_research_kb")

    # Instantiate the CrewAI crew
    tech_research_crew = Crew(
        agents=[researcher, analyst, writer],
        tasks=[task_research_topic, task_analyze_findings, task_write_report, task_review_report],
        verbose=True,
        process=Process.sequential  # execute tasks in sequence (requires `from crewai import Process`)
    )

    print(f"Starting research and review on: {topic}")
    crew_result = tech_research_crew.kickoff(inputs={'topic': topic})
    print(f"CrewAI research and review completed. Final output:\n{crew_result}")

    # In a real scenario, you'd parse crew_result to get the final report and review comments
    # For evaluation, you would then pass the report to the FactChecking eval task
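    # For example (assuming a recent CrewAI version where kickoff() returns a CrewOutput
    # whose tasks_output list follows the task order above):
    # report_text = crew_result.tasks_output[2].raw   # output of task_write_report
    # review_notes = crew_result.tasks_output[3].raw  # output of task_review_report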
    return crew_result

# Example of running the Prefect flow with review:
# technical_research_and_review_flow("The challenges and future of quantum computing in cryptography.")
```
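
The flow above reuses the Analyst for the review step. If the reviewer should run on `Gemini 2.5 Pro` as the prompt specifies, a dedicated Reviewer agent can be pinned to that model. The sketch below is an illustration rather than part of the original code: it assumes CrewAI's `LLM` wrapper, a LiteLLM-style model string (`gemini/gemini-2.5-pro`), and hypothetical `retrieve_research_tool` / `web_search_tool` instances standing in for the elided earlier tool definitions; adjust the names and model identifier to match your setup.

```python
from crewai import Agent, LLM

# Model handle for the reviewer; the exact model string depends on your provider routing.
gemini_pro = LLM(model="gemini/gemini-2.5-pro", temperature=0.1)

reviewer = Agent(
    role="Technical Reviewer",
    goal="Fact-check generated technical reports and flag unsupported or inaccurate claims.",
    backstory=(
        "A meticulous technical editor who verifies key statements against the research "
        "knowledge base and targeted web searches before signing off on a report."
    ),
    llm=gemini_pro,
    tools=[retrieve_research_tool, web_search_tool],  # hypothetical names for the tools defined earlier
    verbose=True,
)

# To use it, assign the review task to this agent instead of the Analyst
# (task_review_report.agent = reviewer) and add `reviewer` to the Crew's agents list.
```

Whichever model string you choose, the corresponding API key (for example `GEMINI_API_KEY` when routing through LiteLLM's Gemini provider) must be available in the environment of the Prefect worker that runs the flow.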

Usage Tips

- Copy the prompt and paste it into your preferred AI tool (Claude, ChatGPT, Gemini)
- Customize placeholder values with your specific requirements and context
- For best results, provide clear examples and test different variations