Challenge

Misinformation Debunking Team

In response to the pervasive issue of fabricated content and misdirection on social media, this challenge involves building a sophisticated multi-agent system using CrewAI. Your task is to design a team of specialized AI agents to collaboratively debunk misinformation, verify facts, and synthesize neutral, evidence-based reports. The team will be powered by OpenAI o4o for its multimodal reasoning and advanced tool-use capabilities. Each agent within the CrewAI team will have a distinct role (e.g., 'Source Verifier', 'Content Analyzer', 'Report Generator') and will utilize specific tools, including a vector database like Weaviate for rapid semantic search over verified knowledge bases. The system must be capable of processing social media content, identifying false claims, citing credible sources, and producing comprehensive reports, while its operational transparency and performance are monitored and evaluated through LangSmith.
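Independent of CrewAI's actual API, the intended role split can be sketched as a sequential hand-off between the three roles named above. The credible-source list, claim data, and verdict logic below are placeholder assumptions for illustration, not part of the challenge specification:

```python
# Illustrative hand-off between the three roles named in the brief.
# CREDIBLE_SOURCES and the verdict logic are placeholder assumptions.

CREDIBLE_SOURCES = {"who.int", "nasa.gov", "reuters.com"}

def source_verifier(post: dict) -> dict:
    """Flag which cited sources come from the credible list."""
    post["credible_citations"] = [
        s for s in post.get("sources", []) if s in CREDIBLE_SOURCES
    ]
    return post

def content_analyzer(post: dict) -> dict:
    """Mark the post as misinformation if no cited source is credible."""
    post["misinformation"] = len(post["credible_citations"]) == 0
    return post

def report_generator(post: dict) -> str:
    """Synthesize a short, neutral verdict string."""
    verdict = "likely misinformation" if post["misinformation"] else "supported"
    cites = ", ".join(post["credible_citations"]) or "none"
    return f"Claim: {post['claim']} | Verdict: {verdict} | Credible sources: {cites}"

def run_pipeline(post: dict) -> str:
    """Sequential hand-off: each stage enriches the post; the last emits the report."""
    for stage in (source_verifier, content_analyzer, report_generator):
        post = stage(post)
    return post

print(run_pipeline({"claim": "The moon landing was staged.",
                    "sources": ["example-blog.net"]}))
```

In a real CrewAI build each stage would be an Agent with its own goal and tools, and the dict hand-off would be replaced by task outputs flowing through the crew's process.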

Agent Building · Hosted by Vera
Status: Always open
Difficulty: Advanced
Points: 500
Challenge brief

What you are building

The core problem, expected build, and operating context for this challenge.


Datasets

Shared data for this challenge

Review public datasets and any private uploads tied to your build.

Evaluation rubric

How submissions are scored

These dimensions define what the evaluator checks, how much each dimension matters, and which criteria separate a passable run from a strong one.

Max Score: 4
Dimensions: 4 scoring checks
Binary: 4 pass-or-fail dimensions
Ordinal: 0 scaled dimensions
Dimension 1: CorrectMisinformationIdentification

The team must correctly identify whether misinformation is present in the post.

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.

Dimension 2: SourceCredibility

All cited sources must be recognized as credible (e.g., from a predefined list of reputable sources).

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.

Dimension 3: FactualAccuracyScore

Automated assessment of the factual correctness of the debunking report. • target: 0.95 • range: 0-1

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.

Dimension 4: CollaborationEfficiency

Number of sequential agent turns to complete the task, indicating efficient workflow. • target: 5 • range: 2-10

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
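Since all four dimensions are weight-1 binary checks, the overall score reduces to a pass/fail sum with a maximum of 4. The thresholds below come from the rubric's stated targets, but the exact pass conditions for the two thresholded checks (accuracy ≥ 0.95, turns within 2-5) are an interpretation, not confirmed by the rubric text:

```python
def score_submission(result: dict) -> int:
    """Sum of four pass/fail checks, weight 1 each (max score 4).
    Threshold interpretations for the last two checks are assumptions."""
    checks = [
        result["identified_correctly"],       # CorrectMisinformationIdentification
        result["all_sources_credible"],       # SourceCredibility
        result["factual_accuracy"] >= 0.95,   # FactualAccuracyScore (target 0.95)
        2 <= result["agent_turns"] <= 5,      # CollaborationEfficiency (target 5)
    ]
    return sum(int(c) for c in checks)
```

Because partial credit is never awarded, a submission at accuracy 0.94 scores the same on that dimension as one at 0.0.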

Learning goals

What you should walk away with

  • Master CrewAI for defining agent roles, tasks, and collaborative processes in a multi-agent workflow.

  • Implement OpenAI o4o for advanced reasoning, natural language understanding, and tool execution within individual agents.

  • Design custom tools for agents to interact with external services, including a vector database (Weaviate) for semantic search over a curated knowledge base of facts and sources.

  • Orchestrate a task flow where agents collaborate to analyze social media posts, cross-reference facts, identify logical fallacies, and synthesize a coherent debunking narrative.

  • Utilize Factory AI concepts for deploying, managing, and scaling multiple CrewAI teams as part of a larger enterprise misinformation detection platform.

  • Integrate LangSmith for end-to-end tracing of agent conversations, tool calls, and decision pathways to debug and optimize collaborative performance.

  • Develop robust evaluation metrics for fact-checking accuracy, source credibility, and report neutrality.
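In production, Weaviate would serve the vector search over the verified knowledge base. As a stand-in for prototyping the agent tool, the semantic-search step reduces to nearest-neighbour lookup by cosine similarity; the facts and embedding vectors below are toy values, not real model output or the Weaviate client API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy knowledge base of (fact, embedding) pairs. A real system would
# store model-generated embeddings in Weaviate and query it instead.
KNOWLEDGE_BASE = [
    ("Vaccines do not cause autism.", [0.9, 0.1, 0.0]),
    ("The Earth is an oblate spheroid.", [0.1, 0.9, 0.2]),
]

def semantic_search(query_vec, top_k=1):
    """Return the top_k facts ranked by similarity to the query embedding."""
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda kb: cosine(query_vec, kb[1]),
                    reverse=True)
    return [fact for fact, _ in ranked[:top_k]]
```

Wrapping this lookup as a CrewAI tool gives the Content Analyzer a way to pull supporting evidence before the Report Generator synthesizes the debunking narrative.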

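For the LangSmith integration, tracing on LangChain-compatible stacks is typically switched on through environment variables rather than code changes; the project name below is a placeholder:

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="<your-langsmith-api-key>"
export LANGCHAIN_PROJECT="misinformation-debunking-team"  # placeholder project name
```

With these set, agent conversations, tool calls, and decision pathways appear as traces in the named LangSmith project for debugging and evaluation.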
Start from your terminal
$ npx -y @versalist/cli start misinformation-debunking-team

[ok] Wrote CHALLENGE.md

[ok] Wrote .versalist.json

[ok] Wrote eval/examples.json

Requires VERSALIST_API_KEY. Works with any MCP-aware editor.

Host and timing

Host: Vera (AI Research & Mentorship)
Starts: Available now
Run mode: Evergreen challenge


Tool Space Recipe (Draft)

Evaluation
Rubric: 4 dimensions
  • CorrectMisinformationIdentification (weight 1)
  • SourceCredibility (weight 1)
  • FactualAccuracyScore (weight 1)
  • CollaborationEfficiency (weight 1)
Gold items: 1 (1 public)
