Misinformation Debunking Team
What you are building
The core problem, expected build, and operating context for this challenge.
In response to the pervasive spread of fabricated content and misleading claims on social media, this challenge asks you to build a multi-agent system with CrewAI. You will design a team of specialized AI agents that collaborate to debunk misinformation, verify facts, and synthesize neutral, evidence-based reports. The team is powered by OpenAI o4o for its multimodal reasoning and advanced tool-use capabilities. Each agent in the CrewAI team has a distinct role (e.g., 'Source Verifier', 'Content Analyzer', 'Report Generator') and uses specific tools, including a vector database such as Weaviate for fast semantic search over verified knowledge bases. The system must process social media content, identify false claims, cite credible sources, and produce comprehensive reports, while LangSmith monitors and evaluates its operational transparency and performance.
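The three-role pipeline described above can be prototyped before wiring in CrewAI. The sketch below is framework-agnostic: each "agent" is a plain function so the sequential hand-off is visible. All names (the KNOWN_FALSE set, the claim-splitting heuristic) are illustrative stand-ins, not the real system — a real build would use CrewAI Agent/Task/Crew objects backed by an LLM and a Weaviate knowledge base.

```python
from dataclasses import dataclass, field

# Framework-agnostic sketch of the Source Verifier -> Content Analyzer ->
# Report Generator pipeline. Each agent is a plain function here so the
# sequential hand-off is explicit; CrewAI would manage this via Agent,
# Task, and Crew objects.

@dataclass
class Post:
    text: str
    claims: list = field(default_factory=list)
    verdicts: dict = field(default_factory=dict)

# Stand-in for a verified knowledge base (hypothetical content).
KNOWN_FALSE = {"the earth is flat"}

def source_verifier(post):
    # Split the post into checkable claims (a real system would use
    # LLM-based claim extraction, not sentence splitting).
    post.claims = [c.strip().lower() for c in post.text.split(".") if c.strip()]
    return post

def content_analyzer(post):
    # Label each claim against the knowledge base (a real system would
    # run a Weaviate semantic-search lookup here).
    post.verdicts = {c: ("false" if c in KNOWN_FALSE else "unverified")
                     for c in post.claims}
    return post

def report_generator(post):
    flagged = [c for c, v in post.verdicts.items() if v == "false"]
    return {"misinformation_present": bool(flagged), "flagged_claims": flagged}

def run_crew(text):
    # Sequential process: each agent's output is the next agent's input.
    return report_generator(content_analyzer(source_verifier(Post(text))))
```

For example, `run_crew("The Earth is flat. Water is wet.")` flags the first claim and reports misinformation as present, while a post with no known-false claims reports it as absent.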
Shared data for this challenge
Review public datasets and any private uploads tied to your build.
How submissions are scored
These dimensions define what the evaluator checks, how much each dimension matters, and which criteria separate a passable run from a strong one.
CorrectMisinformationIdentification
The team must correctly identify whether misinformation is present in the post.
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
SourceCredibility
All cited sources must be recognized as credible (e.g., from a predefined list of reputable sources).
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
FactualAccuracyScore
Automated assessment of the factual correctness of the debunking report. • target: 0.95 • range: 0-1
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
CollaborationEfficiency
Number of sequential agent turns to complete the task, indicating efficient workflow. • target: 5 • range: 2-10
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
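Since every dimension above is all-or-nothing, the evaluator can be modeled as a pass/fail aggregator. The weights and pass checks below are illustrative assumptions (the official weights are not published); only the thresholds for FactualAccuracyScore (0.95) and CollaborationEfficiency (5 turns) come from the rubric itself.

```python
# Illustrative scorer for the four pass/fail dimensions. Weights are
# assumptions; the 0.95 accuracy target and 5-turn target come from the
# rubric above. Each dimension contributes its full weight only when its
# check passes -- no partial credit within a dimension.

DIMENSIONS = {
    "CorrectMisinformationIdentification": (0.35, lambda r: r["identified_correctly"]),
    "SourceCredibility":                   (0.25, lambda r: all(r["sources_credible"])),
    "FactualAccuracyScore":                (0.25, lambda r: r["factual_accuracy"] >= 0.95),
    "CollaborationEfficiency":             (0.15, lambda r: r["agent_turns"] <= 5),
}

def score(result):
    # Sum the weights of the dimensions whose pass check succeeds.
    return sum(w for w, check in DIMENSIONS.values() if check(result))
```

A run that identifies the misinformation correctly, cites only credible sources, scores 0.96 on factual accuracy, and finishes in 4 turns would score 1.0 under these assumed weights; dropping factual accuracy to 0.90 would forfeit that dimension's entire weight.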
What you should walk away with
Master CrewAI for defining agent roles, tasks, and collaborative processes in a multi-agent workflow.
Apply OpenAI o4o for advanced reasoning, natural language understanding, and tool execution within individual agents.
Design custom tools for agents to interact with external services, including a vector database (Weaviate) for semantic search over a curated knowledge base of facts and sources.
Orchestrate a task flow where agents collaborate to analyze social media posts, cross-reference facts, identify logical fallacies, and synthesize a coherent debunking narrative.
Utilize Factory AI concepts for deploying, managing, and scaling multiple CrewAI teams as part of a larger enterprise misinformation detection platform.
Integrate LangSmith for end-to-end tracing of agent conversations, tool calls, and decision pathways to debug and optimize collaborative performance.
Develop robust evaluation metrics for fact-checking accuracy, source credibility, and report neutrality.
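The semantic-search tool mentioned above can be prototyped without a running Weaviate instance. This sketch ranks a toy knowledge base by cosine similarity; the `embed` function is a deliberate placeholder (character frequencies, not real semantics) so the ranking machinery is runnable, and the corpus is hypothetical. A real build would swap in a sentence-embedding model and a Weaviate near-vector query.

```python
import math

# Toy semantic search over a verified knowledge base, standing in for a
# Weaviate near-vector query. embed() is a placeholder: a character-
# frequency vector that is NOT semantically meaningful, used only to make
# the ranking machinery runnable end to end.

def embed(text):
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical verified knowledge base.
KNOWLEDGE_BASE = [
    "Vaccines do not cause autism",
    "The Earth is an oblate spheroid",
]

def semantic_search(query, k=1):
    # Return the k knowledge-base entries most similar to the query.
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]
```

An agent tool would wrap `semantic_search` so that, given a suspect claim, the agent retrieves the closest verified fact to cite in its debunking report.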
[ok] Wrote CHALLENGE.md
[ok] Wrote .versalist.json
[ok] Wrote eval/examples.json
Requires VERSALIST_API_KEY. Works with any MCP-aware editor.
DocsAI Research & Mentorship