Agentic News Headline Generator with Fact-Checking and Bias Detection
What you are building
The core problem, expected build, and operating context for this challenge.
Prompted by Google's experience with AI-generated headlines that were sometimes inaccurate or misleading, this challenge asks you to build a robust agentic system for generating news headlines. The system, built with the Claude Agents SDK, must emphasize factual accuracy, detect potential biases, and ensure relevance to the source article. Your agent acts as an editorial assistant, using Claude Opus 4.1 for reasoning and content generation, combined with external tools for fact-checking and validation. The project highlights the critical role of AI governance, evaluation, and responsible AI practices in content generation workflows.
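As a concrete starting point, the "external tool for fact-checking" the agent calls can be sketched as an Anthropic-style tool definition plus a handler. Everything here is illustrative: `check_claim`, the naive word-overlap heuristic, and the handler name are assumptions, not part of any real fact-checking service.

```python
# Illustrative tool definition in the Anthropic tool-use schema shape.
# The name "check_claim" and its heuristic are assumptions for this sketch.
FACT_CHECK_TOOL = {
    "name": "check_claim",
    "description": "Verify a headline claim against the source article.",
    "input_schema": {
        "type": "object",
        "properties": {
            "claim": {"type": "string"},
            "article": {"type": "string"},
        },
        "required": ["claim", "article"],
    },
}

def handle_check_claim(claim: str, article: str) -> dict:
    """Naive stand-in for a fact-checking service: a claim counts as
    'supported' only if every content word (>3 chars) appears in the article."""
    words = [w for w in claim.lower().split() if len(w) > 3]
    supported = all(w in article.lower() for w in words)
    return {"supported": supported, "checked_terms": words}
```

In a real build, `FACT_CHECK_TOOL` would be passed in the `tools` list of `client.messages.create(...)`, and any `tool_use` blocks in the response would be dispatched to `handle_check_claim`, with the result returned to the model as a `tool_result`.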
Shared data for this challenge
Review public datasets and any private uploads tied to your build.
How submissions are scored
These dimensions define what the evaluator checks, how much each dimension matters, and which criteria separate a passable run from a strong one.
No factual inaccuracies
Generated headlines must not contain any verifiable factual errors relative to the source article.
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
Guardrails AI policy enforcement
Generated headlines must adhere to all predefined Guardrails AI policies.
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
Factual Accuracy Score
Score reflecting the factual correctness of the generated headline (0-1). • target: 0.95 • range: 0.8-1
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
Relevance to Article
Semantic similarity score between headline and article core content (0-1). • target: 0.9 • range: 0.75-1
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
Bias Detection Score (HeyBoss AI)
Residual bias measured in the generated headline after detection and mitigation (0-1; lower is better). • target: 0.1 • range: 0-0.3
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
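The all-or-nothing rubric above can be expressed as a small scoring function: each dimension contributes its full weight only when its target is met, with no partial credit. The weights and comparison directions below are assumptions for illustration; the brief does not publish per-dimension weights.

```python
# Hypothetical all-or-nothing scorer matching the rubric: a dimension
# contributes its full weight only when its target is met.
# Weights (0.4 / 0.3 / 0.3) are assumptions, not published by the challenge.
DIMENSIONS = {
    # name: (target, higher_is_better, weight)
    "factual_accuracy": (0.95, True, 0.4),
    "relevance": (0.90, True, 0.3),
    "bias": (0.10, False, 0.3),  # lower is better for bias
}

def score(results: dict) -> float:
    """Sum the weights of all dimensions whose target is satisfied."""
    total = 0.0
    for name, (target, higher, weight) in DIMENSIONS.items():
        value = results[name]
        passed = value >= target if higher else value <= target
        total += weight if passed else 0.0
    return round(total, 3)
```

For example, a run with factual accuracy 0.96, relevance 0.92, and bias 0.05 passes all three dimensions, while raising bias to 0.2 forfeits that dimension's entire weight.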
What you should walk away with
Master the Claude Agents SDK for constructing multi-turn, tool-using agents, focusing on defining tools, implementing tool handlers, and managing conversational state with Claude Opus 4.1.
Design and implement custom tools for the Claude agent to perform external lookups, such as simulating an API call to a 'fact-checking service' or a 'bias analysis engine'.
Utilize Claude Opus 4.1's advanced reasoning capabilities to analyze source articles, identify key facts, and generate multiple headline options that are concise, accurate, and engaging.
Integrate HeyBoss AI to monitor and evaluate the agent's generated headlines for potential factual errors, tone issues, or unintended biases, providing real-time feedback.
Build an MLOps pipeline using ZenML to orchestrate the entire headline generation and evaluation workflow, including data ingestion, agent execution, and storing evaluation results.
Implement Guardrails AI to enforce strict output policies on generated headlines, ensuring they meet length constraints, avoid specific keywords, and adhere to a desired factual confidence score.
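The output policies the last objective describes can be prototyped in plain Python before wiring up Guardrails AI proper. The specific constraints below (90-character cap, the banned-phrase list) are placeholder assumptions, not policies defined by the challenge.

```python
# Plain-Python stand-in for the kinds of output policies Guardrails AI
# would enforce. The limits and banned phrases are illustrative assumptions.
BANNED_PHRASES = {"shocking", "you won't believe"}
MAX_CHARS = 90

def enforce_policies(headline: str) -> list[str]:
    """Return a list of policy violations; an empty list means the headline passes."""
    violations = []
    if len(headline) > MAX_CHARS:
        violations.append(f"too long: {len(headline)} > {MAX_CHARS} chars")
    lowered = headline.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            violations.append(f"banned phrase: {phrase!r}")
    return violations
```

In a production build these checks would become Guardrails validators attached to a `Guard`, so failed headlines can trigger automatic re-generation instead of just being rejected.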
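The ZenML pipeline objective (ingestion, agent execution, evaluation) can be sketched with plain functions first; in a real build each function would carry ZenML's `@step` decorator and the orchestrator would be a `@pipeline`. The step names and the toy headline/evaluation logic are assumptions for this sketch.

```python
# Shape of the headline pipeline as plain functions; decorate with ZenML's
# @step / @pipeline in a real build. All logic here is placeholder.
def ingest(article: str) -> str:
    """Data ingestion step: normalize the raw article text."""
    return article.strip()

def generate_headline(article: str) -> str:
    """Agent execution step: placeholder for the Claude Opus 4.1 call."""
    return article.split(".")[0][:90]

def evaluate(article: str, headline: str) -> dict:
    """Evaluation step: toy relevance proxy via shared-term count."""
    overlap = len(set(headline.lower().split()) & set(article.lower().split()))
    return {"headline": headline, "term_overlap": overlap}

def headline_pipeline(article: str) -> dict:
    """Orchestrator: ingest -> generate -> evaluate, returning stored results."""
    text = ingest(article)
    headline = generate_headline(text)
    return evaluate(text, headline)
```

Keeping each stage a separate step means ZenML can cache, version, and track artifacts per stage, which is what makes the evaluation results reproducible across runs.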
DocsAI Research & Mentorship
Operating window
Key dates and the organization behind this challenge.