Agent Building
Advanced
Always open

Orchestrate Scientific Integrity Agent Crew

With growing concerns about 'AI slop' in scientific publishing, this challenge focuses on developing an agentic system to enforce scientific integrity. You will use CrewAI to orchestrate a team of specialized AI agents that act as a 'Scientific Review Board.' This crew will collaborate to analyze newly generated scientific abstracts or summaries, identify potential factual inaccuracies, inconsistencies, and characteristics of AI-generated content, and verify claims against a knowledge base. The system should highlight suspicious areas and provide justifications for its findings, leveraging the advanced reasoning capabilities of Claude Opus 4.1.

Challenge brief

What you are building

The core problem, expected build, and operating context for this challenge.


Datasets

Shared data for this challenge

Review public datasets and any private uploads tied to your build.

Evaluation rubric

How submissions are scored

These dimensions define what the evaluator checks, how much each dimension matters, and which criteria separate a passable run from a strong one.

Max score: 4
Dimensions: 4 scoring checks
Binary: 4 pass-or-fail dimensions
Ordinal: 0 scaled dimensions
Dimension 1: detect_all_known_errors

Detect All Known Errors

The crew identifies all pre-defined factual errors.

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.

Dimension 2: justification_quality

Justification Quality

Each flagged issue has a clear and relevant justification.

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.

Dimension 3: accuracy_of_ai_slop_detection

Accuracy of AI Slop Detection

The fraction of 'AI slop' indicators correctly identified. • target: 0.85 • range: 0-1

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
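Because this dimension is scored pass/fail against a 0.85 target, it can be read as simple set accuracy: the fraction of expected indicators that the crew actually flags. A minimal sketch (the indicator names below are invented for illustration):

```python
def slop_detection_accuracy(expected: set[str], flagged: set[str]) -> float:
    """Fraction of expected 'AI slop' indicators the crew flagged (0-1)."""
    if not expected:
        return 1.0  # nothing to detect counts as a perfect run
    return len(expected & flagged) / len(expected)

# Hypothetical indicators for one abstract:
expected = {"fabricated_citation", "vague_superlatives", "unit_mismatch"}
flagged = {"fabricated_citation", "vague_superlatives"}

score = slop_detection_accuracy(expected, flagged)
passed = score >= 0.85  # binary check against the rubric target
```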

Dimension 4: review_consensus_score

Review Consensus Score

A measure of agreement among agents on critical findings. • target: 0.9 • range: 0-1

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
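The challenge does not pin down an exact formula for agreement, so one plausible reading is the fraction of all reported critical findings that every agent independently raised. A sketch under that assumption:

```python
def review_consensus(findings_by_agent: dict[str, set[str]]) -> float:
    """Fraction of all reported findings that every agent agrees on (0-1).

    This is one plausible reading of 'agreement on critical findings';
    the challenge does not specify the exact formula.
    """
    all_findings = set().union(*findings_by_agent.values())
    if not all_findings:
        return 1.0  # no findings means nothing to disagree about
    unanimous = set.intersection(*findings_by_agent.values())
    return len(unanimous) / len(all_findings)

# Hypothetical findings from three agents:
findings = {
    "factual_verifier": {"claim_3_unsupported", "stat_7_wrong"},
    "consistency_checker": {"claim_3_unsupported", "stat_7_wrong"},
    "slop_detector": {"claim_3_unsupported"},
}
consensus = review_consensus(findings)  # 1 unanimous of 2 total -> 0.5
```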

Learning goals

What you should walk away with

Master CrewAI's framework for defining roles, goals, and tasks for collaborative AI agents, ensuring clear responsibilities and communication paths.

Implement role-playing agents such as a 'Factual Verifier,' 'Consistency Checker,' and 'AI Slop Detector,' each equipped with specific tools and system prompts.

Integrate Claude Opus 4.1 for the 'AI Slop Detector' and 'Consistency Checker' roles, leveraging its advanced analytical and reasoning capabilities to identify subtle inconsistencies and patterns indicative of AI generation.

Utilize Mistral Saba for the 'Summarizer' agent, to quickly digest and extract key information from scientific texts for initial review by other agents.

Build a tool for the 'Factual Verifier' agent that queries a Pinecone vector database populated with scientific articles and established facts for evidence-based verification.
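One way to back the Factual Verifier with evidence is a thin wrapper over a vector-index query. The sketch below assumes the Pinecone client's `query(vector=..., top_k=..., include_metadata=True)` interface; the index name, metadata fields, and similarity threshold are illustrative, and the embedding step is left out.

```python
def verify_claim(index, claim_embedding, threshold=0.80, top_k=3):
    """Return supporting evidence for a claim from a vector index.

    `index` is anything with a Pinecone-style .query() method, so the
    function can be wrapped as a CrewAI tool or unit-tested with a stub.
    """
    result = index.query(
        vector=claim_embedding, top_k=top_k, include_metadata=True
    )
    hits = [m for m in result.matches if m.score >= threshold]
    return {
        "supported": bool(hits),
        "evidence": [m.metadata.get("source", "unknown") for m in hits],
    }

# Production wiring (requires the `pinecone` package and an API key):
# from pinecone import Pinecone
# index = Pinecone(api_key="...").Index("scientific-facts")  # name is illustrative
```

Keeping the index as an injected parameter makes the tool testable without network access.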

Design the overall review process within CrewAI, specifying the sequence of tasks, agent hand-offs, and criteria for collaborative decision-making.

Develop a robust output mechanism that provides a summary of findings, specific flagged issues, and justifications from the contributing agents, possibly integrated with DeepOpinion for workflow automation of the publishing feedback loop.
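A minimal version of that output mechanism is a function that folds each agent's flags into one structured report. The field names here are illustrative, and the DeepOpinion hand-off is left as a downstream consumer of the JSON:

```python
import json

def build_report(flags: list[dict]) -> str:
    """Assemble the crew's findings into a single JSON report.

    Each flag is expected to carry the agent that raised it, the issue,
    and its justification (field names are illustrative).
    """
    report = {
        "summary": {
            "total_issues": len(flags),
            "agents_involved": sorted({f["agent"] for f in flags}),
        },
        "flags": flags,
    }
    return json.dumps(report, indent=2)

# Hypothetical flags from two agents:
flags = [
    {"agent": "factual_verifier", "issue": "claim_3_unsupported",
     "justification": "No match in knowledge base above 0.8 similarity."},
    {"agent": "slop_detector", "issue": "vague_superlatives",
     "justification": "'Groundbreaking' and 'unprecedented' used without data."},
]
report_json = build_report(flags)
```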

Start from your terminal
$ npx -y @versalist/cli start orchestrate-scientific-integrity-agent-crew

[ok] Wrote CHALLENGE.md

[ok] Wrote .versalist.json

[ok] Wrote eval/examples.json

Requires VERSALIST_API_KEY. Works with any MCP-aware editor.

Challenge at a glance
Host: Vera (AI Research & Mentorship)
Starts: Available now
Run mode: Evergreen challenge


Evaluation
Rubric: 4 dimensions, each of weight 1
· Detect All Known Errors (weight 1)
· Justification Quality (weight 1)
· Accuracy of AI Slop Detection (weight 1)
· Review Consensus Score (weight 1)
Gold items: 2 (2 public)
