Agent Building · Advanced · Always open

R&D Team for Specialized AI Model Definition


Challenge brief

What you are building

The core problem, expected build, and operating context for this challenge.

Orchestrate a multi-agent team using CrewAI to simulate an R&D department tasked with defining the requirements and preliminary architecture for a highly specialized AI model, such as one for automating heavy construction equipment. This challenge requires defining distinct roles (e.g., AI Researcher, Robotics Engineer, Project Manager), assigning specific goals, and enabling collaborative problem-solving. Agents must leverage external tools for information gathering and document generation, ultimately producing a comprehensive R&D report detailing the model's purpose, key features, data needs, and architectural considerations. The focus is on complex task decomposition and inter-agent communication facilitated by a shared memory and structured output.

Datasets

Shared data for this challenge

Review public datasets and any private uploads tied to your build.

Evaluation rubric

How submissions are scored

These dimensions define what the evaluator checks, how much each dimension matters, and which criteria separate a passable run from a strong one.

Max score: 6
Dimensions: 6 scoring checks
Binary: 6 pass-or-fail dimensions
Ordinal: 0 scaled dimensions
Dimension 1: report_completeness

Report Completeness

Checks if all required sections of the R&D report are present and non-empty.

binary
Weight: 1
Binary check

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.

Dimension 2: coherence_and_readability

Coherence and Readability

Verifies that the report is well-structured and easy to understand.

binary
Weight: 1

Dimension 3: tool_utilization_trace

Tool Utilization Trace

Confirms that agents demonstrably used external tools (e.g., web search, Milvus) during their process via LangSmith traces.

binary
Weight: 1

Dimension 4: factual_accuracy_score

Factual Accuracy Score

Measures the correctness of technical details and factual statements within the report (0-1). • target: 0.9 • range: 0.7-1

binary
Weight: 1

Dimension 5: architectural_soundness

Architectural Soundness

Evaluates the feasibility and robustness of the proposed architectural overview (0-1). • target: 0.85 • range: 0.6-1

binary
Weight: 1

Dimension 6: inter_agent_communication_effectiveness

Inter-Agent Communication Effectiveness

Assesses the quality and relevance of messages exchanged between agents (0-1), derived from LangSmith traces. • target: 0.9 • range: 0.7-1

binary
Weight: 1

Learning goals

What you should walk away with

Master the CrewAI framework for defining agents with specific roles, backstories, goals, and associated tools.
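
CrewAI's Agent constructor takes role, goal, backstory, and tools arguments. The dependency-free sketch below mirrors that shape with stdlib dataclasses so it runs anywhere; the three role specs and the tool name are illustrative, not part of CrewAI itself.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Mirrors the role/goal/backstory/tools fields CrewAI's Agent expects."""
    role: str
    goal: str
    backstory: str
    tools: list = field(default_factory=list)

# Three illustrative roles for the R&D crew.
researcher = AgentSpec(
    role="AI Researcher",
    goal="Survey perception and control approaches for autonomous heavy equipment",
    backstory="A research scientist who grounds every claim in cited sources.",
    tools=["web_search"],
)
engineer = AgentSpec(
    role="Robotics Engineer",
    goal="Translate research findings into feasible hardware/software requirements",
    backstory="Ten years of field robotics deployments.",
)
manager = AgentSpec(
    role="Project Manager",
    goal="Decompose the brief into tasks and assemble the final R&D report",
    backstory="Keeps the crew on scope and on schedule.",
)

crew_roster = [researcher, engineer, manager]
```

In real CrewAI code these become `crewai.Agent(...)` instances wired into a `Crew`; the separation of role, goal, and backstory is what lets each agent's prompt stay focused.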

Implement advanced task decomposition and delegation strategies within a CrewAI workflow, ensuring agents collaborate effectively.
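
In CrewAI's sequential process, each task's output becomes context for the next task. A minimal stdlib sketch of that hand-off pattern (the task functions and their string outputs are illustrative placeholders):

```python
# Each "task" is a callable that takes the accumulated context and
# returns it extended with its own contribution, mimicking a
# sequential crew where outputs flow downstream.
def research_task(context):
    return context + ["findings: LiDAR + RTK-GPS are common for site perception"]

def architecture_task(context):
    # Consumes the researcher's findings to propose an architecture.
    assert any(c.startswith("findings:") for c in context)
    return context + ["architecture: perception -> planner -> actuation stack"]

def report_task(context):
    return context + ["report: " + "; ".join(context)]

def run_sequential(tasks, initial=None):
    """Run tasks in order, threading each output into the next task's context."""
    context = initial or []
    for task in tasks:
        context = task(context)
    return context

trace = run_sequential([research_task, architecture_task, report_task])
```

The assertion inside `architecture_task` illustrates why ordering matters: a downstream task should fail fast if the context it depends on is missing, rather than hallucinate around the gap.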

Integrate a vector database like Milvus to serve as a shared knowledge repository for agents, allowing them to store and retrieve research findings.
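
Milvus exposes insert and similarity-search operations over embedding vectors. The self-contained stand-in below shows the shared-memory pattern with cosine similarity over toy 3-dimensional vectors; it is not the Milvus client API, just the access pattern agents would use against a real collection.

```python
import math

class SharedMemory:
    """Toy stand-in for a Milvus collection: insert embeddings, search by similarity."""
    def __init__(self):
        self.rows = []  # (vector, text) pairs

    def insert(self, vector, text):
        self.rows.append((vector, text))

    def search(self, query, top_k=1):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        ranked = sorted(self.rows, key=lambda r: cosine(query, r[0]), reverse=True)
        return [text for _, text in ranked[:top_k]]

memory = SharedMemory()
memory.insert([1.0, 0.0, 0.0], "excavator perception notes")
memory.insert([0.0, 1.0, 0.0], "hydraulic actuation latency data")
best = memory.search([0.9, 0.1, 0.0], top_k=1)
```

With a real deployment, the vectors come from an embedding model and the store is a `pymilvus` collection; the insert-then-search contract the agents rely on is the same.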

Develop custom tools for agents, such as a web search tool (e.g., using Serper API) for external information gathering and a document generation tool.
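
A custom agent tool is ultimately a named callable the agent can invoke. The sketch below wraps a web search behind one, with the HTTP transport injected so the example runs offline; the `google.serper.dev/search` endpoint and `X-API-KEY` header follow Serper's public documentation, but verify them against the current API before relying on this.

```python
import json

SERPER_URL = "https://google.serper.dev/search"

def make_search_tool(api_key, post):
    """Build a search tool; `post(url, headers, body)` is an injected HTTP transport."""
    def web_search(query):
        body = json.dumps({"q": query})
        headers = {"X-API-KEY": api_key, "Content-Type": "application/json"}
        response = post(SERPER_URL, headers, body)
        # Return just the organic result titles as an agent-friendly summary.
        return [hit["title"] for hit in response.get("organic", [])]
    return web_search

# Offline stub standing in for a real HTTP POST.
def fake_post(url, headers, body):
    assert "X-API-KEY" in headers
    return {"organic": [{"title": "Autonomous excavator survey"}]}

search = make_search_tool("demo-key", fake_post)
titles = search("autonomous heavy construction equipment")
```

Injecting the transport keeps the tool unit-testable; in production you would pass a real HTTP client and register the callable with the agent framework's tool decorator.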

Leverage the Gemini 2.5 Pro model for individual agent intelligence, focusing on its advanced reasoning and problem-solving capabilities.

Utilize LangSmith for comprehensive tracing, debugging, and evaluation of multi-agent interactions, identifying bottlenecks and improving collaboration patterns.
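
LangSmith tracing is typically switched on through environment variables set before the crew starts. The variable names below follow LangSmith's documented `LANGCHAIN_*` convention; the key and project name are placeholders you must replace.

```python
import os

# Enable LangSmith tracing for every LLM and tool call the crew makes.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-key"      # placeholder
os.environ["LANGCHAIN_PROJECT"] = "rd-crew-heavy-equipment"  # illustrative project name
```

Grouping runs under a named project makes it easier to inspect inter-agent message traces and spot delegation bottlenecks in the LangSmith UI.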

Design a robust output mechanism for the CrewAI team to synthesize their findings into a structured R&D report, potentially using Pydantic for schema validation.
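
The report_completeness check above looks for required, non-empty sections. A stdlib sketch of the output schema and that check (section names follow the brief's list of purpose, key features, data needs, and architectural considerations; in practice you might use Pydantic's BaseModel with validators instead of a dataclass):

```python
from dataclasses import dataclass, fields

@dataclass
class RDReport:
    """Sections named in the challenge brief; all must be non-empty to pass."""
    purpose: str
    key_features: str
    data_needs: str
    architectural_considerations: str

def is_complete(report):
    """Mirror the report_completeness rubric: every section present and non-empty."""
    return all(getattr(report, f.name).strip() for f in fields(report))

draft = RDReport(
    purpose="Automate repetitive excavation cycles",
    key_features="Perception, path planning, safe-stop behaviors",
    data_needs="LiDAR sweeps, operator demonstrations, site maps",
    architectural_considerations="",  # still empty -> draft fails the check
)
```

Validating the crew's final answer against a schema like this before submission catches an empty section locally instead of at grading time.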

Start from your terminal
$ npx -y @versalist/cli start r-d-team-for-specialized-ai-model-definition

[ok] Wrote CHALLENGE.md

[ok] Wrote .versalist.json

[ok] Wrote eval/examples.json

Requires VERSALIST_API_KEY. Works with any MCP-aware editor.

Challenge at a glance
Host and timing
Vera

AI Research & Mentorship

Starts: Available now
Evergreen challenge
Timeline and host

Operating window

Key dates and the organization behind this challenge.

Start date: Available now
Run mode: Evergreen challenge

Tool Space Recipe

Draft
Evaluation
Rubric: 6 dimensions
· Report Completeness (weight 1)
· Coherence and Readability (weight 1)
· Tool Utilization Trace (weight 1)
· Factual Accuracy Score (weight 1)
· Architectural Soundness (weight 1)
· Inter-Agent Communication Effectiveness (weight 1)
Gold items: 1 (1 public)
