R&D Team for Specialized AI Model Definition
What you are building
The core problem, expected build, and operating context for this challenge.
Orchestrate a multi-agent team using CrewAI to simulate an R&D department tasked with defining the requirements and preliminary architecture for a highly specialized AI model, such as one for automating heavy construction equipment. This challenge requires defining distinct roles (e.g., AI Researcher, Robotics Engineer, Project Manager), assigning specific goals, and enabling collaborative problem-solving. Agents must leverage external tools for information gathering and document generation, ultimately producing a comprehensive R&D report detailing the model's purpose, key features, data needs, and architectural considerations. The focus is on complex task decomposition and inter-agent communication facilitated by a shared memory and structured output.
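A minimal sketch of how such a crew could be wired together in CrewAI follows. The role names, goals, task descriptions, and the "gemini/gemini-2.5-pro" model string are illustrative assumptions (CrewAI resolves model strings through LiteLLM, and the LLM wrapper import depends on the installed version); SerperDevTool expects SERPER_API_KEY in the environment.

```python
from crewai import Agent, Task, Crew, Process, LLM
from crewai_tools import SerperDevTool  # needs SERPER_API_KEY set in the environment

# Illustrative model identifier; CrewAI resolves it via LiteLLM.
gemini = LLM(model="gemini/gemini-2.5-pro", temperature=0.2)

search = SerperDevTool()

researcher = Agent(
    role="AI Researcher",
    goal="Survey perception and control approaches for autonomous heavy construction equipment",
    backstory="A researcher focused on applied ML for robotics and field automation.",
    tools=[search],
    llm=gemini,
    verbose=True,
)

engineer = Agent(
    role="Robotics Engineer",
    goal="Translate research findings into data requirements and an architectural overview",
    backstory="An engineer who has deployed ML models on embedded and safety-critical systems.",
    llm=gemini,
    verbose=True,
)

manager = Agent(
    role="Project Manager",
    goal="Synthesize the team's findings into a complete, coherent R&D report",
    backstory="A PM experienced in coordinating cross-functional R&D efforts.",
    llm=gemini,
    allow_delegation=True,
    verbose=True,
)

research_task = Task(
    description="Research the state of the art in AI for automating heavy construction equipment.",
    expected_output="A bullet-point summary of key approaches, sensors, and open problems.",
    agent=researcher,
)

architecture_task = Task(
    description="Propose data needs and a preliminary model architecture based on the research summary.",
    expected_output="A section covering data sources, model components, and integration points.",
    agent=engineer,
    context=[research_task],
)

report_task = Task(
    description="Assemble the final R&D report: purpose, key features, data needs, architecture.",
    expected_output="A complete R&D report with all required sections.",
    agent=manager,
    context=[research_task, architecture_task],
)

crew = Crew(
    agents=[researcher, engineer, manager],
    tasks=[research_task, architecture_task, report_task],
    process=Process.sequential,
    verbose=True,
)

if __name__ == "__main__":
    print(crew.kickoff())
```

Running crew.kickoff() executes the tasks sequentially, with each later task receiving the earlier tasks' output as context.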
Shared data for this challenge
Review public datasets and any private uploads tied to your build.
How submissions are scored
These dimensions define what the evaluator checks, how much each dimension matters, and which criteria separate a passable run from a strong one.
Report Completeness
Checks if all required sections of the R&D report are present and non-empty.
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
Coherence and Readability
Verifies that the report is well-structured and easy to understand.
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
Tool Utilization Trace
Confirms that agents demonstrably used external tools (e.g., web search, Milvus) during their process via LangSmith traces.
This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
Factual Accuracy Score
Measures the correctness of technical details and factual statements within the report (0-1). • target: 0.9 • range: 0.7-1
This dimension contributes its full weight only when the score meets the stated target; partial credit is not awarded.
Architectural Soundness
Evaluates the feasibility and robustness of the proposed architectural overview (0-1). • target: 0.85 • range: 0.6-1
This dimension contributes its full weight only when the score meets the stated target; partial credit is not awarded.
Inter-Agent Communication Effectiveness
Assesses the quality and relevance of messages exchanged between agents (0-1), derived from LangSmith traces. • target: 0.9 • range: 0.7-1
This dimension contributes its full weight only when the score meets the stated target; partial credit is not awarded.
What you should walk away with
Master the CrewAI framework for defining agents with specific roles, backstories, goals, and associated tools.
Implement advanced task decomposition and delegation strategies within a CrewAI workflow, ensuring agents collaborate effectively.
Integrate a vector database like Milvus to serve as a shared knowledge repository for agents, allowing them to store and retrieve research findings (a tool sketch follows this list).
Develop custom tools for agents, such as a web search tool (e.g., using Serper API) for external information gathering and a document generation tool.
Leverage the Gemini 2.5 Pro model for individual agent intelligence, focusing on its advanced reasoning and problem-solving capabilities.
Utilize LangSmith for comprehensive tracing, debugging, and evaluation of multi-agent interactions, identifying bottlenecks and improving collaboration patterns (a tracing configuration sketch follows this list).
Design a robust output mechanism for the CrewAI team to synthesize their findings into a structured R&D report, potentially using Pydantic for schema validation (a schema sketch follows this list).
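One way to give agents the shared Milvus memory mentioned above is to wrap pymilvus in custom CrewAI tools. This is a sketch under assumptions: the @tool decorator import path has moved between CrewAI versions, the collection name and 384-dimension size are illustrative, and embed() is a placeholder you would replace with a real embedding model.

```python
from crewai.tools import tool
from pymilvus import MilvusClient

# Milvus Lite stores the collection in a local file; swap the URI for a real server.
client = MilvusClient("rd_shared_memory.db")
COLLECTION = "research_findings"
DIM = 384  # must match the embedding model you actually use

if not client.has_collection(COLLECTION):
    client.create_collection(collection_name=COLLECTION, dimension=DIM, auto_id=True)

def embed(text: str) -> list[float]:
    """Placeholder: replace with a real embedding model (e.g. a sentence-transformers encoder)."""
    raise NotImplementedError

@tool("Save research finding")
def save_finding(finding: str) -> str:
    """Store a research finding in the crew's shared Milvus memory."""
    client.insert(
        collection_name=COLLECTION,
        data=[{"vector": embed(finding), "text": finding}],
    )
    return "Finding stored."

@tool("Search shared research memory")
def search_findings(query: str) -> str:
    """Retrieve the findings most relevant to a query from shared memory."""
    hits = client.search(
        collection_name=COLLECTION,
        data=[embed(query)],
        limit=3,
        output_fields=["text"],
    )
    return "\n".join(hit["entity"]["text"] for hit in hits[0])
```

The resulting tools can then be handed to any agent via tools=[save_finding, search_findings] so that findings written by one agent are retrievable by the others.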
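For LangSmith tracing, the usual switch is a set of environment variables set before the crew starts. Whether CrewAI runs actually appear in a LangSmith project depends on how the underlying LLM calls are instrumented (for example, via LangChain-wrapped models or a LangSmith wrapper); the project name here is an illustrative assumption.

```python
import os

# Standard LangSmith environment variables; set these before kicking off the crew.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"
os.environ["LANGCHAIN_PROJECT"] = "rd-crew"  # illustrative project name
```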
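For the structured report output, a CrewAI task can validate its result against a Pydantic model via output_pydantic. The field names below simply mirror the report sections this challenge asks for (purpose, key features, data needs, architecture) and are otherwise assumptions; the task reuses the Project Manager agent from the earlier sketch.

```python
from pydantic import BaseModel, Field
from crewai import Task

class RDReport(BaseModel):
    """Schema the final report task must satisfy."""
    title: str
    purpose: str = Field(description="What the specialized model is for and why it matters")
    key_features: list[str] = Field(description="Capabilities the model must provide")
    data_needs: list[str] = Field(description="Datasets, sensors, and labeling requirements")
    architectural_overview: str = Field(description="Preliminary architecture and integration points")

report_task = Task(
    description="Assemble the final R&D report from the team's findings.",
    expected_output="A complete R&D report with purpose, key features, data needs, and architecture.",
    agent=manager,  # the Project Manager agent defined in the earlier sketch
    output_pydantic=RDReport,
)
```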
Starter files written for you: CHALLENGE.md, .versalist.json, and eval/examples.json. Requires VERSALIST_API_KEY. Works with any MCP-aware editor.
Operating window
Key dates and the organization behind this challenge.
Hosted by DocsAI Research & Mentorship.