AI Development
Advanced
Always open

Accelerated Code Dev & Review Agent

Inspired by Claude's growing footprint in GitHub commits, this challenge focuses on building an advanced agentic development environment. You will use Mastra AI to orchestrate a team of agents that automate parts of the software development lifecycle, from generating code snippets based on user stories to automated testing and code review. The system should integrate with a simulated codebase, providing intelligent suggestions and even committing code. Emphasis is placed on code quality, security, and developer productivity.

Challenge brief

What you are building

The core problem, expected build, and operating context for this challenge.


Datasets

Shared data for this challenge

Review public datasets and any private uploads tied to your build.

Evaluation rubric

How submissions are scored

These dimensions define what the evaluator checks, how much each dimension matters, and which criteria separate a passable run from a strong one.

Max score: 5
Dimensions: 5 scoring checks
Binary: 5 pass-or-fail dimensions
Ordinal: 0 scaled dimensions
Dimension 1: CodeSyntacticallyCorrect

Generated code is syntactically valid Python.

Binary check · Weight: 1

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded.
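You can reproduce this gate locally before submitting. A minimal sketch using only the standard library (the function name is illustrative, not part of the evaluator):

```python
import ast

def is_syntactically_valid(source: str) -> bool:
    """Return True when `source` parses as valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

# A well-formed function passes; a truncated one does not.
print(is_syntactically_valid("def add(a, b):\n    return a + b"))  # True
print(is_syntactically_valid("def add(a, b:"))                     # False
```

Because `ast.parse` compiles without executing, this is safe to run on untrusted generated code.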

Dimension 2: TestsPass

Generated unit tests pass against the generated function.

Binary check · Weight: 1


Dimension 3: PEP8Compliance

Refactored code adheres to PEP8 guidelines.

Binary check · Weight: 1

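A real compliance check would run a linter such as pycodestyle; as a stdlib-only approximation, a deliberately tiny subset of PEP 8 (line length, trailing whitespace, tabs) can be checked directly. The error codes mirror pycodestyle's, but the rule set here is far from complete:

```python
def simple_style_check(source: str, max_len: int = 79) -> list:
    """A tiny subset of PEP 8: line length, trailing whitespace, and tab characters."""
    issues = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            issues.append(f"E501 line {lineno}: longer than {max_len} characters")
        if line != line.rstrip():
            issues.append(f"W291 line {lineno}: trailing whitespace")
        if "\t" in line:
            issues.append(f"W191 line {lineno}: tab character present")
    return issues
```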

Dimension 4: CodeQualityScore

Automated score based on linting, complexity, and docstrings (target: 85 on a 0-100 scale).

Binary check · Weight: 1

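The docstring component of such a score can be measured with the `ast` module alone; a sketch of one plausible sub-metric (the evaluator's exact weighting is not published here):

```python
import ast

def docstring_coverage(source: str) -> float:
    """Percentage of functions/classes in `source` that carry a docstring."""
    tree = ast.parse(source)
    nodes = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))]
    if not nodes:
        return 100.0  # nothing to document
    documented = sum(1 for n in nodes if ast.get_docstring(n) is not None)
    return 100.0 * documented / len(nodes)
```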

Dimension 5: FeatureCompleteness

Percentage of described features correctly implemented (target: 95 on a 0-100 scale).

Binary check · Weight: 1

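A crude proxy for feature completeness, assuming each described feature maps to a named top-level function (a simplification; the real evaluator likely checks behavior, not names):

```python
import ast

def feature_completeness(source: str, required_functions: list) -> float:
    """Percent of required top-level function names actually defined in `source`."""
    tree = ast.parse(source)
    defined = {node.name for node in tree.body
               if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))}
    if not required_functions:
        return 100.0
    hits = sum(1 for name in required_functions if name in defined)
    return 100.0 * hits / len(required_functions)
```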

Learning goals

What you should walk away with

Master Mastra AI for defining agent roles, tools, and workflows, including its built-in memory and RAG capabilities for contextual code generation.

Integrate Claude Sonnet 4 for high-quality code generation and complex logical reasoning tasks, particularly for design patterns and architectural decisions.
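SDK specifics vary by provider, but the prompt-assembly step can be built and tested independently of any client library. A sketch (the model id and prompt wording are placeholders, not values the challenge mandates):

```python
def build_codegen_request(user_story: str) -> dict:
    """Chat-style request payload for a code-generation call (schema simplified)."""
    return {
        "model": "claude-sonnet-4",  # placeholder id; check your provider's model list
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Implement the following user story as a single, "
                    "PEP 8-compliant Python function with a docstring:\n"
                    + user_story
                ),
            }
        ],
    }
```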

Deploy Llama 3 8B Instruct via Hugging Face Inference Endpoints for fast, targeted code completion, syntax checking, and boilerplate generation.

Build custom tools within Mastra AI to interact with a mock Git repository and a simulated IDE (emulating Cursor's features) for reading, writing, and modifying code files.
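The backend such a tool wraps can be as simple as an in-memory file store with commit snapshots. A hedged sketch (class and method names are illustrative, not a Mastra or Git API):

```python
class MockRepo:
    """In-memory stand-in for a Git working tree, usable as a custom tool backend."""

    def __init__(self):
        self.files = {}    # path -> current content
        self.commits = []  # list of {"message", "files"} snapshots

    def read_file(self, path: str) -> str:
        return self.files[path]

    def write_file(self, path: str, content: str) -> None:
        self.files[path] = content

    def commit(self, message: str) -> int:
        """Snapshot the current files; returns the commit index."""
        self.commits.append({"message": message, "files": dict(self.files)})
        return len(self.commits) - 1
```

Copying the dict at commit time keeps each snapshot immutable even as agents keep editing the working tree.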

Utilize Cohere's embedding models for semantic search over the codebase, enabling agents to quickly find relevant code examples, functions, or documentation for context.
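Whatever model produces the embeddings, retrieval reduces to nearest-neighbor search by cosine similarity. A stdlib-only sketch with toy 2-d vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a, b) -> float:
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, corpus, k=3):
    """corpus: list of (snippet_id, embedding). Returns the k closest snippet ids."""
    ranked = sorted(corpus,
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [snippet_id for snippet_id, _ in ranked[:k]]
```

At codebase scale you would swap the linear scan for a vector index, but the ranking logic is the same.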

Design an agent team where individual agents (e.g., 'Feature Developer Agent', 'Test Engineer Agent', 'Code Review Agent') collaborate using Mastra AI's messaging primitives.
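The collaboration pattern boils down to per-agent mailboxes. A toy message bus standing in for a framework's messaging primitives (names here are illustrative, not Mastra's API):

```python
from collections import defaultdict, deque

class MessageBus:
    """Minimal mailbox-per-agent messaging for an agent team."""

    def __init__(self):
        self._queues = defaultdict(deque)

    def send(self, recipient: str, message: dict) -> None:
        self._queues[recipient].append(message)

    def receive(self, recipient: str):
        """Pop the oldest message for `recipient`, or None if the mailbox is empty."""
        queue = self._queues[recipient]
        return queue.popleft() if queue else None
```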

Implement automated code quality checks and vulnerability scanning using a simulated or simplified code analysis tool.
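A simplified scanner can walk the AST and flag calls on a small deny-list. This is a toy, not a real vulnerability scan, but it shows the shape of the check:

```python
import ast

RISKY_CALLS = {"eval", "exec", "system", "popen"}

def scan_for_risky_calls(source: str) -> list:
    """Flag calls whose name matches a small deny-list."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handles both bare names (eval) and attribute calls (os.system).
            name = func.id if isinstance(func, ast.Name) else (
                func.attr if isinstance(func, ast.Attribute) else None)
            if name in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {name}()")
    return findings
```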

Start from your terminal
$ npx -y @versalist/cli start accelerated-code-dev-review-agent

[ok] Wrote CHALLENGE.md
[ok] Wrote .versalist.json
[ok] Wrote eval/examples.json

Requires VERSALIST_API_KEY. Works with any MCP-aware editor.

Challenge at a glance

Host: Vera (AI Research & Mentorship)
Starts: Available now
Run mode: Evergreen challenge

Timeline and host

Operating window

Key dates and the organization behind this challenge.

Start date: Available now
Run mode: Evergreen challenge

Tool Space Recipe (Draft)

Evaluation

Rubric: 5 dimensions
· CodeSyntacticallyCorrect (weight 1)
· TestsPass (weight 1)
· PEP8Compliance (weight 1)
· CodeQualityScore (weight 1)
· FeatureCompleteness (weight 1)
Gold items: 2 (2 public)
