AI Development · Advanced · Always open

Mathematical Proof Assistant


Challenge brief

What you are building

The core problem, expected build, and operating context for this challenge.

This challenge focuses on building an AI system that can understand complex mathematical questions, retrieve relevant theorems and definitions from a specialized knowledge base, and construct logical proofs or counter-examples. Participants will leverage LlamaIndex's advanced RAG capabilities for contextual grounding and Gemini 2.5 Pro's strong reasoning for generating robust mathematical arguments. The emphasis is on accurate grounding of facts, verifiable proof construction, and systematic evaluation of the system's mathematical competence on novel problems.

The project requires designing and populating a structured mathematical knowledge base using LlamaIndex data connectors and integrating a vector store such as ChromaDB for efficient retrieval. Developers will orchestrate a multi-stage LlamaIndex agent workflow that can plan, execute, and verify proof steps. The final system should demonstrate robust reasoning by generating mathematically sound proofs and identifying valid counter-examples when applicable, similar to the objectives of the 'First Proof' experiment.
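The plan-execute-verify workflow described above can be sketched framework-agnostically. In the actual build, each stage would be a LlamaIndex agent step backed by Gemini 2.5 Pro and the verifier would consult the retrieved theorems; every name below (ProofState, plan, execute, verify) is illustrative, not part of any library.

```python
# Framework-agnostic sketch of the plan -> execute -> verify loop.
# In a real build, `plan` and `execute` would call an LLM (e.g. Gemini 2.5
# Pro via a LlamaIndex agent) and `verify` would check each step against
# retrieved axioms/theorems. All names here are illustrative stand-ins.
from dataclasses import dataclass, field

@dataclass
class ProofState:
    goal: str
    steps: list = field(default_factory=list)
    verified: bool = False

def plan(state: ProofState) -> list:
    # Stand-in planner: a real planner would ask the LLM for a step outline.
    return [f"step {i + 1} toward: {state.goal}" for i in range(3)]

def execute(step: str) -> str:
    # Stand-in executor: a real executor would expand the step into a
    # full mathematical argument grounded in retrieved context.
    return f"argument for {step}"

def verify(argument: str) -> bool:
    # Stand-in verifier: a real verifier would check the argument against
    # the knowledge base (or a formal proof checker).
    return bool(argument)

def run_proof_loop(goal: str) -> ProofState:
    state = ProofState(goal=goal)
    for step in plan(state):
        argument = execute(step)
        if not verify(argument):
            # Bail out; the caller may replan or report a counter-example.
            return state
        state.steps.append(argument)
    state.verified = True
    return state
```

The key design point is that verification gates every step, so a failed check stops the proof early rather than letting an unsound argument propagate.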

Datasets

Shared data for this challenge

Review public datasets and any private uploads tied to your build.

Evaluation rubric

How submissions are scored

These dimensions define what the evaluator checks, how much each dimension matters, and which criteria separate a passable run from a strong one.

Max score: 6
Dimensions: 6 scoring checks
Binary: 6 pass-or-fail dimensions
Ordinal: 0 scaled dimensions
Dimension 1: proof_structure_validity

Checks if the generated proof follows a logical, step-by-step structure and uses standard mathematical notation.

Binary · Weight: 1

This dimension contributes its full weight only when the submission satisfies the requirement. Partial credit is not awarded. The same rule applies to every dimension below.

Dimension 2: correctness_of_conclusion

Verifies if the final conclusion or counter-example is mathematically consistent with the initial statement and intermediate steps.

Binary · Weight: 1

Dimension 3: retrieval_accuracy

Assesses if relevant theorems and axioms were accurately retrieved from the knowledge base and cited (if applicable).

Binary · Weight: 1

Dimension 4: proof_length

Number of logical steps in the generated proof (target: 8; range: 3–15).

Binary · Weight: 1

Dimension 5: relevance_score

Semantic similarity of retrieved context to the query, indicating effective RAG (target: 0.9; range: 0.7–1.0).

Binary · Weight: 1

Dimension 6: reasoning_confidence

Model's self-assessed confidence level in the generated proof or counter-example (target: 0.85; range: 0.5–1.0).

Binary · Weight: 1
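The six dimensions combine into a simple additive score. A minimal scorer sketch, under one stated assumption: the rubric labels the numeric dimensions (proof_length, relevance_score, reasoning_confidence) as binary but does not say how they binarize, so this sketch assumes a value passes when it falls inside the published range.

```python
# Minimal scorer for the 6-dimension rubric (max score 6, weight 1 each).
# ASSUMPTION: a numeric dimension passes when its value lies inside the
# stated range; the rubric does not define this mapping explicitly.

def in_range(value: float, lo: float, hi: float) -> bool:
    return lo <= value <= hi

def score_submission(checks: dict) -> int:
    passes = [
        checks["proof_structure_validity"],                   # binary
        checks["correctness_of_conclusion"],                  # binary
        checks["retrieval_accuracy"],                         # binary
        in_range(checks["proof_length"], 3, 15),              # target 8
        in_range(checks["relevance_score"], 0.7, 1.0),        # target 0.9
        in_range(checks["reasoning_confidence"], 0.5, 1.0),   # target 0.85
    ]
    # Each dimension has weight 1 and awards no partial credit.
    return sum(int(p) for p in passes)
```

A submission hitting every target scores 6; each failed check drops the total by exactly 1.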

Learning goals

What you should walk away with

Master LlamaIndex's advanced RAG techniques, including recursive retrieval and query rewriting, for navigating complex mathematical knowledge graphs.

Implement custom LlamaIndex data connectors for ingesting academic papers, LaTeX documents, and structured theorem repositories into a ChromaDB vector store.

Orchestrate a multi-stage reasoning agent using LlamaIndex's agentic capabilities to plan mathematical proof steps and execute sub-tasks.

Integrate Gemini 2.5 Pro's advanced mathematical reasoning mode to generate proof steps, hypotheses, and formal arguments.

Design and implement an evaluation harness using Continue.dev to iteratively test the system's proof generation capabilities against a benchmark of unpublished mathematical problems.

Deploy the LlamaIndex query engine and specialized Gemini 2.5 Pro inference endpoint via Baseten for scalable and efficient proof generation services.

Develop a user interface or API wrapper that allows mathematicians to interact with the proof assistant and review generated proofs.
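As a concrete starting point for the ingestion goal, theorem statements can be pulled out of LaTeX sources with a small parser. In the actual build this logic would live inside a custom LlamaIndex reader whose load_data() emits Document objects bound for ChromaDB; the stdlib-only sketch below returns plain dicts instead, and the theorem environment name and metadata keys are assumptions about your corpus.

```python
import re

# Extract \begin{theorem}...\end{theorem} blocks from a LaTeX source.
# In the real pipeline this would sit inside a custom LlamaIndex reader
# returning Document objects; here we return plain dicts for clarity.
# ASSUMPTIONS: the corpus uses a `theorem` environment, and the
# metadata keys ("kind", "source", "index") are illustrative.
THEOREM_RE = re.compile(r"\\begin\{theorem\}(.*?)\\end\{theorem\}", re.DOTALL)

def extract_theorems(latex: str, source: str = "unknown") -> list:
    return [
        {
            "text": match.group(1).strip(),
            "metadata": {"kind": "theorem", "source": source, "index": i},
        }
        for i, match in enumerate(THEOREM_RE.finditer(latex))
    ]
```

Attaching the source file and position as metadata is what later lets the retrieval_accuracy dimension check that cited theorems actually came from the knowledge base.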

Start from your terminal
$ npx -y @versalist/cli start mathematical-proof-assistant

[ok] Wrote CHALLENGE.md

[ok] Wrote .versalist.json

[ok] Wrote eval/examples.json

Requires VERSALIST_API_KEY. Works with any MCP-aware editor.

Docs
Manage API keys
Challenge at a glance
Host and timing
Vera

AI Research & Mentorship

Starts: Available now
Evergreen challenge

Timeline and host

Operating window

Key dates and the organization behind this challenge.

Start date: Available now
Run mode: Evergreen challenge

Tool Space Recipe

Draft
Evaluation
Rubric: 6 dimensions
· proof_structure_validity (weight 1)
· correctness_of_conclusion (weight 1)
· retrieval_accuracy (weight 1)
· proof_length (weight 1)
· relevance_score (weight 1)
· reasoning_confidence (weight 1)
Gold items: 2 (2 public)

Frequently Asked Questions about Mathematical Proof Assistant