Machine Learning
Advanced
Always open

Build & Evaluate GDCN-Final Fusion Agent on Criteo

The Gated Deep Cross Network (GDCN) improves Click-Through Rate (CTR) prediction in recommender systems while remaining interpretable. In this challenge you implement the state-of-the-art GDCN-Final Fusion Agent architecture from scratch: a dual-gated GDCN cross stream, a feature-selected MLP stream, and a bilinear fusion head. You will also build a robust data pipeline for the Criteo dataset, including log-binning of the numerical features, train the model, and establish a rigorous AUC evaluation harness. The task simulates a real-world scenario in which an ML engineer must reproduce a research paper's findings, implementing every nuanced component correctly and evaluating on a large-scale industrial dataset. The focus is on correctness, efficiency, and competitive AUC scores delivered through a reproducible training and evaluation pipeline.
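The log-binning step above can be sketched concretely. A common convention for Criteo count features in the CTR literature (an assumption here; confirm against the challenge's evaluation guide) maps each value v to floor(ln(v)^2) when v > 2 and keeps smaller values as-is, collapsing a long-tailed integer range into a small categorical vocabulary:

```python
import math

def log_bin(v):
    """Discretize a Criteo count feature into a small integer bucket.

    Follows the common convention v -> floor(ln(v)^2) for v > 2;
    values <= 2 pass through, and missing values (None) map to 0.
    This exact rule is an assumption, not the challenge's spec.
    """
    if v is None or v < 0:  # Criteo files use empty fields for missing values
        return 0
    if v <= 2:
        return v
    return int(math.floor(math.log(v) ** 2))
```

After binning, each numerical feature can be embedded with a lookup table just like the categorical fields, which is what makes a single embedding layer for all 39 Criteo features practical.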

Challenge brief

What you are building

The core problem, expected build, and operating context for this challenge.

Reproduce the GDCN-Final Fusion Agent from its architectural description: a dual-gated GDCN cross stream, a feature-selected MLP stream, and a bilinear fusion head, trained on the Criteo dataset with log-binned numerical features and judged by a rigorous AUC evaluation harness. The overview above gives the full brief; the emphasis is on translating the paper's description into a correct, efficient, and reproducible implementation.
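The heart of the GDCN stream is the gated cross layer. As reproduced here (a sketch of the published formulation, not the challenge's reference code), each layer computes c_{l+1} = c_0 ⊙ (W c_l + b) ⊙ σ(W_g c_l) + c_l, where the sigmoid gate learns to damp uninformative feature interactions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_cross_layer(c0, cl, W, b, Wg):
    """One gated cross layer of GDCN (forward pass only, NumPy sketch).

    c0: (d,) embedding of the input features (layer 0).
    cl: (d,) output of the previous cross layer.
    W, Wg: (d, d) cross and gate weights; b: (d,) bias.
    Returns c0 * (W @ cl + b) * sigmoid(Wg @ cl) + cl, i.e. an
    explicitly bounded-degree feature interaction with a learned gate
    and a residual connection.
    """
    return c0 * (W @ cl + b) * sigmoid(Wg @ cl) + cl
```

In a real implementation these would be trainable parameters in your deep-learning framework; the NumPy version is only meant to pin down the tensor shapes and the order of operations.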

Datasets

Shared data for this challenge

Review public datasets and any private uploads tied to your build.

Learning goals

What you should walk away with

Learning objectives will be added soon

Use the overview and evaluation guide as the source of truth for expected outcomes.
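Before wiring the harness into the full pipeline, it helps to sanity-check AUC against hand-computable cases. One dependency-free formulation (a generic sketch, not the challenge's official scorer) is the rank-statistic (Mann-Whitney U) form, which avoids sweeping thresholds:

```python
def auc(labels, scores):
    """ROC AUC via the rank-statistic (Mann-Whitney U) formulation.

    labels: iterable of 0/1 ground truth; scores: parallel model scores.
    Tied scores share an average rank, so tied pos/neg pairs count 0.5.
    """
    labels, scores = list(labels), list(scores)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    if n_pos == 0 or n_neg == 0:
        raise ValueError("AUC needs at least one positive and one negative")
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied scores.
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    rank_sum_pos = sum(r for r, y in zip(ranks, labels) if y == 1)
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

On the full Criteo test split you would stream predictions in batches rather than materialize two Python lists, but the arithmetic is identical, which makes this a useful cross-check against whatever library scorer the harness uses.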

Start from your terminal
$ npx -y @versalist/cli start build-evaluate-gdcn-final-fusion-agent-on-criteo

[ok] Wrote CHALLENGE.md

[ok] Wrote .versalist.json

[ok] Wrote eval/examples.json

Requires VERSALIST_API_KEY. Works with any MCP-aware editor.

Challenge at a glance

Host: Vera (AI Research & Mentorship)
Starts: Available now
Run mode: Evergreen challenge


