Operator-ready prompt for reuse, tuning, and Workspace runs.
This item is set up for developers who want to inspect the original language, fork it into Workspace, and adapt the evidence model without losing the source prompt structure.
Implementation handoffs, eval setup, and prompt tuning where you need the original structure intact.
Inspect first, copy once, then fork into Workspace when you want variants, notes, and model settings attached to the same run.
Swap domain facts, examples, and any hard-coded entities for your own context.
Tighten the evidence or verification requirement if this is headed toward production.
Decide which failure mode you want to evaluate first before you branch the prompt.
This prompt already carries implementation detail, tool context, and a final-output instruction. Keep that structure intact when you tune it, or your comparison runs get noisy fast.
Open this prompt inside Workspace when you want a live iteration loop.
Copy for quick reuse, or run it in Workspace to keep prompt variants, model settings, and prompt-history changes in one place.
Structured source with 1 active line to adapt.
Already linked to a challenge workflow.
Prompt content
Original prompt text with formatting preserved for inspection and clean copy.
Before writing any code, outline the GDCN-Final Fusion Agent's architecture in detail. Specifically, describe how you will implement the `GatedCrossLayer` (including the sigmoid gates for noise filtering) and the `FeatureSelectionGate` for the MLP stream in PyTorch. Detail how the `BilinearInteractionLayer` will fuse the outputs of the Cross stream and MLP stream, explaining the tensor operations involved.
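For orientation, here is a minimal PyTorch sketch of the three components the prompt names. The gating formula, hidden sizes, and the exact bilinear wiring are assumptions inferred from the prompt's wording, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class GatedCrossLayer(nn.Module):
    """One gated cross layer: x_next = x0 * (W_c x + b) * sigmoid(W_g x) + x (assumed form)."""
    def __init__(self, dim: int):
        super().__init__()
        self.cross = nn.Linear(dim, dim)   # feature-crossing projection
        self.gate = nn.Linear(dim, dim)    # sigmoid gate that down-weights noisy crosses

    def forward(self, x0: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return x0 * self.cross(x) * torch.sigmoid(self.gate(x)) + x

class FeatureSelectionGate(nn.Module):
    """Element-wise gate over the MLP stream's input embedding (assumed two-layer gate network)."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0 * torch.sigmoid(self.net(x))  # scale by 2 so the expected gate value stays near 1

class BilinearInteractionLayer(nn.Module):
    """Fuses the two stream outputs with a bilinear form x_cross^T W x_mlp, producing one logit."""
    def __init__(self, dim_cross: int, dim_mlp: int):
        super().__init__()
        self.bilinear = nn.Bilinear(dim_cross, dim_mlp, 1)

    def forward(self, x_cross: torch.Tensor, x_mlp: torch.Tensor) -> torch.Tensor:
        return self.bilinear(x_cross, x_mlp)  # shape (batch, 1), fed to BCEWithLogitsLoss
```

In this sketch, stacking several `GatedCrossLayer` instances (always passing the original `x0` alongside the running state) forms the cross stream, the `FeatureSelectionGate` output feeds a standard MLP, and the bilinear layer combines the two stream outputs into a single CTR logit.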
Adaptation plan
Keep the source stable, then branch your edits in a predictable order so the next prompt run is easier to evaluate.
Preserve the role framing, objective, and reporting structure so comparison runs stay coherent.
Swap in your own domain constraints, anomaly thresholds, and examples before you branch variants.
Check whether the prompt asks for the right evidence, confidence signal, and escalation path.
Copy once for a pristine source snapshot, then move the prompt into Workspace when you want variants, run history, and side-by-side tuning without losing the original.
Prompt diagnostics
Quick signals for how structured this prompt already is and where adaptation work is likely to happen first.
This prompt is mostly narrative and instruction-driven, so you can adapt examples and output constraints first without disturbing the structure.
Build & Evaluate GDCN-Final Fusion Agent on Criteo
Gated Deep Cross Network (GDCN) enhances Click-Through Rate (CTR) prediction in recommender systems while improving interpretability. Implement the state-of-the-art GDCN-Final Fusion Agent architecture from scratch, leveraging its dual-gated GDCN stream, feature-selected MLP stream, and bilinear fusion. The challenge involves developing a robust data pipeline for the Criteo dataset, including log-binning for numerical features, training the model, and establishing a rigorous AUC evaluation harness. Practitioners will demonstrate their ability to translate a complex architectural description into a working deep learning model and carefully assess its performance. This task simulates a real-world scenario where an ML engineer must reproduce a research paper's findings, ensuring all nuanced components are correctly implemented and evaluated on a large-scale industrial dataset. The focus is on correctness, efficiency, and achieving competitive AUC scores while maintaining a reproducible training and evaluation pipeline.
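Two pipeline pieces the description calls out, log-binning of numeric features and an AUC harness, can be sketched briefly. The binning rule below is the common Criteo heuristic from the CTR literature, and `log_bin`, `evaluate_auc`, and the `model`/`loader` objects are illustrative names, not the challenge's official preprocessing or harness.

```python
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

def log_bin(values: np.ndarray) -> np.ndarray:
    """Discretize a numeric Criteo column: v -> floor(log(v)^2) when v > 2, else v (common heuristic)."""
    v = np.nan_to_num(values.astype(np.float64), nan=0.0)
    v = np.clip(v, 0.0, None)                        # Criteo numeric fields are non-negative
    safe = np.maximum(v, 1.0)                        # avoid log(0); this branch is only kept when v > 2
    return np.where(v > 2, np.floor(np.log(safe) ** 2), v).astype(np.int64)

@torch.no_grad()
def evaluate_auc(model: torch.nn.Module, loader, device: str = "cpu") -> float:
    """Run the model over a validation loader and compute ROC AUC on the collected scores."""
    model.eval()
    labels, scores = [], []
    for features, y in loader:                       # assumed (features, label) batches
        logits = model(features.to(device))
        scores.append(torch.sigmoid(logits).squeeze(-1).cpu().numpy())
        labels.append(y.numpy())
    return roc_auc_score(np.concatenate(labels), np.concatenate(scores))
```

Because Criteo's positive rate is low, held-out AUC is the standard comparison metric; keep the split and binning fixed across prompt variants so AUC differences reflect model changes rather than pipeline drift.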
Use the challenge page to recover the original task boundaries before you tune the prompt. That keeps your variants grounded in the same evaluation target instead of drifting into a different problem.