MLRC 2026: Reproducibility as an Official Track at NeurIPS
When Joelle Pineau launched the first Machine Learning Reproducibility Challenge at ICLR 2018, it was a small community experiment: could we systematically invite researchers to reproduce published results and share what they found? Eight years and eight editions later, we are delighted to announce that MLRC 2026 will be an official track at NeurIPS 2026 — the first time in MLRC’s history that reproducibility science has a dedicated home inside a major ML conference.
This is more than a logistical milestone. It reflects how the community’s relationship with reproducibility has matured. Reproducibility is no longer just a checkbox. It is, increasingly, a scientific object worthy of its own rigorous study — and NeurIPS recognizing it as such is a meaningful signal to the field.
From challenge to venue
The early iterations of MLRC (v1, v2, v3) were structured as challenges: pick a paper, try to reproduce it, report what happened. They were enormously valuable as an educational exercise, especially for early-career researchers. Courses like FACT AI at the University of Amsterdam built entire curricula around the challenge, and the quality of the work that came out of those courses showed just how seriously students took the opportunity.
But as we reflected on what we learned across those iterations, it became clear that MLRC could and should be more. Reproducibility is not binary — a paper is never simply “reproducible” or “not.” The most insightful submissions were always the ones that engaged deeply with the specific claims of a paper, pushed those claims into new settings (generalization), tested their limits, discovered novel insights on top of the paper’s claims, and reported back with nuance. We wanted to create a venue that actively sought out that kind of contribution. Over the years, we improved the program to incentivize these submissions.
One persistent challenge has been incentives. Reproducibility studies are not easily rewarded by the ML community's standard currency of novelty: they do not propose new architectures, beat state-of-the-art numbers, or introduce new datasets. Getting researchers to invest serious effort in this kind of work required building a publication and recognition path that made the investment worthwhile. We have built that path up systematically over the years. Early iterations published through ReScience, a respected open journal for reproducibility across computational science; in 2023 we transitioned to TMLR, bringing MLRC papers into a high-prestige, well-indexed ML venue with a rigorous open review process. While the initial MLRCs operated primarily as satellite workshops, in 2022 and 2023 we partnered with NeurIPS through the Journal to Conference track, where reproducibility papers were presented in poster sessions alongside conference papers. In 2025, we ran MLRC's first standalone in-person conference at Princeton University: a full-day event with invited keynotes, orals, posters, and networking sessions, giving reproducibility work a dedicated stage of its own. Each step raised the incentives, and the NeurIPS track is the next step in that progression.
What this means in practice
Reproducibility papers accepted to MLRC 2026 will be presented in person at NeurIPS 2026 in Sydney, Australia (December 6–13, 2026), alongside papers from the Main Track and the Evaluations & Datasets Track. The submission and review process remains anchored in TMLR: reproducibility papers must first be accepted at TMLR within the eligibility window, and then undergo a light compatibility review by the MLRC committee to confirm suitability for the track.
This TMLR-first model is deliberate. TMLR’s open, continuous reviewing cycle allows authors to refine their work and get expert feedback before it is considered for presentation. It also means that accepted papers carry the full weight of TMLR’s review standards, independent of MLRC. The MLRC committee’s role is then to identify papers among accepted TMLR submissions that represent the best of reproducibility science and would benefit from the visibility of a NeurIPS venue.
What we are looking for
MLRC has always welcomed a broad range of reproducibility work, and that continues this year. We are looking for papers that take reproducibility seriously as a scientific question — not just as a means to an end, but as a contribution in its own right.
This includes:
- Reproductions and replications that rigorously test specific claims from published papers, whether they confirm, partially replicate, or fail to reproduce prior results
- Generalizability studies that extend original findings to new settings, datasets, or model architectures — adding insights the original paper could not offer
- Meta-reproducibility studies examining reproducibility patterns across a body of related work
- Methods and tools that make reproducibility research more accessible or rigorous
- AI-assisted reproducibility, including studies that use or critically evaluate automated approaches to replicating research papers
- Reproducibility of AI systems and agents as subjects of study in their own right
We want to be clear: negative results and partial failures to reproduce are as valuable as confirmations. Science advances by understanding where claims hold and where they do not. A careful, well-documented failure to reproduce a result — with a clear account of what was tried and what was found — is a genuine contribution to the literature.
Note: Work focused more broadly on evaluation methodology may also be a good fit for the NeurIPS 2026 Evaluations & Datasets track. We encourage authors to consider both venues when deciding where to submit.
How to submit
To be eligible for MLRC 2026, your reproducibility paper must be accepted to TMLR, either as is or with minor revisions, and the acceptance must fall within the eligibility window of June 20, 2025 23:59 AOE to September 30, 2026 23:59 AOE. Please check our CFP for more details.
We accept submissions through three paths:
- Expression of interest (EOI) before acceptance — if your paper is under review at TMLR, submit an EOI by June 4, 2026 AOE to let us know you intend to submit. You will update the form once your paper is accepted.
- Self-nomination after acceptance — if your paper has already been accepted to TMLR within the window, submit directly.
- Area Chair nomination — TMLR Area Chairs may nominate accepted papers using the same form.
The hard deadline to have your TMLR acceptance in our system is September 30, 2026 AOE.
Important Dates
- Earliest TMLR acceptance date for a reproducibility paper to be considered for this year’s MLRC: June 20, 2025 23:59 AOE
- Soft deadline (EOI / intent to submit): June 4, 2026 23:59 AOE
- Deadline to have TMLR decisions in our system: September 30, 2026 23:59 AOE
- Author notifications: October 7, 2026
- NeurIPS 2026: December 6–13, 2026
Closing thoughts
Reproducibility has always been foundational to science. What MLRC has tried to do, across eight editions, is make reproducibility a first-class research activity in machine learning — one that is worth investing in, publishing, and being recognized for. Having MLRC as an official NeurIPS track is an affirmation that the community values this work, and we hope it encourages more researchers to take reproducibility seriously as a scientific contribution.
We look forward to seeing the community’s work at NeurIPS in Sydney. Please visit our Call for Papers for full details, and do not hesitate to reach out at reproducibility-chairs@neurips.cc with any questions.
The MLRC 2026 Organizing Team: Koustuv Sinha (Meta), Jessica Forde (Brown University), Ana Lucic (University of Amsterdam), Fernando Pascoal Dos Santos (University of Amsterdam), Candace Ross (Meta), Adina Williams (Meta), Naila Murray (Meta), and Joelle Pineau (McGill University / Mila / Cohere).