ML Reproducibility Challenge 2023

Welcome to the ML Reproducibility Challenge 2023 (MLRC 2023). This is the seventh edition of the event (v1, v2, v3, v4, v5, v6). The primary goal of this event is to encourage the publishing and sharing of scientific results that are reliable and reproducible. In support of this, the objective of the challenge is to investigate the reproducibility of papers accepted for publication at top conferences by inviting members of the community at large to select a paper and verify its empirical results and claims by reproducing the computational experiments, either via a new implementation or using the code, data, or other information provided by the authors.

Final decisions for MLRC 2023

We are now releasing the final list of decisions for MLRC 2023, which includes the partial list previously published on July 5th, 2024. We gave TMLR additional time to complete the reviews; unfortunately, a few papers are still awaiting a decision due to unresponsive Action Editors at TMLR. As we need to wrap up this edition, we are proceeding with the final list of 22 accepted papers. Congratulations to all!

  • Ana-Maria Vasilcoiu, Batu Helvacioğlu, Thies Kersten, Thijs Stessen; GNNInterpreter: A probabilistic generative model-level explanation for Graph Neural Networks, OpenReview
  • Miklos Hamar, Matey Krastev, Kristiyan Hristov, David Beglou; Explaining Temporal Graph Models through an Explorer-Navigator Framework, OpenReview
  • Clio Feng, Colin Bot, Bart den Boef, Bart Aaldering; Reproducibility Study of “Explaining RL Decisions with Trajectories”, OpenReview
  • Ethan Harvey, Mikhail Petrov, Michael C. Hughes; Transfer Learning with Informative Priors: Simple Baselines Better than Previously Reported, OpenReview
  • Gijs de Jong, Macha Meijer, Derck W.E. Prinzhorn, Harold Ruiter; Reproducibility study of FairAC, OpenReview
  • Nesta Midavaine, Gregory Hok Tjoan Go, Diego Canez, Ioana Simion, Satchit Chatterji; On the Reproducibility of Post-Hoc Concept Bottleneck Models, OpenReview
  • Jiapeng Fan, Paulius Skaigiris, Luke Cadigan, Sebastian Uriel Arias; Reproducibility Study of “Learning Perturbations to Explain Time Series Predictions”, OpenReview
  • Karim Ahmed Abdel Sadek, Matteo Nulli, Joan Velja, Jort Vincenti; ‘Explaining RL Decisions with Trajectories’: A Reproducibility Study, OpenReview
  • Markus Semmler, Miguel de Benito Delgado; Classwise-Shapley values for data valuation, OpenReview
  • Daniel Gallo Fernández, Răzvan-Andrei Matișan, Alejandro Monroy Muñoz, Janusz Partyka; Reproducibility Study of “ITI-GEN: Inclusive Text-to-Image Generation”, OpenReview
  • Kacper Bartosik, Eren Kocadag, Vincent Loos, Lucas Ponticelli; Reproducibility study of “Robust Fair Clustering: A Novel Fairness Attack and Defense Framework”, OpenReview
  • Barath Chandran C; CUDA: Curriculum of Data Augmentation for Long‐Tailed Recognition, OpenReview
  • Christina Isaicu, Jesse Wonnink, Andreas Berentzen, Helia Ghasemi; Reproducibility Study of “Explaining Temporal Graph Models Through an Explorer-Navigator Framework”, OpenReview
  • Iason Skylitsis, Zheng Feng, Idries Nasim, Camille Niessink; Reproducibility Study of “Robust Fair Clustering: A Novel Fairness Attack and Defense Framework”, OpenReview
  • Fatemeh Nourilenjan Nokabadi, Jean-Francois Lalonde, Christian Gagné; Reproducibility Study on Adversarial Attacks Against Robust Transformer Trackers, OpenReview
  • Luan Fletcher, Robert van der Klis, Martin Sedlacek, Stefan Vasilev, Christos Athanasiadis; Reproducibility study of “LICO: Explainable Models with Language-Image Consistency”, OpenReview
  • Wouter Bant, Ádám Divák, Jasper Eppink, Floris Six Dijkstra; On the Reproducibility of: “Learning Perturbations to Explain Time Series Predictions”, OpenReview
  • Berkay Chakar, Amina Izbassar, Mina Janićijević, Jakub Tomaszewski; Reproducibility Study: Equal Improvability: A New Fairness Notion Considering the Long-Term Impact, OpenReview
  • Oliver Bentham, Nathan Stringham, Ana Marasović; Chain-of-Thought Unfaithfulness as Disguised Accuracy, OpenReview
  • Shivank Garg, Manyana Tiwari; Unmasking the Veil: An Investigation into Concept Ablation for Privacy and Copyright Protection in Images, OpenReview
  • Adrian Sauter, Milan Miletić, Ryan Ott, Rohith Saai Pemmasani Prabakaran; “Studying How to Efficiently and Effectively Guide Models with Explanations” - A Reproducibility Study, OpenReview
  • Thijmen Nijdam, Taiki Papandreou-Lazos, Jurgen de Heus, Juell Sprott; Reproducibility Study Of Learning Fair Graph Representations Via Automated Data Augmentations, OpenReview

If you are an author of one of the papers listed above and have not yet submitted the form with the camera-ready items, please do so as soon as possible. We will reach out to the accepted authors soon with the next steps. In the coming weeks we will also announce the best paper awards and share details on the logistics of the NeurIPS poster session.

Update, Sept 13th, 2024: A couple of papers were accepted after our final MLRC 2023 acceptance date. We have now incorporated them into the final list as well.

An update on decisions

July 5th, 2024

We initially communicated that all MLRC 2023 decisions would be out by May 31st, 2024. Unfortunately, several submissions are still under review at TMLR, and we are waiting for the final decisions to trickle in. Overall, MLRC 2023 received 46 valid submissions, of which 61% have received decisions so far. We are in touch with TMLR to expedite decisions on the remaining submissions and expect all of them within the next couple of weeks.

Until then, we are happy to announce the (partial) list of accepted papers. Congratulations to all 🎉! If you are an author of one of the accepted papers and have not yet submitted the form with the camera-ready items, please do so as soon as possible. We will reach out to the accepted authors soon with the next steps.

(partial paper list removed as we release the final list above)

[Deprecated] Call For Papers

We invite contributions from academics, practitioners, and industry researchers of the ML community to submit novel and insightful reproducibility studies. Please read our blog post on our retrospective of running the challenge and the future roadmap. We are happy to announce a formal partnership with the Transactions on Machine Learning Research (TMLR) journal. The challenge goes live on October 23, 2023.

We recommend you choose any paper(s) published in the 2023 calendar year from the top conferences and journals (NeurIPS, ICML, ICLR, ACL, EMNLP, ICCV, CVPR, TMLR, JMLR, TACL) to run your reproducibility study on.

In order for your paper to be submitted and presented at MLRC 2023, it first needs to be accepted and published at TMLR. While TMLR aims to follow a two-month timeline to complete the review process of its regular submissions, this timeline is not guaranteed. If you haven’t already, we therefore recommend submitting your original paper to TMLR by February 16th, 2024, which is a little over three months ahead of the MLRC publication announcement date.

Key Dates

  • Challenge goes live: October 23, 2023
  • Deadline to share your intent to submit a TMLR paper to MLRC: February 16th, 2024, via the following form: https://forms.gle/JJ28rLwBSxMriyE89. The form requires a link to your TMLR submission. Once your paper is accepted (if it isn’t already), update the same form with the camera-ready details. Your accepted TMLR paper will then undergo a light AC review to verify MLRC compatibility.
  • We aim to announce the accepted papers by July 17th, 2024 (originally May 31st, 2024), pending decisions on all papers.

Contact Information