ML Reproducibility Challenge 2023

Welcome to the ML Reproducibility Challenge 2023 (MLRC 2023). This is the seventh edition of the event (v1, v2, v3, v4, v5, v6). The primary goal of this event is to encourage the publishing and sharing of scientific results that are reliable and reproducible. In support of this, the objective of this challenge is to investigate reproducibility of papers accepted for publication at top conferences by inviting members of the community at large to select a paper, and verify the empirical results and claims in the paper by reproducing the computational experiments, either via a new implementation or using code/data or other information provided by the authors.

An update on decisions

July 5th, 2024

We initially communicated that all MLRC 2023 decisions would be out by May 31st, 2024. Unfortunately, several submissions are still under review at TMLR, and we are waiting for the final decisions to come in. Overall, MLRC 2023 had 46 valid submissions, of which we have received decisions on 61%. We are in touch with TMLR to expedite the decision process for the remaining submissions; we expect all decisions within the next couple of weeks.

Until then, we are happy to announce the (partial) list of accepted papers. Congratulations to all 🎉! If you are an author of one of the papers listed below and have not yet submitted the form with the camera-ready items, please do so as soon as possible. We will reach out to the accepted authors soon with the next steps.

  • Ana-Maria Vasilcoiu, Batu Helvacioğlu, Thies Kersten, Thijs Stessen; GNNInterpreter: A probabilistic generative model-level explanation for Graph Neural Networks, OpenReview
  • Miklos Hamar, Matey Krastev, Kristiyan Hristov, David Beglou; Explaining Temporal Graph Models through an Explorer-Navigator Framework, OpenReview
  • Clio Feng, Colin Bot, Bart den Boef, Bart Aaldering; Reproducibility Study of “Explaining RL Decisions with Trajectories”, OpenReview
  • Ethan Harvey, Mikhail Petrov, Michael C. Hughes; Transfer Learning with Informative Priors: Simple Baselines Better than Previously Reported, OpenReview
  • Gijs de Jong, Macha Meijer, Derck W.E. Prinzhorn, Harold Ruiter; Reproducibility study of FairAC, OpenReview
  • Nesta Midavaine, Gregory Hok Tjoan Go, Diego Canez, Ioana Simion, Satchit Chatterji; On the Reproducibility of Post-Hoc Concept Bottleneck Models, OpenReview
  • Jiapeng Fan, Paulius Skaigiris, Luke Cadigan, Sebastian Uriel Arias; Reproducibility Study of “Learning Perturbations to Explain Time Series Predictions”, OpenReview
  • Karim Ahmed Abdel Sadek, Matteo Nulli, Joan Velja, Jort Vincenti; Explaining RL Decisions with Trajectories: A Reproducibility Study, OpenReview
  • Markus Semmler, Miguel de Benito Delgado; Classwise-Shapley values for data valuation, OpenReview
  • Daniel Gallo Fernández, Răzvan-Andrei Matișan, Alejandro Monroy Muñoz, Janusz Partyka; Reproducibility Study of “ITI-GEN: Inclusive Text-to-Image Generation”, OpenReview
  • Kacper Bartosik, Eren Kocadag, Vincent Loos, Lucas Ponticelli; Reproducibility study of “Robust Fair Clustering: A Novel Fairness Attack and Defense Framework”, OpenReview
  • Barath Chandran C; CUDA: Curriculum of Data Augmentation for Long-Tailed Recognition, OpenReview
  • Christina Isaicu, Jesse Wonnink, Andreas Berentzen, Helia Ghasemi; Reproducibility Study of “Explaining Temporal Graph Models Through an Explorer-Navigator Framework”, OpenReview
  • Iason Skylitsis, Zheng Feng, Idries Nasim, Camille Niessink; Reproducibility Study of “Robust Fair Clustering: A Novel Fairness Attack and Defense Framework”, OpenReview
  • Fatemeh Nourilenjan Nokabadi, Jean-Francois Lalonde, Christian Gagné; Reproducibility Study on Adversarial Attacks Against Robust Transformer Trackers, OpenReview
  • Luan Fletcher, Robert van der Klis, Martin Sedlacek, Stefan Vasilev, Christos Athanasiadis; Reproducibility study of “LICO: Explainable Models with Language-Image Consistency”, OpenReview

If you have submitted your paper to TMLR and are still awaiting a decision, please consider contacting the Action Editor assigned to your paper, and cc us in your correspondence. We hope the decisions will come in soon, and we will update this list with the additional accepted papers.

[Deprecated] Call For Papers

We invite contributions from academics, practitioners and industry researchers of the ML community to submit novel and insightful reproducibility studies. Please read our blog post regarding our retrospectives of running the challenge and the future roadmap. We are happy to announce the formal partnership with Transactions of Machine Learning Research (TMLR) journal. The challenge goes live on October 23, 2023.

We recommend you choose any paper(s) published in the 2023 calendar year from the top conferences and journals (NeurIPS, ICML, ICLR, ACL, EMNLP, ICCV, CVPR, TMLR, JMLR, TACL) to run your reproducibility study on.

In order for your paper to be submitted and presented at MLRC 2023, it first needs to be accepted and published at TMLR. While TMLR aims to complete the review process for its regular submissions within two months, this timeline is not guaranteed. If you haven’t already, we therefore recommend submitting your original paper to TMLR by February 16th, 2024, a little over three months before the MLRC publication announcement date.

Key Dates

  • Challenge goes live: October 23, 2023
  • Deadline to share your intent to submit a TMLR paper to MLRC: February 16th, 2024, via the following form: https://forms.gle/JJ28rLwBSxMriyE89. The form requires a link to your TMLR submission. Once your paper is accepted (if it isn’t already), update the same form with your camera-ready details. Your accepted TMLR paper will then undergo a light AC review to verify MLRC compatibility.
  • We aim to announce the accepted papers by July 17th, 2024 (revised from the original date of May 31st, 2024), pending decisions on all papers.

Contact Information