There are two complementary tracks for submitting to this workshop: the Retrospectives Track and the Meta-Analysis Track. We provide information about each track below, including submission instructions and reviewing criteria.

Retrospectives Track

The goal of the Retrospectives Track is to encourage researchers to self-reflect on their own previous work by writing retrospectives. A retrospective is essentially a blog post in which researchers critically analyze their own past papers.

If you have a paper where, looking back, you think “huh, there’s a lot more I could say about that now” — perhaps you realized that your methodology was slightly flawed, or there are results that took many tricks to get working which you never fully specified, or there’s new work that changes your intuition, or something else — then consider writing a retrospective! We think it’s important for this knowledge to be out in the open; when a paper doesn’t accurately reflect the authors’ opinion of their own work, other researchers may misunderstand the significance of the idea, or waste time building on shaky results.

If you’re unsure what a retrospective might look like, see this explanation of what one could include, or browse some previous examples here.

How to submit

NeurIPS workshop submissions to the Retrospectives Track will be handled through the ML Retrospectives site. The site gives instructions on how to upload a retrospective (by creating a submission issue in our GitHub repository), and there is a label you can add to the issue if you want your retrospective to be considered for inclusion in the NeurIPS Retrospectives workshop. All retrospectives are written in Markdown — you can view a template here.

Reviewing criteria

While we will host any submitted retrospective on our ML Retrospectives site (so long as it is written by an author of the original paper and follows our code of conduct), for this NeurIPS workshop we will select higher-quality retrospectives that provide interesting insights, admissions, or new perspectives on old work. While there is no page limit for retrospectives, we encourage authors to be concise.

Submissions will be evaluated on the following criteria:

  • Clarity of writing
  • Vulnerability and honesty in discussion of original paper
  • Quality of discussion of limitations, significance of new insights, and/or presentation of new results or figures
  • Impact of original paper

We expect all submissions to be written clearly and honestly, and to contain a discussion of limitations or new perspectives on the original work. While the impact of the original paper will be considered, we also encourage retrospectives for papers with low impact or few citations.

Meta-Analysis Track

The goal of the Meta-Analysis Track is to reflect on the state of the machine learning field as a whole, including disseminating newly emerging consensus or conflict, sharing practical advice for training models or tuning hyperparameters, and offering other insights. With this track we also encourage participation from more junior researchers who do not yet have published papers of their own.

A meta-analysis paper is not the same as a review paper. A review paper aims to summarize and synthesize a wide range of papers in a specific subfield, with the goal of being very thorough, while also providing some insight into how the papers relate to each other. The goal of a meta-analysis paper, by contrast, is not to summarize the content of papers, but to discuss and analyze an interesting aspect of a set of papers (e.g. evaluation methodology, conflicting claims, etc.), or to give an opinion about emerging trends. A meta-analysis paper doesn’t have to be thorough (it could discuss only a few papers) or limited to a narrow subfield (it could analyze broader trends across the ML community), and it can also include additional results, figures, or tables.

There are many kinds of meta-analysis papers. Here are a few examples:

  • Highlighting poor scientific practices in a given area (e.g. related to evaluation, scholarship, etc.). An example of a recent paper in this category is “Troubling trends in machine learning scholarship”, Lipton & Steinhardt (2018).
  • Summarizing best practices in a given subfield (e.g. hyperparameter choices, evaluation procedures, etc.)
  • Pointing out conflicting claims in related papers, along with a discussion of these claims.
  • Doing an empirical analysis of a set of related methods that have not been compared before on the same benchmark. An example of a paper in this category is “LSTM: A search space odyssey”, Greff et al. (2015).
  • Describing emerging trends in a specific area, along with a critique or analysis of those trends.
  • Discussing which research areas/questions are most important or promising for achieving a certain goal (e.g. ‘understanding’ natural language, overcoming bias in ML algorithms, creating ‘artificial general intelligence’, etc.). An example of a paper in this category is “Building machines that learn and think like people”, Lake et al. (2016).
  • Giving a historical perspective on how a subfield has changed over a longer timescale.
  • Discussing how the scientific consensus has changed on a specific research question over time.

How to submit

Submissions to this track should be made through our OpenReview workshop portal.

Reviewing criteria

Meta-analysis papers can be up to 6 pages in length (not including references and appendices), and should be submitted in PDF form using the NeurIPS 2019 style guidelines; a minimal LaTeX skeleton is sketched after the criteria list below. Submissions will be evaluated on the following criteria:

  • Clarity of writing
  • Significance of the new insights provided by the meta-analysis (e.g. pointing out inconsistencies between papers, replicating results, etc.)
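
For concreteness, here is a minimal LaTeX skeleton for a submission following the NeurIPS 2019 style guidelines. It assumes the standard neurips_2019.sty from the official NeurIPS style kit; the title, author, and abstract are placeholders, and you should check the kit's instructions for the exact options your submission requires:

    \documentclass{article}

    % NeurIPS 2019 style file (neurips_2019.sty) from the official style kit.
    % With no options it produces the submission format; the [preprint] and
    % [final] options produce the non-anonymous preprint and camera-ready layouts.
    \usepackage{neurips_2019}

    \title{A Placeholder Meta-Analysis Title}
    \author{Placeholder Author}

    \begin{document}
    \maketitle

    \begin{abstract}
      One-paragraph summary of the meta-analysis.
    \end{abstract}

    % Main text: up to 6 pages, not counting references and appendices.

    \end{document}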

We expect all submissions to follow basic guidelines of courtesy and respect, and to abide by our code of conduct.