Retrospectives @ICML 2020
Call for Papers
The goal of the Retrospectives Workshop is to encourage researchers to self-reflect on their previous work and on trends in the field by publishing retrospectives. A retrospective is a blog post in which researchers critically analyze one of their past papers and its context in the field as a whole.
If you have a paper where, looking back, you think “huh, there’s a lot more I could say about that now” — perhaps you realized that your methodology was slightly flawed, there are results that only worked thanks to tricks you didn’t fully specify, new work has changed your intuition, you have opinions on how your work was interpreted, or something else — then consider writing a retrospective!
This is a venue for authors to reflect on their previous publications, to talk about how their thoughts have changed following publication, to identify shortcomings in their analysis or results, and to discuss resulting extensions. We think it’s important for this knowledge to be out in the open; when a paper doesn’t accurately reflect the authors’ current opinion about their work, other researchers may misunderstand the significance of the idea, or waste time building off of shaky results.
If you’re unsure about what a retrospective might look like, see this explanation of what they could include, as well as publications from our NeurIPS 2019 workshop, such as:
- An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution
- Learning the structure of deep sparse graphical models
- A Retrospective for “Deep Reinforcement Learning That Matters - A Closer Look at Policy Gradient Algorithms, Theory and Practice”
- Lessons Learned from The Lottery Ticket Hypothesis
While a retrospective focuses on a single paper, we also solicit reflections on multiple papers, on specific subfields, or on the community as a whole, in the style of a meta-analysis. These reflections can provide broader context about the community, such as disseminating newly-emerging consensus or disagreement, sharing practical advice for training models, or offering other insights. Such a submission could discuss and analyze an interesting aspect of a set of papers (e.g. evaluation methodology, conflicting claims, etc.) or give an opinion about emerging trends. We include some examples below.
- Highlighting poor scientific practices in a given area (e.g. related to evaluation, scholarship, etc.), such as “Troubling trends in machine learning scholarship”, Lipton & Steinhardt (2018).
- Doing an empirical analysis of a set of related methods that have not been compared before on the same benchmark, such as “LSTM: A search space odyssey”, Greff et al. (2015).
- Discussing what research areas / questions are the most important or promising for achieving a certain goal (e.g. for ‘understanding’ natural language, overcoming bias in ML algorithms, creating ‘artificial general intelligence’, etc.), such as “Learning and Evaluating General Linguistic Intelligence”, Yogatama et al. (2019).
- Describing current trends in a community and their impact on the participants, such as “Green AI”, Schwartz et al. (2019).
How to submit
Submissions to the Retrospectives Workshop will be handled through our OpenReview site. Please upload a PDF of your retrospective to the website.
Main deadline: May 17, 23:59 Anywhere on Earth (AoE). Accept/reject notifications will be sent out June 1st.
Late-breaking deadline: June 21, 23:59 AoE. Accept/reject notifications will be sent out July 1st.
Camera-ready versions will be submitted as Markdown files through our GitHub repository for publication online. The camera-ready submission template is available on GitHub.
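For reference, here is a minimal sketch of what a camera-ready Markdown file might look like. This is an illustrative assumption, not the official format: the template in the GitHub repository is authoritative, and the front-matter fields and section headings below are hypothetical.

```markdown
---
# Hypothetical front matter -- consult the official template for the actual fields.
title: "A Retrospective on <Original Paper Title>"
authors: Jane Doe, John Smith
original_paper: https://arxiv.org/abs/XXXX.XXXXX
---

## What the original paper claimed

A brief summary of the paper's main claims and results.

## What we think now

How our understanding has changed since publication, and why: flawed
methodology, unreported tricks, new work that shifts intuitions, or
opinions on how the work was interpreted.

## Lessons learned

Practical advice and caveats for researchers building on this work.
```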
Reviewing criteria
We will select submissions that provide interesting insights, admissions, or new perspectives on old work. Moreover, we expect all submissions to follow basic guidelines of courtesy and respect, and to abide by our code of conduct. While there is no page limit, we encourage authors to be concise.
Submissions will be evaluated on the following criteria:
- Adherence to our code of conduct
- Clarity of writing
- Vulnerability and honesty in discussion, particularly if the submission is by the original author
- Quality of discussion of limitations
- Significance of new insights
- Presentation of new results or figures