This is a brief ‘quick and dirty’ analysis of review data after all reviews were returned. We aim to produce more comprehensive analyses of the data in the future, but we wanted to share this information during the rebuttal period in case it is useful to authors in planning their next course of action. We’ve made every effort to ensure these data are accurate, but given the time pressures, results of the full analysis may vary slightly from these figures.
– The TPC team
- The number of papers submitted has increased by 8% over last year.
- Average review scores are very similar to those from CHI 2016 (we don’t have the analysis for 2017). There’s no evidence that reviewers or ACs are “more grumpy” this year.
- Rebuttals have been shown to increase average scores (2016 data). We expect the same this year and provide links to useful articles about how to write a rebuttal.
- The new process for reviews has reduced the burden on the CHI community by approximately 2100 reviews.
Number of papers submitted to CHI 2018
Back in September, 3029 abstracts (and associated metadata) were submitted. By the final cut-off date, 86% of these turned into complete submissions resulting in 2592 papers. This represents a rise of 8% (172 papers) on the previous year.
Number of reviews
Of these 2592:
- 8 were withdrawn
- 38 were ‘Quick Rejected’
- 7 were ‘Desk Rejected’
- 2 were rejected for other reasons
Quick Rejected = the papers were read but were missing something critical that would make replication, analysis, or validation of their claims impossible.
Desk Rejected = out of scope for CHI, or with some obvious error such as exceeding the page limit.
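The pipeline figures above can be sanity-checked with a few lines of arithmetic; this is just a check of the numbers already quoted in the post, not new data.

```python
# Quick arithmetic check of the submission-pipeline figures quoted above.
abstracts = 3029                       # abstracts submitted in September
papers = 2592                          # complete submissions by the deadline
withdrawn, quick_rej, desk_rej, other_rej = 8, 38, 7, 2

completion = papers / abstracts
remaining = papers - (withdrawn + quick_rej + desk_rej + other_rej)

print(f"{completion:.0%}")  # 86% of abstracts became full submissions
print(remaining)            # 2537 papers went forward to full review
```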
The 2537 papers remaining received 5105 external reviews. 41 papers received three external reviews. One received four.
Including papers that have already been withdrawn or rejected, reviews were written by 2651 reviewers. The majority (56%) of reviewers completed only one review. One reviewer completed thirteen.
[Table: number of reviews per reviewer vs. number of reviewers and review yield; data not reproduced here]
In addition, 314 ACs (including a few SCs) were assigned a median of 16 papers each (on average, 8 as 1AC and 8 as 2AC). ACs wrote full reviews for 8 papers (as 2AC) and meta-reviews for 8 papers (as 1AC).
The mean of the mean scores given to papers for the 2018 conference was 2.56 (SD=0.74). (Yeah, we know we took means of ordinal data here, but luckily there is no R2 of this blog post!). In total, 1356 papers (53%) received an average score between 2.0 and 2.9 (inclusive). Only 129 papers have a mean score of 4.0 or greater. There is one paper with an average of 5.0. The CHI 2016 conference (the last year we have data and detailed analysis for) had an unweighted average of all first round reviewer scores of 2.63 (SD=0.73) and the median was 2.625.
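For readers unfamiliar with the “mean of mean scores” statistic, a minimal sketch of how it is computed is below. The per-paper scores here are made up purely for illustration; they are NOT the real CHI 2018 data.

```python
import statistics

# Hypothetical per-paper review scores on the 1.0-5.0 scale
# (R1, R2, 1AC, 2AC), invented for illustration only.
papers = {
    "paper_A": [2.5, 3.0, 2.0, 3.5],
    "paper_B": [4.0, 4.5, 3.5, 4.0],
    "paper_C": [1.5, 2.0, 2.5, 1.0],
}

# First average within each paper, then average across papers.
per_paper_means = [statistics.mean(scores) for scores in papers.values()]
overall_mean = statistics.mean(per_paper_means)  # "mean of mean scores"
overall_sd = statistics.stdev(per_paper_means)   # sample SD across papers

print(per_paper_means)                 # [2.75, 4.0, 1.75]
print(round(overall_mean, 2))          # 2.83
print(round(overall_sd, 2))            # 1.13
```

Averaging within each paper first means every paper counts equally, regardless of how many reviews it received.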
The mean of mean expertise ratings (2018) was 3.22 (SD=0.34) on a 4-point scale, suggesting that papers were generally reviewed by people with strong expertise in the relevant areas.
This year ACs were instructed to avoid giving papers a score of 3.0. Instead, they were encouraged to give a score of 2.5 or 3.5 in order to give a clear indication to authors as to whether the AC felt they would be able to argue for the paper, in its current state, before the rebuttal. The intent here was that ACs should not be sitting on the fence and avoiding forming a view on the paper. This seems to have been somewhat successful.
Only 33 1AC scores of 3.0 were given (~1%). 2ACs were much more likely to give scores of 3.0 (195, ~8%) when writing their reviews. There are currently 621 papers (~24%) with a 1AC score ≥ 3.0 and 1971 papers with a 1AC score < 3.0. We expect these scores to move as rebuttals are reviewed and discussed.
ACs seem to have tracked their reviewers closely in most, but not all, instances. On average, the mean of 1AC and 2AC scores is ~0.09 less than the mean of the R1 and R2 scores.
We are now in the rebuttal phase. Authors sometimes wonder whether it’s worth their time to write a rebuttal if their paper has received low scores from reviewers. At this stage, 15.6% of the papers not already rejected or withdrawn have a mean score of 3.5 or greater:
- 396 papers with a mean score ≥ 3.5
- 2141 papers with a mean score < 3.5
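The 15.6% figure follows directly from the two counts above:

```python
# Check of the 15.6% figure, using the two counts quoted in the post.
high = 396    # papers with mean score >= 3.5
low = 2141    # papers with mean score < 3.5

share = high / (high + low)
print(f"{share:.1%}")  # 15.6%
```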
While we all understand that papers with higher scores are more likely to be accepted, there is a reason we don’t simply auto-accept based on scores but instead give authors the opportunity to write rebuttals and then discuss the papers at a PC meeting. This important part of the process allows authors to respond to reviewers’ comments and the committee to select papers on that fuller basis.
Two years ago there was an analysis of whether rebuttals change reviewer scores (tl;dr: they did). We can therefore expect the number of papers with mean scores above 3.0 to increase once we’ve received the rebuttals and reviewers and ACs have responded to them.
We know that CHI is a highly selective conference. Historic acceptance rates demonstrate that somewhere around 24% of papers are ultimately accepted:
- 23.6% (2012-16) Papers+Notes acceptance rate
- 24.5% 2016 Papers+Notes acceptance rate
- 25.0% 2017 Papers+Notes acceptance rate
Remember, this year there are no Notes, so it is also interesting to look at the historic acceptance rates just for Papers:
- 25.1% (2012-16) average Papers only acceptance rate
- 27.3% 2016 Papers only acceptance rate
So even if your paper has a mean score and a 1AC score below 3.0, there’s every reason to write your rebuttal so that some of those 2.5 1AC scores can move up to 3.5!
A number of members of our community have provided views on how to write a rebuttal:
- Writing rebuttals by Niklas Elmqvist, University of Maryland, College Park
- Writing CHI Rebuttals by Gene Golovchinsky
- SIGCHI Rebuttals: Some Suggestions How to Write Them by Albrecht Schmidt
- A CHI Rebuttal by Simone O’Callaghan
- How to Write SIGCHI Rebuttals by Hyunyoung Song
Anna Cox and Mark Perry
Technical Programme Chairs, ACM CHI 2018
Analytics Chair, ACM CHI 2018