The first part of the review process

This is a brief ‘quick and dirty’ analysis of review data after all reviews were returned. We aim to produce more comprehensive analyses of the data in the future, but we wanted to share this information during the rebuttal period in case it is useful to authors in planning their next course of action. We’ve made every effort to ensure these data are accurate, but given the time pressures, the results of the full analysis may vary slightly from these figures.

– The TPC team

Headlines

  1. The number of papers submitted has increased by 8% over last year.
  2. Average review scores are very similar to those from CHI 2016 (we don’t have the analysis for 2017). There’s no evidence that reviewers or ACs are “more grumpy” this year.
  3. Rebuttals have been shown to increase average scores (2016 data). We expect the same this year and provide links to useful articles about how to write a rebuttal.
  4. The new process for reviews has reduced the burden on the CHI community by approximately 2100 reviews.

Number of papers submitted to CHI2018

Back in September, 3029 abstracts (and associated metadata) were submitted. By the final cut-off date, 86% of these turned into complete submissions, resulting in 2592 papers. This represents a rise of 8% (172 papers) over the previous year.
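As a quick sanity check, the completion figure above follows directly from the two submission counts (a minimal sketch; the numbers are taken from this post):

```python
# Submission counts quoted above (from this post).
abstracts_submitted = 3029   # abstracts with metadata at the September deadline
complete_submissions = 2592  # complete submissions at the final cut-off

completion_rate = complete_submissions / abstracts_submitted
print(f"{completion_rate:.0%} of abstracts became complete submissions")  # → 86%
```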

Number of reviews

Of these 2592:

  • 8 were withdrawn
  • 38 were ‘Quick Rejected’
  • 7 were ‘Desk Rejected’
  • 2 were rejected for other reasons

Quick Rejected = papers that were read but were missing something critical that would make replication, analysis, or validation of claims impossible.
Desk Rejected = out of scope for CHI, or containing some obvious error such as exceeding the page limit.

The remaining 2537 papers received 5105 external reviews. 41 papers received three external reviews; one received four.

Including papers that have already been withdrawn or rejected, reviews were written by 2651 reviewers. The majority (56%) of reviewers completed only one review. One reviewer completed thirteen.

Reviews per reviewer | Number of reviewers | Yield
1                    | 1484                | 1484
2                    | 550                 | 1100
3                    | 299                 | 897
4                    | 163                 | 652
5                    | 75                  | 375
6                    | 43                  | 258
7                    | 16                  | 112
8                    | 20                  | 160
9                    | 12                  | 108
10                   | 7                   | 70
11                   | 3                   | 33
12                   | 2                   | 24
13                   | 1                   | 13
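The 56% figure quoted above can be recovered from the first row of the table and the stated reviewer total (a quick check using numbers from this post):

```python
single_review_reviewers = 1484  # reviewers who completed exactly one review
total_reviewers = 2651          # total reviewers quoted above

share = single_review_reviewers / total_reviewers
print(f"{share:.0%} of reviewers completed only one review")  # → 56%
```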

In addition, 314 ACs (including a few SCs) were each assigned a median of 16 papers (on average, 8 as 1AC and 8 as 2AC). ACs wrote full reviews for the 8 papers on which they served as 2AC and meta-reviews for the 8 on which they served as 1AC.

Review scores

The mean of the mean scores given to papers for the 2018 conference was 2.56 (SD=0.74). (Yes, we know we took means of ordinal data here, but luckily there is no R2 of this blog post!) In total, 1356 papers (53%) received an average score between 2.0 and 2.9 (inclusive). Only 129 papers have a mean score of 4.0 or greater, and just one paper has an average of 5.0. The CHI 2016 conference (the last year for which we have data and a detailed analysis) had an unweighted average first-round review score of 2.63 (SD=0.73), with a median of 2.625.
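To make the “mean of mean scores” concrete, here is a toy example with entirely hypothetical review scores for three papers (the scores are made up; only the method mirrors what is described above):

```python
# Hypothetical per-paper review scores on the 1.0-5.0 scale used at CHI.
papers = {
    "paper_A": [2.0, 2.5, 3.0],
    "paper_B": [3.5, 4.0, 3.5],
    "paper_C": [1.5, 2.0, 2.5],
}

# First average within each paper, then average those per-paper means.
per_paper_means = [sum(scores) / len(scores) for scores in papers.values()]
mean_of_means = sum(per_paper_means) / len(per_paper_means)
print(round(mean_of_means, 2))  # → 2.72
```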

The mean of the mean expertise ratings (2018) was 3.22 (SD=0.34) on a 4-point scale, indicating that papers were generally matched with reviewers who rated their own expertise highly.

[Figure: Distribution of mean review scores, showing a skewed normal distribution with a peak at 2.5.]

This year ACs were instructed to avoid giving papers a score of 3.0. Instead, they were encouraged to give a score of 2.5 or 3.5 in order to give a clear indication to authors as to whether the AC felt they would be able to argue for the paper, in its current state, before the rebuttal. The intent here was that ACs should not be sitting on the fence and avoiding forming a view on the paper. This seems to have been somewhat successful.

[Figure: Left: distribution of 1AC scores, showing very few 3s. Right: distribution of 2AC scores, showing more 3s and a peak at 2–2.5.]

Only 33 1AC scores of 3.0 were given (~1%). 2ACs were much more likely to give scores of 3.0 (195, ~8%) when writing their reviews. There are currently 621 papers (~24%) with a 1AC score of 3 or higher, and 1971 papers with a 1AC score below 3. We expect these scores to move as rebuttals are reviewed and discussed.
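The 1AC split can be cross-checked against the total number of complete submissions (numbers from this post):

```python
papers_1ac_3_plus = 621    # papers with a 1AC score of 3 or higher
papers_1ac_below_3 = 1971  # papers with a 1AC score below 3

total = papers_1ac_3_plus + papers_1ac_below_3
print(total)  # → 2592, the full set of complete submissions
print(f"{papers_1ac_3_plus / total:.0%} with a 1AC score of 3 or higher")  # → 24%
```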

ACs seem to have tracked their reviewers closely in most, but not all, instances. On average, the mean of 1AC and 2AC scores is ~0.09 less than the mean of the R1 and R2 scores.

[Figure: Distribution of AC scores relative to the average of the external reviews: a normal distribution centred around 0, with the vast majority between -1 and 1.]

Writing Rebuttals

We are now in the rebuttal phase. Authors sometimes wonder whether it’s worth their time writing a rebuttal when their paper has received low scores from reviewers. At this stage, of the papers not already rejected, 15.6% have a mean score of 3.5 or greater:

  • 396 papers with mean score ≥ 3.5
  • 2141 papers with mean score < 3.5

While we all understand that papers with higher scores are more likely to be accepted, there is a reason we don’t simply auto-accept based on scores, but instead give authors the opportunity to write rebuttals and then discuss the papers at a PC meeting. This important part of the process gives authors a chance to respond to reviewers’ comments and gives the committee the information it needs to select papers.

Two years ago there was an analysis of whether rebuttals change reviewer scores (tl;dr: they did). We can therefore expect the number of papers with mean scores above 3.0 to increase once we’ve received the rebuttals and reviewers and ACs have responded to them.

We know that CHI is a highly selective conference. Historic acceptance rates demonstrate that somewhere around 24% of papers are ultimately accepted:

  • 23.6% (2012-16) Papers+Notes acceptance rate
  • 24.5% 2016 Papers+Notes acceptance rate
  • 25.0% 2017 Papers+Notes acceptance rate

Remember, this year there are no longer any Notes, so it is also interesting to look at the historic acceptance rates for Papers alone:

  • 25.1% (2012-16) average Papers only acceptance rate
  • 27.3% 2016 Papers only acceptance rate
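Purely as a back-of-the-envelope illustration (an estimate, not an official target), applying the historic 2012–16 Papers+Notes rate to the 2537 papers still under review gives:

```python
papers_in_review = 2537  # papers remaining after withdrawals and rejections
historic_rate = 0.236    # 2012-16 average Papers+Notes acceptance rate

estimated_accepts = round(papers_in_review * historic_rate)
print(f"~{estimated_accepts} papers accepted at historic rates")  # → ~599
```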

So even if your paper has a mean score and a 1AC score below 3.0, there’s every reason to write your rebuttal so that some of those 2.5 1AC scores can move up to 3.5!

A number of members of our community have provided views on how to write a rebuttal.

Anna Cox and Mark Perry
Technical Programme Chairs, ACM CHI 2018

Sandy Gould
Analytics Chair, ACM CHI 2018

