Peer review of papers is the mainstay of modern academic publishing, but it has well-known problems. In this paper, we take a statistical modelling view to expose a particular problem in the use of selectivity measures as indicators of conference quality. One key problem with the conference reviewing process is the lack of a useful feedback loop between the referees' assessments of the papers accepted at a conference and those papers' importance, acceptance and relevance to the audience. In addition, we make some new criticisms of selectivity as a measure of quality. This paper is literally a work in progress, because the 2012 BCS HCI conference itself will be used to close the feedback loop by connecting the reviews provided on papers with your (the audience's) perceptions of those papers. At the conference, participants will generate the results of this work.
Author and article information
Contributors: Harold Thimbleby [0001], Paul Cairns [0002]
Publication date (Print): September 2012
Pages: 410–415
Affiliations:
[0001] Department of Computing Science, University of Swansea, Wales
[0002] Department of Computer Science, University of York, England