Peer Review
Reviews have been widely debated in literary bar rooms of late. You can’t get reviewed, you can’t get a fair review, and you can’t even get a job reviewing. The maladies of peer review have been even more widely debated in the scientific community.
Recently, the CHI 2010 rebuttal period has ignited an interesting discussion of the review process. James Landay lashes out with a harsh opinion of the process.
The reviewers simply do not value the difficulty of building real systems and how hard controlled studies are to run on real systems for real tasks. This is in contrast with how easy it is to build new interaction techniques and then to run tight, controlled studies on these new techniques with small, artificial tasks.
He’s backed by several commenters, many prominent in the computer science community, who offer intelligently written responses.
Gene Golovchinsky responds to Landay’s post with an apt assessment of the problems with the current review process and an interesting alternative.
The [new] scheme works like this: a paper is published on some open-access site such as arxiv.org. People read it and comment on it. Other people rate the comments. In the end, you have a set of high-quality comments that comprise the review of the paper. Authors are free to add new versions that address some of the comments, thereby improving the paper. A subset of these papers that receive positive reviews can then be selected for presentation at conferences. Reviewers who consistently write good reviews should be rewarded by recognizing another class of contribution in the tenure review process.
CHI 2010 rebuttals are currently under review.