Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial

JAMA. 1998 Jul 15;280(3):237-40. doi: 10.1001/jama.280.3.237.

Abstract

Context: Anxiety about bias, lack of accountability, and poor quality of peer review have led to questions about the imbalance in anonymity between reviewers and authors.

Objective: To evaluate the effect on the quality of peer review of blinding reviewers to the authors' identities and requiring reviewers to sign their reports.

Design: Randomized controlled trial.

Setting: A general medical journal.

Participants: A total of 420 reviewers from the journal's database.

Intervention: We modified a paper accepted for publication, introducing 8 areas of weakness. Reviewers were randomly allocated to 5 groups. Groups 1 and 2 received manuscripts from which the authors' names and affiliations had been removed, while groups 3 and 4 were aware of the authors' identities. Groups 1 and 3 were asked to sign their reports, while groups 2 and 4 were asked to return their reports unsigned. The fifth group was sent the paper in the usual manner of the journal, with authors' identities revealed and a request to comment anonymously. Group 5 differed from group 4 only in that its members were unaware that they were taking part in a study.

Main outcome measure: The number of weaknesses in the paper that were commented on by the reviewers.

Results: Reports were received from 221 reviewers (53%). The mean number of weaknesses commented on was 2 (1.7, 2.1, 1.8, and 1.9 for groups 1, 2, 3, and 4 and 5 combined, respectively). There were no statistically significant differences between groups in their performance. Reviewers who were blinded to authors' identities were less likely to recommend rejection than those who were aware of the authors' identities (odds ratio, 0.5; 95% confidence interval, 0.3-1.0).

Conclusions: Neither blinding reviewers to the authors and origin of the paper nor requiring them to sign their reports had any effect on the rate of detection of errors. Such measures are unlikely to improve the quality of peer review reports.

Publication types

  • Clinical Trial
  • Randomized Controlled Trial

MeSH terms

  • Authorship
  • Humans
  • Peer Review*
  • Publication Bias
  • Publishing / standards*
  • Quality Control
  • Single-Blind Method