Tuesday, May 29, 2007

Improving Peer Review

Michael L. Callaham and John Tercier, "The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality," PLoS Medicine.
Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely that journals and editors will be successful in systematically improving their selection of reviewers. This inability to predict performance makes it imperative that all but the smallest journals implement routine review rating systems to monitor the quality of their reviews (and thus the quality of the science they publish).
Landkroon AP, Euser AM, Veeken H, Hart W, Overbeke AJ, "Quality assessment of reviewers' reports using a simple instrument," Obstetrics & Gynecology.
OBJECTIVE: To validate and test a simple instrument for assessing the quality of a review.

METHODS: In this prospective observational study, the quality of 247 reviews of 119 original articles submitted to the Dutch Journal of Medicine was assessed using a 5-point scale that has been used for years by Obstetrics & Gynecology. Each review was assessed by three editors of the journal. Intraobserver variability, calculated as an intraclass correlation coefficient, was assessed by having the same editors rate 76 reviews for a second time. Validation of the scale was done in two ways. First, editors of three other medical journals were asked to rate the 247 reviews using the same 5-point scale. Second, all reviews were sent to the authors of the article with a questionnaire consisting of 12 yes-or-no questions and one question asking for an overall score for the review.

RESULTS: The interobserver intraclass correlation coefficient for the three editors was 0.62 (95% confidence interval 0.50-0.71) for the first assessment of 247 reviews. For the second assessment of 76 reviews, the interobserver intraclass correlation coefficient was 0.62 (0.45-0.74). The intraobserver intraclass correlation coefficient for each of the internal editors ranged from 0.66 to 0.88. The interobserver intraclass correlation coefficient for the external editors was 0.60 (0.51-0.68). The interobserver intraclass correlation coefficient for all six editors was 0.62 (0.55-0.68). The authors' response rate to the questionnaires was 83%. A significant correlation was found between the mean total editorial quality assessment and the overall score of the authors (intraclass correlation coefficient 0.28, 0.14-0.41).

CONCLUSION: This 5-point scale proved to be a simple, reliable, and valid instrument enabling editors to assess the quality of reviews. A significant correlation was found between mean editorial quality assessment and the quality as determined by authors.

LEVEL OF EVIDENCE: III.
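The abstract reports interobserver and intraobserver intraclass correlation coefficients (ICCs) without saying which ICC form the authors used. For readers who want to try this on their own journal's ratings, here is a minimal Python sketch of one common choice, the two-way random-effects, absolute-agreement, single-rater ICC (Shrout and Fleiss's ICC(2,1)). The function name and the ratings matrix below are hypothetical illustrations, not data or code from the study.

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1).

    x: (n_reviews, k_raters) array of ratings, e.g. reviews scored 1-5
    by several editors. Standard ANOVA-based formula (Shrout & Fleiss).
    """
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-review means
    col_means = x.mean(axis=0)   # per-editor means

    # ANOVA sums of squares
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols

    # Mean squares
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: 6 reviews, each rated on a 5-point scale by 3 editors.
ratings = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 1],
    [4, 4, 5],
])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```

By common rules of thumb, values around 0.6, like the study's 0.62, indicate moderate agreement among raters; values above roughly 0.75 are usually read as good.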
