A total of 2,264 manuscripts submitted to the Journal of General Internal Medicine (JGIM) were sent by the editors for external review to two or three reviewers each during the study period. These manuscripts received a total of 5,881 reviews provided by 2,916 reviewers. Twenty-eight percent of all reviews recommended rejection.
However, the journal's overall rejection rate was much higher -- 48 percent overall and 88 percent when all reviewers for a manuscript agreed on rejection (which occurred with only 7 percent of manuscripts). The rejection rate was 20 percent even when all reviewers agreed that the manuscript should be accepted (which occurred with 48 percent of manuscripts).
There is general agreement that those who submit articles for publication, and even more so those who submit proposals for funding, spend more time and effort on those tasks than do the reviewers who evaluate them. Researchers are taught to do research and to write research reports and proposals; they receive feedback on their submissions for publication and funding. Reviewers are not taught to do reviews and do not receive feedback on the reviews that they do. So why would anyone assume that reviewers will be right in the judgments they provide on the things that they review?
I am a little surprised by a system in which 28 percent of reviews recommend rejection but 48 percent of articles are actually rejected. I am more surprised by a system in which 12 percent of submissions that receive unanimous recommendations for rejection are actually published, or in which 20 percent of submissions that receive unanimous recommendations for acceptance are actually rejected.
If one uses a rating scale, one can estimate the probability of each rating, p(r), from past experience with submissions and reviewer ratings. The average uncertainty of ratings is then, by the standard information-theoretic formula, the entropy −Σ p(r) ln p(r), summed over all rating values. One can, in the same way, estimate the a posteriori probabilities of other reviewers giving each rating, given that one reviewer has given rating r. The information in rating r is then, formally, the difference between the a priori and a posteriori average uncertainty. This offers a way to decide how many reviews to seek for an average submission.
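As a rough sketch of that calculation (the four-point rating scale, the toy data, and all names below are invented for illustration, not taken from the JGIM study), one might compute the a priori uncertainty, the conditional uncertainty after seeing one reviewer's rating, and the information in each rating like this:

```python
import math
from collections import Counter
from itertools import combinations

def entropy(probs):
    """Shannon entropy in nats: -sum p ln p over nonzero probabilities."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rating_distribution(ratings):
    """Empirical probability of each rating value."""
    counts = Counter(ratings)
    total = sum(counts.values())
    return {r: c / total for r, c in counts.items()}

# Hypothetical past data: each inner list holds the ratings one manuscript
# received (1 = definite reject ... 4 = definite accept).  Purely illustrative.
past_ratings = [
    [1, 1, 2], [4, 3, 4], [2, 3], [4, 4], [1, 2, 3],
    [3, 4, 4], [2, 2, 1], [4, 4, 3], [3, 3], [1, 1],
]

# A priori uncertainty about a single reviewer's rating.
all_ratings = [r for ms in past_ratings for r in ms]
p_r = rating_distribution(all_ratings)
prior_H = entropy(p_r.values())

# Counts of co-reviewer rating pairs on the same manuscript (both orders).
pair_counts = Counter()
for ms in past_ratings:
    for a, b in combinations(ms, 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

# Information in observing rating r = a priori uncertainty minus the
# uncertainty that remains about a co-reviewer's rating once r has been seen.
info_in_rating = {}
for r in p_r:
    cond_counts = [c for (a, _), c in pair_counts.items() if a == r]
    if cond_counts:
        total = sum(cond_counts)
        info_in_rating[r] = prior_H - entropy(c / total for c in cond_counts)

print(f"a priori uncertainty: {prior_H:.3f} nats")
for r, gain in sorted(info_in_rating.items()):
    print(f"information in a rating of {r}: {gain:.3f} nats")
```

One could then, for example, keep soliciting reviews for a submission while the expected reduction in uncertainty from one more review remains above some threshold, and stop once it does not.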
Extensions of this approach can help to decide how much credence to give to individual reviewers who have a record with the organization.
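One hedged way to make such an extension concrete (again with invented data and reviewer names, not a description of any journal's actual procedure) is to score each reviewer by the mutual information between their past ratings and the final editorial decisions, and to give more credence to reviewers whose ratings have historically been more informative:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(rating; decision) in nats from a list of (rating, decision) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    p_rating = Counter(rating for rating, _ in pairs)
    p_decision = Counter(decision for _, decision in pairs)
    return sum(
        (c / n) * math.log((c / n) / ((p_rating[rt] / n) * (p_decision[d] / n)))
        for (rt, d), c in joint.items()
    )

# Hypothetical track records: (reviewer's rating, final editorial decision)
# for past manuscripts handled by the organization.
history = {
    "reviewer_a": [(1, "reject"), (4, "accept"), (2, "reject"), (3, "accept")],
    "reviewer_b": [(2, "accept"), (3, "reject"), (3, "accept"), (2, "reject")],
}

# Credence score: how much a reviewer's rating has reduced uncertainty about
# the eventual decision.  In this toy data, reviewer_a's ratings track the
# outcomes while reviewer_b's do not.
for reviewer, record in history.items():
    print(f"{reviewer}: {mutual_information(record):.3f} nats")
```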