Tuesday, June 19, 2007

"Peer Review Peered at, Reviewed"

According to Science magazine:
NIH has created two working groups--one external, one internal--to examine the "content, criteria, and culture of peer review" in light of flat budgets, a rising number of grant applications, shrinking success rates, and a dearth of experienced reviewers.


I spent more than a decade managing research programs that involved peer review, and have consulted in other countries about peer review processes.

I once represented USAID in a joint research project with the U.S. National Science Foundation and the U.S. National Institutes of Health. It was fun watching the experts from NIH and NSF go at each other on how the peer review was to be done.

Peer review is the worst possible way to judge the merit of scientific work, except for all the other ways that I know of (to paraphrase Churchill).

Basically, only peer scientists have enough explicit and tacit knowledge to adequately judge scientific proposals and results. One difficulty, of course, is identifying peers. On the one hand, you don't want to limit the selection to people working in a narrow byway of research, for fear that they are in a dead end and don't realize it. On the other hand, casting the net too widely results in a lack of truly relevant expertise.

However, working scientists who are asked to do peer review probably still don't understand the specific research they are reviewing as well as the implementing scientists do. Moreover, they are busy people, doing the review more out of a sense of scientific responsibility than for pay (and often they are not paid at all), so they do not attend to the effort as much as one might wish.

The in-person or telephone panel process tends to reach consensus and bring out many aspects of the work, but it is expensive in scientists' time, complicated to arrange, and the consensus it produces may not be accurate.

It is hard to balance the need for confidentiality, which encourages frank evaluation of the proposed or reported work, against the need for transparency in decision making and the need for communication between the reviewing and implementing scientists to improve the quality of the work.

When one comes to applied research, the problems are exacerbated. It is important to have expertise in the review not only from the scientific community but also from the application community. Sometimes a multi-phased screening process is needed to determine whether something is not only scientifically feasible but also practical in application.

As a working research program manager, I was always surprised at how little research there was on the review process itself, and how little of that research was known to my fellow managers of peer review. There is in fact a whole body of research on how people make decisions and how the panel process can be improved. More such research needs to be done, and greater efforts need to be made to disseminate the knowledge gained from it and apply it to science and technology review processes.
