Wednesday, November 25, 2015

D.C. court considers how to screen out ‘bad science’ in local trials


There is an interesting article in today's Washington Post titled "D.C. court considers how to screen out ‘bad science’ in local trials". Having managed "peer reviews" of science project proposals for nearly two decades, I have come to the conclusion that this is not an easy thing to do; even if you have been doing it for some time, you may still get it wrong from time to time.

Fortunately, DC has access to the National Academy of Sciences, NIH, NSF, and other organizations with lots of experience doing this, and can delegate the work.

One thing I found useful was developing a database of how often each expert agreed with the other experts. When I first tried this, I discovered that expert judgments in that field, which we often take to be 100 percent correct, actually agreed on the final recommendation no more than 90 percent of the time.
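To give a rough idea of what such a database lets you compute, here is a minimal sketch in Python. The reviewer names, recommendations, and the simple fund/reject coding are all invented for illustration; real data would come from whatever review records you keep.

```python
from itertools import combinations

# Hypothetical data: each reviewer's final recommendation ("fund" / "reject")
# on the same set of proposals.
recommendations = {
    "reviewer_A": ["fund", "fund", "reject", "fund", "reject"],
    "reviewer_B": ["fund", "reject", "reject", "fund", "reject"],
    "reviewer_C": ["fund", "fund", "reject", "reject", "reject"],
}

def agreement_rate(a, b):
    """Fraction of proposals on which two reviewers made the same recommendation."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Pairwise agreement rates across all reviewers.
for (name1, recs1), (name2, recs2) in combinations(recommendations.items(), 2):
    print(f"{name1} vs {name2}: {agreement_rate(recs1, recs2):.0%} agreement")
```

Even a tally this simple makes the point: agreement among "experts" is measurably less than 100 percent.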

In one situation in which the same group of "experts" repeatedly judged similar objects, one "expert's" reviews were negatively correlated with all of the others.
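One simple way to spot that kind of outlier is to correlate each reviewer's scores against the average of everyone else's; a clearly negative value flags the dissenting reviewer. The sketch below uses made-up 1-to-5 scores and reviewer names purely to illustrate the idea.

```python
import statistics

# Hypothetical scores (1-5) given by each reviewer to the same set of objects.
scores = {
    "reviewer_A": [4, 5, 2, 4, 1, 3],
    "reviewer_B": [5, 4, 2, 5, 2, 3],
    "reviewer_C": [4, 4, 1, 5, 2, 2],
    "reviewer_D": [1, 2, 5, 1, 4, 3],   # tends to rate opposite to the others
}

for name, own in scores.items():
    # Average score the *other* reviewers gave to each object.
    others = [
        statistics.mean(s[i] for r, s in scores.items() if r != name)
        for i in range(len(own))
    ]
    r = statistics.correlation(own, others)  # Pearson correlation (Python 3.10+)
    print(f"{name}: correlation with the rest = {r:+.2f}")
```

In this toy example, reviewer_D comes out strongly negative while the others are positive, which is the pattern I saw in practice.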
