Wednesday, January 12, 2011

significance tests may be a bad way of studying small influences on outcomes

There is an article in The New York Times making a surprisingly sophisticated point on the interpretation of statistical information:
For decades, some statisticians have argued that the standard technique used to analyze data in much of social science and medicine overstates many study findings — often by a lot. As a result, these experts say, the literature is littered with positive findings that do not pan out: “effective” therapies that are no better than a placebo; slight biases that do not affect behavior; brain-imaging correlations that are meaningless. ...

The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. ...

“But if the true effect of what you are measuring is small,” said Andrew Gelman, a professor of statistics and political science at Columbia University, “then by necessity anything you discover is going to be an overestimate” of that effect. ...
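Gelman's point is easy to check with a quick simulation (a minimal sketch, not from the article: the effect size, sample size, and known-variance z-test below are all illustrative assumptions). Simulate many small studies of a genuinely weak effect, keep only the ones that cross the 5 percent significance threshold, and look at the estimates that survive:

```python
import random
import math

def simulate(true_effect=0.1, n=50, trials=20000, seed=0):
    """Simulate many studies of a weak effect and return the average
    estimated effect among those that reach statistical significance."""
    rng = random.Random(seed)
    significant = []
    for _ in range(trials):
        # Each study: n observations with true mean `true_effect`, sd 1.
        xs = [rng.gauss(true_effect, 1.0) for _ in range(n)]
        mean = sum(xs) / n
        se = 1.0 / math.sqrt(n)  # standard error, sd assumed known
        z = mean / se
        if abs(z) > 1.96:  # two-sided test at the 5% level
            significant.append(mean)
    return sum(significant) / len(significant)

print(simulate())  # roughly 0.3 — about triple the true effect of 0.1
```

The filtering does the damage: with a true effect of 0.1 and noisy estimates, only the studies that happened to draw an unusually large sample mean clear the significance bar, so the published ("significant") estimates are systematically inflated.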

In the 1960s, a team of statisticians led by Leonard Savage at the University of Michigan showed that the classical approach could overstate the significance of the finding by a factor of 10 or more. By that time, a growing number of statisticians were developing methods based on the ideas of the 18th-century English mathematician Thomas Bayes.
The article also notes the value of looking at the odds ratio as an alternative to significance testing.

Think about areas such as the genetic and environmental effects on behavior. Assume there are many interacting genes and environmental factors, each of which has only a small impact on the probability of a certain kind of behavior, and which even together may not completely determine that behavior. It would be very hard to do convincing science on the impact of any one of these factors.
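One way to see the difficulty is to ask how much data it takes to reliably detect a small effect at all. A standard back-of-the-envelope formula (my illustration, not from the article; it assumes a one-sample z-test with known standard deviation, a two-sided 5 percent level, and 80 percent power) gives:

```python
import math

def required_n(effect, sd=1.0, alpha_z=1.96, power_z=0.84):
    """Sample size needed to detect `effect` with 80% power at the
    two-sided 5% level, assuming observations with known sd."""
    return math.ceil(((alpha_z + power_z) * sd / effect) ** 2)

print(required_n(0.5))   # moderate effect: 32 observations
print(required_n(0.1))   # small effect: 784
print(required_n(0.02))  # tiny effect: 19600
```

Halving the effect size quadruples the required sample, so when an outcome is spread across dozens of small interacting influences, studying each one demands samples far beyond what most individual studies collect — which is exactly the regime where significant findings are mostly overestimates.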

Incidentally, when I was doing statistics in the 1960s there used to be a saying that the field of statistics was built on shifting sands until Savage came along and rebuilt it on empty space.
