Bayes' Theorem
Ideally, scientists should never totally believe in a scientific theory; they should always hold open the possibility that a theory is not true and can be replaced by an alternative that more adequately explains observations. Newtonian physics was great at predicting the paths of planets around the sun and of balls shot from cannons on earth; Einstein's physics reduced to Newton's for those applications and better predicted the path of light from distant stars around the sun. Thus physicists accepted Einstein's theory as more nearly true than Newton's. They still search actively for a theory that is more nearly true than Einstein's.
Scientists seek observations that disagree with the predictions of current theories, for such an observation may be a clue to the nature of a better theory. However, they know that such observations are likely to be anomalous -- the result of experimental error or a statistical fluke. One of the functions of peer review is to search for experimental errors or errors in the reporting of results.
One such observation that disagrees with previous theory is that IQs as measured by existing tests have tended to increase from year to year (the Flynn Effect). Psychologists are seeking better theories of human intelligence that predict such a change, and/or better means of observing an individual's level of intelligence.
Scientific gold is a prediction from a new theory that differs from the corresponding prediction of accepted theory, and that is implementable in an experiment or controlled observation. Thus Einstein, from his theory of relativity, predicted an observation of the apparent location of a distant star (the light from which passed close to the sun during a total eclipse) that differed from Newton's, and the observation, when finally made, proved his prediction accurate.
There is all the difference in the world between guessing, from an observation, the nature that a new theory might have, versus accurately predicting an improbable observation from a new theory. The distinction is similar to that between using statistics to test a hypothesis versus drawing a hypothesis from statistical analysis of observations. In the second case, a new set of observations is required to make the hypothesis more credible.
Incidentally, Bayesian analysis explains something about scientific publications. I have lived through more than 20,000 days; the sun always came up. Bayes might say that I am pretty sure that the sun will also come up tomorrow. The observation that it did so today is not very interesting because it does not much change my estimate of the probability that it will come up tomorrow. Research results that add a tiny bit of confidence to an already well-accepted hypothesis are not likely to be published. On the other hand, a very surprising observation (one that changes the credibility of what is currently believed) is likely to be published in scientific journals; it is also likely to be wrong. The equipment may have malfunctioned; the dials may have been misread; the experiment may have been poorly conceived or planned. Thus, other scientists should seek to replicate the observation in their own laboratories to assure that the result originally reported was not an error or a statistical fluke. Indeed, it is the very publication of the results that encourages others to try to replicate them.
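The sunrise reasoning above can be sketched in a few lines. This is a minimal illustration using Laplace's rule of succession (a Bayesian update starting from a uniform prior on the daily probability of sunrise); the function name and the 20,000-day count used here are just for illustration.

```python
def rule_of_succession(sunrises, days):
    """Posterior mean probability that the sun rises tomorrow,
    given `sunrises` observed sunrises in `days` days, starting
    from a uniform (Beta(1, 1)) prior."""
    return (sunrises + 1) / (days + 2)

# After 20,000 consecutive sunrises, the estimate is nearly certain:
p_before = rule_of_succession(20000, 20000)
# One more sunrise barely moves it -- the unsurprising observation
# carries almost no information:
p_after = rule_of_succession(20001, 20001)
print(p_before, p_after, p_after - p_before)
```

The change from one more routine sunrise is on the order of one part in a hundred million, which is why such a confirmation is not publishable news, while an observed failure would shift the estimate dramatically.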
Two aspects of science thus stand out:
- Many results reported in scientific journals will turn out not to be replicable (and reported results should not be "believed" until they are replicated.)
- Scientific consensus tends to be credible because the assertions have proven resistant to many challenges and have been supported by many observations.
The consensus of 97 percent of scientists that the cumulative effect of man's activity on earth over many years will lead to climate change is thus very credible; as I interpret the precautionary principle, we should act to ameliorate the risk of catastrophic consequences of climate change.
Good Enough for Practical Purposes
I suppose that our confidence in the scientific consensus about human activity causing destructive climate change is good enough for the practical purpose of taking preventive action.
In like manner, Newton's theory that the attraction of gravity on an apple is proportional to the product of the masses of the earth and an apple, and that therefore the acceleration of an apple falling from a tree is the same for big and small apples, is good enough for our practical purposes of avoiding falling apples and catching them before they hit the ground.
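The mass cancellation in Newton's law can be checked directly. A minimal sketch, assuming the standard values of the gravitational constant and the Earth's mass and radius (the apple masses are arbitrary):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def acceleration(m_apple):
    """a = F/m = (G*M*m/r^2)/m = G*M/r^2 -- the apple's mass cancels."""
    force = G * M_EARTH * m_apple / R_EARTH**2
    return force / m_apple

# A 100 g apple and a 1 kg apple fall with the same acceleration,
# roughly the familiar 9.8 m/s^2:
print(acceleration(0.1), acceleration(1.0))
```

The point of the sketch is that `m_apple` appears in both the force and the divisor, so it drops out algebraically; the computed value is the same for any apple, big or small.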
Engineers tend to feel that we don't need "truth" and "exact predictions" to do good engineering. Good-enough theory and good-enough predictions -- combined with margins of safety -- lead to practical solutions.
Science is not true, but a lot of science is true enough for practical purposes!