Monday, September 01, 2014

Research on forecasting.

Tetlock also found that specialists are not significantly more reliable than non-specialists in guessing what is going to happen in the region they study. Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.”
The quotation is from a New Yorker review of Philip Tetlock's book, Expert Political Judgment: How Good Is It? How Can We Know?

Some years ago I compared the forecasts of GDP made by World Bank economists with simple linear extrapolations of GDP. On average, the linear extrapolations were slightly more accurate. Economists recognize that there is a great deal of inertia in national economic systems, so they tend to be relatively conservative in forecasting deviations from the trend. My data seemed to show that when they did forecast a significant change from the trend, they tended to overestimate the actual deviation. I have known and worked with several World Bank economists, and I have the greatest respect for their competence. Still, I think my findings were consistent with Tetlock's view that experts are often not as good at forecasting as their frequent appearances on televised news programs might lead us to expect.
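The comparison above can be sketched in a few lines of code. The numbers below are invented for illustration, not the actual World Bank data; the point is only the method: extrapolate the last observed trend one step forward and compare its error against an expert forecast.

```python
def linear_extrapolation(series):
    """Naive trend forecast: next value = last value plus the most
    recent year-over-year change."""
    return series[-1] + (series[-1] - series[-2])

# Hypothetical GDP index history and outcome (illustrative only).
history = [100.0, 103.0, 106.1]   # past observations
actual_next = 108.9               # what actually happened
expert_forecast = 112.5           # an expert predicting a sharp jump

trend_forecast = linear_extrapolation(history)

# Absolute forecast errors: the simple extrapolation stays close to
# the trend, while the bold expert call overshoots the deviation.
error_trend = abs(trend_forecast - actual_next)
error_expert = abs(expert_forecast - actual_next)

print(round(trend_forecast, 1), round(error_trend, 1), round(error_expert, 1))
```

In this toy example the expert forecast overshoots the actual deviation from trend, which is the pattern my data seemed to show.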

Another quote from the New Yorker:
Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.
Foreign affairs are complicated. The probability of mistaken predictions about what will happen in Ukraine, Syria, Iraq, and other hot spots seems high. Perhaps it is best in foreign policy to make small decisions rather than work from a grand plan (one big thing). Certainly it seems wise to look for the "black swan": the event or piece of information that does not fit the forecasts and assumptions.
Tetlock also has an unscientific point to make, which is that “we as a society would be better off if participants in policy debates stated their beliefs in testable forms”—that is, as probabilities—“monitored their forecasting performance, and honored their reputational bets.” He thinks that we’re suffering from our primitive attraction to deterministic, overconfident hedgehogs.
I wonder whether Tetlock himself is a hedgehog or a fox. How good is his intuition about what would improve our society?
