Superforecasting: The Art and Science of Prediction by Philip E. Tetlock and Dan Gardner
Published in September 2015.
Many of us know Tetlock for his work on expert political judgment: research that concluded that most commentators, pundits, and prognosticators (and bloggers) are no more accurate than a dart-throwing chimpanzee.
In Superforecasting, Tetlock moves beyond describing why analysts are so bad at predicting the future to figuring out how to become better at the gig. His approach to solving this riddle is both empirical and rigorous.
Over the course of many years, Tetlock and his colleagues ran a large-scale forecasting experiment called the Good Judgment Project. This experiment, which trained and tracked the performance of over 2,000 individuals, systematically evaluated the factors that lead to better forecasting.
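The Good Judgment Project scored its participants' accuracy with Brier scores, which measure the squared distance between a probabilistic forecast and what actually happened. Here is a minimal sketch of the common 0-to-1 variant (my own illustration, not code from the book):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and outcomes.

    forecasts: probabilities assigned to "the event happens"
    outcomes:  1 if the event happened, 0 if it did not
    Lower is better: 0.0 is perfect, 0.25 is what a constant
    50% forecast earns, 1.0 is confidently and completely wrong.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 90% and the event happened, then 20% and it didn't:
print(brier_score([0.9, 0.2], [1, 0]))  # 0.025
```

Scoring forecasts this way is what makes the "time-bound and measurable" framing of the project's questions essential: a vague prediction with no deadline can never be scored.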
The big results of Tetlock’s research are:
- Some people are much better at forecasting than others.
- People can be trained to become better at predicting the future.
- Foxes make better forecasters than hedgehogs.
Hedgehogs are people who make predictions based on their unshakeable belief in what they see as a few fundamental truths. Foxes, by contrast, are guided in their forecasts by drawing on diverse strands of evidence and ideas. Hedgehogs know a few big things, foxes know lots of little things. When new information comes to light a fox is likely to adjust her forecast, where a hedgehog will likely discount the new data. For a fox, being wrong is an opportunity to learn new things.
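The fox's habit of revising a forecast as new information arrives can be sketched with Bayes' rule. This is my own illustration of incremental updating, not code from the book:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability after one piece of evidence.

    prior:               current probability the event will happen
    likelihood_if_true:  probability of seeing this evidence if it will
    likelihood_if_false: probability of seeing this evidence if it won't
    """
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Start at 50/50 on some question, then observe evidence that is
# twice as likely if the event will happen as if it won't:
p = 0.5
p = bayes_update(p, likelihood_if_true=0.6, likelihood_if_false=0.3)
print(round(p, 2))  # 0.67
```

A hedgehog, in this framing, treats the prior as fixed and discounts the likelihood of any inconvenient evidence; the fox lets each observation nudge the number.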
Tetlock clearly favors the foxes. The questions that participants in the Good Judgment Project were asked to assign probabilities to were all time-bound and measurable. They tended to focus on issues such as commodity prices, financial results, and political outcomes.
As a card-carrying hedgehog, I’m skeptical of (and willing to discount) evidence that a foxy approach is superior. The big ideas that I’m completely certain about include my belief in the value of a liberal arts education, the necessity of increasing public support for postsecondary education, and the conviction that all educators should be fairly compensated.
None of these beliefs is very helpful in predicting whether that MOOC provider will be revenue positive by 2019, or whether that LMS company will be sold to another owner.
It may be that there is a big difference between basing decisions on core values and making judgments about the likelihood of future events.
We need to decide if we are more interested in predicting the future or shaping it.
If this sounds like a critique of Superforecasting, that is not my intent. Had I read Superforecasting in 2015 (the year the book was published), I might have nominated it as the best nonfiction work of that year. The book is measured, balanced, and self-critical in all its conclusions. Tetlock never strays very far from the data in making his arguments.
As someone who spends most of his time thinking about the future of higher education, I found the lessons of Superforecasting enormously useful in helping me think about how I think about the future.
What are you reading?
Inside Higher Ed’s Blog U