Why Experts Almost Always Get It Wrong

Our world is complex and messy. Forecasting requires intellectual teamwork and the gathering of evidence from different sources.

Isaiah Berlin Day in Riga. (Photo: Flickr)

In a now-classic experiment, political scientist Philip E. Tetlock showed that the predictions made by political experts are only slightly better than a random guess, and worse than the predictions made by a statistical model. The research was summarized in his 2005 book, Expert Political Judgment: How Good Is It? How Can We Know?

In this early work, Professor Tetlock recruited 284 experts, including government officials, professors, and journalists, many of whom were regularly asked to comment, or to offer advice, on political and economic trends. Tetlock asked the experts to make roughly 28,000 predictions estimating the probability of future events over a nineteen-year period, from 1984 to 2003. The questions were along the lines of: Would Gorbachev be ousted in a coup? Would the United States go to war in the Persian Gulf?
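Probability forecasts like these are typically graded with the Brier score: the squared gap between the probabilities a forecaster assigned and what actually happened, so confident wrong answers are punished hardest. Here is a minimal sketch of that scoring rule; the coup numbers below are illustrative, not figures from Tetlock's study.

```python
def brier_score(forecast_probs, outcome_index):
    """Multi-outcome Brier score: the sum of squared differences
    between the forecast probabilities and the realized outcome
    (1 for the event that happened, 0 for the rest).
    0.0 is a perfect forecast; 2.0 is maximally wrong."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast_probs)
    )

# A hedgehog-style call: P(coup) = 0.7, P(no coup) = 0.3; no coup occurs.
confident_wrong = brier_score([0.7, 0.3], outcome_index=1)  # 0.98

# A hedged 50/50 forecast scores better when the confident call misses.
hedged = brier_score([0.5, 0.5], outcome_index=1)  # 0.5

print(confident_wrong, hedged)
```

The asymmetry is the point: over many questions, a forecaster who is boldly and repeatedly wrong accumulates a worse score than one who honestly reports uncertainty.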

The results were embarrassing; monkeys throwing darts would have done better than the experts. The experts with the biggest media profiles were particularly bad forecasters. These are the presumed experts, like me, who often appear on TV programs, in newspaper columns, in web searches, and on bookshelves. Researchers, intrigued as to whether these results were exclusive to political predictions, tested the predictive accuracy of experts in other fields, such as technology trends and the outcomes of Supreme Court cases. The results were the same: experts almost always get it wrong.

Professor Tetlock’s study came to the attention of the intelligence community and prompted further work aimed at improving geopolitical and geoeconomic forecasting. The latest research, performed by the Good Judgment Project, suggests that some cognitive styles predict more accurately than others. Borrowing the two personality types Isaiah Berlin identified in his 1953 essay “The Hedgehog and the Fox,” the research compares the track record of predictive accuracy for “foxes” and “hedgehogs.”

Hedgehogs, in Tetlock’s terminology, are experts who confidently interpret events in terms of one big idea (e.g., climate change, terrorism, Donald Trump) that they use almost exclusively as their reference point. Foxes, in contrast, are less confident than hedgehogs. They are thinkers familiar with many small things and deeply skeptical of grand explanatory schemes. Foxes know many things, whereas hedgehogs know only one thing. Yet it is hedgehogs who dominate the media when it comes to forcefully predicting the future; and they are most often wrong.

Tetlock explains that being deeply knowledgeable about one subject narrows our focus and increases our confidence. This narrow focus and confidence obscure dissenting views until they are no longer visible to the hedgehog. As a result, what should be the collection and analysis of unbiased information turns, for the hedgehog, into a self-serving gathering of biased material. This process of self-deception manifests itself in the self-assurance hedgehogs display.

In other words, hedgehog thinkers, who know one big thing, assertively extrapolate its explanatory power into many spheres and are dismissive of those who “do not get it.” Fox thinkers, in contrast, seek to combine diverse sources of information and are far more hesitant about their ability to forecast future events.

Psychologists tell us that one reason we desire expert predictions is that we have a “need for closure”: we want an answer to a question. Even if that answer is wrong, we find it preferable to enduring a state of confusion and ambiguity. But then again, if experts are almost always wrong, why should we listen to these psychological experts?

Our world is complex and messy. Forecasting requires intellectual teamwork and the gathering of evidence from different sources. It involves thinking probabilistically and being willing to admit error and change course. Forecasting involves countless contingencies and variables, not just one big idea. Foxes are comfortable with this predictive environment; hedgehogs are not.

Let’s consider this the next time we listen to, or read, a prediction from an expert who sees the world in terms of one big explanatory idea. That expert is likely to be wrong. People and environments are full of surprises.
