Books
Having a Crystal Ball

Superforecasting: The Art and Science of Prediction
By Philip E. Tetlock and Dan Gardner (Crown, 2015)
Could anyone have predicted in June that Donald Trump would lead the race for the Republican presidential nomination—in October? Quite possibly, at least according to Superforecasting: The Art and Science of Prediction, a new book from Philip E. Tetlock, a psychology and political science professor, and journalist Dan Gardner. “Foresight isn’t a mysterious gift bestowed at birth,” they write. “It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person.”
The thrust of the book is twofold. First, the authors straightforwardly describe the findings of Tetlock’s long-running Good Judgment Project (GJP), an experiment designed to identify the qualities that make for accurate forecasters. Second, and more ambitiously (if not exactly realistically), Tetlock wants to hold forecasters—whether TV pundits or intelligence analysts—accountable for getting it right. (Since the book is written in the first person from Tetlock’s point of view, I’ll follow the authors’ cue and refer to him alone.)
With its catchy title and promise of improving one’s ability to predict the future, Superforecasting at first appears to be Gladwellian pop science. But a self-help book this is not. The aim of superforecasters is not to predict whether they’ll snag that promotion or find the man of their dreams; they are weighing in on whether Yasser Arafat was poisoned with polonium or how many Syrian refugees will make the United Nations’ tally. “This book is not about how to be happy,” Tetlock writes. “It’s about how to be accurate.”
To accomplish this goal, Tetlock envisions applying the scientific method to the forecasting world, much as the medical community relies on randomized trials. We must demand forecasts that avoid fuzzy language, express probability with a number, and come with deadlines, he argues. It’s not just that “Donald Trump may remain a viable candidate for longer than you might expect.” It’s that “There is a 70% chance that Donald Trump will continue campaigning as a Republican until May 1, 2016.”
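Numbers and deadlines matter because they make a forecast scoreable once the deadline passes. The tournaments behind the book grade predictions with the Brier score, essentially the squared gap between the stated probability and what actually happened. A minimal sketch in Python (the function and sample values are mine, for illustration; the book’s variant sums over both possible outcomes, which doubles these numbers):

    def brier_score(forecast, outcome):
        """Squared error between a probability forecast and reality.

        forecast: stated probability of the event (0.0 to 1.0)
        outcome:  1 if the event occurred by the deadline, else 0
        Lower is better: 0.0 is perfect, 0.25 is a coin flip.
        """
        return (forecast - outcome) ** 2

    # "There is a 70% chance that Donald Trump will continue campaigning
    # as a Republican until May 1, 2016." Once May 1 arrives, the claim
    # can be scored; "may remain viable" never can.
    print(brier_score(0.70, 1))  # the event happened: 0.09
    print(brier_score(0.70, 0))  # it did not happen:  0.49

Fuzzy language dodges this arithmetic entirely, which is exactly Tetlock’s complaint.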
Tetlock has some experience with this. As he acknowledges, he is perhaps most famous for a metaphor—that experts are no better at predicting the future than a chimp throwing darts at a dartboard. The trope dates back to a forecasting experiment he began in the 1980s, in which he found that most of his 300 expert subjects were unable to out-predict random guessing, despite their expertise. What is often overlooked, however, is that some of the experts significantly outperformed the chimp, suggesting that there are qualities that improve forecasting ability.
Tetlock further tested this theory through a forecasting tournament sponsored by the Intelligence Advanced Research Projects Activity (IARPA), a federal agency that funds research to improve the U.S. intelligence community. Between September 2011 and May 2015, five teams competed to generate the most accurate predictions, mostly about current events. By the third year, Tetlock’s Good Judgment Project—made up of 2,800 volunteers—had done so well that IARPA dropped the other teams.
The results give rise to some tantalizing questions—Why did GJP succeed where top-notch universities and professional analysts failed? Why were some of Tetlock’s team members “superforecasters”?—that the book answers in a happily measured, nuanced way.
Of course, many GJP participants were smarter-than-average professionals with math or science backgrounds. A certain level of intelligence, comfort with numbers, and curiosity about current events were also immensely helpful. But Tetlock found that elbow grease and critical thinking skills were more important. “Intelligence and knowledge help but they add little beyond a certain threshold,” he writes, adding, “High-powered subject-matter experts would not be much better forecasters than attentive readers of the New York Times.”
The crucial step is to slow down and interrogate your first thought, to assume your first conclusion is wrong. (For this, Tetlock is indebted to Daniel Kahneman, the author of Thinking, Fast and Slow, who is frequently quoted in the book.) Another key element is to make small, thoughtful updates to one’s initial forecast. “What makes the difference is correctly identifying and responding to subtler information so you zero in on the eventual outcome faster than others,” he writes. And don’t underestimate the power of “grit”—the tenacity to hunt down the right information. (One superforecaster cited in the book tracks down the author of an obscure online paper to help inform one of his predictions.)
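The book makes its case for updating in prose, but the underlying arithmetic is Bayes’ rule, the ideal that, as Tetlock describes it, superforecasters intuitively approximate. A toy sketch (the scenario and numbers are hypothetical, mine rather than the book’s):

    def bayes_update(prior, likelihood_if_true, likelihood_if_false):
        """Nudge a probability in light of one new piece of evidence.

        prior: current probability that the event will happen
        likelihood_if_true / likelihood_if_false: how probable the new
            evidence is if the event will / will not happen
        """
        numerator = prior * likelihood_if_true
        return numerator / (numerator + (1 - prior) * likelihood_if_false)

    # Hypothetical: a forecaster at 60% reads a report twice as likely
    # to surface if the event is coming as if it is not.
    p = bayes_update(0.60, likelihood_if_true=0.8, likelihood_if_false=0.4)
    print(round(p, 2))  # 0.75: a real move, not a lurch

Modest likelihood ratios yield exactly the small, incremental revisions the quote describes.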
Moreover, Tetlock readily acknowledges the limitations of superforecasting—that no one can reliably predict events more than a few years out, that we sometimes overlook the role of luck. (Anyone can “predict” a stock crash, he notes, simply by warning of one constantly and waiting.) The GJP process of settling on a prediction is “slow, methodical and demanding.” He adds: “What makes [superforecasters] so good is less what they are than what they do.” In other words, you probably won’t try this at home.
Engagingly, Tetlock also repeatedly challenges the reader to doubt him, and two chapters focus on significant critiques of his work. “Beliefs are hypotheses to be tested, not treasures to be guarded,” he writes in true superforecaster fashion. “It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.”
Indeed, for all its careful analysis, the book is at its least successful when relying on supposedly clever terminology, such as “tip-of-the-nose” thinking and “foxes versus hedgehogs.” The terms are confusing, leading to monstrous metaphors—at one point Tetlock (half-jokingly) recommends acting like a fox with dragonfly eyes.
What emerges is readable and laudable, if less than earth-shattering. In the end, the findings are, well, predictable: an intelligent person who devotes time to researching a problem, narrows the parameters of the question, interrogates the hypothesis, and monitors new information will be better able to predict the future.
So will Superforecasting change television news? Will restrained commentary replace thoughtless punditry, even as the presidential election approaches? I can’t pretend to predict the future. But let’s face it: a chimp throwing darts makes for some great TV.