Book Review: Superforecasting: The Art & Science of Prediction
What most accurately describes the essence of intelligent, objective, public service-oriented politics? Is it primarily an honest competition among the dominant ideologies and ideas of our times, a self-interested quest for influence and power, or some combination of the two? Does it boil down to understanding the biological functioning of the human mind and how it sees and thinks about the world? Or is it something else entirely?
It turns out it isn’t even close. Superforecasting comes down squarely on the side of getting the biology right: getting things right requires being non-ideological about reality (facts) and common sense (logic). Everything else is a distant second.
Superforecasting: The Art & Science of Prediction, written by Philip E. Tetlock and Dan Gardner (Crown Publishers, September 2015), describes Tetlock’s ongoing research into what factors, if any, contribute to a person’s ability to predict the future. In Superforecasting, Tetlock asks how well average but intellectually engaged people can do compared to experts, including professional national security analysts with access to classified information. Tetlock and his research team found that the interplay between dominant, unconscious, distortion-prone intuitive human cognitive processes (“System 1”) and less influential but conscious, rational processes (“System 2”) was a key factor in how accurately people predicted future events.
Tetlock observes that a “defining feature of intuitive judgment is its insensitivity to the quality of the evidence on which the judgment is based. It has to be that way. System 1 can only do its job of delivering strong conclusions at lightning speed if it never pauses to wonder whether the evidence at hand is flawed or inadequate, or if there is better evidence elsewhere. . . . we are creative confabulators hardwired to invent stories that impose coherence on the world.”
It turns out that, with minimal training and the right mindset, some people, “superforecasters”, routinely trounce the experts. In a 4-year study, the “Good Judgment Project”, funded by the U.S. intelligence community’s Intelligence Advanced Research Projects Activity (IARPA), about 2,800 volunteers made over a million predictions on topics that ranged from potential conflicts between countries to currency fluctuations. Those predictions had to be, and were, precise enough to be analyzed and scored.
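The scoring the Good Judgment Project used is worth pausing on: probability forecasts were graded with Brier scores, the squared error between the stated probability and the 0/1 outcome. A minimal sketch, with illustrative numbers rather than data from the study:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.
    0.0 is perfect; a perpetual 50/50 hedge always scores 0.25."""
    return (forecast - outcome) ** 2

def mean_brier(forecasts, outcomes):
    """Average Brier score over a set of resolved questions."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster beats a hedger:
confident = mean_brier([0.9, 0.8, 0.1], [1, 1, 0])   # ~0.02
hedger    = mean_brier([0.5, 0.5, 0.5], [1, 1, 0])   # 0.25
```

This is why the predictions had to be precise: a vague “it might happen” cannot be scored, but “70% chance by June 1” can be, and the score rewards both accuracy and appropriately confident probabilities.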
About 1% of the 2,800 volunteers turned out to be superforecasters who beat national security analysts by about 30% at the end of the first year. One even beat commodities futures markets by 40%. The superforecaster volunteers did whatever they could to get information, but they nonetheless beat professional analysts who were backed by computers and programmers, spies, spy satellites, drones, informants, databases, newspapers, books and whatever else lots of money can buy. As Tetlock put it, “. . . . these superforecasters are amateurs forecasting global events in their spare time with whatever information they can dig up. Yet they somehow managed to set the performance bar high enough that even the professionals have struggled to get over it, let alone clear it with enough room to justify their offices, salaries and pensions.”
The top 1-2% of volunteers were carefully assessed for personal traits. In general, superforecasters tended to be people who were eclectic about collecting information and open-minded in their world view. They were also able to step outside of themselves and look at problems from an “outside view.” To do that, they sought out and aggregated other perspectives, which runs counter to the human tendency to seek out only information that confirms what we already know or want to believe, an unconscious tendency called confirmation bias. Open-mindedness also tended to reduce unconscious System 1 distortion of problems and potential outcomes by other cognitive biases, such as the powerful but subtle and hard-to-detect “what you see is all there is” bias, hindsight bias and scope insensitivity, i.e., failing to give proper weight to the scope of a problem.
Superforecasters tended to break complex questions down into component parts so that relevant factors could be considered separately, which also tends to reduce unconscious bias-induced fact and logic distortions. In general, superforecaster susceptibility to unconscious biases, e.g., rigid liberal or conservative political ideology, was significantly lower than for other participants. That appeared to be due mostly to their capacity to use conscious System 2 thinking to recognize and then reduce unconscious System 1 biases.
About 15 traits were common among superforecasters, including (i) cautiousness based on an innate knowledge that little or nothing was certain, (ii) being reflective, i.e., introspective and self-critical, and (iii) being pragmatic and not wedded to any particular agenda or political or economic ideology. Unlike political ideologues, they were pragmatic and did not try to “squeeze complex problems into the preferred cause-effect templates [or treat] what did not fit as irrelevant distractions.”
What the best forecasters knew about a topic and their political ideology was much less important than how they thought about problems, gathered information and then updated thinking and changed their minds based on new information. The best engaged in an endless process of information and perspective gathering, weighing information relevance and questioning and updating their own judgments when it made sense. It was work that required effort and discipline. Ideological rigidity was detrimental, not helpful.
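The updating habit maps naturally onto Bayes’ rule, revising a probability estimate as each piece of evidence arrives. This is an illustrative sketch of that idea, not code or numbers from the book:

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Revise a probability given new evidence, using the odds form of
    Bayes' rule. likelihood_ratio = P(evidence | event will occur) /
    P(evidence | event will not occur)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start at 30%; see evidence twice as likely if the event will occur:
p = bayes_update(0.30, 2.0)   # ~0.46
```

Each update is deliberately incremental: one moderately diagnostic piece of evidence moves the estimate from 30% to about 46%, not to certainty, which mirrors the small, frequent belief revisions the book describes.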
Regarding common superforecaster traits, Tetlock observed that “a brilliant puzzle solver may have the raw material for forecasting, but if he also doesn’t have an appetite for questioning basic, emotionally-charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking.” Superforecasters have a real capacity for self-critical thinking. Political, economic and religious ideology is mostly beside the point.
The topic of predicting the future might seem to have little relevance to politics and political policy. That belief is wrong, and Tetlock cites an example that makes the point crystal clear. In a 2014 interview, General Michael Flynn, then head of the Defense Intelligence Agency (the DoD’s 17,000-employee equivalent of the CIA), said “I think we’re in a period of prolonged societal conflict that is pretty unprecedented.” A quick Google search of the phrase “global conflict trends” and some reading was all it took to prove that belief wrong.
Why did Gen. Flynn, a high-ranking, intelligent and highly accomplished intelligence analyst, make such an important, easily avoided mistake? The answer lies in System 1 and its powerful but unconscious “what you see is all there is” (WYSIATI) bias. Flynn succumbed to his incorrect belief because he spent 3-4 hours every day reading intelligence reports filled with mostly bad news. In Flynn’s world, that was all there was. To his unconscious mind, his knowledge had to be correct, so he never bothered to check his basic assumption. Most superforecasters would not have made that mistake. They train themselves to relentlessly pursue information from multiple sources and would have found what Google had to say about the situation.
Tetlock asserts that partisan pundits opining on all sorts of things routinely fall prey to the WYSIATI bias for the same reason. They frequently don’t check their assumptions against reality and/or will knowingly lie to advance their agendas. Simply put, partisan pundits are usually wrong (about 85-90% of the time) because of their ideological rigidity and the intellectual sloppiness it engenders.
Limits and criticisms of forecasting
In Superforecasting, Tetlock points out that predicting the future has limits. Although he is not explicit about it, forecast accuracy for most questions appears to degrade for time frames more than about 18-36 months out, with predictions fading into randomness. That makes sense, given complexity and the number of factors that can affect outcomes. Politics and the flow of human events are simply too complicated for long-term forecasting to ever be feasible. What is not known is the ultimate time range beyond which the human capacity to predict fades into the noise of randomness. More research is needed.
A criticism of the research argues that superforecasters operating in a specified time frame, 1-year periods in this case, are flukes and they cannot defy psychological gravity for long. Instead, the criticism argues that superforecasters will simply revert to the mean and settle back to the ground the rest of us stand on. In other words, they would become more or less like everyone else with essentially no ability to predict future events.
The Good Judgment Project allowed testing of that concern. The result was the opposite of what the criticism predicted. Although some faded, many of the people identified as superforecasters at the end of year 1 actually got better in years 2 and 3 of the 4-year experiment. Apparently, those people not only learned to limit the capacity of their unconscious System 1 mental processes to distort fact and logic, but they also consciously maintained that skill and improved how the conscious, rational System 2 was able to counteract the fact- and logic-distorting lenses of unconscious System 1 biases. Although the mental effort needed to be objective was significant, most superforecasters could nonetheless defy psychological gravity, at least over a period of several years.
The intuitive-subjective politics problem
On the one hand, Tetlock sees a big upside for “evidence-based policy”: “It could be huge - an “evidence-based forecasting” revolution similar to the “evidence-based medicine” revolution, with consequences every bit as significant.” On the other hand, he recognizes the obstacle that intuitive or subjective (System 1 biased), status quo two-party partisan politics faces: “But hopelessly vague language is still so common, particularly in the media, that we rarely notice how vacuous it is. It just slips by. . . . If forecasting can be co-opted to advance their [narrow partisan or tribe] interests, it will be. . . . Sadly, in noisy public arenas, strident voices dominate debates, and they have zero interest in adversarial collaboration.”
The rational-objective politics theoretical solution
For evidence-based policy, Tetlock sees the Holy Grail of his research as “. . . . using forecasting tournaments to depolarize unnecessarily polarized policy debates and make us collectively smarter.” He asserts that consumers of forecasting need to “stop being gulled by pundits with good stories and start asking pundits how their past predictions fared - and reject answers that consist of nothing but anecdotes and credentials. And forecasters will realize . . . . that these higher expectations will ultimately benefit them, because it is only with the clear feedback that comes with rigorous testing that they can improve their foresight.”
What Tetlock is trying to do for policy will be uncomfortable for most ideologues who believe in subjective ideologies. That’s the problem with letting unbiased fact and logic roam free - they will go wherever they want without much regard for personal ideologies or morals. Tetlock advocates change via focusing policy and politics on understanding human biology and ideologically unbiased reality, not political ideology.
Tetlock focuses on evidence-based policy, while DP’s focus is on evidence-based or “objective” politics. In essence, Tetlock tries to coax pundits and policy makers into objectivity based on human cognitive science and higher competence by asking the public and forecast consumers to demand better from the forecasters they rely on to form opinions and world views. What Tetlock is trying to do is increase the capacity of our conscious minds to counteract the power of our unconscious minds to hide and distort fact and logic. Based on Tetlock’s research, optimal policy making, and by extension optimal politics, does not boil down to being more conservative, liberal, capitalist, socialist or Christian. Instead, it is a matter of finding an optimum balance in the distribution of mental influence between the heavily biased intuition-subjectivity of unconscious System 1 and the less biased reason-objectivity of conscious System 2.
That optimum balance, and objectivity based on it, won’t lead to perfect policy or politics. But the result will be significantly better in the long run than what the various, usually irrational, intuitive or subjective mindsets and narrow ideologies deliver now. Those narrow ideologies include American liberalism, conservatism, socialism, capitalism, Christianity and variants thereof such as libertarianism and environmentalism.
Whether objective politics attains widespread public acceptance over time is an open question. People don’t like being objective; it goes against the grain of human cognitive biology. However, Tetlock sees some hope and points to one engineer, who observed that “it’s going to get stranger and stranger” for people to listen to the advice of experts whose views are informed only by their subjective judgment. Time will tell how this new ideology’s struggle against extinction plays out.