Case Study: The Superforecasters — What Calibrated People Do Differently

The Discovery

Philip Tetlock spent decades studying expert prediction. His first major finding, published in Expert Political Judgment (2005), was discouraging: most experts predicted the future no better than chance, and their confidence far exceeded their accuracy. The more famous the expert and the more media exposure they had, the worse their predictions tended to be.

But Tetlock didn't stop there. In 2011, he launched the Good Judgment Project — a forecasting tournament sponsored by the Intelligence Advanced Research Projects Activity (IARPA) — that identified a small group of people who predicted geopolitical events with remarkable accuracy. He called them "superforecasters."

The superforecasters were not smarter than other experts (though they were generally intelligent). They were not better informed (they used the same publicly available information). They were not from elite institutions. What distinguished them was how they thought — and their cognitive habits are a practical demonstration of epistemic humility in action.

What Superforecasters Do

1. They think in probabilities, not certainties. Superforecasters rarely say "this will happen" or "this won't happen." They say "there is a 70% chance of X." This forces them to quantify their uncertainty rather than hiding it behind vague verbal qualifiers like "likely" or "a real possibility."

2. They update frequently. When new information arrives, superforecasters adjust their estimates — sometimes by a few percentage points, sometimes dramatically. They treat each estimate as provisional rather than as a commitment.
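The text does not prescribe a formula for updating, but the standard formalization of this habit is Bayes' rule: scale the prior by how much more likely the new evidence is under each hypothesis. A minimal sketch in Python; the numbers are illustrative, not drawn from the Good Judgment Project:

```python
def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Posterior probability of the event after observing one piece of evidence."""
    joint_true = prior * p_evidence_given_true
    joint_false = (1 - prior) * p_evidence_given_false
    return joint_true / (joint_true + joint_false)

# Illustrative numbers: a 30% prior, and evidence twice as likely
# to appear if the event is actually on its way.
posterior = bayes_update(0.30, 0.60, 0.30)  # ~0.46: a moderate, not total, shift
```

Weak evidence (a likelihood ratio near 1) moves the estimate by a few points; strong evidence moves it dramatically — matching the "sometimes by a few percentage points, sometimes dramatically" pattern above.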

3. They decompose problems. Instead of making a single holistic judgment ("Will Russia invade Ukraine?"), they break the question into sub-questions ("What is Russia's military readiness? What are the costs and benefits of invasion from Putin's perspective? What is the probability of a diplomatic resolution?") and combine the sub-estimates. This forces them to examine their assumptions rather than relying on intuition.
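One simple way to combine sub-estimates — a sketch of the idea, not the forecasters' actual procedure — is the chain rule: when the event requires several conditions to hold, multiply each probability conditioned on the ones before it. The decomposition and the numbers below are hypothetical:

```python
# Hypothetical decomposition of one big question into sub-questions.
# Each factor after the first is conditioned on the ones before it (chain rule),
# so the product is the probability that all three hold together.
p_military_readiness = 0.8          # P(capability is in place)
p_decision_given_readiness = 0.5    # P(leadership chooses to act | capability)
p_no_deal_given_decision = 0.6      # P(no diplomatic resolution | the above)

p_event = p_military_readiness * p_decision_given_readiness * p_no_deal_given_decision
# 0.8 * 0.5 * 0.6 = 0.24: each sub-estimate is defensible on its own,
# yet the combined probability is lower than holistic intuition might suggest.
```

Forcing the estimate through explicit factors is what exposes the assumptions: each conditional probability is a claim that can be examined and challenged separately.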

4. They actively seek disconfirming evidence. Most people seek information that confirms what they already believe. Superforecasters deliberately look for reasons their current estimate might be wrong — the personal equivalent of a red team.

5. They keep score. Superforecasters track their predictions and check them against outcomes. They know their calibration score. They know which types of questions they tend to get right and which they tend to get wrong. This feedback loop is essential — without it, overconfidence cannot be detected.
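The Good Judgment Project scored forecasters with the Brier score: the mean squared difference between stated probabilities and binary outcomes. A minimal implementation — the example track record is made up:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.
    0.0 is perfect; 0.25 is what hedging everything at 50% earns."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up record over three resolved questions: stated probabilities
# vs. what actually happened (1 = occurred, 0 = did not).
score = brier_score([0.9, 0.2, 0.7], [1, 0, 1])  # lower is better
```

The score rewards both accuracy and honesty about uncertainty: confidently wrong answers are punished quadratically, which is why overconfidence shows up quickly once you keep score.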

6. They distinguish between the knowable and the unknowable. Superforecasters are comfortable saying "I don't know." They recognize that some questions are genuinely unpredictable and that expressing false confidence about unknowable events degrades their overall calibration.

The Connection to Epistemic Humility

Every superforecaster habit maps onto the epistemic humility framework:

  • Think in probabilities → Calibrated uncertainty: confidence matches evidence
  • Update frequently → Conclusions are provisional, subject to revision
  • Decompose problems → Expose assumptions to scrutiny
  • Seek disconfirming evidence → Active search for your own errors
  • Keep score → External feedback on accuracy
  • Distinguish knowable/unknowable → Metacognitive awareness of limits

The superforecasters demonstrate that epistemic humility is not a vague attitude — it is a set of specific, learnable cognitive practices. And these practices produce measurably better outcomes: superforecasters outperformed intelligence analysts who had access to classified information, because their habits produced better-calibrated judgments even from less data.

The Institutional Implication

Superforecasting is an individual practice — but it has institutional implications. If individual calibration can be improved through specific practices, institutional calibration might be improved through specific structures:

  • Prediction markets (Chapter 34) institutionalize the probability-thinking and score-keeping habits
  • Pre-registration (Chapter 34) institutionalizes the decomposition and assumption-exposure habits
  • Post-publication review (Chapter 34) institutionalizes the disconfirming-evidence-seeking habit
  • After-action reviews (Chapter 28) institutionalize the score-keeping habit — when they work

The gap between individual superforecasters and institutional practice is the gap between what is cognitively possible and what is structurally incentivized. Superforecasters practice epistemic humility because they choose to. Most institutions do not practice it because their incentive structures reward confidence over calibration.

Analysis Questions

1. Superforecasters are better calibrated than intelligence analysts with access to classified information. What does this tell you about the relative importance of information vs. cognitive habits in producing accurate beliefs?

2. Each superforecaster habit requires cognitive effort: thinking in probabilities is harder than thinking in certainties, and seeking disconfirming evidence is harder than confirming what you already believe. What structural features of most professional environments discourage these habits? How could those structures be changed?

3. Could the superforecasting practices be taught to everyone in a field — not just to self-selected forecasting enthusiasts? Design a training program for professionals in your field based on the six superforecaster habits. What resistance would you expect?