Further Reading: The Humility Chapter
Tier 1: Verified Sources
Tetlock, Philip E., and Dan Gardner. Superforecasting: The Art and Science of Prediction. Crown, 2015. The definitive treatment of what makes people good and bad at prediction. Tetlock identifies the specific cognitive habits of superforecasters — thinking in probabilities, updating frequently, decomposing problems, seeking disconfirming evidence — that constitute epistemic humility in practice.
Schulz, Kathryn. Being Wrong: Adventures in the Margin of Error. Ecco, 2010. The most accessible and engaging exploration of the experience of being wrong. Schulz's central insight — that being wrong feels exactly like being right, right up until the moment you realize your error — is the foundation of this chapter.
Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011. The comprehensive treatment of the cognitive biases that produce overconfidence, including anchoring, availability, representativeness, and the illusion of validity. Kahneman's WYSIATI ("What You See Is All There Is") framework explains why metacognitive blindness is a structural feature of human cognition, not an individual failing.
Tetlock, Philip E. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press, 2005. The research demonstrating that expert political prediction is barely better than chance — and that the most confident and most media-prominent experts are often the worst predictors. Essential context for understanding why calibration matters.
Tavris, Carol, and Elliot Aronson. Mistakes Were Made (But Not by Me). Houghton Mifflin Harcourt, 2007. The psychology of self-justification — why people resist acknowledging their own errors. Tavris and Aronson explain the cognitive mechanisms (cognitive dissonance, confirmation bias, memory distortion) that make the humility described in this chapter so difficult to achieve and so valuable when achieved.
Tier 2: Attributed Claims
The finding that people typically get only 3 to 5 out of 10 items correct on exercises asking for 90% confidence intervals is among the most replicated results in judgment and decision-making research, documented across dozens of studies since the 1970s.
The Good Judgment Project, launched in 2011 and funded by IARPA, is well documented in Tetlock's publications and in public IARPA reports. The finding that superforecasters outperformed intelligence analysts with classified access is reported in Tetlock and Gardner (2015).
The concept of "pre-mortem" was developed by Gary Klein and is documented in his research on naturalistic decision-making.
Recommended Reading Sequence
- Start with Schulz (Being Wrong) — for the experience and philosophy of error
- Then Tetlock and Gardner (Superforecasting) — for the practices that improve calibration
- Then Kahneman (Thinking, Fast and Slow) — for the cognitive science of overconfidence
- Then Tavris and Aronson (Mistakes Were Made) — for why changing your mind is so hard
- Then Tetlock (Expert Political Judgment) — for the evidence on expert prediction failure