Further Reading: Chapter 6
Ellenberg, Jordan. How Not to Be Wrong: The Power of Mathematical Thinking. Penguin Press, 2014. The best accessible mathematics book for non-mathematicians. Chapters on probability and statistical reasoning are directly relevant. Ellenberg explains expected value, base rates, and Bayesian thinking with clarity and genuine mathematical depth. Start here.
Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011. Chapters 11–16 cover probability biases (representativeness, availability, anchoring) that directly explain why human probability intuition fails. Foundational.
Silver, Nate. The Signal and the Noise: Why So Many Predictions Fail — but Some Don't. Penguin Press, 2012. Silver is a Bayesian, and this book is largely about how to think probabilistically in domains ranging from weather forecasting to politics. Directly relevant to the 2016 election case study.
Gigerenzer, Gerd. Reckoning with Risk: Learning to Live with Uncertainty. Penguin Books, 2002. Gigerenzer argues that the way probability is communicated (percentages vs. natural frequencies) dramatically affects comprehension. The medical test examples in Chapter 6 draw from his work. He advocates for "natural frequency" framing as more intuitive and less error-prone.
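To see why the framing matters, here is a minimal sketch of the kind of medical-test calculation Gigerenzer discusses, done both ways. The numbers (1% prevalence, 90% sensitivity, 9% false-positive rate) are illustrative assumptions in the spirit of his examples, not figures from the book:

```python
# Illustrative (assumed) numbers: 1% prevalence, 90% sensitivity,
# 9% false-positive rate.
prevalence = 0.01
sensitivity = 0.90
false_positive_rate = 0.09

# Percentage framing: Bayes' rule on the raw probabilities.
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
p_disease_given_positive = (prevalence * sensitivity) / p_positive

# Natural-frequency framing: imagine 1,000 concrete people.
n = 1000
sick = round(n * prevalence)                                # 10 people are sick
sick_positive = round(sick * sensitivity)                   # 9 of them test positive
healthy_positive = round((n - sick) * false_positive_rate)  # 89 false positives

print(f"Bayes' rule: {p_disease_given_positive:.3f}")       # ~0.092
print(f"Natural frequencies: {sick_positive} of "
      f"{sick_positive + healthy_positive} positives are truly sick")
```

The answer is the same either way, but "9 sick people out of 98 positives" is far easier to grasp than a conditional-probability formula, which is Gigerenzer's point.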
Mlodinow, Leonard. The Drunkard's Walk: How Randomness Rules Our Lives. Pantheon Books, 2008. A readable exploration of how random processes govern outcomes we typically attribute to skill or character. The chapters on coincidence and rare events are particularly relevant to the "surprised by the wrong things" section.
Tversky, Amos, and Daniel Kahneman. "Judgment Under Uncertainty: Heuristics and Biases." Science 185, no. 4157 (1974): 1124–1131. The foundational paper on cognitive biases in probability judgment. Introduces representativeness, availability, and anchoring as systematic sources of error. Dense but worth reading directly.
Tetlock, Philip E., and Dan Gardner. Superforecasting: The Art and Science of Prediction. Crown Publishers, 2015. Research on why some people are dramatically better at probabilistic forecasting. The book's core finding: "superforecasters" are better calibrated, more Bayesian in their updates, and more comfortable with genuine uncertainty. Relevant to the calibration section.
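Tetlock's forecasting research scored participants with the Brier score, the standard measure of probabilistic forecast accuracy. A minimal sketch of how it works, with hypothetical forecasts and outcomes chosen for illustration:

```python
# Brier score: mean squared error between probability forecasts and
# binary outcomes. Lower is better; always guessing 50% scores 0.25.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: confident and mostly right.
forecasts = [0.9, 0.8, 0.1, 0.7, 0.2]
outcomes  = [1,   1,   0,   1,   0]
print(brier_score(forecasts, outcomes))  # 0.038
```

A well-calibrated forecaster's stated probabilities match observed frequencies (events given 70% really happen about 70% of the time), and the Brier score rewards exactly that combination of calibration and decisiveness.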