Exercises: Precision Without Accuracy
Difficulty Guide: ⭐ Foundational | ⭐⭐ Intermediate | ⭐⭐⭐ Challenging | ⭐⭐⭐⭐ Advanced/Research
Part A: Conceptual Understanding ⭐
A.1. Explain the precision-accuracy distinction using the archery analogy. Why is Archer C (precise but not accurate) more dangerous than Archer D (neither)?
A.2. What is the "specificity heuristic"? Why does "the market will return 7.3%" feel more authoritative than "the market will probably go up"?
A.3. Define "error propagation." How does combining imprecise measurements through calculation compound the imprecision?
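The compounding in A.3 can be made concrete with a short simulation. The numbers here are invented for illustration: two measurements, each with 5% relative uncertainty, combined by multiplication. The product's relative uncertainty is larger than either input's, and (for independent errors) close to the quadrature sum of the two.

```python
import random
import statistics

# Hypothetical illustration: two measurements, each with ~5% relative
# uncertainty (one standard deviation), combined by multiplication.
random.seed(0)

def sample(mean, rel_sd, n=100_000):
    """Draw n noisy readings of a quantity with the given relative sd."""
    return [random.gauss(mean, mean * rel_sd) for _ in range(n)]

a = sample(10.0, 0.05)   # measurement A: 10, ±5%
b = sample(20.0, 0.05)   # measurement B: 20, ±5%
product = [x * y for x, y in zip(a, b)]

rel_sd_product = statistics.stdev(product) / statistics.mean(product)
# For independent errors this lands near sqrt(0.05**2 + 0.05**2) ≈ 0.07:
print(f"relative uncertainty of product: {rel_sd_product:.3f}")
```

Chain more operations and the uncertainty keeps growing, even though the final number prints with just as many digits as the inputs.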
A.4. What is an "uncertainty budget"? How would it change the way a GDP forecast is presented?
A.5. Explain why the institutional demand for legible numbers drives false precision.
Part B: Applied Analysis ⭐⭐
B.1. Choose a precise number used in your field. Estimate its actual confidence interval. If the confidence interval were visible, would the number still drive the same decisions?
B.2. Identify a sharp cutoff in your field (a numerical threshold that produces a categorical decision). Analyze whether the measurement's precision justifies the sharpness of the cutoff.
B.3. Compare the VaR failure in finance to the IQ cutoff in criminal justice. What structural features do these cases share?
B.4. Apply the "uncertainty budget" concept to a quantitative claim in your field. List all sources of uncertainty and estimate the total uncertainty.
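A worked sketch of the uncertainty-budget exercise in B.4, using an invented GDP-growth example. Every source name and value below is hypothetical, chosen only to show the bookkeeping: list each source with a one-sigma estimate, then (if the sources are plausibly independent) combine them in quadrature.

```python
import math

# Hypothetical uncertainty budget for an illustrative GDP-growth estimate.
# Sources and magnitudes (percentage points) are invented for this sketch,
# not real statistics.
budget = [
    ("survey sampling error",        0.30),
    ("seasonal-adjustment choices",  0.20),
    ("later data revisions",         0.40),
    ("price-deflator error",         0.25),
]

# Independent one-sigma uncertainties combine in quadrature.
total = math.sqrt(sum(u ** 2 for _, u in budget))

for source, u in budget:
    print(f"{source:30s} ±{u:.2f} pp")
print(f"{'combined (quadrature)':30s} ±{total:.2f} pp")
```

Presenting the forecast as, say, "2.1% ± 0.6 pp" with this table attached is the uncertainty budget in action: the headline number survives, but the decimal place no longer implies knowledge the estimate does not have.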
B.5. The chapter argues that Nate Silver's 2016 forecast (a 29% chance of a Trump win) was better calibrated than models that put Clinton's chances at 99%. Explain why a less confident probability estimate can be more accurate.
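One way into B.5 is a single-event Brier score: the squared error between the stated probability and the 0/1 outcome, lower being better. A single event cannot establish calibration, which is a property of many forecasts taken together, but the scoring shows why a surprise outcome punishes the near-certain forecast far more than the modest one.

```python
# Brier score for one forecast of one event (lower is better).
def brier(p_event: float, occurred: bool) -> float:
    return (p_event - (1.0 if occurred else 0.0)) ** 2

silver = brier(0.29, True)      # forecast: 29% chance; the event happened
confident = brier(0.01, True)   # forecast: 1% chance (i.e. 99% against)

print(f"29% forecast score: {silver:.4f}")
print(f" 1% forecast score: {confident:.4f}")
```

The 29% forecast scores roughly half the penalty of the 1% forecast. Over many forecasts, a well-calibrated forecaster's "29%" events should occur about 29% of the time; that track record, not any single call, is what "accurate" means here.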
Part C: Research Design Challenges ⭐⭐–⭐⭐⭐
C.1. Design a policy for your organization that requires uncertainty communication alongside every quantitative report.
C.2. Propose an alternative to BMI for clinical use that balances accuracy (measuring what matters) with practical utility (usable in routine clinical encounters).
Part D: Synthesis & Critical Thinking ⭐⭐⭐
D.1. The chapter argues precision is an "amplifier" that makes every other failure mode more persuasive. Trace this amplification through three specific failure modes from earlier chapters.
D.2. Is it ever appropriate to report false precision? (Consider: patients may be more compliant with a precise medication schedule than a vague one. Precise targets may motivate action better than ranges.) When does the benefit of false precision outweigh its cost?
D.3. The fat-tail problem shows that normal-distribution models underestimate extreme risks. But fat-tail models are less precise (they produce ranges rather than point estimates). Design an institutional decision process that uses fat-tail models despite their imprecision.
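The gap D.3 asks you to design around can be sized with a quick simulation. The parameters here (a 6-sigma move, a Student-t with 3 degrees of freedom as the fat-tailed stand-in) are illustrative choices, not taken from the chapter: the point is only the orders-of-magnitude disagreement between the two models about the same extreme event.

```python
import math
import random

# How often does a 6-sigma move occur under a normal model versus a
# fat-tailed Student-t(3) model? Parameters are illustrative.
random.seed(1)
N = 300_000

def student_t3():
    """Draw from Student-t (df=3) as normal / sqrt(chi-squared / df)."""
    g = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return g / math.sqrt(chi2 / 3.0)

# Normal model: exact two-sided 6-sigma tail probability (about 2e-9).
normal_tail = math.erfc(6.0 / math.sqrt(2.0))

# Fat-tailed model: simulate, rescaling t(3) to unit variance first
# (its raw variance is df / (df - 2) = 3).
t_tail = sum(abs(student_t3()) / math.sqrt(3.0) > 6.0 for _ in range(N)) / N

print(f"normal model: {normal_tail:.1e}")
print(f"t(3) model:   {t_tail:.1e}")
```

The normal model calls the event a once-in-geological-time impossibility; the fat-tailed model makes it merely rare. An institution that acts on the normal model's precise tiny number is the one the exercise asks you to redesign.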
Part M: Mixed Practice (Interleaved) ⭐⭐–⭐⭐⭐
M.1. (From Ch.4) Goodhart's Law + precision: when a precise metric becomes a target, does the false precision make the Goodhart effect worse?
M.2. (From Ch.11) The rating agencies assigned precise ratings (AAA) to imprecise risks. How did the incentive structure and the precision problem interact?
M.3. (From Ch.5) Publication bias inflates effect sizes. Precision without accuracy makes these inflated effects appear more reliable. Trace the interaction.
M.4. (Integration) Update your Epistemic Audit with the precision-accuracy diagnostic.
Part E: Research & Extension ⭐⭐⭐⭐
E.1. Read Taleb's The Black Swan (chapters on fat tails). Write a 1,500-word analysis of how the normal distribution assumption creates false precision in risk modeling.
E.2. Investigate the precision of a specific measurement in your field. Document the reported precision, the actual confidence interval, and the gap between them.
Solutions
Selected solutions in appendices/answers-to-selected.md.