Case Study 2: Calibrating Your Confidence in Psychology as a Science

The Exercise

Here is a meta-calibration exercise: rate your confidence (0–100%) in these statements about psychology as a field, then compare to the evidence-based assessment.

| Statement | Your Rating | Evidence Assessment |
|---|---|---|
| "Psychology has produced genuine, useful knowledge" | ___ | 90% (cognitive, clinical, developmental, health psych) |
| "The replication crisis has been adequately addressed" | ___ | 50% (reforms underway; legacy literature still biased) |
| "Psychology findings generalize across cultures" | ___ | 40% (WEIRD bias is substantial) |
| "Pre-registered studies are more reliable than older ones" | ___ | 80% (by design; reduces p-hacking and HARKing) |
| "Pop psychology accurately represents the science" | ___ | 15% (most pop psych is oversimplified) |
| "You can evaluate psychology claims yourself" | ___ | 75% (with the toolkit; not perfectly, but much better than without it) |
| "Psychology will continue to improve its methods" | ___ | 75% (the reform movement is strong) |

What Good Calibration Looks Like

A well-calibrated person:

- Has high confidence in well-replicated findings (CBT, Big Five, exercise, cognitive biases)
- Has moderate confidence in partially supported findings (attachment dimensions, meditation, growth mindset for at-risk students)
- Has low confidence in debunked claims (learning styles, ego depletion, MBTI, NLP, 10% brain)
- Has explicit uncertainty about genuinely unresolved questions (social media effects, depression prevalence trends, epigenetic inheritance of trauma)
- Doesn't pretend to know what isn't known — saying "I don't know" when the evidence is genuinely mixed

A poorly calibrated person makes one of three errors:

- High confidence across the board ("all psychology is true") — the credulous error
- Low confidence across the board ("all psychology is fake") — the nihilistic error
- Mismatched confidence (high confidence in debunked claims, low confidence in well-established ones) — the misinformation error

Your Calibration Score

If your ratings in the calibration exercise are within 15 percentage points of the evidence assessments for most items, you're well-calibrated. If they're off by 30+ points for multiple items, revisit the relevant chapters.
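The scoring rule above can be sketched in a few lines of Python. This is a minimal illustration, not part of the exercise itself: the dictionary keys are hypothetical short labels for the seven statements, and the thresholds (15 points for "close", 30 points for "far", 2 far misses to trigger review) are taken directly from the text.

```python
# Evidence assessments from the table above (keys are hypothetical short labels).
EVIDENCE = {
    "genuine knowledge": 90,
    "replication crisis addressed": 50,
    "cross-cultural generalization": 40,
    "pre-registered more reliable": 80,
    "pop psych accurate": 15,
    "self-evaluation possible": 75,
    "methods will improve": 75,
}

def calibration_report(ratings):
    """Compare your 0-100 ratings against the evidence assessments.

    Within 15 points on most items -> well-calibrated.
    Off by 30+ points on multiple items -> revisit the chapters.
    """
    gaps = {k: abs(ratings[k] - v) for k, v in EVIDENCE.items()}
    close = sum(g <= 15 for g in gaps.values())   # items rated near the evidence
    far = sum(g >= 30 for g in gaps.values())     # items badly off
    if far >= 2:
        verdict = "revisit the relevant chapters"
    elif close > len(EVIDENCE) / 2:
        verdict = "well-calibrated"
    else:
        verdict = "mixed calibration"
    return gaps, verdict
```

For example, rating every statement exactly at the evidence assessment yields zero gaps and the "well-calibrated" verdict, while missing two items by 30+ points flags the ratings for review.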

The goal is not to memorize specific confidence numbers. The goal is to develop an intuition for evidence strength that automatically adjusts your confidence based on the quality of the support behind a claim.

Discussion Questions

  1. Is it possible to be perfectly calibrated? What would perfect calibration require?
  2. The nihilistic error (trusting nothing) and the credulous error (trusting everything) are both wrong. Which is more common after reading a book like this?
  3. How do you maintain calibration over time, as new evidence accumulates and some findings are updated?
  4. If you teach someone the toolkit, how do you assess whether they've become well-calibrated?