Chapter 15 Self-Assessment Quiz

Calibration: Why You Think You Know It When You Don't (and How to Fix It)

Instructions: This quiz is itself a calibration exercise. Before answering each question, rate your confidence (High / Medium / Low) that you'll get it right. After finishing, check your answers and compare your confidence ratings to your actual results. The pattern you find IS your calibration data for this chapter's material.


Section 1: Multiple Choice

Choose the best answer for each question.

1. Calibration is best defined as:

a) The ability to discriminate between items you know and items you don't
b) The degree to which your overall confidence matches your overall accuracy
c) The process of adjusting your study strategies based on feedback
d) The speed at which you can retrieve information from long-term memory

Your confidence: High / Medium / Low


2. The overconfidence effect refers to:

a) A tendency specific to arrogant or careless people
b) A tendency to overestimate performance only on exams
c) A systematic bias in which people's confidence consistently exceeds their accuracy
d) A bias that only affects novices and disappears with expertise

Your confidence: High / Medium / Low


3. The hard-easy effect predicts that:

a) People are overconfident on easy items and underconfident on hard items
b) People are overconfident on hard items and underconfident on easy items
c) People are equally overconfident on all items regardless of difficulty
d) Difficulty has no systematic effect on calibration

Your confidence: High / Medium / Low


4. The unskilled-and-unaware problem describes the finding that:

a) Highly skilled people tend to dramatically underestimate their performance
b) People who perform worst on a task are also the worst at estimating their performance, and tend to dramatically overestimate it
c) Skilled people are overconfident and unskilled people are underconfident
d) Training makes calibration worse because knowledge creates more uncertainty

Your confidence: High / Medium / Low


5. Why is the unskilled-and-unaware problem described as a "double bind"?

a) Because it affects both skilled and unskilled people equally
b) Because the skills needed to perform well are the same skills needed to evaluate performance accurately — so low performers lack both
c) Because the problem gets worse with practice
d) Because it affects both academic and non-academic domains

Your confidence: High / Medium / Low


6. Which of the following is NOT one of the cognitive cues that produce overconfidence?

a) Fluency — how easily information comes to mind
b) Familiarity — how "known" or "seen before" something feels
c) Retrieval effort — how hard you have to work to recall something
d) Coherence — how well your mental model "hangs together"

Your confidence: High / Medium / Low


7. Hindsight bias is:

a) The tendency to predict future performance based on past performance
b) The tendency, after learning an outcome, to believe you predicted it or would have predicted it
c) The tendency to study material from the end of a chapter first
d) The tendency to remember your successes more than your failures

Your confidence: High / Medium / Low


8. Foresight bias is:

a) The tendency to plan too far ahead when studying
b) The tendency to believe you'll know answers on a test because the material feels familiar before the test
c) The tendency to predict that future exams will be easier than they are
d) The tendency to study material in chronological order

Your confidence: High / Medium / Low


9. In Mia Chen's calibration story, she predicted a B+ on her second biology exam and got a C-. On her third exam, she predicted a C and got a B+. What lesson does this illustrate?

a) Studying less leads to better outcomes
b) Your confidence, left to its own devices, is a poor predictor of accuracy — you need external data to calibrate
c) Lowering expectations always improves performance
d) Overconfidence is only a problem on the first exam

Your confidence: High / Medium / Low


10. According to the chapter, when researchers test people who rate their confidence at "100% sure," those people are typically correct:

a) 100% of the time
b) 95-98% of the time
c) 85-90% of the time
d) 70-75% of the time

Your confidence: High / Medium / Low


11. The predict-test-compare cycle works as a calibration training technique because:

a) It forces you to study harder before each test
b) It provides granular feedback that lets you build a personal history of how your confidence maps to your accuracy
c) It reduces test anxiety by making predictions a habit
d) It eliminates the overconfidence effect after a single use

Your confidence: High / Medium / Low


12. Confidence interval practice targets overconfidence by:

a) Asking you to lower your confidence on every prediction
b) Requiring you to express uncertainty as a range rather than a single point, which forces acknowledgment of uncertainty
c) Having you predict intervals between study sessions
d) Teaching you to always pick the middle option on multiple-choice tests

Your confidence: High / Medium / Low


Section 2: True/False with Justification

Determine whether each statement is true or false based on the chapter, then write 1-2 sentences explaining your reasoning.

13. True or False: A student who is overconfident on hard items and underconfident on easy items is displaying the hard-easy effect.

Your confidence: High / Medium / Low
Your justification: ___


14. True or False: Experience and expertise completely eliminate the overconfidence effect.

Your confidence: High / Medium / Low
Your justification: ___


15. True or False: The chapter's threshold concept is that calibration unreliability is a permanent, uncorrectable flaw in human cognition.

Your confidence: High / Medium / Low
Your justification: ___


Section 3: Short Answer

Answer in 2-5 sentences. Aim for clarity and precision.

16. Explain why experience doesn't automatically correct overconfidence. Name at least two mechanisms that protect overconfidence from self-correction.

Your confidence: High / Medium / Low


17. In the Kenji tracking sheet example, Diane posted a record of Kenji's predictions vs. actual quiz scores on the refrigerator. Why was this simple intervention so effective? Connect it to at least one calibration training technique from the chapter.

Your confidence: High / Medium / Low


18. The chapter argues that calibration matters more than resolution for one critical reason. What is that reason? Explain in your own words.

Your confidence: High / Medium / Low


Section 4: Applied Scenario

19. Read the following scenario and answer the questions that follow.

Scenario: A nursing student takes five practice exams over the course of a semester. For each, she records her predicted score and actual score:

Practice Exam   Predicted Score   Actual Score
1               88%               71%
2               85%               73%
3               82%               75%
4               78%               77%
5               79%               81%

a) Calculate the calibration gap for each practice exam. What trend do you notice?

b) On Practice Exam 5, the student slightly underestimated her score. Is this a calibration failure, or is it evidence of good calibration development? Explain.

c) What is the most likely explanation for the improvement in her calibration over time?

d) How does her trajectory compare to Mia Chen's calibration arc in the chapter?

Your confidence: High / Medium / Low


20. Metacognitive exercise: Now that you've completed this quiz, go back and analyze your confidence ratings.

  • Count how many "High confidence" items you got right and how many you got wrong.
  • Count how many "Medium confidence" items you got right and how many you got wrong.
  • Count how many "Low confidence" items you got right and how many you got wrong.

Now answer:

a) Is your resolution for this chapter's material good? (Are high-confidence items mostly right and low-confidence items mostly wrong?)

b) Is your calibration for this chapter's material good? (Does your overall confidence roughly match your overall accuracy?)

c) Do you see the hard-easy effect? (Were you more overconfident on harder questions and more accurate — or underconfident — on easier questions?)

d) How does your calibration on this quiz compare to the calibration you observed in the Section 15.5 exercise (if you completed it)?
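If you kept a digital log of your ratings and results, the tally in question 20 can be sketched in a few lines of Python. The quiz data below is made up for illustration, and the mapping from High/Medium/Low to rough probabilities is an assumption for the sketch, not something the chapter specifies:

```python
from collections import defaultdict

# Hypothetical quiz log: (confidence rating, answered correctly?)
results = [
    ("High", True), ("High", True), ("High", False),
    ("Medium", True), ("Medium", False),
    ("Low", False), ("Low", True),
]

# Resolution check: tally right/wrong within each confidence level.
tally = defaultdict(lambda: [0, 0])  # level -> [right, wrong]
for level, correct in results:
    tally[level][0 if correct else 1] += 1

for level in ("High", "Medium", "Low"):
    right, wrong = tally[level]
    print(f"{level}: {right} right, {wrong} wrong")

# Calibration check: does mean confidence roughly match overall accuracy?
# The probability weights below are an illustrative assumption.
weights = {"High": 0.9, "Medium": 0.7, "Low": 0.5}
mean_confidence = sum(weights[lvl] for lvl, _ in results) / len(results)
accuracy = sum(correct for _, correct in results) / len(results)
print(f"Mean confidence ~{mean_confidence:.0%}, accuracy {accuracy:.0%}")
```

The point of the sketch is the separation of the two checks: the per-level tally answers question (a) about resolution, while the single confidence-vs-accuracy comparison answers question (b) about calibration.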


Answer Key

Section 1: Multiple Choice

1. b) Calibration is the degree to which your overall confidence matches your overall accuracy. Option (a) describes resolution, not calibration. Options (c) and (d) describe metacognitive control and retrieval speed, respectively.

2. c) The overconfidence effect is a systematic bias — not specific to arrogant people (a), not limited to exams (b), and not eliminated by expertise (d), though expertise can reduce it.

3. b) The hard-easy effect shows overconfidence on hard items (where you don't know enough to appreciate how much you're missing) and underconfidence on easy items (where you know enough to imagine ways you could be wrong). Option (a) has the pattern reversed.

4. b) The core finding is that the lowest performers dramatically overestimate their performance. Option (a) describes a partial truth (skilled people do sometimes slightly underestimate), but the defining feature is the large overestimation by low performers.

5. b) The "double bind" refers to the fact that competence in a domain and the ability to evaluate one's competence in that domain rely on the same underlying skills. If you lack the skills to perform well, you also lack the skills to realize you're performing poorly.

6. c) Retrieval effort is actually associated with lower confidence (things that are hard to recall feel less known), and the chapter identifies it as a potential corrective to overconfidence, not a cause. The other three — fluency, familiarity, and coherence — are all described as cues that inflate confidence.

7. b) Hindsight bias is the retroactive revision of your memory of your predictions — believing, after learning the answer, that you "knew it all along." This erases evidence of your overconfidence by rewriting your prediction history.

8. b) Foresight bias is the forward-looking counterpart of hindsight bias: before a test, you predict strong performance because the material feels familiar. It's driven by fluency illusions (from Chapter 8) and inflates pre-test confidence.

9. b) Mia's story illustrates that feelings of confidence (whether high or low) don't reliably predict performance. Neither her overconfidence on exam 2 nor her underconfidence on exam 3 was accurate. She needs external data — not feelings — to calibrate.

10. c) When people rate their confidence at 100% ("absolutely certain"), they are typically correct about 85-90% of the time. Even your strongest certainty is wrong roughly one time in eight. This is one of the most striking demonstrations of the overconfidence effect.

11. b) The predict-test-compare cycle works by providing the granular, item-level feedback that your brain can't generate internally. Over time, you build up a personal dataset of how your confidence levels map to your actual accuracy, allowing your internal signals to self-correct.

12. b) Confidence intervals force you to express uncertainty as a range ("between 72 and 88") rather than a false-precision point estimate ("85"). This makes you explicitly acknowledge the possibility of being wrong and think about the full range of plausible outcomes.

Section 2: True/False with Justification

13. True. This is the textbook definition of the hard-easy effect: overconfidence on hard items (confidence exceeds accuracy) and underconfidence on easy items (accuracy exceeds confidence). The pattern is driven by difficulty-dependent access to the information needed for accurate self-assessment.

14. False. Experience and expertise reduce but do not completely eliminate the overconfidence effect. The chapter notes that experts show a reduced hard-easy effect compared to novices, but the bias never fully disappears. Hindsight bias, selective memory, and lack of granular feedback all protect overconfidence from correction even among experienced individuals.

15. False. The threshold concept is that calibration is systematically biased — but the chapter is clear that calibration responds to training. It is correctable. The transformative insight isn't "you're permanently flawed" but rather "your confidence is biased in ways you can't detect without deliberate testing, and once you know this, you'll always want to verify rather than trust your feelings of certainty."

Section 3: Short Answer (Sample Responses)

16. Experience doesn't automatically correct overconfidence because several mechanisms protect it. Hindsight bias causes you to retroactively believe you "knew it all along," erasing evidence of past overconfidence from your memory. Selective memory means you remember confident-and-correct instances more vividly than confident-and-wrong instances. The environment rarely provides the granular, item-by-item feedback needed to detect systematic calibration patterns. And social environments often reward confidence regardless of accuracy, reinforcing overconfident behavior.

17. Diane's tracking sheet was effective because it implemented the predict-test-compare cycle (Technique 1) in a simple, visible, ongoing format. By recording Kenji's predictions alongside his actual scores, she made his calibration gap visible and undeniable — defeating both hindsight bias (he couldn't retroactively claim he "knew" he'd score lower) and selective memory (the data was permanent). The refrigerator placement kept the data salient, and the accumulation of data over weeks allowed patterns to emerge that single instances would have missed.

18. Calibration matters more than resolution because calibration affects the decision about whether to study at all. A student with good resolution but poor calibration can correctly identify which items are her weakest — but if her overall confidence is inflated, she'll think her weak items are "slightly shaky" when they're actually "not learned." She'll stop studying too soon, budget too little time, and walk into the exam feeling ready when she isn't. Resolution helps you prioritize within a study session; calibration determines whether you do the study session in the first place.

Section 4: Applied Scenario (Sample Response)

19a. Calibration gaps: Exam 1: +17, Exam 2: +12, Exam 3: +7, Exam 4: +1, Exam 5: -2. The trend is a dramatic reduction in overconfidence over time, from a 17-point gap to near-perfect calibration (and even a slight underconfidence of 2 points on the final exam).
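The gap arithmetic in 19a is easy to check by hand, but here is a minimal sketch using the scenario's own numbers, should you want to run the same analysis on your own practice-exam log:

```python
# Predicted vs. actual scores from the scenario table.
predicted = [88, 85, 82, 78, 79]
actual    = [71, 73, 75, 77, 81]

# Calibration gap = predicted - actual (positive = overconfident).
gaps = [p - a for p, a in zip(predicted, actual)]
print(gaps)  # [17, 12, 7, 1, -2]

# The gap shrinks on every successive exam: a steady move toward calibration.
assert all(earlier > later for earlier, later in zip(gaps, gaps[1:]))
```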

19b. The slight underestimation on Exam 5 (-2 points) is not a meaningful calibration failure. It's evidence of excellent calibration development — she's within the margin of normal variation. A 2-point underestimate is far better than the 17-point overestimate she started with. Some calibration researchers would consider anything within 5 points to be "well-calibrated." The slight underconfidence may also reflect the overcorrection pattern seen in Mia's story — after being burned by overconfidence, students sometimes swing slightly toward caution.

19c. The most likely explanation is calibration training through repeated predict-test-compare cycles. Each practice exam gave her concrete feedback about the gap between her predicted and actual performance. Over five exams, she accumulated enough data to recalibrate her internal signals — her feeling of "82% ready" is now much more likely to correspond to actual 82% performance than it was at the start. She's also likely improving at the actual material, but the calibration improvement is specifically in the match between prediction and performance.

19d. Her trajectory closely mirrors Mia's. Mia started with a large overconfidence gap (predicted B+, got C-), then overcorrected (predicted C, got B+), then — by implication — moved toward accurate calibration. The nursing student shows the same pattern but with more data points, making the gradual convergence visible. Both stories illustrate that calibration isn't corrected in a single insight — it improves incrementally through repeated feedback.


Scoring Guide

Score Interpretation

18-20 correct: Excellent understanding. Your comprehension of calibration concepts is strong. Now compare your confidence ratings to your accuracy — that comparison is the real lesson.
14-17 correct: Good understanding with some gaps. Review the concepts you missed, and pay special attention to whether high-confidence items were the ones you missed — that would demonstrate the very overconfidence the chapter describes.
10-13 correct: Partial understanding. Reread Sections 15.1, 15.2, and 15.4, then retake the quiz. Focus on the difference between calibration and resolution, and the mechanisms that sustain overconfidence.
Below 10: The material needs more processing time. Reread the chapter with active retrieval practice, then retake the quiz after a 24-hour delay. Remember — this score is monitoring data, not a verdict on your ability.

💡 Metacognitive Note: Question 20 asked you to analyze your own confidence-accuracy pattern across this quiz. If you completed the calibration exercise in Section 15.5 of the chapter, you now have TWO data points about your own calibration — one for general knowledge and one for this chapter's concepts. Compare them. Is your calibration pattern similar across domains, or different? That comparison tells you something about whether your calibration bias is general or domain-specific.


End of Chapter 15 Quiz.