Chapter 28 Quiz: Learning in the Age of AI

Answer all questions from memory before checking the answer key.


Question 1

According to this chapter, what is the "knowledge paradox" of AI?

A) The more you use AI, the less knowledge you accumulate over time
B) The less you know, the more you need AI; but the less you know, the less able you are to use AI correctly — requiring expertise to use it well while being most needed by those without expertise
C) AI systems have more knowledge than humans, making human knowledge acquisition unnecessary
D) Knowledge acquired with AI assistance is paradoxically less durable than knowledge acquired without it


Question 2

The chapter argues that some types of knowledge and skill have declining value in the age of AI, while others have increasing value. Which of the following correctly describes this distinction?

A) Factual recall has declining marginal value when facts can be looked up in seconds; pattern recognition, judgment, creative synthesis, and deep expertise in fast-moving fields have increasing value
B) Technical skills have declining value; interpersonal skills have increasing value — all technical work will be done by AI
C) Factual and conceptual knowledge both have declining value; only physical and social skills retain value
D) All knowledge has declining value as AI becomes more capable; the speed of change makes learning any specific domain an inefficient investment


Question 3

What is the recommended Socratic tutoring protocol for using AI effectively in learning?

A) Ask AI to explain topics to you clearly at your level of background knowledge
B) Explain the topic to AI first, then ask AI to generate probing questions that distinguish genuine understanding from surface familiarity; use AI's questions to identify gaps; receive targeted explanations for those gaps
C) Ask AI to quiz you with multiple choice questions on your topic
D) Have AI write a practice essay on the topic, then read it and discuss it with AI


Question 4

The generation effect (from Chapter 7) is particularly relevant to AI-assisted learning. Why?

A) AI systems use generation methods to produce their outputs, which is why they're effective learning tools
B) Asking AI to generate content is more effective than asking it to explain content
C) Generating an answer yourself — even incorrectly — before receiving the answer from AI produces substantially better learning than receiving the AI's answer first, because the struggle primes the brain to learn from the correct answer
D) AI tools generate the most helpful responses when users first generate a detailed description of their learning needs


Question 5

What does this chapter identify as the primary risk of using AI to write essays or code for learning purposes?

A) AI output is often incorrect, so learners build knowledge on a flawed foundation
B) AI-written essays and code are detectable and create academic integrity risks
C) The activities of writing and coding exist, in learning contexts, to develop specific cognitive skills (analytical thinking, programming skill) — and if AI performs those activities, the cognitive development is bypassed even when the output looks good
D) AI makes the learning process feel too easy, reducing the motivation to engage deeply


Question 6

According to this chapter, what is the "fluency illusion" in the context of AI learning?

A) AI systems appear more fluent in language than they actually are, deceiving learners about their reliability
B) Reading AI output that is calibrated to your vocabulary and presented authoritatively produces a strong feeling of understanding, even when durable learning has not occurred — because the feeling of comprehension is not the same as having encoded information into long-term memory
C) The fluency of AI writing hides its logical weaknesses, which are only visible to expert readers
D) AI systems that produce fluent output are paradoxically less useful for learning because fluency implies oversimplification


Question 7

What does the chapter recommend regarding AI and language learning specifically?

A) AI translation tools should be used liberally because they speed up acquisition by reducing comprehension friction
B) AI is most useful for language learning as a translation tool; conversation practice with humans is overrated
C) AI tutors and conversation practice are genuinely useful; AI translation undermines acquisition if used as a crutch because it bypasses the comprehension work that drives acquisition
D) AI tools have no specific application to language learning beyond basic grammar checking


Question 8

What does "epistemic dependency" mean, and why is it a risk?

A) Depending on AI for efficient information retrieval rather than memorizing everything — generally acceptable
B) Relying on an external system (AI) for your beliefs, reasoning, and conclusions rather than developing independent judgment to form your own — a risk because it prevents development of independent thinking and judgment
C) The risk that AI will provide incorrect information that you then build incorrect beliefs upon
D) Becoming dependent on AI tools for productivity such that you struggle to work without them


Question 9

Which of the following is described as a productive use of AI for research?

A) Using AI to generate accurate, citable summaries of specific papers that can be directly referenced
B) Using AI for literature orientation (getting the lay of the land in an unfamiliar area), while verifying any specific factual claims against primary sources before relying on them
C) Replacing primary source reading with AI synthesis, which saves significant time with minimal accuracy cost
D) Using AI to generate new research questions since it has access to more research than any human


Question 10

David's case study demonstrates which specific learning approach with AI?

A) Using AI to generate practice problems without attempting them first
B) Using AI to explain difficult concepts in multiple ways until one clicks
C) Explaining a concept to AI first, receiving probing questions that identify gaps, then receiving targeted explanations for those gaps with follow-up comprehension checks — using AI as a Socratic tutor rather than as an explainer
D) Using AI to review his code and identify bugs rather than debugging himself


Question 11

Maya's case study shows which consequence of over-reliance on AI for essay writing?

A) Her essays declined in quality over time as AI-generated content became more generic
B) Her content knowledge declined as she stopped reading the assigned material carefully
C) The analytical capacity to construct original arguments and defend positions — skills the essay-writing was supposed to develop — did not develop, becoming apparent in exam and seminar contexts where AI was unavailable
D) Her writing style became unrecognizable as her own, causing professors to suspect plagiarism


Question 12

The chapter concludes with the metaphor of AI as a "power tool." What does this metaphor convey?

A) AI is dangerous and should be used only by experts; novices should avoid it entirely
B) AI amplifies the quality of the person using it — expert users become more productive and produce higher-quality work, while inexperienced users produce fluent output they don't understand and build calibration problems they can't detect. The skill of the user matters, not just the tool.
C) AI is powerful but will eventually replace all human cognitive work, making skill development less important
D) AI is like a power saw in that it makes repetitive tasks faster, but shouldn't be used for creative or analytical work


Answer Key

1. B — The knowledge paradox: the less you know, the more you need AI; the less you know, the less able you are to use AI correctly (because detecting AI errors requires domain expertise). Deep knowledge makes AI more useful, not less necessary.

2. A — Factual recall (facts that can be looked up in seconds) has declining marginal value. Pattern recognition, judgment, creative synthesis, and the ability to direct and evaluate AI in fast-moving technical domains have increasing value.

3. B — The Socratic protocol: explain the topic yourself first, ask AI to generate probing questions based on your explanation, use those questions to identify gaps, receive targeted explanations for the gaps. This keeps the cognitive work on your side.

4. C — The generation effect: generating your own answer before receiving the correct one produces better learning than receiving the answer first. Struggling produces better learning than not struggling, even when the struggle produces a wrong answer.

5. C — The primary risk is that essays and code assignments exist to develop specific cognitive skills (analytical thinking, programming skill). If AI performs those activities, the development is bypassed — the output looks fine but the learning didn't happen.

6. B — The AI-specific fluency illusion: AI explanations are calibrated to your vocabulary and sound authoritative, producing a strong sensation of understanding. But feeling of comprehension is not the same as durable encoding, and the retrieval work that produces retention hasn't been done.

7. C — AI tutors (conversation practice, grammar explanations) are genuinely useful. AI translation undermines acquisition by bypassing the comprehension work (struggling to understand) that drives vocabulary and structure acquisition.

8. B — Epistemic dependency means relying on AI for your beliefs, reasoning, and conclusions rather than forming your own. The risk: prevents development of independent judgment that is irreplaceable for complex, novel, ethically loaded situations.

9. B — Productive AI use in research means orientation: getting the general landscape of a field. Never cite AI-generated specific claims without primary source verification — AI fabricates citations and misattributes findings in ways that can be very hard to detect.

10. C — David explained attention mechanisms to AI first, received probing questions that identified specific gaps, received targeted explanations for gaps with follow-up comprehension checks. This is Socratic tutoring — AI as questioner and gap-identifier, not AI as explainer.

11. C — The specific consequence: Maya's analytical capacity to construct original arguments from scratch, and to defend them in real time, did not develop because AI had been performing this cognitive work. The skills the assignments were designed to build were bypassed.

12. B — The power tool metaphor: AI amplifies the quality of the person using it. Expert users become more productive; inexperienced users produce confident output they don't understand and build calibration problems they can't detect. You still need to be the skilled worker.