Further Reading — Chapter 24
Learning in the Age of AI: What's Still Worth Knowing When Machines Can Look It Up
This annotated bibliography provides resources for deeper exploration of the concepts introduced in Chapter 24. Sources are organized by tier following this textbook's citation honesty system. Given the rapidly evolving nature of AI, some resources may be updated or superseded by the time you read this — the frameworks and principles, however, should remain applicable.
Tier 1 — Verified Sources
These are well-known, widely available works that the authors are confident exist with the details provided.
Books
Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Portfolio/Penguin.
One of the best practical guides to thinking about AI as a collaborator rather than a threat or a savior. Mollick, a Wharton professor, brings a grounded, evidence-informed perspective to how AI changes work, education, and creativity. Particularly relevant to this chapter's discussion of human-AI collaboration and the tool-vs.-replacement distinction. Mollick is neither utopian nor dystopian — he's practical, which is refreshing.
Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton.
A thorough, well-written exploration of the challenges of building AI systems that do what we actually want them to do. While focused on the technical and philosophical dimensions of AI alignment, the book provides valuable context for understanding why AI produces hallucinations, why AI outputs shouldn't be trusted uncritically, and why human judgment remains essential. Relevant to the chapter's discussion of critical evaluation and what remains uniquely human.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
A critical examination of AI's social, economic, and environmental impacts. Crawford challenges the notion that AI is a neutral tool, arguing that it embeds the biases and priorities of its creators. Relevant to this chapter's discussion of AI literacy — understanding what AI systems are and aren't, and thinking critically about claims that AI will make human learning obsolete.
Carr, N. (2014). The Glass Cage: Automation and Us. W. W. Norton.
An accessible exploration of what happens to human skills, judgment, and engagement when automation takes over tasks. Carr examines automation's effects in aviation, medicine, architecture, and daily life. Directly relevant to this chapter's discussion of deskilling, automation complacency, and the cognitive costs of offloading. The aviation examples in Case Study 2 draw on patterns Carr documents.
Oakley, B. (2014). A Mind for Numbers: How to Excel at Math and Science (Even If You Flunked Algebra). TarcherPerigee.
Previously recommended in Chapter 1, Oakley's book deserves a second mention here. Her core argument — that effective learning requires active engagement, not passive consumption — is even more relevant in the AI era. Every strategy she recommends (active recall, interleaving, spaced practice) works precisely because it keeps the learner doing the cognitive work. AI makes it trivially easy to skip that work. Oakley's book is the antidote.
Research Articles and Reports
Clark, A., & Chalmers, D. (1998). "The Extended Mind." Analysis, 58(1), 7-19.
The original philosophical paper proposing the extended mind thesis — the idea that cognitive processes can extend beyond the brain into external tools and the environment. A landmark paper in philosophy of mind that has become increasingly relevant in discussions of AI and cognition. Short, readable, and provocative. Directly referenced in this chapter's discussion of cognitive offloading and whether "knowing" something in your head is meaningfully different from knowing how to access it through AI.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips." Science, 333(6043), 776-778.
A widely cited study demonstrating that people remember less information when they expect to have future access to it (the "Google effect"). When participants knew they could look something up later, they were less likely to encode it in memory. Directly relevant to the knowledge paradox and the concern that AI availability may reduce the motivation to learn — and to remember — information.
Tier 2 — Attributed Sources
These are findings and claims attributed to specific researchers or research traditions. The general claims are well-established in the literature, but specific publication details beyond what is provided have not been independently verified for this bibliography.
Research on automation complacency in aviation (Parasuraman & Manzey, various publications).
Raja Parasuraman and Dietrich Manzey published extensively on automation complacency and its effects on human monitoring performance. Their work established that humans predictably reduce their monitoring effort when automated systems are reliable, leading to missed errors when the automation fails. The concept of "complacency" in human-automation interaction draws heavily on their theoretical and experimental work. The aviation and medical examples in this chapter and in Case Study 2 reflect patterns documented in this research tradition.
Research on GPS and spatial cognition (Bohbot, Ishikawa, and others).
Multiple research teams have investigated the relationship between GPS use and spatial cognition. Work by Veronique Bohbot and colleagues has examined hippocampal changes associated with navigation strategy, while Toru Ishikawa and colleagues have studied how GPS affects spatial learning of routes. The consistent finding — that GPS use is associated with reduced spatial memory and wayfinding ability — is referenced in Case Study 2's discussion of cognitive offloading and skill atrophy.
The FAA study on manual flying skills (2013).
The Federal Aviation Administration released a safety review noting concerns about the degradation of pilots' manual flying skills in highly automated cockpits. This report, along with subsequent FAA guidance recommending periodic manual flying practice, informed the aviation examples in Case Study 2. The specific recommendations for deliberate manual practice serve as a model for the "deliberate maintenance" concept applied to AI-assisted learning.
Research on the generation effect (Slamecka & Graf, 1978, and subsequent studies).
The generation effect — the finding that information actively generated by the learner is better remembered than information passively received — was originally demonstrated by Slamecka and Graf and has been replicated extensively. This chapter extends the concept to AI interactions: generating your own attempt before consulting AI preserves the generation effect, while asking AI for the answer first eliminates it.
Research on AI in education (emerging body of work, multiple research groups).
As of the time of writing, a rapidly growing body of research is examining how AI tools affect student learning, academic integrity, critical thinking, and metacognition. Early findings from studies at multiple universities suggest patterns consistent with this chapter's framework: AI can enhance learning when used as a supplement to (not replacement for) student effort, and AI use is most beneficial for students who already have stronger metacognitive skills. This research is evolving quickly; readers are encouraged to search for recent publications on "AI and student learning" or "generative AI in education."
Tier 3 — Illustrative Sources
These are constructed examples, composite cases, or pedagogical resources created for this textbook.
Marcus Thompson — composite character (continued from Chapter 1). In this chapter, Marcus's story is extended to illustrate the tool-vs.-replacement distinction with AI in data science learning. His experience of submitting AI-generated code without learning, followed by his development of "Rules of Engagement," reflects common patterns reported in anecdotal and emerging research accounts of adult learners using AI tools.
Zara Okonkwo — composite character (Case Study 2). Based on patterns emerging from educational technology observations of student AI use. Illustrates the progression from supplemental AI use to dependent AI use and the resulting deskilling across reading, writing, retrieval practice, and metacognitive monitoring skills.
The AI Learning Ladder — pedagogical framework. A teaching tool created for this textbook to help learners evaluate their AI interactions along a continuum from most learning-enhancing (Rung 5) to most learning-replacing (Rung 1). While grounded in learning science principles (retrieval practice, generation effect, deep processing), the specific "ladder" framework is original to this textbook.
The Explain-Before-You-Ask Protocol — technique. A learning strategy created for this textbook that combines retrieval practice, metacognitive monitoring, and targeted AI querying. The technique is grounded in established principles (retrieval practice improves learning; metacognitive monitoring improves query precision) but the specific protocol as described is original.
Recommended Next Steps
If you want to go deeper on Chapter 24's topics, here's a prioritized reading path:
- Highest priority: Read the first few chapters of Mollick's Co-Intelligence. It's the most practical and balanced guide to thinking about AI as a collaborator. If you're skeptical about AI, it will show you the genuine benefits. If you're enthusiastic about AI, it will temper your enthusiasm with realism.
- If you're concerned about deskilling: Read Carr's The Glass Cage. It's the most accessible and comprehensive treatment of what happens to human skills when automation takes over. The aviation and medical chapters are particularly compelling.
- If you want the philosophical foundation: Read Clark & Chalmers' "The Extended Mind" paper. It runs barely a dozen pages, and it frames the fundamental question: where does "your" cognition end and "the tool's" begin? Then apply their framework to AI and ask whether their conclusions still hold.
- If you want to stay current: Search for recent peer-reviewed research on "generative AI and student learning" or "AI-assisted education." This field is moving very fast, and new evidence is emerging regularly. Apply the critical evaluation skills from this chapter to whatever you find — including checking whether the studies are well-designed, adequately powered, and honest about limitations.
- If you want a challenge: Read Crawford's Atlas of AI alongside this chapter and consider the dimensions of AI use this chapter doesn't address — power, equity, environmental cost, labor exploitation. Consider how these dimensions complicate the individual-learner focus of this chapter. The metacognitive skill of recognizing what a text leaves out is, itself, a form of deep processing.
End of Further Reading for Chapter 24. The AI landscape will continue to evolve. The metacognitive framework for navigating it will not. Invest in the framework.