Chapter 28 Further Reading: Learning in the Age of AI


On AI and Cognition

Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton. Although written before large language models, Carr's argument about how internet use changes thinking patterns is directly relevant to AI-assisted learning. His concern — that habitual shallow information retrieval shapes cognitive habits — applies with greater force to AI that provides complete answers on demand. A valuable corrective to the assumption that tool availability is unambiguously positive for cognition.

Sparrow, B., Liu, J., & Wegner, D. M. (2011). "Google effects on memory: Cognitive consequences of having information at our fingertips." Science, 333(6043), 776–778. The "Google effects" study showing that people who expect to have access to a fact are less likely to encode it in memory — and are more likely to remember where they can find the fact than what the fact is. This "transactive memory" phenomenon is amplified by AI, which answers arbitrary questions on demand rather than merely storing facts for retrieval. Directly relevant to the factual recall section of this chapter.

Heersmink, R. (2016). "A taxonomy of cognitive artifacts: Function, information, and categories." Review of Philosophy and Psychology, 7(1), 123–141. A philosophical analysis of how cognitive artifacts (tools that augment cognition) interact with human cognitive development. Provides conceptual tools for thinking carefully about when AI use enhances vs. substitutes for human cognitive capacity.


On the Value of Deep Knowledge

Hirsch, E. D. (1987). Cultural Literacy: What Every American Needs to Know. Houghton Mifflin. A controversial but important argument for the cognitive value of having extensive background knowledge. Hirsch's argument — that comprehension depends on a large base of prior knowledge that enables inference and connection — directly addresses why "you can just look it up" is not equivalent to actually knowing. Relevant to the factual knowledge section of this chapter.

Willingham, D. T. (2009). Why Don't Students Like School? A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom. Jossey-Bass. Cognitive scientist Dan Willingham's accessible treatment of how human cognition works, with particular attention to why prior knowledge is the most powerful determinant of new learning. The chapter on "Why Do Students Remember Everything That's on Television and Forget Everything I Say?" is directly relevant to the knowledge paradox.

Ericsson, K. A., & Pool, R. (2016). Peak: Secrets from the New Science of Expertise. Houghton Mifflin Harcourt. A comprehensive, accessible treatment of how expertise actually develops and what it consists of, drawing on Ericsson's research on deliberate practice. Directly relevant to the chapter's argument that deep expertise — not surface-level AI-assisted performance — is what creates genuine value.


On Socratic Learning and Tutoring

Bloom, B. S. (1984). "The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring." Educational Researcher, 13(6), 4–16. The original two-sigma paper documenting the enormous advantage of one-on-one tutoring. AI Socratic tutoring represents a potentially accessible approximation of this effect, which makes the Socratic tutoring protocol in this chapter particularly significant.

Collins, A., Brown, J. S., & Newman, S. E. (1989). "Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics." In L. B. Resnick (Ed.), Knowing, Learning, and Instruction. Lawrence Erlbaum. The cognitive apprenticeship framework — using modeling, coaching, scaffolding, and Socratic dialogue to develop expert thinking — provides the theoretical basis for the AI Socratic tutoring approach. The specific techniques (making expert thinking visible, providing scaffolding that fades as competence develops) map onto good AI tutoring practice.

Graesser, A. C., Person, N. K., & Magliano, J. P. (1995). "Collaborative dialogue patterns in naturalistic one-to-one tutoring." Applied Cognitive Psychology, 9(6), 495–522. Research on what human tutoring conversations actually look like and what makes them effective. The finding that good tutors ask questions far more often than they provide explanations is the empirical basis for the Socratic tutoring recommendation.


On AI Capabilities and Limitations

Marcus, G., & Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon. A critical examination of AI capabilities and limitations by a cognitive scientist and a computer scientist. A useful corrective to both AI hype and AI dismissal. Particularly valuable on the ways AI systems fail in subtle, hard-to-detect ways that require expertise to identify — directly relevant to the calibration problem section.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman. A computer scientist's early and prescient argument that certain types of human judgment should not be delegated to machines — not because computers can't perform the calculation, but because the judgment is human in nature. The argument is more relevant than ever in the age of AI that produces human-sounding outputs.


Emerging and Ongoing Research

The MIT Sloan Management Review and Harvard Business Review regularly publish practitioner-oriented research on AI in the workplace, including emerging evidence on AI's effects on skill development, judgment, and organizational learning. The research is moving faster than book publication cycles; these outlets are the best sources for current findings.

AI and Education Research: The field of AI in education (AIED) is producing rapidly accumulating research on how AI tutoring systems affect learning outcomes, which prompting strategies are most effective, and under what conditions AI assistance helps versus hurts learning. Key journals include the International Journal of Artificial Intelligence in Education and the British Journal of Educational Technology.

A note on currency: this chapter was written with knowledge current as of early 2026. The AI landscape is changing rapidly, and some specific tool recommendations and capability assessments will be outdated within months of publication. The underlying principles — generate before consuming, protect the cognitive work that produces learning, verify AI outputs with domain expertise — are likely to remain valid regardless of how AI capabilities develop.