Further Reading — Chapter 12

Deep Processing vs. Shallow Processing: The Difference Between Remembering and Understanding

This annotated bibliography provides resources for deeper exploration of the concepts introduced in Chapter 12. Sources are organized by tier following this textbook's citation honesty system.


Tier 1 — Verified Sources

These are well-known, widely available works that the authors are confident exist with the details provided.

Foundational Research Articles

Craik, F. I. M., & Lockhart, R. S. (1972). "Levels of Processing: A Framework for Memory Research." Journal of Verbal Learning and Verbal Behavior, 11(6), 671-684.

This is the paper that started it all. Craik and Lockhart proposed that memory is not best understood as a series of storage boxes but as a byproduct of the depth of processing applied during encoding. The paper introduced the levels of processing framework, distinguished between maintenance rehearsal (repeating information without deepening processing) and elaborative rehearsal (processing information at increasingly deeper levels), and argued that "depth" rather than "time" determines memory durability. It is one of the most cited papers in cognitive psychology. At only 14 pages, it is accessible to advanced undergraduates and remains remarkably readable fifty years later. If you read one primary source from this chapter, make it this one.

Craik, F. I. M., & Tulving, E. (1975). "Depth of Processing and the Retention of Words in Episodic Memory." Journal of Experimental Psychology: General, 104(3), 268-294.

This paper provides the experimental evidence for the levels of processing framework. Across ten experiments, Craik and Tulving demonstrated that words processed at a semantic level (meaning-based orienting questions) were recalled at dramatically higher rates than words processed at structural (appearance) or phonemic (sound) levels. The paper also explored the role of elaboration and congruity in enhancing semantic encoding. At 27 pages, it is longer and more technical than the 1972 paper, but the experimental designs are elegant and the results are presented clearly. This is the empirical backbone of the chapter.

Rogers, T. B., Kuiper, N. A., & Kirker, W. S. (1977). "Self-Reference and the Encoding of Personal Information." Journal of Personality and Social Psychology, 35(9), 677-688.

The study that discovered the self-reference effect. Rogers, Kuiper, and Kirker extended Craik and Tulving's paradigm by adding a fourth orienting condition: "Does this word describe you?" Self-referent processing produced recall rates significantly higher than standard semantic processing — a finding that was unexpected and has been replicated dozens of times since. The paper is important not only for its finding but for its implication: it demonstrated that depth of processing has gradations within the semantic level, and that the self-concept provides a uniquely powerful organizational framework for encoding.

Meta-Analyses and Reviews

Symons, C. S., & Johnson, B. T. (1997). "The Self-Reference Effect in Memory: A Meta-Analysis." Psychological Bulletin, 121(3), 371-394.

This meta-analysis confirmed the robustness of the self-reference effect across dozens of studies. Symons and Johnson found a reliable advantage for self-referent encoding over other forms of semantic encoding, including both organizational encoding (sorting into categories) and other-referent encoding (judging whether words describe someone else). The meta-analysis is valuable because it addresses concerns about boundary conditions — showing that the self-reference effect holds across different age groups, materials, and experimental designs.

Books

Brown, P. C., Roediger, H. L., III, & McDaniel, M. A. (2014). Make It Stick: The Science of Successful Learning. Belknap Press of Harvard University Press.

Chapter 4 ("Embrace Difficulties") and Chapter 5 ("Avoid Illusions of Knowing") are particularly relevant to Chapter 12's themes. The discussion of desirable difficulties overlaps with deep processing — both frameworks argue that effortful encoding produces more durable learning. The treatment of illusions of knowing directly parallels the chapter's argument that shallow processing feels productive but isn't.

Willingham, D. T. (2009). Why Don't Students Like School? A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom. Jossey-Bass.

Willingham's accessible treatment of memory and encoding includes an excellent discussion of why students default to shallow processing and what teachers can do to promote deep thinking. His chapter on the importance of meaning — arguing that memory is "the residue of thought" — is essentially a restatement of the levels of processing framework in plain language. This is perhaps the most readable introduction to the cognitive science behind deep processing.

Research on Self-Explanation

Chi, M. T. H., de Leeuw, N., Chiu, M.-H., & LaVancher, C. (1994). "Eliciting Self-Explanations Improves Understanding." Cognitive Science, 18(3), 439-477.

Michelene Chi's landmark study on self-explanation established that students who explain material to themselves as they study — working through the logic step by step, identifying assumptions, connecting new information to prior knowledge — learn significantly more than students who study the same material without self-explaining. Chi's research demonstrates that self-explanation is a specific, teachable form of elaborative processing. The paper also introduced the distinction between "good" and "poor" self-explanations, showing that the quality of elaboration matters, not just its presence. This research connects directly to the chapter's argument that not all semantic processing is equally effective.


Tier 2 — Attributed Sources

These are findings and claims attributed to specific researchers or research traditions. The general claims are well-established in the literature, but specific publication details beyond what is provided have not been independently verified for this bibliography.

Research by Morris, Bransford, and Franks on transfer-appropriate processing.

Morris, Bransford, and Franks (1977) challenged the levels of processing framework by showing that "deep" processing doesn't always produce the best memory. In their key experiment, participants who processed words phonemically (rhyme-based questions) actually outperformed participants who processed words semantically — when the memory test required rhyme recognition. This principle of transfer-appropriate processing — that the effectiveness of encoding depends on how memory will be tested — is an important qualification to the levels of processing framework. For most academic purposes, semantic encoding remains optimal because exams test meaning. But the principle reminds us that depth should match the demands of the situation.

Research by Hunt and colleagues on distinctiveness and memory.

R. Reed Hunt and his collaborators have published extensively on the role of distinctiveness in memory. Their work clarifies that distinctiveness is not just about physical salience (the von Restorff effect) but about processing — specifically, the degree to which an item is processed in a way that makes it stand out from other encoded items. Hunt's work on the complementary roles of relational and item-specific processing — which the chapter draws on in Section 12.6 — demonstrates that optimal memory performance requires both types of encoding.

Research by Stein and Bransford on elaboration and memory.

Barry Stein and John Bransford conducted a series of studies in the late 1970s and early 1980s showing that precise elaboration (elaboration that clarifies the significance of the fact in context) produces better memory than imprecise elaboration (adding details that don't clarify why the fact matters). Their classic example — "The fat man read the sign" vs. "The fat man read the sign warning about thin ice" — demonstrates that elaboration works by building meaningful connections that serve as retrieval cues, not simply by adding more information.

Research by Sparrow, Liu, and Wegner on the Google effect.

Betsy Sparrow and colleagues at Columbia University published a series of experiments in Science (2011) showing that when people expect to have future access to information via computer, they are less likely to encode the information itself and more likely to remember where it is stored. This "Google effect" is discussed in Case Study 2 and illustrates how technology can inadvertently promote shallow processing by reducing the perceived need for deep encoding.

Research by McDaniel and Einstein on distinctiveness in encoding.

Mark McDaniel and Gilles Einstein have contributed to understanding how bizarre and unusual imagery enhances memory — a finding related to the von Restorff effect and the role of distinctiveness in encoding. Their work shows that distinctive elaboration (creating unusual, vivid mental images) produces better memory than common elaboration, particularly in free recall tests. This research supports the chapter's practical recommendation to vary encoding techniques and create vivid, unusual examples.


Tier 3 — Illustrative Sources

These are constructed examples, composite cases, or pedagogical resources created for this textbook.

Dr. James Okafor — composite character. Based on common patterns in medical education research, particularly studies comparing surface and deep approaches to learning in clinical training. Okafor illustrates elaborative processing in pharmacology — the difference between memorizing drug facts (knowing THAT) and building causal networks that connect mechanisms to clinical decisions (knowing WHY). His approach is consistent with research on expert knowledge structures in medicine.

Sarah (Okafor's classmate) — composite character. Represents the surface approach to medical learning — organized, diligent, and time-intensive, but operating at shallow semantic depth. Her spreadsheet method illustrates how structural and organizational activity can substitute for genuine understanding.

The Group Project Team (Tyler, Amara, Ryan, Nora) — constructed scenario. Illustrates four different processing depths applied to the same material in a contemporary digital learning context. The scenario is designed to demonstrate the Google effect and the distinction between information access and understanding.


A Prioritized Reading Path

If you want to go deeper on Chapter 12's topics before moving to Chapter 13, here's a prioritized reading path:

  1. Highest priority: Read the Craik and Lockhart (1972) paper. At only 14 pages, it's one of the most readable and influential papers in cognitive psychology. Reading the original will deepen your understanding of the framework and give you a sense of the intellectual context in which it was proposed. Budget 45-60 minutes.

  2. If you want the experimental evidence: Read Craik and Tulving (1975), focusing on Experiments 1-4 (the core demonstration of the depth effect). The experimental designs are elegant and clearly described. Budget 1-2 hours for the relevant sections.

  3. If the self-reference effect intrigued you: Read the Rogers, Kuiper, and Kirker (1977) paper. It's short and clearly written, and it extends the levels of processing paradigm in a direction that has immediate practical applications for how you study. Budget 30-45 minutes.

  4. If you want to understand self-explanation better: Read Chi et al. (1994). This paper is more technical but provides the evidence base for self-explanation as a deep processing strategy. If you're interested in tutoring, teaching, or explaining concepts to others, this is essential reading. Budget 1-2 hours.

  5. If you want a broad, accessible overview: Read Chapter 3 of Willingham's Why Don't Students Like School? His treatment of "memory is the residue of thought" provides a beautifully clear restatement of the levels of processing framework. Budget 30 minutes.

  6. If you're interested in the technology angle: Search for the Sparrow, Liu, and Wegner (2011) paper on the Google effect. It's a short, widely discussed study that raises important questions about how technology changes our relationship with knowledge. Budget 30 minutes.


Online Resources

The Learning Scientists (learningscientists.org). Their treatment of "elaboration" as one of six effective learning strategies provides practical tips for implementing deep processing. Includes downloadable posters and guides that translate the research into actionable techniques.

Retrieval Practice website (retrievalpractice.org). While primarily focused on retrieval practice, this site includes excellent resources on combining retrieval with elaboration — precisely the combination that produces the deepest encoding.

Coursera: "Learning How to Learn" by Barbara Oakley and Terrence Sejnowski. This free online course covers levels of processing concepts in video format, with visual demonstrations and personal stories that make the abstract framework concrete. Particularly useful if you learn well from video and want to hear these ideas explained by engaging instructors.


End of Further Reading for Chapter 12.