Case Study 2: Surface Learning in the Age of Google — When Information Access Replaces Understanding

This case study examines a widespread contemporary phenomenon: how instant access to information can paradoxically reduce the depth at which students process and learn. The scenario and characters are constructed to illustrate research findings on the "Google effect," transactive memory, and the relationship between search behavior and encoding depth. (Tier 3 — illustrative example.)


The Setup

It's 11:30 PM on a Tuesday, and four college sophomores are working on a group presentation for their Environmental Science class. The assignment: explain the causes and consequences of ocean acidification, propose three policy solutions, and evaluate the feasibility of each.

Meet the group:

  • Nora is the organizer. She has divided the assignment into sections and assigned each person a part.
  • Tyler is responsible for the causes of ocean acidification.
  • Amara is handling the ecological consequences.
  • Ryan is evaluating policy solutions.

All four students have laptops open. All four have access to the same information — textbook, lecture slides, Google, Wikipedia, YouTube, academic databases. The assignment is due in 36 hours.

Watch what happens.

Tyler: The Copy-Paste Learner

Tyler's approach is efficient. He opens Google and types "causes of ocean acidification." Within seconds, he has a Wikipedia article, three news stories, a NOAA explainer, and two YouTube videos.

He reads the Wikipedia article's introduction: "Ocean acidification is the ongoing decrease in the pH of the Earth's ocean, caused by the uptake of carbon dioxide from the atmosphere." He copies this sentence into his notes. He scrolls to the section on chemistry, reads that CO2 dissolves in seawater to form carbonic acid, and copies the relevant chemical equation. He watches a three-minute YouTube video that animates the process. He writes a bullet point: "CO2 + H2O -> H2CO3 (carbonic acid) -> lowers pH."

Total time: 22 minutes. Tyler feels done. He has the key facts. He can present them. He pastes his bullet points into the shared Google Doc and moves on to his other homework.

Tyler's depth of processing: Structural to shallow semantic. He has interacted primarily with text on screens — reading, copying, pasting. He has processed meaning at the sentence level ("CO2 dissolves in seawater to form acid, which lowers pH"). But he has not asked a single "why?" or "how?" question. He has not generated an explanation in his own words. He has not connected this information to anything else he knows.

If you asked Tyler right now, "Why does increased CO2 in the atmosphere lead to ocean acidification and not just more CO2 dissolved as a gas?" he would not be able to answer. He has the fact but not the mechanism.

If you asked him in a week, he would remember even less. The memory was encoded shallowly, with no elaboration, no personal connection, and no distinctive features to aid retrieval. It will fade rapidly.

Amara: The Earnest but Shallow Researcher

Amara takes a different approach. She reads the textbook section carefully, taking handwritten notes. She highlights key sentences. She creates a neat outline:

Ecological Consequences of Ocean Acidification
I. Effects on marine organisms
   A. Coral bleaching
   B. Shell dissolution in mollusks
   C. Disruption of food chains
II. Ecosystem-level impacts
   A. Loss of biodiversity
   B. Decline of fisheries
   C. Coastal erosion (loss of coral reefs)

Amara's notes are well-organized. She has captured the main categories and subcategories. She could present this material competently.

Amara's depth of processing: Moderate — relational processing without item-specific depth. Her outline shows she has organized the information into a logical structure (relational processing). But she hasn't explored why coral bleaches, how shells dissolve at a molecular level, or what makes the food chain disruption different from other ecological disruptions. Each item in her outline is a label, not an explanation.

Amara has built the forest but hasn't examined any individual trees. On an exam, she could list the consequences of ocean acidification but couldn't explain the mechanism behind any of them.

Ryan: The Deep Processor (Eventually)

Ryan starts the same way as Tyler — Googling "ocean acidification policy solutions." He finds several proposals: carbon taxes, marine protected areas, alkalinity enhancement, reducing fossil fuel dependence.

But then something shifts. He reads about alkalinity enhancement — the idea of adding crushed limestone or other alkaline minerals to the ocean to neutralize the acid — and thinks: "Wait. That sounds insane. How much limestone would you need to treat an entire ocean? And wouldn't that have side effects?"

He falls down a rabbit hole. He reads a research paper abstract about the logistical challenges. He calculates (roughly) how many tons of limestone it would take. He connects it to something he learned in his geology class about weathering rates. He realizes that natural weathering already neutralizes some ocean acidity, but over geological time scales — millions of years — not the decades we need.
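Ryan's back-of-envelope arithmetic can be sketched in a few lines. The uptake figure below is an illustrative assumption (on the order of magnitude commonly reported for annual ocean CO2 absorption), not a number from the case study:

```python
# Back-of-envelope sketch, in the spirit of Ryan's rough calculation.
# All input figures are illustrative order-of-magnitude assumptions.

CO2_ABSORBED_BY_OCEAN_GT_PER_YR = 9.0  # assumed annual ocean CO2 uptake, gigatons
MOLAR_MASS_CO2 = 44.0                  # g/mol
MOLAR_MASS_CACO3 = 100.0               # g/mol (calcium carbonate, i.e., limestone)

# Neutralization stoichiometry: CaCO3 + CO2 + H2O -> Ca(HCO3)2,
# so one mole of limestone is consumed per mole of CO2.
limestone_gt_per_yr = CO2_ABSORBED_BY_OCEAN_GT_PER_YR * (MOLAR_MASS_CACO3 / MOLAR_MASS_CO2)

print(f"~{limestone_gt_per_yr:.0f} Gt of limestone per year")
```

Even under these rough assumptions, the answer lands around twenty gigatons of limestone per year — several times the world's entire annual cement output — which is exactly the kind of feasibility red flag Ryan's questioning surfaced.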

Then he asks himself: "If I were a senator and someone proposed this in a hearing, what questions would I ask? What would convince me it's feasible or not?"

He spends an hour and a half on the policy section — three times as long as Tyler spent on causes. His section of the Google Doc is longer, more detailed, and organized around arguments rather than bullet points.

Ryan's depth of processing: Deep elaborative processing with elements of self-reference. He asked "why?" and "how?" He connected new information to prior knowledge (geology class). He evaluated feasibility through a role-play exercise (imagining himself as a senator) that engaged the self-reference effect. He generated his own calculations. He was genuinely curious, and that curiosity drove him to process at a level none of his groupmates reached.

Ryan will remember this material for months. Not because he tried to memorize it, but because he thought about it deeply. The memory is a byproduct of the processing, exactly as Craik and Lockhart's framework predicts.

Nora: The Metacognitive Manager

Nora has a different challenge. She's not responsible for any single content section — she's coordinating the group, designing the slide deck, and writing the introduction and conclusion.

She reads everyone's contributions in the Google Doc. She reformats Tyler's bullet points, expands Amara's outline into full sentences, and integrates Ryan's detailed analysis into a coherent narrative.

But notice what she's doing: she's reading everyone else's work and making it look good. She's interacting with the format of the presentation (structural processing) and the surface meaning of each section (shallow semantic processing). She understands the sentences. She can make the slides flow. But she hasn't engaged with the underlying science at any depth, because she was focused on the presentation layer, not the content layer.

Nora's depth of processing: Structural to shallow semantic. Despite handling all the material, Nora has processed none of it deeply. She knows the outline (because she created it). She can navigate the slides (because she built them). But she hasn't built a mental model of ocean acidification. If someone in the audience asks a probing question, she'll deflect to a groupmate.

This is a common trap for students who take organizational roles in group projects: the management work feels productive and is productive for the project, but it substitutes structural processing for semantic processing. The work of formatting, editing, and coordinating is real work — but it's not the work of learning.

The Presentation

The group presents. The slides look professional. Each person covers their section competently. They receive a B+.

But what have they actually learned?

Tyler will remember almost nothing in two weeks. He copied information from one screen to another without thinking about it. His encoding was shallow, undistinctive, and disconnected from everything else he knows.

Amara will remember the overall structure — coral bleaching, food chain disruption, fisheries decline — but won't be able to explain any of the mechanisms. Her relational processing captured the outline but none of the content beneath it.

Ryan will remember the alkalinity enhancement debate for months, possibly years. He'll bring it up in unrelated conversations. He'll recognize references to it in news articles. His encoding was deep, elaborative, personally connected, and distinctive. He didn't study more. He processed more deeply.

Nora will remember that she made a nice slide deck. The science itself will be gone within days.

The Google Effect

This case study illustrates a phenomenon that researchers have called the Google effect (or digital amnesia) — the tendency for people to remember where to find information rather than the information itself.

(Tier 2 — well-documented phenomenon; Sparrow, Liu, & Wegner, 2011)

In a series of studies, Betsy Sparrow and colleagues at Columbia University found that when people expect to have access to information later (e.g., it will be saved on a computer), they are less likely to encode the information itself — but more likely to remember where it's stored. The brain, in effect, outsources memory to the device.

This is not inherently bad. Transactive memory — distributing knowledge across a group, with each person knowing where to find what — is a legitimate and efficient strategy for organizations. You don't need to memorize your doctor's phone number if it's in your contacts.

But for learning, the Google effect is devastating. Learning requires that information be encoded deeply enough to be retrievable, applicable, and transferable. If you can always look something up, your brain has no incentive to process it deeply. The information flows through working memory — understood momentarily — and then evaporates, because no durable memory trace was ever formed.

Tyler's copy-paste approach is the Google effect in action. He didn't need to understand ocean acidification because it was right there on the screen. He processed the information deeply enough to read it and copy it, and not one bit deeper. The screen did the remembering for him.

The Deeper Problem: Confusing Access with Understanding

There's a more insidious version of this problem, and it affects not just Tyler but an entire generation of learners who grew up with smartphones.

When you can Google any fact in seconds, the experience of having access to information feels identical to the experience of knowing the information. Both produce the same immediate outcome: you can answer the question. The difference only becomes apparent later, when the question comes up again and the phone isn't in your hand — on an exam, in a conversation, in a job interview, in a clinical situation.

This is the levels of processing problem applied to technology. The availability of instant information doesn't just reduce motivation to encode deeply — it provides a constant stream of fluency experiences. "I looked it up, I read it, I understand it" triggers the same illusion of competence that rereading produces (Chapter 2, Chapter 8). The information feels known because it was recently encountered. But encountering and knowing are not the same thing.

💡 Key Insight: Technology doesn't inherently produce shallow processing. Ryan used the same Google search engine as Tyler. The difference was what happened after the search. Tyler consumed; Ryan interrogated. Tyler accepted; Ryan questioned. The technology was the same. The processing depth was wildly different. The problem isn't Google — it's the assumption that finding an answer is the same as understanding it.

What This Means for You

This case study isn't an argument against using Google, Wikipedia, or AI tools. It's an argument for being deliberate about what you do with the information you find.

Here are the practical implications:

1. The Search-and-Close Protocol. When you look something up, read the answer, then close the tab and try to explain the answer in your own words. If you can't, you didn't learn it — you just looked at it. This transforms a shallow search into a retrieval practice opportunity.

2. The "Why?" Follow-Up. Every time you Google a fact, ask yourself one follow-up question: "Why is this true?" Then try to answer that question before searching again. This shifts you from structural/phonemic processing to semantic/elaborative processing.

3. The Nora Warning. If you're the organizer in a group project, schedule time to process the content, not just the format. Reading and reformatting your groupmates' work is not the same as learning the material. Before the presentation, close the slides and ask yourself: "Could I explain this without the slides in front of me?"

4. The Tyler Test. After any research session, close all your tabs and write a one-paragraph summary from memory. If you can't write more than a sentence, your research produced information transfer (screen to document) but not learning (information to understanding).

5. The Pre-Search Prediction. Before you Google something, try to answer the question from memory first. Even if your answer is wrong or incomplete, the act of attempting retrieval creates a "slot" in memory that the correct answer will fill more readily — this is the pretesting effect from Chapter 10.

The Uncomfortable Truth

Here's the question this case study is really asking: In a world where any fact is available in three seconds, what does it mean to actually know something?

The answer, from the levels of processing framework, is clear: knowing something means having processed it deeply enough that you can retrieve it, explain it, apply it, and connect it to other things you know — without the screen in front of you.

Information access is not understanding. Recognition is not recall. Encountering is not encoding. These distinctions have always existed, but technology has made them urgent. When information was scarce and difficult to find (go to the library, find the book, read the chapter), the act of finding it often produced some level of deep processing along the way. Now that information is abundant and effortless to find, the processing doesn't happen automatically. You have to do it on purpose.

This is why metacognition — the subject of this entire book — has never been more important. The ability to monitor your own understanding, to distinguish between "I can look this up" and "I actually understand this," is not just an academic skill. It's the difference between a person who knows things and a person who knows where things are.

Both are useful. Only one is learning.


Discussion Questions

  1. Map the processing levels. For each of the four group members (Tyler, Amara, Ryan, Nora), identify their primary encoding level (structural, phonemic, semantic shallow, semantic deep) and explain your reasoning with specific evidence from the case study.

  2. Diagnose the Google effect. Tyler copied information from a website into a Google Doc. At what point during this process could he have shifted to deep processing? Design a specific intervention — a question, a technique, a change in procedure — that would have deepened his encoding.

  3. Evaluate Amara's approach. Amara created a well-organized outline, which represents relational processing. What specific additions to her study method would add item-specific processing? Design a version of her outline that incorporates both types.

  4. Analyze Ryan's curiosity. Ryan's deep processing was partly driven by genuine curiosity ("Wait, that sounds insane"). Is curiosity necessary for deep processing, or can deep processing be achieved through deliberate technique even in the absence of curiosity? What does this imply for studying subjects you find boring?

  5. The Nora problem. Many students spend significant time on presentation formatting, note organization, color-coding, and other structural activities that feel productive. Using the levels of processing framework, explain why these activities are often "learning theater" — they look like studying but don't produce durable knowledge. Is there a way to make organizational work also function as deep processing?

  6. Technology and processing depth. The case study argues that "the problem isn't Google — it's the assumption that finding an answer is the same as understanding it." Do you agree? Are there features of modern technology that actively encourage shallow processing? Are there features that could support deep processing if used intentionally?

  7. Apply to your own experience. Think about the last time you looked something up during a study session. Did you process the information deeply enough to explain it without the screen? What would you do differently now, having read this case study?

  8. The AI question. This case study focuses on Google, but generative AI tools (like ChatGPT) add another layer: they don't just find information, they explain it. Does having an AI explain something to you produce deeper processing than reading a Wikipedia article? Shallower? What matters isn't the source — it's what you do with the explanation. Design a protocol for using AI tools that maximizes deep processing. (This question previews Chapter 24.)


End of Case Study 2. The relationship between technology and metacognition is explored further in Chapter 24 (AI and Learning: What Machines Can and Can't Do for You).