Chapter 35: Prebunking and Inoculation Theory

Learning Objectives

By the end of this chapter, students will be able to:

  1. Explain why debunking is often less effective than expected, including the continued influence effect and motivational backlash.
  2. Describe the origins of inoculation theory in the work of William McGuire and its biological metaphor.
  3. Distinguish between the two core components of inoculation: forewarning and refutational preemption.
  4. Summarize the empirical research on prebunking conducted by van der Linden, Lewandowsky, Cook, and colleagues.
  5. Compare logic-based inoculation with fact-based inoculation and evaluate the scalability advantages of each.
  6. Analyze the design principles and empirical evaluations of the Bad News game and its successors.
  7. Evaluate Google's prebunking ad campaigns in Central and Eastern Europe as a model for at-scale intervention.
  8. Identify the principal scalability challenges of inoculation, including decay effects and reaching resistant populations.
  9. Apply prebunking principles to practical contexts including platform deployment, classroom use, and public health communication.

Introduction

Every public health campaign about vaccines, every fact-checking website, and every journalism ethics course rests on a shared assumption: that people, once they know the truth, will update their beliefs accordingly. This assumption is not unreasonable. But decades of research in cognitive psychology and communication science have revealed that it is far more fragile than we tend to imagine.

The problem is not simply that misinformation spreads quickly or that corrections travel slowly. The problem runs deeper. Even when corrections reach people, even when those corrections are clear and credible, they often fail to dislodge false beliefs. Sometimes they make those beliefs stronger. This persistent puzzle—why does telling people the truth so often fail to undo the damage of a lie?—has generated one of the most productive research agendas in contemporary social science.

This chapter explores a paradigm shift in that research: from debunking to prebunking. Rather than waiting for misinformation to take hold and then attempting to correct it, prebunking seeks to prepare people in advance, building cognitive resistance before false claims arrive. The guiding framework is inoculation theory, which borrows its central metaphor from immunology. Just as a weakened pathogen can train the immune system to resist a full-strength virus, a weakened dose of misinformation—accompanied by a refutation of the manipulative technique it employs—can train the mind to resist the real thing.


Section 35.1: The Limits of Debunking

Why Corrections Often Fail

The intuitive response to misinformation is to correct it. Present accurate information, cite credible sources, and trust that rational human beings will update their beliefs. This approach—debunking—has been the dominant strategy of fact-checkers, science communicators, journalists, and public health officials for decades.

The evidence on debunking is decidedly mixed. Studies consistently show that corrections can reduce belief in specific false claims under controlled laboratory conditions. But the effect sizes are often modest, the conditions under which corrections succeed are highly specific, and in real-world settings the impact of corrections is frequently overwhelmed by the volume and velocity of misinformation itself.

Several mechanisms explain why corrections underperform.

Processing fluency and repetition. When a false claim is repeated—even in the context of denial—it becomes more familiar. Familiarity is processed as a signal of truth. The psychological concept of "illusory truth" captures this phenomenon: repeated exposure to a claim increases its perceived credibility regardless of whether the claim is true or false (Hasher, Goldstein, & Toppino, 1977; Pennycook et al., 2018). A correction that repeats the original false claim before refuting it may therefore inadvertently reinforce the very error it seeks to eliminate.

The familiarity backfire effect. Related to illusory truth, early research (Skurnik et al., 2005) suggested that corrections could backfire spectacularly among older adults: warning them that a claim was false actually increased their likelihood of later misremembering it as true, because the memory for the source tag (false) decayed faster than the memory for the claim itself. While later research has questioned whether this "familiarity backfire" is as robust or widespread as initially thought (Wood & Porter, 2019), it remains a genuine concern in populations with memory vulnerabilities.

Prior beliefs and motivated reasoning. When corrections challenge strongly held prior beliefs, they may be rejected as partisan attacks or as evidence of conspiratorial suppression. Research on motivated cognition demonstrates that people are significantly better at identifying flaws in arguments that threaten their worldview than in arguments that support it (Kunda, 1990). Corrections that are perceived as politically or ideologically motivated—even when they are factually accurate—face a particularly steep uphill battle.

The "worldview backfire effect." The idea that corrections can actually strengthen false beliefs when they threaten deeply held worldviews has been contested in recent years. Wood and Porter (2019) conducted a large-scale replication study and failed to find evidence for worldview backfire across a range of politically contentious topics. The debate has nonetheless revealed something important: corrections reliably reduce specific false beliefs, but they do not reliably change the deeper attitudes and worldviews that generate those beliefs in the first place.

The Continued Influence Effect

Perhaps the most well-documented limitation of debunking is the continued influence effect (CIE). First named by Wilkes and Leatherbarrow (1988) and systematically studied by Ullrich Ecker and colleagues at the University of Western Australia, the CIE refers to the persistence of misinformation's influence on reasoning and behavior even after a retraction has been issued and understood.

The classic paradigm involves presenting participants with a narrative that includes a specific piece of false information (e.g., a fire was caused by flammable materials being stored in a warehouse). Later, the false information is explicitly corrected (the materials were not present after all). When participants are asked to reason about the scenario—answer questions, explain what happened—they continue to rely on the retracted information, even when they explicitly state that they know it was incorrect.

This is not a processing failure. Participants understand and remember the correction. But they continue to use the false information to fill causal gaps in their understanding. The reason appears to be that corrections do not automatically "overwrite" false memories; they create competing memory traces. When people reconstruct a narrative from memory, they draw on the most accessible, causally coherent account, which often includes the original misinformation.

The CIE has important practical implications. A dramatic example comes from public health: studies of vaccine misinformation show that corrections reduce intention to vaccinate in some populations, apparently because the correction process requires repeating the false claim ("Vaccines do NOT cause autism") and thereby activates both the false claim and its emotional associations before delivering the negation.

Overcorrection and Audience Fatigue

A further complication is the problem of overcorrection—the possibility that aggressive fact-checking or debunking creates a reactance response. Psychological reactance (Brehm, 1966) is the motivational state that arises when people perceive a threat to their freedom of belief. Heavy-handed correction can activate reactance, causing people to cling more tightly to the corrected belief as an assertion of autonomy.

There is also the problem of correction fatigue. In information environments saturated with false claims, constant correction efforts may be ignored, processed superficially, or even generate cynicism about the possibility of knowing anything for certain. The phenomenon of "epistemic nihilism"—the motivated conclusion that truth is unknowable and all sources are equally biased—is arguably amplified by the very abundance of corrections and counter-claims.


Section 35.2: Inoculation Theory Origins

McGuire's Biological Metaphor

William McGuire was a social psychologist at Yale University who, in the early 1960s, was working on a seemingly esoteric problem: why do people hold beliefs that have never been seriously challenged? McGuire observed that certain widely shared beliefs—what he called "cultural truisms"—were held so universally and uncritically that most people had never needed to defend them. Beliefs like "it's a good idea to brush your teeth twice a day" or "it's wrong to burn the American flag" occupied this category in the 1950s and early 1960s: they were part of the ideological water everyone swam in, never questioned, never tested.

McGuire's insight was that such unchallenged beliefs were, paradoxically, highly vulnerable. Like a person raised in a completely sterile environment who has no immune system, a belief that has never encountered opposition lacks the cognitive defenses to resist attack. When such a belief is finally challenged—by a skilled propagandist, a charismatic counter-cultural figure, or simply a compelling alternative argument—it may collapse with startling ease.

The solution McGuire proposed borrowed directly from immunology. Just as a vaccine introduces a weakened pathogen to stimulate immune system development before full exposure, an "attitudinal inoculation" would expose the belief-holder to weakened forms of the arguments against the belief, along with counter-arguments, in order to stimulate the development of cognitive defenses.

McGuire's early experiments (1961, 1964) confirmed the core hypothesis. Participants who received inoculation treatments were significantly more resistant to persuasion than those who received no treatment or who received "supportive" treatments (arguments for the belief, without exposure to counter-arguments). The finding was robust and replicable.

The Two Components: Forewarning and Refutational Preemption

McGuire's inoculation framework has two essential components, and both are necessary for the effect to work.

Forewarning is the explicit warning that an attempt to persuade or manipulate is coming. The warning does not need to specify the exact content of the attack; it can be general: "You may encounter misleading information about this topic that is designed to make you doubt X." Forewarning serves several functions: it activates critical processing, signals the context as one in which skepticism is warranted, and may trigger threat response that motivates the development of counter-arguments.

Refutational preemption is the exposure to a weakened version of the misleading argument, together with a refutation of that argument. This component is what distinguishes inoculation from simple forewarning. Rather than just alerting people that manipulation is coming, refutational preemption actually shows them an example of the manipulation and explains why it fails or misleads. This gives people specific cognitive ammunition to recognize and resist the real attack when it arrives.

The interplay between these two components is significant. Research has found that refutational preemption alone can produce inoculation effects, but the effects are stronger when accompanied by forewarning. Forewarning alone produces weaker effects. The combination appears to be more than additive: the forewarning primes critical processing, and the refutational preemption provides the tools that critical processing can apply.

From Attitudes to Manipulation Techniques

McGuire's original work focused on specific attitudinal propositions—inoculating people against specific arguments against specific beliefs. This approach has one important limitation: it requires knowing in advance exactly which counter-arguments will be deployed. It is, in the language of information security, a signature-based defense rather than a behavioral one.

For decades, inoculation research proceeded largely within McGuire's original framing. But beginning in the 2010s, researchers began exploring a fundamentally different approach: inoculating people not against specific false claims, but against the rhetorical and psychological techniques that make false claims compelling. This shift—from content-based to technique-based inoculation—opened up the possibility of prebunking at scale.


Section 35.3: The Science of Prebunking

Van der Linden, Lewandowsky, Cook, and Colleagues

The modern prebunking research program was significantly advanced by a network of researchers centered at the University of Cambridge (Sander van der Linden), the University of Western Australia and later the University of Bristol (Stephan Lewandowsky), and Skeptical Science / the University of Queensland, later George Mason University (John Cook). While these researchers have distinct methodological emphases, they share a core commitment to applying inoculation theory to contemporary misinformation about science, politics, and public health.

Sander van der Linden's contributions have been particularly influential. His 2017 paper on inoculating the public against misinformation about climate change introduced the concept of "broad" inoculation against misinformation about scientific consensus—demonstrating that brief exposures to the logic of science denial could buffer people against subsequent exposure to consensus-denying messages. This work established several key principles that have guided subsequent research.

First, inoculation effects are not restricted to attitudinal trivia or laboratory-induced beliefs; they can be found for consequential real-world issues including climate change, COVID-19 vaccines, and geopolitical disinformation.

Second, the effects are not restricted to specific demographic or political groups; while effect sizes vary, inoculation appears to work across the political spectrum.

Third, the technique-based approach—exposing people to the rhetorical strategies used by denialists rather than specific denialist claims—can generate "broad spectrum" immunity that extends to novel variations of misinformation.

Lewandowsky's contributions have focused particularly on the cognitive mechanisms underlying inoculation and debunking. His influential "Debunking Handbook" (Cook & Lewandowsky, 2011, revised 2020 with eighteen coauthors) synthesized the psychological research on misinformation correction and established evidence-based guidelines for effective debunking. While focused primarily on debunking rather than prebunking, the Handbook helped establish the intellectual scaffolding within which prebunking research has developed.

John Cook's work on the "FLICC" framework—Fake Experts, Logical Fallacies, Impossible Expectations, Cherry Picking, and Conspiracy Theories—provided a taxonomy of science denial techniques that could serve as the content for technique-based inoculation. The FLICC framework (later extended and refined) has been incorporated into prebunking games, educational curricula, and training programs.

The Cambridge Social Decision-Making Lab

The Social Decision-Making Lab at Cambridge, directed by Sander van der Linden, has been the primary institutional home of prebunking research in the 2010s and 2020s. The lab's interdisciplinary approach combines experimental psychology, behavioral economics, political science, and computational social science.

Key contributions from the Cambridge lab include:

  • The development of the "fake news game" paradigm, in which participants are invited to produce misinformation rather than merely evaluate it, thereby gaining "psychological immunity" through perspective-taking.
  • Field experiments demonstrating that inoculation messages delivered through social media advertising can reduce susceptibility to misinformation at scale.
  • Research on the mechanisms of inoculation, including the role of motivated reasoning, identity threat, and partisan identity.
  • Cross-national studies examining whether inoculation effects hold across different cultural and political contexts.

The "Fake News Game" Studies

Van der Linden and colleagues' "fake news game" research introduced a distinctive methodological innovation: rather than simply exposing participants to weakened misinformation, the games invited participants to create misinformation using specific manipulative techniques. The reasoning was that producing manipulation—even in a playful, game context—would make participants more aware of the techniques involved and more resistant to them when encountered from external sources.

This "active inoculation" approach has several theoretical advantages over "passive inoculation" (reading about manipulation techniques). Active engagement leads to deeper processing, better retention, and more robust attitude change. The game format also permits iterative exposure across multiple rounds, producing a kind of "booster" effect within a single session.


Section 35.4: Logic-Based Inoculation

Teaching Manipulation Techniques Rather Than Specific Claims

The most significant innovation in contemporary prebunking research is the shift from fact-based to logic-based (or technique-based) inoculation. Fact-based inoculation, in the McGuire tradition, aims to protect people against specific false claims by pre-emptively refuting them. Logic-based inoculation, by contrast, aims to protect people against the classes of rhetorical manipulation that false claims rely on.

The distinction matters enormously for scalability. The information environment contains effectively unlimited numbers of false or misleading claims, constantly mutating and adapting. A prebunking program that addresses specific claims will always be in a reactive position, chasing the latest version of the latest narrative. A prebunking program that addresses rhetorical techniques, by contrast, may generate broader protection against an entire category of future claims.

The theoretical basis for this approach rests on the distinction between "surface structure" and "deep structure" in argumentation. Surface structure consists of the specific claims, examples, and emotional appeals used in a particular piece of misinformation. Deep structure consists of the underlying logical fallacies, psychological triggers, and social dynamics that make the misinformation compelling. Inoculating against deep structure should, in principle, confer protection against any surface-level variation.

Cook's FLICC framework provides one useful taxonomy of deep-structure manipulation techniques. Other researchers have proposed slightly different categorizations, but there is broad convergence on a set of core techniques: false consensus (claiming that "everyone knows" or "experts agree" when they do not), conspiracy framing (explaining events as the product of secret malevolent coordination), emotional manipulation (using fear, disgust, or outrage to bypass analytical thinking), discrediting (attacking the credibility of sources rather than their arguments), false dichotomy (presenting only two options when more exist), and impersonation (creating the appearance of legitimate expertise or official endorsement).
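One way to see what a technique-based (deep-structure) taxonomy buys is to encode it as data rather than as a list of individual claims. The sketch below is a deliberately naive illustration, not a real detector: the category names come from the list above, but the cue phrases, the example headline, and the matching logic are invented for demonstration only.

```python
# Toy encoding of a deep-structure taxonomy of manipulation techniques.
# Category names follow the chapter; cue phrases are hypothetical examples.
TECHNIQUE_CUES = {
    "false consensus": ["everyone knows", "experts agree", "no one disputes"],
    "conspiracy framing": ["they don't want you to know", "cover-up"],
    "emotional manipulation": ["terrifying", "outrageous", "disgusting"],
    "discrediting": ["so-called experts", "shills", "paid off"],
    "false dichotomy": ["the only alternative", "you must choose"],
    "impersonation": ["official sources confirm", "as a doctor"],
}

def tag_techniques(text: str) -> list[str]:
    """Return the techniques whose cue phrases appear in the text."""
    lowered = text.lower()
    return [name for name, cues in TECHNIQUE_CUES.items()
            if any(cue in lowered for cue in cues)]

headline = "Everyone knows the so-called experts were paid off!"
print(tag_techniques(headline))  # → ['false consensus', 'discrediting']
```

The point of the sketch is structural: because the taxonomy lives at the level of techniques, a single entry covers an open-ended family of future claims, whereas a claim-level (surface-structure) list would need a new entry for every mutation.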

The Scalability Advantage

The scalability advantage of logic-based inoculation is significant but bounded. Research by van der Linden, Roozenbeek, and colleagues has documented that inoculation against specific manipulation techniques does transfer to novel applications of those techniques—but the transfer is partial. People inoculated against emotionally manipulative climate denial may be somewhat more resistant to emotionally manipulative COVID-19 misinformation, but the effect is substantially smaller than the direct protection against climate-related emotional manipulation.

This finding suggests that logic-based inoculation operates at an intermediate level of abstraction: more general than specific claims, but more specific than a general epistemological orientation toward critical thinking. Effective prebunking programs may need to address multiple techniques, using the technique-based approach to maximize coverage while remaining attentive to domain specificity.

Go Viral! and the Bad News Game

Two educational games developed by Cambridge and the DROG agency have served as vehicles for testing logic-based inoculation at scale. "Go Viral!" was developed specifically for the COVID-19 context and focuses on three manipulation techniques commonly used in COVID-19 misinformation: exploiting emotions, using fake experts, and spreading conspiracy theories. "Bad News" is a broader game covering six manipulation techniques used in political misinformation.

Both games involve participants "playing the role" of a misinformation producer, constructing fake news stories using the techniques in question. The theory is that active production of misinformation—even in a clearly labeled game context—builds stronger "psychological antibodies" than passive reading. Studies of both games have found significant pre-to-post improvements in the ability to identify manipulative techniques, with moderate effect sizes maintained over follow-up periods.


Section 35.5: Fact-Based Inoculation

Technique-Specific vs. Topic-Specific Approaches

Fact-based inoculation, in the original McGuire tradition, targets specific factual claims. This approach has the advantage of precision: the inoculation is directly tailored to the specific misinformation a person is likely to encounter. It has the disadvantage of requiring advance knowledge of what misinformation will be encountered, and it does not transfer well to novel false claims.

Research on fact-based inoculation in contemporary contexts has found that it can be effective for specific high-priority topics: vaccine safety, climate change, election integrity. But the required investment per inoculation topic is substantial, and the protection degrades when the misinformation varies from the specific form inoculated against.

Comparing Efficiency and Applicability

The comparison between logic-based and fact-based inoculation is not simply a matter of which is "better." The two approaches have different optimal use cases.

Fact-based inoculation is preferable when:

  • The specific false claims are known in advance (e.g., predictable myths about a new vaccine or a recurring seasonal health topic).
  • The target audience is likely to encounter primarily one form of the false claim.
  • The topic is specific and bounded, making it possible to address the relevant claims without attempting to address an entire domain.
  • The stakes are extremely high for specific false beliefs (e.g., a dangerous medical myth that could lead to specific harmful behavior).

Logic-based inoculation is preferable when:

  • The misinformation landscape is varied and rapidly evolving.
  • A diverse audience will encounter misinformation in different forms and across different topics.
  • The goal is long-term resilience rather than protection against a specific current threat.
  • Scalable delivery is a priority (e.g., a social media campaign reaching millions).

In practice, many effective prebunking programs combine both approaches: establishing a general orientation toward critical processing (logic-based) while also addressing specific high-prevalence myths relevant to the target audience (fact-based).


Section 35.6: The Bad News Game and Its Successors

Design Principles

Bad News, developed by DROG in collaboration with van der Linden's Cambridge lab and first released in 2018, is a browser-based game in which players take on the role of a misinformation producer. The game presents players with a social media persona and a developing narrative, and at each choice point offers multiple options for how to advance the narrative. Options that employ the game's six targeted manipulation techniques (impersonation, emotion, polarization, conspiracy, discrediting, and trolling) reward the player with "followers" and "credibility points."

The design logic is straightforward: by producing misinformation in a clearly labeled game context, players gain experiential understanding of how manipulation techniques work. This experiential understanding is expected to generalize to recognition when those techniques are encountered in the wild.
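The choice-and-reward loop described above can be sketched in a few lines. Everything concrete here is a hypothetical simplification: the option texts, point values, and the credibility penalty are invented for illustration and do not reproduce the actual Bad News scoring rules.

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    followers: int = 100
    credibility: int = 50
    techniques_seen: set = field(default_factory=set)

# One illustrative choice point. Options that use a manipulation
# technique grow the audience faster, echoing the game's reward logic.
CHOICE_POINT = [
    {"post": "Share a sober news report", "technique": None, "followers": +5},
    {"post": "Post as a fake health official", "technique": "impersonation", "followers": +40},
    {"post": "Frame the story as a cover-up", "technique": "conspiracy", "followers": +30},
]

def choose(state: GameState, option: dict) -> GameState:
    """Apply one player choice to the running game state."""
    state.followers += option["followers"]
    if option["technique"]:
        state.techniques_seen.add(option["technique"])
        state.credibility -= 5  # assumed: manipulative choices risk credibility
    return state

state = choose(GameState(), CHOICE_POINT[1])
print(state.followers, state.credibility, state.techniques_seen)
```

The pedagogically relevant detail is the accumulating `techniques_seen` set: by the end of a playthrough the player has actively exercised each targeted technique, which is precisely the experiential exposure the inoculation design relies on.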

The game was designed with several evidence-based educational principles in mind:

Active learning. Participants produce rather than simply observe, which requires deeper cognitive engagement.

Feedback loops. The game provides immediate feedback on the consequences of each choice, connecting actions to outcomes.

Escalating challenge. The game introduces techniques sequentially, allowing mastery of simpler techniques before more complex ones are encountered.

Narrative engagement. The story format maintains interest and may support deeper processing through narrative transportation effects.

Emotional engagement. The game's satirical tone and the experience of "being the villain" may generate moral emotions (unease, recognition) that support durable learning.

Empirical Evaluations

The Bad News game has been evaluated in multiple studies with generally positive results.

Roozenbeek and van der Linden (2019) conducted the foundational evaluation, recruiting an online convenience sample and measuring confidence in identifying manipulative news headlines before and after game play. They found a significant decrease in perceived reliability of fake news headlines (effect size approximately d = 0.38) and a marginally significant improvement in ability to identify manipulation techniques. Importantly, the improvement was observed across the political spectrum and was not moderated by prior media literacy.

Maertens et al. (2020) conducted a randomized controlled trial with a more rigorous design, including an active control condition and follow-up assessments at two and four weeks post-treatment. They found that the inoculation effect was maintained at two weeks but showed significant decay by four weeks, supporting the need for "booster" interventions.

Roozenbeek et al. (2022) examined the generalizability of Bad News inoculation effects across cultural contexts, finding that the game produced significant effects in 19 countries with diverse media environments. Effect sizes were somewhat smaller in countries with high existing media literacy (e.g., Scandinavian countries) but remained positive throughout.

Effect Sizes and Durability

Meta-analytic summaries of prebunking game research (including Bad News, Go Viral!, and several smaller variants) suggest typical effect sizes in the range of d = 0.25 to d = 0.45 for immediate post-treatment outcomes. These are modest by the standards of psychology research but substantial in a public health context: even small per-person reductions in misinformation susceptibility, applied at scale, could meaningfully reduce population-level belief in false claims.
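To make that d = 0.25 to 0.45 range concrete, it can be converted to Cohen's U3: the fraction of the treated group scoring above the control-group mean, under the standard assumption of normal distributions with equal variance. The conversion itself is textbook statistics; applying it to these particular effect sizes is only illustrative.

```python
from math import erf, sqrt

def u3(d: float) -> float:
    """Cohen's U3 for effect size d: the share of the treated group
    above the control mean, assuming equal-variance normals."""
    return 0.5 * (1 + erf(d / sqrt(2)))

for d in (0.25, 0.45):
    print(f"d = {d}: {u3(d):.0%} of inoculated participants exceed the control mean")
```

Under these assumptions, d = 0.25 means roughly 60% of inoculated participants outperform the average untreated person, and d = 0.45 means roughly 67%—a modest individual shift that, as the text notes, can still matter when applied across millions of people.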

Durability is a more significant concern. The available evidence suggests that inoculation effects decay substantially over time. Booster sessions (brief re-exposures to the inoculation content) can maintain effects, but the optimal frequency and format of booster interventions has not yet been systematically determined.

The Go Viral! Successor and Other Games

Go Viral! (Roozenbeek et al., 2021) was developed specifically to address COVID-19 misinformation, covering three core techniques: exploiting emotions, using fake experts, and using conspiracy theories. The game was commissioned by the United Nations and the UK Cabinet Office and has been played by several million users.

A randomized controlled trial by Basol and colleagues (2021) found that Go Viral! significantly improved participants' ability to identify manipulative COVID-19 misinformation. The effects were particularly pronounced for recognition of emotional manipulation. The study was conducted rapidly (during the early COVID-19 pandemic) and therefore has some methodological limitations, but it represents an important demonstration of the potential for rapid-deployment prebunking tools.

Subsequent games in this tradition include "Harmony Square" (targeting election integrity misinformation), "Cat Park" (targeting climate misinformation), and several others developed for specific contexts and target audiences.


Section 35.7: Google's Prebunking Campaigns

The 2022 YouTube Prebunking Ad Campaigns

In 2022, Google's Jigsaw unit collaborated with Cambridge's Social Decision-Making Lab to conduct what is arguably the most ambitious real-world test of prebunking at scale: a series of short video advertisements delivered through YouTube to audiences in Poland, the Czech Republic, and Slovakia. The campaign was designed to inoculate populations in Central and Eastern Europe against Russian disinformation tactics, which had become particularly salient following Russia's invasion of Ukraine in February 2022.

The advertisements—each approximately 90 seconds long—presented a specific manipulation technique (e.g., conspiracy framing, emotional manipulation, discrediting) with a brief explanation of how the technique works and why it is misleading. The ads were designed to be engaging and visually appealing, suitable for voluntary viewing in a commercial advertising context.

The research team, led by Jon Roozenbeek, conducted a pre-registered field experiment embedded in the advertising campaign. Participants were recruited through Google's consumer panel and randomly assigned to see the prebunking ads or control ads (unrelated commercial content). Outcome measures included susceptibility to misinformation headlines using the techniques covered in the ads, as well as measures of general critical reasoning about information.

Field Experiment Results

Roozenbeek and colleagues (2022) reported significant effects of the prebunking ads on susceptibility to misinformation. Participants who saw the prebunking ads were better able to identify manipulative news headlines using the targeted techniques, compared to control participants. The effect was found across all three countries in the study.

Effect sizes were modest (approximately d = 0.20 to d = 0.30), consistent with the broader prebunking literature. But the significance of the study lies not primarily in the effect sizes—which are difficult to compare across paradigms—but in the demonstration that prebunking can work in a naturalistic, real-world advertising context.

The study also found that the effects were not moderated by political ideology, suggesting that prebunking through advertising does not create the partisan backfire effects that sometimes accompany explicit corrections.

One particularly important finding was that the prebunking ads were effective even when viewed only once, without the interactive elements of the game-based approaches. This suggests that brief, passive exposure to inoculation content can produce meaningful effects, which is significant for the practical delivery of prebunking at scale.

Implications for At-Scale Prebunking

The Google/Cambridge field experiments represent a proof of concept for what might be called "prebunking as infrastructure": the idea that inoculation content could be delivered through existing media and advertising channels to reach very large audiences before significant misinformation exposures occur.

This approach has several advantages. It leverages existing distribution infrastructure that already reaches billions of people. It does not require any behavior change on the part of the audience beyond minimal attention (the ads are shown in the normal course of video consumption). And it can be targeted to populations or regions where specific misinformation campaigns are anticipated.

But the approach also raises significant questions. How often must the inoculation be refreshed? How can the inoculation content keep pace with evolving manipulation techniques? How should effects be measured at scale, given the difficulty of administering pre-post assessments to advertising audiences? What are the ethical implications of using behavioral advertising infrastructure for public health purposes?

These questions do not undermine the promise of the approach, but they indicate the substantial research and policy work that remains before prebunking campaigns could be deployed systematically as a public health intervention.


Section 35.8: Scalability Challenges

Inoculation Decay

The most consistently documented limitation of prebunking is that its effects decay over time. Studies using follow-up assessments at two, four, and eight weeks post-treatment generally find substantial reductions in effect size at each successive measurement. The typical pattern is rapid initial decay followed by slower stabilization at a level well below the immediate post-treatment effect.
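One simple way to picture this pattern is an exponential decline toward a long-run plateau. The toy model below uses purely illustrative parameter values (the function name and all numbers are assumptions, not estimates from any study):

```python
import math

def inoculation_effect(weeks, initial=0.30, plateau=0.10, rate=0.5):
    """Toy decay curve: effect falls from its immediate post-treatment
    level toward a lower long-run plateau, fast at first, then slowly."""
    return plateau + (initial - plateau) * math.exp(-rate * weeks)

# Effect at the typical follow-up intervals (weeks 0, 2, 4, 8):
for t in (0, 2, 4, 8):
    print(t, round(inoculation_effect(t), 3))
```

The steep drop between weeks 0 and 2, followed by near-flat values at weeks 4 and 8, reproduces the qualitative shape described above; the actual functional form of inoculation decay remains an open empirical question.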

The cognitive mechanisms of inoculation decay are not fully understood. Memory decay of the specific refutational preemption content is likely a factor: people forget the specific counter-arguments they were taught, and therefore lose the cognitive resources to deploy them when encountering misinformation. But decay may also reflect the fading of the "threat narrative" established by forewarning—over time, the sense of being in an information environment where manipulation is possible may diminish, reducing vigilance.

Booster shots—brief re-exposures to inoculation content, delivered at intervals—have been proposed as a solution to inoculation decay, by analogy with vaccine booster doses. Some research supports the effectiveness of booster shots in maintaining inoculation effects. But the practical challenge of delivering booster content at appropriate intervals to large audiences is substantial.

Reaching Resistant Populations

A second major scalability challenge is reaching populations that are already heavily exposed to misinformation and may have developed resistance to inoculation itself. Several factors complicate prebunking effectiveness in such populations.

Prior exposure to the misinformation. Inoculation is most effective when delivered before exposure to the target misinformation. For populations already saturated with false claims, prebunking may arrive too late. Research suggests that post-exposure inoculation (more accurately described as retroactive inoculation or correction-with-inoculation) can still be effective, but the effects are smaller than pre-exposure inoculation.

Ideological resistance. In populations where belief in a particular form of misinformation is closely tied to group identity, inoculation attempts may be interpreted as partisan attacks, generating reactance. Prebunking research has generally found small or negligible moderation by political ideology, but most studies have been conducted in relatively moderate political contexts. The effects in highly polarized populations with deep ideological investments in specific false beliefs remain less well understood.

Motivated skepticism. Sophisticated consumers of conspiracy theories are often highly aware that efforts to debunk their beliefs exist, and they may interpret prebunking as exactly the kind of "establishment" manipulation it claims to warn against. This "meta-conspiracy" interpretation—in which the prebunking itself is framed as evidence of the conspiracy—is a genuine challenge for reaching committed belief communities.

Platform-Level Challenges

Social media platforms face particular challenges in deploying prebunking at scale. Prebunking content placed adjacent to actual misinformation may itself be misidentified as misinformation, or may generate backlash from users who feel their content is being criticized. Prebunking that is too visible may reduce engagement with other platform content. And platforms must balance prebunking with other content moderation approaches (labeling, downranking, removal) without creating inconsistencies that undermine the effectiveness of any single approach.


Section 35.9: Integrating Prebunking into Practice

Platform Deployment

Social media platforms represent the most obvious deployment context for prebunking at scale. Several platforms have already piloted various forms of prebunking, typically embedded in "media literacy moments"—brief educational content delivered to users before they see specific types of content.

Twitter/X's "read before you share" prompts and Facebook's "context" labels are not prebunking in the strict sense, but they represent analogous approaches: providing contextual information that interrupts automatic processing and encourages more deliberate evaluation. YouTube's information panels under health-related videos provide brief factual context that partially addresses the forewarning component of inoculation.

True platform-level prebunking would go further: proactively delivering inoculation content—ideally interactive content in the game format—to users before they encounter misinformation, using platform data to identify users at elevated risk of exposure. This raises significant privacy and ethical concerns, but the technical capability exists.

Classroom Use

Prebunking content has been integrated into educational contexts at multiple levels. In secondary education, games like Bad News and Harmony Square have been used as classroom activities, typically embedded in social studies, civics, or media literacy courses. The game format is well-suited to the classroom context, where a teacher can guide discussion of the techniques identified during game play.

Several practical considerations affect classroom implementation. First, the games require reliable internet access and appropriate devices, which may not be available in under-resourced schools. Second, the games work best as part of a structured educational sequence, not as standalone activities; teachers need to be prepared to debrief the experience and connect it to broader media literacy concepts. Third, the effect sizes observed in research settings may not translate straightforwardly to classroom settings, where motivation, attention, and prior knowledge vary considerably.

Public Health Communication Applications

The COVID-19 pandemic dramatically accelerated interest in prebunking as a public health communication tool. Public health agencies faced an unprecedented "infodemic" of false claims about vaccines, treatments, and the disease itself, and traditional correction strategies—issuing statements, fact-checking, countering through social media—were insufficient.

Several public health applications of prebunking have been developed. The World Health Organization incorporated prebunking elements in their EPI-WIN communication strategy. The UK government's Counter Disinformation Unit worked with academic partners to develop prebunking materials. National public health agencies in several countries (including Canada, Denmark, and Finland) have integrated prebunking into their health communication campaigns.

For public health applications, the most effective format appears to be brief, visually engaging content that can be delivered through existing media channels (social media, YouTube, television public service announcements) and that clearly identifies specific manipulation techniques without requiring prior knowledge of the topic. The content should be politically neutral—focused on the manipulation technique rather than on specific political actors—to avoid the appearance of partisan advocacy.


Callout Box: The "Psychological Vaccine" Metaphor

The biological metaphor at the heart of inoculation theory is powerful, intuitive, and pedagogically useful. Like a vaccine, an inoculation exposes the "cognitive immune system" to a weakened form of the threat in order to stimulate the development of defenses. Like a vaccine, the protection conferred is not absolute and decays over time, requiring boosters. Like a vaccine, inoculation works better as a preventive measure than as a treatment after full exposure.

But the metaphor has limits worth noting. Unlike biological immunity, cognitive immunity does not develop automatically from a weakened pathogen exposure; it requires the person to actively process the refutational preemption. Unlike vaccines, prebunking cannot be administered without the knowledge and at least partial cooperation of the recipient (unlike, say, covertly adding vaccine to a water supply—which would be unethical in any case). And unlike the biological immune system, the cognitive "immune system" is not a unified biological mechanism; it is a collection of loosely related cognitive capacities that can be strengthened in some respects while remaining vulnerable in others.

These limitations of the metaphor suggest caution in making strong claims about prebunking as a "cure" for misinformation. It is better understood as one tool in a toolkit that must include structural, regulatory, and educational approaches as well.


Callout Box: The FLICC Framework

John Cook's FLICC taxonomy (Fake Experts, Logical Fallacies, Impossible Expectations, Cherry Picking, and Conspiracy Theories) provides a useful entry point for understanding the kinds of manipulation techniques that logic-based inoculation targets. The framework was developed initially in the context of climate change denial but has since been extended to other domains of science denial and public misinformation.

  • Fake Experts: Promoting the opinions of people who appear to have relevant expertise but do not (or who are not representative of expert consensus in their field).
  • Logical Fallacies: Using formally or informally invalid argumentative moves (ad hominem, false equivalence, slippery slope, etc.) to support a conclusion that does not follow from the premises.
  • Impossible Expectations: Demanding a standard of proof of mainstream science that is never applied to the alternative views being advanced.
  • Cherry Picking: Selectively presenting evidence that supports a desired conclusion while ignoring the larger body of evidence that contradicts it.
  • Conspiracy Theories: Explaining the existence of contrary evidence as the product of a widespread, coordinated cover-up by powerful institutions or actors.

Key Terms

Continued influence effect (CIE): The persistence of misinformation's influence on reasoning and behavior even after a correction has been issued and understood.

Debunking: The post-hoc correction of false beliefs, typically through presenting accurate information and/or demonstrating the falsity of specific claims.

Fact-based inoculation: Prebunking that addresses specific false factual claims by pre-emptively exposing and refuting them; also called content-based inoculation.

FLICC framework: A taxonomy of science denial and misinformation techniques: Fake Experts, Logical Fallacies, Impossible Expectations, Cherry Picking, Conspiracy Theories (developed by John Cook).

Forewarning: In inoculation theory, the explicit notification that an attempt to manipulate or persuade is forthcoming, which motivates preparation of counter-arguments.

Inoculation decay: The reduction in inoculation effectiveness over time, analogous to the waning of vaccine-induced immunity.

Logic-based inoculation: Prebunking that addresses the underlying manipulative techniques used in misinformation rather than specific false claims; also called technique-based inoculation.

Motivational backlash: The increase in false belief that sometimes results from corrections, attributed to reactance or identity-protective cognition.

Prebunking: Inoculation-based interventions delivered before exposure to misinformation, intended to build resistance to manipulation techniques.

Refutational preemption: In inoculation theory, the exposure to a weakened form of the counter-argument together with its refutation, which provides specific cognitive tools for resisting the full-strength counter-argument.


Discussion Questions

  1. The continued influence effect suggests that corrections can actually reinforce misinformation through repetition of the false claim. How should communicators balance the need to address specific false claims against the risk of amplifying them?

  2. McGuire's original inoculation research was conducted on "cultural truisms"—beliefs so widely shared they had never been challenged. In what sense is contemporary misinformation research dealing with the opposite problem?

  3. What are the ethical implications of designing prebunking games in which players take on the role of misinformation producers? Could this approach have unintended negative consequences?

  4. The Google/Cambridge prebunking campaign used YouTube advertising infrastructure to deliver inoculation content at scale. What are the ethical tensions in using commercial advertising platforms for public health purposes? Who controls the content, and who bears responsibility for its effects?

  5. Inoculation theory predicts that "booster shots" are necessary to maintain inoculation effects over time. What practical mechanisms could platforms, governments, or educators use to deliver such boosters at appropriate intervals?

  6. If logic-based inoculation works at an intermediate level of abstraction (more general than specific claims, but not fully general), what are the implications for the curriculum design of a media literacy program based on the inoculation approach?

  7. Research suggests that inoculation effects are not strongly moderated by political ideology. Does this finding support the conclusion that prebunking is a politically neutral intervention? What assumptions does this conclusion rest on?

  8. Some critics have argued that focusing on prebunking and inoculation individualizes the misinformation problem, placing the burden of resistance on individuals rather than on platforms, governments, or information producers. Evaluate this critique.


Summary

This chapter has examined the shift from debunking to prebunking as a response to the limitations of post-hoc correction. Beginning with the evidence on why corrections fail—the continued influence effect, illusory truth, motivated reasoning, and correction fatigue—we traced the development of inoculation theory from McGuire's 1961 biological metaphor to contemporary technique-based prebunking research.

The core contribution of the contemporary prebunking program is the insight that logic-based inoculation—teaching people to recognize manipulation techniques rather than refuting specific false claims—can provide broader and more scalable protection than content-based approaches. Empirical support for this approach comes from multiple laboratory and field experiments, culminating in Google's 2022 YouTube prebunking campaigns that demonstrated significant effects in naturalistic real-world settings.

Important challenges remain: inoculation effects decay over time, reaching committed believers is difficult, and the logistical challenges of at-scale deployment have not been fully solved. But prebunking has established itself as one of the most empirically promising tools in the misinformation countermeasures toolkit, and further development of its methods and applications will be a central priority for research and policy in the coming years.


References

Basol, M., Roozenbeek, J., & van der Linden, S. (2020). Good news about Bad News: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3(1), 2.

Brehm, J. W. (1966). A theory of psychological reactance. Academic Press.

Cook, J., & Lewandowsky, S. (2011). The Debunking Handbook. University of Queensland.

Cook, J., Lewandowsky, S., Ecker, U. K. H., et al. (2020). The Debunking Handbook 2020. Available at debunkinghandbook.org.

Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16(1), 107-112.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106-131.

Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2021). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27(1), 1-16.

McGuire, W. J. (1961). The effectiveness of supportive and refutational defenses in immunizing and restoring beliefs against persuasion. Sociometry, 24(2), 184-197.

McGuire, W. J. (1964). Inducing resistance to persuasion: Some contemporary approaches. Advances in Experimental Social Psychology, 1, 191-229.

Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865-1880.

Roozenbeek, J., & van der Linden, S. (2019). Fake news game confers psychological resistance against online misinformation. Palgrave Communications, 5(1), 65.

Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., van der Bles, A. M., & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 across 26 countries. Royal Society Open Science, 7(10), 201199.

Roozenbeek, J., van der Linden, S., Goldberg, B., Rathje, S., & Lewandowsky, S. (2022). Psychological inoculation improves resilience against misinformation on social media. Science Advances, 8(34), eabo6254.

Skurnik, I., Yoon, C., Park, D. C., & Schwarz, N. (2005). How warnings about false claims become recommendations. Journal of Consumer Research, 31(4), 713-724.

van der Linden, S. (2022). Foolproof: Why misinformation infects our minds and how to build immunity. W. W. Norton.

van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1(2), 1600008.

Wilkes, A. L., & Leatherbarrow, M. (1988). Editing episodic memory following the identification of error. Quarterly Journal of Experimental Psychology, 40(2), 361-387.

Wood, T., & Porter, E. (2019). The elusive backfire effect: Mass attitudes' steadfast factual adherence. Political Behavior, 41(1), 135-163.