Case Study 35.2: Google's Prebunking Ad Campaigns in Central/Eastern Europe — A Field Experiment

Overview

In early 2022, following Russia's invasion of Ukraine and the accompanying wave of Russian-linked disinformation targeting Central and Eastern European audiences, Google's Jigsaw unit partnered with the Social Decision-Making Lab at the University of Cambridge to conduct one of the most ambitious real-world tests of prebunking ever attempted. The collaboration resulted in a series of short video advertisements delivered through YouTube to audiences in Poland, the Czech Republic, and Slovakia, alongside a rigorously designed field experiment to measure their effectiveness.

This case study examines the methodology and results of that field experiment, explores the implications for at-scale prebunking, and considers the ethical and practical challenges raised by deploying behavioral science techniques through commercial advertising infrastructure.


Background: The Disinformation Context

Russian Disinformation in Central and Eastern Europe

Central and Eastern European countries have been frequent targets of Russian influence operations. These campaigns typically combine several manipulation techniques: emotional appeals to cultural and ethnic identity, conspiracy framing around NATO and Western institutions, discrediting of mainstream journalists and fact-checkers, and impersonation of legitimate news sources.

Following Russia's February 2022 invasion of Ukraine, these campaigns intensified significantly. Russian state media and affiliated outlets promoted narratives claiming that NATO provocations were responsible for the conflict, that Ukrainian soldiers were committing atrocities against Russian-speaking civilians, and that Western economic sanctions would harm ordinary Europeans more than the Russian government. These narratives circulated on social media platforms including YouTube, Facebook, and Telegram, reaching significant audiences in Poland, the Czech Republic, and Slovakia.

Jigsaw's Prior Work

Jigsaw (formerly Google Ideas) is a technology incubator within Google that works on "challenges to open societies," including extremism, censorship, and disinformation. Jigsaw had previously explored prebunking approaches through its collaboration with researchers at Cambridge, and the Ukraine-related disinformation context provided a compelling and urgent case for a large-scale field test.

The decision to use YouTube advertising as the delivery mechanism was driven by practical considerations: YouTube already reaches massive audiences in the target countries, the advertising infrastructure allows for rapid deployment and A/B testing, and video content can deliver inoculation material in an engaging format without requiring any behavior change on the part of the audience beyond minimal attention.


Study Methodology

Intervention Design

The prebunking advertisements were designed through an iterative collaborative process between Jigsaw's team and Cambridge researchers. The design goals were:

  1. Each advertisement should target a single, specific manipulation technique used in Russian-linked disinformation.
  2. The content should be politically neutral — focused on the manipulation technique as a technique, not on specific political actors or governments.
  3. The advertisements should be engaging and visually polished, appropriate for a commercial advertising context in which most viewers would be consuming entertainment content.
  4. Each advertisement should be approximately 90 seconds long, allowing it to be delivered as a "mid-roll" or "pre-roll" advertisement without requiring a significant time commitment.

Three advertisements were developed, each targeting one manipulation technique:

  • Conspiracy framing: An ad explaining how conspiracy theories work — why they are appealing, how they are structured, and why the patterns they identify are often misleading.
  • Emotional manipulation: An ad explaining how emotional language and imagery can be used to bypass analytical thinking, with specific examples of the linguistic and visual cues that signal emotionally manipulative content.
  • Discrediting: An ad explaining how the strategy of attacking source credibility, rather than engaging with the substance of evidence, is used to undermine trust in legitimate information sources.

Each advertisement included both forewarning (alerting viewers that this technique is used to manipulate them) and refutational preemption (showing an example of the technique in action, with an explanation of why it misleads).

Experimental Design

The field experiment was pre-registered at the Open Science Framework before data collection began — an important indicator of methodological rigor, as pre-registration prevents post-hoc hypothesis adjustment and selective reporting.

Participant recruitment: Participants were recruited through Google's consumer research panel, which provides access to representative samples of internet users in the target countries. Approximately 5,000 participants were recruited in Poland, 5,000 in the Czech Republic, and 5,000 in Slovakia (totaling approximately 15,000 participants).

Randomization: Participants were randomly assigned to either the prebunking condition (seeing one of the prebunking advertisements) or a control condition (seeing an unrelated commercial advertisement of similar length and format). The randomization was stratified by country and by estimated prior exposure to disinformation (based on responses to a brief pre-screening questionnaire).
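Stratified randomization of this kind — random assignment within each country-by-exposure cell so that conditions stay balanced on both variables — can be sketched as follows (illustrative code with hypothetical field names, not the study's actual assignment procedure):

```python
import random
from collections import defaultdict

def stratified_assign(participants, arms=("prebunk", "control"), seed=42):
    """Randomly assign participants to arms within each
    (country, prior_exposure) stratum, keeping arm sizes balanced."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in participants:
        strata[(p["country"], p["prior_exposure"])].append(p)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)               # random order within the stratum
        for i, p in enumerate(members):
            assignment[p["id"]] = arms[i % len(arms)]  # alternate through arms
    return assignment

# Example: six participants across two strata
people = [
    {"id": 1, "country": "PL", "prior_exposure": "low"},
    {"id": 2, "country": "PL", "prior_exposure": "low"},
    {"id": 3, "country": "PL", "prior_exposure": "low"},
    {"id": 4, "country": "CZ", "prior_exposure": "high"},
    {"id": 5, "country": "CZ", "prior_exposure": "high"},
    {"id": 6, "country": "CZ", "prior_exposure": "high"},
]
assignment = stratified_assign(people)
```

Shuffling within strata and then alternating through the arms guarantees that each country-by-exposure cell is split as evenly as possible between conditions, so country or prior-exposure differences cannot masquerade as treatment effects.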

Outcome measures: The primary outcome measure was a "manipulation recognition" task: participants were shown a series of social media posts (some using the targeted manipulation technique, some not) and asked to rate the reliability of each post and to identify whether it used any of the manipulation techniques covered in the study. Secondary outcomes included confidence in ability to recognize manipulation, general skepticism toward online information, and self-reported likelihood of sharing the test posts.

Timeline: Participants completed baseline measures before exposure to the advertisement. Outcome measures were completed immediately after advertisement exposure.

Blinding and Controls

Participants were not told that the study was specifically about prebunking or disinformation inoculation; they were told only that they were participating in a study about media consumption. This partial blinding was intended to reduce demand effects.

Control advertisements were selected to be comparable to the prebunking ads in length, visual quality, and topic relevance (they covered topics related to online information, but did not provide inoculation content), reducing the likelihood that differences between conditions were driven simply by the novelty or entertainment value of the prebunking ads.


Results

Primary Outcome: Manipulation Recognition

The primary analyses found that participants who saw the prebunking advertisements were significantly better at identifying manipulative social media posts that used the targeted technique, compared to participants who saw the control advertisements.

Effect sizes: The manipulation recognition advantage for the prebunking condition was in the range of d = 0.20 to d = 0.30, depending on the specific manipulation technique and country. These effect sizes are modest but consistent with the broader prebunking literature. The smallest effects were for the conspiracy framing technique (d ≈ 0.18-0.22 across countries) and the largest for the emotional manipulation technique (d ≈ 0.26-0.31).
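For readers unfamiliar with the metric: Cohen's d is the difference between group means expressed in units of the pooled standard deviation. A minimal sketch with toy data (not the study's data):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: difference in group means divided by the
    pooled (sample) standard deviation of the two groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Toy recognition scores: a one-point mean advantage on a scale with SD = 2
print(round(cohens_d([2, 4, 6], [1, 3, 5]), 2))  # → 0.5
```

On this scale, the reported d = 0.20 to 0.30 means prebunked participants scored about a fifth to a third of a standard deviation higher on manipulation recognition than controls — small at the individual level, but potentially meaningful when delivered to millions of viewers.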

Country differences: Effects were significant in all three countries. Poland showed somewhat larger effects than the Czech Republic and Slovakia, though the differences were not dramatic and may reflect sampling variation rather than genuine cultural differences.

Transfer to novel content: Critically, the prebunking effect was found not just for posts closely resembling those shown in the advertisements, but also for novel posts using the same technique with different content. This transfer finding is essential for practical application: it demonstrates that the inoculation conferred by the advertisement generalizes beyond the specific examples used.

Secondary Outcomes

Confidence: Participants in the prebunking condition reported higher confidence in their ability to identify manipulation in online content (g = 0.17 to 0.25 across countries). This is a meaningful secondary outcome because confidence can affect the likelihood of pausing to evaluate content critically in real-world settings.

Skepticism: Participants in the prebunking condition showed modestly higher general skepticism toward online information (g = 0.12 to 0.20). Importantly, this increase in skepticism was not accompanied by decreased discrimination between real and fake content — participants were more skeptical of manipulative content but not more skeptical of legitimate content. This "calibrated skepticism" is the ideal outcome of prebunking.
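The distinction between calibrated skepticism and blanket distrust can be made concrete with a simple pair of indices — overall skepticism versus discrimination between real and fake posts. The sketch below uses hypothetical ratings and assumes a 1–7 reliability scale (not the study's actual instrument):

```python
def skepticism_and_discrimination(ratings, scale_max=7):
    """From reliability ratings (assumed 1-7 scale) of 'real' and 'fake'
    test posts, compute:
      skepticism     - how low ratings are overall (higher = more skeptical)
      discrimination - mean rating gap between real and fake posts
    Blanket distrust raises skepticism but leaves discrimination flat;
    calibrated skepticism raises both."""
    real, fake = ratings["real"], ratings["fake"]
    overall = sum(real + fake) / (len(real) + len(fake))
    skepticism = scale_max - overall
    discrimination = sum(real) / len(real) - sum(fake) / len(fake)
    return skepticism, discrimination

control   = {"real": [5, 6], "fake": [4, 5]}  # hypothetical ratings
prebunked = {"real": [5, 6], "fake": [2, 3]}  # only fake posts rated lower
print(skepticism_and_discrimination(control))    # → (2.0, 1.0)
print(skepticism_and_discrimination(prebunked))  # → (3.0, 3.0)
```

In the toy example, the prebunked group is more skeptical overall and better at separating real from fake content — the pattern the study reports. A group that simply distrusted everything would show higher skepticism with no gain in discrimination.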

Sharing intention: Participants in the prebunking condition reported lower likelihood of sharing the manipulative test posts (g = 0.14 to 0.22). This is particularly significant because sharing is the primary mechanism by which misinformation spreads; reductions in sharing intention are directly relevant to practical mitigation.

Moderation Analyses

Political ideology: The effect of prebunking was not moderated by political ideology (participants' self-reported political orientation). This finding is consistent with laboratory research suggesting that technique-based prebunking does not generate the partisan backfire effects that can accompany content-specific corrections.

Prior exposure to disinformation: Participants with higher self-reported prior exposure to the targeted disinformation narratives showed modestly smaller prebunking effects, consistent with the theoretical prediction that inoculation is most effective before exposure. However, even in high-prior-exposure groups, the prebunking effect remained statistically significant.

Education: Effect sizes were not strongly moderated by education level, suggesting that the prebunking advertisements do not require a high level of baseline literacy to be effective.

Single-exposure finding: A particularly notable result was that the prebunking effect was significant after a single advertisement exposure, without any interactive elements. This demonstrates that passive viewing of a well-designed prebunking advertisement can produce meaningful effects, which is crucial for the practicality of at-scale deployment through advertising.


Implications for At-Scale Prebunking

Proof of Concept for Advertising-Delivered Inoculation

The most important implication of the Google/Cambridge field experiment is the proof of concept: prebunking can be delivered at scale through advertising infrastructure, with effects that are significant and not trivially small, in real-world settings rather than artificial laboratory conditions.

This proof of concept opens up a policy possibility that would have seemed speculative before these studies: systematic deployment of prebunking content through commercial advertising channels as a public health response to anticipated disinformation campaigns. If the effects found in these studies replicate at even larger scale, the aggregate population-level impact could be meaningful.

The Infrastructure Advantage

One of the most significant advantages of the advertising delivery model is that it leverages existing infrastructure that already reaches billions of people without requiring any new behavioral patterns on the part of the audience. Prebunking games require users to voluntarily seek out and engage with educational content — a significant barrier. Prebunking advertisements require only that users watch content they were already going to see.

This infrastructure advantage is accompanied by a targeting advantage: advertising platforms have extensive data on user demographics, interests, and behavior that could in principle be used to target prebunking content to users who are most at risk of encountering specific forms of misinformation. While this raises significant privacy concerns (discussed below), it suggests a more sophisticated deployment model than simply broadcasting prebunking content to everyone.

Challenges for Sustained Deployment

The field experiment provides a strong demonstration of immediate effects but does not address several challenges that would need to be solved for sustained deployment:

Content updating: Misinformation techniques evolve, and prebunking advertisements designed for 2022 Russian disinformation may not be effective against 2025 disinformation using different techniques. A sustained prebunking advertising program would need a mechanism for regularly updating content based on monitoring of current manipulation trends.

Reach saturation: Repeated exposure to the same prebunking content may lead to "inoculation fatigue" — habituation that reduces the effectiveness of subsequent exposures. The optimal frequency and variety of prebunking advertisements is unknown.

Inoculation decay and boosters: The field experiment measured only immediate effects. Given the longitudinal research showing significant inoculation decay within weeks, the long-term impact of a single advertisement exposure is likely small. A sustained campaign would need to deliver booster content at appropriate intervals.

Attribution and gaming: Sophisticated disinformation actors are likely aware of prebunking research and may adapt their techniques to exploit the gaps in prebunking coverage. A prebunking program that becomes publicly known may invite disinformation actors to develop "anti-prebunking" narratives that reframe inoculation attempts as evidence of establishment manipulation.


Ethical Analysis

The deployment of behavioral science techniques through commercial advertising without explicit informed consent raises genuine ethical concerns. Participants in the field experiment were partially blinded to the study's true purpose — they knew they were participating in a study about media, but not that the study was specifically designed to test a psychological inoculation intervention.

The consent issue is more acute in actual deployment (outside the research context): viewers of YouTube prebunking advertisements are not told that the advertisements are designed to change their psychological resistance to misinformation. They see what appears to be an educational public service announcement, without knowing that it was designed using inoculation theory and that its effects have been studied.

Whether this is ethically acceptable depends on the ethical framework applied. A consequentialist analysis would weigh the benefits of reduced misinformation susceptibility against the costs of proceeding without explicit consent. A deontological analysis might conclude that persons have a right to know when their psychology is being deliberately modified, regardless of the beneficial intent. A communitarian analysis might focus on the collective benefit of reducing population-level misinformation susceptibility.

Most prebunking researchers have defended the advertising approach by noting that the content is transparent and informative — it does not contain any hidden messages or subliminal content; it simply presents information about manipulation techniques in an engaging format. On this view, prebunking advertisements are no more ethically problematic than public health advertisements about smoking or seat belt use. The question is whether this analogy holds.

Who Controls the Content?

A deeper ethical concern involves the question of who decides what counts as a "manipulation technique" for purposes of prebunking. The current research has focused on relatively uncontroversial techniques — emotional manipulation, impersonation, conspiracy framing — that most researchers and policy-makers would readily identify as problematic. But the line between "manipulation" and "persuasion" is not always clear, and a prebunking infrastructure controlled by a small set of actors could in principle be used to inoculate populations against legitimate political speech.

This concern is not hypothetical. Political actors routinely accuse their opponents of "emotional manipulation" and "conspiracy framing," while presenting their own emotional and conspiratorial rhetoric as factual. A prebunking program that effectively trains audiences to recognize and resist these techniques must be careful not to train them to resist only one side's deployment of these techniques.

The Jigsaw/Cambridge approach attempted to address this concern by focusing on techniques rather than on specific political positions, and by using politically neutral examples. But the selection of which techniques to target, the framing of the advertisements, and the decision about what counts as "manipulative" versus "legitimate persuasion" all involve value judgments that are not politically neutral.

Data and Privacy

The targeting of prebunking advertisements using advertising platform data raises significant privacy concerns. If Google uses behavioral and demographic data to identify users who are most susceptible to or most likely to encounter specific forms of misinformation, and then targets those users with prebunking content, it is using surveillance data for a potentially beneficial public purpose — but the data use itself may be problematic regardless of the purpose.


Lessons for Future Campaigns

Key Success Factors

Based on the Jigsaw/Cambridge experience, several factors appear to be important for successful at-scale prebunking campaigns:

Political neutrality: Focusing on manipulation techniques rather than specific political narratives reduces the risk of partisan backlash and extends the reach of the campaign across the political spectrum.

Visual quality and engagement: Prebunking content deployed through commercial advertising must compete for attention with polished commercial content. Investments in production quality are not cosmetic; they affect the probability of genuine engagement.

Pre-registration and rigorous evaluation: Building rigorous evaluation into prebunking campaigns from the start — rather than as an afterthought — generates the evidence needed to improve future campaigns and makes accountability possible.

Coordination with other interventions: Prebunking advertising is likely most effective as one component of a broader counter-disinformation strategy, coordinated with platform content moderation, fact-checking, and media literacy education.

Unanswered Questions

The Jigsaw/Cambridge field experiment, despite its methodological strengths, leaves important questions unanswered:

  • What are the effects of sustained, repeated exposure to prebunking advertisements over months or years?
  • How can prebunking advertising be kept current with evolving manipulation techniques without an expensive, ongoing content production process?
  • What is the optimal targeting strategy: broad reach to everyone, or precise targeting to high-risk individuals?
  • How do effects vary across different disinformation contexts (health, elections, geopolitics) and different media environments?
  • Can prebunking advertising effects be detected in population-level behavior data (e.g., actual sharing behavior on social media) rather than in survey-based attitude measures?

These questions define a substantial research agenda for the coming years.


Conclusion

The Google/Cambridge prebunking advertising campaigns in Central and Eastern Europe represent a landmark in the application of inoculation theory to real-world misinformation. They demonstrate, for the first time in a rigorous pre-registered field experiment, that prebunking delivered through commercial advertising can significantly reduce susceptibility to manipulation techniques even after a single brief exposure.

The implications are substantial. If these findings replicate and scale, prebunking advertising could become a routine tool in the counter-disinformation toolkit of governments, platforms, and civil society organizations. The infrastructure for delivery already exists; what is needed is the research, policy framework, and institutional will to deploy it responsibly.

The ethical challenges — consent, content control, data use, potential for misuse — are real and should not be minimized. But they are challenges to be addressed through governance and accountability structures, not reasons to abandon an approach that shows genuine promise for reducing one of the most consequential information challenges of our time.


Discussion Questions for Case Study 35.2

  1. The Jigsaw/Cambridge field experiment pre-registered its analysis plan before data collection. Why is pre-registration particularly important for industry-funded research? What specific biases does it protect against?

  2. The advertisements targeted Russian disinformation techniques in Central and Eastern Europe. Does the context of a specific geopolitical conflict make this campaign more or less ethically defensible than a context-neutral prebunking campaign? Why?

  3. The single-exposure finding — that significant effects occurred after viewing a single advertisement once — is both practically important and theoretically surprising. What mechanisms could explain this finding? What follow-up research would help clarify whether the finding is robust?

  4. Design a governance framework for at-scale prebunking advertising that would address the concerns about who controls the content and what counts as a "manipulation technique." Who should have decision-making authority? What accountability mechanisms should be in place?

  5. The campaign focused on audiences in Poland, the Czech Republic, and Slovakia. Should similar campaigns have been targeted at Russian-speaking audiences within Russia? What additional ethical considerations would that involve?