Case Study 33.1: The Bad News Game — Prebunking via Simulation

Overview

In 2018, a browser-based game went quietly live at getbadnews.com. It was not marketed as an educational tool or a propaganda awareness program. Players were invited to "become a disinformation mastermind" — to build a fake news empire, manipulate a fictional social media following, and deploy the techniques of influence operations to achieve their in-game goals. Within two years, it had been played by over a million people in more than fifteen languages, studied in peer-reviewed publications across three continents, and deployed in formal educational settings by governments including the Netherlands, the United Kingdom, and Sweden.

Bad News is the most extensively studied scalable inoculation game in existence. This case study examines its design history, its experimental evidence base, and the lessons it provides for the broader challenge of deploying inoculation theory at population scale.


Background: The Problem Bad News Was Designed to Solve

By 2016, it was clear that social media platforms had become environments in which disinformation spread faster and farther than corrections could follow. The standard responses (debunking, fact-checking, content moderation) were demonstrably insufficient: false headlines traveled farther than their corrections, fact-checks reached largely already-skeptical audiences, and content moderation was too slow and too inconsistent to prevent widespread exposure to false claims.

Sander van der Linden and his colleagues at the Cambridge Social Decision-Making Lab were working on a different approach: not faster corrections, but preemptive resistance. If people could be inoculated against the techniques of disinformation before encountering specific false claims, those claims would be less effective when they arrived.

The challenge was delivery. Text-based inoculation messages worked in controlled studies, but they required attention, motivation, and a willingness to read material that felt — to most potential audiences — like homework. Video-based inoculation worked better, but production costs were high and distribution required platform cooperation. What van der Linden's colleague Jon Roozenbeek proposed was a game.

The logic was straightforward. Games are intrinsically motivating — people play them voluntarily, for extended periods, with high attention. Games also operationalize the "active inoculation" principle: rather than reading about how disinformation works, players would have to do it, making decisions about which manipulation techniques to apply and experiencing the consequences. The perspective-shift — from passive target to active producer — had the potential to produce qualitatively stronger inoculation than any passive format.


Design: How the Game Works

Bad News opens with a simple framing: you are a nobody with no followers on a fictional social media platform. Your goal is to build a following of one million people. The means are up to you.

The game proceeds through six "badge" challenges, each corresponding to one of six disinformation technique categories:

Impersonation — You create a fake account impersonating a credible news organization or public figure, learning how false authority is constructed through the appearance of legitimacy.

Emotion — You craft posts designed to maximize emotional arousal — fear, outrage, disgust — to drive engagement and sharing, regardless of factual accuracy.

Polarization — You deploy us-vs.-them framing to deepen social divisions and build a tribal audience, selecting the most divisive versions of real-world political conflicts.

Conspiracy — You construct a conspiracy narrative linking unrelated events, learning how unfalsifiability is built into conspiratorial framing structures.

Discredit — You attack the credibility of mainstream information sources — journalists, scientists, government agencies — using ad hominem, conflict-of-interest insinuation, and fake evidence.

Trolling — You deploy harassment and coordinated inauthentic behavior to suppress credible voices and dominate information environments.

At each step, players must make choices — selecting from multiple options the one most likely to grow their following. The game provides feedback: choosing well-crafted manipulative content results in visible follower growth; less effective choices result in stagnation. This operant feedback loop gives players experiential knowledge of what disinformation techniques work, and why.
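The choice-and-feedback mechanic can be sketched in a few lines. The badge names come from the six categories above, but the option text and follower payoffs are invented for illustration; the game's actual content and scoring are not reproduced here.

```python
# Minimal sketch of the Bad News operant feedback loop.
# Options and payoffs are hypothetical, not taken from the game:
# manipulative choices grow the follower count, weak ones stagnate.
OPTIONS = {
    "impersonation": [
        ("Post the claim from your personal account", 0),      # stagnation
        ("Impersonate an official-looking news outlet", 800),  # growth
    ],
    "emotion": [
        ("Report the facts in neutral language", 0),
        ("Lead with an outrage-inducing headline", 1200),
    ],
}

def play_badge(badge: str, choice_index: int, followers: int) -> int:
    """Apply the feedback rule for one badge challenge and
    return the updated follower count."""
    _label, growth = OPTIONS[badge][choice_index]
    return followers + growth

followers = 0
followers = play_badge("impersonation", 1, followers)  # manipulative choice
followers = play_badge("emotion", 1, followers)        # manipulative choice
print(followers)  # 2000: visible growth rewards effective manipulation
```

The point of the loop is that the player learns the technique taxonomy by seeing which choices the follower counter rewards, not by reading about it.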

The game takes approximately fifteen minutes to complete and is designed to be self-explanatory — no introduction, no reading of instructions, no prior knowledge required.


The Experimental Evidence

Initial Validation Study (Roozenbeek and van der Linden, 2019)

The first peer-reviewed study of Bad News was published in Palgrave Communications in 2019. The study used an online experiment with a nationally representative sample of participants from the United States (n = 15,031). Participants were randomly assigned to play Bad News (treatment condition) or to play a different game unrelated to disinformation (control condition). Both groups then assessed the reliability of a series of social media posts: some genuine news, some disinformation built from the six technique categories the game teaches.

The key finding: participants who played Bad News rated manipulative social media posts as significantly less reliable than control participants did, while their assessments of genuine news posts did not differ significantly. This "accuracy discernment" pattern (not merely becoming more skeptical, but becoming more discriminating) matched the design goal.

Effect sizes were modest (Cohen's d of approximately 0.15–0.22 across technique categories), consistent with the broader inoculation literature. The effects held across partisan identity groups: both self-identified liberals and conservatives showed significant improvements in manipulation detection.
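The effect-size statistic reported here, Cohen's d, is the difference in group means divided by the pooled standard deviation. A minimal sketch, using made-up reliability ratings rather than study data (a negative d means treatment participants rated manipulative posts as less reliable, which is the desired direction):

```python
from statistics import mean, stdev

# Illustrative 7-point reliability ratings of manipulative posts
# (1 = not reliable, 7 = very reliable). NOT data from the study.
treatment = [2, 3, 2, 4, 3, 2, 3, 2]   # Bad News players
control   = [3, 4, 3, 5, 4, 3, 4, 4]   # control-game players

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

d = cohens_d(treatment, control)
print(round(d, 2))
```

With a tiny illustrative sample like this, d comes out far larger than the 0.15–0.22 range the study reports; the formula, not the magnitude, is the point.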

Notably, the study found that self-reported confidence in the ability to spot disinformation was significantly higher among Bad News players than controls — a confidence calibration finding that has both positive and potentially worrying implications. Better-calibrated confidence is valuable; overconfidence can itself be exploited.

Cross-Cultural Replication (Roozenbeek et al., 2020)

A follow-up study tested Bad News across eight different countries, including both Western liberal democracies (Germany, Netherlands, United Kingdom) and countries with less developed media literacy traditions. The cross-cultural study found significant inoculation effects across all eight countries, with somewhat larger effects in countries with lower baseline media literacy.

This cross-cultural finding is significant for two reasons. First, it suggests that the game's inoculation mechanism is not culturally specific — it does not depend on prior media literacy education or cultural familiarity with the concept of disinformation. Second, it suggests that the populations with the greatest baseline vulnerability to disinformation may benefit most from the game — a pattern of differential effectiveness that favors deployment in precisely the highest-need contexts.

Age and Education Effects

Multiple studies of Bad News have examined whether the game works comparably across age groups and education levels. The consistent finding is that effect sizes are relatively stable across these variables: younger and older participants, and more and less educated participants, all show measurable improvements. This is practically important: it suggests that Bad News is not merely preaching to a media-literate choir but also reaching less media-literate populations.

Some studies find modestly larger effects among younger participants (18–34 age range), which may reflect greater familiarity with the social media context the game depicts, greater engagement with the game format, or developmental differences in attitude malleability.


The Scaling Challenge: Reaching One Million Players

From a research perspective, Bad News is an exceptionally well-validated tool. From a public health perspective, the question is reach. One million players is impressive; in a country of 330 million, it is also a small fraction of the population.

Van der Linden and Roozenbeek's response to the scaling challenge has taken several forms.

Integration into formal education. The Netherlands incorporated Bad News into its national media literacy curriculum, reaching students in secondary schools across the country. This integration model — using an existing institutional distribution mechanism — is the most efficient path to broad coverage, but it is slow, requires institutional buy-in, and reaches only students in formal education.

Government deployment. The UK government's RESIST counter-disinformation unit worked with the Bad News team to deploy Go Viral! (the COVID-19 inoculation game developed on the same framework) across social media platforms. This deployment model is faster but produces shorter-form exposure and smaller effect sizes.

Platform integration. YouTube has worked with the Bad News team on prebunking video content, and there has been ongoing discussion of how game-based inoculation could be integrated into social media platform experiences. Platform integration would dramatically expand reach but requires cooperation from platforms that have commercial incentives that may not align with inoculation goals.

The scaling challenge remains the most significant practical constraint on the Bad News approach. As van der Linden has written, "An inoculation program that reaches 10% of a population leaves 90% unprotected. In a networked information environment, unprotected nodes are entry points for the whole network."
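Van der Linden's point about unprotected nodes can be illustrated with a toy spread simulation: on a sparse random contact network, inoculation coverage well below the percolation threshold leaves most of the network reachable from a single seed. The network model and every parameter below are illustrative assumptions, not drawn from the case study.

```python
import random

def reached_fraction(n=2000, avg_degree=4, inoculated_share=0.1, seed=42):
    """Fraction of a random contact network reachable from one
    'disinformation' seed when a share of nodes is inoculated."""
    rng = random.Random(seed)
    # Erdos-Renyi-style graph with expected degree avg_degree.
    p = avg_degree / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    immune = set(rng.sample(range(n), int(n * inoculated_share)))
    # Seed the spread at a well-connected unprotected node.
    start = max((v for v in range(n) if v not in immune),
                key=lambda v: len(adj[v]))
    # BFS: spread follows edges but stops at inoculated nodes.
    seen, frontier = {start}, [start]
    while frontier:
        nxt = []
        for v in frontier:
            for w in adj[v]:
                if w not in seen and w not in immune:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return len(seen) / n

print(reached_fraction(inoculated_share=0.1))  # low coverage: most nodes reached
print(reached_fraction(inoculated_share=0.8))  # high coverage: outbreak dies out
```

With average degree 4, the spread stays self-sustaining until roughly 75% of nodes are inoculated, which is why 10% coverage protects almost no one beyond the players themselves.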


Critical Reception and Methodological Concerns

Bad News has attracted enthusiastic coverage from journalists and policymakers and a generally positive reception from researchers, but several methodological concerns have been raised.

Self-selection effects. People who voluntarily play Bad News are not a random sample of the population — they are people who sought out or accepted an invitation to play a game about disinformation. This self-selection may mean that the game primarily reaches people who are already relatively skeptical and media-literate, while the highest-need populations (deep in disinformation ecosystems, low in media literacy) do not engage. The cross-cultural study's finding of larger effects in low-media-literacy populations provides some reassurance, but it does not resolve the self-selection concern for naturalistic deployment.

Long-term effects. Most Bad News studies measure outcomes immediately post-game or within one week. The decay rate of inoculation effects, as discussed in Chapter 33's main text, raises questions about how much protection a single fifteen-minute game session provides over a period of months.

Ecological validity. Outcome measures in Bad News studies typically involve rating the reliability of social media posts in a controlled research context. Whether this translates to changed behavior in actual social media environments — reduced sharing of disinformation, different engagement patterns, changed voting behavior — has not been directly measured and is difficult to assess.

The "fun" trade-off. The game was deliberately designed to be engaging and enjoyable. Some researchers have asked whether the entertaining game experience produces attitude change that is qualitatively different from — perhaps shallower than — the effortful counterarguing that McGuire's theory identifies as the engine of resistance. The evidence to date does not resolve this question.


Lessons for Inoculation Design

Bad News provides several design lessons that extend beyond the specific game to inoculation intervention design generally.

Active generation is both the strongest mechanism and the hardest to scale. The game's effectiveness derives substantially from having players actively produce disinformation. But active production requires approximately fifteen minutes of engaged attention — a high bar for population-scale deployment. Designing for active generation at scale requires either accepting smaller deployment footprints or finding ways to compress the active generation experience without destroying the mechanism.

Cross-partisan framing is non-negotiable. The consistent finding of comparably sized effects across partisan groups depends critically on the game's cross-partisan design: it uses examples from multiple political directions, treats all political actors as potential disinformation producers, and frames inoculation as protection of one's own epistemic autonomy. Single-partisan inoculation designs would likely show the identity-protection failures described in Section 33.9.

Distribution partnerships are as important as design quality. The most carefully designed inoculation message is worthless if it doesn't reach people before disinformation exposure. Building distribution partnerships — with schools, platforms, governments — is a distinct challenge from building an effective intervention, and one that requires different skills and resources.

Evaluation should be ongoing, not one-time. The Bad News experience suggests that inoculation tools need continuous evaluation, iteration, and updating as disinformation techniques evolve. A game calibrated to 2018 disinformation techniques may be partially obsolete by 2024 as those techniques have evolved. Design processes that allow rapid iteration — short feedback loops between deployed interventions and updated designs — are more valuable than one-time rigorous evaluations.


Discussion Questions

  1. The Bad News game asks players to become disinformation producers. Is there an ethical risk in this approach — could it teach people to produce disinformation better, rather than to resist it? How would you evaluate this risk empirically?

  2. The self-selection concern suggests that Bad News may primarily reach the already-skeptical. Design a brief distribution strategy that would specifically target the highest-need populations — people deeply embedded in disinformation-heavy information ecosystems — while maintaining the game's engagement.

  3. Compare Bad News (15-minute game), Go Viral! (5-minute game), and YouTube prebunking videos (60-90 seconds). Based on the research evidence reviewed in this chapter and case study, what would the optimal "inoculation program" combining these formats look like? How would you allocate resources across formats?

  4. The scaling challenge raises the question of whether inoculation programs require government or platform involvement to reach meaningful population coverage. What are the risks and benefits of each partnership model (government-sponsored curricula, government-sponsored social media campaigns, platform-integrated inoculation)?