In This Chapter
- Overview
- Learning Objectives
- 33.1 Defining the Problem: Misinformation, Disinformation, Malinformation
- 33.2 Why False News Spreads: The Vosoughi et al. (2018) Findings
- 33.3 The Engagement-Optimization Engine as Misinformation Amplifier
- 33.4 The Liar's Dividend: When Fake Everything Makes Real Evidence Invisible
- 33.5 COVID-19 Misinformation: A Case Study in Infodemic
- 33.6 Health Misinformation and the Anti-Vaccination Movement
- 33.7 Financial Misinformation: Meme Stocks, Crypto, and Get-Rich-Quick Schemes
- 33.8 Platform Responses: Interventions and Their Limitations
- 33.9 News Deserts and the Local Journalism Collapse
- 33.10 Computational Propaganda: Bot Networks and Coordinated Inauthentic Behavior
- 33.11 The Path Forward: Structural vs. Symptomatic Responses
- Voices from the Field
- Maya's Perspective
- Velocity Media Sidebar: The Engagement-Truth Tradeoff
- Summary
- Discussion Questions
Chapter 33: Misinformation and Engagement Optimization: The Epistemic Crisis
Overview
On the morning of March 15, 2020, Maya's phone erupted with notifications. Her aunt had forwarded a video claiming that 5G towers caused COVID-19 by weakening the immune system. Her cousin had shared a post asserting that drinking bleach could cure infection. Her grandmother had been told by a Facebook group that the virus was a bioweapon engineered in a Chinese lab. Each piece of content arrived wrapped in urgency, from people she trusted, amplified by algorithms that had registered the spike in COVID-related engagement and doubled down accordingly.
Maya was experiencing what the World Health Organization would call an "infodemic" — an overabundance of information, some accurate and some not, that makes it hard to find trustworthy guidance when people need it most. But the infodemic did not arise spontaneously. It was, in significant part, a product of the same engagement-optimization systems this book has been examining — systems built to maximize time-on-platform, emotional arousal, and sharing behavior, without regard to whether the content driving that behavior was true.
This chapter examines the relationship between misinformation and algorithmic amplification. We begin with definitional clarity: what distinguishes misinformation, disinformation, and malinformation, and why the distinctions matter. We then examine the landmark research demonstrating that false information spreads faster and further than true information on social media, and we analyze why this is the case. The chapter explores how engagement-optimization systems function as structural amplifiers of misinformation, creating what researchers call the "epistemic crisis" — a degradation of shared factual foundations necessary for democratic governance, public health, and social cohesion.
We examine case studies of specific misinformation crises, from anti-vaccination movements to QAnon to COVID-19, and we assess the platform interventions designed to counter misinformation — fact-checking labels, reduced distribution, prebunking campaigns — with honest attention to what the evidence shows about their effectiveness. We close by examining the structural conditions that make misinformation particularly dangerous in an era of collapsing local journalism and rising computational propaganda.
Learning Objectives
After completing this chapter, students will be able to:
- Distinguish between misinformation, disinformation, and malinformation with precision, and explain why the distinctions have practical significance
- Describe the empirical findings of the Vosoughi et al. (2018) study and explain the mechanisms by which false news spreads faster than true news
- Explain how engagement-optimization algorithms structurally amplify misinformation, independent of intent
- Analyze specific misinformation crises — including QAnon and COVID-19 vaccine misinformation — through the lens of algorithmic amplification
- Evaluate the effectiveness of platform interventions against misinformation, including fact-checking labels, reduced distribution, and prebunking campaigns
- Articulate the relationship between news deserts, local journalism collapse, and increased vulnerability to misinformation
- Define computational propaganda and explain the role of bot networks in coordinated inauthentic behavior
33.1 Defining the Problem: Misinformation, Disinformation, Malinformation
Precision matters when analyzing information disorders. The terms "misinformation," "disinformation," and "fake news" are often used interchangeably in public discourse, but they describe meaningfully different phenomena with different causes, different actors, and different policy implications. Conflating them leads to analysis that is both intellectually imprecise and practically useless.
33.1.1 The Tripartite Framework
Researchers Claire Wardle and Hossein Derakhshan, in their landmark 2017 report for the Council of Europe, proposed a framework that has become foundational in the field. They identified three distinct categories of information disorder based on two axes: whether the content is false, and whether the creator intends to cause harm.
Misinformation is false information shared without intent to harm. The person sharing it believes it to be true. Maya's aunt, forwarding the 5G/COVID-19 video, was almost certainly sharing what she genuinely believed. She was spreading misinformation — false information, no malicious intent. Misinformation includes rumors, poorly understood science, medical myths, and urban legends. It is extraordinarily common, has existed throughout human history, and is not primarily a product of bad actors.
Disinformation is false information shared with the deliberate intent to deceive or cause harm. State-sponsored influence operations, political operatives deliberately spreading false narratives about opponents, and coordinated bot networks amplifying fabricated claims are all engaged in disinformation. The crucial distinction is intent: disinformation requires an actor who knows the information is false and spreads it anyway for strategic purposes. While disinformation garners significant media attention — particularly around elections — it represents a smaller portion of total false information in circulation than misinformation does.
Malinformation is true information used with the intent to harm. Publishing someone's genuine private communications to damage their reputation, releasing accurate personal information (doxxing) to enable harassment, or strategically deploying real information selectively to create a false overall impression all constitute malinformation. Malinformation is particularly difficult to counter because the content itself is factually accurate — the harm lies in its deployment.
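The framework's two axes can be expressed compactly. The sketch below is merely an illustrative encoding of Wardle and Derakhshan's categories, not code from any real moderation system:

```python
def classify(is_false: bool, intends_harm: bool) -> str:
    """Wardle & Derakhshan's two axes: is the content false,
    and does the actor sharing it intend harm?"""
    if is_false and not intends_harm:
        return "misinformation"    # e.g., Maya's aunt forwarding the 5G video
    if is_false and intends_harm:
        return "disinformation"    # e.g., a state-sponsored influence operation
    if not is_false and intends_harm:
        return "malinformation"    # e.g., doxxing with accurate information
    return "ordinary information"

print(classify(is_false=True, intends_harm=False))  # → misinformation
```

Note that the grid makes the policy problem visible: the same false claim is misinformation in one person's hands and disinformation in another's, which is why content-only detection cannot distinguish them.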
33.1.2 Why the Distinctions Matter
The tripartite framework is not merely academic categorization. It has direct implications for platform policy, legal frameworks, and counter-intervention design.
If most false information online is misinformation rather than disinformation, then counter-interventions focused primarily on detecting and removing malicious foreign actors will address only a fraction of the problem. Research consistently suggests that most misinformation is spread by ordinary people who believe it — not by coordinated campaigns, though coordinated campaigns do exist and can amplify content significantly.
The framework also matters for free speech considerations. Disinformation, spread by an actor who knows it is false, has a weaker claim to First Amendment protection under U.S. law than speech by someone who genuinely believes what they are saying. Policy responses that treat all false information as disinformation risk inappropriately curtailing sincere expression.
Finally, the framework helps explain why "fake news" is an analytically useless term. Its casual deployment conflates intentional fabrication with honest error, applies the same label to commercial clickbait as to state-sponsored influence operations, and has been so thoroughly weaponized politically — particularly by figures who apply it to any coverage they find unfavorable — that it has lost descriptive utility. We will not use the term "fake news" in this chapter except when discussing its political deployment.
33.1.3 The Scale of the Problem
The scale of information disorder on social media platforms is staggering. A 2019 report by Samantha Bradshaw and Philip Howard at the Oxford Internet Institute found that organized social media manipulation campaigns, including state-sponsored disinformation operations, had been active in at least 70 countries. A 2020 Reuters Institute Digital News Report found that social media was the primary news source for a significant portion of users in every country surveyed. The First Draft research coalition has documented misinformation crises in contexts ranging from elections in France, Germany, and the United States to public health emergencies across multiple continents.
Understanding misinformation requires understanding not just the content itself but the infrastructure through which it spreads — and that infrastructure, as this chapter will argue, is systematically biased toward amplifying false content over true content.
33.2 Why False News Spreads: The Vosoughi et al. (2018) Findings
In 2018, a research team at MIT's Media Lab published what became one of the most cited papers in the history of misinformation research. Soroush Vosoughi, Deb Roy, and Sinan Aral analyzed every fact-checked story that had been verified by six independent fact-checking organizations and shared on Twitter between 2006 and 2017 — approximately 126,000 stories in total, shared by roughly three million people more than four and a half million times. Their findings, published in Science, overturned several comfortable assumptions about how false information spreads.
33.2.1 Core Findings
The paper's central finding was unambiguous: false news spreads faster, further, deeper, and more broadly than true news. More specifically:
- False news reached 1,500 people approximately six times faster than true news
- True news rarely diffused to more than 1,000 people, while the top 1 percent of false news cascades routinely reached between 1,000 and 100,000 people
- Falsehoods were 70 percent more likely to be retweeted than true stories
- Political falsehoods were the most viral of all categories studied, reaching 20,000 people nearly three times faster than all other categories of false news reached 10,000
These findings were robust over time and across news categories, and — most significantly — they held even when accounting for the activity of bots. Vosoughi and colleagues ran their analysis with and without bot activity and found that bots amplified true and false news at roughly equal rates. The differential spread of false news was driven primarily by humans, not automated systems.
33.2.2 The Novelty Hypothesis
Why does false news spread faster? Vosoughi and colleagues proposed and tested several hypotheses. The most strongly supported was what they called the "novelty hypothesis": false news is more novel than true news. The stories people found surprising, unexpected, and new were more likely to be shared — and false stories were more novel than true ones.
This finding has significant implications. Novelty is intrinsically engaging. Human cognitive architecture is wired to pay attention to the unexpected, because unexpected events require updating our model of the world. A true story that is genuinely surprising is engaging, but truth is constrained by what actually happened — and what actually happened is often mundane, incremental, and consistent with prior expectations. False news is constrained by no such limitation. Fabricators can craft stories that are maximally surprising, maximally emotionally provocative, and maximally consistent with pre-existing anxieties or wishes.
33.2.3 The Emotional Dimension
The study also found that false news generated a distinct emotional response profile. Replies to true stories disproportionately expressed anticipation, sadness, joy, and trust; replies to false stories disproportionately expressed surprise, fear, and disgust. This emotional profile matters: surprise and disgust are both associated with high-arousal states that drive sharing behavior. Research in affective psychology has consistently found that high-arousal emotions — whether positive or negative — are more likely to motivate action than low-arousal emotions like sadness or contentment.
The engagement systems of social media platforms are, as previous chapters have documented, designed to maximize engagement signals including shares, likes, and comments. A content-agnostic system optimizing for these signals will, all else being equal, systematically favor content that generates high-arousal emotional responses. Because false news is disproportionately structured to produce such responses, the engagement-optimization engine functions as a structural amplifier of false information — not through any deliberate intent to spread falsehood, but through the indifferent pursuit of engagement.
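This structural amplification can be made concrete with a toy simulation. Everything below is invented for illustration (the arousal scores, the distributions, the 10 percent false base rate); the point is only that a ranker which optimizes engagement while ignoring truth will over-represent arousal-skewed false content in the feed:

```python
import random

def predicted_engagement(item):
    # Toy engagement model: the predicted chance of a share rises with
    # emotional arousal. The model never looks at truthfulness.
    return item["arousal"]

def build_feed(items, slots):
    # Rank purely by predicted engagement and fill the top feed slots.
    return sorted(items, key=predicted_engagement, reverse=True)[:slots]

random.seed(0)
# Hypothetical corpus: 90% true items, 10% false items. False items
# skew toward higher arousal, mirroring the surprise/disgust profile
# Vosoughi et al. observed; true items skew lower.
corpus = (
    [{"label": "true", "arousal": random.uniform(0.0, 0.7)} for _ in range(900)]
    + [{"label": "false", "arousal": random.uniform(0.4, 1.0)} for _ in range(100)]
)

feed = build_feed(corpus, slots=50)
false_share = sum(item["label"] == "false" for item in feed) / len(feed)
base_rate = 0.10
print(f"false content: {base_rate:.0%} of corpus, {false_share:.0%} of feed")
```

Under these assumptions the feed contains false content at a rate far above its share of the corpus, even though no line of the ranker mentions truth at all.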
33.2.4 Replication and Extension
The Vosoughi findings have since been replicated and extended across multiple platforms and languages. A 2019 study by Hunt Allcott and colleagues examined misinformation on Facebook. Research by Andreas Jungherr and colleagues examined similar dynamics on German Twitter during the 2017 federal election. Studies in the context of the COVID-19 pandemic found consistent results: health misinformation spread faster and wider than accurate public health information, even as platforms deployed unprecedented resources to counter it.
The consistency of findings across platforms, countries, and information domains suggests that the differential spread of false information is not a platform-specific artifact but a structural feature of social media engagement dynamics.
33.3 The Engagement-Optimization Engine as Misinformation Amplifier
The previous section established that false news spreads faster than true news due to its novelty and emotional arousal. This section examines how the algorithmic systems of major social media platforms interact with those characteristics to amplify the problem systematically.
33.3.1 Emotional Content vs. Accurate Content in Optimization
The fundamental optimization target of major social media platforms — engagement — is not merely indifferent to truth; in practice it is biased against it. When an algorithm is trained to maximize the likelihood that a user will interact with content (like, share, comment, or spend time viewing), it learns to predict what generates those interactions. Because false news disproportionately generates high-engagement emotional responses, a content-agnostic engagement optimizer will, over time, learn to surface false news.
This is not a design decision in the sense of an engineer choosing to amplify falsehoods. It is an emergent property of optimizing for engagement on a corpus of content where emotional arousal and falsity are correlated. The algorithm is doing exactly what it was designed to do — and what it was designed to do turns out to be structurally incompatible with epistemic health.
Researchers and former platform insiders have attempted to quantify this dynamic. Guillaume Chaslot, a former YouTube engineer who spent years studying the platform's recommendation algorithm before founding AlgoTransparency, estimated that YouTube's recommendations were significantly more likely to surface videos containing misinformation than videos from established news organizations. His methodology has been contested, but the directional finding has been supported by multiple independent analyses.
Internal research at Facebook, leaked to the Wall Street Journal in 2021 as part of the "Facebook Papers," showed that the company's own researchers had identified similar dynamics. A 2019 internal report acknowledged that Facebook's algorithm amplified "misinformation, toxicity, and violent content" and that engagement-based ranking was responsible. The report was not acted upon in ways proportional to its findings.
33.3.2 Recommendation Algorithms and the Radicalization Pipeline
The recommendation algorithm — the system that suggests what content to watch, read, or follow next — plays a particularly significant role in misinformation dynamics. Where the feed algorithm determines what users see in their primary content stream, the recommendation algorithm actively suggests new content, potentially introducing users to sources and topics they had not previously sought out.
Research has documented what journalists and researchers have called the "rabbit hole" phenomenon: users who begin with mainstream content are gradually recommended increasingly extreme or conspiratorial content. A 2019 study by Ribeiro and colleagues at EPFL analyzed YouTube recommendation data and found pathways from mainstream political content to far-right and white nationalist channels. A 2020 study by researchers at Universidade Federal de Minas Gerais found similar patterns in Brazilian political content.
The mechanism is not mysterious: extreme and conspiratorial content tends to generate high engagement (precisely because it is novel and emotionally arousing), so it attracts watch time and the recommendation algorithm learns to surface it. Users who demonstrate interest in politically charged content are served increasingly charged content because engagement signals suggest they respond to it.
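A deliberately simple sketch of this feedback loop, under the purely hypothetical assumption that the recommender has learned that content slightly more extreme than a user's current position earns slightly more watch time:

```python
def next_recommendation(interest, step=0.1):
    # Toy recommender on a 0-to-1 "extremity" scale: it has learned
    # that content one step more extreme than the user's current
    # position earns slightly more watch time, so each suggestion
    # nudges one step outward (capped at the scale's maximum).
    return min(1.0, interest + step)

def simulate_sessions(start, sessions):
    # Each session the user watches the recommendation, and the
    # engagement signal updates the interest estimate to match it.
    interest, path = start, [start]
    for _ in range(sessions):
        interest = next_recommendation(interest)
        path.append(round(interest, 2))
    return path

path = simulate_sessions(start=0.2, sessions=8)
print(path)  # → [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
```

The drift requires no malice anywhere in the system: the user follows recommendations, the recommender follows engagement, and the composition of the two is a monotonic slide toward the extreme end of the catalog.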
33.3.3 Filter Bubbles and Epistemic Closure
The recommendation dynamics described above interact with what Eli Pariser famously called "filter bubbles" — the tendency of personalization algorithms to surround users with information consistent with their existing beliefs and preferences, reducing exposure to challenging or contradictory information. Whether filter bubbles constitute a primary driver of polarization is debated (some researchers argue their effects are modest compared to offline social sorting), but they represent a genuine structural feature of algorithmic content delivery.
In the context of health misinformation, filter bubble dynamics have specific documented consequences. A 2020 study by David Broniatowski and colleagues at George Washington University found that vaccine-hesitant users on Twitter existed in recommender-reinforced clusters where they were disproportionately exposed to vaccine-skeptical content. Users who began following one vaccine-skeptical account were algorithmically recommended additional vaccine-skeptical accounts, creating echo chambers that deepened hesitancy rather than exposing users to accurate health information.
The study of recommendation effects on vaccine hesitancy is particularly instructive because the outcomes are measurable. Vaccination rates, disease incidence, and hospitalization data provide concrete measures of what misinformation-amplified hesitancy costs. A 2021 study in The Lancet estimated that the "infodemic" accompanying COVID-19 contributed significantly to reduced vaccination uptake in high-hesitancy communities, with measurable effects on hospitalizations and deaths.
33.4 The Liar's Dividend: When Fake Everything Makes Real Evidence Invisible
A sophisticated and underappreciated consequence of the misinformation ecosystem is what legal scholars Robert Chesney and Danielle Citron called the "liar's dividend" in a 2019 paper: once the existence of convincing AI-generated fake content becomes widely known, it becomes easier for bad actors to dismiss genuine evidence as fabricated.
33.4.1 Deepfakes and Plausible Deniability
The development of deepfake technology — AI systems capable of generating highly convincing video of people saying or doing things they never said or did — has created an asymmetric epistemic situation. A bad actor no longer needs to successfully fabricate convincing evidence of their innocence; they need only raise enough doubt that genuine video evidence can be dismissed as "probably a deepfake." The burden of proof shifts from fabrication to authentication, and authentication is both technically difficult and inaccessible to ordinary users.
This dynamic played out in several documented cases. During the 2019 Gabon political crisis, a video of President Ali Bongo — who had been absent from public view due to a stroke — was initially dismissed by opposition figures as a deepfake, though subsequent analysis suggested it was genuine. The mere existence of deepfake technology created sufficient doubt to serve political purposes.
33.4.2 The Epistemic Pollution Effect
More broadly, the prevalence of fabricated content creates what researchers call "epistemic pollution" — an environment in which users apply generalized skepticism to all information. While healthy skepticism is epistemically virtuous, generalized distrust is not; it makes it impossible to distinguish trustworthy from untrustworthy information, essentially leveling the epistemic playing field between credible journalism and fabricated content.
When Maya sees a genuine investigative news story, she cannot easily distinguish it from a sophisticated piece of disinformation. The tools to do so — source evaluation, corroboration checking, understanding of editorial standards — require time, expertise, and access to multiple information sources. For most users, most of the time, these tools are not practically available.
33.5 COVID-19 Misinformation: A Case Study in Infodemic
The COVID-19 pandemic produced the most extensively documented instance of a large-scale misinformation crisis in the social media era. The World Health Organization coined the term "infodemic" to describe the overabundance of information — accurate and inaccurate — that accompanied the pandemic, and the term has since been adopted across public health and information research communities.
33.5.1 The Scale of COVID-19 Misinformation
Early research quantifying COVID-19 misinformation found it circulating at extraordinary scale and velocity. A March 2020 New York Times analysis by Frenkel, Alba, and Zhong documented at least 40 significant COVID-19 misinformation narratives that had collectively accumulated hundreds of millions of shares, views, and engagements within the first two months of the pandemic. A 2020 study published in BMJ Global Health found that over a quarter of the most-viewed COVID-19 videos on YouTube contained misleading information.
The misinformation was heterogeneous in content and origin. Some was commercially motivated — supplement sellers and alternative medicine practitioners promoting unproven cures. Some was politically motivated — narratives about the virus's origins designed to serve geopolitical agendas. Some was ideologically motivated — anti-vaccination activists connecting COVID-19 vaccines to existing conspiratorial frameworks. And a significant portion was simply well-intentioned error, shared by people who believed they were helping their communities.
33.5.2 Specific Misinformation Narratives
The COVID-19 infodemic included several particularly impactful false narratives:
Vaccine misinformation included claims that mRNA vaccines would alter DNA (false; mRNA does not enter the nucleus), that vaccines contained tracking microchips (false), that vaccines caused infertility (unsupported by evidence), and that vaccine trials had skipped essential safety testing (false). These narratives circulated primarily through anti-vaccination networks that had been building on social media platforms for years before the pandemic, and they were amplified by recommendation algorithms that had already learned to surface vaccine-skeptical content to users who engaged with similar material.
Treatment misinformation included early enthusiasm for hydroxychloroquine (which clinical trials subsequently showed to be ineffective against COVID-19), misinformation about ivermectin (an antiparasitic effective against certain infections but not COVID-19), claims that drinking bleach or exposing the body to UV light could cure infection, and numerous claims about herbal remedies and supplements. Some of these misinformation narratives had real-world consequences: poison control centers documented increases in calls related to bleach and disinfectant consumption following a presidential press conference in which their use was raised.
Origin misinformation included both the "lab leak" hypothesis — which ranged from reasonable scientific uncertainty to elaborate conspiracy theory depending on its specific framing — and claims about 5G networks, which had no evidentiary basis. The challenge with origin misinformation is that genuine scientific uncertainty about COVID-19's origins created space for more extreme and unsupported narratives.
33.5.3 Platform Interventions and Their Effects
Major platforms deployed a range of interventions in response to COVID-19 misinformation, representing the most extensive set of platform-side counter-misinformation efforts to date.
Facebook implemented information labels directing users to authoritative health information, applied reduced distribution to content flagged as potentially false by fact-checkers, and created a dedicated COVID-19 Information Center. Twitter labeled tweets containing COVID-19 misinformation and applied escalating enforcement up to account suspension. YouTube removed videos containing COVID-19 misinformation that violated its medical misinformation policies and promoted authoritative health channels.
The effects of these interventions have been studied extensively, with mixed results. A 2022 study in PNAS found that platform interventions slowed but did not stop the spread of COVID-19 misinformation. A study by Loomba and colleagues in Nature Human Behaviour found that exposure to common vaccine misinformation reduced willingness to be vaccinated by between 6.2 and 6.4 percentage points — a finding with significant public health implications — but that corrections reduced this effect somewhat. The general finding is that interventions help at the margins but do not fundamentally reverse the structural advantage false information enjoys under engagement-optimized algorithms.
33.6 Health Misinformation and the Anti-Vaccination Movement
The COVID-19 pandemic did not create anti-vaccination sentiment; it accelerated and amplified a movement that had been building on social media platforms for over a decade. Understanding the anti-vaccination movement on social media provides important context for understanding how misinformation communities form, persist, and grow within algorithmic ecosystems.
33.6.1 The Pre-Existing Infrastructure
Before COVID-19, anti-vaccination content had established substantial infrastructure on all major platforms. Research by the Center for Countering Digital Hate found that new Facebook users who joined groups related to pregnancy or natural parenting were reliably recommended anti-vaccination groups within weeks. The pathway from new parent to anti-vaccination community was algorithmically mediated: the algorithm learned that anti-vaccination content generated high engagement and that users interested in parenting-related health topics were likely to engage with it.
33.6.2 The "Anti-Vaxx to Alt-Right" Pipeline
Several researchers documented pathways between vaccine-skeptical communities and broader conspiratorial and far-right communities on social media platforms. Vaccine skepticism, particularly in its more conspiratorial forms (vaccines as instruments of population control, vaccine mandates as government overreach), connects to broader anti-establishment narratives that overlap with other forms of political extremism. Recommendation algorithms, serving content based on engagement patterns, can move users along these pathways without any awareness on the user's part.
33.6.3 Measurable Health Effects
The consequences of vaccine misinformation are not abstract. The World Health Organization listed "vaccine hesitancy" as one of the top ten threats to global health in 2019, before COVID-19. Between 2017 and 2019, measles cases increased dramatically in several countries that had previously achieved elimination status, with healthcare authorities attributing the resurgence in part to declining vaccination rates driven by misinformation.
Research by the Center for Countering Digital Hate estimated in 2021 that roughly 65 percent of anti-vaccine content shared on Facebook and Twitter originated from just twelve accounts — the so-called "Disinformation Dozen" — whose content was then amplified by recommendation and sharing dynamics across multiple platforms. This concentration of origin is important: it suggests that targeted action against a small number of highly connected actors could significantly reduce misinformation volume, though platforms were slow to act on this research.
33.7 Financial Misinformation: Meme Stocks, Crypto, and Get-Rich-Quick Schemes
The misinformation landscape extends well beyond politics and health. Financial misinformation — false or misleading claims about investment opportunities — has caused significant economic harm and represents a growing challenge for platforms whose engagement systems interact with financial incentives in particular ways.
33.7.1 Meme Stocks and the GameStop Episode
In January 2021, a coordinated movement of retail investors on the Reddit forum r/WallStreetBets drove the stock price of GameStop (GME) from approximately $20 to nearly $500 per share within weeks. The WallStreetBets movement combined legitimate analysis of short-squeeze dynamics with significant misinformation about GameStop's business fundamentals, motivational content that sometimes crossed into manipulation, and coordinated behavior that regulators subsequently examined for potential market manipulation. The recommendation dynamics of Reddit and TikTok played a role in spreading information and misinformation about GameStop investment, attracting retail investors who lacked the knowledge to evaluate claims critically.
33.7.2 Cryptocurrency Pump-and-Dump Schemes
Cryptocurrency markets, being less regulated than traditional securities markets, have proven particularly vulnerable to coordinated pump-and-dump schemes in which promoters accumulate holdings in low-value tokens, generate social media buzz through misleading claims, and then sell as retail investors buy in. Research published in the Journal of Financial Economics documented hundreds of pump-and-dump schemes in cryptocurrency markets between 2018 and 2022. Celebrity endorsements of cryptocurrency projects — some of which were subsequently revealed to be paid promotions without adequate disclosure — reached tens of millions of followers.
33.7.3 Platform Responsibility and Regulatory Response
Financial misinformation occupies a complicated regulatory space. Securities fraud law provides some tools against the most explicit forms of manipulation, and the SEC and FTC have taken enforcement actions against some influencers for undisclosed paid promotions. But platform algorithms that amplify financial misinformation because it generates engagement are not directly liable under existing law, and platforms have been slow to implement effective policies against financial misinformation.
33.8 Platform Responses: Interventions and Their Limitations
Platforms have deployed a range of interventions designed to reduce misinformation on their systems. Understanding what these interventions do, what they don't do, and what the evidence shows about their effectiveness is essential for informed analysis.
33.8.1 Fact-Checking Partnerships and Labels
The most visible platform intervention is the content label: a notice attached to specific pieces of content flagging them as potentially false, containing misinformation, or disputed by fact-checkers. Facebook has partnered with a network of independent fact-checking organizations since 2016. Twitter introduced Birdwatch, a crowd-sourced fact-checking system, in 2021 (renamed Community Notes after the platform's 2022 acquisition). YouTube applies labels to content on specific high-priority topics, directing viewers to authoritative sources.
Research on the effectiveness of fact-checking labels is mixed. A 2020 study by Clayton and colleagues found that warning labels reduced the perceived accuracy of labeled content. But a separate 2020 study by Pennycook and colleagues documented an "implied truth effect": attaching warnings to some false headlines increased the perceived accuracy of false headlines that were not labeled. Users infer that unlabeled content has been evaluated and found acceptable, a potentially dangerous heuristic.
A 2021 study by Pennycook and colleagues found that accuracy nudges — simply prompting users to consider whether content was accurate before sharing — reduced the sharing of false headlines without reducing the sharing of accurate ones. This finding has spurred a substantial follow-on literature on accuracy prompts as a lightweight, scalable intervention.
33.8.2 Reduced Distribution
Beyond labeling, platforms have experimented with "demotion" — reducing the algorithmic amplification of content identified as potentially false without removing it. Demotion has the advantage of avoiding accusations of censorship (the content remains accessible) while reducing the structural amplification that makes misinformation particularly dangerous. But it raises its own challenges: what content gets demoted, based on whose judgment, through what process, with what appeals mechanism?
33.8.3 The Prebunking Approach: Inoculation Theory
One of the more promising research directions in counter-misinformation is prebunking, or inoculation. Drawing on William McGuire's inoculation theory from persuasion research (later extended by Michael Pfau), prebunking exposes people to weakened forms of misinformation techniques before they encounter real misinformation, thereby building resistance.
The theoretical mechanism is analogous to vaccination: exposure to a weakened or attenuated form of a threat produces protective resistance. Applied to misinformation, inoculation involves teaching people about the specific rhetorical techniques used in misinformation — emotional manipulation, false experts, logical fallacies, conspiracy reasoning — so they can recognize them when they encounter them.
Research by Sander van der Linden and colleagues has produced encouraging results. A 2022 study in partnership with Google found that short prebunking videos explaining misinformation techniques reduced susceptibility to misinformation by between 5 and 10 percentage points in randomized controlled trials. A study using a prebunking video game ("Bad News") found sustained effects on misinformation resistance over weeks after play.
The prebunking approach is notable because it works with the grain of the engagement-optimization system rather than against it: prebunking content can be engaging, emotionally resonant, and shareable — precisely the properties that the engagement system rewards. Whether prebunking interventions can scale to be meaningfully effective at the population level remains an active research question.
33.9 News Deserts and the Local Journalism Collapse
The misinformation ecosystem does not develop in an information vacuum. The structural conditions of the broader information environment — particularly the collapse of local news — shape the vulnerability of communities to misinformation.
33.9.1 The Scale of Local News Collapse
The United States has lost more than a quarter of its newspapers since 2005, according to research by Penny Abernathy at Northwestern University's Medill School of Journalism: approximately 2,500 local newspapers closed between 2005 and 2022, leaving vast swaths of the country — Abernathy calls them "news deserts" — without any local news coverage. Similar patterns have been documented in the United Kingdom, Canada, Australia, and across Europe, driven by the migration of advertising revenue from local print media to digital platforms that do not support local journalism in any comparable way.
33.9.2 News Deserts and Misinformation Vulnerability
The connection between news deserts and misinformation vulnerability is both theoretically intuitive and empirically supported. Local journalism performs a monitoring function — covering local government, local courts, local schools — that no other institution replicates. When local journalism disappears, the information vacuum it leaves is filled by alternatives, and social media platforms are the most convenient alternative available.
Research by Joshua Darr and colleagues found that communities in news deserts were more likely to rely on national political media, contributing to nationalization of political discourse and reduced engagement with local issues. Research by Danny Hayes and Jennifer Lawless found that the collapse of local political journalism reduced civic knowledge and civic participation.
In the context of misinformation, communities without trusted local information sources are more vulnerable to misinformation precisely because they lack the institutional reference point that local journalism provides. When Maya needs to know whether a rumor about her local school board is true, she can no longer call the local newspaper. She consults Facebook groups and Nextdoor, where the dynamics are those of social media engagement optimization, not journalism.
33.9.3 The Platform Responsibility Question
Virtually every researcher in the field notes the irony: platforms contributed to the collapse of local journalism by absorbing the advertising revenue that previously sustained it, and then became the primary information source for the communities left without it. Platforms have made modest investments in local journalism, but these investments are widely viewed as inadequate relative to the scale of the problem their advertising models created.
33.10 Computational Propaganda: Bot Networks and Coordinated Inauthentic Behavior
Beyond organic misinformation dynamics, the information environment is actively shaped by coordinated campaigns of artificial amplification. Understanding computational propaganda — the use of automated and semi-automated accounts to manipulate political discourse — is essential for a complete picture of the misinformation landscape.
33.10.1 Bot Networks and Amplification
Social media bot networks — accounts partially or fully controlled by automated software — can rapidly amplify specific content, creating the appearance of widespread organic support for particular narratives or candidates. Research by the Oxford Internet Institute's Computational Propaganda Project has documented bot networks active in elections and political conflicts across more than 70 countries.
Bots serve several functions in computational propaganda campaigns. Amplifier bots interact with target content at scale, boosting its apparent popularity and improving its standing in algorithmic rankings. Astroturfing bots post original content designed to create the impression of grassroots sentiment. Harassment bots target critics of specific narratives or political figures, effectively chilling speech through intimidation.
33.10.2 Coordinated Inauthentic Behavior
Beyond automated bots, platforms have documented what Facebook calls "coordinated inauthentic behavior" (CIB): networks of accounts operated by real people behaving in coordinated ways to artificially amplify specific narratives. CIB campaigns can be more sophisticated than bot campaigns because the accounts appear more human. Facebook's Threat Intelligence team has published regular reports documenting CIB campaigns linked to government actors in Russia, China, Iran, and other countries, as well as domestic political operatives in multiple countries. The Internet Research Agency, a Russian organization with documented links to Russian intelligence, ran one of the most extensively studied CIB campaigns in the run-up to the 2016 U.S. presidential election.
33.10.3 The Detection and Attribution Challenge
Detecting coordinated inauthentic behavior is substantially easier for platforms with access to behavioral metadata than for outside researchers. Platforms can observe login patterns, device fingerprints, posting times, and network structures that are not visible to external analysts. This creates a significant information asymmetry: platform threat intelligence teams have far better detection capabilities than academic researchers or civil society organizations — but their reporting is partial and self-curated.
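The kind of coordination signal a platform can observe is straightforward to illustrate. The sketch below is a toy example, not any platform's actual detection pipeline; the account names, the five-second window, and the 0.8 threshold are all hypothetical assumptions chosen for illustration. It flags pairs of accounts whose posting timestamps are suspiciously synchronized:

```python
# Toy coordination detector (illustrative only): flag account pairs whose
# posting timestamps are tightly synchronized. Real systems combine many
# more signals (device fingerprints, login patterns, network structure).
from itertools import combinations

def synchrony(posts_a, posts_b, window=5):
    """Fraction of A's posts that land within `window` seconds of a post by B."""
    if not posts_a:
        return 0.0
    hits = sum(1 for t in posts_a if any(abs(t - u) <= window for u in posts_b))
    return hits / len(posts_a)

def flag_coordinated(accounts, threshold=0.8, window=5):
    """Return account pairs whose mutual synchrony exceeds the threshold."""
    flagged = []
    for a, b in combinations(accounts, 2):
        s = min(synchrony(accounts[a], accounts[b], window),
                synchrony(accounts[b], accounts[a], window))
        if s >= threshold:
            flagged.append((a, b))
    return flagged

# Hypothetical posting times (in seconds): two accounts mirror each other
# within seconds of every post; a third posts on an organic schedule.
accounts = {
    "acct_1": [100, 200, 300, 400],
    "acct_2": [101, 202, 299, 401],
    "acct_3": [57, 310, 733],
}
print(flag_coordinated(accounts))  # → [('acct_1', 'acct_2')]
```

The asymmetry in the instructions' opening sentence is the point: outside researchers rarely see even timestamps at this resolution across a full network, which is why external estimates of coordinated activity tend to undercount what platform teams can detect internally.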
33.11 The Path Forward: Structural vs. Symptomatic Responses
The misinformation crisis on social media is not primarily a problem of bad actors posting false content that needs to be found and removed. It is a structural problem: the engagement-optimization systems of major platforms are structurally biased toward amplifying content that is emotionally arousing and novel, and false content is disproportionately emotionally arousing and novel. Addressing the misinformation crisis requires engaging with this structural reality.
33.11.1 Structural Interventions
Several researchers have proposed structural interventions that address the root cause rather than symptoms. These include:
Changing the optimization target: If platforms optimized not for raw engagement but for engagement correlated with user satisfaction (research suggests these diverge significantly), or for engagement with accurate content, the structural amplification bias would be reduced. This requires platforms to accept that some content that generates high engagement should be surfaced less, which conflicts with short-term advertising revenue incentives.
Mandatory algorithmic auditing: Requiring platforms to submit their recommendation algorithms to independent auditors who could assess misinformation amplification patterns would create accountability that self-reporting does not. The European Union's Digital Services Act, which came into force in 2023, includes provisions for algorithmic auditing of very large online platforms — the most significant regulatory intervention of this type to date.
Friction and slowdowns: Research suggests that adding friction to the sharing process — requiring users to read content before sharing it, for example, or adding a delay to the sharing interface — reduces misinformation sharing. Twitter ran a pilot in 2020 prompting users to read articles before retweeting them and found it increased read rates by 40 percent.
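The first of these proposals, changing the optimization target, can be made concrete with a toy re-ranking function. Everything here is an illustrative assumption (the field names, the 0.6 weight, the two-item feed); real ranking models are vastly more complex, but the structural point survives the simplification: blending a quality or accuracy signal into the score demotes high-engagement, low-accuracy content.

```python
# Hypothetical feed re-ranker: score items on a blend of predicted
# engagement and a quality/accuracy signal, rather than engagement alone.
# Field names and the weight are illustrative, not any platform's model.
def rank_feed(items, quality_weight=0.6):
    """Sort items by (1 - w) * predicted engagement + w * quality."""
    def score(item):
        return ((1 - quality_weight) * item["p_engage"]
                + quality_weight * item["quality"])
    return sorted(items, key=score, reverse=True)

feed = [
    {"id": "outrage_rumor",   "p_engage": 0.9, "quality": 0.1},
    {"id": "local_reporting", "p_engage": 0.4, "quality": 0.9},
]
# Ranking on engagement alone would put the rumor first; the blended
# score surfaces the higher-quality item instead.
print([item["id"] for item in rank_feed(feed)])
# → ['local_reporting', 'outrage_rumor']
```

The design tension described in the surrounding text shows up directly in the weight parameter: every increment of `quality_weight` above zero trades some short-term engagement (and thus advertising revenue) for reduced amplification of low-quality content.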
33.11.2 The Epistemic Commons
Ultimately, the misinformation crisis is a crisis of what some researchers call the "epistemic commons" — the shared informational foundations necessary for democratic self-governance. When citizens cannot agree on basic empirical facts, when institutions of shared knowledge are systematically discredited, and when information environments are shaped by systems optimizing for engagement rather than accuracy, the conditions for democratic governance are degraded.
Restoring epistemic commons requires interventions across multiple levels: structural changes to how platforms operate, investments in journalism and media literacy education, regulatory frameworks that create accountability, and individual practices of media literacy and source evaluation. No single intervention is sufficient. The misinformation crisis requires a systemic response to what is, at root, a systemic problem.
Voices from the Field
"The thing that most people don't understand about misinformation on social media is that it's not mainly about bad actors deliberately spreading lies. Most of it is ordinary people sharing things they believe are true, amplified by systems that have no interest in truth — systems that only care whether you clicked."
— Dr. Kate Starbird, Associate Professor, University of Washington, and founding member of the Center for an Informed Public
"We talk about the 'infodemic' as if it happened spontaneously alongside the pandemic, like a natural disaster. But the infodemic had infrastructure. It had networks that had been building for years, recommendation algorithms that had been learning for years what kinds of health content generated engagement. COVID-19 didn't create the infodemic. It revealed what was already there."
— Dr. Renée DiResta, Research Manager, Stanford Internet Observatory
Maya's Perspective
The 5G video her aunt forwarded looked convincing. It had a doctor speaking authoritatively, overlaid graphics connecting 5G towers to immune suppression, and a YouTube view count in the hundreds of thousands. Maya's first instinct was to wonder whether there was something to it — after all, so many people had watched it.
Then she noticed the YouTube sidebar. Alongside the 5G video were recommendations for videos about the "plandemic," about alternative cancer treatments, about the globalist agenda. The recommendation engine had categorized her aunt into a cluster of health skeptics and was serving her accordingly.
Maya tried to find a fact-check for the 5G video. She found three — from the AP, Reuters, and the BBC — all labeling it false. She forwarded the fact-checks to her aunt. Her aunt replied: "You can't trust the mainstream media. This is all connected."
Maya felt the frustration that researchers describe in studies of correction effects: providing accurate information to someone who has received misinformation sometimes reinforces rather than corrects the misbelief, because the correction is interpreted as evidence of the conspiracy rather than refutation of it.
Velocity Media Sidebar: The Engagement-Truth Tradeoff
At Velocity Media's Q3 strategy meeting, Dr. Aisha Johnson presented data showing that health-related content with emotional language outperformed fact-checked health content by an average of 340% on standard engagement metrics. The implication was uncomfortable: Velocity's algorithm was learning to surface content that was more likely to be misleading.
"We can add friction," Dr. Johnson proposed. "A 'Read Before You Share' prompt. A one-second delay before the share dialog opens. Research shows these reduce impulsive sharing significantly."
Marcus Webb pushed back. "That's a friction tax on every piece of content shared on this platform, the vast majority of which is not misinformation. You're degrading the user experience for accurate content to solve a problem concentrated in a small percentage of posts."
"The friction applies to everything," Dr. Johnson acknowledged. "But the content that suffers most from friction is content people share impulsively — and impulsive sharing is correlated with misinformation. The correlation is uncomfortable but it's real."
The meeting concluded without a decision. The friction intervention remained in the research proposal queue.
Summary
This chapter has examined the relationship between misinformation and engagement-optimization systems on social media platforms. We began by establishing definitional clarity — distinguishing misinformation, disinformation, and malinformation — and then examined the landmark Vosoughi et al. (2018) research demonstrating that false news spreads faster and further than true news on social media, driven primarily by human sharing behavior rather than automated bots, and attributable to the greater novelty and emotional arousal of false content.
We analyzed how engagement-optimization algorithms structurally amplify misinformation, not through deliberate intent but through the optimization of engagement signals correlated with emotional arousal and novelty. We examined specific misinformation crises — COVID-19 misinformation, the anti-vaccination movement, financial misinformation — and assessed platform interventions including fact-checking labels, reduced distribution, and prebunking, finding that each provides marginal benefit without addressing the structural problem.
The chapter also examined structural conditions that amplify misinformation vulnerability, including news deserts created by the collapse of local journalism, and the dynamics of computational propaganda through bot networks and coordinated inauthentic behavior. We concluded by arguing that addressing the misinformation crisis requires structural interventions — changes to optimization targets, mandatory algorithmic auditing, and friction-based behavioral design — rather than purely symptomatic responses.
Discussion Questions
1. The Vosoughi et al. (2018) study found that human sharing behavior, not bots, was the primary driver of false news spread. What are the implications of this finding for policy responses that focus primarily on detecting and removing bot networks?
2. The "liar's dividend" describes how the existence of deepfakes makes it easier to dismiss genuine evidence as fabricated. How might this dynamic be addressed? What would adequate responses look like at the platform level, the legal level, and the individual media literacy level?
3. COVID-19 produced the most extensive set of platform counter-misinformation interventions to date. Evaluate the evidence on their effectiveness. What does this evidence suggest about the limits of platform self-regulation in addressing misinformation?
4. The collapse of local journalism has left communities relying on social media for local information. Who bears responsibility for this situation — platforms, advertisers, consumers, regulators, or some combination? What, if anything, should be done about it?
5. The prebunking/inoculation approach to misinformation shows promise in experimental settings. What would be required to scale prebunking to meaningful population-level effects? What are the barriers to deployment, and who might overcome them?
6. Engagement-optimization algorithms amplify misinformation as an emergent property of optimizing for engagement, not through deliberate intent. Does the absence of deliberate intent affect your moral evaluation of platform responsibility? Should intent matter in determining platform liability for misinformation amplification?
7. The "Disinformation Dozen" research found that approximately 65% of vaccine misinformation originated from just 12 accounts. What does this concentration suggest about the relative effectiveness of targeted enforcement versus systemic algorithmic changes as counter-misinformation strategies?