Further Reading: Misinformation, Disinformation, and Platform Governance
The sources below provide deeper engagement with the themes introduced in Chapter 31. They are organized by topic and include a mix of foundational research, policy analysis, accessible popular works, and regulatory texts. Annotations describe what each source covers and why it is relevant to the chapter's core questions.
The Science of Misinformation Spread
Vosoughi, Soroush, Deb Roy, and Sinan Aral. "The Spread of True and False News Online." Science 359, no. 6380 (2018): 1146-1151. The most comprehensive empirical study of how false information spreads online, analyzing 126,000 stories tweeted by approximately 3 million people. The finding that false news spreads faster, farther, and more broadly than true news — driven by human sharing behavior, not bots — is foundational to Chapter 31's analysis of the structural disadvantage truth faces in engagement-optimized systems. Essential reading for understanding why misinformation is a systems problem, not merely a human judgment problem.
Wardle, Claire, and Hossein Derakhshan. "Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making." Council of Europe Report DGI(2017)09, 2017. The report that established the misinformation/disinformation/malinformation framework used throughout this chapter. Wardle and Derakhshan's three-part typology (agent, message, interpreter) provides the conceptual vocabulary for analyzing information disorder with precision rather than treating all "fake news" as a single undifferentiated problem. Freely available online and remarkably readable for a policy report.
Berger, Jonah, and Katherine L. Milkman. "What Makes Online Content Viral?" Journal of Marketing Research 49, no. 2 (2012): 192-205. The study that demonstrated the role of emotional arousal in sharing behavior. Content that triggers high-arousal emotions (anger, excitement, anxiety) is significantly more likely to be shared than content triggering low-arousal emotions (sadness, contentment). This finding underpins the chapter's analysis of why misinformation — often crafted to maximize emotional arousal — has a structural advantage in social media environments.
Lewandowsky, Stephan, et al. "Misinformation and Its Correction: Continued Influence and Successful Debiasing." Psychological Science in the Public Interest 13, no. 3 (2012): 106-131. A comprehensive review of the "continued influence effect": the finding that misinformation continues to influence reasoning even after it has been credibly corrected. This paper is essential for understanding why fact-checking, while valuable, cannot simply undo the damage of false claims. It also reviews strategies for making corrections more effective.
Platform Governance and Content Moderation
Kosseff, Jeff. The Twenty-Six Words That Created the Internet. Ithaca, NY: Cornell University Press, 2019. A deeply researched history of Section 230, tracing its origins, its interpretation by courts, and its role in shaping the modern internet. Kosseff provides essential context for understanding why Section 230 was enacted, what it was intended to do, and how its application has evolved far beyond its original scope. Accessible to non-lawyers and indispensable for anyone seeking to understand US platform governance.
Douek, Evelyn. "Governing Online Speech: From 'Posts-as-Trumps' to Proportionality and Probability." Columbia Law Review 121, no. 3 (2021): 759-834. A groundbreaking legal analysis arguing that content moderation should be understood not as a series of individual editorial decisions but as a system of governance operating at massive scale under structural constraints. Douek's content moderation trilemma (fast, accurate, scalable — pick two) provides the analytical framework used in Section 31.3.3 and is essential for moving beyond simplistic critiques of platform moderation failures.
Roberts, Sarah T. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press, 2019. An ethnographic study of the human workers who perform content moderation — reviewing disturbing images, violent videos, and hateful text for hours each day, often for low wages and with minimal psychological support. Roberts makes visible the human labor that sustains platform content moderation, connecting the chapter's discussion of content moderation practices to the labor concerns examined in Chapter 33.
Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press, 2018. A systematic analysis of how platforms make content moderation decisions, the values embedded in those decisions, and the consequences for public discourse. Gillespie's central argument — that platforms are not neutral conduits but active curators of public discourse — provides essential context for the publisher/utility/platform debate examined in Section 31.6.
The EU Digital Services Act
European Parliament and Council of the European Union. Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act), October 19, 2022. The full text of the DSA. While lengthy and legal in nature, the regulation is clearly structured and readable by non-specialists. Key provisions to focus on include Article 14 (terms and conditions), Article 27 (recommender system transparency), Articles 34-35 (risk assessment and mitigation for very large online platforms), and Article 40 (data access for researchers). Freely available on EUR-Lex.
Husovec, Martin. "The DSA's Systemic Risk Assessment: What Platforms Must Do." Stanford Center for Internet and Society Working Paper, 2023. A practical analysis of what the DSA's systemic risk assessment requirement means in operational terms for platforms. Husovec examines what risks must be assessed, what methodologies are appropriate, and how regulators are likely to evaluate compliance. Useful for understanding the DSA not as an abstract legal framework but as a set of concrete operational requirements.
Interventions: Fact-Checking, Prebunking, and Media Literacy
Roozenbeek, Jon, and Sander van der Linden. "Fake News Game Confers Psychological Resistance Against Online Misinformation." Palgrave Communications 5, article 65 (2019). The study behind the Bad News browser game, which teaches players to identify misinformation by placing them in the role of a disinformation creator. The paper demonstrates that "inoculation" (exposure to weakened forms of manipulation techniques) builds measurable resistance to those techniques. A key source for the prebunking discussion in Section 31.5.2.
Roozenbeek, Jon, et al. "Psychological Inoculation Improves Resilience Against Misinformation on Social Media." Science Advances 8, no. 34 (2022). The paper reporting the results of Google's large-scale prebunking campaign on YouTube, which showed short videos explaining manipulation techniques to millions of users. The intervention increased users' ability to identify manipulated content by 5-10 percentage points and worked across political ideologies — a finding with significant implications for scalable, non-partisan misinformation interventions.
Pennycook, Gordon, et al. "The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings." Management Science 66, no. 11 (2020): 4944-4957. The paper that identified the implied truth effect: the unintended consequence whereby labeling some content as false makes unlabeled false content appear more credible. This finding is crucial for understanding the limitations of content labeling as an intervention and for designing labeling strategies that avoid this perverse effect.
Humprecht, Edda, et al. "Resilience to Online Disinformation: A Framework for Cross-National Comparative Research." The International Journal of Press/Politics 25, no. 3 (2020): 493-516. A comparative study of what makes some countries more resilient to disinformation than others. The paper identifies structural factors — media system characteristics, political polarization, trust in institutions — that shape national vulnerability. Particularly relevant for understanding why Finland's media literacy approach succeeds in its specific context and what conditions would be necessary for replication elsewhere.
Health Misinformation and the Infodemic
Loomba, Sahil, et al. "Measuring the Impact of COVID-19 Vaccine Misinformation on Vaccination Intent in the UK and USA." Nature Human Behaviour 5 (2021): 337-348. A rigorous experimental study demonstrating that exposure to COVID-19 vaccine misinformation decreased vaccination intent by approximately 6 percentage points. This paper provides the empirical foundation for the chapter's argument that health misinformation has measurable, consequential effects on public health behavior — and that these effects are large enough to influence the course of a pandemic.
World Health Organization. "Infodemic Management: An Overview of Infodemic Management During COVID-19, January 2020-May 2021." WHO Report, 2021. The WHO's comprehensive overview of the infodemic phenomenon, including the taxonomy of health misinformation types, the mechanisms of spread, and the interventions deployed. Provides global context that extends beyond the US and EU focus of much platform governance scholarship.
These readings are starting points for deeper investigation. As subsequent chapters examine digital inequality (Chapter 32), labor and automation (Chapter 33), and environmental data ethics (Chapter 34), the governance challenges introduced here — accountability, transparency, amplification, and the tension between individual rights and systemic harms — will recur in new contexts.