> "A lie can travel halfway around the world while the truth is still putting on its shoes."
Learning Objectives
- Distinguish between misinformation, disinformation, and malinformation with concrete examples
- Explain the mechanisms by which false information spreads online, including algorithmic amplification and emotional valence
- Compare and contrast US Section 230 and the EU Digital Services Act as regulatory approaches to platform content
- Evaluate the effectiveness and limitations of fact-checking, prebunking, and media literacy interventions
- Analyze the platform accountability debate: publishers, utilities, or something new
- Apply the Power Asymmetry and Accountability Gap frameworks to platform governance decisions
- Assess how health misinformation intersects with data-driven health technology and public trust
In This Chapter
- Chapter Overview
- 31.1 Defining the Problem: Misinformation, Disinformation, and Malinformation
- 31.2 How False Information Spreads: The Science of Virality
- 31.3 Platform Content Moderation: Approaches and Challenges
- 31.4 Regulatory Frameworks: Section 230 vs. the EU DSA
- 31.5 Interventions: Fact-Checking, Prebunking, and Media Literacy
- 31.6 Platform Accountability: Publisher, Utility, or Something Else?
- 31.7 VitraMed: Health Misinformation and the Trust Crisis
- 31.8 Chapter Summary
- What's Next
- Chapter 31 Exercises → exercises.md
- Chapter 31 Quiz → quiz.md
- Case Study: The Infodemic — COVID-19 Misinformation on Social Media → case-study-01.md
- Case Study: Section 230 vs. the EU DSA — Two Approaches to Platform Liability → case-study-02.md
Chapter 31: Misinformation, Disinformation, and Platform Governance
"A lie can travel halfway around the world while the truth is still putting on its shoes." — Attributed to Mark Twain (likely apocryphal — which is itself a lesson about misinformation)
Chapter Overview
In January 2021, as COVID-19 vaccines became available to the public, a short video appeared on social media claiming that mRNA vaccines altered human DNA. The video was slickly produced, cited real (but misinterpreted) studies, and was narrated by a person who appeared to be a physician. Within 72 hours, it had been viewed over 20 million times across platforms. Fact-checkers debunked it within hours of its posting — but the debunking reached a fraction of the audience that had already seen the original claim.
This is the information crisis of our time. Not that false information exists — it always has — but that digital platforms have created an information ecosystem in which false claims can spread faster, farther, and more persuasively than at any previous point in human history. And the systems that accelerate this spread — algorithmic recommendation, engagement optimization, targeted distribution — are the same data-driven systems we've been studying throughout this book.
This chapter examines how false information spreads, why platforms struggle to contain it, how different regulatory frameworks attempt to address the problem, and why the tension between platform accountability and free expression remains one of the most difficult governance challenges of the data age. It connects directly to our recurring themes: the Power Asymmetry between platforms and their billions of users, the Consent Fiction embedded in content moderation policies no one reads, and the Accountability Gap that persists when harmful content causes real-world damage but no entity bears clear responsibility.
In this chapter, you will learn to:
- Classify false information by intent and context
- Trace the mechanisms by which algorithmic systems amplify misleading content
- Compare regulatory approaches to platform governance across jurisdictions
- Evaluate interventions designed to combat misinformation
- Analyze the structural incentives that make platform accountability so difficult
31.1 Defining the Problem: Misinformation, Disinformation, and Malinformation
31.1.1 Three Distinct Concepts
Not all false or misleading information is the same. The distinctions matter — for understanding the problem, for designing interventions, and for making governance decisions. Scholars and policymakers generally distinguish three categories, following the framework articulated by Claire Wardle and Hossein Derakhshan (2017) for the Council of Europe:
Misinformation is false or inaccurate information shared without the intent to deceive. The person sharing it genuinely believes it to be true. Examples include:
- A grandparent sharing a health claim they saw on Facebook, believing it to be sound medical advice
- A journalist reporting preliminary study results that are later retracted
- A social media user sharing a satirical article they mistook for real news
Disinformation is false or misleading information created and spread deliberately to deceive, manipulate, or cause harm. The intent to mislead is what distinguishes it from misinformation. Examples include:
- State-sponsored influence campaigns designed to undermine election integrity
- Deliberately fabricated health claims designed to sell unproven treatments
- Corporate astroturfing campaigns designed to manufacture doubt about scientific consensus
Malinformation is genuine information shared with the intent to cause harm — typically by stripping it of context, revealing private information maliciously, or deploying true facts strategically to mislead. Examples include:
- Doxxing — revealing someone's private address or identity to facilitate harassment
- Leaking authentic but private information to damage a political opponent
- Sharing real statistics without context to create a misleading impression
"These distinctions matter more than people think," Sofia Reyes told the class during a guest presentation on platform governance. "When policymakers treat all 'bad information' as the same problem, they end up with policies that either do nothing or censor legitimate speech. A grandmother sharing bad health advice needs media literacy education. A state-sponsored troll farm needs a geopolitical response. Treating them identically helps no one."
31.1.2 The Information Disorder Spectrum
Wardle and Derakhshan's framework goes beyond simple categories to identify a spectrum of information disorder that considers:
- The agent — Who creates and distributes the content? Individuals, organized groups, states, automated bots?
- The message — What form does it take? Fabricated content, manipulated images, misleading headlines, imposter sites?
- The interpreter — How does the audience receive and reinterpret it? What pre-existing beliefs shape their response?
This three-part framework reveals why the problem is so difficult to address. An agent may create disinformation, but once it enters the information ecosystem, individual users may share it as misinformation — genuinely believing it to be true. A fact that begins as malinformation (true but decontextualized) may be retold with additional false details, transforming into misinformation. The categories are not fixed; they shift as content moves through networks.
Key Distinction: The difference between misinformation and disinformation is intent, not content. The same false claim can be misinformation in the hands of one person (who believes it) and disinformation in the hands of another (who knows it's false but spreads it strategically). This makes enforcement extraordinarily difficult — you cannot moderate intent at scale.
31.1.3 Why Now? The Structural Conditions
False information is not new. Propaganda, rumor, and hoaxes predate the internet by millennia. What is new is the infrastructure that amplifies it:
- Speed: Digital platforms enable instantaneous global distribution. A post can reach millions before any fact-checker encounters it.
- Scale: Social media platforms have billions of users. Facebook alone has over 3 billion monthly active users as of 2024.
- Targeting: Data-driven advertising infrastructure allows disinformation to be targeted to the audiences most susceptible to specific messages.
- Friction reduction: Sharing requires a single tap. There is no moment of reflection built into the interface — no equivalent of walking to a mailbox.
- Algorithmic amplification: Recommendation algorithms optimize for engagement, and false, outrageous, or emotionally charged content tends to generate more engagement than accurate, nuanced content.
- Economic incentives: Engagement-driven advertising models reward content that captures attention, regardless of its truth value.
These structural conditions mean that any analysis of misinformation must go beyond individual bad actors to examine the systems that accelerate the spread of false information. As Dr. Adeyemi put it, "You can fact-check a million claims and still not address the structural incentive to produce them."
31.2 How False Information Spreads: The Science of Virality
31.2.1 The MIT Study: Falsehood Flies
The most comprehensive empirical study of how false information spreads online was published by Vosoughi, Roy, and Aral in Science in 2018. The researchers analyzed roughly 126,000 stories, tweeted by about 3 million people more than 4.5 million times between 2006 and 2017. Every story had been verified as true or false by at least one of six major fact-checking organizations.
Their findings were striking:
- False news reached more people than true news. The top 1% of false news cascades diffused to between 1,000 and 100,000 people, while true stories rarely reached more than 1,000.
- False news spread faster. It took true stories approximately six times as long to reach 1,500 people as it took false stories.
- False news was more novel. False stories were significantly more novel than true stories, and people were more likely to share novel information.
- False news inspired different emotions. False stories inspired fear, disgust, and surprise. True stories inspired sadness, anticipation, and trust.
Critically, the researchers controlled for bot activity. The results held — and were actually stronger — when bots were removed from the analysis. Humans, not bots, were the primary drivers of false information spread.
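To make these measurements concrete, here is a minimal sketch (in Python, with invented data and a hypothetical record format, not the study's actual code) of how one might compute the time a cascade takes to reach 1,500 unique sharers from timestamped share records, the kind of metric behind the "six times as long" finding.

```python
from datetime import datetime, timedelta

def time_to_reach(shares, threshold=1500):
    """Return how long a cascade took to reach `threshold` unique sharers,
    or None if it never did. `shares` is a list of (user_id, timestamp) pairs."""
    ordered = sorted(shares, key=lambda s: s[1])
    seen = set()
    start = ordered[0][1]
    for user_id, ts in ordered:
        seen.add(user_id)
        if len(seen) >= threshold:
            return ts - start
    return None

# Toy comparison in the spirit of the Vosoughi et al. analysis:
# one cascade gains a new sharer every minute, the other every six minutes.
t0 = datetime(2017, 1, 1)
fast_cascade = [(f"user_{i}", t0 + timedelta(minutes=i)) for i in range(2000)]
slow_cascade = [(f"user_{i}", t0 + timedelta(minutes=6 * i)) for i in range(2000)]

print(time_to_reach(fast_cascade))  # reaches 1,500 sharers in about 25 hours
print(time_to_reach(slow_cascade))  # takes six times as long
```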
"The finding about novelty is the one that keeps me up at night," Dr. Adeyemi admitted. "Truth is often boring. It's incremental, qualified, hedged. Falsehood can be crafted to be shocking, novel, emotionally arousing. In a system that optimizes for engagement, truth starts at a structural disadvantage."
31.2.2 Emotional Valence and the Sharing Decision
Why do people share false information? The research points to several overlapping mechanisms:
Emotional arousal. Content that triggers high-arousal emotions — anger, outrage, fear, excitement — is more likely to be shared than content that triggers low-arousal emotions like sadness or contentment (Berger & Milkman, 2012). False information is often specifically crafted to maximize emotional arousal.
Identity affirmation. People share information that aligns with and affirms their existing beliefs and group identities (Osmundsen et al., 2021). Sharing a claim that supports your worldview signals group membership. The truth value of the claim is often secondary to its identity-affirming function.
Social currency. Sharing novel, surprising information confers social status. Being the first in your network to share a piece of "news" — even if false — generates attention and engagement. Platform metrics (likes, shares, comments) provide quantified feedback that reinforces this behavior.
Cognitive shortcuts. Daniel Kahneman's dual-process theory (Chapter 4) helps explain why false information succeeds. System 1 thinking — fast, intuitive, emotional — processes most social media content. The effort required for System 2 thinking — slow, analytical, critical — is rarely triggered by the rapid-scroll interface of a social media feed.
Connection to Chapter 4: The attention economy, examined in depth in Chapter 4, creates the structural conditions for misinformation to thrive. Engagement optimization rewards the same emotional triggers that make false information spread. The problem is not a bug in the system; it is a feature of a business model built on capturing and holding human attention.
31.2.3 Algorithmic Amplification: The Engine Room
Platforms do not simply host content; they actively shape its distribution through recommendation algorithms. These algorithms determine what appears in a user's feed, what is recommended next, and what trends are highlighted. Their design decisions have direct consequences for the spread of false information.
The recommendation pipeline. When a piece of content is posted, the platform's algorithm evaluates it along several dimensions: predicted engagement (will this generate clicks, likes, shares, comments?), relevance to the user (based on past behavior, interests, demographics), and recency. Content with high predicted engagement is promoted to more users, creating a feedback loop: high engagement leads to more distribution, which leads to more engagement.
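The logic of that pipeline fits in a few lines of code. The sketch below is a toy ranking function, with invented weights, field names, and scoring formula (no platform's actual algorithm); its point is that nothing in the objective asks whether a post is true.

```python
# Minimal sketch of an engagement-optimized ranking pass. All weights and
# fields are illustrative assumptions.

def predicted_engagement(post):
    """Score a post by how much interaction the system expects it to generate."""
    return (1.0 * post["predicted_likes"]
            + 3.0 * post["predicted_shares"]       # shares push content to new audiences
            + 2.0 * post["predicted_comments"])

def rank_feed(posts, user_interests):
    """Order candidate posts by predicted engagement, relevance, and recency."""
    def score(post):
        relevance = len(set(post["topics"]) & user_interests) / max(len(post["topics"]), 1)
        recency = 1.0 / (1.0 + post["hours_old"])
        return predicted_engagement(post) * (0.5 + relevance) * recency
    return sorted(posts, key=score, reverse=True)

posts = [
    {"predicted_likes": 50, "predicted_shares": 5, "predicted_comments": 10,
     "topics": ["health"], "hours_old": 2},
    {"predicted_likes": 20, "predicted_shares": 40, "predicted_comments": 30,
     "topics": ["health", "politics"], "hours_old": 1},
]
print(rank_feed(posts, user_interests={"health"}))

# The feedback loop: highly ranked posts get more impressions, which raises
# their observed engagement, which raises their predicted engagement on the
# next pass. Truth value never enters the calculation.
```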
The amplification problem. Research has repeatedly documented that this feedback loop amplifies false and extreme content:
- Internal Facebook research, disclosed by whistleblower Frances Haugen in 2021, showed that the platform's ranking algorithm weighted emoji reactions, including "angry," at five times the value of a "like," effectively amplifying divisive content.
- YouTube's recommendation algorithm was found to create "rabbit holes" — sequences of increasingly extreme recommended videos that led users from mainstream content to conspiracy theories (Ribeiro et al., 2020).
- Twitter's own research (Huszar et al., 2022) found that its algorithm amplified right-leaning political content more than left-leaning content in six out of seven countries studied — a finding that complicated narratives from all political perspectives.
Eli had been following the algorithmic amplification research closely. "This is the same structural logic as predictive policing," he observed. "A system optimized for one objective — engagement, in this case — produces harmful externalities that fall disproportionately on certain communities. My grandmother sees more health misinformation in her Facebook feed than my roommate does, because she's in demographic groups that engagement algorithms have learned to target."
31.2.4 The Role of Networks: Super-Spreaders and Closed Groups
Not all users contribute equally to the spread of false information. Research has identified several network-level dynamics:
Super-spreaders. A small number of accounts are responsible for a disproportionate share of misinformation distribution. One study found that just 12 accounts (the "Disinformation Dozen") were responsible for 65% of anti-vaccine misinformation on social media platforms (Center for Countering Digital Hate, 2021).
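The underlying measurement is simple. The sketch below computes what fraction of flagged posts come from the top N most active accounts, using hypothetical data shaped to echo the 65% figure.

```python
from collections import Counter

def share_concentration(posts, top_n=12):
    """Fraction of flagged posts attributable to the `top_n` most active accounts.
    `posts` is a list of (account_id, post_id) pairs for content already
    identified as misinformation."""
    counts = Counter(account for account, _ in posts)
    top_total = sum(count for _, count in counts.most_common(top_n))
    return top_total / len(posts)

# Hypothetical data: 12 accounts post heavily, a long tail posts once each.
sample = [(f"a{i:02d}", n) for i in range(1, 13) for n in range(54)] \
       + [(f"tail_{n}", n) for n in range(352)]
print(f"{share_concentration(sample):.2f}")  # ~0.65 of flagged posts from 12 accounts
```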
Closed groups and encrypted channels. Platforms like WhatsApp and Telegram facilitate the spread of misinformation in encrypted group chats where content moderation is technically impossible without breaking encryption. This creates a tension between privacy (encryption protects users from surveillance) and platform accountability (encrypted channels become vectors for unchecked misinformation).
Cross-platform migration. When content is removed from one platform, it often migrates to another. Users who are banned from mainstream platforms move to alternative platforms with minimal content moderation, then use those platforms to coordinate sharing back onto mainstream platforms. The information ecosystem is interconnected; no single platform can solve the problem alone.
31.3 Platform Content Moderation: Approaches and Challenges
31.3.1 The Scale of the Problem
Content moderation at platform scale is an unprecedented challenge. Consider the volume:
- Facebook users share approximately 1.3 million pieces of content per minute
- YouTube receives approximately 500 hours of video uploads per minute
- X (formerly Twitter) processes hundreds of millions of posts per day
No human workforce can review this volume. Platforms therefore rely on a combination of automated systems (machine learning classifiers) and human review, with automation handling the initial filtering and human reviewers handling appeals and edge cases.
Behind the Scenes: Content moderation is performed by a global workforce, often based in countries with lower labor costs — the Philippines, Kenya, India. These workers review disturbing content (violence, child exploitation, self-harm) for hours each day. Research has documented significant psychological harm, including PTSD, among content moderation workers (Roberts, 2019). The human cost of content moderation is itself an ethical issue, often invisible to platform users.
31.3.2 Types of Content Moderation
Platforms employ several approaches to content moderation, often in combination:
Pre-publication filtering. Automated systems scan content before it is posted, flagging or blocking content that matches known harmful material. This approach is most effective for clearly defined categories (e.g., known child sexual abuse material, which can be matched against databases) and least effective for novel misinformation, satire, or content whose harm depends on context.
Post-publication review. Content is posted but may be reviewed — by automated systems or human moderators — after publication. If found to violate platform policies, it may be removed, labeled, or "demoted" (reduced in algorithmic distribution). The time between publication and review creates a window during which harmful content spreads.
User reporting. Platforms rely on users to flag content they believe violates community standards. This crowd-sourced approach is scalable but susceptible to coordinated abuse (mass reporting of legitimate content) and uneven application (marginalized communities' reports are often ignored while their content is disproportionately flagged).
Labeling and context. Rather than removing content, platforms may add contextual labels — "this claim has been disputed by fact-checkers," "this post is from a state-affiliated media account." Research suggests that labels can reduce sharing by 10-25% (Clayton et al., 2020), but their effectiveness varies by audience and context.
Demotion. Platforms can reduce the algorithmic distribution of content without removing it. This approach avoids censorship concerns (the content remains accessible) but reduces its reach. Critics argue that demotion is non-transparent — users and creators may not know their content has been suppressed.
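In practice these approaches are combined into a single routing decision. The following sketch, with made-up thresholds and categories rather than any platform's real policy, shows how a classifier score and a hash match against known prohibited material might map to remove, label, demote, or allow.

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str        # "remove", "label", "demote", or "allow"
    reason: str
    reviewed_by_human: bool = False

def moderate(classifier_score: float, matches_known_hash: bool) -> ModerationDecision:
    """Route a post using a (hypothetical) misinformation classifier score in [0, 1]
    and a hash match against a database of known prohibited material."""
    if matches_known_hash:
        # Pre-publication filtering: known material is the easy case.
        return ModerationDecision("remove", "matched known prohibited content")
    if classifier_score > 0.95:
        return ModerationDecision("remove", "high-confidence policy violation")
    if classifier_score > 0.80:
        # Ambiguous cases get a label plus human review of the automated call.
        return ModerationDecision("label", "disputed claim; queued for fact-check",
                                  reviewed_by_human=True)
    if classifier_score > 0.60:
        return ModerationDecision("demote", "reduced distribution pending review")
    return ModerationDecision("allow", "no action")

print(moderate(classifier_score=0.83, matches_known_hash=False))
```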
31.3.3 The Content Moderation Trilemma
Legal scholar Evelyn Douek has argued that content moderation faces a trilemma: platforms cannot simultaneously be fast, accurate, and scalable. They must choose two at the expense of the third:
- Fast + Scalable = automated systems that remove content quickly at scale, but make many errors (false positives and false negatives)
- Fast + Accurate = expert human review that catches nuance, but cannot scale to billions of posts
- Scalable + Accurate = careful, context-sensitive review of all content, but at a pace too slow to prevent harmful content from going viral
This trilemma explains many of the frustrations users experience with content moderation. Every over-removal (a legitimate post flagged as misinformation) and every under-removal (harmful content left up for days) reflects a structural constraint, not just a policy failure.
"This is the Accountability Gap in action," Dr. Adeyemi observed. "Platforms make thousands of moderation decisions every minute, each of which affects someone's ability to speak. But there's no consistent standard, no transparent process, no meaningful appeal — and when they get it wrong, no accountability. We've essentially delegated the governance of public discourse to private companies operating under structural constraints that guarantee systematic errors."
31.4 Regulatory Frameworks: Section 230 vs. the EU DSA
31.4.1 Section 230 of the Communications Decency Act (United States)
Section 230 of the Communications Decency Act (1996) is the legal foundation of US internet governance. It contains two key provisions:
Section 230(c)(1): "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
This provision immunizes platforms from liability for content posted by their users. If a user posts defamatory content on Facebook, the defamed person can sue the user — but not Facebook.
Section 230(c)(2): Platforms may moderate content "in good faith" without losing their immunity. This provision was designed to encourage platforms to remove harmful content without fear that their moderation decisions would expose them to liability.
The practical effect of Section 230 is that US platforms have almost complete legal immunity for user-generated content. They can choose to moderate — or not moderate — without legal consequence. This has been both praised (as enabling the growth of the internet by protecting platforms from crushing litigation) and criticized (as creating a massive Accountability Gap by shielding platforms from responsibility for harms facilitated by their systems).
Reform proposals have come from across the political spectrum:
- Conservative critics argue that Section 230 allows platforms to censor conservative speech without accountability. They propose requiring platforms to be "neutral" or stripping immunity from platforms that moderate viewpoint-based content.
- Progressive critics argue that Section 230 shields platforms from accountability for amplifying hate speech, misinformation, and content that causes real-world harm. They propose conditioning immunity on reasonable content moderation practices.
- Structural reformers argue that the debate over Section 230 misses the point: the real problem is algorithmic amplification, not hosting. They propose distinguishing between hosting content (which should be protected) and amplifying content through algorithmic recommendation (which should not).
31.4.2 The EU Digital Services Act (2024)
The European Union's Digital Services Act (DSA), which became fully applicable in February 2024, represents a fundamentally different approach to platform governance. Rather than broad immunity, the DSA imposes graduated obligations based on platform size, with the most stringent requirements falling on "Very Large Online Platforms" (VLOPs) — those with more than 45 million monthly active users in the EU.
Key provisions include:
Transparency reporting. VLOPs must publish detailed transparency reports on content moderation decisions, including the number of items removed, the reasons for removal, the use of automated systems, and the outcomes of appeals.
Systemic risk assessments. VLOPs must conduct annual assessments of systemic risks arising from their services, including risks related to the dissemination of illegal content, the impact on fundamental rights, and the impact on civic discourse and electoral processes. They must implement risk mitigation measures and submit to independent audits.
Algorithmic transparency. VLOPs must provide the European Commission with access to data necessary to monitor compliance, including data about their recommendation algorithms. Users must be offered the option to receive recommendations that are not based on profiling.
Crisis response. The DSA includes provisions for crisis response mechanisms, allowing the European Commission to require VLOPs to take specific action during public security crises.
Trusted flaggers. The DSA establishes a system of "trusted flaggers" — organizations with particular expertise (e.g., anti-hate speech organizations, consumer protection bodies) whose reports of illegal content must be prioritized by platforms.
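To suggest what "detailed transparency reporting" might look like in practice, here is an illustrative data structure for an aggregate moderation report. The field names are assumptions for exposition; the DSA prescribes reporting obligations, not a data schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationTransparencyReport:
    platform: str
    period: str                                  # e.g. "2024-H1"
    items_removed: int
    items_demoted: int
    items_labeled: int
    removals_by_automated_systems: int
    removal_reasons: dict = field(default_factory=dict)   # reason -> count
    appeals_received: int = 0
    appeals_upheld: int = 0                      # moderation decision reversed on appeal

    def appeal_reversal_rate(self) -> float:
        return self.appeals_upheld / self.appeals_received if self.appeals_received else 0.0

report = ModerationTransparencyReport(
    platform="ExamplePlatform", period="2024-H1",
    items_removed=120_000, items_demoted=450_000, items_labeled=900_000,
    removals_by_automated_systems=95_000,
    removal_reasons={"illegal content": 30_000, "policy violation": 90_000},
    appeals_received=8_000, appeals_upheld=1_200,
)
print(f"Appeal reversal rate: {report.appeal_reversal_rate():.1%}")  # 15.0%
```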
Callout Box: Two Models of Platform Governance
| Dimension | Section 230 (US) | Digital Services Act (EU) |
|---|---|---|
| Default posture | Immunity | Graduated obligation |
| Liability for user content | Generally no | Conditional ("notice and action") |
| Transparency requirements | Voluntary | Mandatory (detailed reporting) |
| Algorithmic accountability | None | Risk assessments, audits, user choice |
| Enforcement | Private litigation (largely foreclosed) | Regulatory oversight (national coordinators + Commission) |
| Philosophy | Market self-regulation | Risk-based regulation |
| First Amendment considerations | Central constraint | Not applicable (EU framework) |
31.4.3 Beyond US and EU: Global Approaches
Other jurisdictions have adopted their own approaches:
- Australia's Online Safety Act (2021) empowers an eSafety Commissioner to issue removal notices for harmful content and to require platforms to report on their safety systems.
- Brazil's Marco Civil da Internet establishes a "notice and takedown" regime, but courts have struggled with enforcement against global platforms.
- India's IT Rules (2021) require platforms to appoint grievance officers, enable tracing of message originators (conflicting with encryption), and comply with government takedown orders within tight timelines — raising concerns about government censorship.
- Singapore's POFMA (Protection from Online Falsehoods and Manipulation Act, 2019) gives the government broad power to require "corrections" on content it deems false, with criminal penalties for non-compliance — drawing criticism from press freedom organizations.
Each approach reflects different cultural values, political systems, and threat assessments. There is no emerging global consensus on platform governance.
31.5 Interventions: Fact-Checking, Prebunking, and Media Literacy
31.5.1 Fact-Checking: The Reactive Approach
Fact-checking organizations — PolitiFact, Snopes, Full Fact, Africa Check, and dozens of others — verify claims and publish corrections. Several platforms have partnered with fact-checkers through programs like Meta's Third-Party Fact-Checking Program.
What the evidence shows:
- Fact-check labels reduce the likelihood of sharing labeled content by approximately 10-25% (Clayton et al., 2020)
- Corrections are more effective when they come from sources the audience considers credible (Walter et al., 2020)
- The "continued influence effect" means that even after correction, initial misinformation continues to influence people's reasoning (Lewandowsky et al., 2012)
- Fact-checking can produce an "implied truth effect" — unlabeled content is perceived as more credible simply because other content has been labeled as false (Pennycook et al., 2020)
Structural limitations:
- Fact-checking is inherently reactive — it can only address claims that have already spread
- The scale mismatch is enormous: fact-checking organizations employ hundreds of people; platforms generate billions of pieces of content
- Fact-checking organizations may be perceived as politically biased, reducing their effectiveness with some audiences
- The business model is fragile: fact-checking is expensive and does not generate revenue proportional to its social value
31.5.2 Prebunking: The Proactive Approach
"Prebunking" — also called "inoculation theory" — takes a different approach. Rather than debunking specific false claims after they spread, prebunking exposes people to weakened forms of misinformation techniques before they encounter real misinformation, building resistance.
The concept draws on medical vaccination: a weakened dose of a pathogen stimulates the immune system so it can fight the real pathogen later. Similarly, a weakened dose of a manipulation technique — combined with an explanation of how the technique works — helps people recognize and resist the technique when they encounter it in the wild.
The evidence is promising:
- Google's "prebunking" campaign, tested in collaboration with researchers at Cambridge and Bristol universities, showed short videos explaining manipulation techniques (emotional language, scapegoating, false dichotomies) to millions of users on YouTube. The intervention increased users' ability to identify manipulated content by 5-10 percentage points (Roozenbeek et al., 2022).
- The browser game Bad News (developed at Cambridge) puts players in the role of a disinformation creator, teaching them the techniques from the inside. Players who completed the game showed improved ability to identify misinformation across multiple studies (Roozenbeek & van der Linden, 2019).
- Prebunking appears to work across political ideologies, unlike fact-checking, which can be rejected as partisan.
Limitations:
- Effects may decay over time — "booster shots" may be needed
- Prebunking works best against techniques (emotional manipulation, false authority) rather than specific claims
- Scaling prebunking requires the cooperation of platforms — the same platforms whose business models benefit from engagement with emotionally manipulative content
31.5.3 Media Literacy: The Long-Term Approach
Media literacy education teaches people to critically evaluate information sources, identify manipulation techniques, understand how algorithms shape their information diet, and assess the credibility of claims.
Finland is frequently cited as a success story: the country integrated media literacy into its national curriculum in 2014, and Finnish citizens consistently score among the highest in Europe on media literacy assessments and among the lowest in susceptibility to misinformation (Open Society Institute, 2022).
However, media literacy programs face challenges:
- The sophistication gap. Disinformation operations are increasingly sophisticated, using AI-generated content, deep fakes, and personalized targeting. Media literacy education must keep pace with adversarial innovation.
- The context gap. Media literacy programs developed for one cultural context may not transfer to another. Finnish media literacy education succeeds in a high-trust society with strong public broadcasting; it is unclear how well similar approaches would work in low-trust environments.
- The systemic gap. Media literacy places the burden of defense on individual users, leaving the structural incentives that produce misinformation unaddressed. As Eli observed, "Teaching people to swim harder doesn't fix the dam that's flooding the town."
Callout Box: Interventions Against Misinformation — Summary
| Intervention | Approach | Evidence | Limitations |
|---|---|---|---|
| Fact-checking | Reactive: debunk specific claims | Reduces sharing 10-25%; continued influence effect limits impact | Cannot match scale; perceived bias; reactive timing |
| Prebunking | Proactive: build resistance to manipulation techniques | 5-10pp improvement in recognition; works across ideologies | Effects may decay; requires platform cooperation |
| Media literacy | Long-term: build critical evaluation skills | Finnish model shows promise; cross-cultural evidence growing | Places burden on individuals; sophistication gap |
| Content labeling | Contextual: add warnings without removing content | Moderate reduction in sharing; implied truth effect risk | Non-transparent application; user habituation |
| Algorithmic adjustment | Structural: modify recommendation systems | Internal platform research shows significant impact | Conflicts with engagement-driven business model |
31.6 Platform Accountability: Publisher, Utility, or Something Else?
31.6.1 The Classification Debate
At the heart of the platform governance debate is a fundamental classification question: What are platforms?
The publisher model. Newspapers, magazines, and broadcast networks are publishers. They exercise editorial judgment over what they publish, and they are legally liable for the content they distribute. If platforms are publishers — exercising editorial judgment through algorithms that curate, recommend, and amplify content — should they bear publisher-like liability?
The utility model. Telephone companies, power grids, and postal services are utilities. They provide infrastructure for communication without editorial control over the content that flows through them. If platforms are more like utilities — neutral infrastructure through which people communicate — should they be regulated as common carriers with obligations to serve all users without discrimination?
The platform model. Technology companies argue that they are neither publishers nor utilities but a new category: platforms. They provide tools for users to create and share content but do not exercise traditional editorial judgment. Their content moderation is a good-faith effort to maintain community standards, not editorial selection.
Each classification implies different governance obligations:
- If platforms are publishers, they should be liable for harmful content they distribute, including algorithmically amplified misinformation.
- If platforms are utilities, they should be required to serve all users without discrimination, potentially limiting their ability to moderate content.
- If platforms are a new category, new regulatory frameworks are needed — which is precisely what the EU DSA attempts to provide.
Mira found herself uncomfortable with all three options. "VitraMed is a health technology platform," she said during the seminar. "If someone shares false health information on a VitraMed community forum, are we a publisher? A utility? A platform? The answer changes what we're responsible for. And it matters — because if someone follows bad health advice they found on our platform and gets hurt, someone needs to be accountable."
31.6.2 The Amplification Distinction
An emerging consensus among scholars suggests that the publisher/utility/platform debate may be asking the wrong question. The key distinction is not between hosting and publishing but between hosting and amplifying.
When a platform simply hosts user content — making it available to anyone who seeks it out — the case for immunity is strong. The platform is functioning as infrastructure.
When a platform algorithmically amplifies content — pushing it into users' feeds, recommending it, trending it — the platform is making an active editorial choice about what to promote. The argument for treating amplification differently from hosting has gained traction among legal scholars, policymakers, and even some platform executives.
"Think about it this way," Sofia Reyes explained to the class. "If I stand on a street corner and shout conspiracy theories, the city isn't responsible. But if the city builds a loudspeaker system that amplifies my voice to a million people because it detected that my conspiracy theory generates more 'engagement' — the city bears some responsibility for the amplification, even if not for the original speech."
31.6.3 Structural Accountability Proposals
Beyond the publisher/utility debate, several structural accountability proposals have gained traction:
Algorithmic accountability. Require platforms to disclose how their recommendation algorithms work, conduct impact assessments, and allow independent audits. The EU DSA's systemic risk assessment requirements represent a first step.
Data access for researchers. Enable independent researchers to study platform algorithms and their effects on information quality, mental health, and democratic discourse. Currently, platforms control access to the data necessary for this research, creating a knowledge asymmetry that impedes effective governance.
Interoperability requirements. Require platforms to be interoperable — allowing users to move between platforms, taking their data and social connections with them. This would reduce the lock-in that gives platforms their current market power and enable competition on the basis of content moderation quality.
Business model reform. The most fundamental proposals target the advertising-driven business model itself. If platforms were funded through subscriptions, public funding, or data dividends rather than engagement-driven advertising, the incentive to amplify emotionally manipulative content would diminish. This is a structural reform, not a content moderation fix.
31.7 VitraMed: Health Misinformation and the Trust Crisis
31.7.1 When Misinformation Meets Health Technology
The VitraMed data breach (Chapter 30) created a crisis of trust. But the misinformation that spread about the breach compounded the crisis exponentially.
Within 48 hours of the breach becoming public, the following claims circulated on social media:
- That VitraMed had deliberately sold patient data to insurance companies (false — the breach was the result of a misconfigured cloud storage bucket)
- That VitraMed's predictive health models were being used to deny insurance coverage (false — but the claim resonated because similar practices by other companies had been documented)
- That VitraMed's algorithms discriminated against patients of color (partially true — the models were less accurate for underserved populations, a problem VitraMed was aware of but had not yet disclosed publicly)
The third claim was the most damaging precisely because it contained a kernel of truth. The misinformation ecosystem had taken a real ethical concern (model accuracy disparities) and amplified it into a narrative of deliberate discrimination — which made it harder for VitraMed to acknowledge the real problem without appearing to confirm the false narrative.
Mira watched the crisis unfold with a sickening feeling of recognition. "This is the information disorder framework in action," she told Eli. "The original claim about the breach was true. The claims about data selling were disinformation — someone fabricated them. The claim about discrimination was malinformation — it took a real problem and stripped it of context. And now my father's company can't address the real bias issue without it being interpreted through the lens of the false narratives. The truth is trapped."
31.7.2 Health Misinformation as a Special Case
Health misinformation deserves special attention because its harms are direct and measurable. The World Health Organization declared an "infodemic" alongside the COVID-19 pandemic, recognizing that false health information was undermining vaccination campaigns, promoting unproven treatments, and eroding trust in public health institutions.
The VitraMed case illustrates a broader pattern: data-driven health technologies are uniquely vulnerable to misinformation because they operate in a domain where:
- Trust is essential. Patients must trust health technology systems enough to share sensitive data. Misinformation that erodes this trust undermines the data collection that makes the technology work.
- Complexity creates vulnerability. Health AI systems are complex, and their complexity creates opportunities for misrepresentation. A claim that "VitraMed's algorithm is biased" is much easier to understand (and share) than the nuanced reality of differential model performance across population subgroups.
- Stakes are high. Health misinformation can directly cause physical harm — people who believe false claims about treatments may forgo effective medical care.
- The Power Asymmetry is extreme. Health technology companies have enormous informational advantages over the patients whose data they process. When trust breaks down, patients have few tools to evaluate competing claims about what a company's algorithms actually do.
Reflection: Consider a health technology you or a family member uses (a fitness tracker, a telehealth platform, a health app). What misinformation about this technology have you encountered? How did you evaluate its credibility? What information would you need from the technology provider to make an informed judgment?
31.8 Chapter Summary
Key Concepts
- Misinformation is false information shared without intent to deceive; disinformation is false information created deliberately to mislead; malinformation is true information deployed to cause harm. The distinctions matter for governance.
- False information spreads faster, farther, and more broadly than true information, driven by novelty, emotional arousal, and algorithmic amplification (Vosoughi, Roy, & Aral, 2018).
- Platforms moderate content through automated systems and human review but face a structural trilemma: they cannot simultaneously be fast, accurate, and scalable.
- Section 230 provides broad immunity to US platforms; the EU Digital Services Act imposes graduated obligations including transparency reporting, systemic risk assessments, and algorithmic accountability.
- Interventions include reactive fact-checking, proactive prebunking (inoculation theory), and long-term media literacy education — each with demonstrated effectiveness and significant limitations.
- The amplification distinction — differentiating between hosting content and algorithmically promoting it — offers a potential way forward in the publisher/utility/platform debate.
Key Debates
- Should platforms be legally liable for content they algorithmically amplify, even if they are not liable for content they merely host?
- Can meaningful content moderation coexist with encryption and privacy protections, or must one give way?
- Does the US First Amendment framework, which limits government regulation of speech, adequately address the governance challenges of algorithmic amplification by private companies?
- Who bears responsibility for health misinformation: the creators, the platforms that distribute it, the algorithms that amplify it, or the individuals who share it?
Applied Framework
The Misinformation Response Framework:
1. Classify — Is this misinformation (innocent sharing), disinformation (deliberate deception), or malinformation (weaponized truth)?
2. Trace — How is it spreading? Through organic sharing, algorithmic amplification, coordinated networks, or cross-platform migration?
3. Assess impact — What are the potential harms? How immediate are they? Who is most vulnerable?
4. Choose interventions — Match interventions to the specific problem: fact-checking for specific claims, prebunking for recurring techniques, algorithmic adjustment for structural amplification, media literacy for long-term resilience.
5. Monitor accountability — Who is responsible for implementation, and how will effectiveness be measured?
What's Next
The information ecosystem is not the only domain where data systems create unequal outcomes. In Chapter 32: Digital Divide, Data Justice, and Equity, we examine the structural inequalities that shape who benefits from the data revolution and who bears its costs — from broadband access gaps to the extraction of data from marginalized communities. Eli's Detroit neighborhood will feature prominently as we explore how digital redlining compounds the surveillance harms we've been tracking throughout this book.