Case Study 35-2: EU vs. U.S. Approaches to Platform Disinformation Regulation

Chapter 35 — Law, Policy, and the Regulation of Propaganda


Overview

No comparison in contemporary platform policy illuminates the regulatory debate more clearly than the divergence between the European Union's Digital Services Act framework and the United States' Section 230 regime. These two approaches represent not merely different legal techniques but different foundational theories about the relationship between government, platforms, and democratic discourse. This case study examines the specific provisions of each framework, the concrete decisions they have produced, and what each approach reveals about the trade-offs inherent in any attempt to regulate the information environment.


The U.S. Framework: Section 230 and Its Origins

Section 230 of the Communications Decency Act was not designed with disinformation in mind. It was designed to solve a specific, practical legal problem that had emerged as the early internet was developing.

In 1991, a federal district court in New York held in Cubby v. CompuServe that an online service that exercised no editorial control over its discussion forums was not liable as a publisher for defamatory content posted by users. But in 1995, a New York state trial court held in Stratton Oakmont v. Prodigy that Prodigy, which had actively moderated its bulletin boards, was a "publisher" of all content on those boards and therefore could be held liable for defamatory user posts.

The logic of Stratton Oakmont created a perverse incentive: the more an online platform tried to improve the quality of its content through moderation, the more legal liability it assumed. The rational response was to moderate nothing, because moderation triggered publisher liability while passive hosting did not.

Congress enacted Section 230 to correct this perverse incentive. By immunizing platforms from liability for third-party content regardless of whether they moderated, Section 230 enabled platforms to moderate without fear of legal consequences. The legislative history is clear: the goal was not to create a liability shield for harmful content but to encourage responsible management of user-generated content by removing the legal risk that had made moderation irrational.

The core provision, the 26 words of Section 230(c)(1) enacted in 1996, has proven extraordinarily consequential. It enabled the growth of user-generated content platforms at a scale that would have been legally impossible under publisher liability models. It has also insulated those platforms from accountability for the harms their systems generate.


The EU Framework: DSA and Its Origins

The EU Digital Services Act has different intellectual roots. Its conceptual predecessors include the EU's e-Commerce Directive (2000), which established a hosting immunity similar to Section 230; the General Data Protection Regulation (GDPR, 2016), which established a comprehensive rights-based framework for data governance; and the EU's Code of Practice on Disinformation (2018), which established voluntary commitments from major platforms on disinformation-related practices.

The DSA, which entered into force in November 2022 and began applying to very large online platforms (VLOPs) in August 2023, represents the culmination of several years of policy development within the European Commission. It reflects a specific regulatory philosophy articulated by Commissioner Thierry Breton and in the Commission's communications: platforms are not neutral conduits but active participants in shaping the information environment, and their size and market power create public interest obligations that must be enforced.

The DSA is not primarily a disinformation law. It covers a wide range of platform obligations related to illegal goods, algorithmic transparency, advertising practices, and due process in content moderation. But its provisions on systemic risk — particularly as applied to disinformation and electoral integrity — have attracted the most attention in the context of this course.


Side-by-Side Comparison: Key Provisions

Liability for Third-Party Content

Section 230 (U.S.): Broad immunity from civil liability for third-party content, regardless of moderation activity, subject only to narrow statutory carve-outs such as federal criminal law and intellectual property claims.

DSA (EU): Maintains hosting immunity for platforms that lack "actual knowledge" of illegal content and act expeditiously to remove content when notified. However, the DSA's systemic risk and transparency obligations create accountability for the platform's overall content governance without directly removing the hosting immunity.

Key difference: Section 230 immunity is categorical; the DSA's immunity is conditional on compliance with notification-and-takedown obligations and the broader regulatory framework.

Algorithmic Transparency

Section 230 (U.S.): None. Section 230 imposes no transparency requirements and provides no leverage over algorithmic choices; a platform can algorithmically amplify any content it chooses without any disclosure or accountability obligation under the statute.

DSA (EU): VLOPs must explain their recommender systems to users. They must offer at least one algorithmic option not based on user profiling. The criteria underlying recommender systems must be disclosed to Digital Services Coordinators. Research access to algorithmic data must be provided to vetted researchers.

Key difference: This is perhaps the most significant divergence. The U.S. framework offers no handle on algorithmic amplification decisions; the EU framework treats the recommendation algorithm as a regulated activity requiring transparency and audit access.

Advertising and Political Content

Section 230 (U.S.): Section 230 does not regulate advertising. The FEC has jurisdiction over political advertising, but its regulatory framework for digital advertising is incomplete. Dark money political advertising falls largely outside mandatory disclosure requirements.

DSA (EU): VLOPs must maintain public advertising repositories disclosing each ad, who paid for it, the targeting parameters used, and the audiences reached. Political advertising provisions are addressed separately in companion EU legislation. The Code of Practice on Disinformation includes platform commitments to demonetize accounts spreading disinformation.

Key difference: The EU creates a comprehensive advertising transparency infrastructure; the U.S. has a partial and technically outdated disclosure framework with significant dark money gaps.

Systemic Risk Assessment

Section 230 (U.S.): No equivalent obligation exists.

DSA (EU): VLOPs must conduct annual systemic risk assessments covering risks to civic discourse, electoral processes, and democratic institutions. These assessments must be documented, made available to regulators, and inform mitigation measures. They are subject to independent audit.

Key difference: The systemic risk framework creates a proactive obligation for platforms to investigate their own harms — something with no equivalent in U.S. law.

Sanctions

Section 230 (U.S.): Section 230 does not impose obligations, so there is nothing to sanction. Its effect is immunizing, not regulatory.

DSA (EU): Fines of up to 6% of global annual turnover for violations; up to 1% for providing incorrect or incomplete information. Repeated serious violations can result in temporary restrictions on EU market access. The European Commission can take emergency measures in cases of systemic risk to public security or public health.

Key difference: The financial stakes are substantial. Six percent of Meta's global annual turnover would amount to several billion dollars, a figure large enough to create genuine compliance incentives.
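
As a rough check on that figure, the short sketch below (in Python, illustrative only) computes the two DSA fine ceilings for a hypothetical platform with roughly $135 billion in global annual turnover, approximately Meta's reported 2023 revenue; the turnover figure is an assumption for illustration, not part of the DSA itself.

    # Illustrative only: DSA fine ceilings for a hypothetical platform with
    # ~$135 billion in global annual turnover (roughly Meta's reported 2023 revenue).
    annual_turnover_usd = 135e9

    max_fine_violation = 0.06 * annual_turnover_usd       # up to 6% of turnover for DSA violations
    max_fine_incorrect_info = 0.01 * annual_turnover_usd  # up to 1% for incorrect or incomplete information

    print(f"6% ceiling: ${max_fine_violation / 1e9:.2f} billion")       # about $8.1 billion
    print(f"1% ceiling: ${max_fine_incorrect_info / 1e9:.2f} billion")  # about $1.35 billion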


The EU's First Major DSA Actions: The X Case

The most prominent early enforcement action under the DSA involved X (formerly Twitter) following Elon Musk's acquisition of the platform in 2022 and the subsequent mass reduction of trust and safety staff.

The European Commission opened formal proceedings against X in December 2023, identifying preliminary concerns about: (1) the risk assessment process and whether X had conducted a genuine, documented assessment of the systemic risks its platform poses; (2) the platform's advertising transparency repository, which was found to be incomplete and non-functional; and (3) the platform's measures to address risk from "information manipulation and foreign information manipulation and interference."

The X proceedings illustrate both the potential of the DSA framework and its challenges. The Commission has the legal authority and the financial leverage to compel compliance. But the investigation required months of information gathering, legal proceedings, and platform responses — a timeline that means the DSA operates on regulatory time, not platform time. Information operations that pose near-term electoral risks operate on a much faster cycle.

X responded to DSA scrutiny in part by threatening to leave the EU market — a threat that illustrates the market access leverage underlying the DSA framework but also its limits. No major platform has in fact withdrawn from the EU market in response to regulatory pressure, but the threat reveals that the ultimate enforceability of the DSA depends on the EU's value as a market being larger than the compliance burden.


The U.S. Response: The Difficulty of Reform

In the same period that the EU was developing and implementing the DSA, the United States Congress attempted multiple rounds of Section 230 reform, platform transparency legislation, and digital advertising regulation. None succeeded.

The failure of U.S. platform reform legislation reflects several structural factors. First, the political coalition problem: left and right agree that Section 230 should be reformed but want incompatible reforms. Conservative members primarily want to prevent perceived political censorship of conservative speech; progressive members primarily want more accountability for harmful content and algorithmic amplification. Building a majority for any specific reform has proven impossible.

Second, the First Amendment problem: many reform proposals face genuine constitutional obstacles. Content-based regulations of platform speech are subject to heightened scrutiny. Requirements that platforms host specific viewpoints (proposed "must-carry" rules) raise compelled speech concerns under Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995). Transparency requirements are more constitutionally durable but less politically exciting.

Third, the institutional capacity problem: the United States lacks a regulatory agency with clear jurisdiction and capacity to regulate platform information governance. The FTC, FCC, and FEC each have partial jurisdiction over different aspects of platform activity. A dedicated digital regulator — as several bills have proposed — would require congressional action to establish and fund.

Fourth, and perhaps most significantly: the lobbying problem. Major platforms have invested heavily in maintaining the Section 230 framework and resisting significant reform. The asymmetry between well-funded platform lobbying and diffuse public interest is a classic collective action problem.


What the Comparison Reveals

Ingrid Larsen's observation — "the DSA doesn't tell platforms what content to remove; it's about process and transparency, not about what's true or false" — captures the EU's key regulatory insight. By avoiding content regulation and focusing on process, transparency, and risk accountability, the DSA sidesteps the most difficult First Amendment-type objections while still creating meaningful accountability mechanisms.

The comparison with Section 230 reveals that the U.S. framework is not actually a "free speech" framework in a robust sense — it is an immunity framework. Section 230 does not protect speech; it protects platforms from accountability for the speech they host and amplify. The distinction matters: you can reform platform accountability without restricting speech, just as the EU has done.

Whether the EU approach is adequate is genuinely uncertain. The DSA's effectiveness depends on:

- Whether the systemic risk assessment obligation actually causes platforms to investigate and mitigate harms, or whether it produces sophisticated compliance theater
- Whether the Commission has the resources, technical capacity, and institutional will to conduct meaningful audits and enforcement
- Whether the research access provisions actually enable independent evaluation of platform behavior
- Whether the framework can adapt to the speed of platform information operations

What the EU approach clearly does that the U.S. approach does not: it creates a legal infrastructure for platform accountability that requires platforms to investigate their own harms, disclose what they find, and face scrutiny from independent researchers and regulators. Whether that infrastructure produces better outcomes than the U.S. baseline is, as of this writing, still being determined.


Analytical Questions

1. The DSA creates a systemic risk assessment obligation but leaves the choice of mitigation measures largely to platform discretion. Tariq argues this means platforms will conduct assessments designed to reach predetermined conclusions. How could the DSA framework be designed to reduce this risk without imposing specific content mandates?

2. Section 230's complete immunity from third-party content liability was designed to enable moderation. In what ways has it had the opposite effect — enabling platforms to avoid accountability for their active choices about content amplification? Be specific about the distinction between passive hosting and active algorithmic curation.

3. Sophia is running a local school board campaign. How does the digital advertising disclosure gap in U.S. law specifically affect candidates at her level — candidates who are not running for federal office and who are competing in a local information environment? Would the EU's advertising transparency requirements, if applicable, make a difference to her campaign?

4. If you could take one element from the DSA and adapt it for the U.S. regulatory context — modified as necessary to address First Amendment constraints — which element would you choose and how would you adapt it? What would be lost and what would be gained from the U.S.-adapted version?

5. The X/Twitter case reveals a gap between regulatory time and platform time: DSA enforcement operates over months while influence operations operate over days or weeks. Does this temporal mismatch fundamentally undermine the DSA's effectiveness for electoral protection, or are there design modifications that could address it?


The Big Tobacco Parallel

One recurring reference point in discussions of platform regulation is the regulatory history of tobacco. U.S. tobacco advertising regulation began with Federal Trade Commission action in the 1950s, proceeded through voluntary industry agreements that proved inadequate, then mandatory disclosure requirements (health warnings), then restrictions on advertising targeting minors, and eventually FDA jurisdiction over tobacco products under the Family Smoking Prevention and Tobacco Control Act (2009).

The parallel is instructive in both directions. The tobacco analogy supports aggressive platform regulation: an industry producing a harmful product initially resisted regulation through industry self-regulation pledges, captured regulatory processes, and First Amendment arguments against advertising restrictions — and ultimately required comprehensive federal regulation to achieve significant public health outcomes.

But the parallel has limits. Tobacco products are uniformly harmful; platforms are not. The First Amendment applies to speech in ways that it does not apply to cigarettes. And the "product" being regulated — information — is categorically different from nicotine in ways that make the design of effective regulation considerably more complex.

What the tobacco history does clearly support is skepticism about voluntary industry self-regulation as a substitute for enforceable legal obligations. The tobacco industry's Code of Voluntary Advertising Standards, maintained for decades, produced documented evasion and inadequate compliance. The history of platform self-regulation — from early voluntary commitments through the Code of Practice on Disinformation to the Oversight Board — has followed a recognizably similar pattern.


This case study connects to: Chapter 35, Sections 35.6–35.7 (Section 230 and DSA analysis); Chapter 35, Section 35.8 (platform self-regulation); Case Study 35-1 (Smith-Mundt Act); Chapter 6 (free speech frameworks); Chapter 30 (information operations in democratic vs. authoritarian systems).