Case Study: Section 230 vs. the EU DSA — Two Approaches to Platform Liability
"The question is not whether to regulate platforms. The question is whether we regulate them thoughtfully or whether we let them regulate themselves." — Margrethe Vestager, European Commissioner for Competition, 2022
Overview
On one side of the Atlantic, a 26-word sentence written in 1996 provides some of the broadest legal protections ever afforded to private companies: Section 230 of the Communications Decency Act. On the other side, a comprehensive regulatory framework enacted in 2022 and fully applicable from February 2024 imposes detailed, graduated obligations on digital platforms: the European Union's Digital Services Act. Together, these two regimes represent the most influential — and most divergent — approaches to platform governance in the democratic world.
This case study examines both approaches in depth, analyzing their origins, their mechanisms, their effects on platform behavior, and their implications for the governance of misinformation, content moderation, and algorithmic amplification. It is a case study in institutional design: how two democratic systems, facing the same technological challenge, arrived at fundamentally different governance frameworks — and what the consequences of each approach have been.
Skills Applied:
- Comparative regulatory analysis
- Evaluating trade-offs between immunity and obligation
- Connecting legal frameworks to platform behavior and user experience
- Assessing the effectiveness of regulatory design choices
The Situation
Part I: Section 230 — The Twenty-Six Words That Created the Internet
Origins
Section 230 was born from a legal paradox. Two court decisions, issued four years apart, produced contradictory results:
In Cubby, Inc. v. CompuServe (1991), a federal court held that CompuServe — an early internet service provider that did not review user-posted content — was not liable for defamatory content posted by a user. Because CompuServe did not exercise editorial control, it was treated as a distributor (like a bookstore) rather than a publisher (like a newspaper).
In Stratton Oakmont, Inc. v. Prodigy Services (1995), a New York court held that Prodigy — an internet service that did moderate some user content — was liable for defamatory user posts. Because Prodigy had established content guidelines and used screening software, the court treated it as a publisher exercising editorial judgment.
The paradox was immediate and dangerous: platforms that moderated content faced greater liability than platforms that did nothing. The legal incentive was to avoid any content moderation — a result that Congress found unacceptable.
Section 230, enacted as part of the Communications Decency Act of 1996, solved this problem with remarkable brevity. Its two key provisions established:
- No platform-as-publisher liability (Section 230(c)(1)): Platforms would not be treated as publishers of user content, regardless of whether they moderated content or not.
- Good faith moderation protection (Section 230(c)(2)): Platforms could moderate content in good faith without losing their immunity — directly overruling the Stratton Oakmont logic.
The Immunity in Practice
For nearly three decades, Section 230 has shielded platforms from liability for an extraordinary range of user-generated content. Courts have interpreted the immunity broadly:
- A platform that hosts content submitted by third parties is not liable if that content is defamatory (Batzel v. Smith, 2003).
- A platform that algorithmically recommends content is generally not liable for the content it recommends (Force v. Facebook, 2019).
- A platform that profits from user content through advertising is not liable for the content itself.
This broad immunity enabled the growth of the modern internet economy. Without Section 230, platforms like Facebook, YouTube, Reddit, and Yelp would have faced crushing litigation costs — every defamatory review, every harassing comment, every false accusation could have generated a lawsuit against the platform. Section 230's defenders argue that this legal protection was essential for innovation and for the creation of platforms that allow billions of people to speak freely.
The Critique
Critics from across the political spectrum argue that Section 230's broad immunity has created an Accountability Gap of historic proportions:
From the left: Platforms profit from hosting and algorithmically amplifying hate speech, misinformation, and content that causes real-world harm — election manipulation, genocide (Myanmar), self-harm among teenagers — while bearing no legal responsibility. Section 230 socializes the harms of platform content while privatizing the profits.
From the right: Platforms use their moderation power to suppress conservative speech, and Section 230 shields these moderation decisions from legal challenge. The immunity was intended to protect neutral platforms, not to empower politically motivated censorship.
From structural reformers: The debate about whether platforms moderate too much or too little misses the fundamental problem: algorithmic amplification. Section 230 was written in 1996, when platforms hosted static content. It does not distinguish between hosting content (passive) and amplifying content through recommendation algorithms (active). A law designed for bulletin boards is governing engagement-optimized systems that actively shape billions of people's information diets.
Part II: The EU Digital Services Act — Graduated Obligation
Origins
The EU DSA emerged from a different legal and political tradition. European governance has historically favored risk-based regulation over market self-regulation. The EU's approach to technology governance — exemplified by the GDPR (2018), the AI Act (2024), and the DSA itself — reflects a belief that digital markets, left unregulated, produce systematic harms that individual choice and market competition cannot correct.
The DSA was proposed by the European Commission in December 2020, negotiated over approximately 18 months, and became fully applicable in February 2024. It replaced the EU's prior framework (the e-Commerce Directive of 2000) with a comprehensive, graduated system of platform obligations.
The Graduated Structure
The DSA imposes different levels of obligation on different types of platforms:
All intermediary services (ISPs, hosting providers, online platforms) must:
- Establish a single point of contact for authorities
- Include content moderation information in their terms of service
- Publish annual transparency reports

Hosting services must additionally:
- Implement "notice and action" mechanisms for reporting illegal content
- Provide clear explanations when content is removed, with information about redress options

Online platforms (with user-generated content) must additionally:
- Maintain internal complaint-handling systems
- Prioritize reports from "trusted flaggers" — organizations with demonstrated expertise
- Suspend users who frequently post illegal content
- Not use "dark patterns" that deceive or manipulate users

Very Large Online Platforms (VLOPs) — those with more than 45 million monthly active EU users — must additionally:
- Conduct annual systemic risk assessments covering the risks of dissemination of illegal content, impacts on fundamental rights, and impacts on civic discourse and electoral processes
- Implement risk mitigation measures and submit to independent audits
- Provide the European Commission with data access for compliance monitoring
- Offer users the option to receive recommendations not based on profiling
- Maintain publicly accessible repositories of all advertisements displayed on their platform
- Appoint compliance officers independent from operational management
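To make the cumulative logic of this structure concrete, the sketch below offers a minimal illustration in Python. The tier names, obligation labels, and `applicable_obligations` function are paraphrases invented for this example, not an official DSA taxonomy or a compliance tool; the point is that each tier inherits the duties of the tiers beneath it, and that crossing the 45-million-user threshold, rather than service type alone, triggers the heaviest tier.

```python
# Illustrative only: tier names and obligation labels paraphrase the DSA's
# graduated structure described above; this is not a legal compliance tool.

VLOP_THRESHOLD = 45_000_000  # monthly active EU users triggering VLOP designation

TIER_ORDER = ["intermediary", "hosting", "online_platform", "vlop"]

# Obligations are cumulative: each tier adds to everything owed by the tiers below.
TIER_OBLIGATIONS = {
    "intermediary": ["point of contact", "moderation terms", "transparency reports"],
    "hosting": ["notice and action", "statements of reasons for removals"],
    "online_platform": ["internal complaint handling", "trusted flaggers",
                        "suspension of repeat offenders", "no dark patterns"],
    "vlop": ["systemic risk assessments", "risk mitigation and independent audits",
             "data access for the Commission", "non-profiling recommender option",
             "public ad repository", "independent compliance officers"],
}


def applicable_obligations(service_type, monthly_eu_users):
    """Return the cumulative list of duties for a service of the given type and size."""
    tier = service_type
    if service_type == "online_platform" and monthly_eu_users >= VLOP_THRESHOLD:
        tier = "vlop"  # size at or above the threshold, not service type, adds the heaviest tier
    duties = []
    for t in TIER_ORDER[: TIER_ORDER.index(tier) + 1]:
        duties.extend(TIER_OBLIGATIONS[t])
    return duties


# Example: a platform with 50 million monthly EU users owes all four tiers' duties.
print(applicable_obligations("online_platform", 50_000_000))
```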
Enforcement
The DSA is enforced through a dual system:
- National Digital Services Coordinators in each EU member state handle complaints, investigate violations, and impose penalties
- The European Commission has direct supervisory authority over VLOPs, including the power to request information, conduct inspections, and impose fines of up to 6% of global annual turnover
This enforcement structure gives the DSA real teeth. A fine of 6% of global turnover for Meta would exceed $8 billion — a figure that commands attention even from the world's largest technology companies.
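The arithmetic behind that figure is easy to check. A minimal worked example follows, assuming global annual turnover of roughly $135 billion (an approximation of Meta's reported 2023 revenue, used here only to illustrate scale):

```python
# Illustrative arithmetic only; the turnover figure is an assumption for scale.
DSA_MAX_FINE_RATE = 0.06           # fines of up to 6% of global annual turnover
global_turnover_usd = 135e9        # assumed: roughly $135 billion

max_fine = DSA_MAX_FINE_RATE * global_turnover_usd
print(f"Maximum DSA fine: ${max_fine / 1e9:.1f} billion")  # about $8.1 billion
```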
Comparative Analysis
Dimension 1: Default Posture
Section 230's default is immunity. Platforms are not responsible for user content unless a specific exception applies (child sexual abuse material, federal criminal law, intellectual property).
The DSA's default is obligation. Platforms must take specific actions to address illegal content, provide transparency, and manage systemic risks — with obligations scaling with platform size and power.
This difference reflects fundamentally different assumptions about markets and governance. Section 230 trusts market incentives to produce adequate content moderation. The DSA assumes that market incentives are insufficient — that platforms' profit motives are structurally misaligned with public safety — and that regulatory intervention is necessary.
Dimension 2: Algorithmic Accountability
Section 230 contains no provisions addressing algorithmic amplification. The law predates the modern recommendation algorithms that now organize most platform feeds, and courts have generally held that algorithmic recommendation is protected under the same immunity that covers hosting.
The DSA directly addresses algorithms. VLOPs must:
- Conduct risk assessments that include the impact of recommendation systems
- Offer users non-profiling-based recommendation options
- Provide data to regulators and researchers for algorithmic accountability research
This is one of the most significant differences. The DSA recognizes the amplification distinction — that algorithmically promoting content is functionally different from hosting it — and imposes governance obligations accordingly. Section 230 treats hosting and amplification identically.
Dimension 3: Transparency
Under Section 230, platforms have no transparency obligations. They may publish voluntary transparency reports (and most major platforms do), but the content, format, and completeness of these reports are entirely at the platform's discretion.
Under the DSA, transparency reporting is mandatory and detailed. VLOPs must report on content moderation decisions (number of items removed, reasons, use of automated systems, appeal outcomes), systemic risk assessments, and advertising practices. This information must be publicly accessible.
Dimension 4: User Rights and Redress
Under Section 230, a user whose content is removed by a platform has limited recourse. The platform's moderation decisions are generally protected by Section 230(c)(2), and the First Amendment (which restricts government action, not private company action) does not apply. Some platforms have introduced internal appeal mechanisms, but these are voluntary and non-standardized.
Under the DSA, users have structured redress rights. Platforms must explain removal decisions, provide internal complaint mechanisms, and users can escalate disputes to certified out-of-court dispute settlement bodies. These provisions create accountability for moderation decisions that does not exist under US law.
Dimension 5: The Misinformation Problem
Neither framework directly regulates misinformation (as distinct from illegal content). This is a deliberate choice: defining "misinformation" in law raises profound free speech concerns. False statements are generally protected speech in both US and EU law, unless they constitute fraud, defamation, or incitement.
However, the DSA's systemic risk assessment requirement indirectly addresses misinformation. VLOPs must assess and mitigate risks to civic discourse and electoral processes — which includes the risk that their recommendation algorithms amplify false and misleading content. This creates regulatory pressure to address misinformation as a systemic risk without requiring platforms to determine the truth or falsity of specific statements.
Section 230 creates no equivalent pressure. US platforms address misinformation only to the extent that their own business interests or reputational concerns motivate them to do so.
The Early Evidence: How Is the DSA Working?
The DSA has been fully applicable for only a short period, and rigorous evaluation will take years. But early observations include:
Increased transparency. Major platforms have published more detailed transparency reports to comply with DSA requirements. These reports provide researchers and policymakers with information that was previously unavailable — including, for the first time, data on the use of automated content moderation systems and their error rates.
Risk assessment challenges. The first round of systemic risk assessments revealed the difficulty of the exercise. Platforms argued that some assessment requirements were vague; regulators argued that some assessments were superficial. The iterative process of establishing expectations will take multiple cycles.
Enforcement actions. The European Commission opened formal proceedings against X (formerly Twitter) in December 2023 over concerns about the platform's handling of illegal content and disinformation. This was the first enforcement action under the DSA and signaled that the Commission intended to use its supervisory authority actively.
Compliance costs. Smaller platforms have raised concerns about the cost of DSA compliance, particularly the transparency reporting requirements. While the graduated structure limits obligations for smaller services, critics argue that compliance costs may still disproportionately burden smaller competitors relative to dominant platforms with larger legal and compliance teams.
Connecting to Chapter Themes
The Accountability Gap
Section 230 exemplifies the Accountability Gap: platforms make decisions that affect billions of people's access to information, but bear no legal responsibility for the consequences. The DSA represents an attempt to close that gap — not by eliminating platform discretion but by requiring transparency, risk assessment, and accountability mechanisms.
The Power Asymmetry
Under both frameworks, platforms hold enormous power over public discourse. But the DSA creates countervailing mechanisms — regulatory oversight, user redress, mandatory transparency — that Section 230 does not. The Power Asymmetry remains, but the DSA introduces institutional checks that reduce the asymmetry at the margins.
The Amplification Distinction
The DSA's treatment of recommendation algorithms represents the most significant regulatory recognition of the amplification distinction to date. By requiring VLOPs to offer non-profiling-based recommendation options and to assess the systemic risks of their recommendation systems, the DSA begins to differentiate between hosting and amplifying — a distinction that Section 230, written in 1996, does not and cannot make.
Discussion Questions
- The innovation argument. Defenders of Section 230 argue that broad immunity was essential for the growth of the internet economy. Evaluate this argument: Was Section 230 necessary in the 1990s? Is it still appropriate today, given the scale and power of modern platforms? What would the internet look like if Section 230 had never been enacted?

- The free speech tension. The DSA's systemic risk assessment requirements create pressure on platforms to address misinformation — but without defining what misinformation is. How does this approach balance free expression concerns with the need to address harmful false information? What are the risks of indirect regulation through risk assessment requirements?

- Exportability. Could the DSA model be adopted in the United States? What legal obstacles (particularly the First Amendment), political dynamics, and institutional differences would affect its implementation? Conversely, could Section 230's immunity model work in the EU?

- The small platform problem. Both frameworks face challenges with smaller platforms. Section 230 provides identical immunity to a tiny forum and to Facebook. The DSA imposes graduated obligations but still places compliance burdens on smaller services. Propose a framework that more effectively addresses the different governance challenges of different-sized platforms.
Your Turn: Mini-Project
Option A: Reform Proposal. Draft a proposed reform to Section 230 that addresses the amplification distinction — maintaining immunity for hosting while creating accountability for algorithmic amplification. Your proposal should: (1) define the distinction clearly, (2) specify what new obligations would apply, (3) address First Amendment concerns, and (4) explain enforcement mechanisms. Write a two-page policy brief.
Option B: DSA Audit. Select one VLOP (Meta, Google/YouTube, TikTok, X, Amazon) and review their most recent DSA transparency report (available on the European Commission's transparency database). Evaluate: (1) what information is disclosed, (2) what is notably absent, (3) how the platform describes its systemic risk assessment, and (4) what recommendations you would make for more meaningful transparency. Write a two-page assessment.
Option C: Global Comparison. Research one non-US, non-EU platform governance framework (Australia, India, Brazil, or Singapore, as described in Section 31.4.3). Compare it to both Section 230 and the DSA. Evaluate which approach it more closely resembles and what unique features it introduces. Write a one-page comparative analysis.
References
- Citron, Danielle Keats, and Benjamin Wittes. "The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity." Fordham Law Review 86, no. 2 (2017): 401-423.
- Communications Decency Act, Section 230. 47 U.S.C. Section 230 (1996).
- Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991).
- Douek, Evelyn. "Governing Online Speech: From 'Posts-as-Trumps' to Proportionality and Probability." Columbia Law Review 121, no. 3 (2021): 759-834.
- European Commission. "Digital Services Act." Regulation (EU) 2022/2065 of the European Parliament and of the Council, October 19, 2022.
- European Commission. "Formal Proceedings Against X Under the Digital Services Act." Press Release, December 18, 2023.
- Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019).
- Kosseff, Jeff. The Twenty-Six Words That Created the Internet. Ithaca, NY: Cornell University Press, 2019.
- Stratton Oakmont, Inc. v. Prodigy Services Co., No. 31063/94 (N.Y. Sup. Ct. 1995).
- Vestager, Margrethe. "Shaping Europe's Digital Future." European Commission Speech, February 2022.
- Wu, Tim. "Will Artificial Intelligence Eat the Law?" Columbia Law Review 119, no. 7 (2019): 2001-2028.