Case Study 38-2: Section 230 — The 26 Words That Created the Modern Internet (And Maybe Broke Democracy)
History, Interpretation, Reform Debate, and What It Reveals
The Origin Story
In 1995, a New York trial court decision in Stratton Oakmont v. Prodigy Services held that Prodigy, a bulletin board service, was liable as a publisher for defamatory content posted by an anonymous user. The court's reasoning: Prodigy had moderated its bulletin boards, exercising editorial control, and by doing so had made itself a publisher with publisher liability.
The decision terrified the nascent Internet industry and members of Congress who wanted to see the Internet develop. It created a catastrophic incentive: platforms that moderated content (trying to be responsible) faced greater liability than platforms that did nothing. The "don't moderate" conclusion was both legally rational and obviously undesirable.
Representatives Chris Cox (R-CA) and Ron Wyden (D-OR) drafted what became Section 230 as an amendment to the Communications Decency Act. Their goal was to remove this perverse incentive: platforms should be able to moderate content without thereby becoming publishers responsible for everything they leave up. Cox and Wyden wanted to enable a "Good Samaritan" approach to content moderation — platforms that tried to improve their platforms' quality should not be punished for the attempt.
The law passed in 1996. For the next decade, it operated largely as designed: it enabled user-generated content platforms to host content without assuming full publisher liability, and it allowed platforms to moderate that content without creating new liability.
What Cox and Wyden didn't anticipate — and what no one in 1996 could have fully anticipated — was the development of algorithmic recommendation systems at scale. Section 230 was written for a world where platforms were neutral conduits for user content. It was not written for a world where platforms actively select, rank, amplify, and target content to maximize engagement.
What Section 230 Actually Says (And What It Doesn't)
The liability protection in Section 230 is more specific than it's often described:
What (c)(1) protects: The provision — the famous 26 words — reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The immunity is for hosting user-generated content. Facebook is not liable for defamatory posts written by users. YouTube is not liable for infringing content uploaded by users. Twitter is not liable for harassment posted by users.
What (c)(2) protects: Good-faith decisions to "restrict access to or availability of material" that is "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." Platforms that moderate (remove, restrict, label) content don't thereby take on liability for content they leave up.
What Section 230 explicitly does NOT protect:
- Federal criminal law
- Intellectual property claims
- Electronic Communications Privacy Act violations
- Sex trafficking claims (after FOSTA-SESTA, 2018)
The critical ambiguity: algorithmic amplification
The major legal question that courts and scholars have debated for years: when a platform's recommendation algorithm actively surfaces a piece of user-generated content to specific users based on predicted engagement, is the resulting harm covered by Section 230's immunity?
The case for immunity: The algorithm is just a tool for organizing and displaying user-generated content. The content itself was created by the user; holding the platform liable for recommending it would still amount to treating the platform as the publisher of information provided by another — exactly what (c)(1) forbids.
The case against immunity: Active algorithmic amplification is substantively different from passive hosting. When a platform's algorithm identifies that a piece of extremist content will generate high engagement and pushes it to millions of users who didn't search for it, the platform is making a publication decision — just an automated one. The harm from the recommendation system is distinct from the harm of the underlying content.
Courts have generally, though not unanimously, sided with the immunity interpretation. The Supreme Court's Gonzalez v. Google (2023) case raised the question of whether Section 230 protects algorithmic recommendations directly, but the Court's decision sidestepped the core question by deciding the case on other grounds.
The Gonzalez Case: The Question the Court Didn't Answer
The Gonzalez v. Google case was brought by the family of a victim of the 2015 Paris ISIS attacks. The family alleged that YouTube's recommendation algorithm had amplified ISIS recruitment and propaganda videos, contributing to the radicalization of the attackers, and that Section 230 should not protect Google from liability for its algorithmic recommendations.
The case attracted enormous attention because it presented the Supreme Court with the algorithmic amplification question directly. Legal scholars filed dozens of amicus briefs arguing different positions. Tech companies, civil liberties organizations, and national security advocates all had stakes in the outcome.
The Court's 2023 decision was anticlimactic: in a brief per curiam opinion, the Court concluded that the Gonzalez plaintiffs' claims under the Anti-Terrorism Act (the underlying statute) appeared to fail under the standard announced the same day in Twitter v. Taamneh, so it declined to address the Section 230 question at all. Justice Thomas — who had previously written separately, in a 2020 statement in Malwarebytes v. Enigma Software, to question whether Section 230's immunity had been interpreted too broadly — authored the Taamneh opinion, but neither decision resolved the algorithmic amplification question.
The result was that the central question — does Section 230 protect platforms' algorithmic amplification decisions — remains formally unresolved. Lower courts have tended toward immunity, but the question is live.
The Reform Debate: Who Wants What and Why
Section 230 has become one of the rare issues with active critics on both the political left and right, though for completely different reasons. This unusual configuration has produced the paradox of bipartisan criticism without bipartisan reform.
Conservative critics argue that platforms use Section 230 immunity to moderate content in politically biased ways — removing conservative speech while leaving liberal speech, using the Good Samaritan immunity to impose ideological preferences without accountability. Their proposed remedies generally involve either removing the Good Samaritan immunity (platforms that moderate cannot claim immunity) or conditioning immunity on content-neutrality in moderation.
Progressive critics argue that platforms use Section 230 immunity to avoid accountability for harmful content they amplify — conspiracy theories, harassment, incitement, disinformation — arguing that algorithmic amplification is an editorial choice that should carry liability. Their proposed remedies generally involve creating liability for algorithmically amplified harmful content while preserving passive hosting immunity.
These two critiques point in exactly opposite directions: conservatives want platforms to moderate less (under threat of losing immunity), progressives want platforms to moderate more (under threat of liability for harmful content they amplify). Any Section 230 reform that satisfies one group tends to make the other group's concern worse, which is a significant reason comprehensive reform has not passed.
The actual effects of reform proposals reflect this tension:
EARN IT Act: Primarily addressed child sexual abuse material (CSAM), but the mechanism — a commission defining "best practices" for CSAM detection — raised concerns that those practices would effectively require scanning of encrypted content, with implications extending well beyond CSAM.
KOSA (the Kids Online Safety Act): Primarily addressed harms to minors, with bipartisan support but First Amendment concerns from civil liberties organizations, who argued the duty-of-care framework would lead to over-restriction of legal content.
Platform Accountability and Transparency Act: Primarily addressed researcher data access and transparency, drawing less political controversy but also generating less political momentum.
What the Section 230 Debate Reveals
Beyond the specific policy question, the Section 230 debate illuminates several deeper tensions in platform governance:
Speech protection vs. platform accountability. The American tradition of strong speech protection — shaped by the First Amendment and validated by hard history — means that content-based regulation of platforms is politically and legally difficult in ways it is not in Europe. This is not simply a policy failure; it reflects genuine values. The tension between protecting speech and protecting people from the harms of amplified harmful speech is real and not easily resolved.
The neutrality myth. Section 230 was written for a world in which platforms were relatively neutral conduits. The premise was that platforms don't make editorial decisions about user-generated content; they just host it. This premise is false for modern platforms, which make billions of editorial decisions through their recommendation algorithms. The legal fiction of platform neutrality is part of what allows immunity for algorithmic harms.
The political economy of reform. Platform companies have enormous lobbying capacity and have spent heavily to shape the Section 230 debate. The difficulty of passing reform despite bipartisan criticism of platforms reflects not just genuine policy disagreement but also the effects of that lobbying. Understanding why reform is hard requires understanding who benefits from the status quo and what resources they deploy.
The counterfactual matters. Section 230 enabled the development of user-generated content platforms that have produced genuine value: Wikipedia, community forums, small business reviews, social organizing, journalism. Reform that eliminates the user-generated content ecosystem to address platform harms would not obviously be a net improvement. The question is not "should Section 230 exist?" but "how should its immunity be scoped given what we now know about algorithmic amplification?"
Toward a More Adequate Framework
If Section 230 is to be reformed in a way that addresses algorithmic amplification harms while preserving what is genuinely valuable, several elements would need to be in place:
Distinguish passive hosting from active amplification. Maintain immunity for platforms that host user-generated content without actively selecting it for recommendation. Create liability pathways for platforms' algorithmic decisions to amplify specific content, where those decisions cause documented harm.
Specify a standard of knowledge. Liability that attaches only when platforms have actual notice that specific content is harmful (and choose to amplify it anyway) is more workable than strict liability for all amplified harms. A "knew or should have known" standard, applied to algorithmic categories rather than individual pieces of content, might be feasible.
Create a safe harbor for responsible practices. Rather than removing immunity as a sanction, create enhanced immunity (protection from more types of liability) for platforms that implement specified responsible practices: algorithmic impact assessments, researcher data access, transparent moderation policies.
Separate content liability from design liability. Platform design choices — infinite scroll, variable reward notification patterns, engagement-optimized recommendations — are distinct from hosting decisions. A reformed Section 230 might address design liability separately from content liability, creating accountability for harmful design without threatening the foundation of user-generated content.
None of these are politically simple. But they represent the direction in which a more adequate framework would need to develop, based on what the evidence shows about where platform harms actually originate.
Conclusion
Section 230 is a 1996 solution to a 1995 problem, applied to a 2020s information environment that its drafters could not have fully anticipated. The debate about reforming it reveals deep tensions between speech protection and platform accountability, between individual liberty and collective welfare, between the genuine value of user-generated content platforms and the genuine harms they cause.
The "26 words that created the modern Internet" didn't break democracy. Algorithmic amplification optimized for engagement, operating within a legal framework that immunizes those decisions from liability, created the conditions in which democratic discourse is systematically distorted by the incentives of engagement maximization. Fixing that requires addressing the incentive structure, not just the legal immunity.
The Section 230 debate is a window into that deeper problem — and a reminder that legal frameworks shape the conditions in which technological choices are made, for better and worse.
This case study draws on:
Kosseff, J. (2019). The Twenty-Six Words That Created the Internet. Cornell University Press.
Citron, D. K., & Wittes, B. (2017). The Problem Isn't Just Backpage: Revising Section 230 Immunity. Georgetown Law Technology Review.
Balkin, J. M. (2021). How to Regulate (and Not Regulate) Social Media. Journal of Free Speech Law.
Gonzalez v. Google LLC, 598 U.S. 617 (2023).