Case Study 37-2: Section 230 Reform Debates

Analyzing Three Proposed Reforms: EARN IT, KOSA, and SHIELD Act


Overview

Section 230 of the Communications Decency Act has been called "the twenty-six words that created the internet" (Jeff Kosseff's phrase) and "the Magna Carta of the internet" by supporters, and "a gift to Big Tech" and "a shield for predators" by critics. Since 2018, Congress has considered dozens of Section 230 reform proposals. Three have attracted the most sustained legislative attention: the EARN IT Act, the Kids Online Safety Act (KOSA), and the SHIELD Act.

Each reform targets a distinct harm (child sexual abuse material, minor safety, and non-consensual intimate imagery, respectively), each employs a different legal mechanism, and each has been praised by advocates for the populations it protects and criticized by free speech advocates. This case study analyzes each in detail, examining its likely effects both on the harm it targets and on the legitimate speech it may affect as collateral damage.


The Current Section 230 Baseline

To evaluate reforms meaningfully, we must first understand what Section 230 currently provides and why it matters.

Section 230(c)(1) says: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This has been interpreted by courts to mean that platforms cannot be held civilly liable for content their users post, even if they are aware of that content.

Section 230(c)(2) says platforms cannot be held liable for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." This protects moderation decisions.

The combination means:

  1. A platform is not liable for hosting a user's false claim, even if notified of it
  2. A platform cannot be penalized for choosing to moderate or not moderate that content

This is the immunity structure that all three proposed reforms would modify — each in different ways, for different categories of harm.


Reform 1: The EARN IT Act

What It Would Do

The Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, first introduced in 2020 by Senators Graham and Blumenthal and reintroduced in subsequent Congresses, would modify Section 230 specifically with respect to child sexual abuse material (CSAM).

Under EARN IT, platforms would need to "earn" their Section 230 immunity from CSAM-related civil and state criminal liability by complying with a set of "best practices" developed by a National Commission on Online Child Sexual Exploitation Prevention. The Commission would be composed of government officials (including the Attorney General and Secretary of Homeland Security) plus technology and child safety experts.

EARN IT would also clarify that Section 230 provides no immunity from state criminal CSAM statutes (federal criminal law is already outside Section 230's scope under subsection (e)(1)), resolving an ambiguity that courts had treated inconsistently.

The Problem It Targets

CSAM is already illegal under federal and state law. The National Center for Missing and Exploited Children (NCMEC) receives millions of CyberTipline reports annually from technology companies reporting suspected CSAM. Critics of the current legal structure argue that Section 230 has been interpreted by some courts to immunize platforms from civil liability even when they have been notified of CSAM on their platforms and fail to act — and that stronger accountability would produce better outcomes for child victims.

Key Criticisms

Encryption backdoor pressure: The original EARN IT Act drew immediate criticism from technology and civil liberties organizations arguing that the "best practices" framework could effectively mandate encryption backdoors. The logic: the Commission, dominated by law enforcement officials, would adopt "best practices" requiring platforms to scan content for CSAM; effective content scanning is incompatible with end-to-end encryption; platforms would therefore face the choice of abandoning encryption or losing their immunity.

Those pressing this critique note that multiple statements by the Attorney General during the same period explicitly called for mandated encryption backdoors. Defenders of the bill respond that the Commission would include technologists and privacy advocates, that the best practices would not necessarily require scanning that breaks encryption, and that later versions of the bill were amended to include explicit protections for encryption.
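
The scanning-versus-encryption tension can be made concrete with a toy sketch. Known-CSAM detection in deployed systems typically works by matching uploaded files against a database of hashes of previously identified material (PhotoDNA uses perceptual hashes; the sketch below uses exact SHA-256 matching only for brevity). Such matching requires access to plaintext: once content is end-to-end encrypted, the server sees only ciphertext and the match fails — which is why critics argue compliance pressure would push scanning onto the client or push platforms away from encryption. All names and data here are illustrative.

```python
import hashlib
import secrets

# Illustrative blocklist: SHA-256 digests of previously identified files.
# Real systems use perceptual hashes that survive resizing and re-encoding;
# exact hashing keeps this sketch short.
BLOCKLIST = {hashlib.sha256(b"known-abusive-file").hexdigest()}

def server_side_scan(data: bytes) -> bool:
    """Return True if the data matches the hash blocklist."""
    return hashlib.sha256(data).hexdigest() in BLOCKLIST

def e2e_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy stand-in for end-to-end encryption (one-time-pad XOR).
    The only point is that the server never holds `key`."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

payload = b"known-abusive-file"

# Without E2EE the server sees plaintext: the hash match succeeds.
assert server_side_scan(payload) is True

# With E2EE the server sees only ciphertext: the same content no longer
# matches, so scanning would have to happen on the client, before
# encryption -- the design change at the center of the EARN IT debate.
key = secrets.token_bytes(len(payload))
ciphertext = e2e_encrypt(payload, key)
assert server_side_scan(ciphertext) is False
```

The sketch also shows why "scan under encryption" is not a middle ground for hash matching: the server-side function has nothing meaningful to compare once the bytes are ciphertext.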

First Amendment concerns: The best practices framework effectively creates a government commission with authority over platform design and content policies. Civil liberties organizations argued this gives the government de facto content control over internet platforms without the procedural protections of direct regulation.

State law carve-out: Removing Section 230 immunity for state criminal CSAM laws means platforms could face 50 different state law standards for what constitutes failure to address CSAM. States with aggressive or poorly defined CSAM laws could expose platforms to liability for hosting content that does not constitute CSAM under federal law.

Likely effects on legitimate speech: The most significant concern is that EARN IT's compliance pressure would cause platforms to over-moderate — removing legal content (including, potentially, legitimate sexual health information, LGBTQ+ content, and academic discussions of child exploitation) to avoid the risk that borderline content triggers liability. Organizations serving LGBTQ+ youth were particularly vocal that EARN IT could cause platforms to remove resources for LGBTQ+ teenagers.

Assessment

EARN IT addresses a genuine harm: platforms that profit from advertising while hosting or failing to address CSAM. The existing FOSTA-SESTA law (enacted in 2018) demonstrated that Section 230 modifications targeting sex trafficking, while well-intentioned, can cause collateral harm to legitimate speech and services. Whether EARN IT's approach would perform better depends significantly on how the Commission is structured and what best practices it adopts — details that the legislation leaves substantially to the Commission's discretion. This discretion is precisely what critics find most concerning.


Reform 2: The Kids Online Safety Act (KOSA)

What It Would Do

The Kids Online Safety Act, introduced by Senators Blumenthal and Blackburn and passing the Senate in 2024 with substantial bipartisan support, would create a "duty of care" for covered platforms — defined as online services "likely to be accessed by" minors — to act in the best interests of minors. The duty would require platforms to:

  • Prevent and mitigate harms to minors including promotion of suicide, eating disorders, substance abuse, and sexual exploitation
  • Provide safeguards including default privacy settings, parental supervision tools, and limits on recommending content to minors
  • Give minors the ability to opt out of personalized algorithmic recommendations, with privacy-protective and data-minimization settings enabled by default

The bill, as amended after substantial advocacy by civil liberties organizations, dropped earlier language that would have required platforms to prevent minors' access to content "that promotes" certain harm categories. The revised bill focuses on platform design obligations and algorithmic recommendation systems rather than content removal.

The Problem It Targets

Research on the relationship between social media use and adolescent mental health — while contested — has generated substantial concern. Frances Haugen's 2021 disclosures of Meta's internal research suggested the company's own researchers had found that Instagram worsened body image and mental health for a substantial subset of teenage girls. Multiple state attorneys general, pediatric associations, and youth safety advocates have called for stronger platform obligations regarding minor users.

Key Criticisms

Original version's speech implications: The original KOSA included a duty to prevent minors from encountering content that "promotes" a list of harms including eating disorders, anxiety, and depression. Critics — including the ACLU and numerous LGBTQ+ organizations — argued that this obligation would cause platforms to remove LGBTQ+ content (on the theory that it might cause anxiety in some minors), mental health support content (which necessarily discusses mental health conditions), and political content (which might be characterized as promoting anxiety). The Human Rights Campaign and Lambda Legal both opposed the original bill on these grounds.

Amendment impact: The 2024 amended version substantially narrowed the duty of care, removing the "promotes" content standard and focusing on algorithmic design rather than content-based obligations. Supporters of the original bill argued the amendments weakened it significantly; supporters of the amendments argued they preserved the bill's core protective function while eliminating the censorship risk.

Verification problems: Enforcing minor-specific obligations requires knowing which users are minors. Age verification creates significant privacy concerns (platforms must collect sensitive data to verify ages) and security concerns (centralized databases of users' ages and identity documents are attractive targets for data breaches).
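
One commonly proposed mitigation for the verification problem is data minimization: verify once, retain only the derived fact the legal obligation actually needs (is this user a minor?), and discard the underlying birthdate or identity document. The sketch below is a hypothetical illustration of that principle — the names and structure are invented, not anything KOSA mandates.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AgeAttestation:
    """The only datum retained after verification: a boolean flag,
    not a birthdate or identity document. Hypothetical structure."""
    user_id: str
    is_minor: bool

def verify_and_minimize(user_id: str, birthdate: date,
                        today: date) -> AgeAttestation:
    """Derive the minor/adult flag from the birthdate and return an
    attestation. The caller is expected to discard `birthdate`: the
    point is that the platform's database never stores it."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    return AgeAttestation(user_id=user_id, is_minor=age < 18)

att = verify_and_minimize("u123", date(2010, 6, 1), today=date(2025, 1, 15))
# Only the derived flag survives; a breach of this record reveals far
# less than a breach of stored identity documents would.
assert att.is_minor is True
```

This reduces the breach exposure the criticism describes, but it does not solve the harder problem of how the birthdate is trustworthily established in the first place.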

Preemption of state innovation: KOSA proponents argued that federal legislation was preferable to a patchwork of state laws (which were multiplying rapidly). Opponents argued that federal preemption would prevent states from enacting stronger protections.

Effect on Misinformation

KOSA's effects on misinformation specifically would be indirect. The duty of care framework focuses on harms to individual minors (mental health, exploitation, privacy) rather than on misinformation as a category. However, the algorithmic recommendation provisions could affect how platforms distribute health-related misinformation to minors — an algorithmic recommendation system that deprioritizes harmful content would presumably deprioritize health misinformation. The extent and mechanism of this effect are speculative without implementation data.
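
The indirect mechanism described above can be sketched as a ranking adjustment: a recommender that applies a downranking penalty to flagged content when the viewer holds a minor account. Everything here — field names, the flag, the penalty factor — is a hypothetical illustration of the design pattern, not KOSA's text or any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    base_score: float        # engagement-predicted relevance
    flagged_harmful: bool    # e.g., flagged health misinformation

def rank_for_user(items: list[Item], is_minor: bool,
                  penalty: float = 0.2) -> list[Item]:
    """Order items by score, multiplying flagged items' scores by a
    penalty factor when the viewer is a minor. Adult accounts see the
    unmodified engagement ranking."""
    def score(item: Item) -> float:
        if is_minor and item.flagged_harmful:
            return item.base_score * penalty
        return item.base_score
    return sorted(items, key=score, reverse=True)

feed = [
    Item("misinfo-clip", base_score=0.9, flagged_harmful=True),
    Item("news-report", base_score=0.6, flagged_harmful=False),
]

# Adult account: raw engagement ranking puts the flagged clip first.
assert rank_for_user(feed, is_minor=False)[0].item_id == "misinfo-clip"

# Minor account: the penalty (0.9 * 0.2 = 0.18) drops it below the
# news item -- deprioritization without removal.
assert rank_for_user(feed, is_minor=True)[0].item_id == "news-report"
```

Note that the entire effect depends on the upstream flagging step, which is why the section calls the misinformation impact speculative without implementation data.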

Assessment

KOSA in its amended 2024 form is substantially less problematic from a free speech perspective than its original version, but key questions about implementation remain. The duty of care standard is inherently vague, and what constitutes "acting in the best interests of minors" will be contested by platforms, advocates, and regulators for years after enactment. The age verification challenge is real and unsolved. The bill's prospects depended significantly on House action and subsequent administrative implementation — neither of which was assured at the time of its Senate passage.


Reform 3: The SHIELD Act

What It Would Do

The Stopping Harmful Image Exploitation and Limiting Distribution (SHIELD) Act would create federal civil liability for non-consensual intimate imagery (NCII) — colloquially called "revenge porn" — allowing victims to sue platforms that fail to remove such content after being notified of it. The proposal would carve out an exception to Section 230(c)(1) immunity for knowing hosting of NCII.

The Act would:

  • Create a federal private right of action for NCII victims
  • Allow suits against platforms that knowingly host NCII after notification
  • Provide for compensatory and punitive damages

The Problem It Targets

Non-consensual distribution of intimate images — including both photographs/videos originally created consensually and synthetic (AI-generated) intimate imagery of real individuals — causes severe and documented harm to victims: psychological trauma, reputational damage, employment consequences, and in some cases has been linked to suicides. While the majority of US states have enacted NCII laws, enforcement against platforms is constrained by Section 230's immunity.

AI-generated NCII (sometimes called "deepfake porn") has dramatically increased the scale of the problem. A 2023 study estimated that AI-generated NCII constituted the vast majority of new NCII content on dedicated websites. Unlike traditional NCII, AI-generated versions can be created without any actual intimate images of the victim.

Key Criticisms

Notice-and-takedown workability: The SHIELD Act's framework requires platforms to remove NCII upon notification. Critics argue the notification-and-takedown model is susceptible to abuse: bad actors could use NCII takedown mechanisms to remove legitimate content by falsely characterizing it as NCII. The risk of weaponization against political speech, artistic expression, or journalism is not trivial.
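
The weaponization concern is visible in the shape of the takedown workflow itself. The sketch below models a minimal notice/counter-notice state machine of the kind the DMCA uses for copyright: content comes down on bare notice, but a counter-notice from the poster can trigger reinstatement, shifting the dispute back to the claimant. Whether the SHIELD Act would include a comparable counter-notice safeguard is precisely the design question critics raise; the states and transitions here are hypothetical.

```python
from enum import Enum

class Status(Enum):
    LIVE = "live"
    REMOVED = "removed"          # taken down on notice
    REINSTATED = "reinstated"    # restored after counter-notice

class TakedownRecord:
    """Minimal notice/counter-notice workflow (hypothetical sketch)."""

    def __init__(self, content_id: str):
        self.content_id = content_id
        self.status = Status.LIVE

    def receive_notice(self) -> None:
        # Platforms remove on notice to preserve their liability shield;
        # this is the step a bad-faith claimant can abuse against
        # legitimate journalism or art.
        if self.status == Status.LIVE:
            self.status = Status.REMOVED

    def receive_counter_notice(self) -> None:
        # A counter-notice safeguard lets the poster contest removal,
        # pushing the claimant toward court rather than quiet censorship.
        if self.status == Status.REMOVED:
            self.status = Status.REINSTATED

rec = TakedownRecord("post-42")
rec.receive_notice()
assert rec.status == Status.REMOVED       # content down on bare notice
rec.receive_counter_notice()
assert rec.status == Status.REINSTATED    # check against false claims
```

Without the second transition, the system is a one-way removal lever; with it, false NCII claims carry a cost, at the price of slower relief for genuine victims.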

Definitional scope: Precisely defining "intimate imagery" to capture actual NCII while excluding legitimate artistic nudity, journalistic documentation of human rights abuses involving sexual violence, and medical content is difficult. Overly broad definitions create censorship risks; overly narrow definitions leave victims without protection.

AI-generated content: The SHIELD Act's application to AI-generated NCII that does not use any actual imagery of the victim raises distinct questions. Traditional NCII law protects existing images; AI-generated NCII may require different legal frameworks based on the right of publicity, fraud, or defamation rather than privacy.

Small platform burden: Compliance costs for NCII notification and takedown systems would be significant for small platforms, community forums, and hosting services — potentially pushing NCII victims to use large platforms that can afford compliance infrastructure rather than eliminating NCII hosting generally.

Effect on Misinformation

The SHIELD Act is not primarily a misinformation law, but it addresses a form of harmful false communication — content that falsely represents its subject as participating in intimate activity. Its AI-generated NCII provisions overlap with broader debates about synthetic media regulation and truth in representation. Establishing a legal framework for NCII liability may also establish precedents for liability related to other forms of synthetic harmful content, including synthetic political disinformation.

Assessment

Of the three proposals, the SHIELD Act has the most targeted and clearly defined harm scope, and its use of a notice-and-takedown model rather than a pre-screening model minimizes First Amendment concerns. Its primary challenges are definitional (what counts as NCII?) and implementation (how to prevent weaponization of removal mechanisms). The growing AI-generated NCII problem may require a parallel regulatory track that addresses synthetic imagery specifically, rather than relying solely on a framework designed for traditional NCII.


Comparative Analysis

Dimension                     | EARN IT                                        | KOSA                                      | SHIELD Act
Primary harm targeted         | CSAM                                           | Minor mental health harms                 | Non-consensual intimate imagery
Section 230 modification      | Immunity conditioned on best-practices compliance | Adds duty-of-care standard             | Removes immunity for notified NCII
Government authority created  | National Commission on best practices          | FTC enforcement of duty of care           | Private right of action (courts)
Free speech risk level        | High (encryption, content pressure)            | Medium (original); lower (amended)        | Lower (targeted harm category)
Misinformation relevance      | Indirect (platform design)                     | Indirect (algorithmic effects on minors)  | Low (different harm type)
AI-generated content coverage | Partial                                        | Not addressed                             | Partial
Impact on smallest platforms  | Moderate to high                               | Moderate                                  | Moderate

The Common Thread: Who Bears the Compliance Cost?

All three proposals shift compliance costs to platforms — specifically, to the cost of having systems, policies, and legal resources to address their respective harm categories. The platforms best positioned to bear these costs are large, established incumbents with substantial legal and engineering resources. The platforms least positioned to bear them are small platforms, new entrants, nonprofit community spaces, and international services.

This suggests that Section 230 reform of any kind — regardless of its specific mechanism — has structural effects on platform market competition that favor incumbents. Whether this effect is acceptable depends on how one weighs concentrated platform markets (with their own harms) against the costs of imposing liability obligations on small platforms.


Discussion Questions

  1. All three proposed reforms target specific harm categories (CSAM, minor safety, NCII) rather than misinformation directly. Is this targeting strategy well-designed to address the harms, or does it create gaps? Are there arguments for a more general platform duty of care?

  2. The EARN IT Act's potential effect on encryption is a recurring concern. Assess the tradeoff: if EARN IT effectively mandated encryption backdoors, would the reduction in CSAM hosting justify the reduction in privacy and security for all encrypted communications? Who bears the cost of this tradeoff?

  3. KOSA's original version was significantly amended in response to LGBTQ+ and civil liberties organizations' concerns about censorship of LGBTQ+ content. Does this amendment history illustrate the democratic process working as it should, or does it illustrate the difficulty of drafting targeted harm legislation without collateral speech effects?

  4. The SHIELD Act creates a private right of action rather than government enforcement. What are the advantages and disadvantages of private enforcement (litigation by victims) versus government enforcement (agency action) for NCII regulation?

  5. If all three reforms are enacted simultaneously, what would the combined effect on the Section 230 ecosystem likely be? Would the cumulative effect be greater or lesser than the sum of the individual reforms?


Key Takeaways

  • Section 230 reform proposals targeting specific harm categories (EARN IT/CSAM, KOSA/minor safety, SHIELD/NCII) are more constitutionally viable than proposals for general false statement liability, but still involve significant free speech tradeoffs.
  • The most serious concerns with EARN IT involve the potential for best practices requirements to effectively mandate encryption backdoors or create a government commission with broad de facto content authority.
  • KOSA's amendment history illustrates how even well-intentioned harm-prevention legislation can sweep in significant amounts of legitimate speech if the drafting is not precise.
  • The SHIELD Act's targeted notice-and-takedown approach is the most constitutionally contained of the three proposals, though its application to AI-generated NCII raises distinct challenges.
  • All Section 230 reforms impose compliance costs that disproportionately affect smaller platforms and create structural advantages for established incumbents.
  • None of the three proposals directly addresses misinformation in the sense of false claims about elections, vaccines, or public health — illustrating that "Section 230 reform" is not synonymous with "misinformation regulation."