Chapter 37: Key Takeaways — Regulatory Approaches: Free Speech vs. Safety

Constitutional Architecture

  1. The First Amendment constrains only government actors. When platforms moderate content, no constitutional claim arises. When governments order platforms to remove content, or actively coerce moderation decisions ("jawboning"), First Amendment review is triggered.

  2. Recognized categories of unprotected speech are narrow and difficult to expand. Incitement (Brandenburg), defamation (with Sullivan's actual malice standard for public figures), true threats, fraud, and obscenity are the primary exceptions. The Supreme Court in Alvarez (2012) explicitly declined to add "knowingly false statements of fact" as a general unprotected category.

  3. The Brandenburg test for incitement demands directedness, imminence, and likelihood. Most false claims that contribute to harmful public behavior — vaccine refusal, election denial, conspiracy-driven harassment — do not satisfy Brandenburg's demanding standard.

  4. Content-based regulations face strict scrutiny; viewpoint discrimination is the most serious First Amendment violation. Misinformation laws that facially apply to all false claims but in practice target one political perspective may constitute viewpoint discrimination.

  5. The marketplace of ideas metaphor rests on contestable empirical assumptions. Algorithmic amplification of emotionally engaging content, resource asymmetries between misinformation producers and correctors, and documented cognitive biases all challenge the assumption that free speech markets reliably correct toward truth.

Platform Speech

  1. Platforms are speakers, not common carriers. The NetChoice decisions (2024) strongly suggest that platforms' editorial choices about what content to carry and how to organize it are protected First Amendment expression. Government mandates requiring platforms to carry content they choose to moderate likely face demanding constitutional scrutiny.

  2. State action doctrine means only government censorship is unconstitutional censorship. Private platform content moderation, however aggressive, is legally and constitutionally distinct from government censorship.

  3. Section 230's two provisions protect both hosting and moderation. Subsection (c)(1) immunizes platforms for third-party content; subsection (c)(2) immunizes good-faith moderation. Together they allow platforms to host user content without being liable for it while also allowing them to moderate without becoming responsible for everything they allow.

  4. Section 230 reform that imposes liability on platforms for algorithmically recommended content would likely cause substantial over-moderation. Platforms facing liability for recommendations would have strong incentives to recommend only content whose legality and safety are certain, dramatically narrowing the range of content amplified.

European Models

  1. The EU's Digital Services Act takes a systems approach, not a content prohibition approach. The DSA focuses on platform transparency, risk assessment, user controls, and researcher access — not on government determination of what content is false and should be removed.

  2. Germany's NetzDG created asymmetric incentives that favor over-moderation. Platforms face large fines for under-removing content but no penalty for over-removing content, systematically biasing compliance decisions toward removal.

  3. GDPR indirectly constrains misinformation infrastructure by limiting psychographic micro-targeting. The regulation does not address content directly but restricts the data infrastructure that enables precision-targeted political propaganda.
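The asymmetric-incentive point about NetzDG is, at bottom, an expected-cost argument, and it can be sketched as a toy calculation. The figures below are illustrative assumptions, not statutory values: `fine` stands in for a large under-removal penalty, and the zero `over_removal_penalty` reflects the absence of any sanction for taking down legal content.

```python
# Toy expected-cost model of NetzDG-style asymmetric enforcement.
# All numeric values are illustrative assumptions, not statutory figures.

def expected_cost(remove: bool, p_illegal: float,
                  fine: float = 50_000_000,        # hypothetical under-removal fine
                  over_removal_penalty: float = 0.0  # no penalty for over-removal
                  ) -> float:
    """Platform's expected cost for one borderline post."""
    if remove:
        # Cost is incurred only if the post was actually legal (over-removal).
        return (1 - p_illegal) * over_removal_penalty
    # Cost is incurred only if the post was actually illegal (under-removal).
    return p_illegal * fine

# Even a 1% chance that a post is illegal makes removal the cheaper choice
# whenever over-removal carries no penalty.
p = 0.01
print(expected_cost(remove=True, p_illegal=p))   # 0.0
print(expected_cost(remove=False, p_illegal=p))  # 500000.0
```

Under this asymmetry, a cost-minimizing platform removes every borderline post, which is the systematic bias toward removal described above.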

Defamation and Electoral Speech

  1. Defamation law can impose significant liability for false claims about identifiable private parties. The Dominion v. Fox News case demonstrated that internal communications can establish actual malice (knowledge of falsity or reckless disregard for truth) where executives and anchors privately rejected claims they continued to broadcast.

  2. The Sullivan actual malice standard makes defamation difficult for public figures. This limitation reflects a deliberate constitutional choice to protect robust public debate, not a failure of defamation doctrine.

  3. SLAPP suits weaponize defamation law against legitimate speech. Anti-SLAPP statutes — enacted in more than 30 states — provide critical protection for journalists, fact-checkers, and advocacy organizations.

  4. After Citizens United, disclosure is the primary constitutional tool for regulating election advertising. Prohibition of election spending by corporations and unions is not permitted; transparent disclosure of who funds election messaging is.

The Dual-Use Problem

  1. All anti-misinformation laws can suppress legitimate speech. Singapore's POFMA, India's IT Rules, and Hungary's pandemic emergency powers have all been applied to politically inconvenient true speech under anti-misinformation authorities.

  2. Anti-misinformation regulation requires procedural safeguards to resist political misuse. Judicial authorization, independent oversight, defined scope, transparency requirements, appeals mechanisms, and sunset provisions are essential design features.

  3. Political independence is a structural requirement, not an aspiration. Regulatory bodies with authority over misinformation must be insulated from the political interests of the government they serve, through fixed terms, bipartisan appointments, and legislative rather than executive oversight.

AI and Emerging Challenges

  1. AI-generated synthetic media creates significant regulatory gaps. Existing defamation, fraud, and election law frameworks apply only partially to AI-generated harmful content; none of them was designed with synthetic media in mind.

  2. The EU AI Act's synthetic media provisions focus on labeling, not prohibition. Disclosure requirements raise fewer constitutional problems than prohibitions but face technical limitations (watermark circumvention, label blindness) that limit their effectiveness.

  3. Watermarking requirements are necessary but insufficient. Technical markers identifying AI-generated content can be removed, and detection pipelines require platform cooperation. Watermarking is a component of a synthetic media regulatory framework, not a solution by itself.

Evidence-Based Policy Design

  1. Prefer structural interventions over content prohibitions. Algorithmic auditing, advertising transparency, platform design requirements, and business model accountability avoid direct constitutional confrontations and target the mechanisms of harmful content scaling.

  2. Measure effects on legitimate speech, not only harmful content. Regulatory evaluation must assess suppression of legitimate speech as a cost of regulation, not merely as acceptable collateral damage.

  3. Sunset provisions and mandatory empirical review are not optional. Misinformation regulation enacted during crisis conditions (pandemics, election periods) should automatically expire and require positive reauthorization based on evidence of effectiveness.


The Enduring Tension

The fundamental tension that runs through every debate over misinformation regulation cannot be resolved by doctrine, legislative design, or comparative reference. It is a genuine values conflict: between the democratic interest in controlling dangerous falsehoods and the democratic interest in preventing government control over political speech. Any regulatory approach must navigate this tension rather than dissolve it.

The most intellectually honest position is that there is no regulatory design that fully achieves both goals simultaneously. Every misinformation regulation sacrifices some protection for legitimate speech in exchange for some reduction in harmful false content. The policy question is not which design achieves both goals but which design strikes the most defensible balance — and who has the authority to decide that balance is struck correctly.