Chapter 37: Regulatory Approaches: Free Speech vs. Safety

Learning Objectives

By the end of this chapter, students will be able to:

  1. Explain the core First Amendment doctrines that govern speech regulation in the United States and identify categories of speech that fall outside constitutional protection.
  2. Analyze how the state action doctrine shapes the difference between government censorship and platform content moderation.
  3. Evaluate the key arguments for and against reforming Section 230 of the Communications Decency Act.
  4. Compare American free speech frameworks with European regulatory models, including the Digital Services Act and GDPR.
  5. Assess how defamation law operates as a check on misinformation, using Dominion v. Fox News as a case study.
  6. Identify the dual-use problem inherent in misinformation regulation and explain why all content-restricting laws can suppress legitimate speech.
  7. Describe emerging regulatory challenges posed by AI-generated synthetic media.
  8. Apply evidence-based design principles to evaluate proposed misinformation policies.

Introduction

Every society that has ever grappled with false information has faced the same foundational tension: the state power needed to suppress dangerous falsehoods is the same state power that, if deployed carelessly or maliciously, can suppress inconvenient truths. Governments that acquire the authority to label content "misinformation" acquire the authority to label political opposition "misinformation." This is not a hypothetical concern. It is a documented pattern observable from 19th-century seditious libel prosecutions through 21st-century government orders to social media platforms.

At the same time, the argument that unregulated speech markets will self-correct toward truth has never been empirically demonstrated and is contradicted by substantial evidence. False narratives about vaccines, elections, and health interventions spread faster and further than corrections. The marketplace of ideas metaphor, borrowed from Justice Oliver Wendell Holmes, assumes rough equality among speakers and rational evaluation of claims by audiences — conditions that do not describe the contemporary information environment.

This chapter examines regulatory approaches to misinformation from constitutional, comparative, and policy design perspectives. We begin with the foundational architecture of American free speech doctrine, proceed through the specifics of what that doctrine does and does not protect, examine the distinctive legal status of platform speech, analyze Section 230 and its proposed reforms, survey European regulatory models, examine defamation and electoral speech law, confront the dual-use problem in misinformation regulation, assess emerging challenges from AI-generated content, and conclude with evidence-based principles for policy design.

The goal is not to resolve these tensions — no chapter can — but to give readers the conceptual vocabulary and analytical frameworks needed to evaluate regulatory proposals rigorously rather than reflexively.


Section 37.1: The Constitutional Architecture of Free Speech

The First Amendment's Text and Its Interpretation

The First Amendment to the United States Constitution provides, in its speech and press clauses, that "Congress shall make no law... abridging the freedom of speech, or of the press." The text is absolute in form but has never been interpreted as absolute in substance. Since Schenck v. United States (1919), courts have recognized that some categories of speech fall outside First Amendment protection, and that even protected speech can sometimes be regulated if the government's interest is sufficiently compelling and the regulation is sufficiently tailored.

Two broad approaches have structured this interpretation:

Categorical Approach: The Supreme Court identifies categories of speech — such as true threats, incitement, obscenity, fraud, and defamation — that receive no First Amendment protection at all. Within these categories, government may regulate freely. Outside them, speech is presumptively protected. This approach, associated with Chaplinsky v. New Hampshire (1942) and its "fighting words" doctrine, prioritizes bright-line rules and predictability.

Balancing Approach: Courts weigh the speech interest against the government interest in regulation, asking whether the harm prevented justifies the speech restricted. This approach is more contextual and flexible but also more susceptible to manipulation: once courts are permitted to balance free expression against competing interests, they may find the scales tipping differently depending on who is speaking and what political pressures are operating.

Modern doctrine blends both approaches. Speech within recognized unprotected categories is subject to categorical exclusion; speech outside those categories is subject to tiered scrutiny — strict, intermediate, or rational basis — depending on whether the government is regulating based on content, viewpoint, or only incidentally.

Content vs. Viewpoint Discrimination

A foundational distinction in First Amendment law separates content-based from viewpoint-based regulation. Content-based regulations restrict speech on a particular topic (e.g., a law prohibiting discussion of public health measures). Viewpoint-based regulations restrict speech that takes a particular stance on a topic (e.g., a law prohibiting criticism of public health measures). Both face strict scrutiny and are presumptively unconstitutional, but viewpoint discrimination is the more serious First Amendment violation. Courts have invalidated laws that, while facially neutral, can be shown to target specific perspectives.

This distinction matters enormously for misinformation regulation. A law prohibiting "false statements about vaccines" is content-based if it applies to all such statements regardless of perspective, but viewpoint-based if, in practice or intent, it targets only statements skeptical of vaccines while permitting statements favorable to them.

The Marketplace of Ideas and Its Critics

Justice Oliver Wendell Holmes, dissenting in Abrams v. United States (1919), articulated the foundational metaphor of American free speech doctrine: "the ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market." This marketplace metaphor animated much of 20th-century free speech jurisprudence and continues to be invoked in debates about platform content moderation.

Critics of the marketplace metaphor — including legal scholars Jack Balkin, Tim Wu, and C. Edwin Baker — raise several powerful objections:

Market Power Concentration: Real speech markets are not competitive. A small number of platforms dominate information distribution, and within those platforms, engagement algorithms systematically amplify emotionally provocative content — including false content — over accurate but less emotionally salient alternatives.

Cognitive Limitations: The marketplace metaphor assumes rational evaluators who update beliefs in response to evidence. Cognitive science documents systematic departures from this model: confirmation bias, motivated reasoning, the illusory truth effect (repetition increases perceived accuracy regardless of truth value), and social proof effects.

Resource Asymmetries: Sophisticated misinformation campaigns can produce false content at scale using automated systems. Individual fact-checkers, independent journalists, and correction organizations operate with vastly smaller resources. The marketplace favors well-funded speech.

Chilling Effects on True Speech: When false statements crowd out true ones, or when the cost of refuting a deluge of false claims exceeds the resources of those with accurate information, the marketplace metaphor predicts outcomes opposite to those it is invoked to defend.

Legal scholar Frederick Schauer has argued that the marketplace of ideas justification for strong speech protection is an empirical claim about how speech markets behave — and that this empirical claim is contestable. Whether one ultimately agrees with the metaphor or not, its uncritical deployment as a conversation-stopper in misinformation debates fails to engage with substantial evidence about how contemporary information environments actually function.


Section 37.2: What the First Amendment Actually Protects (and Doesn't)

Understanding what the First Amendment actually prohibits is essential for evaluating misinformation regulation proposals, because many such proposals operate within recognized exceptions — or purport to.

Incitement: The Brandenburg Standard

In Brandenburg v. Ohio (1969), the Supreme Court held that the government may not punish "advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." This demanding three-part test — directedness, imminence, and likelihood — replaced the earlier "clear and present danger" standard from Schenck and dramatically narrowed the category of punishable incitement.

Brandenburg matters for misinformation because some false claims about political opponents, vaccines, or elections are invoked as incitement. The standard is rarely satisfied. A false claim that a vaccine causes infertility, even if many people believe it and act on it by refusing vaccination, does not constitute incitement under Brandenburg because it does not call for imminent lawless action. The difficulty of meeting the Brandenburg standard explains why incitement law is a poor tool for combating health or election misinformation.

Defamation: Sullivan and Its Progeny

New York Times v. Sullivan (1964) imposed constitutional limits on state defamation law, holding that public officials cannot recover for defamation without proving "actual malice" — that the defendant knew the statement was false or acted with reckless disregard for its truth or falsity. Subsequent cases extended this requirement to "public figures." Private individuals may recover under a lesser standard, but the First Amendment still requires some showing of fault.

The Sullivan framework was designed to ensure that robust debate about public officials is not chilled by the threat of defamation liability, even when some false statements are made in the course of that debate. It creates a legal regime in which false statements about public figures are effectively unactionable unless made with knowledge of their falsity or with reckless disregard for the truth — a demanding showing.

True Threats

The Court has held that "true threats" — serious expressions of an intent to commit violence — fall outside First Amendment protection. The difficulty is defining what makes a threat "true" versus rhetorical or hyperbolic. In Virginia v. Black (2003), the Court held that cross burning as intimidation could be prosecuted, but that a state could not create a presumption that all cross burning was intended to intimidate. More recently, in Counterman v. Colorado (2023), the Court held that the government must show the speaker was at least reckless as to whether the statement was a threat.

Fraud and Deception

Fraudulent speech — false statements made to obtain money or property — falls outside First Amendment protection. This exception encompasses false advertising, securities fraud involving false statements, and other forms of commercially motivated deception. Notably, the Supreme Court has resisted extending this exception broadly to cover all knowingly false statements. In United States v. Alvarez (2012), the Court struck down the Stolen Valor Act, which criminalized false claims of having received military honors. A plurality held that false statements of fact are not, as a general matter, unprotected, because such a rule would give the government enormous power to punish political opponents whose factual claims it contests.

Commercial Speech

Commercial speech — speech that does no more than propose a commercial transaction — receives intermediate First Amendment protection under Central Hudson Gas & Electric Corp. v. Public Service Commission (1980). The government may regulate commercial speech if (1) the speech concerns lawful activity and is not misleading; (2) the government interest is substantial; (3) the regulation directly advances that interest; and (4) the regulation is not more extensive than necessary. This framework allows the Federal Trade Commission to prohibit false advertising without triggering the full rigor of strict scrutiny.
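The Central Hudson inquiry is often described as a sequential, four-prong filter: prong one is a threshold question about the speech itself, and prongs two through four are conjunctive requirements on the regulation. A minimal sketch of that structure in Python — all names and boolean inputs are hypothetical simplifications for illustration; real constitutional analysis is qualitative, not mechanical:

```python
# Illustrative sketch of the Central Hudson four-part test as a
# sequential filter. Field names are hypothetical; courts apply these
# prongs through qualitative argument, not boolean inputs.
from dataclasses import dataclass

@dataclass
class CommercialSpeechRegulation:
    speech_is_lawful_and_not_misleading: bool  # prong 1 (threshold)
    government_interest_is_substantial: bool   # prong 2
    directly_advances_interest: bool           # prong 3
    no_more_extensive_than_necessary: bool     # prong 4

def central_hudson(reg: CommercialSpeechRegulation) -> str:
    # Prong 1 is a threshold: unlawful or misleading commercial speech
    # receives no protection, so the inquiry ends there.
    if not reg.speech_is_lawful_and_not_misleading:
        return "regulation permitted (speech unprotected)"
    # Prongs 2-4 must all be satisfied for the regulation to survive.
    if (reg.government_interest_is_substantial
            and reg.directly_advances_interest
            and reg.no_more_extensive_than_necessary):
        return "regulation survives intermediate scrutiny"
    return "regulation invalid"
```

The sequential structure matters: a regulation of misleading advertising never reaches prongs two through four, which is why the FTC can act against false advertising without a strict-scrutiny showing.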

The Narrow Scope of Exceptions

The key takeaway is that recognized speech exceptions are narrowly defined and difficult to expand. The Supreme Court's 2012 Alvarez decision explicitly rejected the government's invitation to create a general exception for knowingly false statements of fact. The path of least resistance for misinformation regulation, in American constitutional law, is either to work within recognized exceptions (defamation for false statements of fact about identifiable individuals, fraud for financially motivated false claims) or to use non-speech-restrictive tools (disclosure requirements, funding transparency, algorithmic auditing) that do not directly prohibit any speech.


Section 37.3: Platform Speech as Private vs. Public

The State Action Doctrine

The First Amendment constrains only government action. Private actors — including private corporations — are generally free to restrict or compel speech without triggering First Amendment scrutiny. This principle, known as the state action doctrine, means that when Facebook removes a post or Twitter suspends an account, no First Amendment claim arises. The platform is a private entity making editorial decisions about what content to host, and the Constitution does not constrain those decisions.

This creates an important distinction that is frequently obscured in public discourse: censorship in the constitutional sense is something only governments can do. When a platform removes content, it is exercising its own First Amendment rights as an editor. When a government orders a platform to remove content, that is state action and potentially unconstitutional.

Platforms as Speakers vs. Conduits

The legal status of platforms is contested along a spectrum:

Publisher Model: Platforms actively curate and editorialize their content through algorithms, recommendation systems, and human review. Under this model, they are speakers whose editorial choices are protected by the First Amendment. Compelling them to carry content they find objectionable would violate their First Amendment rights under Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston (1995), which held that forced inclusion of speakers in a parade violated the organizer's First Amendment rights.

Common Carrier Model: Historically, common carriers — telephone companies, railroads, postal services — were required to carry all traffic without discrimination as a condition of their franchise. Some scholars and policymakers argue that dominant social media platforms should be treated as common carriers subject to nondiscrimination obligations. Texas and Florida enacted laws in 2021 attempting to impose such obligations on major social media platforms.

Conduit Model: An intermediate position holds that platforms are primarily conduits for others' speech, that their editorial role is limited compared to traditional publishers, and that the state therefore has greater latitude to regulate their curation without violating the First Amendment.

The NetChoice Cases: Moody v. NetChoice (2024)

The Supreme Court's 2024 decision in Moody v. NetChoice and its companion case NetChoice v. Paxton addressed the constitutionality of Texas and Florida statutes that prohibited large social media platforms from moderating content based on viewpoint. Both statutes were challenged by internet industry trade associations.

The Court unanimously held, in an opinion by Justice Kagan, that the lower courts had not adequately analyzed the full scope of these laws' applications before ruling on their facial validity. The Court remanded the cases for more thorough analysis. But in an extensive portion of the opinion, the Court made clear that at least some applications of these laws would be unconstitutional. Platforms' content moderation choices — decisions about what content to carry, how to organize it, what to amplify — constitute editorial decisions protected by the First Amendment. A state cannot compel a platform to host all speakers' content without regard to that platform's own editorial judgment.

The NetChoice cases represent a significant, if still incomplete, resolution of the platforms-as-speakers vs. platforms-as-common-carriers debate. They strongly suggest that government mandates requiring platforms to carry content they have chosen to moderate will face demanding First Amendment scrutiny. However, the Court has not yet definitively addressed every form of potential platform regulation, and the cases leave open significant questions about what kinds of transparency requirements, disclosure obligations, and algorithmic auditing mandates survive constitutional review.


Section 37.4: The Section 230 Debate

What Section 230 Actually Says

Section 230 of the Communications Decency Act (47 U.S.C. § 230) was enacted in 1996 in response to two conflicting court decisions about online platform liability: Cubby, Inc. v. CompuServe Inc. (1991), which treated a non-moderating service as a mere distributor, and Stratton Oakmont, Inc. v. Prodigy Services Co. (1995), which treated a moderating service as a publisher liable for its users' posts. The statute has two key operative subsections:

Subsection (c)(1): "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This immunizes platforms from liability for third-party content — they cannot be sued for what their users post, as the platform is not treated as the author.

Subsection (c)(2): Platforms cannot be held liable for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." This provision protects the Good Samaritan function — platforms can moderate without becoming liable for content they fail to catch.

Together, these provisions allow platforms to host user content without being liable for it while also allowing them to moderate objectionable content without being treated as editors who thereby assume liability for everything they allow. Section 230 is often described as "the law that created the internet" because without it, the liability risk of hosting user content would have made most user-generated content platforms economically unviable.

The Case for Reform

Critics of Section 230 argue that the immunity is too broad and that its breadth permits platforms to profit from harmful content without accountability:

EARN IT Act (Eliminating Abusive and Rampant Neglect of Interactive Technologies): Originally introduced in 2020 and reintroduced in subsequent sessions, this proposal would condition Section 230 immunity for child sexual abuse material (CSAM) on platforms adopting government-approved "best practices." Critics argue this approach creates a commission with broad authority over platform policies, potentially mandating encryption backdoors or other surveillance capabilities.

KOSA (Kids Online Safety Act): Proposed legislation that would create a "duty of care" for platforms hosting content likely to be accessed by minors, potentially overriding Section 230's immunity for harms to children. The bill was substantially revised in response to free speech concerns before Senate passage in 2024; its fate in the House and under subsequent administrations remains uncertain.

SHIELD Act (Stopping Harmful Image Exploitation and Limiting Distribution): Targeting non-consensual intimate imagery (NCII), this proposal would allow victims to sue platforms that fail to remove such content, carving out an exception to Section 230's immunity.

Supporters of reform argue that the current immunity structure removes incentives for platforms to address known harmful content, that the immunity was intended to encourage good-faith moderation rather than to insulate platforms from all accountability, and that reform targeting specific harm categories (child safety, non-consensual imagery) can be done without broadly chilling legitimate speech.

The Case Against Reform

Defenders of Section 230's current scope argue:

Chilling Effects: Any reduction in Section 230's immunity will cause platforms to over-moderate to avoid liability. Smaller platforms and new entrants, which lack the legal resources of established giants, will be most affected. The result is that Section 230 reform benefits incumbents who can afford compliance while raising barriers to entry for competitors.

Democratic Speech: Much of the content that Section 230 enables is valuable political speech, journalism, and community expression. Reforms that make platforms liable for user content will cause them to remove borderline content aggressively, suppressing speech that should be protected.

Technical Limitations: Platforms cannot review all content before publication; the volume of user-generated content is simply too vast. Requiring pre-publication review would fundamentally alter the architecture of the internet.

User and Community Moderation: Section 230 enables community moderation by users, not just platforms. User reports, community flags, and volunteer moderators all depend on the immunity's protection, which extends by its terms to users as well as providers.

Who Benefits from Section 230

An important but underappreciated dimension of the Section 230 debate is who benefits from the current structure:

Large commercial platforms benefit from immunity for the user content that generates advertising revenue. But small, nonprofit, and community-operated platforms also benefit — and would be disproportionately harmed by increased liability. Wikipedia, small forums, archive sites, and local news comment sections all depend on Section 230. Reforms designed to target Facebook may primarily harm the internet's smaller, more democratically accessible spaces.


Section 37.5: European Approaches

The Digital Services Act

The European Union's Digital Services Act (DSA), which entered full effect in 2024, represents the most comprehensive attempt by a major jurisdiction to regulate online platform behavior without directly prohibiting categories of speech. The DSA creates a tiered regulatory framework:

Intermediary services (basic internet routing and network infrastructure): Subject to minimal obligations; they retain liability exemptions for content they merely transmit and have no knowledge of, and Member States may not impose general monitoring obligations on them.

Hosting services: Must have notice-and-takedown mechanisms and respond expeditiously to notifications of illegal content.

Online platforms (user-generated content sites): Must provide transparent terms of service, give users meaningful controls over recommender systems, and report illegal content to authorities.

Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) (services with 45 million or more average monthly active users in the EU): Subject to the heaviest obligations, including systemic risk assessments, independent audits, data access for researchers, and obligations to mitigate risks to democratic processes, public health, and fundamental rights.
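The tiered structure above can be sketched as a simple classifier that assigns a service its heaviest applicable tier. Everything here is an illustrative simplification — the function, its string inputs, and the shorthand obligation labels are assumptions for exposition, not the Regulation's actual legal tests:

```python
# Hedged sketch of the DSA's cumulative tiers. The 45M figure follows the
# text above; service-type strings and obligation summaries are
# illustrative shorthand, not legal categories as the DSA defines them.
VLOP_THRESHOLD = 45_000_000  # average monthly active EU users

def dsa_tier(service_type: str, monthly_active_eu_users: int) -> str:
    """Return the heaviest DSA obligation tier applicable to a service."""
    if (service_type in ("platform", "search_engine")
            and monthly_active_eu_users >= VLOP_THRESHOLD):
        return "VLOP/VLOSE: risk assessments, audits, researcher data access"
    if service_type == "platform":
        return "online platform: transparency, recommender controls, reporting"
    if service_type == "hosting":
        return "hosting: notice-and-takedown mechanisms"
    return "intermediary: liability exemptions, no general monitoring"
```

Note that the tiers are cumulative in the DSA itself: a VLOP is still an online platform and a hosting service, so it carries the lower tiers' obligations in addition to its own.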

Crucially, the DSA does not define "misinformation" or require platforms to remove content solely because it is false. The focus is on systemic risks, transparency, and platform design rather than content prohibition per se. Critics argue this distinction is more formal than real — that risk assessments will inevitably lead to pressure to suppress controversial but lawful content — while supporters argue it is precisely the right approach: regulating the mechanics of harmful amplification rather than speech content directly.

GDPR and Micro-Targeted Advertising

The General Data Protection Regulation (GDPR), effective since 2018, restricts the collection and use of personal data in ways that substantially limit the psychographically targeted advertising that has funded much political misinformation. By limiting data collection without explicit consent, restricting cross-context tracking, and requiring purpose limitation in data use, GDPR indirectly constrains the infrastructure of sophisticated disinformation campaigns — which often depend on demographic microtargeting to match false claims to psychologically receptive audiences.

The GDPR's chilling effect on data-driven political targeting is contested. Advertising technology companies argue that compliance costs favor large incumbents over smaller competitors. Privacy advocates argue that the restriction of surveillance-based advertising is valuable independent of its effects on misinformation.

NetzDG and Its Progeny

Germany's Netzwerkdurchsetzungsgesetz (NetzDG), enacted in 2017, requires large social media platforms to remove "obviously illegal" content within 24 hours of notification and other illegal content within 7 days, under threat of fines of up to €50 million. The law was designed to address hate speech and terrorist content, not misinformation specifically, but its compliance pressure has affected platform content moderation globally.

Critics of NetzDG — including UN Special Rapporteur on Freedom of Expression David Kaye — argued that the law created incentives for over-deletion because the penalty structure is asymmetric: platforms face large fines for failing to remove content but no penalties for removing content excessively. Private platforms became de facto judges of what speech was permissible, without the procedural protections of judicial review.

NetzDG influenced subsequent legislation in France (Loi Avia), Australia, and Singapore, all of which created mandatory removal obligations with tight timelines and significant penalties. The DSA largely supersedes NetzDG for EU-regulated matters while preserving Member State discretion on remaining national law matters.


Section 37.6: Defamation Law as Mis/Disinformation Check

How Defamation Law Works

Defamation — the communication of false statements of fact that damage a person's reputation — is one of the recognized exceptions to First Amendment protection, though the Sullivan framework significantly constrains defamation liability when public figures are involved.

A successful defamation claim requires:

  1. A false statement of fact (not opinion)
  2. Publication to a third party
  3. Fault (negligence for private plaintiffs; actual malice for public figures)
  4. Damages (though some categories of defamation are actionable per se without proof of specific harm)

The distinction between fact and opinion is critical for misinformation cases. Courts have generally held that statements of opinion, even if false, are not defamatory — "I think the mayor is corrupt" is opinion; "The mayor took a bribe from Company X on January 15" is a statement of fact. Many misinformation cases involve mixed fact-opinion statements where the line is contested.
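The elements above operate as a conjunctive checklist in which only the fault standard varies with the plaintiff's status. A hedged sketch of that logic — all field names are hypothetical simplifications, and real cases turn on contested facts and jury findings, not booleans:

```python
# Illustrative sketch of the defamation elements with the Sullivan
# fault-standard split. Field names are invented for exposition.
from dataclasses import dataclass

@dataclass
class DefamationClaim:
    false_statement_of_fact: bool       # element 1 (opinion not actionable)
    published_to_third_party: bool      # element 2
    plaintiff_is_public_figure: bool    # determines which fault standard
    knew_false_or_reckless: bool        # "actual malice"
    negligent: bool                     # lesser fault standard
    damages_shown: bool                 # element 4 (or a per se category)

def claim_viable(c: DefamationClaim) -> bool:
    # Elements 1, 2, and 4 are required for every plaintiff.
    if not (c.false_statement_of_fact and c.published_to_third_party
            and c.damages_shown):
        return False
    # Element 3 (fault): public figures must prove actual malice;
    # private plaintiffs need at least negligence.
    if c.plaintiff_is_public_figure:
        return c.knew_false_or_reckless
    return c.negligent or c.knew_false_or_reckless
```

The sketch makes the Sullivan asymmetry visible: an identical false statement that would support a private plaintiff's claim on a negligence showing fails for a public-figure plaintiff absent proof of knowing or reckless falsity.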

Dominion v. Fox News: The $787.5 Million Settlement

Dominion Voting Systems' defamation lawsuit against Fox News Corporation offers the most prominent recent illustration of how defamation law intersects with election misinformation. The case involved Fox News's broadcast of claims by Trump allies — including Rudy Giuliani and Sidney Powell — that Dominion had rigged the 2020 presidential election through algorithm manipulation, foreign interference, and vote switching.

Dominion alleged that Fox News knowingly broadcast false statements about Dominion's voting machines and software. The case rested on a large archive of internal Fox News communications — texts, emails, and deposition testimony — that painted a detailed picture of executives and anchors who privately expressed skepticism about or outright rejection of the election fraud claims they were broadcasting.

Key findings from pretrial proceedings (discussed in full in Case Study 37-1) included:

  • Fox Corporation chairman Rupert Murdoch described the stolen election narrative as "really crazy stuff" in internal communications.
  • Multiple Fox News anchors and executives privately expressed doubt about or rejection of specific claims while their network continued to air those claims.
  • Fox News executives discussed the risk of losing viewers to more credulous competitors (OAN, Newsmax) if they challenged the election fraud narrative, suggesting commercial rather than journalistic motivations.

Judge Eric Davis of the Delaware Superior Court ruled that it was "CRYSTAL clear that none of the statements... were true," and that the credibility of many of Fox's witnesses on the actual malice question would face serious challenge at trial. The case settled for $787.5 million — short of the $1.6 billion Dominion had sought, but the largest known defamation settlement in American history.

The Chilling Effect Debate

The chilling effect doctrine holds that speech restrictions can deter not only targeted speech but also constitutionally protected speech near the regulatory boundary, as speakers avoid the risk of punishment by staying well clear of the line. Defamation law, critics argue, chills investigative journalism, political criticism, and commentary on matters of public concern because speakers cannot always predict what courts will classify as defamatory.

The Supreme Court's Sullivan framework addressed this concern by requiring actual malice for public figure plaintiffs, deliberately tilting the playing field in favor of speakers commenting on matters of public concern. But critics argue this protection may be eroding. Justices Clarence Thomas and Neil Gorsuch have both suggested, in separate opinions, that Sullivan should be reconsidered — potentially lowering the bar for defamation liability and increasing the chilling effect on commentary about public figures.

SLAPP Suits and Anti-SLAPP Statutes

Strategic Litigation Against Public Participation (SLAPP) suits are defamation or related claims filed not to win on the merits but to impose litigation costs on critics, investigative journalists, or advocacy organizations. The typical target is a small organization or individual whose legal defense costs will be prohibitive regardless of the merits of the underlying claim.

More than 30 U.S. states have enacted anti-SLAPP statutes that allow defendants to file early motions to dismiss SLAPP suits and recover attorney's fees if successful. Federal anti-SLAPP legislation has been proposed but not enacted. Anti-SLAPP protections are important for media literacy organizations, independent journalists, and fact-checkers who may become targets of defamation claims filed by subjects of their work.


Section 37.7: Electoral Speech Regulation

Campaign Finance Disclosure

The Federal Election Campaign Act (FECA) and its implementing regulations require disclosure of the sources of funding for federal election advertising. Following Citizens United v. Federal Election Commission (2010), which held that corporations and unions cannot be prohibited from making independent expenditures in elections, disclosure has become a primary regulatory tool — the constitutionally permitted alternative to spending prohibitions.

Disclosure requirements serve transparency interests by allowing voters to evaluate who is funding election messaging. For misinformation purposes, disclosure is valuable because it allows audiences to identify foreign-funded content, coordinated inauthentic behavior, and sponsored political messaging that might otherwise appear to be organic grassroots content.

However, disclosure requirements are only as effective as their enforcement. The FEC has persistently struggled with partisan gridlock on enforcement. Digital advertising — which has increasingly replaced broadcast advertising as the primary vehicle for election messaging — has historically been subject to weaker disclosure requirements than broadcast advertising.

FEC Rules on Digital Ads

The Federal Election Commission has lagged behind the technological evolution of political advertising. While broadcast and cable television political advertisements are subject to "stand by your ad" disclaimer requirements and station recordkeeping obligations, digital advertising has historically been subject to more limited disclosure. The FEC has considered but not finalized rules that would extend broadcast-level disclosure requirements to digital political advertising.

Several state-level initiatives have attempted to fill this gap. California's Disclose Act requires prominent disclosure of the top funders of political ads. Washington State has enacted digital advertising disclosure requirements.

The Case for Transparency

The argument for election advertising transparency is not primarily about suppressing false claims but about enabling audience evaluation. An advertisement funded by a foreign government, a domestic political party, or an independent advocacy group warrants different levels of trust and scrutiny. Disclosure allows audiences to apply appropriate skepticism. This approach is consistent with First Amendment constraints because it compels speech (disclosure) rather than restricting it.

The Honest Ads Act, which has been introduced in multiple Congressional sessions without enactment, would extend broadcast-level disclosure requirements to digital political advertising and require platforms to maintain public archives of political advertisements. The proposal has bipartisan support in principle but has stalled repeatedly.


Section 37.8: The Dual-Use Regulation Problem

All Anti-Misinformation Laws Can Suppress Legitimate Speech

The fundamental challenge confronting any legal approach to misinformation is that the same regulatory tools capable of suppressing false claims are capable of suppressing true claims. This is not a bug in the design of anti-misinformation laws; it is an inherent feature of any law that requires an authority to classify speech as false. The authority that can remove a false vaccine claim can remove a true vaccine safety concern. The regulator that can label a conspiracy theory as misinformation can label legitimate criticism of public officials as misinformation.

This dual-use problem has been realized in practice in three instructive national contexts:

Singapore: The Protection from Online Falsehoods and Manipulation Act (POFMA)

Singapore's POFMA, enacted in 2019, authorizes government ministers to issue correction directions requiring platforms and users to post corrections alongside "false statements of fact" related to "public interest." Ministers can also order takedowns of content they determine to be false and contrary to public interest.

Critics — including international press freedom organizations and UN human rights experts — documented numerous applications of POFMA against opposition political speech, journalism critical of the government, and social media commentary on matters of contested political fact. The government's determination of what constitutes a "false statement of fact" on matters of public interest does not require judicial review before the correction direction is issued, though judicial review is available after the fact.

The Singapore case illustrates the core concern with government-administered "truth ministries": the political incentives of governments run directly counter to the purposes that anti-misinformation frameworks are designed to serve. Governments have strong incentives to suppress true information critical of their performance while using anti-misinformation law as a pretext.

India: IT Rules and Platform Pressure

India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 created obligations for "significant social media intermediaries" to proactively identify and remove content "disparaging" to sovereignty, integrity, security, and public order. They also required platforms to trace the originator of messages — effectively requiring the elimination of end-to-end encryption for private messaging services.

International human rights organizations documented applications of these rules against journalists covering government COVID-19 responses, critics of agricultural reform laws, and commentators on religious violence. The rules created substantial pressure on platforms to comply with government-initiated removal requests without judicial authorization.

Hungary: Pandemic Emergency Powers

During the COVID-19 pandemic, the Hungarian government enacted legislation that created criminal penalties for spreading "distorted facts" about the pandemic or government response measures. The law included no definition of what constituted a "distorted fact" and left determination to government prosecutors operating under an emergency powers regime without normal judicial oversight. Journalists, opposition politicians, and civil society organizations documented cases in which the law was applied to factually accurate reporting that was inconvenient for the government.

Designing in Safeguards

The lesson of these cases is not that legal approaches to misinformation are inherently illegitimate but that they require robust procedural safeguards against misuse:

  1. Judicial authorization: Removals or corrections should require judicial authorization before taking effect, not merely an administrative determination.
  2. Independent oversight: Any body empowered to designate content as misinformation should be structurally independent from political control.
  3. Defined scope: Regulatory authority should be clearly delimited to specific categories of harm rather than broadly available for all "public interest" determinations.
  4. Transparency: All removal or correction decisions should be publicly disclosed with reasoning.
  5. Appeals: Expedited appeals mechanisms should be available to speakers whose content is targeted.
  6. Sunset provisions: Emergency anti-misinformation authorities should automatically expire and require positive reauthorization.

Section 37.9: AI-Generated Content and Emerging Regulatory Gaps

The Synthetic Media Challenge

Generative AI systems capable of producing photorealistic images, convincing audio recordings, and persuasive text at scale have created new categories of potentially harmful false content — and exposed significant gaps in existing regulatory frameworks. "Deepfake" video and audio content can attribute statements to public figures who never made them; AI-generated text can fabricate quotes, events, and sources; synthetic profiles can impersonate real individuals or create convincing false identities for coordinated inauthentic behavior.

Current law addresses some of these harms through existing frameworks:

  • Defamation: AI-generated content that falsely attributes statements to identifiable individuals and damages their reputation may be actionable as defamation.
  • Fraud: AI-generated content used in financial fraud may be actionable under existing fraud law.
  • Election law: AI-generated content used in election advertising may require disclosure as political advertising.

But significant gaps remain. Defamation doctrine requires identification of a defendant; when AI-generated content spreads without clear attribution to a creator, identifying the legally responsible party is difficult. Existing election law was not designed for AI-generated synthetic media and does not clearly address content that fabricates candidates' statements.

The EU AI Act's Synthetic Media Provisions

The EU Artificial Intelligence Act, which entered into force in 2024, contains specific provisions addressing AI-generated synthetic media. Key requirements for AI systems that generate content include:

Disclosure obligations: AI systems that generate synthetic media — images, audio, video, and text that could be mistaken for real — must label their output as AI-generated in a machine-readable format.

Deepfake labeling: AI systems that generate content portraying real people in situations they were not actually in must clearly label such content as artificial.

Exceptions: Content generated for clearly artistic, satirical, or creative purposes may be exempt from mandatory labeling if the artistic nature is clear to a reasonable observer.

Enforcement: Member states must designate market surveillance authorities; the European AI Office exercises oversight for general-purpose AI models.
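The machine-readable marking requirement does not prescribe a particular format; real deployments typically build on provenance standards such as C2PA content credentials. As a purely illustrative sketch — all field names here are hypothetical, not drawn from the AI Act or any standard — a sidecar manifest and a simple compliance check might look like:

```python
import json

# Hypothetical sidecar manifest for an AI-generated image.
# Field names are illustrative only: the AI Act mandates
# machine-readable marking but does not fix a schema.
manifest = {
    "asset": "campaign_image_001.png",
    "ai_generated": True,
    "generator": "example-model-v1",   # hypothetical model name
    "created_utc": "2024-06-01T12:00:00Z",
    "disclosure": "This image was generated by an AI system.",
}

def is_disclosed_ai_content(manifest_json: str) -> bool:
    """Return True if the sidecar declares the asset AI-generated."""
    data = json.loads(manifest_json)
    return bool(data.get("ai_generated"))

serialized = json.dumps(manifest)
print(is_disclosed_ai_content(serialized))  # True
```

The design point is that a disclosure regime needs both an embedding convention (the manifest) and a checking convention (the verifier); either alone accomplishes nothing.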

Watermarking Requirements and Their Limits

Multiple regulatory proposals and voluntary industry commitments have focused on watermarking or other technical markers embedded in AI-generated content to enable identification. The concept has intuitive appeal but faces significant technical limitations:

Detection robustness: Watermarks can often be removed through simple post-processing (cropping, compression, screenshot-and-repost) without sophisticated technical capability.

Provenance without adoption: Watermarking is only useful if platforms check for watermarks and label or remove unwatermarked AI content — a pipeline that requires platform cooperation and user awareness.

Adversarial removal: Sophisticated actors will actively circumvent watermarking systems; the population most likely to engage in harmful synthetic media creation is the population most likely to seek out watermark removal.
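The detection-robustness problem can be made concrete with a toy example. The sketch below embeds a naive least-significant-bit (LSB) watermark in simulated grayscale pixel values and shows that even mild re-quantization, of the kind lossy recompression performs, erases it. This illustrates the general fragility concern only; it is not any deployed watermarking scheme.

```python
# Toy demonstration of watermark fragility: a naive LSB watermark
# survives a clean copy but not a coarse re-quantization.

def embed_lsb(pixels, bits):
    """Set the least significant bit of each pixel to a watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    """Read the watermark back out of the pixel LSBs."""
    return [p & 1 for p in pixels]

def requantize(pixels, step=4):
    """Simulate lossy compression by snapping values to a coarse grid."""
    return [round(p / step) * step for p in pixels]

pixels = [120, 37, 200, 89, 154, 66, 231, 18]
watermark = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_lsb(pixels, watermark)
assert extract_lsb(marked) == watermark   # survives a lossless copy

compressed = requantize(marked)           # mild post-processing...
recovered = extract_lsb(compressed)
print(recovered == watermark)             # prints False: mark destroyed
```

Production watermarking schemes are far more robust than this, but the same arms race applies at every level of sophistication: any transformation that discards enough low-order signal can discard the mark with it.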

Disclosure Obligations for Political AI Content

Several U.S. states enacted laws in 2023-2024 requiring disclosure when AI-generated content appears in political advertising. These laws vary in scope, definition, and enforcement mechanism. Some require visible on-screen disclosure; others require metadata disclosure; some have exemptions for satire.

The Federal Election Commission has also considered whether its existing prohibition on fraudulent misrepresentation of campaign authority reaches deliberately deceptive AI-generated campaign advertising. These regulatory efforts are early-stage and have not yet been tested in significant litigation.


Section 37.10: Toward Evidence-Based Misinformation Policy

Drawing from Law, Communications, and Political Science

Designing effective misinformation policy requires integrating insights from multiple disciplines. Legal scholars contribute expertise on constitutional constraints and institutional design; communications researchers contribute evidence on how misinformation spreads and how interventions affect it; political scientists contribute analysis of how regulatory institutions behave in practice and how concentrated interests shape regulatory outcomes.

Design Principles

Drawing on this multidisciplinary literature, the following principles can guide evidence-based misinformation policy:

1. Prefer structural interventions over content prohibitions. Regulations targeting algorithmic amplification, advertising transparency, platform architecture, and business model incentives avoid direct constitutional confrontations and target the mechanisms by which harmful content scales, rather than requiring case-by-case judgments about the truth of specific claims.

2. Require transparency rather than suppression where possible. Disclosure mandates — for political advertising, AI-generated content, foreign-funded speech, and platform recommendation algorithms — create conditions for informed evaluation without restricting speech directly.

3. Build in judicial oversight and appeals. Any government authority to designate content as misinformation should be subject to prompt judicial review, with expedited procedures that allow speakers to challenge designations before significant harm is done.

4. Design for political independence. Anti-misinformation regulatory bodies should be structured to minimize susceptibility to capture by the government of the day. This may include fixed terms, bipartisan or nonpartisan appointment processes, and reporting to legislative rather than executive bodies.

5. Measure effects on both harmful content and legitimate speech. Regulatory evaluations should assess not only the degree to which targeted harmful content is reduced but also the degree to which legitimate speech is suppressed as a side effect.

6. Sunset and review. Misinformation regulation should include scheduled expiration and reauthorization requirements, with mandatory empirical review of effectiveness before renewal.

7. Engage affected communities. Marginalized communities, journalists, and civil society organizations — who are often the targets of SLAPP suits and the constituencies most affected by both misinformation and over-censorship — should have formal roles in regulatory design and oversight.
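Principle 5 can be made concrete: a policy audit should report both a removal rate on harmful content and a suppression rate on legitimate speech, since either number alone can make a bad policy look good. The sketch below computes both from a small, invented labeled sample; the data and the pass/fail framing are illustrative, not drawn from any real audit.

```python
# Minimal sketch of dual-metric policy evaluation: measure harmful-content
# removal (recall) AND legitimate-speech suppression (false-positive rate).

def evaluate_policy(decisions):
    """decisions: list of (was_removed, is_actually_harmful) pairs."""
    harmful = [d for d in decisions if d[1]]
    legitimate = [d for d in decisions if not d[1]]
    recall = sum(removed for removed, _ in harmful) / len(harmful)
    fpr = sum(removed for removed, _ in legitimate) / len(legitimate)
    return recall, fpr

# Hypothetical audit sample: (removed?, actually harmful?)
audit = [
    (True, True), (True, True), (False, True), (True, True),        # harmful
    (False, False), (False, False), (True, False), (False, False),  # legitimate
]

recall, fpr = evaluate_policy(audit)
print(f"harmful content removed: {recall:.0%}")    # 75%
print(f"legitimate speech removed: {fpr:.0%}")     # 25%
```

A regulator reporting only the first number would call this policy a success; the second number shows a quarter of legitimate speech being swept up, which is exactly the side effect principle 5 demands be measured.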


Key Terms

Brandenburg test: The Supreme Court's standard for punishable incitement: speech must be directed to inciting imminent lawless action and likely to produce such action.

Categorical approach: A constitutional methodology that identifies categories of unprotected speech subject to regulation without balancing.

Chilling effect: The deterrence of constitutionally protected speech by legal rules whose reach extends near, but not into, an unprotected category, leading cautious speakers to self-censor protected speech.

Content-based regulation: A law that restricts speech because of the topic or subject matter it addresses.

GDPR: The EU's General Data Protection Regulation, restricting collection and use of personal data.

DSA: The EU's Digital Services Act, creating tiered obligations for online platforms based on size.

NetChoice cases: The Supreme Court's 2024 decisions (Moody v. NetChoice and NetChoice v. Paxton) addressing the constitutionality of state laws restricting platform content moderation.

Section 230: The federal statutory provision immunizing interactive computer service providers from liability for third-party content.

SLAPP suit: Strategic lawsuit against public participation — a defamation or related suit filed to impose litigation costs on critics rather than to vindicate a genuine legal claim.

State action doctrine: The constitutional principle that the First Amendment constrains only government actors, not private ones.

Sullivan actual malice standard: The constitutional requirement, from NYT v. Sullivan, that public figures must prove knowing falsity or reckless disregard for truth to recover in defamation.

Viewpoint discrimination: A law that restricts speech based on the perspective it takes on a subject, the most serious form of First Amendment violation.


Discussion Questions

  1. The "marketplace of ideas" metaphor justifies strong speech protection on the ground that truth will outcompete falsehood in free competition. What empirical evidence would be most relevant to evaluating this claim? Does the contemporary information environment support or undermine the metaphor?

  2. Section 230's subsection (c)(1) immunizes platforms for third-party content; subsection (c)(2) protects good-faith moderation decisions. Critics argue these provisions combine to give platforms immunity regardless of whether they moderate or not. Evaluate this critique. Is it accurate? What would a reform look like that addressed this concern while preserving the internet ecosystem?

  3. The Dominion v. Fox News case settled before trial. What are the implications of settlement for the development of defamation law in the context of election misinformation? Would a full trial verdict have been more or less valuable as precedent?

  4. Germany's NetzDG and the EU's DSA both impose obligations on large social media platforms, but with different approaches. Compare the approaches and their likely effects on both misinformation and legitimate speech.

  5. Singapore's POFMA, India's IT Rules, and Hungary's pandemic emergency powers all illustrate the dual-use problem in misinformation regulation. What institutional design features, if any, could reduce the risk that anti-misinformation law becomes a tool of political censorship? Are these features achievable in practice?

  6. The EU AI Act requires labeling of AI-generated content, but watermarks can be stripped and labels can be ignored. What is the value of labeling requirements if they are technically circumventable? What would actually make labeling effective?

  7. Anti-SLAPP statutes protect speakers from litigation designed to suppress their speech. Should the federal government enact a federal anti-SLAPP statute? What are the arguments on each side?

  8. Design a regulatory proposal for AI-generated political advertising that (a) does not violate the First Amendment, (b) provides meaningful transparency to voters, and (c) is technically feasible to implement. What tradeoffs does your proposal require?


Callout Box: The Dominion Precedent The $787.5M settlement in Dominion v. Fox News (2023) was historically significant not because it established new legal doctrine — it settled before a verdict — but because pretrial discovery produced an unprecedented archive of internal communications showing a gap between broadcasters' private assessments of election fraud claims and their on-air presentation of those claims. This evidentiary record may inform future defamation cases against media organizations that broadcast disputed election-related claims.

Callout Box: The Section 230 Paradox Section 230's two provisions create what some scholars call a "moderation paradox": platforms incur liability neither when they moderate nor when they decline to. The immunity for third-party content (subsection (c)(1)) means that failing to remove harmful content does not make a platform liable as a publisher. The immunity for good-faith moderation (subsection (c)(2)) means that removing content also creates no liability. Critics argue this combination removes all incentive for responsible content management. Defenders argue that without both provisions, the litigation risk of hosting any user content would make most user-generated content services economically unviable.

Callout Box: First Amendment Absolutism vs. Contextual Balancing Justice Hugo Black famously argued that "no law" in the First Amendment means no law — that the amendment bars all government restrictions on speech, period. This absolutist position has never commanded a Supreme Court majority, but it represents a pure form of the categorical approach. Most contemporary First Amendment scholars reject absolutism but disagree about where the balance should be struck. Where you locate yourself on this spectrum will shape how you evaluate every regulatory proposal in this chapter.


Section 37.11: The Role of Journalism and Civil Society in the Regulatory Ecosystem

Journalism as a Quasi-Regulatory Institution

Regulatory frameworks and platform content policies are not the only institutions that shape the information ecosystem. The journalism profession itself — through its norms, practices, and ethics — functions as a quasi-regulatory system for information quality that operates outside the formal legal apparatus and without the state action concerns that constrain government regulation.

The Society of Professional Journalists' Code of Ethics, for example, articulates norms around verification, independence, minimizing harm, and accountability that, if followed consistently, would substantially reduce the production of misinformation by professional news organizations. The fact that these norms are not universally followed — and that the economic pressures of digital media have incentivized departure from them in many outlets — illustrates a regulatory design challenge: informal professional norms are effective only when the professional community enforces them, and effective enforcement requires audience markets that reward, rather than punish, outlets that invest in verification and accuracy.

This points to an underappreciated dimension of the misinformation problem: it is partly a market failure. Outlets that invest in expensive verification, multiple-source confirmation, and careful accuracy practices bear costs that outlets that publish quickly without such investment do not bear. If audiences do not reliably favor accurate over inaccurate reporting — and there is reason to believe they do not in some significant market segments — the market does not provide signals that reward quality.

Regulatory approaches that ignore this market structure will be partially effective at best. Interventions that change the market signals — requiring disclosure of correction rates, supporting journalism through public or philanthropic funding, or creating certification systems that allow accurate outlets to differentiate themselves — may be more durably effective than content-specific regulations.

Civil Society Organizations and the Accountability Ecosystem

Beyond formal journalism, a substantial civil society ecosystem has developed to address misinformation:

Fact-checking organizations: PolitiFact, FactCheck.org, Snopes, Lead Stories, AFP Fact Check, and dozens of similar organizations around the world perform the verification function that platform algorithms cannot. The International Fact-Checking Network (IFCN) operates accreditation standards for fact-checking organizations. Third-party fact-checkers have been integrated into platform labeling systems such as Facebook's, an arrangement that sidesteps state action problems: platforms voluntarily apply labels suggested by independent organizations rather than acting under government direction.

Digital rights and press freedom organizations: The Electronic Frontier Foundation, the Reporters Committee for Freedom of the Press, the Committee to Protect Journalists, the Freedom of the Press Foundation, PEN America, and similar organizations monitor regulatory developments, provide legal support to journalists and publishers, and advocate for speech-protective policies. These organizations serve as a critical watchdog on both government and platform overreach.

Research and accountability organizations: The Stanford Internet Observatory, the Harvard Kennedy School Shorenstein Center, the Reuters Institute for the Study of Journalism, the Digital Forensic Research Lab (DFRLab), and similar institutions produce the empirical evidence base for policy decisions and document specific misinformation campaigns in the public interest.

Community and educational organizations: Libraries, schools, and community organizations deliver media literacy education that complements but operates independently of platform-level interventions and government regulatory activity.

The regulatory ecosystem is more accurately understood as a complex multi-actor governance system than as a binary choice between government regulation and unregulated private platform discretion. Effective misinformation policy design must work with, not instead of, this broader civil society ecosystem — designing regulation that empowers civil society actors, provides them with the data access and legal protection they need to function, and creates accountability structures that bring their insights to bear on platform and government decision-making.

The International Coordination Challenge

Misinformation does not respect national borders. A disinformation campaign originating in one jurisdiction can target audiences in another; a platform regulated in the EU operates globally; state-sponsored influence operations cross all jurisdictional lines. This creates inherent limitations on any national regulatory approach.

International coordination mechanisms for platform regulation are in early stages. The Global Partnership for Action on Gender-Based Online Harassment and Abuse has produced coordinated pressure on platforms regarding gender-based harassment. The G7 and G20 have addressed misinformation in communiqués. The Freedom Online Coalition, an intergovernmental forum, addresses internet freedom issues including state-sponsored disinformation.

The EU's DSA creates jurisdictional effects beyond Europe: platforms subject to DSA compliance have global operations, and complying with DSA requirements for European users may produce policy changes that affect all users globally — a phenomenon sometimes called the "Brussels Effect" by reference to the GDPR's similar dynamic. Whether this effect is desirable depends on whether the EU's regulatory approach is well-calibrated — if it is, the Brussels Effect exports good regulation; if it is poorly calibrated, it exports regulatory failure globally.


Chapter Summary: Regulatory approaches to misinformation exist within a complex landscape of constitutional constraints, institutional design challenges, and empirical uncertainties about what interventions work. American free speech doctrine extends robust protection to most false statements of fact when public figures are involved; European regulatory frameworks like the DSA take a different approach focused on systemic risk and transparency rather than content prohibition. Platform speech is constitutionally protected editorial decision-making under the NetChoice decisions. Reform of Section 230 remains contested, with significant consequences for both speaker freedom and platform accountability. Defamation law offers a partial check on misinformation but is constrained by the Sullivan actual malice standard. The dual-use problem — that all anti-misinformation law can suppress legitimate speech — is documented in real-world applications and must be addressed through procedural safeguards. AI-generated content creates new gaps that existing frameworks only partially address. Journalism ethics, civil society organizations, and international coordination mechanisms form a broader governance ecosystem that formal regulation must work with rather than replace. Evidence-based policy design favors structural interventions, transparency requirements, judicial oversight, and sunset provisions over content prohibitions administered by politically accountable officials.