Case Study 6.1: The Tension Between Free Speech and Harmful Disinformation
Platform Governance and the Limits of the First Amendment
In January 2021, following the Capitol riot on January 6, Twitter permanently suspended President Donald Trump's account. Facebook followed with an indefinite suspension. YouTube removed videos claiming the 2020 election was stolen. Apple and Google removed the social media application Parler from their app stores.
The decisions triggered an immediate and predictable debate: was this censorship? Was it the proper exercise of private platform authority? Was it an attack on free speech or a defense of democratic institutions?
The debate was messy because it mixed several distinct questions that the free speech literature separates carefully. This case study untangles them.
Three Distinct Questions
Question 1: Did the First Amendment apply?
The First Amendment limits what the government may do to restrict speech. It does not bind private actors. Twitter, Facebook, Google, and Apple are private corporations; their decisions to moderate, suspend, or ban content are not First Amendment violations, regardless of how they are characterized in political debate.
This is settled constitutional law. The confusion is widespread because "free speech" has both a legal meaning (the First Amendment standard) and a more general cultural meaning (a norm of openness to a wide range of viewpoints). The suspension decisions were unambiguously within the platforms' legal rights. Whether they were wise, consistent, or appropriate by the broader cultural norm is a different question.
Question 2: Were the suspensions appropriate exercises of platform authority?
This is where genuine disagreement begins. Scholars and practitioners who study platform governance have staked out several positions:
Pro-suspension: The content in question directly contributed to a specific, documented instance of political violence. The platforms' own terms of service prohibited incitement to violence. Platforms that decline to enforce those terms against high-profile users create a documented asymmetry between how they treat powerful users and how they treat ordinary ones.
Anti-suspension: Permanently removing the account of a sitting head of state, days before a presidential transition, was an unprecedented exercise of private corporate power over political discourse, accountable to no democratic process. The decisions were made by small groups of company executives under public pressure, not through transparent, rules-based procedures. The precedent, applied globally, could justify suppressing far more legitimate political speech.
Structural critique: Both sides of the suspension debate accepted the premise that platform decisions of this scale are private corporate decisions made unilaterally. The more fundamental question is whether social media platforms with this level of influence over public discourse should operate as unaccountable private actors at all.
Question 3: What does this tell us about the relationship between free speech and democracy?
The case illustrates the central tension of Chapter 6 in concrete form. On one side: the speech in question — claims that the 2020 election was stolen, framing of the incoming government as illegitimate, rhetoric that a federal investigation subsequently found contributed to the Capitol attack — directly undermined democratic institutions and the conditions for a legitimate democratic transition.
On the other side: private corporations made decisions about the permissibility of political speech, based on internal policies, under political pressure, with no democratic accountability. If we accept that platform moderation decisions are appropriate remedies for democracy-damaging disinformation, we have also accepted that those decisions will be made by corporate actors whose interests and values may not align with democratic principles.
The Platform Dilemma
The three-question analysis reveals a genuine dilemma that neither the U.S. free speech tradition nor platform governance norms have resolved:
If platforms do nothing (minimal moderation), the information environment can be flooded with coordinated disinformation, and well-resourced propaganda operations can operate without accountability. The "marketplace of ideas" becomes a market dominated by the actors that are best funded and most willing to exploit cognitive vulnerabilities.
If platforms do too much (aggressive moderation), corporate actors with commercial interests make decisions about political speech that affect billions of people, with limited transparency, limited accountability, and incentives that may not align with democratic values.
The EU Digital Services Act (2022) represents one attempt to address this dilemma through regulation: requiring large platforms to assess and mitigate systemic risks, provide transparency reports, and be subject to audits — while stopping short of directly regulating the content of individual speech decisions. Whether this model will succeed is an empirical question that post-2022 research is beginning to address.
Historical Context: The Alien and Sedition Acts and the CPI
The contemporary debate about platform moderation is not the first time that the protection of speech and the protection of democracy have appeared to conflict. The historical record includes several episodes in which democratic governments restricted speech in ways that were later judged excessive:
The Alien and Sedition Acts (1798): President John Adams signed legislation making it illegal to publish "false, scandalous and malicious writing" against the government. The Federalist administration used the laws primarily against its Democratic-Republican opponents. Thomas Jefferson and James Madison argued that the laws violated the First Amendment. All but the Alien Enemies Act had expired or been repealed by 1802.
The Espionage and Sedition Acts (1917–1918): As described in Chapter 1, these laws were used to prosecute antiwar speech, labor organizing, and Socialist Party activity. Eugene Debs was sentenced to ten years for an antiwar speech. The Supreme Court upheld these prosecutions but later reversed course in Brandenburg v. Ohio (1969).
McCarthyism (1947–1957): Congressional investigations of alleged Communist infiltration suppressed political dissent, destroyed careers, and created a climate of political conformity through a combination of legal pressure, social stigma, and media amplification. (See Chapter 23.)
The pattern: Each of these episodes involved the government (or government-adjacent institutions) defining political speech it considered dangerous and restricting or suppressing it. Each was subsequently judged to have overreached. The historical pattern supports the argument that government authority to restrict "harmful" speech will be used against legitimate dissent.
What This Case Does Not Resolve
The case study is designed to illuminate the complexity of the debate, not to resolve it. Several questions remain genuinely contested:
- Can the First Amendment tradition be preserved while addressing the documented harms of large-scale coordinated disinformation?
- Can private platform moderation be made democratically accountable without becoming government censorship?
- Do the documented harms of misinformation-driven democratic erosion outweigh the documented harms of government speech restriction?
These questions will occupy media law scholars, platform governance researchers, and democratic theorists for the foreseeable future. Students who finish this course should be better equipped to engage these questions — not to have resolved them.
Discussion Questions
- The case study distinguishes three questions: the First Amendment question, the platform authority question, and the democratic implications question. Why is it analytically important to keep these three questions separate? What confusions arise when they are treated as the same question?
- Twitter's permanent suspension of a sitting head of state's account is described as "unprecedented." Is this the right criterion for evaluating the decision — that it had never been done before — or should it be evaluated by the specific facts and risks of the situation? What criteria would you use?
- The historical examples (Alien and Sedition Acts, Espionage Act, McCarthyism) support the argument that government authority to restrict "harmful" speech will be overused. Does this pattern apply to private platform moderation? What are the similarities and what are the differences?
- Design a brief regulatory framework for platform governance that attempts to address the "platform dilemma" — neither doing nothing nor giving corporate actors unconstrained authority. What transparency requirements, accountability mechanisms, and independent oversight structures would it include?