Chapter 37: Exercises — Regulatory Approaches: Free Speech vs. Safety
Instructions
These exercises range from doctrinal analysis to policy design to comparative law. Many require you to reason through genuine legal and policy tensions without a single "correct" answer. For policy analysis exercises, clearly state your assumptions, cite the principles from the chapter, and address the strongest counterarguments to your position.
Part A: Doctrinal Analysis
Exercise 1: Applying the Brandenburg Test
For each of the following statements, determine whether it meets the Brandenburg standard for unprotected incitement. Justify your answer with reference to the three-part test (directedness, imminence, likelihood).
a. A political commentator says in a podcast: "If the government keeps ignoring us, people are going to start burning things down."
b. A social media post targeting a specific public health official by name states: "Dr. [Name] lives at [address]. She is destroying our children. Someone needs to stop her."
c. A rally speaker says: "We should march on the Capitol right now and let them hear our voices — if they won't listen, we'll make them."
d. A meme shows an image of a county election official with the caption "Traitor. These people deserve what's coming to them." The meme is shared the night before the official's certification vote.
Exercise 2: Fact vs. Opinion Distinction
Classify each of the following statements as (a) a statement of fact, (b) an opinion, or (c) a mixed fact-opinion statement. For each, explain your reasoning and its implications for defamation liability.
a. "Senator Jones voted to raise your taxes five times."
b. "Senator Jones is corrupt."
c. "In my view, based on her voting record, Senator Jones clearly doesn't care about working families."
d. "The Acme vaccine contains toxic heavy metals."
e. "Studies show that masks don't work."
f. "The 2020 election was stolen through machine manipulation."
Exercise 3: Content vs. Viewpoint Discrimination
A state legislature enacts the "Anti-Misinformation in Public Health Emergencies Act," which prohibits "the publication of statements known to be false about state-authorized public health measures." Analyze whether this law constitutes:
a. Content-based regulation
b. Viewpoint-based regulation
c. Both, or neither
Then explain what level of constitutional scrutiny the law would face and whether, in your analysis, it would survive that scrutiny.
Exercise 4: State Action and Private Platforms
A large social media platform suspends a user's account for posting content the platform characterizes as health misinformation. The user argues this is "censorship" that violates her First Amendment rights.
a. Explain why this claim, as stated, fails as a matter of constitutional doctrine.
b. Under what circumstances could government involvement in a platform's content moderation decision create a viable First Amendment claim? Identify the relevant legal framework.
c. If the federal government sent a letter to the platform specifically identifying this user's content and "requesting" its removal, and the platform complied, how does this change the analysis? Reference the legal framework governing "jawboning."
Exercise 5: The Section 230 Liability Structure
Platform X hosts user-generated content and uses an algorithmic recommendation system that actively promotes content to users who have not specifically sought it out. A user sues Platform X for damages, alleging that she relied on health advice posted by another user and recommended by the algorithm, and suffered harm as a result.
a. How would Platform X most likely invoke Section 230(c)(1) to defend against this lawsuit?
b. Does the fact that Platform X's algorithm actively recommended the content — rather than passively hosting it — affect the Section 230 analysis? Research the legal debate on this question (hint: consider the Force v. Facebook and Gonzalez v. Google lines of cases).
c. If Congress eliminated Section 230(c)(1) immunity for algorithmically amplified content while preserving it for passively hosted content, what would be the likely practical effects on platform behavior? List at least three anticipated effects.
Part B: Comparative Law Analysis
Exercise 6: DSA vs. Section 230
Create a structured comparison of Section 230 (as currently enacted) and the EU Digital Services Act using the following dimensions:
| Dimension | Section 230 | DSA |
|---|---|---|
| Basic approach | ||
| Who is covered | ||
| Obligations on platforms | ||
| Liability structure | ||
| Government role | ||
| Speech implications | ||
| Enforcement mechanism | | |
After completing the table, write a 300-word analysis of which approach is better suited to the goal of reducing harmful misinformation while protecting legitimate speech.
Exercise 7: NetzDG's Asymmetric Incentives
Germany's NetzDG imposes fines for failure to remove "obviously illegal" content within 24 hours but imposes no penalty for over-removal of legal content.
a. Draw a decision matrix for a platform compliance officer facing a content removal decision under NetzDG. What does the matrix reveal about incentive structures?
b. A digital rights organization argues that NetzDG systematically produces over-censorship. A German government official argues that the law effectively removes genuinely illegal content without evidence of systematic over-censorship. What data would each side need to support their position? How would you design a study to evaluate the question empirically?
c. The EU's DSA takes a different approach, focusing on systemic risk assessment and transparency rather than mandatory removal with tight timelines. Evaluate whether this approach successfully addresses the asymmetric incentive problem you identified in part (a).
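The asymmetry in part (a) can be made concrete with a minimal expected-cost sketch. The payoff values below are illustrative assumptions, not figures from the statute; the point is only that when the fine applies in one direction and over-removal is costless, removal weakly dominates at every probability of illegality.

```python
# Sketch of a compliance officer's expected-cost calculation under
# NetzDG-style rules. Payoff values are illustrative assumptions.

FINE_IF_ILLEGAL_KEPT = 1_000_000   # assumed penalty for keeping illegal content
COST_OF_OVER_REMOVAL = 0           # no statutory penalty for removing legal content

def expected_cost(action: str, p_illegal: float) -> float:
    """Expected regulatory cost of an action, given P(content is illegal)."""
    if action == "keep":
        return p_illegal * FINE_IF_ILLEGAL_KEPT
    if action == "remove":
        return (1 - p_illegal) * COST_OF_OVER_REMOVAL
    raise ValueError(f"unknown action: {action}")

# Even when content is almost certainly legal, removal weakly dominates:
for p in (0.01, 0.5, 0.99):
    keep, remove = expected_cost("keep", p), expected_cost("remove", p)
    print(f"P(illegal)={p:.2f}  keep={keep:,.0f}  remove={remove:,.0f}")
```

Any nonzero cost attached to over-removal (reputational, contractual, or regulatory, as under the DSA's complaint mechanisms) changes the matrix; that is the lever part (c) asks you to evaluate.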
Exercise 8: The Singapore POFMA Case Study
Review the description of Singapore's POFMA in Section 37.8. Between 2019 and 2023, POFMA correction directions were issued to:
- A political opposition politician who claimed the government had misrepresented unemployment statistics
- A news website that published a report on a secret government financial arrangement
- A social media user who shared statistics about COVID-19 vaccination adverse events
- A foreign academic's journal article criticizing Singapore's judicial system
a. For each case, identify what question of fact vs. opinion/interpretation is at stake.
b. Design an alternative version of POFMA that retains the government's interest in correcting false information about public affairs while including safeguards against political misuse. What procedural protections would be essential?
c. Is it possible to design an anti-misinformation law effective enough to correct demonstrably false claims while including safeguards sufficient to prevent political misuse? Or are these goals in fundamental tension? Defend your position.
Part C: Policy Design Exercises
Exercise 9: Designing an AI Disclosure Regime
The US Congress asks you to draft a regulatory framework for disclosure of AI-generated content in political advertising. Your framework must:
- Not violate the First Amendment (be content-neutral or satisfy intermediate scrutiny)
- Provide meaningful information to voters
- Be technically feasible given current technology
- Apply to actors across the political spectrum consistently
Draft a one-page policy memo outlining your framework. Address: what must be disclosed, to whom, by whom, in what format, enforced by which agency, with what penalties for noncompliance.
Exercise 10: Redesigning the EARN IT Act
The EARN IT Act has been criticized for:
- Effectively mandating encryption backdoors
- Giving a government-appointed commission broad authority over platform design
- Creating de facto veto power over platform practices for a commission without meaningful independence
Redesign the EARN IT Act to address these criticisms while still advancing the goal of protecting children from CSAM online. What mechanisms would you use? What tradeoffs are unavoidable?
Exercise 11: Federal Anti-SLAPP Legislation
Several states have enacted anti-SLAPP statutes, but there is no federal equivalent. A coalition of press freedom organizations asks you to draft a federal anti-SLAPP statute.
a. What key provisions should the statute include? Consider: the standard for an anti-SLAPP motion to dismiss, the fee-shifting rule, the procedural posture (when does the motion stay discovery?), and which types of claims are covered.
b. What constitutional authority supports federal anti-SLAPP legislation? Are there federalism objections?
c. Identify potential opponents of federal anti-SLAPP legislation and their strongest arguments. How would you address each argument in your draft?
Exercise 12: Electoral Transparency Proposal Analysis
The Honest Ads Act would extend broadcast political advertising disclosure requirements to digital advertising. Analyze the proposal using the following framework:
a. Effectiveness: Would the disclosure requirements actually reduce misinformation effects, or would they only add labels that most users ignore?
b. Administrability: Which agency would enforce the requirements? Does that agency have the technical expertise and resource capacity to do so?
c. Constitutional validity: Are there First Amendment objections to requiring disclosure of political advertising's funders? Consider Citizens United and subsequent compelled-disclosure cases.
d. Scope gaps: Are there categories of political digital advertising that the proposal would not cover? How significant are these gaps?
Part D: Institutional Design
Exercise 13: Designing a Misinformation Regulatory Body
Several countries have created or proposed independent bodies to combat misinformation (Ireland's Media Commission, the UK's Ofcom under the Online Safety Act). You are tasked with designing such a body for the United States.
Design a regulatory body with specific answers to the following questions:
a. What is the body's formal name and authorizing statute?
b. What powers does it have? (Advisory only? Binding orders? Fine authority?)
c. How are its members selected, and what are their terms?
d. What institutional protections ensure political independence?
e. What decisions require judicial authorization before implementation?
f. What appeals processes are available to regulated parties?
g. How is the body funded, and are there safeguards against budget-based political pressure?
h. What sunset and review provisions apply?
Exercise 14: Platform Accountability Scorecard
Develop a "platform accountability scorecard" that evaluates social media platforms on their misinformation governance practices. Your scorecard should:
- Include 10-15 measurable criteria
- Weight each criterion by importance (your weighting should be justified)
- Be applicable to platforms of different sizes
- Distinguish between process accountability (having policies) and outcome accountability (what the policies produce)
- Not require access to non-public platform data (or clearly flag which criteria require data access)
Apply your scorecard to one major platform using publicly available information.
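One way to keep the weighting requirement honest is to make the aggregation explicit. The sketch below is a hypothetical scorecard skeleton, not an evaluation of any real platform: the criteria, weights, and scores are placeholders you would replace with your own.

```python
# Minimal weighted-scorecard aggregator. All criteria, weights, and
# scores are hypothetical placeholders for illustration only.

criteria = {
    # criterion: (weight, score on a 0-10 scale)
    "published misinformation policy":       (0.10, 9),  # process accountability
    "transparency reports with raw numbers": (0.15, 6),  # process accountability
    "independent audit of enforcement":      (0.25, 3),  # outcome accountability
    "appeal success-rate disclosure":        (0.20, 4),  # outcome accountability
    "researcher data access":                (0.30, 2),  # outcome; needs non-public data
}

total_weight = sum(w for w, _ in criteria.values())
assert abs(total_weight - 1.0) < 1e-9, "weights should sum to 1"

weighted_score = sum(w * s for w, s in criteria.values())
print(f"Overall score: {weighted_score:.2f} / 10")
```

Making the weights sum to 1 forces you to state tradeoffs explicitly: raising the weight on outcome criteria necessarily lowers the credit a platform can earn merely by having policies on paper.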
Part E: Case Analysis
Exercise 15: Analyzing the NetChoice Decisions
The Supreme Court's 2024 decisions in Moody v. NetChoice and NetChoice v. Paxton remanded the cases for further analysis rather than definitively resolving the constitutional questions.
a. What specific questions did the Court leave open on remand?
b. The Court strongly suggested that some applications of the Texas and Florida statutes would be unconstitutional. Based on the Court's analysis, which applications are most likely to survive constitutional review?
c. Write a one-paragraph prediction about how the lower courts will rule on remand and the likely Supreme Court resolution if the cases return.
Exercise 16: Defamation Reform Debate
Justices Thomas and Gorsuch have separately suggested that New York Times v. Sullivan (1964) should be reconsidered. Assume the Supreme Court agrees to hear a case presenting the question of whether Sullivan should be overruled or modified.
a. Write a 200-word argument for a petitioner (a public figure plaintiff) urging the Court to modify Sullivan.
b. Write a 200-word argument for a respondent (a media defendant) urging the Court to reaffirm Sullivan.
c. If you were advising the Court, what modification, if any, would you recommend? What would be the likely effect of your recommended modification on the misinformation landscape?
Exercise 17: Electoral Speech Hypothetical
In the month before a state gubernatorial election, a social media campaign produces videos showing the incumbent governor making statements she never made, fabricated using AI voice synthesis. The videos advocate for her opponent.
a. What existing legal claims, if any, could the governor bring?
b. What existing legal claims, if any, could the electoral authorities bring?
c. What existing regulatory authority, if any, could the FEC or state election authorities use to address the videos?
d. What regulatory gap does this scenario illustrate, and how would you propose to address it?
Part F: Synthesis and Reflection
Exercise 18: The "True Suppression Ratio"
A communications researcher proposes the "True Suppression Ratio" (TSR) as a metric for evaluating misinformation regulatory regimes: the number of false claims removed or labeled for every legitimate speech act suppressed. She argues that any regulatory regime with a TSR below 10:1 (fewer than 10 false claims addressed per legitimate speech act suppressed) is likely to cause more harm than good.
a. Evaluate the TSR concept as a regulatory evaluation tool. What are its strengths and limitations?
b. How would you operationalize "false claim removed" and "legitimate speech act suppressed" for empirical measurement purposes?
c. What TSR threshold would you set for acceptable misinformation regulation, and why?
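The metric itself is a simple ratio, which a sketch makes plain; the hard work is entirely in the measurement questions of part (b). The counts below are hypothetical, and the function names stand in for whatever operationalization you settle on.

```python
# Sketch of the researcher's TSR computation. Counts are hypothetical.

def true_suppression_ratio(false_addressed: int, legitimate_suppressed: int) -> float:
    """Ratio of false claims removed/labeled to legitimate speech acts suppressed."""
    if legitimate_suppressed == 0:
        return float("inf")  # no measured suppression of legitimate speech
    return false_addressed / legitimate_suppressed

THRESHOLD = 10.0  # the researcher's proposed 10:1 cutoff

tsr = true_suppression_ratio(false_addressed=4_800, legitimate_suppressed=600)
print(f"TSR = {tsr:.1f}:1 -> {'passes' if tsr >= THRESHOLD else 'fails'} the 10:1 test")
```

Note what the arithmetic hides: both counts depend on contested classification judgments, and a regime can inflate its TSR simply by adopting a narrow definition of "legitimate speech act suppressed."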
Exercise 19: First Principles Reflection
This chapter presents several philosophical frameworks for thinking about speech regulation:
- The marketplace of ideas (Holmes)
- The listener's autonomy (a listener-centered approach)
- Democratic deliberation (Sunstein's approach)
- Epistemic justice (Miranda Fricker's framework)
For each framework, briefly explain how it would evaluate a government mandate requiring social media platforms to add fact-check labels to vaccine-related posts that contradict CDC guidance.
Exercise 20: The Dual-Use Paradox
The chapter argues that "all anti-misinformation laws can suppress legitimate speech." Write a 400-word essay addressing the following:
a. Is the dual-use problem a fatal objection to all misinformation regulation, or does it merely mean that regulatory design must be more careful? Defend your position.
b. If the dual-use problem means that no misinformation regulation can be trusted to be consistently applied against actual misinformation rather than political speech, what follows for policy? Are there non-regulatory alternatives?
Exercise 21: Comparative Press Freedom
Research the press freedom rankings published by Reporters Without Borders (RSF) for the current year. Select three countries that have enacted significant anti-misinformation legislation (e.g., Singapore, Hungary, France, Germany, Australia) and three comparable countries that have not enacted such legislation.
a. Is there a correlation between anti-misinformation legislation and press freedom rankings?
b. What methodological limitations affect this analysis?
c. What does the comparative evidence suggest, if anything, about the effects of anti-misinformation law on press freedom?
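A first pass at part (a) is a simple group comparison. Everything in the sketch below is a placeholder: the country labels, the RSF ranks, and the legislation coding are invented for illustration, and you would substitute the current rankings before drawing any conclusion.

```python
# Illustrative group comparison for part (a). All data are hypothetical
# placeholders; substitute the current RSF rankings before interpreting.

from statistics import mean

countries = {
    # country: (has_misinfo_law, hypothetical RSF rank; lower rank = freer press)
    "Country A (law)":    (1, 120),
    "Country B (law)":    (1, 85),
    "Country C (law)":    (1, 24),
    "Country D (no law)": (0, 15),
    "Country E (no law)": (0, 40),
    "Country F (no law)": (0, 30),
}

with_law    = [rank for has, rank in countries.values() if has]
without_law = [rank for has, rank in countries.values() if not has]

print(f"mean RSF rank with law:    {mean(with_law):.1f}")
print(f"mean RSF rank without law: {mean(without_law):.1f}")
```

A gap in group means with n=6 is suggestive at most; the selection effects and confounders it ignores are exactly the methodological limitations part (b) asks you to identify.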
Exercise 22: Regulatory Capture Analysis
"Regulatory capture" occurs when regulatory agencies come to be dominated by the industries they regulate rather than the public interest they were designed to serve. Apply this concept to potential misinformation regulatory bodies:
a. What are the pathways through which a misinformation regulatory body could be captured by (i) incumbent large platforms, (ii) the government of the day, or (iii) advocacy organizations?
b. What structural features could reduce capture risk?
c. Compare your proposed safeguards to those in the DSA's provisions for the European Board for Digital Services and national Digital Services Coordinators. Are the DSA's safeguards adequate?
Exercise 23: International Law Dimensions
The International Covenant on Civil and Political Rights (ICCPR) Article 19 protects freedom of expression but permits restrictions that are (a) provided by law, (b) pursuing a legitimate aim (listed in the article), and (c) necessary and proportionate.
a. Apply the ICCPR Article 19 three-part test to Singapore's POFMA.
b. Apply the same test to the EU's DSA.
c. Apply the same test to a hypothetical US federal law requiring social media platforms to label election-related content from state and local election officials as "authoritative."
Exercise 24: Draft a Policy Brief
You are a policy analyst advising a newly elected U.S. Senator who wants to introduce legislation addressing election misinformation in digital advertising. The Senator has the following constraints:
- She does not want to create a government body that could be weaponized against political speech
- She wants to avoid direct content regulation that would face First Amendment challenges
- She wants to focus on transparency and accountability rather than prohibition
- She wants the legislation to be bipartisan enough to attract cosponsors
Draft a 500-word policy brief with a specific legislative proposal. Include: the problem statement, your proposed mechanism, anticipated objections from both the left and right, and your responses to each.
Exercise 25: Synthesis Essay
Drawing on the full content of this chapter, write a 600-word essay responding to the following claim:
"The First Amendment has become a shield for misinformation, protecting the financial interests of media companies and the political interests of bad actors at the expense of democratic self-governance. The cure is not more speech but robust government intervention to impose truth standards on digital public discourse."
Your essay should:
- Engage with the strongest version of this claim
- Identify what is right and what is wrong about it
- Propose a position of your own on the appropriate role of law in addressing misinformation
- Acknowledge the strongest objection to your position and respond to it