Chapter 35: Exercises
Law, Policy, and the Regulation of Propaganda
Exercise 35.1 — Constitutional Framework Analysis
Learning Objective: Apply the Brandenburg v. Ohio standard to contemporary disinformation scenarios and identify the gap between legal standards and democratic harms.
Instructions:
For each of the following scenarios, analyze whether the speech described would likely be protected under the Brandenburg standard, and separately identify whether it poses a genuine harm to democratic discourse that existing law fails to address.
Scenario A: A social media account operated by a foreign intelligence service posts accurate but selectively curated content designed to depress voter turnout among minority communities. The content contains no false statements. It is targeted via the platform's ad system to users profiled as likely to vote for a specific candidate.
Scenario B: An elected official claims, in a widely circulated speech, that a specific voting machine company has rigged its machines to change votes — a claim that has been reviewed by election officials in five states and found to have no factual basis. The official repeats the claim after the courts have rejected it.
Scenario C: A coordinated network of 500 social media accounts, all run by the same organization, posts that a specific public health official "should be removed from office by any means necessary" after the official issues a vaccine mandate. The accounts do not specifically call for violence.
Questions:
- For each scenario, identify the specific element of the Brandenburg standard that protects or does not protect the speech: (a) directed toward producing lawless action; (b) imminent; (c) lawless action; (d) likely to produce such action.
- Which scenario do you find most difficult to analyze under Brandenburg? Why?
- For any scenario where Brandenburg protects the speech, identify one non-legal response (platform action, journalistic coverage, civil society pressure) that might address the harm.
- Ingrid argues that the EU framework would handle at least one of these scenarios differently. Which one? How?
Exercise 35.2 — Section 230 and Platform Accountability
Learning Objective: Distinguish between platform hosting, platform moderation, and platform amplification as distinct regulatory problems.
Background Reading: Review Section 35.6 on Section 230 and the platform liability debate.
Part A — Categorize the Conduct
For each of the following platform actions, determine whether it is (1) passive hosting of user content (clearly covered by Section 230), (2) content moderation (covered by Section 230(c)(2)), or (3) active algorithmic amplification of content (Section 230 status contested):
a) A platform hosts a post claiming that a specific cancer treatment cures COVID-19.
b) A platform's recommendation algorithm surfaces that post to 500,000 users who searched for COVID-19 information.
c) A platform removes that post for violating its health misinformation policy.
d) A platform places a fact-check label on that post but continues to recommend it to users.
e) A platform runs a paid advertisement for a supplement company that uses similar claims.
Part B — Reform Design
Choose one of the three major reform approaches to Section 230 (eliminate it, narrow the immunity, create conditional immunity based on practices) and write a 300-word argument for your chosen approach. Your argument must address: (a) the specific harm it addresses; (b) the constitutional framework that permits it; (c) one serious objection to your approach; (d) how you respond to that objection.
Discussion Questions:
- Tariq argues that Section 230 reform will inevitably be used to pressure platforms to remove political content that powerful officials find inconvenient. How would you design a reform proposal to minimize this risk?
- The EU's DSA takes a different approach than Section 230 reform — it does not change liability rules but imposes transparency and process obligations. What are the advantages and disadvantages of the DSA approach compared to Section 230 reform?
Exercise 35.3 — Comparative Regulatory Design
Learning Objective: Compare the U.S. and EU regulatory frameworks and evaluate the trade-offs between speech protection and disinformation accountability.
Setup: You are a policy consultant advising a mid-sized democracy (population 50 million, consolidated democratic institutions, a history of foreign information operations by a neighboring state, and no equivalent of either the First Amendment or EU membership) that is designing a framework for regulating disinformation on social media platforms.
Task:
Design a two-page regulatory framework memo covering:
- The Problem Definition — What specific behaviors does your framework target? (Be precise: not "disinformation" but specific conduct or mechanisms.)
- Regulatory Architecture — Choose one of the following approaches or combine elements: (a) content-based restrictions requiring platform removal of specified false claims; (b) transparency and disclosure requirements; (c) algorithmic accountability obligations; (d) campaign finance-style disclosure rules for political advertising; (e) civil liability for documented harms.
- Constitutional/Rights Framework — What rights-protection principles constrain your framework? How does it avoid over-restricting legitimate speech?
- Enforcement Mechanism — Who enforces the framework (a government regulator, an independent body, a private right of action, or some combination)? What prevents enforcement from being weaponized against political opponents?
- Sunset Provision — Include a provision requiring review and reauthorization of the framework after five years, with specified criteria for evaluating whether it has achieved its goals without producing unacceptable side effects.
Class Discussion: Compare the frameworks proposed by different seminar members. Where do they agree? Where do they diverge? Do the disagreements reflect different values, different assessments of empirical evidence, or different institutional assumptions?
Exercise 35.4 — Political Advertising Disclosure Audit
Learning Objective: Apply campaign finance transparency analysis to real political advertising.
Instructions:
Using publicly available political advertising archives (the FEC's public database, Meta's Ad Library, Google's Political Advertising Transparency Report, or similar), identify three political advertisements in a current or recent election cycle and complete the following analysis for each:
For Each Advertisement:
- Who paid for it? Is the paying entity a campaign committee, super PAC, 501(c)(4), or other? What can you determine about the ultimate funding source?
- What audience was it targeted at? (Use whatever targeting information is publicly disclosed, and note what is not disclosed.)
- Does the advertisement contain any claim that is factually verifiable? Is that claim accurate? (You may need to do brief independent research.)
- Does the advertisement's sponsorship disclosure comply with the applicable legal requirements? (Compare what is disclosed with what the law requires.)
Synthesis:
Write a 200-word assessment of what your three-advertisement audit reveals about the current disclosure framework's effectiveness. What information did you have access to? What information was unavailable? What would a citizen need to know in order to make an informed judgment about each advertisement, and how much of that information was available?
Exercise 35.5 — Progressive Project: Policy Proposal Development
Learning Objective: Draft a structured policy proposal addressing a specific disinformation problem using the Action Checklist framework from Section 35.15.
This exercise contributes directly to your Progressive Project.
Step 1 — Problem Selection
Select one specific disinformation problem affecting the community context of your inoculation campaign. Options include:
- Dark money political advertising in local elections
- Health misinformation targeting specific demographic communities
- Foreign-operated influence networks in local political discourse
- Algorithmic amplification of extremist recruitment content
- Voter suppression disinformation targeting specific precincts
- Automated bot activity inflating apparent public support for specific positions
Step 2 — Background Research
Identify at least two published empirical studies or documented cases that establish the scope and mechanism of your chosen problem. Summarize the evidence in 100 words.
Step 3 — Policy Proposal Draft
Using the template from Section 35.16, draft your proposal (200–300 words) addressing: (a) the specific problem; (b) the proposed intervention; (c) the legal/constitutional framework; (d) one foreseeable unintended consequence and how your design addresses it.
Step 4 — Stakeholder Analysis
Identify: (a) two stakeholders who would support your proposal and why; (b) two stakeholders who would oppose it and why; (c) one institutional actor who would be responsible for implementation and whether they currently have the authority and resources to do so.
Step 5 — Peer Review
Exchange your proposal with a classmate. Write a 150-word response assessing: whether the identified problem is specific enough; whether the proposed intervention addresses the problem's causal mechanism; whether the constitutional analysis is accurate; and whether the unintended consequence analysis identifies the most significant risk.
Exercise 35.6 — Historical Pattern Analysis: The Weaponization Record
Learning Objective: Analyze Tariq's historical argument about speech regulation through primary source evidence.
Background: Tariq's argument in this chapter invokes a historical pattern: laws designed to restrict "harmful speech" are repeatedly used against the people they were supposed to protect. This exercise asks you to evaluate this claim through specific historical examples.
Research Task:
Investigate one of the following historical episodes in detail:
Option A: Eugene Debs and the Espionage Act (1917–1921) Research the specific charges against Debs, the speech that gave rise to the prosecution, the Supreme Court's decision in Debs v. United States (1919) upholding the conviction on grounds parallel to Schenck v. United States (1919), and Justice Holmes's eventual evolution toward the reasoning that culminated in the Brandenburg standard.
Option B: COINTELPRO and the Civil Rights Movement (1956–1971) Research the FBI's specific activities against the SCLC, NAACP, and Black Panther Party under COINTELPRO. What legal authorities were invoked? What extralegal activities occurred? What were the effects on the organizations targeted?
Option C: McCarthy Era and the Smith Act prosecutions (1948–1957) Research the prosecution of Communist Party leaders under the Smith Act in Dennis v. United States (1951) and the eventual narrowing of the Smith Act in Yates v. United States (1957). What was the political context? Who was prosecuted and why?
Analysis Questions:
- In your chosen example, what stated justification was offered for the speech restriction? What was the actual application of the restriction?
- Does your example support or complicate Tariq's argument? Does the history show that (a) bad actors consciously weaponized the law, (b) well-intentioned actors applied the law in ways that served their interests, or (c) the law's structure made weaponization inevitable regardless of intent?
- What safeguards, if any, might have prevented the abuse in your example? Could those safeguards be incorporated into contemporary regulatory proposals?
- Does your historical example provide evidence about how current disinformation regulations might be applied? What are the limits of the historical analogy?
Exercises contribute to the following Progressive Project components: Policy Proposal (35.5), Community Impact Analysis (35.4), and Historical Context (35.6). See the Progressive Project guidelines for integration instructions.