Chapter 35: Law, Policy, and the Regulation of Propaganda

Part 6: Critical Analysis


"In Sweden, we have the Fundamental Law on Freedom of Expression, but we also have the Digital Services Act. How do you regulate disinformation without becoming the thing you're fighting?" — Ingrid Larsen

"That is exactly the question." — Prof. Marcus Webb


It was the third week of November, and the seminar room felt charged in a way that differed from earlier sessions. The course had spent months building toward this: understanding how propaganda works, why it works, what it does to individuals and institutions, and how people and societies resist it. Now the students were looking at the other side of the ledger — what law and policy can and cannot do in response.

Ingrid Larsen had raised the question at the end of the previous session, and Prof. Webb had promised to take it seriously. Ingrid was direct: in Sweden, freedom of expression was constitutionally protected at a level that made American First Amendment lawyers sit up and take notice. But Sweden was also a member of the European Union, and the EU had just enacted one of the most sweeping platform regulatory frameworks in history. She lived, in other words, at the intersection of the two dominant regulatory philosophies in the democratic world.

"The question isn't rhetorical," she said when Webb invited her to restate it. "My government is subject to both. How do you hold those two things at once?"

Sophia Marin, who had been running her school board campaign throughout the semester, had a more immediate concern. She had been posting on social media, sending emails, organizing a phone-banking operation. "I need to know," she said, "whether any of this could get me in legal trouble. And — separately — if I could write any law I wanted, what would actually help?"

Tariq Hassan, who had spent the semester as the seminar's most consistent skeptic of regulatory solutions, leaned back in his chair. "I want to go on record before we start," he said. "Every law that has ever been written to restrict 'harmful' speech has eventually been used against the people it was supposed to protect. The Espionage Act was passed to stop German spies. It was used against Eugene Debs. The Sedition Act was supposed to protect democracy. It was used to prosecute people for criticizing the draft. I want that pattern on the table before we talk about anything else."

"Noted," said Webb. "And we'll return to it. Repeatedly."


35.1 The Constitutional Framework: First Amendment in Context

The legal regulation of propaganda in the United States cannot be understood without beginning at the constitutional foundation. The First Amendment to the Constitution reads, in relevant part: "Congress shall make no law... abridging the freedom of speech, or of the press." The Supreme Court has, over two centuries, interpreted this clause to establish one of the most expansive speech-protective frameworks in the democratic world.

The operative legal standard governing speech that might incite violence or harmful action was established in Brandenburg v. Ohio (1969). In that case, the Court overturned the conviction of a Ku Klux Klan leader who had made inflammatory statements at a rally. The Court held that the government may not punish advocacy of the use of force or of law violation "except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action." This is a demanding standard. It requires not merely that speech be dangerous in some general sense, but that it be (1) directed toward producing (2) imminent (3) lawless action and (4) likely to actually produce it.
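
Because the prongs are conjunctive, the standard functions as an all-or-nothing test. The sketch below is only a minimal illustration of that structure; the prong names are paraphrases, and real First Amendment adjudication is not reducible to a boolean function.

```python
# Minimal illustration of Brandenburg's conjunctive structure.
# Prong names are paraphrases for illustration only; actual adjudication
# is not mechanical.

def punishable_incitement(directed_at_producing_action: bool,
                          action_is_imminent: bool,
                          action_is_lawless: bool,
                          action_is_likely: bool) -> bool:
    """Speech is punishable as incitement only if every prong is satisfied."""
    return (directed_at_producing_action
            and action_is_imminent
            and action_is_lawless
            and action_is_likely)

# A divisive troll-farm post is usually not directed at any imminent act,
# so it fails at least one prong and remains protected.
print(punishable_incitement(False, False, True, False))  # -> False
```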

The implications for propaganda regulation are substantial. Under the Brandenburg standard, most propaganda — even propaganda that is demonstrably false, demonstrably harmful to democratic institutions, and demonstrably produced by foreign adversaries — is constitutionally protected speech in the United States. A Russian Internet Research Agency troll farm posting divisive content on Facebook does not, on its face, direct imminent lawless action in the technical legal sense. This does not make the content harmless; it means only that a content-based prohibition would be constitutionally suspect.

The Brandenburg framework reflects a deep structural choice embedded in American constitutional law: the cure for bad speech is more speech, not enforced silence. This principle, associated most strongly with Justices Louis Brandeis and Oliver Wendell Holmes, holds that in a free marketplace of ideas, truth will tend to prevail over falsehood if the government stays out of the business of picking winners. The principle has genuine historical grounding — the government's track record as arbiter of truth is not inspiring — but it was formulated in a pre-digital, pre-platform, pre-algorithmic era when the "marketplace" metaphor had at least some structural plausibility.

The International Divergence

The United States framework diverges sharply from international human rights law. The International Covenant on Civil and Political Rights (ICCPR), which the United States has ratified with significant reservations, contains in Article 19 a right to freedom of expression subject to restrictions that are (1) provided by law, (2) necessary for the protection of specified interests, and (3) proportionate to the harm. This is a balancing framework, not an absolute prohibition on government restriction.

More significantly, ICCPR Article 20 requires — affirmatively requires — that states prohibit "any propaganda for war" and "any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence." The United States, upon ratifying the ICCPR, entered a formal reservation declaring that Article 20 does not require any restrictions on speech protected by the First Amendment. In practice, this means the United States treats its constitutional framework as superior to and incompatible with the international human rights obligation to prohibit war propaganda and hate speech incitement.

Most other liberal democracies have chosen differently. Germany prohibits Holocaust denial. Canada prohibits hate speech. France prohibits the public justification of terrorism. The United Kingdom has offenses of incitement to racial and religious hatred. The EU framework, as we will examine, proceeds from the premise that false and harmful speech can be regulated in a proportionate way without destroying democratic discourse.

Ingrid found this comparison clarifying. "The Swedish Fundamental Law is very protective," she said, "but it has always recognized that the state has a legitimate interest in prohibiting some kinds of speech. The question is always about the threshold and the process. In the U.S., it feels like the threshold is almost impossibly high, and the process question barely arises."

"Because if you set the threshold very low," Tariq said, "you're handing a weapon to whoever controls the government."

"That's the trade-off," Webb acknowledged. "And it's real. We'll come back to the historical record on that."


35.2 The Smith-Mundt Act: Domestic Propaganda Law

One of the most underappreciated regulatory interventions in the history of American information policy was enacted in 1948, at the dawn of the Cold War. The Smith-Mundt Act — formally the United States Information and Educational Exchange Act of 1948 — established the legal framework for American overseas propaganda operations while simultaneously erecting a critical firewall: the United States government was prohibited from directing its foreign propaganda apparatus at domestic American audiences.

The logic was straightforward and democratically principled. Voice of America, Radio Free Europe, and their successor programs were designed to project American values and information into societies living under communist control. These were propaganda operations in the technical sense — state-produced communications designed to influence foreign publics. Congress was willing to fund such operations, but it drew a clear line: the U.S. government could not use those same tools against its own citizens. The distinction between foreign and domestic audiences was constitutionally and democratically significant. A government that propagandizes its own population is something different from — and something more dangerous than — a government that communicates with foreign audiences through crafted messaging.

Smith-Mundt held for more than six decades. Then came the National Defense Authorization Act for Fiscal Year 2013, which included a provision — the Smith-Mundt Modernization Act of 2012 — that dramatically revised the original framework.

The modernization removed the prohibition on domestic dissemination of materials originally produced for foreign audiences. Under the revised law, materials produced by the State Department and Broadcasting Board of Governors for foreign audiences can, upon request, be made available to domestic audiences. Proponents argued this was a modest update — in the internet age, the firewall between domestic and foreign audiences was already porous; foreign propaganda materials were accessible to any American with a browser; the old law was a vestigial formality.

Critics argued something more consequential had happened. By removing the domestic dissemination prohibition, Congress had potentially opened the door to U.S. government entities producing content with dual targeting — nominally aimed at foreign audiences but designed and distributed in ways that reached domestic ones. This was not merely theoretical: in the years following the modernization, questions arose about how the Department of Defense's information operations, cyber command's social media activities, and various intelligence community communications were being handled under the revised legal framework.

The Smith-Mundt case study (Case Study 35-1) examines this history in detail. The short version for purposes of this chapter: Smith-Mundt represents one of the few direct statutory interventions in the United States specifically designed to regulate government propaganda. Its 2012 revision illustrates how regulatory frameworks can be weakened in ways that their architects would not have anticipated or endorsed.


35.3 Wartime Speech Law: Espionage Act to Present

Tariq had been waiting for this section. "This," he said, "is where the history actually gets instructive."

He was right. The story of wartime speech restrictions in the United States is, in significant part, a story of laws passed to protect democracy being used to suppress democratic dissent. Understanding this history is essential context for evaluating any contemporary regulatory proposal.

The Espionage Act (1917) and the Sedition Act (1918)

The Espionage Act was enacted in June 1917, two months after the United States entered World War I. Its most controversial provisions criminalized willfully making false statements with intent to interfere with military operations and causing or attempting to cause insubordination, disloyalty, or refusal of duty in the armed forces. The 1918 Sedition Act extended this to prohibit any "disloyal, profane, scurrilous, or abusive language" about the U.S. government, Constitution, military, or flag.

Eugene Debs, the Socialist Party presidential candidate who had received 6 percent of the national vote in 1912, was convicted under the Espionage Act in 1918 for a speech in which he praised three men who had been imprisoned for refusing to register for the draft. Debs said in his speech that he "had to be careful" about what he said because of the law — a comment that itself became evidence of intent. He was sentenced to ten years in prison. He ran for president from his jail cell in 1920, receiving nearly a million votes.

The broader pattern was not subtle. Between 1917 and 1919, the Espionage and Sedition Acts were used to prosecute socialists, anarchists, labor organizers, anti-war activists, and German-Americans who criticized the war. They were not used against German spies. As legal historian Geoffrey Stone has documented, the prosecutions were overwhelmingly directed against left-wing political speech rather than espionage or military interference in any operational sense.

The Espionage Act has never been repealed. It was used against Daniel Ellsberg (Pentagon Papers, 1971), against Thomas Drake (NSA whistleblower, 2010), and against Edward Snowden (whose indictment remains outstanding). It is the primary statute under which the Department of Justice has prosecuted national security leakers, and its application has been consistently controversial.

The Cold War and the McCarthy Period

The Smith Act of 1940 made it a crime to advocate the violent overthrow of the government. In Dennis v. United States (1951), the Supreme Court upheld the conviction of Communist Party leaders for conspiracy to advocate the violent overthrow of the government — even though no violent action had occurred or been planned in any specific sense. And beyond formal prosecution, the FBI's COINTELPRO program, launched in 1956 and continuing into the early 1970s, engaged in covert activities against civil rights leaders, anti-war activists, and socialist organizations that went far beyond legal process into active disruption, black propaganda, and extra-legal pressure.

"Every one of those people," Tariq said, "was told they were being protected from a genuine threat. The Communist Party was a genuine threat, in the eyes of many intelligent people. And every one of those prosecutions was a legitimate use of the law to suppress political speech that the government found inconvenient."

Post-9/11 and the Material Support Framework

The post-September 11 period produced the Patriot Act and a series of expansions to the material support statutes (18 U.S.C. § 2339A and § 2339B). The material support provisions have been used to prosecute individuals for providing advice and training to designated terrorist organizations. In Holder v. Humanitarian Law Project (2010), the Supreme Court upheld the statute as applied to plaintiffs who sought to provide expert advice on nonviolent dispute resolution to the Kurdistan Workers' Party (PKK), a designated terrorist organization. The Court held that even "coordinated" speech could constitute material support.

The Humanitarian Law Project decision is relevant to propaganda regulation because it established that the government can, in some circumstances, restrict speech that is coordinated with a designated foreign organization — not because of its content, but because of the coordination. Related reasoning about coordination with foreign actors has informed some of the legal theories surrounding the Special Counsel's indictment of the Internet Research Agency.

The Alvarez Case (2012): The Stolen Valor Act

United States v. Alvarez (2012) provides a more recent — and more directly relevant — data point on the limits of false speech regulation. Xavier Alvarez had falsely claimed, at a public meeting, to be a recipient of the Medal of Honor. He was prosecuted under the Stolen Valor Act, which made it a crime to falsely claim military awards.

The Supreme Court struck down the Stolen Valor Act as an unconstitutional restriction on free speech. While the Court did not produce a majority opinion, a plurality held that content-based restrictions on false statements of fact are presumptively unconstitutional and that the government bears the burden of showing that restricting the false speech directly advances a compelling interest. The plurality found the government had not met this burden — there were less restrictive means of protecting the integrity of military honors than criminalizing false claims about them.

The implications for disinformation law are significant. If criminalizing a straightforward, verifiable lie about personal credentials is constitutionally suspect, criminalizing more complex political disinformation — which often involves genuine opinion, selective emphasis, and contested interpretive frameworks mixed with false factual claims — is considerably harder. The doctrinal path from Alvarez to broad disinformation regulation is not obvious.


35.4 Defamation, False Light, and Disinformation Law

Civil law offers another potential pathway for addressing disinformation. Defamation — publishing false statements of fact that damage someone's reputation — is actionable in all U.S. jurisdictions. But the law of defamation as it applies to political speech has been shaped by a landmark Supreme Court decision that makes it nearly useless as a tool against most political disinformation.

New York Times v. Sullivan (1964)

New York Times v. Sullivan arose from a full-page advertisement in the Times placed by civil rights supporters, which contained several factual errors about police actions in Montgomery, Alabama. L.B. Sullivan, a Montgomery public safety commissioner, sued for defamation. The Alabama courts awarded him $500,000.

The Supreme Court reversed unanimously. Justice Brennan's opinion established the "actual malice" standard: a public official cannot recover for defamation relating to official conduct unless the statement was made "with knowledge that it was false or with reckless disregard of whether it was false or not." The Court extended this standard, in subsequent cases, to "public figures" generally — not just government officials.

The Sullivan actual malice standard reflects the same underlying philosophy as Brandenburg: protecting vigorous debate about public affairs requires some tolerance for false statements, because the fear of defamation liability will chill true speech. If political criticism could be suppressed whenever it contained an erroneous fact, political speech would be impoverished.

The practical effect on disinformation accountability is severe. To win a defamation suit against a political disinformation campaign, a plaintiff would need to prove not just that a statement was false and damaging, but that the defendant knew it was false or recklessly disregarded its falsity. For large-scale disinformation operations — which typically maintain plausible deniability, use anonymous sources, and can always claim genuine belief — this standard is almost impossible to meet.

The Debate About Sullivan's Future

In 2021, Justice Clarence Thomas, dissenting from the denial of certiorari in Berisha v. Lawson, called for the Supreme Court to reconsider Sullivan. Thomas argued that the actual malice standard has no basis in the text or original understanding of the First Amendment, that it was essentially invented by the Warren Court, and that it has left powerful media organizations effectively immune from accountability for false reporting.

Justice Neil Gorsuch, dissenting separately in the same case, raised similar concerns. The debate is genuine: Sullivan was decided in a media environment where the primary concern was powerful state governments using defamation law to suppress civil rights reporting. In the current environment, the more pressing problem may be that the actual malice standard protects not scrappy advocacy journalism but billion-dollar media operations and algorithmically amplified disinformation networks.

Whether the Court should revisit Sullivan — and what standard should replace it — is one of the most significant open questions in First Amendment law. For present purposes, the key point is that Sullivan's actual malice standard represents a major obstacle to using civil defamation law as a tool against political disinformation.

False Light

The "false light" privacy tort offers a civil alternative to defamation. While defamation focuses on reputational harm caused by false statements, false light focuses on a plaintiff being cast in a "false light" before the public in a way that would be objectionable to a reasonable person. False light claims can reach some conduct that defamation cannot — including highly misleading statements that are technically true.

False light is not recognized in every state, and the Supreme Court has extended First Amendment protections to it as well: Time, Inc. v. Hill (1967) applied the actual malice standard to false light claims involving matters of public interest. It is not a robust alternative regulatory tool, but it illustrates the point that civil law contains multiple overlapping frameworks that can be adapted for disinformation contexts.


35.5 Campaign Finance and Political Advertising Law

Sophia had been listening carefully. "Okay," she said. "Campaign finance law. This is the one I actually have to think about for my campaign. What are the rules?"

Political advertising — one of the most significant vehicles for propaganda in democratic societies — is regulated not primarily through speech law but through campaign finance law. This distinction matters: campaign finance regulations typically operate through disclosure requirements and spending limits rather than through content restrictions, and they are therefore more constitutionally durable.

Federal Election Commission Disclosure Requirements

Federal law requires that political advertisements expressly advocating the election or defeat of federal candidates include disclaimers identifying who paid for the advertisement. The standard "I'm [Name] and I approve this message" disclaimer is a product of the Bipartisan Campaign Reform Act (BCRA, 2002). These requirements have generally survived First Amendment challenge on the theory that disclosure — knowing who is speaking — does not restrict speech but rather informs voters.

Citizens United v. FEC (2010)

Citizens United v. Federal Election Commission (2010) is one of the most consequential — and most contested — Supreme Court decisions in recent American political history. The case involved a conservative nonprofit's desire to broadcast a film critical of Hillary Clinton during the 2008 primary season. The question before the Court was whether a longstanding prohibition on corporate spending in federal elections was constitutional.

The Court held, 5-4, that the prohibition was unconstitutional. The majority reasoned that corporations have First Amendment rights and that spending money on political speech is protected expression. Prohibiting corporate political spending restricts speech based on the identity of the speaker, which is impermissible under the First Amendment.

The effects on the political advertising landscape were immediate and substantial. Citizens United enabled the creation of "super PACs" — political action committees that can raise unlimited funds from corporations, unions, and individuals as long as they do not "coordinate" with official campaigns. In the 2020 election cycle, super PACs and other outside groups spent more than $3 billion on federal elections.

The "Dark Money" Problem

The coordination prohibition has proven highly porous in practice. More significantly, the disclosure requirements that Congress has maintained apply to direct expenditures but can be circumvented through the use of 501(c)(4) "social welfare" organizations, which are not required to disclose their donors. These organizations — colloquially known as "dark money" groups — can spend on political advertising without any public disclosure of where the money comes from.

"So you can run an ad saying I'm dangerous to children," Sophia said, "funded by my opponents' donors, and nobody has to say who paid for it?"

"If it's run through the right legal structure, at the right time, yes," Webb said. "The disclosure framework has significant gaps."

Digital Advertising: The Regulatory Gap

The FEC has struggled to apply political advertising disclosure requirements to digital platforms. While traditional television and radio political ads are subject to clear disclosure rules, the regulatory framework for digital political advertising — social media posts, targeted ads, algorithmic content promotion — is far less developed. The FEC's rulemaking on internet political advertising has been slow, contentious, and incomplete.

Several legislative proposals have attempted to close this gap, including the Honest Ads Act (first introduced 2017, repeatedly reintroduced), which would require digital platforms to maintain public databases of political advertising and apply broadcast-equivalent disclosure requirements to paid political content online. As of this writing, the Honest Ads Act has not been enacted.


35.6 Platform Liability: Section 230 and Its Discontents

Section 230 of the Communications Decency Act (1996) contains, in subsection (c)(1), what has been called the twenty-six words that created the internet: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

This provision, enacted as part of the same statute whose indecency provisions the Supreme Court struck down on First Amendment grounds (Reno v. ACLU, 1997), was designed to solve a specific legal problem. Courts had held that internet platforms that exercised any editorial discretion over content — removing some posts, flagging others — thereby became "publishers" of all content on their platforms, exposing themselves to defamation liability for everything their users posted. The perverse effect was to incentivize platforms not to moderate at all, since moderation triggered liability.

Section 230 solved this problem by immunizing platforms from liability for third-party content, regardless of whether the platform exercised editorial discretion. The section has a second clause, 230(c)(2), which provides immunity for "good faith" actions to restrict access to material the platform considers "objectionable." Together, these provisions created the legal foundation for platforms to (a) host massive quantities of user-generated content without liability for that content and (b) moderate content without losing their liability shield.

The Core Debate

The Section 230 debate has become one of the most contested areas of platform policy, with opposition coming simultaneously from left and right — albeit for different reasons.

The conservative critique holds that platforms use Section 230 immunity to censor conservative speech under the guise of moderation, while facing no accountability for doing so. The progressive critique holds that Section 230 immunity allows platforms to profit from algorithmically amplified disinformation, hate speech, and violence-inciting content without facing any liability for the harms their systems cause. Both critiques can be simultaneously valid because they describe different aspects of the same legal architecture.

For propaganda and disinformation purposes, the more significant concern is the second: that Section 230 insulates platforms not just from liability for content they did not create but from accountability for the affirmative choices their recommendation algorithms make in amplifying that content. A platform's algorithm actively choosing to serve inflammatory content to users because it maximizes engagement is qualitatively different from a platform passively hosting content that a third party posted. Whether Section 230 immunity covers algorithmic amplification decisions — as opposed to mere hosting — is a contested legal question that courts have not definitively resolved.

Legislative Reform Proposals

Several legislative proposals have sought to reform Section 230 without eliminating it entirely:

The EARN IT Act (Eliminating Abusive and Rampant Neglect of Interactive Technologies) would condition Section 230 immunity on platforms' adoption of best practices for combating child sexual abuse material. Critics argue it would also create pressure to break encryption and would effectively deputize the government in content moderation decisions.

The SAFE TECH Act would strip Section 230 immunity from paid content (advertising) and require platforms to implement some measures to prevent "known unlawful" content. Its scope and enforceability have been heavily debated.

No major Section 230 reform legislation has been enacted. The political coalition needed to pass reform faces the challenge that left and right want incompatible things from any reform — the left wants more moderation accountability, the right wants less censorship. Building a majority for either version has proven difficult.


35.7 The EU's Approach: Digital Services Act and Code of Practice

Ingrid Larsen's question at the start of this chapter pointed toward the most significant regulatory development in the global platform information environment since Section 230 itself: the European Union's Digital Services Act (DSA), which entered into force in 2022 and began applying to very large online platforms and search engines in August 2023.

"The DSA doesn't tell platforms what content to remove," Ingrid noted. "That was what I found interesting when I read it. It's about process and transparency, not about what's true or false."

This observation captures a key feature of the EU's regulatory philosophy: the DSA is, fundamentally, a procedural and transparency regulation rather than a content regulation. It does not attempt to define disinformation and mandate its removal. Instead, it creates obligations around how platforms manage risk and how they account for that management to regulators and the public.

What the DSA Requires

For "very large online platforms" (VLOPs) and "very large online search engines" (VLOSEs) — those with more than 45 million monthly active users in the EU — the DSA imposes a set of heightened obligations that have no parallel in U.S. law:

Systemic risk assessments: VLOPs must conduct annual assessments of the systemic risks their platforms pose, including risks to "civic discourse" and "electoral processes," risks arising from "intentional manipulation of the platform's service," and risks arising from "the dissemination of illegal content, and of any negative effects for the exercise of fundamental rights." These risk assessments must be documented and made available to the European Commission's Digital Services Coordinator upon request.

Risk mitigation measures: Platforms must implement reasonable mitigation measures in response to identified risks. The DSA does not specify what those measures must be — it leaves that to platform discretion — but it requires that the measures be documented, implemented, and reported on.

Transparency reporting: VLOPs must publish transparency reports on their content moderation activities at least every six months, including information on the human resources devoted to content moderation, the number of pieces of content removed, and the error rates of automated content moderation systems.

Recommender system transparency: Platforms must provide users with at least one option to receive recommendations not based on profiling. They must clearly explain how their recommender systems work. For VLOPs, the criteria underlying recommender systems must be disclosed to the Digital Services Coordinator upon request.

Advertising transparency: Platforms must maintain public repositories of all advertisements, including information about who paid for the ad, which audience it was targeted at, and what criteria were used for targeting.

The Code of Practice on Disinformation

Alongside the DSA, the EU has maintained a voluntary Code of Practice on Disinformation, first adopted in 2018 and significantly revised in 2022. The Code is a self-regulatory instrument: platforms that sign it commit to implementing specific measures to reduce the spread of disinformation, including demonetizing disinformation actors, improving political advertising transparency, supporting fact-checking, and conducting research access programs.

The Code's signatories have included Google, Meta, TikTok, Microsoft, and others; Twitter/X withdrew from the Code in 2023. The Code's enforceability is limited by its voluntary nature, but the DSA has given it new teeth by designating compliance with the Code as one pathway for demonstrating compliance with the DSA's disinformation risk mitigation obligations.

Why the EU Can Do What the U.S. Cannot

The EU's regulatory approach works, legally speaking, because it operates on the theory of market access rather than speech restriction. The EU is not telling platforms what speech to allow or prohibit; it is setting conditions for operating in the EU market. Any platform that wants access to 450 million European consumers must comply with the DSA's obligations. This is not a First Amendment problem for the simple reason that the EU is not subject to the First Amendment.

For U.S. platforms, the DSA exerts de facto global standard-setting pressure: complying with its transparency and risk assessment requirements for EU operations creates systems and capacities that can be applied globally. The regulatory lever is applied in Europe; the effects are felt in the global platform architecture.


35.8 Platform Self-Regulation: The Oversight Board and Its Limits

The debate about platform regulation has centered partly on whether platforms should regulate themselves or be regulated by governments. Meta's creation of the Oversight Board in 2020 was the most ambitious experiment in platform self-regulation in the history of the industry.

The Oversight Board is an independent body — launched with roughly twenty members and designed to grow to as many as forty, including former heads of government, human rights scholars, journalists, and legal experts from around the world — with the power to review and overturn individual Meta content moderation decisions. It is funded by an independent trust, theoretically insulating it from direct Meta control. Its decisions on individual content cases are binding on Meta.

"Theoretically," Tariq said, and let that word carry weight.

What the Oversight Board Can and Cannot Do

The Board has issued binding decisions reversing Meta's removal of content — including, controversially, a ruling that Meta's indefinite suspension of Donald Trump's account after January 6, 2021 was not consistent with Meta's own policies and that Meta needed to decide on a defined period of suspension or permanent removal. This case illustrated both the Board's genuine independence (it criticized Meta in terms the company clearly did not enjoy) and its structural limitations (the Board could rule on the procedural regularity of the decision but could not make the substantive policy choice about how to handle political leaders who incite violence).

The Board can review individual content decisions. It cannot review Meta's fundamental algorithmic and business model choices — the decisions about how the News Feed ranks content, how advertising is targeted, how the platform's recommendation systems work. These are the decisions that critics argue do the most to amplify propaganda and disinformation. They fall entirely outside the Oversight Board's jurisdiction.

The Board has been criticized by some as "accountability theater" — a sophisticated public relations exercise designed to deflect demands for external regulation by demonstrating that something like accountability exists. This critique has force. The Board's jurisdiction covers a tiny fraction of the content decisions Meta makes daily (Meta processes billions of posts; the Board has decided a few hundred cases). Its structural inability to address systemic issues — amplification, monetization, architectural choices — means it cannot address the most significant concerns.

At the same time, the Board has shown genuine independence in cases involving powerful governments. It has ruled against content removals sought by national governments, issued critical assessments of Meta's policies in authoritarian contexts, and maintained a record of transparency about its deliberations that is more robust than most corporate accountability mechanisms.

The honest assessment is that the Oversight Board is a genuine but extremely limited accountability mechanism, operating at the margins of the most significant problems, and that its creation has at least partially served to reduce political pressure for more comprehensive external regulation.


35.9 The German Model: NetzDG

Germany represents the most assertive regulatory approach to platform disinformation among major Western democracies, reflecting Germany's unique historical relationship with hateful propaganda.

The Netzwerkdurchsetzungsgesetz (NetzDG — Network Enforcement Act), enacted in 2017, requires that large social media platforms (more than 2 million registered users in Germany) remove "clearly illegal" content within 24 hours of receiving a complaint — or within seven days for cases requiring more complex evaluation. "Clearly illegal" is defined by reference to a catalogue of criminal offenses under German law, including incitement to hatred (Volksverhetzung), dissemination of propaganda of unconstitutional organizations, and Holocaust denial.

The sanctions for non-compliance are severe: fines of up to €50 million for systematic failure to comply with removal obligations. The threat of fines of this magnitude created strong incentives for platforms to develop robust German-market compliance operations.

What NetzDG Achieved

The German law produced measurable changes in platform behavior. Platforms invested in German-language content moderation capacity. Removal rates for flagged content increased significantly.

It also produced documented cases of over-removal — the deletion of content that was arguably legal, including political satire and journalism. The German satirical magazine Titanic had a parody account suspended; a feminist group had posts removed that were clearly legal commentary on sexism. Critics argued that the tight compliance timelines incentivized platforms to remove any content that might conceivably violate German law rather than conduct careful analysis — since failing to remove clearly illegal content created liability, but removing legal content faced no comparable sanction.

This asymmetric incentive structure — the "chilling effect" — is one of the central concerns with NetzDG-style regulation and a key reason why Tariq's skepticism about regulation deserves engagement rather than dismissal. When the cost of a false negative (failing to remove illegal content) is a massive fine and the cost of a false positive (removing legal content) is nothing, rational platforms will systematically over-remove. The losers from systematic over-removal are not evenly distributed: they tend to be minority voices, political dissidents, and speakers with less legal and reputational protection.
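
The incentive can be made concrete with a back-of-the-envelope expected-cost comparison. The sketch below uses purely illustrative numbers (a NetzDG-scale fine for keeping illegal content, no sanction for wrongful removal) and is not a model of any platform's actual decision process.

```python
# Illustrative expected-cost comparison for the asymmetric incentive described
# above. All numbers are hypothetical; the point is the asymmetry, not the values.

def expected_costs(p_illegal: float,
                   fine_if_kept_illegal: float = 50_000_000,  # NetzDG-scale fine
                   cost_of_wrongful_removal: float = 0.0):    # no comparable sanction
    """Return (expected cost of keeping, expected cost of removing) a flagged post."""
    cost_keep = p_illegal * fine_if_kept_illegal
    cost_remove = (1 - p_illegal) * cost_of_wrongful_removal
    return cost_keep, cost_remove

# Even when a reviewer thinks a post is almost certainly legal,
# removal is the cheaper choice under this cost structure.
for p in (0.001, 0.01, 0.1):
    keep, remove = expected_costs(p)
    print(f"P(illegal)={p:.3f}: keep ~{keep:,.0f}, remove ~{remove:,.0f}")
```

Under any cost structure this lopsided, removal dominates even for posts that are almost certainly legal, which is the over-removal dynamic the critics describe.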

Germany's Specific Legal Context

NetzDG makes more sense — both politically and legally — in the German context than it would in the American one. Germany's post-war constitutional order was explicitly designed to prevent the re-emergence of Nazi politics. The Basic Law contains provisions allowing the government to ban political parties that seek to undermine the constitutional order. Holocaust denial is a criminal offense. These provisions reflect a considered constitutional decision that some speech is so corrosive to democratic foundations that its prohibition is necessary to preserve democracy itself — what the German Constitutional Court has called "militant democracy" (streitbare Demokratie).

The American constitutional framework reflects a different judgment: that the government cannot be trusted to identify which speech undermines democracy, and that the cure for speech that threatens democracy is more democracy. Neither framework is obviously wrong; they reflect different historical experiences and different theories of how democracy fails.


35.10 International Law Frameworks

The cross-border nature of disinformation creates a fundamental challenge for any national regulatory framework: a platform headquartered in the United States, with servers in Ireland, can deliver content to users in Germany that was produced by operators in St. Petersburg. Which law applies? Whose court has jurisdiction? Who can compel removal?

The ICCPR Framework

The International Covenant on Civil and Political Rights, ratified by 173 states, provides the baseline international human rights framework for speech regulation. Article 19 protects freedom of expression. Article 20 requires the prohibition of war propaganda and incitement. The Human Rights Committee, in General Comment 34 (2011), has elaborated that Article 20's obligations apply to all forms of propaganda, including digital and social media, and that "advocacy of hatred" covers not just intentional incitement but foreseeable incitement.

The difficulty is enforcement. The Human Rights Committee can issue decisions finding that a state has violated the ICCPR, but it cannot impose sanctions. State compliance with adverse decisions is voluntary. For cross-border disinformation — particularly state-sponsored disinformation from states that have not ratified the ICCPR or that have entered reservations similar to the U.S. reservation on Article 20 — the international framework provides a normative standard but not an enforcement mechanism.

The Rabat Plan of Action

The Rabat Plan of Action (2012), the outcome of a series of expert workshops convened by the UN Office of the High Commissioner for Human Rights, established a six-part threshold test for determining when advocacy of hatred reaches the level of incitement that states are obligated to prohibit under Article 20. The six factors are: context, the status of the speaker, intent, content and form, the extent of dissemination, and likelihood (including imminence).

The Rabat threshold test is a more nuanced tool than the Brandenburg imminent lawless action standard, and it has been influential in international human rights discussions. But it operates at the level of norm-setting, not enforcement, and its application to algorithmically amplified digital disinformation has not been authoritatively established.

Jurisdictional Challenges

The EU's approach to cross-border platform regulation through the DSA represents the most aggressive attempt to establish effective jurisdiction over global platforms. The DSA's market access theory — comply or lose access to the EU market — has proven effective in generating platform compliance in ways that purely national jurisdictional claims have not.

The alternative model — represented by Russia and China — is internet fragmentation: the establishment of national-level internet infrastructure that allows governments to control what their citizens can access and what can reach their citizens. This "splinternet" trajectory is the most significant structural alternative to the current global platform architecture, and it raises questions that go beyond this chapter's scope but are essential context for any discussion of regulatory design.


35.11 The Accountability Gap: Who Regulates Disinformation When No Law Does?

The gap between where disinformation occurs and where legal authority operates is vast. In that gap, non-governmental organizations have established themselves as a de facto accountability layer — conducting research, publishing findings, applying reputational pressure, and in some cases directly influencing platform enforcement decisions.

Civil Society Organizations

The Center for Countering Digital Hate (CCDH) researches and publicizes accounts and networks responsible for disinformation and hate speech, particularly in health and political contexts. Its research on COVID-19 vaccine misinformation identified a relatively small number of "superspreader" accounts responsible for a disproportionate share of anti-vaccine content, contributing to platforms' decisions to remove or restrict those accounts.

NewsGuard develops ratings of news and information websites based on journalistic credibility standards, selling those ratings to advertisers, libraries, and browsers to enable filtering of low-credibility content. Its model attempts to create economic incentives for credible information without requiring government action.

EUvsDisinfo, operated by the European External Action Service, tracks and counters disinformation narratives attributed to the Russian government, publishing a public database of identified disinformation cases. It operates within an explicitly adversarial geopolitical frame — it is not politically neutral — but it has established one of the more robust public records of state-sponsored disinformation operations.

Stanford Internet Observatory and Related Academic Research

The Stanford Internet Observatory, the Harvard Shorenstein Center, and the Oxford Internet Institute have developed substantial research programs investigating platform information operations, coordinated inauthentic behavior, and the effectiveness of platform interventions. This research community serves an accountability function by producing independent empirical evidence about platform behavior that is not dependent on platform self-reporting.

"But here's the problem," Tariq said. "These are private organizations. They're not accountable to anyone in particular. They don't have to prove their case in court. They can have their own ideological biases. And the platforms use their research as cover — 'well, the Stanford Internet Observatory said we should remove that, so we did.' That's outsourcing content regulation to private entities with no democratic mandate."

This is a genuine concern and not merely a reflexive objection. The informal accountability ecosystem that has developed around platform content governance — researchers, advocacy organizations, civil society groups influencing platform decisions — exercises substantial power over the information environment without the procedural protections, transparency requirements, and accountability mechanisms we expect from formal regulatory bodies. The question of whether this is better or worse than formal government regulation is not one that admits an obvious answer.


35.12 Research Breakdown: The Empirical Record on Regulatory Effectiveness

The regulatory debates examined in this chapter are not merely theoretical; they are amenable to empirical investigation. A growing body of research examines what platform enforcement actions and regulatory interventions actually accomplish.

Platform Enforcement Consistency

The Stanford Internet Observatory and others have documented significant inconsistency in platform content enforcement. Studies find that enforcement of identical policies varies substantially by language (English-language content receives more consistent enforcement than non-English content), by the identity of the account posting (large accounts with verified status face less restrictive enforcement than small accounts), and by the political context (enforcement in countries where the platform has strong business relationships is less aggressive than enforcement in smaller markets).

The Deplatforming Literature

One of the more robust findings in the empirical literature concerns the effects of removing high-profile accounts ("deplatforming"). Studies examining the removal of Alex Jones from major platforms in 2018, the removal of right-wing figures from Twitter in the years following January 6, 2021, and similar cases generally find that deplatforming reduces the reach of the removed accounts substantially and that migration to alternative platforms does not fully restore the original reach. This finding challenges the claim that deplatforming never works; it appears to work in the specific sense of reducing amplification.

However, the same literature finds that deplatforming does not reduce the underlying sentiment — beliefs and grievances expressed by deplatformed accounts persist in the population. Removing amplification reduces spread but does not address the sources of the disinformation or the social conditions that made it effective.

The Chilling Effect Evidence

Danielle Keats Citron's work on "cyber civil rights" — and related empirical research — documents the specific ways in which disinformation and online harassment impose silence costs on marginalized voices: women, minorities, journalists, and political figures who reduce or eliminate their public presence in response to targeted harassment. This research reframes the disinformation problem: the threat to free expression is not always from government regulation but sometimes from private actors who use speech as a weapon to silence other speakers.

This reframing matters for regulatory design. If the goal is to maximize robust democratic discourse, the relevant question is not only "how do we prevent government from restricting speech?" but also "how do we prevent private actors from using speech to silence democratic voices?"


35.13 Primary Source Analysis: DSA Articles 34–35 as Regulatory Design

The EU Digital Services Act's Articles 34 and 35 on systemic risk assessment represent one of the most sophisticated attempts in regulatory history to address platform-level information harms without engaging in content regulation. Reading them carefully as a regulatory design document reveals both the theory of change embedded in the framework and its inherent limitations.

Article 34: Systemic Risk Assessment

Article 34 requires VLOPs to identify and analyze the "systemic risks" stemming from their platform's design, functioning, and use. The categories of risk to be assessed include: (a) the dissemination of illegal content; (b) negative effects on the exercise of fundamental rights; (c) negative effects on civic discourse, electoral processes, and public security; and (d) negative effects in relation to gender-based violence, the protection of public health and minors, and physical and mental well-being.

Crucially, the assessment must cover "the actual or foreseeable negative effects" — not just documented harms but reasonably anticipated ones. Platforms must assess "the design and functioning of their algorithmic systems, including recommender systems and advertising systems," making clear that the risk framework encompasses not just content but architecture.

Article 35: Mitigation Measures

Article 35 requires platforms to "put in place reasonable, proportionate, and effective mitigation measures, tailored to the specific systemic risks identified." The measures may include "adapting content moderation or recommender systems, their online interfaces or their terms and conditions."

The critical word is "reasonable." The DSA does not specify what mitigation measures are required. It does not mandate specific outcomes. It creates a procedural obligation — assess the risk, implement proportionate mitigation, document what you did — rather than a substantive outcome obligation.
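
One way to see the proceduralist character of the framework is to sketch the kind of record a compliance team might keep to satisfy it. The field names below are hypothetical illustrations, not the DSA's own schema; the point is that the substantive judgment (what counts as a proportionate mitigation) lives inside a documented, inspectable process.

```python
# Hypothetical sketch of the Articles 34-35 cycle: identify a systemic risk,
# record mitigation measures, keep documentation a regulator can request.
# Field names are illustrative, not drawn from the regulation itself.

from dataclasses import dataclass, field
from datetime import date

RISK_CATEGORIES = [
    "illegal_content",
    "fundamental_rights",
    "civic_discourse_elections_public_security",
    "gender_based_violence_public_health_minors_wellbeing",
]

@dataclass
class SystemicRiskAssessment:
    category: str                    # one of RISK_CATEGORIES (Article 34)
    description: str                 # actual or foreseeable negative effect
    systems_implicated: list[str]    # e.g. recommender or advertising systems
    mitigation_measures: list[str]   # "reasonable, proportionate and effective" (Article 35)
    assessed_on: date = field(default_factory=date.today)

    def documentation(self) -> dict:
        """The record made available to regulators on request."""
        return {
            "category": self.category,
            "description": self.description,
            "systems": self.systems_implicated,
            "mitigations": self.mitigation_measures,
            "assessed_on": self.assessed_on.isoformat(),
        }
```

Nothing in a record like this says whether the mitigation measures are adequate; that judgment remains with the platform, which is exactly the design trade-off discussed below.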

The Theory of Change

The theory of change embedded in Articles 34–35 is proceduralist: if platforms are required to systematically identify the harms their systems cause and document what they did about them, several things will happen. Platforms will have legal incentive to actually investigate their own harms, rather than maintaining motivated ignorance. Regulators will have access to documentation that enables oversight. Researchers, through the DSA's data access provisions, will be able to independently evaluate whether mitigation measures are working. Public accountability will follow from transparency.

What the framework leaves to platform discretion: essentially everything substantive. The platform decides what "reasonable" mitigation looks like. The platform conducts the risk assessment. The asymmetry of information between platforms and regulators remains enormous.

This is not a fatal critique — the DSA may be the best available regulatory design given constitutional and practical constraints — but it is an honest assessment of its architecture.


35.14 Debate Framework: Should Democratic Governments Regulate Disinformation?

The seminar debate was structured around three positions. Each student was assigned to defend one, then challenge it from another position.

Position A: The Anti-Regulation Position

The First Amendment framework provides the appropriate standard for democratic speech regulation — and that standard prohibits most government content restrictions on political speech. The argument is not merely formal; it is historical. The Espionage Act, the Smith Act, the McCarran Internal Security Act, COINTELPRO, and a long parade of other regulatory interventions purportedly aimed at harmful speech were in fact directed at democratic dissent. When governments acquire the power to label speech "disinformation," they acquire the power to suppress political opposition. The labels will always be available; the power will always be abused.

The structural solution to disinformation is not restriction but robust counterspeech — more transparency about platform algorithms, better media literacy education, more funding for independent journalism. These are speech-expanding solutions rather than speech-restricting ones.

Tariq was assigned this position and found it congenial: "The history is clear. Give any government the power to regulate 'harmful speech' and it will regulate the speech of whoever is politically inconvenient. That is not a hypothetical. That is what has happened every time, without exception."

Position B: The Targeted Regulation Position

Platform-scale disinformation poses a clear and present danger to democratic institutions that existing law cannot address. The First Amendment framework was designed for a world in which harmful speech was constrained by the natural limits of human communication — one speaker could not instantly reach 100 million people. Algorithmic amplification has fundamentally changed the information environment, and the constitutional framework has not caught up.

Targeted, narrowly drawn regulation of specific harmful practices — coordinated inauthentic behavior, microtargeted deceptive advertising, algorithmically amplified demonstrably false health claims — is both necessary and constitutionally permissible. The key is design: regulation that targets the coordination and infrastructure of disinformation, the deceptive advertising market, and the amplification architecture rather than specific viewpoints can be drafted consistently with First Amendment principles.

The evidence of harm is not abstract. The 2020 election and the January 6 aftermath, the COVID-19 infodemic, and documented foreign interference in democratic elections demonstrate that disinformation at scale has real effects on democratic functioning. Waiting for the perfect solution while the problem compounds is itself a policy choice with consequences.

Position C: The Structural Regulation Position

The most effective and most constitutionally durable regulatory approaches target the infrastructure of disinformation rather than its content. These include: (1) campaign finance transparency reforms that eliminate dark money political advertising; (2) algorithmic transparency requirements that allow independent researchers to evaluate amplification decisions; (3) data portability and interoperability requirements that reduce platform lock-in and enable competitive alternatives; (4) antitrust action against platform monopolies that eliminate competitive incentives for quality information curation; and (5) digital advertising market reforms that eliminate the economic incentives for disinformation production.

None of these proposals involves the government telling anyone what they can or cannot say. All of them address structural features of the information environment that enable disinformation to thrive. They are the regulatory equivalent of improving road safety by redesigning intersections rather than arresting bad drivers.

Ingrid found this position most compatible with her experience of the DSA: "The DSA is mostly a structural regulation. It doesn't say what content to remove. It says: tell us what risks you've identified and what you've done about them. That's a structural approach."


35.15 Action Checklist: Policy Advocacy Framework

Whether you are advocating for a specific regulatory reform, evaluating a policy proposal, or simply trying to be an informed citizen about these debates, the following framework helps structure analysis:

Identify the Legal Baseline
- What constitutional framework governs? (First Amendment for U.S. federal action; other frameworks for states, for non-U.S. jurisdictions, for private actors)
- What does existing law already regulate? (political advertising disclosure, defamation, incitement, campaign finance)
- What enforcement mechanisms currently exist? (FEC, FTC, state attorneys general, private right of action)

Evaluate the Problem Precisely
- What specific harm are you trying to address? (voter suppression disinformation, health misinformation, foreign interference, coordinated harassment)
- What is the causal mechanism? (algorithmic amplification, dark money advertising, bot networks, deceptive design)
- What evidence documents the harm? (published research, enforcement actions, documented case studies)

Assess Proposed Interventions
- Does the proposed intervention target content/viewpoint (constitutionally vulnerable) or conduct/infrastructure (more durable)?
- What are the foreseeable unintended consequences? (over-removal, chilling effects, enforcement against minority voices)
- Who has enforcement authority? (government agency, private right of action, international body)
- Who bears the compliance burden? (large platforms, small platforms, individual speakers)

Apply Historical Pattern Analysis
- Has a similar regulation been attempted before? What happened?
- Who controlled enforcement when the regulation was adopted? Who might control it under different political conditions?
- Does the regulation create tools that could be repurposed against the people it is supposed to protect?

Consider the Broader Context
- Does the proposed regulation address the root cause or the symptom?
- What non-regulatory interventions would address the same problem?
- What is the international dimension? (cross-border enforcement, regulatory arbitrage, authoritarian co-optation of regulatory language)


35.16 Inoculation Campaign: Policy Dimension — Progressive Project

This chapter's contribution to the Progressive Project builds the policy advocacy dimension of your inoculation campaign.

Assignment: Draft a Regulatory Proposal

Draft a brief (one substantial paragraph, approximately 200–300 words) policy proposal for one regulatory intervention that would improve the information environment your community operates in. Your proposal should clearly identify:

  1. The specific problem — Not "disinformation" in general, but a specific mechanism: dark money political advertising, algorithmic amplification of health misinformation, foreign-funded political influence operations, microtargeted voter suppression content, or another problem with a clear causal mechanism.

  2. The proposed intervention — What specific legal or regulatory action would you propose? A new disclosure requirement? A platform obligation? A funding mechanism for independent journalism? A research access mandate? Be specific about what the law or regulation would require, from whom, under what conditions.

  3. The legal/constitutional framework — Is your proposal content-based (constitutionally vulnerable under the First Amendment) or conduct-based (more durable)? What existing legal authority supports it? What constitutional objections would it face, and how would you respond?

  4. One foreseeable unintended consequence — Applying the historical pattern analysis from the Action Checklist, identify one genuine way your proposal could be misused or could produce results you did not intend. How would you design the proposal to minimize that risk?

Sophia's draft, shared with the seminar: "The specific problem I'm addressing is the gap in digital political advertising disclosure, which allows political opponents to run targeted advertisements against local candidates like me through dark money structures that require no public disclosure. My proposed intervention is a state-level digital political advertising transparency law requiring any paid political content targeting state and local elections to be disclosed in a public database, including the name of the paying entity, the targeting criteria used, and the total spend. The legal framework is consistent with First Amendment principles because disclosure requirements for political advertising have been upheld by the Supreme Court as serving the governmental interest in an informed electorate (McIntyre v. Ohio Elections Commission distinguished; Citizens United acknowledged the validity of disclosure requirements even while striking limits on independent expenditures). One foreseeable unintended consequence: state-level disclosure requirements may be easier to evade through creative use of federal campaign finance structures, and they could be enforced selectively by state officials who use compliance investigations to harass political opponents. To minimize this, the database should be fully public, the disclosure standard should be bright-line and ministerial (not subject to enforcement discretion), and there should be a private right of action allowing any citizen to enforce the requirement."
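To make the "bright-line and ministerial" design choice concrete, here is a minimal sketch of what a single record in the kind of public disclosure database Sophia describes could look like. The `AdDisclosure` class, the `is_compliant` check, and all field names and example values are hypothetical illustrations, not part of any existing statute, database, or platform API.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record structure for the public disclosure database described
# in Sophia's draft. All field names and rules are illustrative assumptions.
@dataclass
class AdDisclosure:
    paying_entity: str             # legal name of the entity that paid for the ad
    election: str                  # state or local election the ad targets
    targeting_criteria: list[str]  # platform targeting parameters used
    total_spend_usd: float         # total amount spent on the placement
    filing_date: date = field(default_factory=date.today)

def is_compliant(record: AdDisclosure) -> bool:
    """A 'ministerial' check: compliance turns only on whether the required
    fields are present and non-empty, leaving no room for enforcement
    discretion about the ad's content or viewpoint."""
    return bool(
        record.paying_entity.strip()
        and record.election.strip()
        and record.targeting_criteria
        and record.total_spend_usd > 0
    )

# Example (fictional): a fully filled-out record passes the check.
example = AdDisclosure(
    paying_entity="Citizens for Better Schools PAC",
    election="2025 Maple County School Board",
    targeting_criteria=["ZIP 49503", "ages 35-65", "parents of school-age children"],
    total_spend_usd=1250.00,
)
assert is_compliant(example)
```

The point of the sketch is the design choice, not the code: because the compliance test depends only on whether required fields are filled in, a hostile state official has little room to use "investigations" of ad content as a harassment tool, which is exactly the unintended consequence Sophia's draft tries to minimize.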


35.17 International Coordination: The Challenge and the Necessity

Every framework examined in this chapter operates within a jurisdiction. The First Amendment constrains what the U.S. government can require of American platforms. The Digital Services Act governs entities operating in the European Union. NetzDG applies within Germany. But disinformation does not operate within jurisdictions. A coordinated inauthentic behavior operation targeting a French election may originate on servers in one country, be funded through shell companies registered in another, and execute through platform accounts whose legal status is contested across half a dozen legal systems simultaneously. This structural mismatch — disinformation's borderlessness against law's territoriality — is the central regulatory challenge that no national framework, however well designed, can resolve on its own.

Three international coordination mechanisms have emerged to address this gap, with varying degrees of formalization and effectiveness.

The G7 Rapid Response Mechanism (RRM) was established in 2018 to coordinate detection and response to foreign state-sponsored disinformation threats targeting G7 democracies and their partners. The mechanism operates through a network of national contact points who share intelligence on identified disinformation campaigns, coordinate messaging responses, and develop joint analytical frameworks. Its promise lies in speed and interoperability among allied democracies; its limitations are equally apparent. The RRM covers only the G7 countries and select partners, is funded voluntarily, operates primarily at the government level rather than through binding enforcement, and focuses narrowly on foreign state-sponsored disinformation rather than the broader ecosystem of domestic disinformation that constitutes most of what citizens actually encounter.

EU-led global disinformation research networks represent a second, less formal coordination model. The European Digital Media Observatory (EDMO) and its national hubs fund and connect independent research organizations across EU member states and partner countries to build a shared evidence base on disinformation spread, platform architecture, and intervention effectiveness. The promise is a research infrastructure that crosses borders and shares methodology. The limitation is that shared research does not produce shared enforcement: EDMO can document that a cross-border disinformation network exists; it cannot compel any platform or government outside EU jurisdiction to act on that documentation.

Multilateral conventions on electoral integrity form the third category. The ICCPR's Article 20, requiring states to prohibit propaganda for war and incitement to hatred, is the broadest existing multilateral instrument — and, as noted earlier in this chapter, is one the United States has specifically reserved against. Various regional bodies, including the Council of Europe and the Organization of American States, have developed non-binding guidance on information integrity in elections. The promise of these frameworks is normative: they establish shared standards that can anchor domestic legislation and create baseline accountability expectations. Their limitation is that they are almost universally non-binding. No multilateral convention currently creates enforceable obligations on platforms to address election disinformation across jurisdictions.

Ingrid Larsen raised the point precisely: "Even within the EU — which has the most integrated legal space of any multi-country body in the world, with binding regulations that take direct effect across twenty-seven member states — cross-border disinformation coordination is still incomplete. The DSA applies to platforms, but it does not coordinate the responses of member state governments to each other's disinformation threats. When a network targets Poland and Germany simultaneously using slightly different messaging calibrated to each country, the Polish and German regulators are still operating in separate tracks. If we can't coordinate fully within the EU, the challenge of doing it globally is an order of magnitude harder."

The gap that Ingrid identifies is, in this sense, a specific instance of the accountability gap that has recurred throughout this chapter's analysis. National frameworks face constitutional constraints. Platform self-regulation is voluntary and reversible. The informal accountability ecosystem — researchers, civil society, NGOs — documents problems it cannot compel anyone to solve. And international mechanisms are, as yet, either too narrow in scope, too limited in membership, or too weak in enforcement authority to close the jurisdictional gap that disinformation exploits. Recognizing that gap is not an argument for despair; it is an argument for the specific work of designing better coordination mechanisms — a task that is simultaneously legal, diplomatic, technical, and political.


Chapter Summary

This chapter has examined the legal and regulatory frameworks available for responding to propaganda and disinformation in democratic societies. Several key conclusions emerge:

The constitutional constraints are real and historically earned. The First Amendment framework, like its equivalents in other strongly speech-protective democracies, reflects not merely abstract principle but a hard-learned historical lesson: governments that acquire the power to regulate "harmful speech" use that power against democratic dissent. Tariq's historical examples are accurate, and the pattern they describe is consistent. Any regulatory proposal must take these constraints seriously, not as a procedural obstacle but as substantive wisdom.

The information environment has changed in ways that strain existing frameworks. The Brandenburg and Sullivan doctrines were developed in a pre-digital, pre-algorithmic era. Algorithmic amplification of disinformation is qualitatively different from a speaker standing on a street corner. Section 230 immunity was designed to enable a nascent internet; it now immunizes platforms whose algorithmic choices shape the political beliefs of billions of people. Whether existing frameworks are adequate to current conditions is genuinely contested, but the question cannot be dismissed.

The EU and German approaches offer instructive alternatives — and instructive cautionary tales. The DSA's proceduralist approach is the most sophisticated attempt to date to impose accountability on platforms without engaging in government content regulation. NetzDG demonstrates both that government-imposed removal obligations can change platform behavior and that they create systematic over-removal incentives. Both experiments are ongoing; the empirical record will continue to develop.

The structural regulation approach offers the most constitutionally durable path for the U.S. context. Targeting the infrastructure of disinformation — dark money, algorithmic amplification, data brokers, advertising economics — rather than its content avoids the most serious First Amendment objections while addressing the mechanisms through which disinformation achieves scale. It is not a complete solution, but it is a constitutionally viable one.

The informal accountability ecosystem — researchers, civil society organizations, international bodies — fills a genuine gap but is not a substitute for formal accountability. Its legitimacy depends on its independence and transparency, both of which are fragile.

The closing exchange in the seminar was between Tariq and Ingrid.

"I maintain my position," Tariq said. "The history says: be very careful with any law that restricts speech, because you will not control who enforces it."

"I agree with the caution," Ingrid replied. "But in Sweden, in Germany, in the EU — we've made different decisions about where the risks lie. We've decided that some speech is so corrosive to democratic foundations that the risk of permitting it is greater than the risk of regulating it. You made a different decision. Both decisions have consequences."

"The difference," Webb said, "is that you've had different experiences with what happens when corrosive speech goes unregulated. And you've had different experiences with what happens when governments get the power to regulate it. History gave you different evidence."

The conversation, like many of the most important ones, ended not with resolution but with a clearer understanding of what is at stake.


Key Terms

Brandenburg v. Ohio (1969) — The Supreme Court decision establishing the "imminent lawless action" standard: government may not punish advocacy of force unless it is directed toward and likely to produce imminent lawless action.

Smith-Mundt Act (1948) — Legislation establishing the framework for U.S. overseas information operations while prohibiting the domestic dissemination of materials produced for foreign audiences; substantially revised in 2012.

Actual malice standard — The New York Times v. Sullivan (1964) requirement that a public official or figure must prove knowledge of falsity or reckless disregard for the truth to prevail in a defamation suit.

Section 230 — The provision of the Communications Decency Act (1996) that immunizes internet platforms from liability for third-party content and for good-faith moderation actions.

Digital Services Act (DSA) — EU regulation (2022) requiring very large platforms to conduct systemic risk assessments, implement proportionate mitigation measures, and report transparently on content moderation activities.

NetzDG — Germany's Network Enforcement Act (2017) requiring large social media platforms to remove "clearly illegal" content within 24 hours or face fines of up to €50 million.

Coordinated inauthentic behavior — Platform operations in which multiple accounts operate in coordination while concealing their relationship, typically to artificially amplify messaging or manufacture false impressions of grassroots support.

Dark money — Political spending by nonprofit organizations that are not required to disclose their donors, enabling anonymous political advertising.

Citizens United v. FEC (2010) — Supreme Court decision holding that the First Amendment prohibits the government from restricting independent political expenditures by corporations, enabling unlimited independent corporate spending in elections while leaving disclosure requirements intact.

ICCPR Article 20 — The provision of the International Covenant on Civil and Political Rights requiring states to prohibit propaganda for war and advocacy of hatred that constitutes incitement to discrimination, hostility, or violence — a provision the United States has reserved against.

Chilling effect — The deterrence of lawful speech or conduct by an overly broad or vaguely worded law, or by the existence of enforcement pressure that creates risk even for permissible activity.

Oversight Board — The independent body created by Meta to review and overturn individual content moderation decisions; it cannot review Meta's fundamental algorithmic or business model policies.

Rabat Plan of Action — A 2012 UN document establishing a six-factor threshold test for determining when advocacy of hatred rises to the level of incitement that states must prohibit under international law.


Cross-references: Chapter 6 (democracy and free speech; the wartime speech debate); Chapter 30 (authoritarian vs. democratic propaganda systems); Chapter 36 (Building Resilient Information Ecosystems)