Chapter 6: Propaganda and Democracy

The seminar had been going for forty minutes when Ingrid Larsen raised her hand.

She had arrived at Hartwell University three weeks earlier from Aarhus, Denmark, where she was completing a graduate degree in media regulation. Her English was precise and her observations were consistently the most uncomfortable ones in the room. Webb had noticed.

"I want to ask something that might be offensive," she said.

"Go ahead," Webb said.

"In Europe, we have laws that restrict propaganda. Germany prohibits Holocaust denial. France prohibits hate speech. The European Union requires content moderation of disinformation. You have none of this. You call it 'freedom.'" She paused. "I'm not saying we are correct and you are wrong. I'm saying I genuinely do not understand the American argument. Why does 'freedom' require allowing propaganda?"

The room was quiet. Tariq Hassan, who had been listening carefully, leaned forward. "Because the alternative is worse," he said.

"Which alternative?" Ingrid asked. "There are several."

And that, Webb said, was exactly the question they were going to spend the next ninety minutes on.


He stood and moved to the whiteboard. He wrote two words: DEMOCRACY and PROPAGANDA. Then he drew a double-headed arrow between them.

"Before we get into comparative law," Webb said, "I want to establish something. What Ingrid is describing — the European approach — isn't censorship in the colloquial sense. It's a legal and philosophical tradition that believes some speech is so corrosive to democratic society that restricting it is itself a democratic act. That's the argument. You may disagree with it. But let's understand it before we evaluate it."

Tariq was nodding slowly. "But who decides what's corrosive? That's always my question."

"That is always everyone's question," Ingrid said, without sarcasm. "And we have courts for that. You have courts for that too. The question is whether your courts have been asked to do this work."

Sophia Marin had been writing in her notebook. She looked up. "I grew up in a family where my mother and her parents disagreed about everything politically. My grandfather thinks the liberal media is propaganda. My mother thinks Fox News is propaganda. They're both describing the same phenomenon from opposite directions." She set down her pen. "Which is why I don't trust any government to referee it."

Webb pointed at her. "That's the American position, distilled. Hold onto it." He turned back to the board. "And Ingrid, what's the Danish position — distilled?"

She considered. "That we have to trust something. If we trust nothing — no institution, no referee, no standards — then the loudest and most funded voice wins. And in Europe, we have seen what that looks like. We have seen it very recently, in the 1930s and 1940s."

The reference to Nazi Germany landed in the room. Webb let it sit.

"That is exactly the argument," he said. "And it has force. The question we're going to spend this chapter on is: can a democracy protect itself from propaganda without becoming something other than a democracy in the process? And the honest answer — the historically honest, empirically grounded answer — is that we don't know. Different democratic societies have made different bets. Let's understand those bets."

He wrote three more words on the board: LIPPMANN. DEWEY. HABERMAS. Then, below those: FIRST AMENDMENT. BASIC LAW. DSA.

"We're going to move through all of these," he said. "By the end, you should understand not just what the arguments are but why smart, serious, morally serious people disagree. And why that disagreement is not going away."

Tariq leaned back in his chair. He was thinking about the countries he knew from his family's history — Lebanon, Syria, Egypt — countries where governments that claimed to be fighting misinformation had imprisoned journalists and shut down newspapers. The anti-propaganda apparatus, in those contexts, had always been aimed at the wrong target. "The problem," he said, half to himself, "is that the people who need the restriction are never the ones doing the restricting."

"Elaborate," Webb said.

Tariq sat up. "If Germany in the 1930s had had a robust anti-propaganda law, the Nazis would have used it against the Social Democrats before the Social Democrats could have used it against the Nazis. Because the Nazis got to power first. The law is only as good as the people who enforce it, and the people who enforce it are exactly the people you're worried about."

Ingrid turned to face him directly. "That is why we have independent courts. That is why we have the European Court of Human Rights as an external check. Your argument assumes that the institutions designed to enforce the restriction are corrupt. My argument is that we should build better institutions."

"Building better institutions takes time you might not have," Tariq said.

"Allowing propaganda to destroy democracy also takes time," Ingrid said. "Less time, as it turns out."

Sophia looked at the whiteboard. The names, the laws, the double-headed arrow. She felt the familiar sensation of a question getting bigger the longer she looked at it. That, she was learning, was what real questions felt like.

Webb watched the exchange. Both of them were right. Both of them were arguing from real history — Tariq from the history of anti-speech laws weaponized against the powerless, Ingrid from the history of unlimited speech weaponized to destroy the institutions that protected the powerless. He had been teaching this material for twenty years and the tension had not resolved. He did not expect it to resolve today.

"Let's go deeper," he said. "Let's understand where this disagreement comes from."


The Foundational Tension

Democracy depends on persuasion. There is no democracy without it. Citizens must be able to advocate for positions, argue for candidates, make cases to each other about how to govern a shared life. A democracy without persuasion is a contradiction in terms.

But persuasion, as the preceding five chapters have established, operates along a spectrum from information and evidence to emotional manipulation and cognitive exploitation. And propaganda — systematic communication that bypasses critical reasoning in service of a communicating party's interests — can destroy the very conditions that make democratic deliberation possible.

This is not merely a theoretical tension. It is the central dilemma of liberal democratic theory, and it has been articulated, argued over, and left unresolved by the most rigorous political philosophers of the past century. Chapter 6 is where we sit with that unresolved tension and understand why it is so hard to dissolve.

The tension is structural, not incidental. Democracy requires an informed citizenry capable of rational deliberation. Propaganda, by design, attacks the cognitive and informational conditions that make rational deliberation possible. These two things cannot coexist without conflict, and every democratic society must eventually decide how — and how much — to manage that conflict. The decisions that democratic societies have made about this question reveal as much about their histories, traumas, and deepest values as any other area of political life.


The Democratic Ideal: What It Requires

Liberal democratic theory, across its many variants, is built on a set of assumptions about citizens and how they make political decisions:

Rationality: Citizens are, to some sufficient degree, capable of evaluating evidence and arguments about political matters. They can distinguish good arguments from bad ones, accurate information from misinformation, reliable sources from unreliable ones.

Information: Citizens have access to sufficient, sufficiently accurate information about political matters to make meaningful choices. They know what their government is doing, what the alternatives are, and what the likely consequences of different choices would be.

Autonomy: Citizens' political preferences are genuinely their own — formed through their own evaluation of their own interests and values, not manufactured by systematic external manipulation.

Public sphere: There exists a shared space of political discourse in which citizens encounter and engage with arguments different from their own, deliberate, and reach collective decisions.

These assumptions are demanding. Political philosophers from Plato onward have questioned whether ordinary citizens can meet them. The propaganda problem is that systematic, well-resourced communication campaigns can deliberately undermine each of these conditions — producing citizens whose rationality is bypassed, whose information environment is distorted, whose preferences are manufactured, and whose public sphere is fragmented.

What is important to recognize is that each of these democratic prerequisites has a corresponding propaganda attack vector. If democracy requires rationality, propaganda exploits cognitive shortcuts and emotional triggers that circumvent rational processing. If democracy requires accurate information, propaganda introduces false equivalence, denialism, and manufactured uncertainty into the information environment. If democracy requires autonomous preference formation, propaganda uses micro-targeted advertising, social proof manipulation, and identity-based messaging to manufacture preferences that feel authentic but are externally engineered. And if democracy requires a shared public sphere, propaganda fragments that sphere into mutually reinforcing echo chambers where citizens never encounter — and thus cannot be challenged by — views different from those their information environment has pre-selected for them.

The implication is sobering: propaganda is not merely an annoyance to democracy. It is a systematic attack on democracy's operating conditions. Understanding this is prerequisite to understanding why the legal, regulatory, and philosophical debates about how to respond are so contentious and so important.


Lippmann vs. Dewey: The Founding Debate

Lippmann: The World Outside and the Pictures in Our Heads

Walter Lippmann (1889–1974) was among the most influential American journalists and political commentators of the twentieth century. Before he became the establishment pundit whose syndicated columns shaped opinion in Washington for decades, he was a young intellectual who had witnessed something that disturbed him profoundly: the Committee on Public Information.

Lippmann was involved, at the margins, in wartime information work during World War I. He had watched the CPI manufacture consent for a war that a large portion of the American public had initially opposed. He had watched ordinary citizens respond to propaganda with what looked, from the outside, like genuine enthusiasm. And he had come away from the experience not with the progressive's outrage at manipulation, but with something more unsettling: the recognition that it had worked, and that it had worked because of something true about how human minds process political information.

His 1922 book Public Opinion is one of the foundational texts of both political science and media studies. Its argument is deceptively simple: the world of political affairs is vast, complex, and mostly invisible to ordinary citizens. We do not experience most political reality directly. We experience representations of it — through newspapers, radio, later television, now social media — and those representations are inevitably partial, simplified, filtered through the interests and limitations of their producers, and interpreted through the cognitive habits and prejudices of their consumers. Lippmann called these mental representations "the pictures in our heads," and he argued that democratic politics is, in practice, a contest over those pictures rather than a rational deliberation over the facts themselves.

This is not, Lippmann insisted, a failure of democracy that can be fixed by better education or more honest journalism (though both would help at the margins). It is a structural feature of the relationship between human cognition and political scale. Small communities can govern themselves through direct experience and face-to-face deliberation. Large, complex, industrial societies cannot. The Great Society — Lippmann's term for modern mass civilization — had outgrown the cognitive infrastructure that democratic theory assumed.

His prescription in Public Opinion and its 1925 sequel The Phantom Public was technocratic: the function of ordinary citizens in mass democracy should be limited to hiring and firing political leaders, not to substantive policy deliberation. The details of policy — which required genuine expertise and access to reliable information — should be handled by trained specialists, an "intelligence bureau" of knowledgeable analysts who would gather accurate information and present it to decision-makers. Ordinary citizens would participate in democracy by choosing between competing teams of leaders, not by deliberating about the substantive content of policy.

This prescription has obvious problems, and Lippmann was aware of some of them. Expertise is never neutral. The intelligence bureau's assessments would inevitably reflect its analysts' values, interests, and blind spots. And concentrating political information in an expert class creates a power asymmetry that is difficult to square with democratic equality. But Lippmann's diagnosis — that the conditions for deliberative democracy were not being met and could not easily be created — has proven more durable than his technocratic cure.

Dewey: The Great Community

John Dewey (1859–1952) was the philosopher of American democratic optimism, and his 1927 response to Lippmann, The Public and Its Problems, is one of the most hopeful books in political philosophy — hopeful not in the sense of being naively upbeat, but in the sense of being deeply committed to the possibility of democratic self-governance even in the face of Lippmann's formidable critique.

Dewey agreed with Lippmann's diagnosis more than is usually acknowledged. The modern public was, he conceded, "bewildered" — unable to identify itself, to understand the forces that governed its collective life, to deliberate effectively about shared concerns. The Great Society had created communities of consequence — large-scale networks of interdependence in which actions in one place affected lives everywhere — without creating communities of communication, shared understanding, and deliberative engagement.

But Dewey's response was not to reduce the scope of democratic participation. It was to ask: what conditions would have to be in place for genuine democratic deliberation to occur? And then to argue that creating those conditions was the democratic project, not evidence of democracy's failure.

The conditions Dewey specified were several. First: education oriented not merely toward information transfer but toward the development of habits of inquiry, the capacity for evidence-based reasoning, the disposition toward reflective rather than reactive political judgment. Second: a press that served community understanding rather than commercial entertainment or elite interest — journalism committed to helping citizens understand their shared circumstances rather than merely attracting their attention. Third: the recovery and creation of what Dewey called "local community life" — smaller-scale spaces of face-to-face interaction in which democratic habits could be practiced, where the abstract citizen of liberal theory could become the concrete, embedded, deliberating neighbor of democratic reality.

Dewey's prescription was, in one sense, harder than Lippmann's. It did not reduce the demands placed on ordinary citizens; it tried to create the conditions under which those demands could be met. It did not accept that modern scale made genuine democratic deliberation impossible; it asked what would have to be rebuilt for it to become possible again. These are, in retrospect, not questions that any particular policy program could fully answer, which may be why Dewey's optimism has remained inspiring without becoming practically decisive.

Contemporary Descendants: The Platform Governance Debate

The Lippmann-Dewey debate has never ended. It has merely changed its vocabulary. In contemporary debates about social media platform governance, algorithmic amplification, and disinformation regulation, recognizable descendants of both positions argue over the same fundamental questions.

The Lippmann side appears in arguments for expert intermediaries: fact-checkers, content moderation systems, algorithmic downranking of disputed content, platform policies that defer to specialist judgment about what constitutes misinformation. The scholar Cass Sunstein, in works like #Republic (2017) and Liars: Falsehoods and Free Speech in an Age of Deception (2021), argues that the information environment has become so polluted by misinformation and outrage-optimized content that expert curation and even legal intervention are necessary to restore the epistemic conditions for democracy. Kate Starbird's research on crisis misinformation at the University of Washington documents the systematic ways in which algorithmically amplified false narratives overwhelm accurate information in exactly the pattern Lippmann described — not because citizens are stupid, but because the information environment is structured against them.

The Dewey side appears in arguments for structural reform rather than content restriction: media literacy education, platform transparency requirements, public investment in local journalism, community-based fact-checking projects. danah boyd's work on media literacy advocates for "not just fact-checking but the capacity to reason about the information ecosystem itself." The News Literacy Project and similar organizations operate explicitly in the Deweyan tradition: rather than curating information for citizens, they try to build citizens' capacity to curate for themselves. The argument is that no expert system — however well-intentioned — can be trusted to make content judgments for a diverse democratic public without both epistemic failures (experts are wrong about some things) and political failures (expert judgments will be weaponized by those who control the expert institutions).

What is striking about this contemporary debate is how faithfully it reproduces the 1920s original. Lippmann and Dewey would recognize their own positions in the arguments being made today about content moderation policies, algorithmic transparency, and the role of technology companies in political discourse. The questions they asked are still the right questions. The answers remain, as they always were, genuinely contested.


Habermas and the Public Sphere

The Original Concept

German philosopher Jürgen Habermas introduced the concept of the "public sphere" in 1962 (Strukturwandel der Öffentlichkeit, translated as The Structural Transformation of the Public Sphere) as a historical and normative ideal — a social space in which citizens could engage in rational-critical discourse about matters of common concern, free from both state coercion and private commercial interest.

Habermas's historical argument was that this public sphere — embodied in the eighteenth-century coffee houses, literary salons, and independent press of bourgeois European society — had existed as a genuine institution before being eroded by the twentieth century's mass commercial media, which transformed citizens from participants in public discourse into audiences for entertainment and advertising.

His normative argument is that democratic legitimacy depends on outcomes that emerge from deliberation in something like a genuine public sphere: where arguments are evaluated on their merits, where all affected parties can participate, where the force of the better argument (rather than the power of the better-funded communicator) determines outcomes.

The propaganda challenge to Habermas: Propaganda — whether by states, corporations, or organized political movements — systematically violates the conditions for legitimate public sphere discourse. It introduces arguments that do not represent genuine positions of affected parties (astroturfing). It distorts the quality of argument with manufactured emotional intensity. It restricts who can effectively participate in discourse by saturating the information environment with targeted messaging. It substitutes the force of the better-funded communicator for the force of the better argument.

Contemporary scholars have argued that the degradation of the public sphere by commercial media, partisan echo chambers, and digital influence operations represents a crisis of democratic legitimacy as much as a practical problem of information quality.

Habermas's Self-Critique and Later Revisions

What is less often discussed — and what reveals Habermas's intellectual seriousness — is that he substantially revised his own theory in response to critics. In his later work, including Between Facts and Norms (1992) and numerous essays responding to the feminist and postcolonial critiques of the 1980s and 1990s, Habermas acknowledged significant limitations in his original formulation.

The original public sphere concept, critics pointed out, was deeply exclusionary in its historical form. The bourgeois public sphere that Habermas valorized was, in practice, restricted to propertied men. Women were systematically excluded. Working-class people were excluded. Colonial subjects were excluded. The norms of "rational-critical discourse" that Habermas celebrated as universal were, in historical practice, the norms of a particular social stratum — norms that encoded class, gender, and racial exclusions as epistemic standards. Habermas acknowledged this critique. His later formulations emphasized that a legitimate public sphere required the inclusion of all affected parties, not merely those with the social capital to participate in bourgeois deliberation.

He also revised his treatment of what he called the "lifeworld" — the pre-political background of shared understandings, cultural practices, and social relationships within which political discourse is embedded. In his earlier work, the public sphere floated somewhat free of these social conditions. In his later work, Habermas emphasized that the conditions for genuine public sphere deliberation — the habits, norms, and institutional supports that make rational-critical discourse possible — had to be actively maintained and were genuinely fragile.

Fraser's Counterpublics

The most influential critique of Habermas came from political theorist Nancy Fraser, whose 1990 essay "Rethinking the Public Sphere" remains one of the most-cited pieces in the field. Fraser made several arguments, but the most generative for subsequent scholarship was her concept of "subaltern counterpublics."

Fraser argued that the ideal of a single, unified public sphere — in which all citizens deliberate together on equal terms — was both historically unrealized and normatively problematic. In practice, subordinated social groups — women, working-class people, racial minorities — had never been able to participate in the dominant public sphere on equal terms. What they had done instead was form alternative publics: separate discursive arenas in which members of marginalized groups could articulate their interests, develop their own interpretations of their social situation, and formulate responses to their subordination before bringing those positions into wider public contestation.

Fraser called these alternative arenas "subaltern counterpublics." Examples include the nineteenth-century Black press, women's suffrage organizations, labor union newspapers, and the LGBTQ press before and during the AIDS crisis. These were not failures of public sphere integration. They were, Fraser argued, necessary and valuable features of a genuinely democratic information ecosystem: spaces in which subordinated groups could develop political voice that was not immediately subject to the dismissal, ridicule, or condescension they would encounter in the dominant public sphere.

This concept has important implications for how we understand propaganda's relationship to democracy. Propaganda does not operate in a unified public sphere; it operates in a differentiated information ecosystem in which different communities have different levels of access, different levels of trust in different information sources, and different vulnerabilities to different forms of manipulation. Understanding propaganda's democratic effects requires understanding not just "the public sphere" in the abstract but the specific counterpublics — the communities, media ecosystems, and discursive spaces — that it targets.

It also complicates the speech restriction debate. Laws designed to protect the dominant public sphere from corrosive propaganda may, in practice, restrict the speech of counterpublics whose discourse looks, to mainstream institutions, like exactly the kind of "extreme" or "inflammatory" speech the law is designed to control. The history of anti-speech legislation includes many examples of laws ostensibly directed at extremist movements being applied against labor organizers, civil rights activists, and anti-war protesters. Fraser's concept of counterpublics helps explain why this is not merely coincidental overreach but a structural feature of how speech restriction operates in practice.

Digital Media: Realization and Distortion

When social media emerged in the mid-2000s, many scholars read it through a Habermasian lens and found reason for optimism. For the first time in history, ordinary citizens had the technical means to participate in public discourse on roughly equal terms with established media organizations. The barrier to entry for publishing was effectively zero. Anyone with internet access could communicate to a global audience. The Arab Spring seemed to confirm this reading: social media as the realization of the public sphere's democratic potential.

The optimism did not last. By the mid-2010s, it was apparent that digital media had realized some features of the Habermasian public sphere while profoundly distorting others. It had indeed lowered barriers to participation. It had also created the infrastructure for influence operations at unprecedented scale, algorithmic amplification of outrage and division, filter bubbles that reproduced the echo chamber effects Habermas's bourgeois public sphere had always had, and the commodification of political discourse in new and more granular ways.

The German media scholar Bernhard Pörksen has argued that digital media created what he calls a "digital public sphere" that simultaneously expanded participation and fragmented the shared factual basis on which rational-critical discourse depends. The expansion and the fragmentation are not separate effects; they are products of the same architectural features. Platforms optimized for engagement will amplify whatever generates engagement, and outrage, tribalism, and identity-confirming information generate more engagement than careful, evidence-based argument. The algorithmic public sphere is, in Habermasian terms, systematically biased against the conditions for legitimate democratic deliberation.


Comparative Speech Law: The EU vs. the US

The American Tradition: From Holmes to Brandenburg

To understand the American approach to speech regulation, it is necessary to begin not with the First Amendment's text — which is brief and general — but with the judicial interpretations that have given it its distinctive shape.

The foundational moment is not Brandenburg v. Ohio (1969) but the dissents that preceded it. In Schenck v. United States (1919), the Supreme Court upheld the conviction of Charles Schenck, a Socialist Party official, for distributing leaflets opposing the draft. Justice Oliver Wendell Holmes wrote the majority opinion, introducing the "clear and present danger" test and the famous analogy of falsely shouting fire in a theater. The government, Holmes wrote, could restrict speech that created a clear and present danger of substantive evils that Congress had the power to prevent.

Months later, in Abrams v. United States (1919), Holmes — apparently having reconsidered — wrote a dissent (joined by Justice Louis Brandeis) that articulated what would become the philosophical foundation of American First Amendment absolutism. The "ultimate good desired," Holmes wrote, "is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market." This is the marketplace of ideas doctrine in its canonical form. And its implication is that government restriction of speech is epistemically dangerous: who is the government to decide which ideas are true?

Brandeis's concurrence in Whitney v. California (1927) — a concurrence that reads like a dissent — extended the argument. The framers of the Constitution, Brandeis wrote, believed "that the greatest menace to freedom is an inert people; that public discussion is a political duty; and that this should be a fundamental principle of the American government." Democracy, in this view, is not merely compatible with robust speech freedom — it requires it. Restricting speech, even harmful speech, is a fundamentally anti-democratic act because it presupposes that government can make better political judgments than citizens engaging in open deliberation.

Brandenburg v. Ohio (1969) operationalized these principles. The Court held that the government cannot punish advocacy of illegal action unless that advocacy is "directed to inciting or producing imminent lawless action and is likely to incite or produce such action." This is an extremely high bar. Most propaganda — even propaganda that systematically distorts political reality, manufactures false consensus, and undermines democratic deliberation — does not come close to meeting it. The Brandenburg standard reflects the philosophical commitment of Holmes and Brandeis in legal doctrine: absent an immediate, concrete threat of violence, government restriction of political speech is presumptively unconstitutional.

The philosophical architecture of this tradition rests on several connected premises. First, epistemic humility: governments can be wrong about what speech is harmful, and history shows they have been spectacularly wrong. Second, institutional distrust: the power to restrict harmful speech will inevitably be abused by whoever controls the restriction mechanism. Third, democratic priority: in a democracy, citizens — not governments — determine the content of public discourse. Any government interference with that process is, at bottom, anti-democratic.

The German Tradition: Basic Law and Historical Memory

Germany's approach to speech and propaganda is incomprehensible without its historical context, and understanding that context is not a rhetorical move — it is analytically necessary.

The Weimar Republic had an expansive tradition of free speech and political contestation. The Nazi movement used that freedom to organize, propagandize, and ultimately seize power — at which point it abolished the freedoms it had exploited. The architects of Germany's postwar Grundgesetz (Basic Law, 1949) concluded from this experience that unlimited speech freedom was not self-protecting. A democracy that allowed its own enemies to organize freely, deploy propaganda without restriction, and use democratic processes to destroy democracy was not a robust democracy — it was a system with a fatal vulnerability.

The Basic Law's Article 18 provides that whoever abuses fundamental rights — including freedom of speech — to combat the free democratic basic order forfeits those rights. Article 21 allows for the prohibition of parties that seek to undermine or abolish the free democratic basic order. The Bundesverfassungsgericht (Federal Constitutional Court) has interpreted these provisions to permit restrictions on speech that denies the Holocaust, glorifies Nazi ideology, or incites hatred against identifiable groups — restrictions that would be clearly unconstitutional under the First Amendment.

The German framework is sometimes described as "militant democracy" (streitbare Demokratie or wehrhafte Demokratie): a democracy that actively defends itself against its enemies rather than extending maximum freedom to those who would destroy it. The key philosophical insight is that rights are not absolute — they exist in a social and political context, and a right exercised to destroy the conditions for rights is self-defeating. A democracy that allows propaganda designed to undermine democracy is not honoring freedom; it is permitting its own destruction.

The German model has been criticized, including from within Germany, for its potential for overreach; the same legal framework used to prohibit neo-Nazi propaganda has at times been applied in ways that critics considered politically motivated. But the Federal Constitutional Court's record of independent, rigorous adjudication is generally cited as evidence that political weaponization of speech restrictions, while a genuine concern, is not inevitable under well-designed institutional arrangements.

France and the Continental Framework

France has a similarly restrictive approach to hate speech and propaganda, rooted in somewhat different historical experiences. The Loi Pleven (1972) criminalized incitement to racial discrimination, hatred, or violence. The Loi Gayssot (1990) made Holocaust denial a criminal offense. These laws were controversial even within France — some within the French free speech tradition argued that restricting even hateful speech was philosophically untenable. But French courts and the European Court of Human Rights have generally upheld these restrictions.

Article 10 of the European Convention on Human Rights guarantees freedom of expression but explicitly permits restrictions "as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of the reputation or rights of others." The phrase "necessary in a democratic society" is doing enormous philosophical and legal work: it permits a proportionality analysis in which the harms of speech restriction are weighed against the harms of the speech itself. This is fundamentally different from the American framework, in which political speech is presumptively protected regardless of its harms.

The European Court of Human Rights has, over decades of case law, developed a nuanced body of doctrine distinguishing between speech that contributes to democratic discourse (broadly protected) and speech that is designed to destroy democracy or degrade the dignity of identifiable groups (subject to restriction). This distinction is not always easy to apply — the line between vigorous political argument and dehumanizing propaganda can be blurry — but the Court's record shows that it can be drawn with sufficient consistency to give the framework legal coherence.

The Digital Services Act

The European Union's Digital Services Act (DSA, 2022) represents the most ambitious recent attempt to apply European speech law principles to the digital information environment. The DSA's approach is not to require platforms to make specific content decisions but to require transparency about how they make decisions, to mandate risk assessments for their systemic effects on democratic processes, and to create regulatory oversight of the largest platforms by the European Commission.

Large platforms designated as "Very Large Online Platforms" (VLOPs) must assess and mitigate systemic risks arising from their services — including risks to "civic discourse or electoral processes." They must provide algorithmic transparency to researchers and regulators, submit to independent audits, and implement measures to reduce the amplification of content that creates systemic democratic harms. Platforms that fail to comply face fines of up to 6 percent of global annual revenue.

The DSA is not a speech restriction in the traditional sense; it is an institutional design requirement. It says: if you are going to be the dominant public sphere, you must accept accountability obligations commensurate with that role. This approach reflects the distinctly European philosophical tradition: trust institutions to make judgments about the public interest, create accountability mechanisms, accept that there are no perfect answers, but insist on democratic oversight of systems with democratic consequences.

Why the Traditions Diverge — and What It Means

The American and European traditions diverge not because Americans are naive about propaganda's harms or Europeans are indifferent to free expression's value. They diverge because they reflect different historical experiences and different assessments of the primary threats to democratic life.

The American tradition, shaped by colonial resistance to British censorship and institutionalized in the Bill of Rights, identifies government as the primary threat to democratic freedom. The marketplace of ideas is the alternative to government control of information; free speech absolutism is the institutional commitment to keeping the marketplace open. From this perspective, the European restrictions — however well-intentioned — are more dangerous than the speech they restrict, because they hand government a tool that has historically been used against democracy's friends rather than its enemies.

The European tradition, shaped by the experience of fascism, identifies propaganda and the erosion of democratic norms as threats that can be as dangerous as government censorship. From this perspective, the American refusal to restrict demonstrably harmful speech is not principled freedom; it is a failure of self-protection that leaves democracy vulnerable to exactly the kind of organized propaganda campaigns that destroyed the Weimar Republic.

Neither tradition has definitively "won." Both have produced stable democracies with genuine speech freedom. The United States has produced McCarthyism and the PATRIOT Act alongside the Brandenburg doctrine. Germany has produced robust democratic pluralism alongside occasional overreach in applying its speech restrictions. The debate between the traditions is not a debate between freedom and repression; it is a debate about how democratic societies should manage the tension between the speech freedoms democracy requires and the democratic conditions that free speech can undermine.


Wartime Information Management in Democracies

The Pattern Established

Even in liberal democracies with strong free speech traditions, governments routinely restrict information during wartime — through censorship, propaganda, and legal suppression of dissent. This pattern is not an aberration; it is among the most consistent findings of the historical record. And it raises uncomfortable questions about the relationship between democratic values and their practical operation under conditions of perceived emergency.

The CPI's wartime operations (Chapter 1) included not just the promotion of the war effort but the active suppression of dissent. The Espionage Act of 1917 and the Sedition Act of 1918 were used to prosecute journalists, organizers, and activists who opposed the war. These prosecutions were not marginal. They reshaped the American left for a generation and established precedents for government speech restriction that would be invoked repeatedly through the twentieth century.

The Espionage Act Prosecutions

The Espionage Act prosecutions of 1917–1920 are worth examining in detail, because they illustrate both the power of wartime propaganda and the institutional willingness to suppress dissent under its cover.

Eugene Debs was the most famous case. Debs, the Socialist Party's five-time presidential candidate, gave a speech in Canton, Ohio, in June 1918 in which he praised imprisoned antiwar activists and told his audience, "You need to know that you are fit for something better than slavery and cannon fodder." He was arrested under the Espionage Act, convicted, and sentenced to ten years in federal prison. The Supreme Court upheld his conviction unanimously in Debs v. United States (1919), with Justice Holmes writing that Debs's speech had a "natural tendency and reasonably probable effect" of obstructing military recruiting. Debs ran for president in 1920 from the Atlanta Federal Penitentiary as Convict No. 9653 and received nearly a million votes, a fact that says something important about both the limits of speech suppression and the breadth of political opposition to the war.

Kate O'Hare was a Socialist organizer convicted for an antiwar speech in North Dakota in 1917. Her speech argued that American women were being treated as "brood sows" producing cannon fodder for the war machine. She was convicted under the Espionage Act and served fourteen months before President Wilson commuted her sentence in 1920. O'Hare's case illustrated how broadly the Act was being applied: no specific incitement to illegal action was required; criticism of the war and its human costs was itself treated as criminal.

The Milwaukee Leader, a Socialist newspaper, was barred from the U.S. mail in 1917 — effectively preventing its distribution — for publishing antiwar commentary. Its editor, Victor Berger, was convicted under the Espionage Act and sentenced to twenty years in prison (later overturned on appeal). The case illustrated how wartime speech suppression could operate through administrative mechanisms — postal exclusion — as effectively as through criminal prosecution, and how it disproportionately targeted voices with limited access to alternative distribution channels.

These prosecutions were not conducted in spite of democratic values. They were conducted, in the minds of their architects, in defense of democratic values: democracy was under external threat, and internal dissent was argued to be materially helpful to the enemy. This is the wartime speech suppression logic in its clearest form — and it is a logic that has proven remarkably durable.

Schenck and Its Long Repudiation

Justice Holmes's opinion for a unanimous Court in Schenck v. United States (1919) established the "clear and present danger" test as the governing standard for wartime speech restriction. The "fire in a crowded theater" analogy became perhaps the most-cited and most misunderstood passage in First Amendment jurisprudence. What Holmes actually wrote was that free speech "would not protect a man in falsely shouting fire in a theatre and causing a panic." The analogy was narrow: a direct, immediate, physical emergency. But in subsequent application, it became a general justification for restricting any speech that might, through any chain of causation, threaten government interests.

Zechariah Chafee, the Harvard law professor who became the leading scholarly critic of the Espionage Act prosecutions, pointed out that Holmes's analogy obscured the crucial distinction between speech that directly caused immediate harm and speech that might, through a long chain of influence and persuasion, affect future political choices. Distributing leaflets opposing the draft was not shouting fire in a theater; it was contributing to a public debate about policy. The "clear and present danger" test, as applied in Schenck and the cases that followed it, was clear and present danger in name only.

Holmes himself seemed to recognize this when he dissented in Abrams months later, beginning the process of building a more protective First Amendment doctrine. But the full repudiation took fifty years. It was only in Brandenburg v. Ohio (1969) that the Court finally articulated a standard — imminence, directness, likelihood — that gave genuine protection to political speech that fell short of direct incitement. The long arc from Schenck to Brandenburg is a story of the American legal tradition slowly working out the implications of its own commitment to speech freedom, learning from the abuses of the intervening decades.

McCarthyism and the Pattern of Cold War Suppression

The wartime speech suppression pattern did not end with World War I. The Cold War produced its own version: the House Un-American Activities Committee, the Smith Act prosecutions of Communist Party officials, the loyalty oath programs that swept through government, universities, and the entertainment industry. Senator Joseph McCarthy's campaign of 1950–1954 destroyed careers, drove people from public life, and suppressed legitimate political dissent — all in the name of protecting democracy from communist subversion.

What is analytically important about McCarthyism is that it operated primarily through propaganda rather than through legal restriction. McCarthy's power came not from the Smith Act prosecutions but from his ability to use the information environment — televised hearings, newspaper coverage, the implicit threat of public accusation — to create a climate in which mere association with leftist ideas was politically dangerous. The suppression of dissent was achieved through what we would now call an influence operation, using the social mechanism of reputational destruction rather than the legal mechanism of criminal prosecution.

This pattern — using propaganda to suppress the conditions for democratic deliberation without technically restricting speech through law — is important for understanding the contemporary disinformation debate. The most effective attacks on democratic deliberation often operate not through legal restriction but through information saturation, reputational destruction, and the manufacturing of social costs for heterodox expression. McCarthy demonstrated that you do not need to outlaw opposition speech; you only need to make speaking it socially and professionally catastrophic. The lesson was not lost on subsequent operators of influence campaigns.

The Enduring Pattern

This history raises uncomfortable questions. Democratic governments that engage in wartime propaganda — managing information to maintain civilian morale and suppress opposition — are doing something that is, by the working definition, propaganda. They are communicating intentionally, targeting the emotional register of their audience, serving the government's interest, and strategically omitting information (military setbacks, casualty counts, civilian casualties) that would undermine support.

Does democratic legitimacy (the government's elected status, the genuine external threat it is managing) change the moral evaluation? This is a genuinely contested question, and the course's Debate Frameworks return to it in Chapter 30 (Authoritarian vs. Democratic Propaganda). What can be said here is that the historical record strongly suggests a structural pattern: democratic governments under stress regularly invoke the language of democratic defense to justify speech suppression that they would not countenance in peacetime. The Espionage Act prosecutions, COINTELPRO, the post-9/11 surveillance expansion: each involved genuine security concerns, and each also involved genuine overreach against legitimate political dissent. The pattern is not coincidental. It reflects a structural tension in democratic governance that no set of constitutional provisions has fully resolved. And it makes the abstract argument for trusting democratic governments to regulate political speech considerably harder to sustain in light of historical experience.


Democratic Backsliding: Three Case Studies

The Analytical Framework: Competitive Authoritarianism

Political scientists Steven Levitsky and Lucan Way coined the concept of "competitive authoritarianism" in their 2002 article and elaborated it in their 2010 book of the same name. The concept describes hybrid regimes that maintain the formal institutions of democracy — elections, legislatures, courts — while systematically undermining their practical effectiveness. In competitive authoritarian systems, elections happen but are not fair; courts exist but are not independent; opposition parties operate but face systematic disadvantage.

What Levitsky and Way found, across dozens of cases, was that media capture was among the earliest and most reliable indicators of authoritarian consolidation. Before elections are stolen, before courts are packed, before opposition is imprisoned, the information environment is degraded. This sequencing is not accidental. Controlling the information environment is how authoritarian movements build the public support — or at least the public acquiescence — that allows them to consolidate power. Propaganda is not merely a symptom of competitive authoritarianism; it is one of its primary mechanisms.

The three cases below illustrate this pattern in different national and regional contexts. They are not exotic or distant examples; they are cases from the recent past involving countries deeply integrated into the global democratic community, and their trajectories carry implications that scholars and policymakers in every democracy are actively studying.

Hungary: Orbán's Media Capture

Viktor Orbán's Fidesz party won Hungary's 2010 parliamentary election with a two-thirds majority, giving it the constitutional supermajority required to amend Hungary's fundamental law. What followed was one of the most methodical and comprehensive media capture operations in post-Cold War European history.

The first phase involved legislative changes. The Fidesz government created a new Media Council with broad regulatory powers over Hungarian broadcasting, staffed it with party loyalists, and used it to grant favorable licenses to pro-government broadcasters while applying regulatory pressure to independent outlets. The state advertising market — which represents a substantial portion of media revenue in Hungary — was systematically redirected toward pro-government publications and away from independent ones, creating a structural financial advantage for loyalist media that operated independent of any formal censorship.

The second phase involved ownership changes. As independent media outlets found themselves financially squeezed by reduced advertising revenue and regulatory pressure, Orbán-aligned businessmen, many of whom depended on government contracts for their other business interests, acquired them. The process accelerated sharply in 2018, when nearly five hundred Hungarian media outlets (newspapers, regional television stations, online news portals, radio stations) were consolidated under the umbrella of the Central European Press and Media Foundation (KESMA). The consolidation happened quickly and comprehensively enough that by 2019, Hungary had fallen to 87th out of 180 countries in Reporters Without Borders' Press Freedom Index, among the lowest rankings of any EU member state.

The propaganda techniques deployed through this captured media ecosystem were consistent with the frameworks established in earlier chapters. The relentless construction of an existential threat: Hungary is being invaded by migrants; Christian civilization is at risk; national sovereignty is being destroyed by Brussels bureaucrats. The personalization of evil through a single arch-enemy: the Hungarian-American financier George Soros, who was portrayed in a sustained government advertising campaign as the puppet master behind immigration, the European Union, and every challenge to Orbán's power. The systematic delegitimization of alternative information sources: independent journalists, opposition politicians, foreign media, and international observers were all characterized as foreign agents, Soros operatives, or enemies of the Hungarian people.

These techniques were effective. Fidesz won subsequent elections in 2014, 2018, and 2022 with commanding majorities, even as international observers documented systematic unfairness in election conditions, including unequal access to media, gerrymandering, and the use of state resources for partisan purposes. The Hungarian case illustrates the central mechanism of media capture-enabled democratic backsliding: once the information environment is controlled, voters may genuinely support the incumbent government not because they are coerced but because they have been systematically denied the information that would be necessary for them to evaluate it critically.

Turkey: Erdoğan's Press Restrictions

Turkey's experience of democratic backsliding under Recep Tayyip Erdoğan offers a different model of media capture — one that combined market pressure, regulatory coercion, and, eventually, direct legal suppression on a massive scale.

In the early years of AKP rule (from 2002), Erdoğan's government initially moved in the direction of expanded press freedom, particularly in relation to the military-influenced speech restrictions of the preceding secular nationalist era. The reversal came gradually after approximately 2013, when the Gezi Park protests and the subsequent falling-out between Erdoğan and the Gülenist movement created a political climate in which the government increasingly treated independent journalism as a security threat.

The turning point came with the 2016 coup attempt, which the government used as justification for emergency measures of extraordinary scope. Within weeks of the coup, more than 150 media outlets were closed by government decree. Thousands of journalists, academics, and civil servants were detained. The climate of self-censorship that resulted — in which journalists, editors, and media owners understood that certain topics, certain framings, and certain sources were politically dangerous — was arguably more effective at shaping the information environment than the prosecutions themselves.

The propaganda framework that developed over this period deployed a consistent narrative: Turkey is surrounded by enemies (Kurdish militants, foreign-backed plotters, the "parallel state" of the Gülenist movement, Western governments that secretly support Turkey's destabilization); Erdoğan and the AKP are the defenders of Turkish national sovereignty and Muslim identity against these threats; critics of the government are objectively in league with Turkey's enemies. This framework performed the classic propaganda function of making loyalty to the leader indistinguishable from loyalty to the nation — and criticism of the leader equivalent to treason.

By 2022, Turkey ranked 149th out of 180 countries in Reporters Without Borders' Press Freedom Index. More than 150 journalists were imprisoned during the 2016–2020 period, making Turkey one of the world's leading jailers of journalists. The practical effect on democratic deliberation was measurable: on the eve of the 2023 presidential election, researchers at Freedom House documented that Turkish citizens' access to balanced political information was severely constrained, with pro-government media commanding the overwhelming majority of broadcast audience share.

Venezuela: Chávez, Maduro, and the Construction of Bolivarian Reality

Venezuela under Hugo Chávez (1999–2013) and Nicolás Maduro (2013–present) represents a third model: propaganda as integral to a deliberate, ideologically self-conscious project of political transformation that was simultaneously democratic in some of its rhetorical claims and systematically authoritarian in its institutional effects.

Chávez was a gifted natural communicator who understood media intuitively and used it with exceptional skill. His weekly television and radio program Aló Presidente — which often ran for four to eight hours and was broadcast on state media — blended political speeches, military ceremonies, musical performances, conversations with citizens, and direct presidential decision-making in a format that was impossible to ignore and difficult to argue against in conventional political terms. It was propaganda in the definitional sense: systematic, intentional, emotionally targeted, serving the communicator's interests. It was also genuinely popular with significant portions of the Venezuelan population who felt, not without reason, that previous governments had ignored them.

The Venezuelan case illustrates what propaganda scholars call "alternative reality construction" at scale: the building of a parallel information universe, with its own media, its own interpretive frameworks, its own canon of heroes and villains, and its own historical narrative, in which the government's version of events is not just promoted but made the default setting of political reality for its supporters. By the time Maduro inherited the system from Chávez, a combination of regulatory pressure, financial strangulation (including denial of the foreign exchange needed to import newsprint), and direct prosecution of journalists had reduced the independent press to a fraction of its former size and influence.

Under Maduro, as Venezuela's economy collapsed under the combined effects of oil price decline, economic mismanagement, and international sanctions, the propaganda apparatus became increasingly detached from economic and social reality. The gap between the government's account of Venezuela's situation and the lived experience of millions of Venezuelans who were emigrating in the largest displacement crisis in Latin American history produced a form of cognitive dissonance that characterizes the advanced stages of authoritarian propaganda: a regime whose information apparatus cannot acknowledge the reality that its audience is experiencing in their daily lives.

The Pattern

Across these three cases, and in the broader comparative literature on democratic backsliding, several consistent patterns emerge. First: media capture precedes other forms of institutional capture. The information environment is the first target of authoritarian consolidation because controlling it shapes the public opinion landscape within which every subsequent institutional conflict is decided. Second: propaganda in competitive authoritarian regimes consistently deploys us-versus-them frameworks that transform political competition into existential threats to national identity or security. Third: the propaganda is effective not because it is crude but because it is sophisticated — it speaks to genuine grievances, deploys real cultural and national symbols, and creates authentic emotional connections with its target audiences before leveraging those connections to distort political reality.

The democratic implications are profound. If propaganda can enable authoritarian consolidation through the voluntary support of a propagandized public — rather than through overt coercion — then the distinction between authoritarian and democratic politics is not solely a matter of formal institutions. It is a matter of information environment. A democracy with a captured information environment is not fully a democracy, even if its elections continue to be held on schedule. And the mechanism by which a democracy becomes a competitive authoritarian regime is, in the most direct sense, a propaganda success story.


Platform Governance and the New Public Sphere

The Infrastructure of Political Speech

In the 2020s, social media platforms have become the dominant infrastructure for political communication in most democratic societies. Twitter (now X), Facebook, YouTube, TikTok, and their equivalents are not merely services that citizens use for political discussion; they are the primary public sphere — the place where political arguments are made, tested, amplified, countered, and ultimately reach the audiences that determine political outcomes. Understanding their role is not optional for anyone who wants to understand how propaganda works in contemporary democracy.

This represents a qualitative shift in the structure of political communication, not merely a quantitative one. Previous dominant information technologies — the printing press, mass circulation newspapers, radio, television — were structured, in varying degrees, by gatekeepers who made decisions about what information reached mass audiences. Those gatekeepers were often biased, often captured by elite interests, and often complicit in propaganda. But they were identifiable, accountable in principle, and operating under regulatory frameworks that, again in principle, imposed some standards of accuracy and editorial responsibility.

Social media platforms are different in architecture. They are not gatekeepers in the traditional sense; their content moderation is largely reactive rather than editorial, and their primary revenue mechanism — digital advertising targeted by granular behavioral data — creates systematic incentives to maximize engagement rather than quality. The algorithm is not a neutral distribution mechanism; it is an editorial system optimized for a specific goal — time on platform, clicks, shares, reactions — that has no necessary relationship to the quality, accuracy, or democratic value of the content it distributes. And because outrage, tribal identity affirmation, and fear generate more engagement than careful, evidence-based argument, the algorithmic editorial system is structurally biased toward exactly the content that propaganda theory identifies as most corrosive to democratic deliberation.

The 2021 Deplatforming of Trump: A Case Study

On January 6, 2021, supporters of President Donald Trump stormed the United States Capitol following a rally at which Trump had urged them to march to the building and fight against the certification of the 2020 presidential election results. In the hours and days following the attack, Twitter permanently suspended Trump's account; Facebook and Instagram suspended him indefinitely (restoring him in 2023); YouTube suspended his channel, initially for a week, and repeatedly extended the suspension before restoring the channel in 2023. These were the most consequential content moderation decisions ever made by private technology companies, and they illustrated almost every dimension of the contemporary platform governance debate.

The case for the deplatforming rested on two arguments that the platforms themselves advanced. First, that Trump's communications had contributed to an incitement to violence — by repeatedly and falsely claiming that the election had been stolen, by directing his supporters to the Capitol, and by failing to clearly call off the attack as it unfolded — that the platforms were unwilling to continue enabling. Second, that his continued presence on the platforms in the immediate aftermath of the attack created ongoing risk of additional violence before the transfer of power was complete. Both arguments had genuine merit, and both were grounded in the platforms' existing policies against incitement.

The case against the deplatforming — articulated by free speech advocates across the political spectrum, including many who were not Trump supporters and who had no sympathy for what had occurred on January 6 — rested on a different set of concerns. Private platforms with the scale of Facebook and Twitter had effectively become public utilities, the argument ran, and their content moderation decisions had consequences for democratic discourse too significant to be made by unaccountable private actors under opaque proprietary standards. The concern was not only about Trump; it was about the precedent and the principle. If platforms could deplatform a sitting president on the basis of their own judgment about whether his speech was dangerous, they could, by the same logic, deplatform anyone whose speech they judged, on whatever basis, to create similar risks.

The debate crystallized a question that neither the Lippmann-Dewey framework nor the Habermasian public sphere concept had fully anticipated: when the public sphere is privately owned infrastructure, who governs it, by what standards, and with what accountability? The answer in the United States, as of the mid-2020s, is that private companies govern it, by whatever standards they choose, with accountability only to their shareholders and, minimally, to their users. This is not a satisfying answer from the standpoint of democratic theory, but it is the answer that the American legal and regulatory framework has produced.

EU and US Approaches to Platform Governance

The European Union's Digital Services Act (2022) and Digital Markets Act (2022), taken together, represent the most comprehensive attempt by any democratic jurisdiction to address the platform governance question. The DSA's approach is not to require platforms to make specific content decisions — it does not tell platforms what to take down or leave up — but to require transparency about how they make decisions, to mandate risk assessments for their systemic effects on democratic processes, and to create regulatory oversight of the largest platforms by the European Commission.

This approach reflects the distinctly European philosophical tradition: accept that there are no perfect answers to hard questions about speech and governance, but insist on democratic oversight of systems with democratic consequences. The DSA says, in effect: if you are going to be the dominant public sphere, you must accept accountability obligations commensurate with that role. Platforms that fail to comply face fines of up to 6 percent of global annual revenue — significant enough to constitute a genuine enforcement mechanism rather than a symbolic gesture.

The United States has, as of this writing, taken no comparable action. Section 230 of the Communications Decency Act (1996) continues to provide platforms with broad immunity from liability for user-generated content. The statute has drawn bipartisan criticism, but congressional efforts to reform or repeal it have produced no legislation — partly because Democrats and Republicans want different things from reform (Democrats generally want more moderation of harmful content; Republicans generally want less moderation of conservative speech), and partly because the political power of the technology industry has been sufficient to block action despite widespread public dissatisfaction with current platform governance.

The result is a situation that neither the First Amendment tradition's philosophical commitments nor the European regulatory tradition's institutional mechanisms has fully addressed: an information environment in which the most powerful speech-shaping institutions in human history operate with minimal public accountability, under business models that systematically reward the kind of content that propaganda theory identifies as most dangerous to democratic deliberation, and with reach and influence that dwarf any previous communications technology.

Connecting Back to Lippmann, Dewey, and Habermas

The platform governance debate is, at its deepest level, the Lippmann-Dewey debate transposed to twenty-first-century conditions. Lippmann's diagnosis — that the information environment was too complex and too polluted for ordinary citizens to navigate without expert assistance — fits the algorithmic social media environment even more precisely than it fit the 1920s press. His prescription — expert intermediaries, trained gatekeepers — maps onto the contemporary argument for robust content moderation, algorithmic transparency requirements, and regulatory oversight of platform systems.

Dewey's prescription — better conditions for deliberation, more education, recovered community — maps onto the media literacy movement, the push for algorithmic transparency that would allow citizens to understand the information environment they are navigating, and the argument for platform architectures designed to facilitate genuine deliberation rather than outrage and division. Where Dewey would have argued for building the conditions for a real public sphere rather than managing a broken one, contemporary Deweyan advocates argue for redesigning platform architectures to serve deliberative values rather than engagement metrics.

Habermas's public sphere concept provides the normative standard against which platform governance can be evaluated: does this system create conditions in which arguments are evaluated on their merits, all affected parties can participate, and the force of the better argument rather than the better-funded communicator determines outcomes? By that standard, the current social media public sphere fails comprehensively. The question is not whether it fails but what can be done about it — and by whom, with what institutional authority, and through what political process.

Ingrid's question — "Why does 'freedom' require allowing propaganda?" — does not have a simple answer. But it has a precise one: in the American tradition, freedom from government restriction of speech is treated as a precondition for democratic deliberation, on the grounds that the alternative (government control of political speech) is more dangerous than the harm it prevents. Whether that calculation remains correct in an environment where the dominant speech-shaping force is not government but algorithmically optimized private platforms is, precisely, what is now being contested in legislatures, courts, regulatory agencies, and academic seminars on both sides of the Atlantic. The debate Ingrid and Tariq began in Webb's seminar room is not a debate that will be resolved there. But understanding the terms of the debate — its historical roots, its institutional stakes, its philosophical foundations — is the prerequisite for participating in it meaningfully.


Research Breakdown: Democratic Erosion and Information Environment

Study: Levitsky, Steven, and Lucan Way. Competitive Authoritarianism: Hybrid Regimes After the Cold War. Cambridge: Cambridge University Press, 2010. And subsequent work on democratic backsliding.

What it shows: The comparative politics literature on democratic backsliding — Hungary under Orbán, Turkey under Erdoğan, Poland under PiS, Venezuela under Chávez and Maduro — consistently identifies media capture as one of the earliest and most reliable indicators of authoritarian consolidation. Before elections are stolen, before courts are packed, before opposition is imprisoned, the information environment is degraded: independent media are bought, harassed, or regulated into submission; state media are redirected to partisan purposes; opposition voices are marginalized.

Implication for propaganda studies: The relationship between propaganda and democracy is not merely theoretical. The empirical record of democratic backsliding shows that sustained propaganda campaigns — particularly those that discredit independent media and alternative information sources — are both a precursor to and an enabler of democratic erosion.

Additional findings from V-Dem and related research: Subsequent scholarship by V-Dem (Varieties of Democracy) researchers, Pippa Norris, and Ronald Inglehart has extended Levitsky and Way's framework to what Norris and Inglehart call "authoritarian reflexes" in established democracies, documenting measurable backsliding on information environment indicators — media independence, factual accuracy of public discourse, pluralism of information sources — in countries including the United States, Brazil, and several EU member states. The mechanism is not always government media capture; it can also be market-driven polarization, the collapse of local journalism, and algorithmic fragmentation of shared factual reality. The outcome for democratic deliberation may be similar either way: an information environment in which shared facts are absent, in which alternative information ecosystems are mutually incomprehensible, and in which the conditions for democratic deliberation are systematically undermined.


Primary Source Analysis: Walter Lippmann, Public Opinion (1922), Chapter 1

Excerpt: "The world that we have to deal with politically is out of reach, out of sight, out of mind. It has to be explored, reported, and imagined. Man is no Aristotelian god contemplating all existence at one glance. He is the creature of an evolution who can just manage to live in the world he cannot see... The environment with which our public opinions deal is refracted in many ways, is dimly and inaccurately transmitted, is intermittently attended to, and crudely interpreted."

Source: Lippmann, writing immediately after WWI, deeply influenced by his awareness of both the CPI's effectiveness and its manipulation.

Message content: The claim is descriptive and theoretical: citizens do not interact with political reality directly — they interact with a representation of it, mediated by media and personal limitations. The "pictures in our heads" are not faithful representations of the world.

Emotional register: Intellectually sober, almost clinical. Lippmann is not alarmed — he is describing a situation he considers structurally inevitable.

What makes this politically consequential: Lippmann's analysis, if correct, has profoundly anti-democratic implications. If citizens cannot form accurate political judgments, the case for democratic self-governance is weakened. Lippmann draws the technocratic conclusion — expert management — while Dewey draws the democratic one — better conditions for deliberation. Both of these responses have modern descendants in debates about platform content moderation and media literacy policy.

A note on Lippmann's own trajectory: It is worth noting, for historical completeness, that Lippmann's life did not fully vindicate his own theory. He became, in his later career, exactly the kind of establishment-connected expert intermediary his prescription called for — and his record as a pundit was marked by the same limitations he had identified in ordinary citizens. He failed to predict the Great Depression, misjudged Hitler's intentions in the late 1930s, and supported the internment of Japanese Americans during World War II. The expert intermediary, it turns out, is also a creature of evolution who can just barely manage to see the world he cannot directly observe. This irony does not refute Lippmann's diagnosis, but it complicates his prescription considerably.


Debate Framework: Should Democracies Restrict Propaganda?

The question: Given documented evidence that propaganda undermines the conditions for democratic deliberation, are restrictions on propaganda in democratic societies justified?

Position A: Restrictions are dangerous to democracy. Government restriction of political speech is more likely to harm democratic discourse than protect it. The history of "anti-propaganda" and "anti-disinformation" legislation includes examples of these laws being used against political opposition (the CPI era's Espionage Act), minority viewpoints, and legitimate journalism. The cure is worse than the disease. The remedy is more speech, not less: education, counter-messaging, inoculation. Furthermore, as Sophia's example illustrates, what counts as propaganda is genuinely contested — one person's propaganda is another person's truth. No government can make this determination without imposing the political perspective of whoever controls the government on the entire population.

Position B: Unrestricted propaganda undermines democracy more than restrictions do. The historical record of democratic backsliding shows that unrestricted propaganda — particularly state-sponsored or corporate-funded disinformation — can systematically erode the conditions for legitimate democratic discourse. European democracies with speech restrictions have not become authoritarian. The United States, with maximum speech permissiveness, has experienced documented erosion of shared factual reality. The marketplace of ideas argument assumes competitive conditions that do not exist: it assumes that accurate information and false information compete on equal terms, which they do not when one side is better funded, more algorithmically amplified, and more emotionally engaging than the other.

Position C: The debate is misframed — the remedy is structural, not legal. The most important interventions are not speech restrictions but structural changes: requiring transparency about political advertising sources (who paid for this?), platform design changes that reduce algorithmic amplification of outrage, public investment in independent journalism, and comprehensive media literacy education. These address the conditions that make propaganda effective without requiring the government to determine what speech is permissible. This position, associated with scholars like Yochai Benkler, Kate Starbird, and danah boyd, argues that the speech regulation debate draws attention away from the structural interventions that are both more effective and less democratically dangerous.

Discussion questions for seminar: Which position do Lippmann's descendants occupy? Dewey's? Where does Habermas's framework place us? Is Ingrid's argument most consistent with Position A, B, or C — or some combination? Tariq's? Sophia's? Has your own position changed over the course of this chapter?


Action Checklist: Democratic Stakes Assessment

When evaluating a propaganda campaign or information operation, ask:

  • [ ] Whose capacity to reason is being targeted? Is the operation designed to inform or to overwhelm?
  • [ ] What would a complete information environment look like? Does this operation contribute to or detract from a shared factual basis for political deliberation?
  • [ ] Who is excluded? Does this operation systematically exclude or marginalize specific communities from effective participation in public discourse?
  • [ ] What is the accountability structure? Can the communicating party be identified and held responsible for inaccurate or manipulative claims?
  • [ ] What is the institutional context? Is this operation being conducted by a government, a corporation, a political movement, or an unaccountable foreign actor? What implications does that have for democratic oversight?
  • [ ] What is the information environment context? Is this operation occurring in a context of diverse, competitive information sources, or in an environment where one set of voices already dominates?
  • [ ] What are the counterpublic implications? Does this operation target the mainstream public sphere, a specific counterpublic, or both? What are the implications for marginalized communities' capacity to form and express political voice?
  • [ ] What is the platform architecture context? Is this operation relying on algorithmic amplification? If so, what does that mean for the "more speech" remedy — is counter-messaging realistically able to reach the same audiences?

Inoculation Campaign: Mission Statement

Chapter 6's Inoculation Campaign component is your mission statement.

Based on the five chapters of Part 1, write a one-paragraph (150–200 word) mission statement for your Inoculation Campaign that:

  1. Names the target community and the primary propaganda threats it faces
  2. Articulates the democratic stakes — why this matters beyond just individual protection
  3. States the campaign's core goal in specific, measurable terms
  4. Names at least one psychological or cognitive principle from Chapters 2–4 that the campaign will address
  5. Reflects the ethical orientation from this chapter — what distinguishes your counter-propaganda campaign from the propaganda it is countering

This mission statement will appear at the front of your final campaign brief. It should be strong enough to stand alone as a description of why the work matters.

A note on ethical orientation: The most important distinction between an inoculation campaign and propaganda is not that one is true and the other is false — though that matters enormously — but that one aims to enhance the audience's capacity for independent critical judgment while the other aims to bypass or undermine it. Your campaign should, at every step, ask whether it is building your audience's capacity to think for themselves or substituting your judgment for theirs. The former is democratic communication. The latter, however well-intentioned, is propaganda. Webb would ask you to hold both possibilities in mind simultaneously — because the history of counter-propaganda campaigns that became propaganda themselves is long enough to warrant humility.