In This Chapter
- The Problem of Defining Propaganda
- Propaganda in Pre-Modern Contexts
- Etymology and Origin
- A Survey of Scholarly Definitions
- The Propaganda Model: Chomsky and Herman
- A Working Definition
- The Spectrum Model: From Advocacy to Propaganda
- What Propaganda Is Not (Or Is It?)
- Propaganda in the Digital Age: New Features, Old Mechanisms
- Research Breakdown: The Evolution of a Definition
- Research Breakdown: Chomsky and Herman's Content Analysis Methodology
- Primary Source Analysis: Bernays, Propaganda (1928), Chapter One
- Defining Propaganda: A Practitioner's View
- Cross-Cultural Definitions: Non-Western Perspectives
- Synthetic Media and the Definitional Frontier
- Debate Framework: Does Intent Define Propaganda?
- Historical Timeline: The Word "Propaganda," 1450–Present
- Argument Map: Is Propaganda Distinct from Persuasion?
- Action Checklist: Applying the Working Definition
- Inoculation Campaign: Choosing Your Community
- Chapter Summary
Chapter 1: What Is Propaganda? Definitions, History, and Scope
On the first day of Sophia Marin's media studies elective, Professor Marcus Webb walked into the room, wrote a single word on the whiteboard in capital letters, and turned around.
"PROPAGANDA," he said. "Define it."
Twenty-two students shifted in their seats. Several reached for their phones. One said, tentatively, "Lies?"
Webb nodded slowly, the way he did when he was about to complicate something. "Whose lies?" he asked. "For what purpose? By whom? Through what channel? And how do we know it's a lie rather than a mistake, or an exaggeration, or simply a different interpretation of the same facts?"
The student who had answered fell silent. Everyone else was looking at their phones.
"Put them away," Webb said. "The dictionary won't help you. This is the first thing I want you to understand about this course: the word you just looked up is one of the most contested, abused, and analytically slippery terms in the English language. Every definition we arrive at will immediately raise a question that that definition cannot answer. Our job in this chapter is not to resolve the question. Our job is to understand why it is so hard to resolve — and then to choose a working definition rigorous enough to do analytical work."
He capped the marker and sat on his desk.
"Why does this matter? Because if you cannot define something precisely, you cannot study it. And if you cannot study it, you cannot defend yourself against it."
The Problem of Defining Propaganda
It is tempting to define propaganda as "false information spread to advance an agenda." This definition captures what most people mean when they use the word casually. It is also, as a scholarly definition, almost useless.
Consider the problems. Is propaganda necessarily false? A government that truthfully reports its military successes while suppressing its failures is selecting facts strategically — but each reported fact may be accurate. Is that propaganda? A tobacco company that accurately notes that "no study has definitively proven that cigarettes cause cancer in every smoker" is stating something technically true. Is that propaganda? The Nazi regime staged the 1936 Olympic Games in a way that genuinely impressed foreign visitors; those positive impressions were real. Was the impression-management effort propaganda?
If false information is the criterion, we have excluded too much. But if we abandon falseness as the criterion, we face the opposite problem: we must either admit that all persuasive communication is propaganda — in which case the word loses analytical precision — or find some other feature to distinguish it.
This is not an academic puzzle. It has real consequences. When the term "propaganda" is applied indiscriminately — to all political advertising, all government communication, all journalism that takes a position — it loses the power to identify the specific, harmful practices that make the study of propaganda urgent. We need a definition sharp enough to cut.
Propaganda in Pre-Modern Contexts
Before turning to etymology, it is worth pausing to establish a fact that surprises many students encountering the formal study of propaganda for the first time: organized, deliberate political persuasion is not a modern invention. It is not a product of the printing press, mass literacy, radio, or the internet. It is as old as organized political power itself.
The Roman Republic and Empire provide some of the earliest and best-documented examples of systematic image management by a state. Julius Caesar's Commentarii de Bello Gallico — his account of his own military campaigns in Gaul — was written not as private memoir but as a political document designed to reach audiences in Rome who would never see the battlefield. Caesar narrated his own heroism in the third person, as if describing a distant figure, lending his self-promotion the false authority of objective history. The coins minted under Augustus carried his profile alongside symbolic imagery — laurel wreaths, celestial motifs, representations of peace and abundance — that positioned him as divinely favored and historically inevitable. Triumphal arches, aqueducts, forums, and public baths bore inscriptions naming their imperial patron; infrastructure was simultaneously communication. The Ara Pacis, the Altar of Peace commissioned by Augustus and completed in 9 BCE, was a monument to an ideological claim — that Roman military expansion produced universal peace — rendered in marble at a scale designed to impress and overawe. Nothing about this was accidental. The Roman imperial court employed architects, sculptors, poets, and historians specifically to manage the emperor's image and to embed political messages in durable materials that would outlast any individual act of speech.
The Reformation of the sixteenth century produced one of history's most consequential information campaigns, and it was driven by a technology that genuinely was new: the printing press. Martin Luther's Ninety-Five Theses, nailed (or, more likely, distributed by letter) in 1517, became a viral document by the standards of the age. Within two months, the text had been printed and distributed throughout Germany. Within two years, it had spread across Europe. Luther was not only a theologian; he was an extraordinarily skilled communicator who understood that accessible German prose and memorable woodcut illustrations could reach audiences that Latin theological tracts could not. The pamphlet wars of the Reformation — the Flugschriften, or "flying writings" — were the sixteenth century's version of social media: cheap to produce, fast to distribute, shareable across networks of readers, and entirely beyond the control of any central authority. Both Catholic and Protestant factions used them aggressively. The papacy itself, responding to this information crisis, began organizing its own communication infrastructure — which eventually, in 1622, resulted in the institution we will examine in the next section.
What the Reformation pamphlet wars reveal is the structural relationship between communications technology and propaganda. Each new technology that lowered the cost of information reproduction and distribution — the printing press, the telegraph, the radio, the internet — produced a new wave of organized persuasion campaigns, because each new technology changed the calculus of who could reach whom at what cost. Luther's achievement was to understand, intuitively, what modern communications scholars would later formalize: that the medium shapes the message, and that the strategic use of new media by political actors changes political reality. The Catholic Church's Congregatio de Propaganda Fide, which we will examine shortly, was partly a bureaucratic response to the challenge Luther had demonstrated.
The pre-modern history of organized persuasion also challenges a persistent assumption: that propaganda is primarily a tool of authoritarian or dishonest regimes. The Athenian democracy produced some of the world's most accomplished political rhetoric, designed to move popular assemblies toward decisions favored by skilled speakers. Thucydides' account of the Mytilenian Debate — in which the Athenian assembly first voted to execute the entire male population of a rebellious allied city, then reversed the decision after a second debate — is in part a study in how skilled orators can move mass audiences toward or away from atrocity within a single afternoon. The Sophists, whom Plato attacked so relentlessly, were professional teachers of persuasion: their entire enterprise was the transmission of techniques for making audiences believe what you wanted them to believe, regardless of whether it was true. Plato's Republic, often read as a work of political philosophy, is equally a polemic against the Sophists — and, notoriously, it includes its own propaganda theory, in the form of the "noble lie" that Socrates proposes as the foundation of the just city. Even the philosopher most committed to truth recognized that political order might require organized deception.
Medieval crusade preaching organized mass violence through coordinated religious communication campaigns that were sophisticated by any standard. Pope Urban II's sermon at Clermont in 1095 — which launched the First Crusade — was not a spontaneous outbreak of religious enthusiasm. It was the culmination of a carefully prepared communications campaign conducted through the Church's existing network of bishops, abbots, and parish priests. The sermon itself was reportedly delivered outdoors because no building in Clermont could hold the anticipated crowd: the staging was as deliberate as the content. The subsequent spread of crusade enthusiasm across Europe depended on a network of itinerant preachers who carried standardized messages — Jerusalem in danger, sins forgiven through holy warfare, earthly rewards awaiting the brave — adapted for local audiences by skilled communicators who understood what would move different regional and social groups. The result was one of history's most successful mass mobilization campaigns, achieved entirely through organized verbal communication in an era without printing, broadcasting, or literacy for most of the population.
The guilds of Renaissance Italy managed their public images through sophisticated patronage of art and architecture. Power, in every era and every political form, has generated efforts to manage how that power is perceived. The specific techniques change. The underlying dynamic does not.
Etymology and Origin
The word "propaganda" entered European languages in 1622, when Pope Gregory XV established the Congregatio de Propaganda Fide — the Congregation for the Propagation of the Faith. The congregation's purpose was to coordinate the Catholic Church's missionary activities in the wake of the Protestant Reformation: standardizing doctrine, training missionaries, and managing the Church's communication with populations it wished to convert or retain.
The original usage was neutral, even bureaucratic. "To propagate" meant simply to spread — as in propagating plants, or propagating a species. The congregation was in the business of spreading faith, which its members regarded as a straightforwardly good activity.
The term remained largely neutral through the eighteenth century. Thomas Jefferson used it without pejorative intent. Nineteenth-century political reformers spoke of "propagandizing" for causes they considered just.
The shift to negative connotation accelerated during World War I, when the organized information campaigns of all major powers — particularly the British effort to bring the United States into the war — became visible enough, and cynical enough, that the word began accumulating the sinister associations it carries today. By the 1920s, propaganda meant something deliberate, manipulative, and not quite honest. Edward Bernays, whose career we will examine throughout this book, responded to the contaminated word by championing a euphemism: "public relations." The rename worked so well that most people today do not know it was intended as a sanitized substitute.
The etymology matters for two reasons. First, it reminds us that organized strategic communication is not a modern invention — it is as old as any institution with interests to advance. Second, it illustrates how the name given to a practice shapes our perception of it. Bernays understood this perfectly. "Public relations" sounds professional and neutral. "Propaganda" sounds sinister. The same activities can be described with either term.
A Survey of Scholarly Definitions
The academic study of propaganda has produced dozens of competing definitions. Four of the most influential are worth examining in detail, because each captures something important that the others miss.
Harold Lasswell (1927): Writing in the aftermath of World War I, Lasswell defined propaganda as "the management of collective attitudes by the manipulation of significant symbols." This definition is notable for what it emphasizes: management (implying deliberate control), collective attitudes (implying mass audiences rather than individuals), and significant symbols (implying that the vehicle of propaganda is language, image, and narrative — meaning-carrying objects rather than brute force). Lasswell's definition treats propaganda as a technique of social control, distinct from but related to physical coercion.
Edward Bernays (1928): Bernays, the nephew of Sigmund Freud and the founder of modern public relations, defined propaganda as "a consistent, enduring effort to create or shape events to influence the relations of the public to an enterprise, idea, or group." His definition is notably value-neutral — he believed propaganda was simply the mechanism through which modern democratic societies were necessarily governed, and that there was nothing inherently wrong with it. He would spend the next fifty years proving this belief correct in practice and wrong in ethics.
Jacques Ellul (1965): The French sociologist and theologian wrote what many scholars consider the most profound analysis of propaganda ever produced. Ellul's definition was more expansive: propaganda, for him, was not merely the work of identifiable propagandists but a structural feature of modern technological society. It operated through advertising, education, journalism, and entertainment simultaneously, creating a psychological environment in which critical thought became progressively harder to sustain. Ellul distinguished between "agitation propaganda" (which incites immediate action) and "integration propaganda" (which normalizes existing social arrangements) — and argued that the latter was far more dangerous because it was invisible.
Jowett and O'Donnell (2019): The most widely cited contemporary textbook definition describes propaganda as "the deliberate, systematic attempt to shape perceptions, manipulate cognitions, and direct behavior to achieve a response that furthers the desired intent of the propagandist." This definition has five key components: it is deliberate (excluding accidental misinformation), systematic (excluding isolated communications), and aimed at three interconnected targets — perceptions (what you see), cognitions (what you think), and behavior (what you do). The goal is to achieve the desired intent of the propagandist, not simply to communicate or inform.
Each definition leaves something out. Lasswell's is too focused on symbols and doesn't address intent directly. Bernays's is so neutral that it can't distinguish propaganda from ordinary communication. Ellul's is so broad that it encompasses almost all modern life. Jowett and O'Donnell's is precise but may exclude the structural propaganda Ellul identifies — the kind that operates without any identifiable propagandist.
The Propaganda Model: Chomsky and Herman
No survey of scholarly definitions can omit one of the most influential and controversial frameworks in the field: the Propaganda Model proposed by Noam Chomsky and Edward Herman in their 1988 book Manufacturing Consent: The Political Economy of the Mass Media. Where most definitions of propaganda focus on individual acts of deliberate manipulation, Chomsky and Herman identified a systemic mechanism through which large media organizations reliably produce propaganda-like output without any single actor intending to manipulate.
Their argument begins from a structural observation: large mass media organizations in the United States are profit-seeking corporations embedded in a broader capitalist economy. This embedding, they argued, creates five "filters" that systematically bias the news content these organizations produce.
The first filter is ownership. The major American media corporations are owned by large conglomerates — or are themselves large conglomerates — with substantial investments across the economy. News organizations owned by defense contractors are structurally unlikely to produce aggressive investigative journalism about defense contracting. The filter operates not through explicit editorial orders but through the institutional culture that forms when journalists internalize what their organization rewards.
The second filter is advertising. American media is overwhelmingly funded by advertising revenue rather than subscription revenue. Advertisers are businesses with interests of their own. Media organizations that produce content advertisers find uncomfortable — content that, for instance, implicates consumer culture in social harm — risk losing revenue. The filter is rarely explicit; it operates through the accumulated decisions of editors who understand their organization's commercial dependencies.
The third filter is sourcing. News organizations depend on a steady, reliable flow of information from credible sources. Government agencies, think tanks, trade associations, and large corporations have the resources to produce the press releases, background briefings, and expert spokespersons that make journalists' jobs easier. Organizations that challenge power — small advocacy groups, community organizations, whistleblowers — lack these resources. The result is that news content systematically over-represents the perspectives of established institutions.
The fourth filter is flak — the term Chomsky and Herman used for organized criticism, harassment, and legal pressure directed at media organizations and journalists who produce unwelcome coverage. Flak is a mechanism through which powerful interests discipline media organizations, making certain kinds of journalism costlier to produce.
The fifth filter, which Chomsky and Herman initially described as anti-communism (reflecting the Cold War context of the 1988 book), functions more broadly as an ideological consensus: the set of assumptions that are treated as settled and not subject to challenge. In different eras, this filter has operated through different ideological contents — anti-terrorism, market orthodoxy, nationalism — but its structural function is the same: to place certain topics and perspectives outside the range of acceptable mainstream debate.
The Propaganda Model is provocative precisely because it does not require anyone to be lying or acting in bad faith. The journalists who work within these systems may be honest, talented, and committed to accuracy. The system, Chomsky and Herman argued, produces propaganda-like output regardless — through the accumulated weight of structural incentives that systematically favor certain perspectives over others. This is Ellul's insight restated in institutional-economic terms: the most consequential propaganda may be structural, not intentional. We will return to the Propaganda Model repeatedly throughout this book, particularly in Part 3, where we examine media ownership, and Part 6, where we examine the economics of contemporary digital media.
A Working Definition
For the purposes of this book, we will use a five-part working definition that draws on all four traditions above:
Propaganda is intentional communication, designed to influence the attitudes, beliefs, or behavior of a target audience, that uses psychological techniques to bypass or overwhelm critical reasoning, in service of the communicating party's interests, and often at the expense of the audience's capacity for autonomous judgment.
Five components, each doing analytical work:
1. Intentional. Propaganda requires a communicator who is trying to achieve an effect, not merely reporting or exploring. This distinguishes it from accidental misinformation, honest error, and good-faith argument.
2. Designed to influence attitudes, beliefs, or behavior. This distinguishes it from purely informational communication (though the line is often contested).
3. Uses techniques that bypass or overwhelm critical reasoning. This is the crucial distinguishing criterion. Legitimate persuasion works through the audience's capacity to evaluate evidence and arguments. Propaganda works around it — through emotional manipulation, cognitive bias exploitation, information environment control, or sheer repetition that causes statements to feel true. This feature places propaganda on the manipulation side of the persuasion-manipulation divide.
4. In service of the communicating party's interests. Propaganda serves someone. Identifying who — and asking whose interests are actually served — is one of the most important analytical moves in this course.
5. Often at the expense of the audience's capacity for autonomous judgment. This is the ethical indictment built into the definition. Propaganda doesn't just influence; it impairs. It leaves its audience less able to think clearly, not more.
This definition will be tested and refined throughout the book. It should be treated as a working hypothesis, not a settled answer.
The Spectrum Model: From Advocacy to Propaganda
Sophia, walking back from Webb's first class with Tariq Hassan, found herself circling back to a nagging doubt. "I get the definition," she told him, "but it still feels like a switch. Either something is propaganda or it isn't. Real life doesn't work that way."
Tariq nodded. He had been thinking the same thing. "Like — my uncle runs a get-out-the-vote organization in Dearborn. He runs ads that say voting is the most important thing Arab-Americans can do right now. Is that propaganda?"
Sophia thought about it. "Probably not? But I couldn't tell you exactly why."
The answer lies in understanding propaganda not as a binary category but as one end of a spectrum. The working definition above is designed to identify the clearest cases, but any honest account of persuasive communication must acknowledge that the poles of a spectrum are connected by gradations, not a sharp break.
At one end of the spectrum sits legitimate advocacy: a labor union publishing a factual account of unsafe working conditions at a specific factory, attributing its claims to documented sources, and calling for legislative remedy. The union has an interest it is advancing; the communication is designed to influence attitudes and behavior; but the methods work through the audience's capacity to evaluate evidence. The facts are available for independent verification. Counterarguments are not suppressed. The audience's rational agency is engaged, not circumvented.
One step along the spectrum lies strategic framing: that same labor union deciding to lead every communication with the most emotionally resonant case — a worker killed by a preventable accident — rather than with aggregate statistics. The facts are still accurate. The claim is still supported by evidence. But the selection and emphasis are designed to produce an emotional response before the rational evaluation begins. Most political communication operates at this level.
Further along sits selective emphasis combined with strategic omission: a corporation releasing a study showing that its product is safe while not disclosing that it also funded and declined to publish three earlier studies showing harm. No individual claim may be false, but the information environment being created is systematically misleading. The audience cannot evaluate the evidence because key evidence has been withheld.
Closer to the propaganda end lies systematic distortion: coordinated messaging across multiple channels that presents a factually misleading picture of reality, using emotional manipulation and social proof alongside selective facts. The intent to mislead is clearer here, though it may still be deniable.
At the propaganda end itself lies what the working definition describes: communication whose primary mechanism of influence is the bypass of critical reasoning — through manufactured fear, fabricated authority, deliberate repetition until falsehoods feel familiar, or the construction of information environments designed to make independent verification impossible.
The spectrum model does not dissolve the analytical category — it clarifies it. Tariq's uncle's get-out-the-vote campaign sits near the legitimate advocacy end; acknowledging this is not the same as saying it belongs in the same category as a government disinformation operation. The existence of gradations between red and orange does not make red and orange the same color. But the spectrum model does perform an important critical function: it keeps us honest about the fact that nearly all political communication involves some degree of strategic framing, and that the study of propaganda is partly a study of where and how the line gets crossed — which requires understanding the whole spectrum, not just the extreme end.
As you analyze specific cases throughout this course, it is worth asking not just "is this propaganda?" but "where on the spectrum does this sit, and what would it take to move it further toward — or further away from — the propaganda end?" That second question is in some ways more productive than the first, because it focuses attention on the specific features of the communication that place it where it does — the selective omission, the emotional amplification, the concealed sponsorship — rather than just reaching for a binary verdict. Propaganda analysis that can identify the specific mechanism at work is far more useful than propaganda analysis that merely assigns a label.
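One way to keep that second question disciplined is to make the mechanisms explicit. The sketch below is a study aid rather than an established instrument: the mechanism names mirror the features discussed in this section, and the coded examples simply restate judgments the text has already made.

```python
# Study-aid sketch: tallying the mechanisms discussed in this section to place
# a communication on the advocacy-to-propaganda spectrum. The mechanism list
# and the coded examples are illustrative, not an established instrument.
MECHANISMS = [
    "selective_omission",       # key evidence withheld from the audience
    "emotional_amplification",  # feeling invoked before evaluation can begin
    "concealed_sponsorship",    # the audience cannot tell who is communicating
    "coordinated_repetition",   # familiarity manufactured across channels
    "verification_blocked",     # independent checking made impossible
]

def spectrum_position(observed: set[str]) -> str:
    """Rough placement: more mechanisms, further toward the propaganda end."""
    hits = [m for m in MECHANISMS if m in observed]
    if not hits:
        return "advocacy end: influence works through critical reasoning"
    if len(hits) <= 2:
        return "middle of the spectrum: " + ", ".join(hits)
    return "propaganda end: " + ", ".join(hits)

# Tariq's uncle's get-out-the-vote ads, as the text codes them:
print(spectrum_position(set()))
# The corporation that withheld its own unfavorable studies:
print(spectrum_position({"selective_omission"}))
```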
What Propaganda Is Not (Or Is It?)
Working against a definition is often as useful as working toward one. Several categories of communication are frequently mistaken for propaganda when they are not, and each shades into propaganda only under conditions worth identifying precisely.
Education aims to develop the audience's capacity for critical thought and independent judgment. It presents evidence, teaches methods, and encourages skepticism — including skepticism toward the educator. Propaganda aims to produce a specific conclusion. The difference is in orientation: education opens; propaganda closes.
Public relations is organized communication in service of an institution's interests — typically transparent about who is communicating and why, aimed at accurate representation rather than distortion. The boundary is real but frequently violated; when PR abandons accuracy in favor of strategic deception, it becomes propaganda.
Advertising is commercial persuasion, generally transparent about its commercial interest. When it uses techniques that exploit psychological vulnerabilities rather than informing choices — as the tobacco industry did for decades — it crosses into propaganda. Many scholars argue this line is crossed routinely.
Journalism aims to inform, verify, and hold power accountable. When it becomes the vehicle for a political faction's agenda — through selection bias, framing, or the suppression of inconvenient facts — it can function as propaganda without being propaganda in intent. The structural critique is harder to rebut than the intentional one.
The honest answer is that the word "propaganda" describes a continuum, not a discrete category. The working definition above is designed to identify the clearer cases — the ones where intent, technique, and effect all align in ways that are analytically meaningful. The harder cases, where intent is unclear or effect is contested, will be worked through in the chapters ahead.
Propaganda in the Digital Age: New Features, Old Mechanisms
If the pre-modern section of this chapter established that organized persuasion is ancient, this section establishes an equally important counterpoint: the digital era has changed propaganda in ways that are genuinely unprecedented, even if the underlying psychological mechanisms remain constant. Understanding what is new matters because it determines which defenses are adequate and which are obsolete.
The most consequential new feature is scalability. In the era of pamphlet wars, reaching a million people required physical infrastructure: paper, ink, printing presses, distribution networks, and literate intermediaries. In the era of broadcast media, reaching a mass audience required expensive licensing, transmission equipment, and trained production staff. In the digital era, a single actor with a laptop and a social media account can reach hundreds of millions of people at near-zero marginal cost. This is not a minor quantitative change. It is a structural transformation of the propaganda landscape. Organizations, governments, and individuals that previously lacked the resources to conduct systematic influence campaigns at scale now have them. The barrier to entry for mass persuasion has collapsed.
The second new feature is micro-targeting. Broadcast propaganda was, by necessity, relatively blunt: a message had to be calibrated to work across a heterogeneous mass audience, which meant it could not be simultaneously optimized for each audience segment. Digital advertising infrastructure — developed originally for commercial purposes and subsequently adapted for political messaging — makes it possible to deliver different messages to different audiences based on extraordinarily detailed psychological and behavioral profiles. A political campaign can send one message about immigration to voters identified as economically anxious, a different message about the same policy to voters identified as culturally conservative, and a third to voters whose behavioral data suggests they are primarily motivated by security concerns — all simultaneously, with no single voter knowing the others are receiving different messages. This is propaganda calibrated to individual psychological profiles at a scale and precision that Goebbels or Lippmann could not have imagined.
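To make the mechanism concrete, here is a minimal sketch of segment-keyed message delivery at the data-structure level. Everything in it (the segment labels, the message text, the profile format) is invented for illustration; real targeting systems operate on far richer behavioral profiles and probabilistic segment assignments.

```python
# Minimal sketch of segment-keyed message delivery. All data is invented for
# illustration. The same policy is framed differently for each audience
# segment, and no recipient sees the variants sent to the other segments.
MESSAGE_VARIANTS = {
    "economically_anxious":    "This policy protects local jobs and wages.",
    "culturally_conservative": "This policy defends our traditions.",
    "security_focused":        "This policy keeps our communities safe.",
}

def select_message(voter_profile: dict) -> str:
    """Pick the variant matching the voter's inferred segment."""
    segment = voter_profile.get("inferred_segment")
    # Fall back to a generic appeal when no segment matches.
    return MESSAGE_VARIANTS.get(segment, "This policy is good for the country.")

print(select_message({"inferred_segment": "security_focused"}))
```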
The third new feature is speed. Digital content can achieve mass reach before any fact-checking infrastructure can respond. The asymmetry between the speed of production and the speed of verification is a structural advantage for propagandists. A false story can reach ten million people in six hours; a thorough debunking, produced by reporters working at the speed of responsible journalism, reaches a fraction of that audience days later, after the original story has already shaped the information environment. This is not an accident: it reflects the fundamental economics of attention. False and outrageous content reliably generates more engagement than careful, nuanced correction — and engagement is what digital platforms are designed to maximize.
The fourth new feature is persistence. Pre-digital propaganda left physical traces that could be lost, suppressed, or degraded. The Soviet Union could airbrush Trotsky out of photographs; later generations could eventually recover the originals and document the falsification. Digital content, once published, is effectively permanent. Screenshots, archives, and distributed caches mean that content never fully disappears, which creates its own paradox: the same persistence that allows propagandists to maintain a consistent disinformation environment also allows researchers to document it. The evidence of digital propaganda campaigns is, in principle, recoverable in ways that historical propaganda often was not.
The fifth and perhaps most profound new feature is platform architecture as propaganda enabler. Social media platforms were designed to maximize engagement — not to optimize for truth, informed deliberation, or civic health. The algorithmic systems that determine what content users see are trained on engagement metrics that systematically favor emotionally intense, outrage-inducing, and identity-affirming content over accurate, nuanced, or challenging content. This is Ellul's structural propaganda updated for the twenty-first century: the propaganda is not primarily produced by any individual propagandist. It is produced by the interaction between human psychology, enormous quantities of content, and algorithms optimizing for a metric that happens to reward the features propaganda has always used. Propagandists have learned to exploit this architecture, but the architecture would produce propaganda-favorable conditions even if they hadn't.
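A stylized ranking function makes the structural point visible. The feature names, weights, and example posts below are invented for illustration; the point is only that an objective built from engagement signals contains no term for accuracy.

```python
# Stylized engagement-ranking score. Feature names, weights, and posts are
# invented for illustration. Note what the objective contains, and what it
# does not: there is no term anywhere for whether a post is accurate.
def engagement_score(post: dict) -> float:
    return (
        1.0 * post["predicted_clicks"]
        + 2.0 * post["predicted_shares"]
        + 1.5 * post["predicted_comments"]
    )

candidate_posts = [
    {"id": "careful-correction", "predicted_clicks": 2,
     "predicted_shares": 1, "predicted_comments": 1},
    {"id": "outrage-bait", "predicted_clicks": 9,
     "predicted_shares": 8, "predicted_comments": 12},
]

feed = sorted(candidate_posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # the outrage post ranks first
```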
There is also a sixth feature worth naming explicitly: feedback loops between propagandists and their audiences. In the broadcast era, propaganda was largely a one-directional phenomenon — the propagandist transmitted, the audience received, and the propagandist had limited real-time information about which messages were landing. Digital platforms give propagandists something unprecedented: instant, granular data about audience response. A disinformation operation running across social media can see, in near-real time, which narratives are generating shares, which emotional framings are producing the most engagement, and which audience segments are most receptive to which messages. This data-driven iteration allows propaganda campaigns to evolve rapidly, discarding what doesn't work and doubling down on what does, with a speed and precision that no broadcast-era propagandist could have achieved. The result is not just propaganda at scale — it is propaganda that continuously optimizes itself against its own measured effects on audiences.
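In algorithmic terms, the iteration loop described above is a bandit problem: allocate impressions across message variants, observe responses, and shift budget toward whatever performs. The epsilon-greedy sketch below, with invented variant names and response rates, shows how little machinery the optimization requires.

```python
import random

# Epsilon-greedy sketch of data-driven message iteration. Variant names and
# response rates are invented for illustration.
variants = ["fear_frame", "anger_frame", "pride_frame"]
true_engagement = {"fear_frame": 0.03, "anger_frame": 0.07, "pride_frame": 0.02}

counts = {v: 0 for v in variants}  # impressions served per variant
wins = {v: 0 for v in variants}    # engagements observed per variant

def choose(epsilon: float = 0.1) -> str:
    # Mostly exploit the best-performing variant so far; occasionally explore.
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: wins[v] / max(counts[v], 1))

for _ in range(10_000):  # 10,000 simulated impressions
    v = choose()
    counts[v] += 1
    if random.random() < true_engagement[v]:  # simulated audience response
        wins[v] += 1

# Impressions typically concentrate on the anger frame: the loop finds what
# works without ever asking why it works.
print(counts)
```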
What remains constant across the digital transformation is the underlying psychology. The cognitive biases that propaganda exploits — confirmation bias, in-group favoritism, availability heuristic, authority deference, emotional reasoning — are evolved features of human cognition that long predate the internet. Digital propaganda is effective not because it invented new ways to manipulate people but because it found ways to deploy ancient manipulation techniques at unprecedented scale, speed, and precision. Part 2 of this book examines those techniques in detail; Parts 5 and 6 return to the specific features of the digital environment. For now, it is enough to note that the study of propaganda in the twenty-first century requires understanding both what is enduringly the same and what has genuinely changed.
Research Breakdown: The Evolution of a Definition
Study: Jowett, Garth S., and Victoria O'Donnell. Propaganda and Persuasion, 1st edition (1986) through 7th edition (2019).
What it shows: Tracking the evolution of a single scholarly definition across seven editions over thirty-three years reveals how the field has responded to changing communication technologies. The 1986 definition focused on mass media. The 1999 edition added internet-specific concerns. The 2019 edition explicitly addresses social media, algorithmic amplification, and computational propaganda.
Key finding: The core of the definition has remained stable — deliberateness, systematicity, the aim of influencing behavior. What has changed is the scope of what counts as "systematic." An individual tweet is not systematic. But an individual tweet that is part of a coordinated campaign, amplified by bots, targeted at a specific demographic's filter bubble, and repeated hundreds of times across platforms — that meets the definition. The same unit of content can be propaganda or not depending on the system it operates within.
Why this matters: Propaganda analysis is not just analysis of individual messages. It requires understanding the infrastructure through which those messages move. This is one of the reasons Chapter 17 (Algorithms and the Attention Economy) appears in this book.
Research Breakdown: Chomsky and Herman's Content Analysis Methodology
Study: Herman, Edward S., and Noam Chomsky. Manufacturing Consent: The Political Economy of the Mass Media. Pantheon Books, 1988.
The methodological question: How do you empirically test a systemic theory of media bias? Chomsky and Herman's answer was elegant: find pairs of comparable events where the predicted bias would produce measurable differences in coverage, and measure those differences.
Their most powerful empirical test used what they called "worthy" and "unworthy" victims — a deliberately provocative framing that described the logic of the bias they were testing. The prediction was that victims of enemies of the United States ("worthy" victims, whose suffering embarrasses an adversary) would receive systematically greater and more sympathetic coverage than comparable or more numerous victims of U.S. allies or client states ("unworthy" victims, whose suffering might embarrass American foreign policy).
The test case Chomsky and Herman used most extensively compared U.S. media coverage of the murder of a Polish priest, Father Jerzy Popieluszko, by agents of the Communist Polish government, with coverage of the murders of dozens of religious workers — priests, nuns, and lay workers — in U.S.-allied Central American states, primarily El Salvador and Guatemala. By any objective metric, the Central American killings were more numerous and more systematic. By the prediction of the Propaganda Model, the Polish case should receive dramatically more coverage.
The measurement was meticulous: Chomsky and Herman counted column inches in major newspapers, tracked story placement (front page versus inside pages), analyzed the language used to describe victims and perpetrators, and coded for the presence of official condemnation and expressions of outrage. The results confirmed the prediction by wide margins. The Polish priest received more coverage than all of the Central American religious victims combined, despite the far greater scale of the Central American violence. More tellingly, the framing differed dramatically: the Polish case generated consistent editorial outrage and demands for accountability; the Central American cases were typically covered in passive language that obscured perpetrator responsibility, often relying on government sources that minimized or disputed the atrocities.
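The shape of that comparative design is easy to see in miniature. In the sketch below, invented placeholder numbers stand in for the coded data; the same metrics are aggregated for each side of a matched pair and then compared.

```python
# Comparative content-analysis sketch in the spirit of the paired-case design.
# All numbers are invented placeholders, not the study's data; the point is
# the shape of the comparison, not the values.
coded_stories = [
    # (case, column_inches, front_page, editorial_outrage)
    ("worthy_victim",   42, True,  True),
    ("worthy_victim",   18, False, True),
    ("unworthy_victim",  6, False, False),
    ("unworthy_victim",  4, False, False),
]

def summarize(case: str) -> dict:
    rows = [r for r in coded_stories if r[0] == case]
    return {
        "stories": len(rows),
        "column_inches": sum(r[1] for r in rows),
        "front_page_rate": sum(r[2] for r in rows) / len(rows),
        "outrage_rate": sum(r[3] for r in rows) / len(rows),
    }

for case in ("worthy_victim", "unworthy_victim"):
    print(case, summarize(case))
```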
What the methodology demonstrates: Chomsky and Herman's contribution was not merely empirical — it was methodological. By using the comparative case design, they neutralized the most common objection to claims of media bias: that any individual instance of biased coverage could be explained by factors specific to that story. The systematic nature of the discrepancy across multiple pairs of comparable cases was harder to explain away. Their approach has been replicated and critiqued by subsequent scholars; some find their case selections non-representative, others find the effect robust across different sampling methods. What remains undisputed is the methodological insight: if you want to test for systemic bias, you need to look at patterns across comparable cases, not individual instances.
Why this matters for propaganda study: The Chomsky-Herman methodology is a model for the kind of analysis this course will practice. Individual pieces of communication are hard to evaluate in isolation — the reasons for any single editorial decision are genuinely ambiguous. Patterns across large samples of comparable communication are more diagnostic. Throughout this course, you will be asked to think not just about individual propaganda artifacts but about the systems and patterns they are embedded in.
Primary Source Analysis: Bernays, Propaganda (1928), Chapter One
Edward Bernays opened his 1928 book — which he titled, without irony, Propaganda — with one of the most audacious paragraphs in the history of public relations:
"The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of."
Source: Bernays, Edward L. Propaganda. New York: Liveright, 1928, p. 9.
Bernays is saying, openly and without apparent discomfort, that democracy is not governed by its citizens. It is governed by the specialists who manage what citizens think. He is not offering this as criticism. He is describing a system he believes is not only inevitable but beneficial — and which he built.
Analytical observations:
Source: Bernays, a consulting professional with direct financial interest in normalizing the practice he is describing. This does not mean he is wrong, but it is relevant context.
Message content: The passage makes three distinct claims. First, that manipulation of public opinion is a feature of democracy, not a bug. Second, that this manipulation is conducted by a class of specialists who are largely invisible to the public. Third, that this is how things should be.
Emotional register: Striking for its absence of apology or hedging. Bernays is calm and matter-of-fact. The register is one of authority and expertise.
Implicit audience: Other professionals. The book was written for people in business, not for the general public. Bernays is explaining to his peers why their work is not only acceptable but necessary.
Strategic omission: Bernays does not ask whether the public might have an interest in knowing it is being managed. He does not address the question of what happens when the "intelligent manipulation" serves the interests of the manipulators rather than the manipulated. He presents manipulation as a neutral technical problem, not an ethical one.
What this source reveals: Bernays was unusually honest about what he was doing. Most propaganda does not announce itself. The value of this text is precisely that it does — it is a window into the worldview of a practitioner who had not yet learned to hide his methods.
Defining Propaganda: A Practitioner's View
After the second class meeting, Sophia stayed behind. Webb was erasing the whiteboard — a habit she would come to associate with the transition from class time to office hours, from performed certainty to admitted uncertainty.
"Can I ask you something?" she said.
"That's why I'm here," he said, without turning around.
"If everything is on a spectrum — if advertising and PR and journalism and education all shade into propaganda depending on context — then doesn't 'propaganda' just mean 'communication I disapprove of'? What's the point of a term that precise if everything qualifies under the right conditions?"
Webb set down the eraser and turned around. He considered the question with the expression Sophia would later learn meant he had thought about it before and still didn't have a clean answer.
"The point," he said, "is accountability."
He sat on the edge of his desk, which was his thinking position. "When we call something propaganda — carefully, after applying the definition, not just because we dislike its conclusions — we're making a specific claim. We're saying: this communication was designed to bypass your ability to evaluate it. Which means the appropriate response is not just to disagree with it. The appropriate response is to ask who designed it, with what resources, to serve whose interests, and what institutional or legal mechanisms might constrain them."
Sophia considered this. "So the definition determines the remedy."
"Exactly right. Call it propaganda — in the precise sense — and you're calling for media literacy, for platform regulation, for disclosure requirements, for accountability. Call it 'just persuasion' and you're saying the remedy is just to argue back. Those are completely different responses to the same phenomenon. The precision of the definition is not an academic exercise. It is a practical one."
He picked up the eraser again. "The other thing is resistance strategy. If you know you're dealing with a systematic attempt to bypass your critical reasoning, you can practice specific cognitive defenses — inoculation, source analysis, cross-referencing. If you just think you're dealing with someone who disagrees with you, you argue about the content. One approach is effective; the other usually isn't. Precision in definition is the precondition for an effective response."
Sophia wrote that down: Precision determines accountability. Precision determines remedy. Precision determines resistance.
Cross-Cultural Definitions: Non-Western Perspectives
The study of propaganda in Western universities has an unexamined assumption built into it: that "propaganda" is a pejorative term, that the activity it describes is inherently negative, and that the appropriate scholarly posture is critical. This assumption is historically parochial in ways that matter analytically.
In the Soviet Union, the word "propaganda" carried no inherent negative connotation. Soviet doctrine distinguished between propaganda — the in-depth education of committed party members and cadres in Marxist-Leninist theory — and agitation, the simpler, emotional persuasion of the masses toward specific political actions. Both terms were used approvingly. The Soviet state ran an entire apparatus of what it called "agitprop" — agitation-propaganda — as a legitimate and officially celebrated dimension of political education. The assumption embedded in this usage was that organized political communication in service of correct ideology was not manipulation but education: the transmission of true consciousness to those who had not yet achieved it. The pejorative sense of propaganda, in Soviet usage, was reserved for the communications of the capitalist West, which were understood as the ideologically dishonest defense of class interests.
This is not merely a semantic curiosity. It reveals that the negative framing of "propaganda" in Western usage reflects a specific historical and ideological context: the Cold War, during which Western liberal democracies had strong rhetorical reasons to treat all organized political persuasion as suspect (Soviet-style) while simultaneously treating their own political communication as simply "public information," "education," or "free speech." The Cold War framing normalized a double standard that is still embedded in the language we use: "propaganda" describes what adversaries do; "public diplomacy," "messaging," and "strategic communication" describe what we do.
The Chinese concept of 宣传 (xuānchuán) follows a similar pattern. The standard translation of xuānchuán in most Chinese-English dictionaries is "propaganda" or "publicity" — but the term carries no inherently negative connotation in Chinese. It means something closer to the spread of ideas, information, or values through communication, and it is used neutrally to describe activities ranging from commercial advertising to public health campaigns to political education. The Chinese Communist Party has a 宣传部 (Xuānchuán Bù), typically translated in Western media as the "Propaganda Department," a translation that immediately sounds sinister to English ears — though the department's activities are not functionally different from the communications offices of most large governments and corporations. The negative charge is in the translation, not the original.
What do these non-Western framings reveal? Several things. First, they confirm that the pejorative loading of "propaganda" in English is not a neutral analytical achievement — it is a product of specific historical and geopolitical struggles. Second, they remind us that every society conducts organized political communication, and that the question of which communication counts as legitimate and which as propaganda is never decided on purely logical grounds. It is decided by power: the perspectives that define which communication is "information" and which is "propaganda" are the perspectives of those with enough institutional authority to make their definitions stick. Third, and most usefully for our purposes, the cross-cultural comparison highlights the feature of Western propaganda critique that is genuinely analytically distinctive: not the pejorative label, but the emphasis on the protection of individual cognitive autonomy. The working definition in this book is concerned with propaganda's effect on the audience's capacity for independent judgment. That concern — for the individual's right to reason freely — is not universal; it reflects specific philosophical traditions, and acknowledging that does not require abandoning it.
Ingrid Larsen, the Danish exchange student in Webb's seminar, raised this point during the third class meeting. "In Denmark," she said, "we have a concept — meningsdannelse — that means something like 'the formation of opinion.' There's an assumption built into Danish civic culture that organized political communication is a normal and legitimate feature of democracy, and that the boundary between information and propaganda is drawn by whether the communication respects the audience's right to make up their own mind. You don't have to call it propaganda to study it critically. You just have to ask: does this communication help people think, or does it help someone else think for them?"
Webb let the room sit with that for a moment. "That," he said, "is as good a one-sentence definition as I've heard. Write it down." Sophia did. Later, reviewing her notes, she circled the phrase "think for them" and drew a line to Webb's working definition on the previous page. The two framings pointed at the same thing from different directions. That convergence, she suspected, meant they were getting close to something real.
Synthetic Media and the Definitional Frontier
Every definition of propaganda examined in this chapter was constructed in an era when the basic relationship between communication and reality was relatively stable. A photograph depicted something that had happened. A recorded speech captured words that had been spoken. A document was either genuine or forged — and forgery, while possible, required technical skill, left detectable traces, and was limited in scale. The propagandist's toolkit included selective framing, misleading context, and strategic omission, but raw fabrication of audio-visual evidence was expensive, identifiable, and comparatively rare.
That era is ending. The emergence of synthetic media — AI-generated images, audio, and video that depict people doing or saying things they never did or said — creates definitional challenges for propaganda studies that none of the classical frameworks fully anticipated.
Deepfakes and the authenticity problem. Deepfakes are video or audio recordings generated or significantly altered by machine learning systems to produce convincing fabrications. The technology became publicly accessible around 2017 and has improved at a pace that has consistently outrun detection capabilities. By the early 2020s, generating a convincing synthetic video of a public figure required neither specialized hardware nor advanced technical training — freely available software running on a consumer laptop could produce output that deceived a significant proportion of viewers on first inspection. By the mid-2020s, the detection problem had become sufficiently acute that major platforms invested in watermarking, provenance verification, and synthetic-media detection infrastructure — with uneven and often lagging results.
What deepfakes introduce that was previously theoretical is fabricated primary evidence. Previous propaganda had to work with real events, selectively framed. The fabricator of a deepfake is not selecting from reality — they are constructing reality wholesale, in a form that the audience's default truth-detection mechanisms (Does this look like a real video? Does the person's voice sound natural? Are their lip movements synchronized?) are not equipped to evaluate. The phenomenological experience of watching a deepfake is, for most viewers under most conditions, indistinguishable from watching a genuine recording. This means that the audience cannot rely on the most basic epistemic resource they would normally bring to evaluating visual evidence: their own perceptual judgment.
Automated content generation as propaganda infrastructure. Deepfakes represent one end of a synthetic media spectrum. At the other end — less dramatic but arguably more consequential — is the large-scale, automated generation of text designed to populate the information environment with coordinated, apparently organic content. Large language models can generate plausible news articles, social media posts, letters to the editor, product reviews, and political commentary at a scale no human workforce could match. When deployed as part of a coordinated influence campaign, AI-generated text eliminates one of the resource constraints that previously limited propaganda operations: the cost of content production. An operation that previously required hundreds of human workers to simulate organic grassroots discussion can now generate equivalent volume with minimal human oversight.
The definitional question this raises is significant. The working definition in this chapter emphasizes intent, systematicity, and bypass of critical reasoning. All three elements are present in AI-assisted influence operations. But a new element is added: the non-human origin of the persuasive content. When a piece of communication is generated by an algorithm, who is the communicator? The developer of the model? The operator who deploys it? The political actor who funds the operation? The platform that amplifies the output? The diffusion of authorship across multiple technical and organizational layers is not accidental — it is one of the structural features that makes AI-generated propaganda attractive to those who wish to avoid accountability.
The provenance crisis. Scholars of media and communication have begun describing this situation as a provenance crisis — a breakdown in the chain of evidence that connects a piece of communication to its origin. For most of human history, the authenticity of a document, recording, or image could in principle be established through physical examination, chain-of-custody documentation, or forensic analysis. In a world of synthetic media, these verification mechanisms are under systematic pressure: detection capabilities lag generation capabilities, provenance metadata can be stripped or forged, and the sheer volume of synthetic content overwhelms the institutional capacity to evaluate it. The result is not only that specific fabrications are harder to identify — it is that the general credibility of authentic content is undermined. When audiences cannot reliably distinguish genuine recordings from fabrications, they may rationally discount all recordings, including the genuine ones. This is, for some propagandists, the intended effect: not to convince audiences that a specific fabrication is true, but to convince audiences that no evidence can be trusted — a state of epistemic paralysis from which the propagandist's preferred narrative is often the only available anchor.
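Provenance verification, at its simplest, is a cryptographic binding between a piece of content and a key held by its originator. The sketch below uses an HMAC with a shared key as a self-contained stand-in for the public-key signatures real provenance standards (such as C2PA) use; all names and keys are illustrative. It shows both why the mechanism works in principle and why stripping the metadata defeats it.

```python
import hashlib
import hmac

# Minimal provenance sketch. Real provenance standards (such as C2PA) use
# public-key signatures; an HMAC with a shared key stands in here to keep the
# example self-contained. All names and keys are illustrative.
ORIGINATOR_KEY = b"newsroom-signing-key"  # held by the content's originator

def sign(content: bytes) -> str:
    """Publish-time: bind these exact bytes to the originator's key."""
    return hmac.new(ORIGINATOR_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check-time: any alteration of the bytes breaks the binding."""
    return hmac.compare_digest(sign(content), tag)

original = b"frame data of a genuine recording"
tag = sign(original)

print(verify(original, tag))                        # True: intact provenance
print(verify(b"frame data of a fabrication", tag))  # False: content swapped
# The asymmetry the text describes: if the tag is stripped in transit, there
# is nothing left to verify, and absence of provenance proves nothing.
```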
Revisiting the working definition. Does the emergence of synthetic media require revising the five-part working definition established earlier in this chapter? Reviewing each element:
- Intentional: yes. AI-assisted influence operations have communicators trying to achieve an effect, even when the execution is automated.
- Designed to influence attitudes, beliefs, or behavior: yes. These operations are organized and coordinated around specific attitudinal and behavioral targets.
- Uses techniques that bypass or overwhelm critical reasoning: yes. Synthetic media specifically targets the audience's perceptual and evidentiary reasoning, not just its analytical reasoning.
- In service of the communicating party's interests: yes.
- Often at the expense of autonomous judgment: yes, in the specific sense that the content degrades the audience's capacity to evaluate evidence independently.
The working definition survives contact with synthetic media, but it needs one addition: the recognition that "bypassing critical reasoning" now includes bypassing perceptual evaluation, not just analytical evaluation. Propaganda has historically worked by exploiting cognitive biases in the processing of real information. Synthetic media propaganda works by substituting fabricated information in the place where the audience's epistemology expects real evidence — targeting a level of trust that most definitional frameworks assumed was not accessible to the propagandist.
During the fourth class session, Webb paused in his explanation of this material and looked out at the room.
"Here's the practical upshot," he said. "Every definition of propaganda we've discussed assumes that the propagandist is working with reality — selecting from it, framing it, distorting it. That assumption is now in question. And when that assumption breaks, the audience's default epistemology — trust your eyes, trust recordings, trust what seems real — becomes a liability rather than an asset."
He let that sit.
"The study of propaganda," he said, "has always been partly a study of the gap between evidence and belief. That gap is now larger than it has ever been. And the frameworks we've developed to navigate it were built for a narrower gap."
Tariq had been taking notes throughout. He raised his hand. "So is there a version of the working definition that's future-proof? One that holds up regardless of what the technology becomes?"
Webb smiled. "The version that holds up," he said, "is the one that focuses on the relationship between the communicator and the audience's capacity to reason. Not on what technology is being used. Not on whether the content is technically true or false. But on whether the communication is designed to work with the audience's ability to evaluate evidence, or against it. A deepfake is propaganda not because it's false. It's propaganda because it's designed to foreclose the audience's ability to know that it's false. That's the diagnostic question. It will outlast every specific technology."
Debate Framework: Does Intent Define Propaganda?
The question: Is communication propaganda only if the communicator intends to manipulate — or can propaganda emerge from structural conditions regardless of any individual actor's intent?
Position A: Intent is necessary. The intentional view holds that without a communicator who is deliberately trying to bypass critical reasoning, what we have is misinformation (accidental), error (honest), or structural bias (a different problem requiring different solutions). If we call everything propaganda, we lose the ability to hold specific actors accountable. Intent also matters ethically: a journalist who gets the facts wrong is culpable in a different way from a propagandist who fabricates them.
Position B: Intent is not necessary — and focusing on it is a trap. Ellul's structural view holds that the most dangerous propaganda is the kind no one designed. Advertising normalizes consumerism without any individual advertiser intending to shape society's values. Cable news reinforces in-group identity without any individual anchor intending to fragment the public sphere. Social media algorithms optimize for engagement without any individual engineer intending to amplify outrage. The cumulative effect is a propaganda environment, but there is no propagandist to hold accountable.
The implications diverge: If intent is necessary, propaganda analysis is about identifying bad actors. If intent is not necessary, propaganda analysis is about identifying systems — and the question becomes structural rather than moral.
Position C: Intent is irrelevant — effects are what matter. A third position, drawing on consequentialist ethics, argues that the intent-versus-structure debate is a distraction from the only question that matters practically: what does this communication do to the people who receive it? A message that systematically distorts someone's understanding of the world, impairs their capacity for independent judgment, and serves interests other than their own is propaganda in the morally relevant sense — regardless of whether anyone intended those effects, and regardless of whether those effects emerge from one persuader's deliberate plan or a system's structural logic.
The consequentialist view has real analytical appeal. It shifts the investigative burden from the difficult task of proving intent (which propagandists are careful to conceal) to the more tractable task of measuring effect (which can be studied empirically). It avoids the problem of the "good-faith" propagandist — the ideologically committed actor who genuinely believes they are serving the audience's interests while systematically manipulating them. And it provides a unified framework for analyzing both intentional propaganda and structural propaganda without needing two separate categories.
The consequentialist position also has weaknesses. If the definition of propaganda is purely effect-based, then any communication that produces false beliefs — including honest journalism that turns out to be wrong — qualifies. More seriously, the consequentialist view has no natural principle for distinguishing effective advocacy from propaganda: if a labor union produces communication that genuinely shifts workers' beliefs about their economic interests by giving them accurate information presented compellingly, has it produced propaganda? The beliefs have been influenced; the effects are real. The consequentialist cannot easily say no without importing some criterion — typically about the methods used — that reintroduces something like the intent criterion through the back door.
Where does this leave us? The working definition above requires intent. But the three-position framework reveals that this choice is neither obvious nor cost-free. The book's structural chapters (Parts 3, 6, and 7) will return to the question of whether that requirement is too narrow. Hold all three positions simultaneously as you move through the material. In practice, the most useful analytical stance is usually to ask both questions at once: what was the intent, and what were the effects — and when they diverge, why?
Historical Timeline: The Word "Propaganda," 1450–Present
- 1450 — Gutenberg's printing press reaches practical operation in Mainz. The technology that will make mass propaganda possible for the first time becomes available. Within decades, printers across Europe are producing political, religious, and commercial content at scales previously unimaginable.
- 1517 — Martin Luther distributes his Ninety-Five Theses, which spread rapidly via the printing press across Germany and Europe. The pamphlet as a medium of mass political persuasion is established. Both Protestant and Catholic factions produce prodigious quantities of pamphlets, woodcuts, and broadsides designed for low-literacy audiences.
- 1622 — Congregatio de Propaganda Fide established by Pope Gregory XV. Neutral usage: the spread of doctrine. The institutional response to the Reformation's information war.
- 1640s — English Civil War produces one of history's most intense pamphlet wars. Royalist and Parliamentarian factions flood the country with competing accounts of events, religious justifications, and calls to arms. John Milton's Areopagitica (1644), the first major argument for press freedom, is itself a product of this propaganda environment.
- 1780s–1790s — The French Revolutionary press transforms political communication. Newspapers multiply; radical pamphlets circulate in the hundreds of thousands; popular images and songs spread revolutionary ideology. The Committee of Public Safety during the Terror exercises propaganda control to a degree not previously seen in a nominally republican government.
- 1795–1815 — The French Revolutionary and Napoleonic eras see the term used for political communication across Europe, still largely neutral.
- 1799–1815 — Napoleon Bonaparte becomes one of history's most deliberate managers of his own image. He commissions paintings, controls the press, stages ceremonies and coronations with explicit attention to their symbolic messaging, and ensures that his military bulletins — distributed throughout France and occupied Europe — present his campaigns in the most favorable light. The "Napoleonic legend" is substantially a propaganda construction, built during his lifetime and consolidated during his St. Helena exile.
- 1895–1898 — The era of "yellow journalism" in the United States, centered on the competition between William Randolph Hearst's New York Journal and Joseph Pulitzer's New York World. Sensationalized, often fabricated reporting on Cuba's struggle for independence from Spain is widely credited — though the historical reality is more complex — with helping push the United States toward the Spanish-American War of 1898. Hearst's alleged telegram to the illustrator Frederic Remington in Cuba — "You furnish the pictures and I'll furnish the war" — may be apocryphal, but it captures a genuine dynamic of commercially motivated news distortion serving political ends.
- 1914–1918 — World War I. All major powers run organized propaganda bureaus. British propaganda targeting U.S. neutrality is particularly effective and, eventually, widely recognized. The word begins acquiring negative connotations.
- 1920s — The term solidifies as pejorative in public usage. Bernays responds by championing the term "public relations" (his Crystallizing Public Opinion appears in 1923) and writing Propaganda (1928) in a deliberate attempt to reclaim the word's neutrality — and fails.
- 1933 — Joseph Goebbels appointed Reich Minister of Public Enlightenment and Propaganda. The word's association with totalitarian control is cemented in the Western imagination.
- 1945–1960 — The post-WWII period sees propaganda studies institutionalized as an academic field. The Institute for Propaganda Analysis (founded 1937, dissolved 1942) had done early work on identifying propaganda techniques for popular audiences. After the war, the experience of Nazi and Soviet propaganda drives substantial investment in academic research on persuasion, public opinion, and communication effects. Paul Lazarsfeld at Columbia, Carl Hovland at Yale, and Harold Lasswell (trained at Chicago, later at Yale) develop the empirical foundations of communication research that underlie the field today. The Cold War ensures that propaganda remains a central preoccupation of both academic research and government funding.
- 1948 — U.S. Congress passes the Smith-Mundt Act, authorizing government information programs aimed at foreign audiences while prohibiting domestic dissemination of the resulting materials — an implicit acknowledgment that the government produces them.
- 1962 — Jacques Ellul publishes Propagandes (translated into English in 1965 as Propaganda: The Formation of Men's Attitudes), the most theoretically ambitious treatment of the subject. Argues that propaganda is a structural feature of all technological societies, not just totalitarian ones.
- 2016–present — "Disinformation," "misinformation," and "influence operations" largely replace "propaganda" in journalistic and policy vocabulary — partly because "propaganda" sounds Cold War–era, and partly because the newer terms carry more specific technical meanings. Scholars argue about whether this represents analytical progress or rhetorical retreat.
Argument Map: Is Propaganda Distinct from Persuasion?
Central claim: Propaganda is not merely persuasion; it is a specific kind of influence that crosses a moral and analytical line.
Supporting argument 1: Propaganda bypasses rational agency, while legitimate persuasion works through it.
- Grounds: The techniques documented in Part 2 (fear appeals, false authority, repetition) are effective precisely because they exploit cognitive shortcuts rather than engaging the audience's reasoning capacity.
- Warrant: Communication that respects autonomy gives its audience the information and argument necessary to evaluate the claim. Communication that exploits bias does not.
Supporting argument 2: Propaganda conceals its intent; legitimate persuasion is transparent.
- Grounds: Advertising identifies itself as advertising. Journalism identifies its sources. Propaganda often does not identify who is communicating or why.
- Warrant: Concealment of intent removes the audience's capacity to apply appropriate skepticism.
Objection: The line is impossible to draw cleanly. All communication involves selection and framing. All advocacy involves emotional appeal.
- Response: This objection proves too much. The fact that a spectrum exists does not mean the poles are identical. Vehicular manslaughter and speeding are both violations of traffic law, but they are not the same violation. The difficulty of locating the exact line does not eliminate the line.
Action Checklist: Applying the Working Definition
When you encounter a piece of communication you suspect might be propaganda, ask these five questions:
- [ ] Intent: Is there an identifiable communicator with an interest in producing a specific attitude or behavior in you? (If yes, proceed.)
- [ ] Technique: Does the communication use techniques that work by bypassing your critical reasoning — through emotional intensity, cognitive bias exploitation, false authority, or repetition — rather than by giving you evidence and argument? (If yes, flag.)
- [ ] Interest served: Whose interests does this communication serve? Are those interests the same as yours, or are they in tension with yours? (Note the answer.)
- [ ] Omission: What relevant information is absent from this communication? What would you need to know to evaluate its claims independently? (Identify the gaps.)
- [ ] Effect on autonomy: After encountering this communication, do you feel more or less equipped to think clearly about the subject? (Trust this instinct as data.)
This checklist will not always produce a definitive answer. Sometimes it will produce "this is ordinary advocacy" or "this is legitimate advertising." Sometimes it will produce "this is propaganda." The goal is to make the evaluation explicit rather than reactive.
Inoculation Campaign: Choosing Your Community
The Inoculation Campaign begins here.
Your task for Chapter 1 is to choose the community you will analyze throughout this course. This is the community for which you will ultimately design a propaganda-resistance campaign.
What makes a good target community:
- A group you know well enough to describe specifically — not "young people" or "Americans" but "first-generation college students at a regional university" or "Spanish-speaking residents of a specific metro area" or "rural evangelical Christians in the Midwest"
- A group that faces identifiable propaganda threats you can research
- A group you can potentially reach with a real campaign
Your first deliverable: A one-page community profile that includes:
1. Name and description of the community
2. Key demographic and cultural characteristics
3. Primary media channels this community uses
4. One or two initial observations about propaganda or disinformation this community encounters
A strong community profile is characterized by specificity, not scale. The most common mistake at this stage is choosing a community so large that meaningful analysis becomes impossible: "Americans" or "social media users" or "college students" covers such heterogeneous populations, consuming such different media across such different contexts, that no inoculation campaign could meaningfully address them as a unit. Specificity is not a limitation — it is what makes the analysis rigorous and the eventual campaign realistic.
A second common mistake is choosing a community based on familiarity alone, without considering whether that community faces propaganda threats that are researchable. A strong choice is a community you know personally and one for which you can document specific influence campaigns through external sources: fact-checking databases, academic studies, journalism. Your personal knowledge provides qualitative texture; external sources provide verifiable evidence. Both are necessary.
A third mistake is choosing a community in a way that presupposes the conclusion. Approaching the analysis with the assumption that your community is a victim of propaganda and that the propagandists are already identified will produce a confirmation exercise rather than an analysis. The most analytically productive posture is genuine curiosity: you suspect your community faces propaganda threats, you have some preliminary observations, and you want to investigate whether those suspicions hold up to scrutiny. Some will. Some won't. Both findings are valuable.
Over the course of the semester, you will return to your community profile at the end of every chapter to refine it. By Chapter 10, your profile should include a detailed map of the media ecosystem your community inhabits. By Chapter 20, it should include a documented inventory of specific propaganda messages targeting your community. By Chapter 35, it should include a psychological profile of the vulnerabilities those messages exploit. The one-page profile you write this week is the seed of that analysis. The more specific and honest you are now, the more useful it will become.
Chapter Summary
Defining propaganda is harder than it looks, and getting the definition right matters analytically and ethically. This chapter has established:
- The pre-modern history of organized political persuasion — Roman coins and monuments, Reformation pamphlet wars, the Congregatio de Propaganda Fide — demonstrating that systematic influence campaigns are as old as political power itself
- The etymological origin of "propaganda" in a 1622 papal congregation — neutral at first, pejorative after WWI
- Four influential scholarly definitions: Lasswell, Bernays, Ellul, and Jowett & O'Donnell, plus the structural Propaganda Model of Chomsky and Herman
- A five-part working definition emphasizing intent, systematic organization, interest-serving, bypass of critical reasoning, and autonomy-impairment
- The Spectrum Model — propaganda as one end of a continuum from legitimate advocacy through strategic framing and systematic distortion
- The distinctions and contested boundaries between propaganda and education, public relations, advertising, and journalism
- The specific new features of digital propaganda: scalability, micro-targeting, speed, persistence, and platform architecture as enabler
- The fundamental three-position debate about whether intent, structure, or effects should define propaganda
- Cross-cultural perspectives — Soviet agitprop, Chinese xuānchuán — that challenge the assumption that the pejorative framing of propaganda is neutral or universal
- The practitioner's view: why precision in definition is a precondition for accountability, remedy, and resistance strategy
The most important takeaway from this chapter is not a definition — it is a disposition. The study of propaganda requires holding complexity: precision about what counts as propaganda while remaining alert to the structural conditions that make propaganda possible even without any individual propagandist.
Sophia walked out of that first class holding both things in her head simultaneously — the working definition in her notes and Webb's warning that it was incomplete. She had expected a course about bad actors doing bad things. What she was beginning to understand was that the subject was more interesting and more uncomfortable than that: propaganda was a problem of communication itself, embedded in the media systems, economic structures, and cognitive architecture of modern life. The question was not only how to identify propaganda when a bad actor waved a flag. The question was how to think clearly when the entire information environment was organized — by design, by accident, or by structural logic — to make clear thinking harder.
That is the problem this book is organized around. Each of the chapters ahead will add a new dimension to the working definition, test it against historical cases and contemporary examples, and push toward the practical question that underlies every analytical chapter: what do you do about it? The study of propaganda has two purposes that are inseparable. One is understanding. The other is resistance. Neither is possible without the other.
Professor Webb ended the first class by erasing the whiteboard. "Now you have a working definition," he said. "By the end of the semester, you'll know exactly why it's incomplete."