In This Chapter
- The Session That Begins Differently
- 36.1 The Positive Account: What Ethical Persuasion Is
- 36.2 The Hard Cases: When Emotional Appeals Are Ethical
- 36.3 The Narrative Problem
- 36.4 Institutional Models of Ethical Communication
- 36.5 The Effectiveness Question: Does Ethical Persuasion Win?
- 36.6 Advertising Ethics: The Industry's Internal Debates
- 36.7 Public Relations and Advocacy: Drawing the Line
- 36.8 Advocacy Journalism: Transparency vs. Agenda
- 36.9 Teaching Ethical Communication: What Works
- 36.10 Research Breakdown: Anti-Disinformation Campaign Effectiveness
- 36.11 Primary Source Analysis: The Society of Professional Journalists Code of Ethics (2014)
- 36.12 Debate Framework: Is Ethical Persuasion Possible at Scale in the Current Information Environment?
- 36.13 Action Checklist: The Ethical Communicator's Standard
- 36.14 Inoculation Campaign: Final Assembly
- 36.15 Part 6 Closing: The Communicator You Will Be
- Chapter Summary
- Key Terms
Chapter 36: Ethical Persuasion and Responsible Communication
"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function." — F. Scott Fitzgerald, The Crack-Up (1936)
"Rhetoric is the art of ruling the minds of men." — Plato, Phaedrus
The Session That Begins Differently
Professor Marcus Webb does not carry anything into the room on the day of Chapter 36.
No laptop. No printed notes. No coffee. He walks in at 8:58, which is unusual — he is typically already at the front of the room when students arrive. He closes the door behind him, pulls his chair from behind the desk rather than staying at the front, and sets it in the middle of the circle of seminar desks. He sits in it. He looks at the students for a moment without speaking.
Sophia Marin notices immediately that something is different. She has been in this seminar since the beginning of the semester, and she has learned to read Webb's body language with the same care she applies to everything else in this course. He is not performing confidence. He is — she searches for the word — deliberate. Like someone who has decided to say something and is giving himself one last moment to reconsider.
Tariq Hassan, seated across the circle, looks up from his phone. He has been the skeptic of this seminar since week one, and his skepticism is not hostile — it is constitutional, temperamental, the thing that makes him good at what he does. He has spent twelve weeks questioning every claim this course has made, demanding evidence, pushing back on frameworks he found too neat. He looks at Webb with curiosity rather than wariness.
Ingrid Larsen sets her pen down. She comes from Stockholm, where she worked for two years at SVT — Sweden's public broadcaster — before beginning her graduate degree. She knows what institutional ethics looks like in practice, not just in principle. She has been waiting for this chapter since the course began.
"Before we start," Webb says, "I want to tell you something."
The room is quiet.
"I did not go into academia first. A lot of you know this — it's in my faculty bio — but the bio doesn't say what I was actually doing before. I was in political communications. I worked for a U.S. Senate campaign in the mid-1990s and then for a consulting firm that did message development for candidates at the state and federal level. I was good at it. I knew how to find the emotional center of a message, how to test it, how to deliver it through the right channels. I understood framing before I had a name for it. I knew about illusory truth before I'd read a word of Hasher, Goldstein, and Toppino."
He looks at his hands for a moment.
"There was a campaign — I'm not going to name it, because the individual involved is still in public life — where I helped develop a messaging strategy that I knew at the time was not fully honest. Not manufactured lies. Nothing that a fact-checker would catch in 1997. But it was strategically incomplete. It emphasized certain truths in ways designed to produce a false impression of the overall situation. I knew what I was doing. I was good enough at the craft to know exactly what I was doing."
He pauses. "The candidate won. By a substantial margin. And I sat in the victory party and felt — nothing. Nothing that felt like success. I left that firm eight months later. I applied to doctoral programs in communication and started the work of trying to understand the thing I had been participating in."
He looks up at the room.
"I'm telling you this not for sympathy, and not because I think it makes me more credible. I'm telling you because the question this chapter is asking — can you persuade people without manipulating them? — is not an academic question for me. It cost me a career I was good at. It has been the professional obsession of the twenty-five years since. And it is the question I want to try to answer honestly with you today."
The room is very quiet. Tariq puts his phone in his bag. Sophia looks at her notebook but doesn't write anything.
After a moment, Webb stands, returns the chair to its place, and picks up his marker.
"Let's begin."
36.1 The Positive Account: What Ethical Persuasion Is
The preceding chapters of Part Six have been largely diagnostic. Chapter 31 asked what media literacy can and cannot do. Chapter 32 examined the limits of fact-checking. Chapter 33 explored inoculation theory — how to build resistance to manipulation before it occurs. Chapter 34 engaged the philosophical question of what distinguishes persuasion from manipulation. Chapter 35 mapped the legal landscape of what governments can and cannot prohibit.
This chapter takes a different posture. It does not ask what propaganda is or how to resist it. It asks what its opposite looks like — not merely the absence of propaganda but the presence of something substantive and describable. What does ethical persuasion actually look like, and why is that question harder than it sounds?
The standard approach in ethics of communication defines ethical persuasion negatively: it is persuasion that does not manipulate, does not deceive, does not exploit vulnerabilities. These negative criteria are necessary but insufficient. Telling someone not to lie is not the same as teaching them to tell the truth. What we need is a positive account — an account of what ethical persuasion does, not merely what it avoids.
Drawing on philosophical work in communication ethics (Johannesen, 2002; Bok, 1978; Held, 2006), as well as practical standards from journalism, public health communication, and public relations, we can identify five affirmative criteria for ethical persuasion. These criteria should be understood as a unified framework, not a checklist to be satisfied piecemeal. A communication that satisfies four of the five is not "mostly ethical" — the criteria are interdependent.
A. Transparency of Source and Intent
Ethical persuasion is identifiable. The audience knows, or can readily know, who is communicating and why. This is not merely a legal requirement (though it often is, as Chapter 35 established). It is an epistemic one. An audience cannot evaluate a persuasive message without knowing the interests of the person delivering it. A tobacco company arguing that cigarettes do not cause cancer, a pharmaceutical company funding studies on its own drugs, a political operative producing content that appears to come from grassroots sources — all of these violate the transparency criterion even when each individual factual claim is technically accurate.
Source transparency is not simply a matter of attribution. It requires that the relationship between source and message be disclosed in a way that allows the audience to understand the interests at stake. A press release attributed to "the Coalition for Responsible Energy Choices" satisfies the formal disclosure requirement while concealing that the coalition is wholly funded by fossil fuel companies. Transparency in the full sense requires disclosure of material interests, not merely of organizational names.
Intent transparency is equally important and more often neglected. An ethical communicator is clear about what they want the audience to believe or do as a result of the communication. Hidden agendas — communications that present themselves as educational or informational while actually serving persuasive goals — violate this criterion. The native advertising problem (which we examine in 36.6) illustrates this clearly: content that appears editorial while serving advertising goals violates intent transparency regardless of whether it carries a small-print "Sponsored Content" label.
B. Accuracy
Ethical persuasion makes only claims that are true and proportionate. This criterion has two components that are equally important.
"True" means that all factual claims in the communication can withstand scrutiny — that they are supported by reliable evidence, accurately cited, and not presented in ways that would be recognized by their original sources as distortions of their meaning. A communicator who cites a study accurately in its numbers but ignores the study's stated limitations, or who presents a scientific consensus that has substantially shifted since the date they cite, violates the accuracy criterion.
"Proportionate" means that the emphasis, repetition, and emotional weight given to different claims accurately reflect the underlying evidence. This is where many technically accurate communications become ethically problematic. If a political advertisement repeats a true claim about an opponent's record many times while omitting equally true and relevant context, the repetition produces an overall impression that is disproportionate to the evidence — a violation of the accuracy criterion in its deeper sense. The illusory truth effect (Chapter 11) is exploited precisely when communicators understand that repeated exposure to even accurate claims can produce inflated confidence in those claims. Ethical communication is aware of this effect and does not exploit it.
C. Completeness
Ethical persuasion does not strategically omit material information. This criterion follows from accuracy but extends beyond it. A communication can make only true claims while being profoundly dishonest by omitting the context that would change how those claims are understood.
The concept of material information, borrowed from financial disclosure law, is useful here: information is material if a reasonable person in the audience would want to know it before making the decision the communication is designed to influence. When a public health campaign reports that vaccine X reduces infection rates by 80% without disclosing that the study population differs significantly from the target audience, the omission of that difference is material. When an advocacy organization presents survey results without disclosing who funded the survey or how the questions were worded, the omission is material.
The completeness criterion creates genuine tensions with the practical demands of communication. No message can contain all relevant information; every communication involves selection. The ethical question is whether the selection is made in the service of clarity or in the service of a particular conclusion — whether omissions help the audience understand what they most need to know or prevent the audience from knowing what would undermine the communicator's goal.
D. Respect for Autonomy
Ethical persuasion is designed to enable the audience's informed judgment, not to bypass it. This is the most philosophically complex criterion and, in many ways, the defining one.
Chapter 34 examined Caplan's distinction between epistemic autonomy — the capacity to form beliefs on the basis of one's own reasoning — and the various ways that manipulative communication attacks it. Ethical persuasion respects this capacity by providing reasoning that the audience can evaluate, evidence they can check, and arguments they can accept or reject. It does not seek compliance by exploiting cognitive biases, manufacturing emotional states that distort judgment, or making its audience less able to think critically than they would otherwise be.
This criterion has a practical implication that distinguishes ethical from unethical persuasion in observable ways: ethical persuasion performs better when the audience thinks more carefully, not less. If a message becomes less persuasive under careful scrutiny — if it relies on the audience not checking the sources, not comparing it to the record, not thinking critically about the claims — that is a diagnostic signal that something has gone wrong with respect to autonomy.
The autonomy criterion does not prohibit emotional appeals. We will address this directly in 36.2. It does require that emotional appeals not be designed to replace rational evaluation but to make information more salient, more vivid, and more motivating to act on — emotions in service of cognition, rather than in place of it.
E. Alignment with Audience Interests
Ethical persuasion serves the genuine interests of the audience, not merely the interests of the communicator. This criterion is the most contested, because interests are not always transparent, people often disagree about what their own interests are, and the concept can be used paternalistically to override audience preferences.
With those caveats, the criterion captures something real. There is a difference between a physician persuading a patient to take a medication because the physician believes it will benefit the patient and a pharmaceutical company persuading a patient to demand an expensive medication because it will increase the company's revenue. There is a difference between a public health organization persuading teenagers not to smoke because smoking will harm their long-term health and a tobacco company persuading teenagers to associate cigarettes with social desirability because it will build lifelong customers. In both cases, the persuader has an interest. The ethical question is whether that interest is aligned with the audience's genuine interest or in tension with it.
When the communicator's interests and the audience's interests diverge significantly, the ethical obligation is heightened disclosure: the audience needs to know about that divergence to evaluate the communication properly. This is not to say that all advocacy for self-interested positions is unethical — lawyers argue for their clients, lobbyists advocate for their industries, and both activities can be legitimate. But when the divergence is not disclosed, or when the message is designed to conceal that the audience's interests and the communicator's interests are not the same, the autonomy criterion is violated.
36.2 The Hard Cases: When Emotional Appeals Are Ethical
Tariq raises a predictable objection, and it is a good one.
"You've given us criteria for ethical persuasion," he says, "but the most effective persuasion is emotional, not rational. Fear, anger, hope, disgust — these are the engines of political action. If you tell people they have to use only rational appeals, you're not giving them a recipe for ethical persuasion. You're giving them a recipe for losing."
Webb nods. "That's right. And that's why criterion D doesn't prohibit emotional appeals. It requires that they respect autonomy. Those aren't the same thing. Let's get into the distinction."
The "fear appeals" literature in health communication represents one of the richest empirical bodies of evidence on when emotional persuasion is effective and when it backfires. The foundational framework, proposed by Kim Witte in 1992 and developed into the Extended Parallel Process Model (EPPM), identifies two key variables that determine whether a fear appeal is effective: perceived threat (does the audience believe the danger is real and relevant to them?) and perceived efficacy (does the audience believe they can do something effective about it?).
When perceived threat is high but perceived efficacy is low, fear appeals often backfire. The audience, overwhelmed by a danger they believe they cannot address, tends to engage in defensive responses: denial, avoidance, reactance. The anti-drug campaigns of the 1980s and 1990s — particularly the "Just Say No" and D.A.R.E. programs — provide a well-documented case of fear appeals that systematically violated the efficacy component. They amplified perceived threat without providing effective efficacy messages, and research subsequently found that D.A.R.E. graduates showed no difference in drug use rates from non-participants, with some studies finding marginally higher rates among participants (Ennett et al., 1994; West and O'Neal, 2004).
The ethical dimensions of fear appeals follow closely from the empirical ones. A fear appeal is ethical when:
- The threat is real. The danger communicated accurately reflects evidence about actual risk. A fear appeal that exaggerates the probability or severity of a threat for rhetorical effect violates the accuracy criterion even when it is effective in the short term.
- The fear is proportionate. The emotional intensity of the communication corresponds to the actual magnitude of the risk. Communicating a risk that affects 3% of a population with the same emotional register as one that affects 30% is disproportionate even when both statistics are accurate.
- An efficacy response is provided. The communication tells the audience not only what to fear but what to do. A fear appeal that leaves audiences frightened and helpless is not just ineffective — it is an ethical violation, because it produces distress without providing the agency through which that distress could be productive.
- The emotional response does not overwhelm rational evaluation. The fear is mobilizing, not paralyzing. The audience is moved to act, not frozen in denial or overwhelmed into passivity.
The "Real Cost" Campaign
The FDA's anti-tobacco campaign "The Real Cost," launched in 2014, provides one of the most carefully studied examples of ethical fear appeals in contemporary public health communication. The campaign targeted 12-to-17-year-olds who had already used tobacco products and were identified as at risk of escalating use.
The campaign's design was sophisticated precisely in its handling of the ethical dimensions of fear appeal. Rather than dramatizing lung cancer risk — a long-term consequence that teenagers reliably discount, as the research on temporal discounting consistently shows — "The Real Cost" focused on immediate, visible consequences: tooth decay, skin damage, reduced athletic performance. The emotional register was fear plus efficacy: here is something real that is happening to you now, and here is what you can do about it (quit or don't start).
The campaign was also transparent in its source (FDA), its intent (reducing youth tobacco use), and its funding (the Family Smoking Prevention and Tobacco Control Act of 2009). It made no false claims. It presented evidence in proportionate terms. And the FDA-commissioned research found that it was associated with a statistically significant reduction in youth smoking initiation — an estimated 587,000 adolescents who would otherwise have become smokers did not, between 2014 and 2016 (Farrelly et al., 2017).
"The Real Cost" demonstrates that ethical and effective are not opposites. It succeeded not despite its ethical constraints but, in part, because of them: messages that are accurate, proportionate, and trust-respecting have a longer effective lifespan than messages that work through deception, because they do not depend on audiences remaining ignorant of the truth.
Contrast this with the tobacco industry's own historical communication strategy — Chapter 15's case study on cigarette advertising and Chapter 22's examination of the industry's organized disinformation effort around cancer research (Michaels, 2008). The industry's messaging survived for decades precisely because it was sophisticated enough to maintain technical deniability. But the long-term costs — in public trust, regulatory consequences, and ultimately civil liability — were enormous. Ethical persuasion is not only the right thing. Over the long run, it is also the strategically sound thing.
36.3 The Narrative Problem
Stories are more persuasive than statistics. This is not a normative claim — it is an empirical one, replicated across decades of research in communication psychology. Melanie Green and Timothy Brock's (2000) narrative transportation theory demonstrates that audiences who become absorbed in a story — transported into its world — engage in less counter-arguing, are more accepting of story-world propositions, and show greater attitude and belief change than audiences processing the same information in non-narrative form.
This creates an apparent ethical problem for communicators committed to respecting audience autonomy. If narrative reduces critical thinking, is using narrative a form of manipulation? Should an ethical communicator avoid stories?
The answer requires a careful distinction between two kinds of narrative effect.
The first kind operates in parallel with accurate information: the story makes true information more vivid, more memorable, and more emotionally salient. The character whose lung cancer diagnosis the reader follows is representative of real patients; the risks depicted are accurate; the outcome reflects typical cases, not ones cherry-picked for maximum emotional effect. Narrative transportation here serves the same function as a good visualization or a well-chosen analogy — it helps audiences engage with and retain information they would otherwise process superficially or forget.
The second kind operates in place of accurate information: the story's emotional impact substitutes for evidence, the character's experience is selected to be maximally affecting rather than typical, and the transportation into the story prevents the audience from noticing that the case presented is an outlier, the statistics have been suppressed, or the causal chain depicted is not well supported by evidence. This is the narrative fallacy in its ethically problematic form.
The distinction maps onto a principle offered by Walter Fisher's (1987) narrative paradigm: narrative fidelity — the degree to which a story's elements ring true against the audience's experience of how the world actually works. An ethical narrative is one that could survive the audience's critical evaluation of whether it accurately represents the domain it depicts. An unethical narrative is one that depends on the audience not performing that evaluation.
The political advertisement that presents a single crime committed by an immigrant and generalizes it implicitly to the immigrant population is relying on narrative transportation to prevent the audience from asking the base rate question: what are the actual crime rates among this population compared to others? The public health campaign that tells the true story of one person's addiction, with accurate information about typical outcomes, and provides resources for seeking help, uses narrative transportation to help the audience engage with real information.
The ethical communicator uses narrative to make truth more accessible, not to make falsehood more persuasive. The test is this: if the audience could see the full statistical picture, would they feel that the narrative had illustrated that picture fairly — or would they feel they had been misled?
36.4 Institutional Models of Ethical Communication
Individual ethical commitments are necessary but fragile. The history of journalism, public relations, and public health communication is littered with individuals who maintained personal ethical commitments until the moment they needed a job, a client, or a promotion badly enough to compromise. Sustainable ethical communication requires institutional structures that make ethical practice the default rather than the exception — structures that create incentives for honesty, accountability mechanisms for errors, and cultures in which ethical violations are costly.
Several institutional models merit examination.
The BBC Editorial Standards Process
The BBC's Editorial Guidelines, developed and revised over decades, represent one of the most comprehensive institutional frameworks for ethical journalism in the world. They run to more than one hundred pages and address, with notable specificity, questions that other journalism organizations treat only in general terms: how to handle allegations about living public figures, how to report on suicide without increasing risk, how to cover elections without providing disproportionate airtime to any party, how to distinguish between news reporting and editorial comment.
What makes the BBC model instructive is not the content of the guidelines — many journalistic organizations have written codes with similar content — but the institutional mechanisms that give them force. Editorial decisions on significant stories are reviewed at multiple levels before broadcast. The BBC has an Editorial Complaints Unit that reviews complaints from the public with binding authority. It has a Governors' review function (now exercised by the BBC Board) that can override operational decisions on editorial grounds. And it has a public accountability framework — ultimately to Parliament, which sets the terms of the BBC Charter — that creates external pressure for compliance.
The BBC is not a perfect institution. Its coverage has been subject to well-documented criticisms of national bias, class bias, and institutional deference to power. But its errors tend to generate institutional response — investigations, reforms, public apologies — in ways that suggest the institutional framework is real, not merely nominal.
SVT and the Public Broadcaster Model
Ingrid Larsen notes, in the week's seminar discussion, that SVT's ethical framework operates slightly differently from the BBC's, in ways she finds worth explaining.
"SVT has the same formal mandate — impartiality, accuracy, public service," she says. "But the Swedish media culture adds something that's harder to put in a document. There's a genuine shame culture around factual errors. If you get something wrong, it's professionally costly in a way that going along with a powerful interest sometimes isn't, in other countries. I'm not saying it's perfect. But the incentive structure is different."
The public broadcaster model, represented by SVT, BBC, RTE, ABC (Australia), NHK (Japan), and others, operates from a different funding logic than commercial broadcasting. The public broadcaster's revenue does not depend on audience size in the way that commercial broadcasting does, which removes some of the structural incentive to prioritize emotionally compelling but inaccurate content over accurate but less engaging content. The public mandate — to serve all citizens, not only loyal viewers — creates at least a theoretical obligation to accuracy and plurality that commercial models lack.
Research comparing public broadcaster and commercial broadcaster coverage of the same events consistently finds that public broadcasters tend to cover more topics, cover them in greater depth, and include more diverse sources — though the direction of ideological bias in both tends to reflect the political culture of the country (Aalberg and Curran, 2012). The public broadcaster model is not a solution to political influence on media — public broadcasters in Hungary, Poland, and Turkey have been turned into government mouthpieces under pressure from governing parties. But where institutional independence is maintained, the model has produced demonstrably different editorial cultures.
First Draft and the 2017 French Election
In the weeks before the first round of the 2017 French presidential election, First Draft — a nonprofit newsroom dedicated to fighting disinformation — convened a coalition of seventeen news organizations to verify content about the election in real time. The project, known as CrossCheck, deployed teams of journalists who tracked and labeled false or misleading claims as they spread online, providing verified information to partner organizations who could incorporate it into their own coverage.
The project was tested severely on the final Friday before the runoff election, when Emmanuel Macron's campaign was targeted by a coordinated data breach and online smear campaign — the "#MacronLeaks" operation, a mass release of hacked campaign documents accompanied by false claims, including the claim that the candidate held an offshore bank account. CrossCheck had prepared protocols specifically for pre-election disinformation operations, including a rule of particular interest from an ethical standpoint: they would report on the existence of the disinformation campaign without reproducing the false claims in ways that could amplify them.
This "don't repeat the lie" protocol — sometimes called "truth-sandwich" journalism after George Lakoff's (2017) formulation — is an ethical communication strategy derived from the cognitive research on illusory truth (Chapter 11). Research on corrections demonstrates that repeating a false claim in order to correct it increases familiarity with the false claim, even when the correction is explicit. The First Draft protocol required journalists to lead with the true claim, then identify the false claim briefly and specifically, then return to the true claim — minimizing the repetition of the false version while ensuring audiences understood what was circulating.
The #MacronLeaks operation failed. Several analyses attributed this failure partly to the CrossCheck coalition's rapid and coordinated response, and partly to the fact that many French media organizations voluntarily abstained from covering the leaked documents during the election blackout period (Ferrara, 2017). The 2017 French election represents one of the few documented cases in the recent history of information operations where a coordinated disinformation campaign was effectively countered through ethical, journalistic means — without government censorship, without platform intervention, and without producing a counter-narrative that itself involved deception.
AP Newsroom Standards
The Associated Press's editorial standards, maintained and published as the AP Stylebook and separate editorial guidelines, address the disinformation coverage problem directly. The AP instructs its reporters not to repeat false claims without clear labeling, to avoid using official titles that lend false authority to claims that have been discredited, and to seek multiple independent confirmations before reporting allegations about private individuals.
Particularly relevant to the current information environment is the AP's guidance on coverage of fringe claims: reporting on the existence of a conspiracy theory or false claim in circulation requires careful framing that does not inadvertently legitimize the claim by the act of covering it. The AP standard is to explain why the claim is false or misleading before describing what the claim says — the truth-sandwich structure employed by First Draft, formalized as newsroom policy.
36.5 The Effectiveness Question: Does Ethical Persuasion Win?
Webb pauses at a critical point in the lecture and asks the question directly: "Let me be honest with you about something. Does ethical persuasion win?"
The honest answer is: not always, and not in the short term.
The structural asymmetry between ethical and unethical persuasion is well-documented. False information spreads faster than true information on social media (Vosoughi, Roy, and Aral, 2018) — a finding robust to multiple replications and multiple platforms. Emotionally arousing content, which disinformation is often specifically designed to produce, generates more engagement in algorithmic environments than emotionally neutral accurate content. Simple claims are easier to process and remember than complex accurate ones, which is why the "big lie" technique (Chapter 8) is effective even against sophisticated audiences.
Disinformation is often, in a perverse sense, better designed than ethical counter-communication. Skilled propagandists and disinformation operators invest significant resources in message development, audience testing, emotional resonance research, and delivery optimization. Many ethical communicators — public health agencies, fact-checkers, journalism organizations — operate under resource constraints that limit their capacity to do the same. The playing field is not level.
These facts matter. A framework for ethical persuasion that ignores the effectiveness question is not a framework anyone serious will use. What Tariq has been pushing on all semester — the question of whether you always have to choose between ethical and effective — is not a naive question. It is the central practical question in communication ethics.
What does the research say about the conditions under which ethical persuasion can be effective?
Condition One: Sufficient Processing Motivation
The dual-process model of persuasion (Petty and Cacioppo's Elaboration Likelihood Model, 1986; Chaiken's Heuristic-Systematic Model, 1980) predicts that accurate, evidence-based arguments are most persuasive when audiences are both motivated and able to process them carefully. Under low motivation or low ability conditions, peripheral cues — source attractiveness, emotional intensity, social proof — dominate. This means that ethical persuasion performs best with audiences who have reason to care about the issue and the cognitive capacity to engage with the evidence.
Practical implication: ethical communicators need to create processing motivation, not merely provide evidence. The best-designed accurate campaigns give audiences reasons to want to know the truth — they connect accurate information to the audience's existing values, goals, and identities. The prebunking games and inoculation approaches of Chapter 33 do this explicitly: they frame the detection of disinformation as a skill and a game, creating intrinsic motivation to process the accurate information carefully.
Condition Two: Intact Institutional Trust
The effectiveness of ethical persuasion depends heavily on the credibility of the communicating institution. Where institutional trust is high — where the audience believes the communicating organization is competent, honest, and on their side — accurate messages from that organization are processed generously. Where institutional trust has been degraded — through government lies, corporate scandals, journalistic failures, or systematic campaigns to undermine specific sources — the same accurate messages face skepticism that makes them less effective than the quality of the evidence would warrant.
This creates a paradox: the populations most targeted by sophisticated disinformation campaigns are often those whose institutional trust has been most deliberately degraded, making them least receptive to the institutional counter-communications that are most likely to be accurate. Ethical persuasion, in polarized information environments, faces the structural disadvantage that it typically comes from sources that have been pre-emptively delegitimized among the audiences most in need of accurate information.
The implication is that institutional trust is a strategic resource that requires sustained investment. News organizations, public health agencies, and educational institutions that maintain transparency about their processes, correct their errors publicly, and demonstrate consistent honesty over time build a reservoir of credibility that makes their communications more effective in crisis moments. Organizations that sacrifice credibility for short-term persuasive advantage — the organizations that shade evidence, overstate certainty, or suppress inconvenient information — find that trust, once lost, is extremely difficult to rebuild.
Condition Three: Well-Designed Truthful Messages
Accurate information can be poorly communicated. Ethical persuasion fails not only when audiences won't process it but when communicators don't design it well. The research on risk communication, health campaigns, and fact-checking is consistent on this point: accurate messages that respect design principles — clarity, salience, appropriate emotional register, action-orientation — systematically outperform accurate messages that assume the truth will speak for itself.
The anti-smoking campaigns of the 1990s that consisted of statistics about lung cancer rates were accurate and ineffective. "The Real Cost," which used the same underlying evidence and translated it into vivid, emotionally resonant, immediate experiences, was accurate and effective. The difference was design, not ethics — the ethical constraints were the same. This finding has a critical practical implication: ethical communicators who lose to unethical communicators often do so because they have under-invested in communication design, not because honesty is inherently disadvantaged.
36.6 Advertising Ethics: The Industry's Internal Debates
The advertising industry has engaged in ethical self-examination, with varying degrees of seriousness, for most of its history. The American Advertising Federation, the 4A's, and the Institute for Advertising Ethics have each produced ethical codes that share common ground while differing in significant ways. Three problems dominate current industry debates.
The Targeting Ethics Problem
Modern digital advertising has access to psychological profiles of individual consumers at a granularity unimaginable to Madison Avenue in 1960. Behavioral targeting — using observed behavior online to infer interests and serve relevant advertisements — has expanded into what researchers call psychographic targeting: using inferred personality traits, emotional states, and cognitive vulnerabilities to deliver messages specifically designed to exploit individual weakness.
The Cambridge Analytica operation (Chapter 24) brought psychographic targeting into public awareness, but the underlying capability is standard practice throughout the digital advertising industry. Insurance companies have served advertisements for anxiety treatments to users whose browsing behavior indicates anxiety. Gambling companies have targeted advertising to users whose behavioral profiles suggest gambling addiction. Political campaigns have targeted messages about immigration to users whose inferred profiles suggest they are susceptible to nativist appeals.
The ethical question is not whether targeting itself is problematic — it is standard for advertisers to serve different messages to different audiences based on inferred interests — but whether targeting that specifically identifies and exploits psychological vulnerabilities violates the autonomy criterion. If a message is designed to be more persuasive specifically because it reaches someone when their defenses are down, their emotional state is distressed, or their cognitive vulnerabilities are exposed, the message is designed to bypass rather than engage rational evaluation.
The industry has not reached consensus on where to draw this line, and the regulatory environment (Chapter 35) has not resolved it. The Institute for Advertising Ethics endorses a standard that prohibits targeting based on "sensitive vulnerabilities including mental health status, financial distress, or addiction," but the standard is aspirational rather than enforced.
The Native Advertising Problem
Native advertising — paid content designed to resemble editorial content — has grown substantially as a proportion of digital media revenue. The FTC requires disclosure that content is advertising, and most major publishers comply with the technical disclosure requirement by including a "Sponsored Content" or "Paid Post" label. Research consistently finds that disclosure labels in their current typical form are seen by a minority of readers and understood by fewer (Wojdynski and Evans, 2016).
The ethics of native advertising turns on intent transparency. A disclosure that satisfies formal legal requirements while being practically invisible to the target audience is not genuine disclosure — it is disclosure designed to provide legal protection while maintaining the persuasive benefit of editorial disguise. The ethical standard requires that disclosure be visible, comprehensible, and positioned so that readers encounter it before engaging with the content rather than after. Several outlets — The New York Times T Brand Studio, The Atlantic, BuzzFeed — have explored more prominent disclosure practices. None of these practices has been adopted as an industry standard.
The Political Advertising Problem
Commercial advertising is regulated in ways that political advertising typically is not. A pharmaceutical advertisement must include accurate information about risks and contraindications. An automobile advertisement cannot claim a fuel efficiency figure the vehicle does not achieve. A food advertisement must conform to FTC standards about health claims.
Political advertising operates largely outside these constraints. A political advertisement can make claims that are demonstrably false, present statistics in ways that misrepresent the underlying evidence, and use editing techniques to create false impressions of what opponents said or did — all without triggering enforcement action, because political speech occupies a constitutionally protected category that commercial speech does not.
The industry debate about whether political advertising should be held to the same standards as commercial advertising has intensified with the growth of digital political advertising. The arguments for higher standards are strong: political advertising directly affects democratic governance in ways that commercial advertising does not, and false political advertising undermines the informed electorate that democratic theory requires. The arguments against — rooted in First Amendment doctrine — hold that the government is not a reliable arbiter of truth in the political domain, and that the cure may be worse than the disease.
36.7 Public Relations and Advocacy: Drawing the Line
The Public Relations Society of America's (PRSA) Code of Ethics establishes what it calls the standard of "honest advocacy" — defined as representing clients and employers while being honest with audiences, never presenting false information, and disclosing conflicts of interest when they are material. This is a genuine ethical standard, not merely a public relations construction, and many PR practitioners take it seriously.
Where does advocacy become propaganda? The transition occurs along two axes: transparency and accuracy. Advocacy that is transparent about its source and agenda, and accurate in its factual claims, remains within the ethical boundary even when it is explicitly partisan. A lobbyist who accurately presents the economic case for their client's position, clearly identified as the client's representative, is engaged in legitimate advocacy even if the position serves the client's interests more than the public's.
The line is crossed when:
- The source is concealed or misrepresented (astroturfing — creating the appearance of grassroots support for a position that is actually funded by institutional interests);
- Factual claims are false or material context is deliberately omitted;
- The communication is designed to close off inquiry rather than inform it — to prevent the audience from seeking additional information rather than to provide information for the audience's own evaluation.
Two cases illustrate the distinction. The PRSA itself has identified the Mothers Against Drunk Driving (MADD) media campaigns of the 1980s as meeting its ethical advocacy standard: the organization was clearly identified, its agenda was transparent, its factual claims about drunk driving fatalities were accurate, and its emotional appeals — which were intense — were grounded in documented cases that were representative of the broader pattern rather than cherry-picked for maximum effect.
By contrast, the "Keep America Beautiful" campaign of the 1970s — promoting individual responsibility for environmental pollution — was funded substantially by beverage and packaging companies facing regulatory pressure over disposable packaging. The campaign accurately noted that individuals throw away litter. It was designed to direct public attention and legislative energy toward individual behavior rather than toward the corporate practices that generated the bulk of the waste. The campaign was advocacy in the service of interests that were concealed from its audience — a violation of the transparency criterion regardless of the accuracy of its specific factual claims.
36.8 Advocacy Journalism: Transparency vs. Agenda
The debate within journalism about the value and limits of objectivity has a long and unresolved history. The traditional objectivity model, codified in the Society of Professional Journalists' Code of Ethics and practiced in varying degrees by major wire services and broadcast networks, rests on the principle that journalism should present information and let the audience draw conclusions — that the journalist's role is not to advocate but to inform.
The critique of this model, associated with scholars like Jay Rosen (1993) and practitioners like Ta-Nehisi Coates, is that "objectivity" in practice often means presenting "both sides" of issues that do not have two equally valid sides — treating scientific consensus and motivated denial as equivalent positions, or deferring to the accounts of powerful institutional actors as if official status guaranteed accuracy, while discounting the perspectives of those without institutional power. This is what Rosen calls the "view from nowhere": a pseudo-neutral posture that systematically benefits established power while presenting itself as impartial.
Advocacy journalism — journalism that is transparent about its perspective and advocacy goals — responds to this critique by arguing that honesty about one's position is more epistemically virtuous than a false claim of neutrality. The Nation, Mother Jones, The Guardian, and The New Republic each maintain editorial perspectives that shape their coverage and are transparent about those perspectives to their readers.
What distinguishes transparent advocacy journalism from propaganda, according to the criteria established in 36.1?
First, the source is identifiable and its agenda is publicly stated. The reader of The Nation knows they are reading a left-liberal publication; the reader of the Wall Street Journal's editorial page knows they are reading a center-right one. The propaganda concern arises when the agenda is concealed — when a publication presents itself as politically neutral while systematically favoring a particular political perspective.
Second, the factual claims are verifiable. Advocacy journalism that meets ethical standards holds itself to the same factual standards as neutral journalism — its advocacy is a matter of emphasis, framing, and editorial selection, not false claims. When advocacy journalism crosses into false claims, it has become propaganda regardless of whether its agenda is disclosed.
Third, and most importantly, the advocacy journalism model requires ongoing accountability. Publications that advocate a position must be willing to publish corrections, to cover stories that complicate their preferred narrative, and to distinguish between editorial content (where advocacy belongs) and news reporting (where factual standards apply without editorial distortion). When the line between editorial and news is erased — when advocacy bleeds into the presentation of facts without being labeled as advocacy — the ethical distinction from propaganda has collapsed.
The breakdown is visible in hyperpartisan media on both sides of the political spectrum: outlets that present themselves as journalism while systematically constructing false impressions of events to serve a political agenda. The criterion that separates them from legitimate advocacy journalism is not whether they have a perspective but whether they are honest about it and accurate in their factual claims.
36.9 Teaching Ethical Communication: What Works
If the goal of communication ethics education is to produce graduates who make ethical decisions in their professional practice — not just graduates who can pass an ethics exam — the research on what educational interventions achieve that goal is sobering.
Rule-based ethics education — teaching students a code of ethics and testing their ability to apply it to cases — produces, predictably, graduates who can articulate the code and explain its application to straightforward cases. It does not reliably produce graduates who apply those rules when compliance is costly, when the violation is subtle, or when the organizational culture actively discourages compliance. This is not surprising. Rules work when enforcement is reliable and the probability of being caught is high. In professional environments where ethical violations are often invisible, undetected, and normalized by organizational culture, rule-knowledge is insufficient.
Character-based ethics education — focused on cultivating the dispositions, habits, and moral intuitions that produce ethical decisions rather than on teaching the rules — has shown more durable effects in longitudinal studies of professional behavior (Rest, 1994; Bebeau and Monson, 2008). Character education in communication ethics focuses on developing three capacities:
Moral sensitivity: the capacity to notice the ethical dimensions of a situation — to see, before it is too late, that the decision being made has ethical stakes. Many ethical failures in journalism, public relations, and public health communication involve communicators who made problematic choices without registering that they were making choices at all.
Moral reasoning: the capacity to work through complex ethical trade-offs with rigor rather than rationalization — to distinguish between genuine ethical dilemmas where reasonable people disagree and situations where rationalization has replaced reasoning.
Moral motivation and implementation: the capacity to act on ethical conclusions when doing so is costly — when the ethical choice involves professional risk, social disapproval, or economic sacrifice. This is where most ethics education falls short; it can build the first two capacities but rarely directly addresses the third.
Case study pedagogy — the method this course uses throughout — is particularly effective at building moral sensitivity and reasoning, because cases present ethical complexity in concrete, specific terms that activate the kind of practical judgment that abstract principles do not. Webb's students have spent twelve weeks examining cases that resist simple moral sorting: cases where the line between legitimate persuasion and manipulation is genuinely unclear, where effective communication involved ethical trade-offs, where principled actors made choices they later regretted. This kind of case-by-case engagement with ethical complexity builds the moral sensitivity that professional practice requires.
What builds moral motivation — the disposition to act ethically when it is costly — is less well understood. The research suggests that professional identity plays a significant role: individuals who have a strong sense of themselves as "an ethical communicator" or "a journalist who gets it right" are more likely to resist situational pressures to compromise than individuals who have a strong sense of themselves as "good at the job" without the ethical valence (Bebeau, 2002). Webb's decision to tell his own story at the start of this session is, in one reading, an exercise in moral motivation education: he is demonstrating, from the inside, what it costs to compromise and what it costs to refuse to.
36.10 Research Breakdown: Anti-Disinformation Campaign Effectiveness
Study: Roozenbeek, J., van der Linden, S., Goldberg, B., Rathje, S., and Lewandowsky, S. (2022). Psychological inoculation improves resilience against misinformation on social media. Science Advances, 8(34).
This study, building on the inoculation theory framework examined in Chapter 33, tested whether short psychological inoculation interventions — specifically, the "prebunking" video format developed by the Cambridge Social Decision-Making Lab and Google Jigsaw — could produce durable resistance to misinformation in real-world social media environments.
Design: The research team partnered with YouTube to test short prebunking videos (approximately ninety seconds each) targeting three categories of manipulative rhetoric: false dichotomies, scapegoating, and ad hominem attacks. The videos were deployed as pre-roll advertisements to YouTube users in the United States, selected to reach users across the political spectrum. Pre- and post-exposure measures assessed participants' ability to identify manipulation techniques and their confidence in resisting them.
Key findings: Participants exposed to the prebunking videos showed significantly improved ability to identify the targeted manipulation techniques compared to control groups. Crucially, the effect was not confined to misinformation about any particular political topic — the inoculation transferred across topics, suggesting that the intervention was building a generalizable critical thinking skill rather than providing specific corrective information about specific false claims.
Critically for the ethical persuasion question, the prebunking videos themselves met all five criteria established in 36.1: they were transparent in source (Google and Cambridge), accurate in their characterization of manipulation techniques, complete (they explained why the techniques are manipulative rather than simply labeling them), respectful of audience autonomy (they built critical thinking skills rather than simply telling audiences what to conclude), and aligned with audience interests (the tools they provided were useful for evaluating any manipulation, including from the communicator's own political perspective).
Limitations: The study measured the ability to identify manipulation and self-reported confidence in resisting it. It did not directly measure behavior in real information encounters — whether participants who improved on these measures would actually consume less misinformation or share it less. The study duration was short, leaving open questions about the durability of effects over months or years.
Implications for ethical persuasion: The Roozenbeek et al. findings suggest that communicators committed to ethical practice can design effective interventions that work with rather than around audience cognition — interventions that make audiences more capable of critical evaluation rather than less. The prebunking model is inherently self-limiting in an important ethical sense: an inoculation campaign that works by making audiences more critical will, if successful, make those audiences more critical of the inoculation campaign itself. This self-limitation is a feature rather than a bug. It is the practical instantiation of autonomy-respect.
36.11 Primary Source Analysis: The Society of Professional Journalists Code of Ethics (2014)
The SPJ Code of Ethics, revised in 2014, is the dominant normative framework for journalism ethics in the United States. It is organized around four core principles: Seek Truth and Report It; Minimize Harm; Act Independently; and Be Accountable and Transparent.
The code's theory of communication ethics is implicitly deontological in its orientation toward truth-telling and harm avoidance, but its application guidance is distinctly virtue-ethical — it describes what a good journalist does rather than articulating rules for every situation. The preamble states: "Ethical journalism should be accurate and fair. Journalists should be honest and courageous in gathering, reporting, and interpreting information." Courage — a virtue, not a rule — is named as a foundational requirement.
What the code requires: Under "Seek Truth and Report It," the SPJ Code requires accuracy, source verification, distinguishing news from advocacy, fact-checking, and — crucially — seeking multiple sources and being cautious about claims based on single sources. Under "Minimize Harm," it requires consideration of the consequences of publication for individuals and communities, special care in covering vulnerable populations, and restraint in the use of private information even when legally obtainable. Under "Act Independently," it prohibits journalists from serving as advocates for sources and requires disclosure of conflicts of interest. Under "Be Accountable," it requires journalists to correct errors quickly and prominently and to expose unethical conduct within journalism.
Enforcement mechanisms: Essentially none that are legally binding. The SPJ Code is a professional standard, not a legal requirement. Journalism in the United States has no licensing mechanism — anyone can call themselves a journalist — and the SPJ has no authority to sanction members who violate the code beyond social and reputational consequences. This contrasts sharply with law, medicine, and engineering, where ethical violations can result in license revocation.
Comparison with IFCN Principles: The International Fact-Checking Network's Code of Principles (examined in Chapter 32) shares significant ground with the SPJ Code — both require commitment to nonpartisanship, accuracy, transparency, and correction. The IFCN Code differs in two important respects: it focuses specifically on fact-checking organizations rather than journalism generally, and it has a more explicit accountability mechanism — IFCN signatory status, which organizations can lose if they violate the code and which provides a form of external verification. The IFCN's external assessment process is a modest but genuine enforcement mechanism that the SPJ Code lacks entirely.
Strengths and limitations: The SPJ Code articulates aspirations that command broad respect across journalism's professional culture and have real influence on editorial decision-making at major news organizations. Its limitation is that it has no teeth: organizations that systematically violate it face no professional consequences beyond reputational damage, which is substantial only if the broader professional and public community cares about the violation. In a polarized media environment where half the audience may actually reward violations that favor their preferred narrative, the reputational mechanism weakens considerably.
36.12 Debate Framework: Is Ethical Persuasion Possible at Scale in the Current Information Environment?
Position A: Structural Disadvantage Makes Ethical Persuasion Insufficient
The current information environment is not a neutral competitive arena in which ethical and unethical persuasion compete on equal terms. Platform algorithms optimize for engagement, and engagement is reliably higher for emotionally arousing, identity-affirming, and socially divisive content than for accurate, nuanced, and corrective content. Platforms' advertising-based business models depend on maximizing time-on-platform, creating structural pressure to prioritize content that produces those outcomes regardless of its accuracy.
In this environment, ethical persuasion faces a systematic competitive disadvantage. It cannot use the techniques — exaggeration, manufactured emotional intensity, false social proof — that drive algorithmic amplification. It cannot use psychographic targeting that exploits psychological vulnerabilities. It cannot use astroturfing to manufacture apparent consensus. It is, in a structural sense, fighting with one hand tied behind its back.
The evidence supports this structural analysis. The Vosoughi, Roy, and Aral (2018) finding that false information spreads faster on Twitter than true information held across political topics, was driven primarily by human sharing choices rather than bots, and was not explained by the age or follower count of accounts spreading false information. The novelty of false information, and the surprise it provoked, predicted its spread; accurate information tends to be less novel, because it describes a reality that is continuous with prior understanding rather than disrupting it.
Position A concludes that without structural reforms — platform regulation that changes the optimization function, journalism funding models that reduce dependence on engagement-driven advertising, regulatory requirements for political advertising transparency — ethical persuasion will continue to lose to unethical persuasion in the most consequential domains. Individual ethical commitment, however sincere, cannot overcome a structural incentive system designed to reward the opposite.
Position B: Ethical Persuasion Can Be Effective With Better Design
Position B does not deny the structural disadvantages identified by Position A. It argues that those disadvantages are real but not determinative — that the effectiveness gap between ethical and unethical persuasion is substantially a design gap, and that ethical communicators can close it without abandoning their ethical commitments.
The evidence from the cases examined in this chapter supports this position. "The Real Cost" campaign achieved significant behavior change while meeting all five ethical criteria. The CrossCheck coalition successfully countered a coordinated disinformation operation in the 2017 French election through ethical journalistic practice. The prebunking research of Roozenbeek et al. demonstrated real-world effectiveness for a communication strategy that explicitly builds audience critical capacity rather than circumventing it. The anti-smoking campaigns of the 1980s-1990s contributed to one of the largest sustained behavior changes in modern American public health — a roughly 70% decline in adult smoking prevalence between 1964 and 2020 — through a combination of ethical communication, policy change, and social norm shift.
Position B argues that ethical communicators have systematically underinvested in design relative to content. The assumption that truth will speak for itself — that accurate information presented in an unpolished, text-heavy, expert-facing format will reach and move general audiences — has been falsified by decades of research. Ethical persuasion that is designed with the same rigor applied to message development, audience research, emotional resonance, and delivery channel optimization that unethical campaigns employ can compete effectively. It will not always win. But it can win, and it wins more consistently when designed well.
Position B also notes that the long-term track record favors ethical persuasion in ways that short-term analysis obscures. The tobacco industry's disinformation campaign, examined in Chapter 22, was spectacularly effective for decades. Its long-term cost — in litigation, regulatory consequence, reputational destruction, and market shrinkage — was enormous. The institutions and practices that have maintained long-term credibility and effectiveness in the information environment — the AP wire service, the BBC, the practice of science communication in peer-reviewed form, public health agencies at their best — have done so precisely because they maintained ethical standards under pressure. Trust, built by sustained ethical practice, is a competitive advantage that is invisible in the short term and decisive in the long term.
36.13 Action Checklist: The Ethical Communicator's Standard
The following twenty questions are organized into five categories. They are designed for practical use by anyone producing persuasive communication — for advocacy, journalism, public health, education, or organizational communication. The checklist is not exhaustive. It is a decision support tool, not a formula.
Category I: Source and Intent Transparency
- Is the source of this communication clearly and prominently identified — not just technically disclosed but actually visible and comprehensible to the typical audience member?
- Are the material interests of the source disclosed? If the communicator stands to benefit from the audience's response, is that benefit identified?
- Is the persuasive intent of the communication disclosed? Does the audience know that you are trying to change their beliefs or actions, rather than simply informing them?
- If the communication is sponsored or produced by a third party, is that relationship disclosed in a way that a typical audience member will notice and understand?
- Would the audience's confidence in the communication change if they knew all the facts about its source and funding? If yes, that information is material and must be disclosed.
Category II: Accuracy
- Can every factual claim in the communication be verified by a neutral party using the cited sources?
- Are statistics presented with the context necessary to interpret them — sample sizes, confidence intervals, comparison baselines, limitations identified by the original researchers?
- Are sources cited accurately — not misrepresented in their conclusions, not stripped of their stated caveats, and not so dated that they predate significant changes in the evidence?
- Is the overall impression created by the communication proportionate to the evidence — or does the selection, emphasis, and repetition of specific facts create a stronger impression than the underlying evidence supports?
- If a claim could be checked by an expert in the relevant field, would that expert find it accurate?
Category III: Completeness
- Is there material information — information that a reasonable member of the target audience would want to know before making the decision the communication is designed to influence — that has been omitted?
- Are the strongest objections to the communication's central claim acknowledged and addressed, or are they suppressed?
- Are there perspectives relevant to the audience's informed decision that the communication does not represent? If so, why not?
- Does the communication present the full distribution of outcomes, typical cases included, or only the most favorable ones?
- If the communication were accompanied by all the relevant context, evidence, and alternative perspectives, would the audience still find it compelling? If not, why not?
Category IV: Autonomy Respect
- Is this communication designed to make the audience think more carefully or less carefully? Does it perform better, or worse, under scrutiny?
- Does the emotional register of the communication make relevant information more salient, or does it produce emotional states that prevent careful evaluation of the evidence?
- Does the communication give the audience tools to evaluate its own claims — references to original sources, suggestions for further investigation, acknowledgment of uncertainty?
- Are the implicit logic and reasoning of the communication visible — could an informed audience identify the assumptions and inference steps the communication relies on?
- If the audience became more informed about this topic after receiving this communication, would they conclude that it had helped their understanding — or would they feel they had been misled?
Category V: Interest Alignment
- (Bonus question) Whose interests does this communication serve? If the communicator's interests and the audience's interests are not fully aligned, has that gap been disclosed?
36.14 Inoculation Campaign: Final Assembly
This is the capstone component of the progressive project that has run throughout this course. Your final Inoculation Campaign Brief incorporates all components developed across Parts 2 through 6, plus the Responsible Communication Commitment Statement developed in this chapter.
Campaign Brief Format
INOCULATION CAMPAIGN FINAL BRIEF
Campaign Title: A clear, descriptive title that identifies the campaign's target community and primary threat.
SECTION 1: COMMUNITY AND PROPAGANDA ENVIRONMENT PROFILE (Developed in Parts 2–3; refined here)
1.1 Target Community Description
- Demographic and psychographic profile of the target community
- Communication channels through which this community primarily receives information
- Existing trust relationships: which sources does this community trust? Why?
- Existing vulnerabilities: what propaganda techniques have been most effective against this community, and why?
1.2 Propaganda Environment Analysis
- Identify the specific propaganda threats currently active in this community's information environment
- For each threat: the propagandist's apparent goals, the techniques used (cross-reference Part 2 chapters), the channels of delivery, and the scale of reach
- Assessment of the community's current level of awareness of these threats
SECTION 2: DOMAIN ANALYSIS (Developed in Part 5; refined here)
2.1 Domain Identification
- What specific domain does your campaign address? (e.g., public health, political, economic, religious, military/security)
- What are the specific propaganda techniques most characteristic of this domain in your target community's information environment?
2.2 Stakeholder Map
- Who are the primary producers of propaganda in this domain?
- Who are the gatekeepers (platforms, institutions, media) who shape how this domain's information reaches the community?
- Who are the potential allies in counter-communication?
SECTION 3: INOCULATION MESSAGE DESIGN (Developed in Ch.33; refined here)
3.1 Core Inoculation Messages
For each primary propaganda technique identified in Section 1:
- Forewarning: the specific warning about this technique and how it is being used in this community
- Refutational preemption: the weakened form of the manipulation you will expose
- Counter-argument: the accurate information and reasoning that builds resistance
- Efficacy frame: what the audience can do when they encounter this technique
3.2 Delivery Design
- Format: How will the inoculation messages be delivered? (video, game, in-person workshop, social media content, printed material, other)
- Channel: Through what channel(s) will the campaign reach the target community?
- Messenger: Who should deliver the inoculation messages? (Which sources does the community trust? See Section 1.1.)
- Dosing: How often, and over what time period, will messages be delivered?
3.3 Pilot and Testing Plan
- How will you test the campaign before full deployment?
- What measures will indicate that the inoculation is working?
SECTION 4: ETHICAL AUDIT (Developed in Ch.34; completed here using Ch.36 checklist)
Complete the twenty-question ethical checklist from Section 36.13 for your campaign's core messages. For any question where the answer raises concerns, describe the specific concern and how you have addressed it or why you have accepted the trade-off.
4.1 Summary Ethical Assessment
- Does your campaign meet all five ethical criteria from Section 36.1? For each criterion, provide a brief assessment (1–2 sentences per criterion).
- Are there genuine ethical tensions in your campaign design? If so, identify them honestly and explain how you have navigated them.
SECTION 5: POLICY DIMENSION (Developed in Ch.35; refined here)
5.1 Regulatory Context
- What legal and regulatory framework applies to the propaganda threats your campaign addresses?
- What legal protections apply to your campaign's counter-communication activities?
- Are there specific platforms, institutions, or regulatory bodies whose policies affect your campaign?
5.2 Policy Recommendations
- Beyond your campaign, what policy changes would most effectively address the propaganda threats you have identified?
- What is the realistic pathway for those policy changes in the current regulatory environment?
SECTION 6: RESPONSIBLE COMMUNICATION COMMITMENT STATEMENT (New in Ch.36)
This section is not a checklist. It is a statement, written in your own voice, of the communicator you intend to be.
The commitment statement should address:
- What ethical principles will guide your communication practice, and why?
- What specific temptations or pressures — in your target domain, in the current information environment, in your anticipated career — are most likely to challenge those principles?
- What personal and institutional safeguards will you put in place to help you maintain your commitments under pressure?
- What will you do if you find that your campaign, despite its best intentions, is having negative consequences you did not anticipate?
Length: minimum 500 words. This is not a formal academic document. It should be genuine.
SECTION 7: IMPLEMENTATION SUMMARY
7.1 Resource Requirements
- What human, financial, and technical resources does this campaign require?
- What is realistic given the resources available?
7.2 Timeline
- Phase 1: Preparation and testing (specify duration)
- Phase 2: Initial deployment (specify duration and scope)
- Phase 3: Full deployment (specify duration and scope)
- Phase 4: Evaluation and adaptation (specify duration)
7.3 Success Metrics
- How will you know if the campaign is working?
- What would you measure, when, and how?
- What outcomes would lead you to revise or discontinue the campaign?
The final brief should be 4,000–8,000 words total across all sections. It should read as a professional planning document — specific, honest about limitations, and grounded in the evidence and frameworks from this course.
36.15 Part 6 Closing: The Communicator You Will Be
Webb comes back to his opening story at the end of the session. It is the last session of Part 6 — there are four more chapters to come, but they belong to Part 7, to the emerging frontiers, to the things that are still becoming. Part 6 is finished today, and Webb wants to close it with something true.
"I want to come back to what I said at the beginning," he says, "because I don't want to leave it where I left it. I made it sound like leaving political communications was primarily about regret. That's not quite right."
He sits on the edge of the desk. The afternoon light is coming in sideways through the seminar room's tall windows.
"The real reason I left is that I couldn't sustain it. Not because I'm a particularly virtuous person. I'm not making that claim. But because the person I was in that job was not the person I wanted to be, and I couldn't make the distance between those two people small enough to live comfortably in. You can tell yourself, for a while, that what you're doing isn't that bad. That everyone does it. That the cause is good enough to justify the means. You can tell yourself that for a surprisingly long time. But there comes a point where the gap is too wide, and you either close it or you live in it permanently."
Sophia looks up from her notes.
"That's what I want to leave with you for Part 6," Webb continues. "Not a framework, though you have one now. Not a checklist, though you have that too. I want to leave you with the question of what gap you're willing to live in. Because you will face that question professionally. You will face situations — in journalism, in public relations, in public health, in political communication, in advocacy, wherever you end up — where you can see clearly what the ethical thing to do is, and you can also see clearly that doing it is costly. It might cost you a client. It might cost you a story. It might cost you a promotion. It might cost you the victory."
He pauses.
"I want to say something honest about that cost. It is real. I'm not going to stand here and tell you that ethical communication always wins, or that the people who maintain their principles always come out ahead. Chapter 36 doesn't show you that, because it isn't true. What it shows you is that there are conditions under which ethical persuasion is effective — that honesty, transparency, and accuracy are not inherently disadvantages, and that with good design they can compete. But sometimes they lose."
Tariq, who has been leaning back in his chair with his characteristic air of skeptical attention, sits forward.
"Then why do it?" he asks. Not combatively. Genuinely.
It is the question Webb has been waiting for.
"Because the alternative is becoming the thing you've spent a semester studying," Webb says. "And because the question of what kind of communicator you want to be is, in the end, the question of what kind of person you want to be. You don't get to separate those."
Ingrid thinks of her grandmother, who lived through the German occupation of Denmark, who has described to her the moment when her neighbors who collaborated with the occupation had to look at themselves in the mirror and decide: is this who I am? That is not a dramatic story. It is a small, interior story that happened to ordinary people in difficult circumstances, and they made different choices, and those choices added up to something. She thinks that this is not unrelated to what Webb is talking about.
Sophia puts down her pen.
"There's something I've been thinking about all semester," she says. "My father is a journalist. He covers cartel disinformation and media manipulation in northern Mexico. And the thing he always says — I've heard him say it since I was a kid — is that the most important thing he does is not any individual story. It's showing up. Being the person in the community who is going to keep being honest regardless of what it costs, so that when people need something true, they know where to find it. The thing he protects is not any particular truth. It's the institution of reliable truth-telling."
Webb nods slowly. "That's the best thing I've heard anyone say in this class all semester. Write that down."
Several students do.
"The institution of reliable truth-telling," Webb repeats. "That is what ethical communication builds and what propaganda corrodes. It's why we've spent a semester on this. Not because you're all going to go out and fight propaganda campaigns — though some of you will. But because every time someone in a position to communicate makes a decision about whether to be honest or whether to shade the truth, whether to be transparent or whether to be strategic, whether to respect their audience's capacity to think or to bypass it — they are either building that institution or corroding it. And the corrosion is cumulative."
He stands up.
"Part 7 is going to take us into the emerging frontiers — AI-generated content, deepfakes, information warfare in the twenty-first century. The problems are going to get harder. The techniques are going to get more sophisticated. The pressure on ethical communication is going to increase, not decrease."
He gathers his things.
"What I hope Part 6 has given you is not just the tools to analyze what's happening. I hope it has given you a clear-eyed understanding of what you are for — not just what you are against. It is possible to know everything about propaganda, to be able to identify every technique, to be a sophisticated analyst of every manipulation, and still not know what you stand for. I have tried to help you know what you stand for."
He looks at the room.
"We'll start Part 7 on Wednesday. Take care of yourselves between now and then."
The students pack up slowly. There is something unusual in the room — not the quick shuffle and exit of students eager to be somewhere else, but a kind of lingering, the way people behave after something they don't want to rush past.
Tariq, on his way out, stops at the door and turns back. "For what it's worth," he says to Webb, "I think you answered my question. The ethical thing and the effective thing aren't always the same. But the ethical thing is what makes you worth listening to."
He leaves. Webb turns toward the window.
Outside, the campus is moving through its ordinary afternoon, students crossing the quad, the ordinary world with its ordinary noise, the ordinary ongoing difficulty of telling the truth in an environment that does not always reward it. The work continues.
Chapter Summary
Chapter 36 has moved from analysis to construction — from understanding what ethical persuasion is not to developing a positive account of what it is and how to practice it. The five criteria for ethical persuasion — transparency of source and intent, accuracy, completeness, respect for autonomy, and alignment with audience interests — provide a practical framework that is both principled and actionable.
The chapter's examination of fear appeals, narrative persuasion, institutional models, and the effectiveness question has tried to be honest about the genuine tensions ethical communication faces in the current information environment. Ethical persuasion is not always effective in the short term, and the structural advantages of disinformation in algorithmic environments are real. But the evidence from anti-smoking campaigns, prebunking research, and the 2017 French election demonstrates that ethical communication can be both principled and effective when it is well-designed, institutionally supported, and trusted.
The Inoculation Campaign Final Brief represents the culmination of this course's progressive project: a practical document that integrates community analysis, domain expertise, inoculation message design, ethical audit, policy analysis, and the personal commitment to responsible communication. It is a professional document, but it ends with a personal one: the Responsible Communication Commitment Statement that asks not only what a campaign will do but what kind of communicator will run it.
Part 6 has argued, in its six chapters, that media literacy, fact-checking, inoculation, ethical reasoning, legal structures, and ethical communication practice together constitute a coherent response to propaganda — not a sufficient one, but a necessary one. Part 7 will ask how these tools must evolve as the threats evolve.
Key Terms
Autonomy respect — The principle that ethical persuasion enables the audience's informed judgment rather than bypassing or suppressing it.
Character-based ethics education — An approach to ethics education focused on developing moral sensitivity, reasoning, and motivation rather than teaching specific rules.
Extended Parallel Process Model (EPPM) — Kim Witte's (1992) framework for understanding fear appeals: effective fear appeals must produce high perceived threat and high perceived efficacy; fear without efficacy produces defensive responses rather than protective behavior.
Honest advocacy — The PRSA standard for ethical public relations: representing clients honestly, without deceiving audiences or suppressing material information.
Institutional trust — The degree to which an audience believes a communicating institution is competent, honest, and acting in their interests; a precondition for the effectiveness of ethical persuasion.
Intent transparency — Disclosure of the communicator's persuasive goal; the audience knows not only who is communicating but what outcome the communicator is seeking.
Material information — Information that a reasonable audience member would want to know before making the decision the communication is designed to influence; the omission of material information violates the completeness criterion.
Moral sensitivity — The capacity to notice the ethical dimensions of a situation; often the first capacity that must be developed in ethics education.
Narrative fidelity — Walter Fisher's (1987) concept: the degree to which a story's elements ring true against the audience's experience of how the world actually works; a criterion for ethical narrative use.
Narrative transportation — Green and Brock's (2000) model: the experience of being absorbed in a story such that counter-arguing is reduced; raises ethical questions when used to bypass rather than support rational evaluation.
Native advertising — Paid content designed to resemble editorial content; ethical only when disclosure is genuinely visible and comprehensible, not merely technically compliant.
Prebunking — An inoculation-theory-based communication strategy that exposes audiences to weakened forms of manipulation techniques before they encounter them in the wild; designed to build resistance rather than correct after the fact.
Proportionality — The principle that the emphasis, repetition, and emotional weight given to claims in ethical communication accurately reflects the underlying evidence.
Source transparency — Disclosure of who is communicating and, where material, what interests they represent; a necessary condition for audience evaluation of persuasive messages.
Truth sandwich — George Lakoff's (2017) protocol for reporting on false claims: lead with the true claim, identify the false claim briefly, return to the true claim; designed to minimize the illusory truth benefit of repeating falsehoods.
View from nowhere — Jay Rosen's (1993) critique of journalistic pseudo-neutrality: presenting "both sides" as equivalent without regard to the actual evidence, which systematically advantages established power.
End of Chapter 36. End of Part 6: Critical Analysis.
Part 7: Emerging Frontiers begins with Chapter 37.