In This Chapter
- 34.1 The Ethics of Persuasion: Why It Matters
- 34.2 The Spectrum from Legitimate Influence to Manipulation
- 34.3 The Consent Framework
- 34.4 The Manipulation Question
- 34.5 The Nudge Debate
- 34.6 Emotional Appeals: Legitimate and Illegitimate
- 34.7 Facebook's Emotional Contagion Experiment: The Ethics of Platform Research
- 34.8 Professional Ethics of Persuasion: Journalism, PR, and Advertising
- 34.9 The Propagandist's Responsibility
- 34.10 Counter-Propaganda Ethics
- 34.11 Research Breakdown: The Emotional Contagion Paper
- 34.12 Primary Source Analysis: The APA Ethics Code, Section 5
- 34.13 Debate Framework: Is Persuasion Using Emotional Techniques Ethically Different from Propaganda?
- 34.14 Action Checklist: Ethical Persuasion Self-Audit
- 34.15 Inoculation Campaign: Ethical Audit — Progressive Project
- Chapter Summary
- Key Terms
Chapter 34: Ethics of Persuasion — Consent, Manipulation, and Responsibility
Sophia Marin had been staring at the same paragraph for twenty minutes.
On her laptop screen was the draft inoculation message she had been refining since Chapter 33 — a narrative-format warning about a specific disinformation technique, designed to pre-empt the persuasive appeal of climate change denial talking points before an audience had encountered them. She had chosen the technique deliberately: a false expert appeal, the kind that floods a listener with the credentials of contrarian scientists without context about how isolated their views are within the field. Her inoculation message used a real example. It named real organizations. It used a mild emotional frame — not fear, but the quiet indignation of discovering you have been misled — to make the warning memorable.
She thought she had done good work. Then she read the message again, this time asking a different question.
Am I manipulating people?
The emotional frame was designed. The narrative structure was designed. The salience of the example had been calibrated. She had read enough behavioral economics at this point to know that her message was exploiting availability bias — a vivid, specific story lodges in memory in a way that a statistical summary does not. She was using that. She had chosen to use that.
She was doing to her audience, in miniature, something structurally similar to what the disinformation campaigns she was trying to counter were doing to theirs.
She brought the question to seminar the next morning.
Professor Marcus Webb listened to her describe the problem for a full three minutes. When she finished, he leaned back in his chair and looked at the ceiling for a moment.
"That," he said, "is the question this chapter examines."
He did not answer it. Not yet.
34.1 The Ethics of Persuasion: Why It Matters
The ethics of persuasion is not a peripheral academic concern. It is one of the most practically urgent questions in democratic life, professional communication, and personal integrity. Every day, in every domain, people try to influence one another — through advertising, journalism, political speech, advocacy campaigns, public health messaging, social media, interpersonal conversation. Most of this persuasion is legitimate, and we should want people to be good at it. But some of it crosses a line. The question that gives ethicists, communicators, and citizens genuine trouble is: where is the line?
The difficulty is not that there is no line. Most thoughtful people, if pressed, can identify clear cases on either side. A doctor who explains the risks of a medical procedure so a patient can make an informed decision is engaged in legitimate persuasion. A company that adds addictive stimulants to food in quantities too small to require labeling and then markets that food specifically to children is doing something different — something that most people would, on reflection, call manipulation. Between those poles, however, there is an enormous contested middle ground where the ethical status of persuasion is genuinely unclear, and where the tools for thinking clearly are not obvious.
This chapter argues that the ethics of persuasion hinges on a cluster of related considerations — truthfulness, emotional proportionality, respect for rational agency, and transparency of intent — and that these criteria, taken together, can distinguish legitimate influence from manipulation. But it also takes seriously the objections: that these criteria are easier to state than apply, that reasonable people disagree about where the line falls in specific cases, and that the ethics of counter-propaganda — specifically — involves uncomfortable complications that do not disappear when you are on the right side.
Why it matters now. The tools of persuasion have become more sophisticated and more asymmetrically available in ways that sharpen the ethical stakes considerably. Micro-targeted advertising can deliver psychologically customized messages to individuals who have no way of knowing that the communication has been tailored to them. A/B testing allows communicators to systematically identify the most emotionally resonant version of a message across hundreds of thousands of users. Algorithmic distribution amplifies the most emotionally engaging content regardless of its accuracy. Machine learning systems can analyze psychological profiles and match them to persuasion techniques at scale. These capabilities are not morally neutral. They change the terms of the ethical analysis in ways that older frameworks did not anticipate.
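To make one of these capabilities concrete, here is a minimal sketch of the statistics behind a message A/B test: two variants are shown to randomly split audiences, and the difference in engagement rates is tested for significance. The variant descriptions and engagement counts below are invented for illustration.

```python
# Toy sketch of an A/B message test. Each variant is shown to a random
# half of the audience; a two-proportion z-test compares engagement rates.
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for the difference between two engagement rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: variant B (a more emotionally intense framing)
# vs. variant A (a neutral framing), 10,000 impressions each.
z = two_proportion_z(success_a=480, n_a=10_000, success_b=560, n_b=10_000)
# |z| > 1.96 would be read as a significant difference at p < .05.
```

Run at scale and iterated across many candidate framings, this procedure reliably surfaces the most emotionally engaging version of a message, which is exactly what gives it its ethical edge.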
Why it is personally urgent. Sophia's question is not just philosophically interesting. If she is designing a persuasion campaign, she is making choices that will affect real people's cognitive processes. She has, in some meaningful sense, taken responsibility for what she is doing to those minds. That responsibility does not disappear because her intentions are good. It does not disappear because the disinformation she is countering is worse. The ethics of persuasion is ultimately a question about the kind of communicator you choose to be — and, by extension, about the practices you endorse when you create or share persuasive content.
34.2 The Spectrum from Legitimate Influence to Manipulation
One of the most clarifying moves in persuasion ethics is to stop treating the question as binary — persuasion good, manipulation bad — and instead map the spectrum of influence practices from most to least ethically defensible.
At one end of the spectrum is rational argument: presenting evidence, making logical connections, inviting the audience to evaluate the case and draw their own conclusions. This is the mode of influence that democratic deliberation, in its idealized form, depends on. It respects the audience as rational agents capable of processing information and deciding. It is fully transparent about what it is doing. When it works, it works because the evidence and reasoning are compelling — not because the audience was circumvented. Rational argument is not, of course, the only ethically acceptable form of communication. Pure rational argument, stripped of all emotional register and narrative structure, would be largely ineffective and would systematically disadvantage causes whose truth is not easily compressed into formal argument chains. But it is the reference point against which other forms of influence are evaluated.
Next along the spectrum is legitimate emotional appeal: communication that uses emotion not as a substitute for evidence but as a proportionate response to evidence — fear of real dangers, grief for real losses, indignation at genuine injustices. The classic philosophical formulation here is Aristotle's: pathos is legitimate in rhetoric when it tracks the actual features of the situation, when the fear invoked corresponds to a real threat of appropriate magnitude, when the grief expressed corresponds to a genuine loss. The key criterion is proportionality — does the emotional intensity of the communication correspond to the emotional significance of the underlying facts?
A public health campaign that shows graphic images of smoking's physical effects, and uses those images to generate fear, is using emotional appeal. If the images are accurate and the fear is proportionate to the actual health risk of smoking, this is legitimate emotional persuasion. The emotional appeal is doing cognitive work — it is making vivid and salient information that the audience already has, in principle, access to — but it is not substituting for evidence. It is emotion in service of truth.
Further along the spectrum is psychological manipulation: influence that operates not by providing good reasons or proportionate emotional responses, but by exploiting cognitive biases, psychological vulnerabilities, or emotional states that are unrelated to the content of the message. The distinction from legitimate emotional appeal is that manipulative emotional appeals are decoupled from truth. They create fear without a corresponding real danger, manufacture urgency where none exists, attach positive emotional associations to claims that don't support them, or manufacture consensus to exploit social proof dynamics.
And at the far end of the spectrum is coercive control: influence that removes the option of genuine choice altogether — through threats, surveillance, overwhelming information environments that make coherent evaluation impossible, or the destruction of relationships and communities that would provide alternative perspectives. This is what Hannah Arendt was describing when she wrote about totalitarian propaganda: not merely persuasion that bypasses reason, but a communicative environment that makes reason impossible.
Four criteria for locating practices on this spectrum:
- Truthfulness of claims. Does the communication make factually accurate assertions? Does it avoid misleading through selective omission?
- Relevance and proportionality of emotional appeal. Is the emotional register of the communication appropriate to the actual facts? Or is the emotional intensity designed to exceed what the evidence warrants?
- Respect for rational agency. Does the communication engage the audience's capacity for evaluation, or does it attempt to bypass it? Does it provide the information needed to assess the claim?
- Transparency of intent. Does the audience know they are being persuaded, and by whom? Is the source of the communication disclosed?
These four criteria do not always align perfectly. A communication can be fully truthful and still be manipulative if it deploys emotional intensity disproportionate to the evidence. It can be transparent about its source and still manipulate through exploitation of cognitive biases. The criteria work together as a cluster, not as a single binary test.
34.3 The Consent Framework
A distinct but related approach to persuasion ethics starts not with the character of the communication but with the status of the person receiving it. On this view, the ethics of persuasion is fundamentally about consent — does the audience know they are being persuaded, and did they voluntarily expose themselves to the communication?
The consent framework has intuitive force. We typically think that people have a right to make decisions about their own minds — about what influences them, what they are exposed to, what psychological processes they undergo. If someone knowingly walks into a movie theater, sits through a film that manipulates their emotions, and walks out persuaded of something they would not otherwise have believed, we generally do not think their rights have been violated. They consented to the experience. The emotional manipulation was, in a meaningful sense, what they were there for.
The consent framework explains some cases well. It explains why advertising is generally less ethically problematic than subliminal advertising: visible advertising announces itself, allows the viewer to apply a discount to the message, and can be avoided. Subliminal advertising — embedding messages below the threshold of conscious perception — is manipulative precisely because it removes the audience's ability to recognize and respond to the persuasive attempt. The audience cannot consent to what they cannot see.
It also explains why children are ethically special cases in advertising ethics. Children lack the cognitive capacity to recognize and evaluate advertising as advertising — their brain development does not yet support the metacognitive ability to ask "what is this message trying to do to me?" They cannot meaningfully consent to persuasion techniques that work specifically by circumventing evaluation. This is why most democracies apply stricter regulations to advertising directed at children, and why researchers working on nudge interventions are particularly cautious about applying them in educational settings.
Where the consent framework becomes complicated. The digital advertising ecosystem presents a consent problem that the traditional framework was not designed for. When users accept a platform's terms of service — a document that few read and fewer understand — they have, in a formal legal sense, consented to data collection, psychological profiling, and targeted advertising. But the substance of what they have consented to — that their emotional states, personality types, political vulnerabilities, and behavioral patterns will be analyzed and used to deliver psychologically customized persuasion messages — was not disclosed in any meaningful way. As a consent mechanism, the terms-of-service agreement is, as legal scholars have noted, close to a fiction.
The Facebook emotional contagion experiment, discussed at length in Section 34.7, is the most vivid example of this problem. Nearly seven hundred thousand users were enrolled in a psychological experiment without their knowledge because a researcher determined that their click-through agreement to Facebook's terms of service constituted consent. This claim was widely, and rightly, contested. It illustrates the gap between formal consent (clicking "I agree") and substantive consent (knowing and understanding what you are agreeing to).
The cognitive capacity requirement. The consent framework also highlights the importance of cognitive capacity — not just legal capacity (age of majority) but the actual psychological resources required to recognize and evaluate persuasion. When people are under extreme emotional stress, when they are sleep-deprived, when they are overwhelmed by high-volume information environments, or when they are in the grip of strong emotional activation, their capacity for rational evaluation is compromised. Targeting people in these states — as crisis fundraising campaigns sometimes do, or as some political advertising is specifically designed to do — raises genuine consent concerns even when the formal legal conditions for consent are met.
34.4 The Manipulation Question
The word "manipulation" is commonly used but rarely defined precisely in everyday discourse. For the purposes of persuasion ethics, it is worth distinguishing among three different philosophical accounts of what makes something manipulation rather than legitimate influence — because they identify different features as ethically central, and they have different implications for the evaluation of specific practices.
The autonomy-based account holds that manipulation is wrong because it bypasses or subverts the target's rational agency. On this view, what is ethically special about manipulation is that it circumvents the normal epistemic processes by which a rational agent evaluates claims and forms beliefs. Robert Nozick's influential formulation: manipulation "involves the nonrational production of assent." The manipulated person assents to a belief or course of action, but not for reasons that they would endorse upon reflection — because they have been led there through processes (emotional exploitation, false impressions, cognitive bias exploitation) that bypassed rather than engaged their rational faculties.
The autonomy-based account has an important implication: it is not sufficient that the outcome be good, or that the message be factually accurate, for the communication to be ethical. A communicator who uses accurate information but delivers it in ways specifically designed to prevent critical evaluation — through extreme emotional saturation, time pressure, social isolation — is manipulating on this account even if every word they say is true.
The welfare-based account focuses on a different feature: manipulation is wrong when it produces outcomes contrary to the target's genuine interests. On this view, the problem with manipulation is not primarily that it bypasses reason, but that it leaves people worse off than they would be if they had been allowed to reason freely. This account has the advantage of explaining why benevolent manipulation — paternalism — is troubling rather than simply praiseworthy: even when a manipulator genuinely believes they are acting in the target's interests, the substitution of their judgment for the target's violates a kind of self-determination that is itself part of welfare.
The welfare-based account also has implications for the ethics of nudging, discussed in Section 34.5. If nudges systematically guide people toward outcomes that increase their welfare — as measured by their own stated preferences in reflective moments — the welfare-based account might find them less objectionable than the autonomy-based account would.
The procedural account locates the wrong in the methods used rather than in either the bypassing of reason or the harm to welfare. On this account, manipulation uses techniques specifically designed to circumvent rational evaluation — techniques that would not work if the audience were fully informed about them. The key test: if the persuasive technique is fully disclosed to the audience in advance, does it lose its effect? Rational argument, by contrast, works precisely through disclosure — you explain the argument, and if it is a good argument, explaining it makes it more, not less, persuasive.
The procedural account is particularly useful for evaluating practices like A/B testing in political advertising, micro-targeting based on psychological profiles, and the use of dark patterns in interface design. These techniques depend for their effectiveness on the audience not knowing about them. Their logic is explicitly the logic of circumvention.
Applying the frameworks to contested cases:
Nudges: A default organ donation opt-out policy (where you are registered as a donor unless you actively opt out) is a nudge. Under the autonomy-based account, it is potentially manipulative because it exploits status quo bias to produce a choice outcome without rational engagement. Under the welfare-based account, it may be justified if the outcome (more organ donations, more lives saved) aligns with what most people say they prefer on reflection. Under the procedural account, it may be acceptable if it is publicly disclosed and can be easily reversed.
Emotional appeals in public health: A graphic anti-smoking advertisement that uses fear. Under the autonomy-based account, this is at most minimally manipulative if the fear is proportionate to the actual risk and is used to engage rather than overwhelm rational evaluation — to make vivid what the audience already knows abstractly. Under the welfare-based account, it is justified if it promotes outcomes the audience genuinely values (health). Under the procedural account, it is acceptable because the technique is publicly known and the audience can recognize it.
Micro-targeted political advertising: An advertisement delivered specifically to users identified as having psychological profiles indicating vulnerability to fear appeals, exploiting information the users did not knowingly provide. Under all three accounts, this is problematic. It bypasses rational evaluation, it uses techniques that depend on the audience not knowing about them, and its welfare effects are at minimum unclear.
34.5 The Nudge Debate
In 2008, economist Richard Thaler and legal scholar Cass Sunstein published Nudge: Improving Decisions About Health, Wealth, and Happiness, which presented a framework for what they called "libertarian paternalism" — using insights from behavioral economics to design choice environments that guide people toward welfare-improving decisions while preserving their formal freedom to choose otherwise.
The core insight of nudge theory is that choices are never made in a vacuum. They are always made within environments — choice architectures — that were designed by someone. The question is not whether to influence choice architecture, but how. Because human beings have systematic and predictable cognitive biases (status quo bias, present bias, loss aversion, availability bias), a choice architecture that is apparently "neutral" will, in practice, systematically influence decisions in ways that may not serve the chooser's actual interests. Nudge theory argues that if you must design a choice architecture anyway, you should design it to work with what people genuinely want, as measured by their reflective preferences, rather than leaving it to chance or to whoever happened to design the default first.
The canonical examples: placing healthier foods at the front of a cafeteria increases consumption of healthy options. Changing the default on retirement savings plans from opt-in to opt-out dramatically increases enrollment. Putting a small picture of a fly on the urinal target in Amsterdam's Schiphol Airport reduced spillage by 80 percent. In each case, the formal freedom to choose otherwise is preserved. No one is prevented from eating the french fries, declining to enroll in the savings plan, or ignoring the fly. The nudge works by making the better option easier.
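The mechanics of a default change can be sketched with a toy model. Assume, purely hypothetically, that some fixed fraction of people stay with whatever the default is (status quo bias) while the rest actively choose; both behavioral rates below are invented assumptions, not estimates from the nudge literature.

```python
# Toy model of a default effect. Under these invented assumptions,
# 60% of people passively keep the default and 40% actively choose,
# of whom half would choose to enroll either way.
def enrollment_rate(default_enrolled, stay_with_default=0.6, active_choose_yes=0.5):
    """Fraction enrolled, given the default and two assumed behavioral rates."""
    passive = stay_with_default * (1.0 if default_enrolled else 0.0)
    active = (1 - stay_with_default) * active_choose_yes
    return passive + active

opt_in = enrollment_rate(default_enrolled=False)   # 0.2: only active choosers enroll
opt_out = enrollment_rate(default_enrolled=True)   # 0.8: passive stayers remain enrolled
```

The point of the sketch is that nothing about anyone's preferences changed between the two scenarios; the fourfold difference in enrollment comes entirely from which outcome the choice architect made the default.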
The case for nudges as ethical persuasion (Sunstein's position). Sunstein argues that nudges are not manipulation for three reasons. First, they are transparent — the best nudge programs are public about what they are doing and why. Second, they do not deceive — they do not provide false information or create false impressions. Third, they work by making the preferred option easier to reach rather than by exploiting cognitive biases against the chooser's interests. The behavioral economics findings that underlie nudge design are not being weaponized; they are being used to correct for biases that otherwise work against the chooser.
The case against nudges as manipulation (Hausman and Welch's critique). Daniel Hausman and Bryan Welch argue in a widely cited 2010 paper that nudges, properly understood, do exploit cognitive biases — that is precisely their mechanism of action. A default that exploits status quo bias is not engaging rational agency; it is circumventing it. The fact that the circumvention may be in the chooser's interest does not change the fact that it is circumvention. On the autonomy-based account, nudges are troubling not because they harm people but because they route around their rational faculties. This is patronizing in a way that does not become less patronizing when the outcomes are good.
The "who nudges the nudgers" problem. Perhaps the most practical critique of nudge programs is about accountability. If choice architects have the power to systematically influence the decisions of millions of people by designing defaults, and if the justification for this power is that they know what is in people's genuine interests, then the question of how choice architects are selected, constrained, and held accountable becomes critically important. Behavioral science has mostly been used so far by governments and corporations with relatively benign intentions in public-facing domains (health, savings, energy use). But the same techniques can be used — and are being used — to increase platform engagement, maximize advertising revenue, and promote political agendas. There is no inherent limitation in nudge theory that confines it to benevolent applications.
Vaccination messaging as a case study. During the COVID-19 pandemic, public health communicators faced a practical question: how do you communicate vaccination recommendations to populations with high vaccine hesitancy? Several interventions were studied. Making vaccination the default for healthcare workers (opt-out rather than opt-in) increased vaccination rates significantly. Tailoring messages to different psychological profiles — emphasizing personal protection for individualists, community protection for collectivists — was found to be more effective than generic messaging. Both of these are nudge-adjacent interventions. Both were justified on public health grounds. Both also raise the questions that nudge theory has always raised: Is it acceptable to exploit known cognitive biases to produce a public health outcome, even if the outcome is good? Does the target population's inability to recognize what is being done to them matter?
34.6 Emotional Appeals: Legitimate and Illegitimate
Aristotle identified three modes of rhetorical persuasion: logos (argument and evidence), ethos (the character and credibility of the speaker), and pathos (emotional appeal). He did not prohibit pathos. He was not naively rationalistic about persuasion. But he was clear about the conditions under which emotional appeals were legitimate: they must be appropriate to the subject, proportionate to the evidence, and used to illuminate truth rather than to circumvent evaluation.
The tradition of concern about emotional appeals in persuasion runs from Aristotle through Plato (who was more categorical: rhetoric is flattery, emotional manipulation dressed as wisdom) to the Enlightenment concern with reason as the proper basis for political life, and into twentieth-century philosophy of communication. What these various traditions share is a recognition that emotional appeals have a dual nature: they can make truth vivid and salient in ways that assist rational evaluation, or they can overwhelm rational evaluation and substitute emotional response for evidential assessment.
When emotional appeals are legitimate:
An emotional appeal is legitimate when the emotion it invokes is proportionate to and grounded in real features of the situation. A campaign that shows real children suffering from a preventable disease to motivate donations to medical research is using emotional appeal. If the children are real, the disease is real, the research is real, and the magnitude of the suffering corresponds to the magnitude of the emotional appeal, the appeal is legitimate. It is using emotion to make vivid what the audience knows abstractly (preventable childhood disease is bad) in a way that connects knowledge to motivation.
Similarly, an emotional appeal that generates appropriate indignation at genuine injustice is legitimate. It is not manipulation to make people feel angry about something that genuinely warrants anger — provided the anger is grounded in accurately presented facts, not artificially amplified by selective presentation or misleading framing.
The key test: if the audience had access to all the relevant facts and were reasoning clearly, would the emotional response invoked be appropriate? If yes, the emotional appeal is tracking reality. If the emotional intensity would seem grossly disproportionate to a well-informed and reflective person, the appeal has crossed the line.
When emotional appeals are manipulative:
Emotional appeals become manipulative when they are decoupled from the facts they purport to represent. There are several mechanisms through which this decoupling occurs.
False emotional grounding: The appeal invokes a genuine emotion (fear, grief, anger, love) but grounds it in false or misleading information. A political advertisement that shows crime statistics selectively, or attributes crimes to a group without identifying that attribution as contested, generates real fear in response to manufactured threat.
Disproportionate emotional intensity: The emotional register of the communication significantly exceeds what would be appropriate to accurately represented facts. This is the mechanism by which yellow journalism works: facts may be technically accurate, but they are framed with a dramatic intensity that implies a higher level of threat, outrage, or urgency than the underlying situation warrants.
Emotional saturation as a deliberate cognitive blocker: Some communication is designed not just to invoke emotion but to invoke it at an intensity that prevents deliberative evaluation. The goal is not to help the audience feel appropriately about real things, but to generate an emotional state so intense that calm evaluation becomes psychologically difficult. Fear campaigns that create panic, rather than appropriate caution, are examples of this.
Emotional irrelevance: The emotion invoked is real and genuine but has no relevant connection to the claim being advanced. Celebrity endorsements typically operate through this mechanism: a positive feeling toward an admired person is transferred to a product or policy position to which it has no logical connection.
Tariq Hassan put the problem directly in seminar: "If disinformation uses emotional manipulation, and we're using emotional persuasion to counter it, what's the difference?"
Webb's response was not dismissive. "The difference is where the emotion points. Emotional appeals that track truth — that create fear of genuine dangers, anger at genuine injustices, hope grounded in genuine possibilities — are different from emotional appeals that create false impressions. But Tariq is right that this distinction can be abused as self-justification. It is entirely possible to convince yourself that your emotional appeals are truthful when they are not. The test is not your intention. The test is whether a well-informed, reflective person would agree that the emotional intensity of your communication is appropriate to the accurately represented facts."
34.7 Facebook's Emotional Contagion Experiment: The Ethics of Platform Research
In June 2014, the Proceedings of the National Academy of Sciences published a paper titled "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks," authored by Adam D. I. Kramer (Facebook), Jamie E. Guillory (UCSF), and Jeffrey T. Hancock (Cornell). Within days of publication, it had become one of the most controversial scientific papers in the social psychology of media.
The experiment worked as follows. For one week in January 2012, Facebook modified the News Feed algorithms for 689,003 users — without notifying them — to manipulate the emotional content of the posts they saw. One group saw a reduced proportion of posts containing negative emotional content from their friends. A second group saw a reduced proportion of posts containing positive emotional content. A control group's feed was unmodified. The researchers then analyzed the emotional content of the posts these users subsequently made, to test whether the emotional tone of what users saw affected the emotional tone of what they posted.
The finding: it did. Users who saw more positive content in their feeds subsequently posted more positive content. Users who saw more negative content subsequently posted more negative content. The effect was real but small — a few percentage points in the proportion of positive vs. negative words in subsequent posts. The paper concluded that "emotional states can be transferred to others via emotional contagion through the mechanism of exposure to emotional content in the News Feed."
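The study's outcome measure — the proportion of positive and negative emotion words in users' subsequent posts — can be sketched as follows. The actual study used the LIWC word dictionaries; the tiny word lists here are placeholders for illustration only.

```python
# Toy illustration of the contagion study's outcome measure: the
# percentage of positive and negative emotion words across a user's
# posts. The placeholder word sets below stand in for the much larger
# LIWC dictionaries the researchers actually used.
POSITIVE = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE = {"sad", "awful", "hate", "terrible", "worried"}

def emotion_proportions(posts):
    """Return (pct_positive, pct_negative) words across all posts."""
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100 * pos / len(words), 100 * neg / len(words)

pos_pct, neg_pct = emotion_proportions(
    ["I love this wonderful day", "Feeling sad and worried today"]
)
```

A shift of a few percentage points in these proportions is small for any one user; aggregated over hundreds of thousands of users, it is the effect the paper reports.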
What this established. The research established something that practitioners had long suspected: that the design of social media feeds — specifically, which content the algorithm surfaces — systematically shapes the emotional states of users. This is not a modest finding. Facebook at the time of the study had approximately 1.3 billion users. The capacity for algorithmic emotional manipulation described in this paper was not a hypothetical; it was a documented feature of a platform used by a significant fraction of humanity.
The ethics controversy. The research generated a firestorm for several reasons.
Lack of informed consent. The 689,003 users enrolled in the study had not consented to being research subjects. They had not been told their News Feeds were being experimentally manipulated. They did not know they were being studied. Facebook's claim — asserted by the paper's authors — was that users had consented to such research by agreeing to Facebook's terms of service, which included a provision allowing data use for "research and service improvement." Critics, including prominent bioethicists and researchers, rejected this claim as a consent fiction. Informed consent in research contexts requires disclosure of the nature of the research, the procedures involved, and the potential risks. A click-through agreement to a terms-of-service document satisfies none of these requirements.
IRB failure. The research was reviewed by an Institutional Review Board — specifically, at Cornell, where Hancock was based. However, the review occurred after Facebook had already conducted the experiment, and it covered only Hancock's analysis of data Facebook had collected and provided. The IRB did not review the experimental design or the consent procedures, on the grounds that Hancock himself had not conducted the experiment. This is widely regarded as an IRB process failure: the most ethically questionable element of the research — the experimental manipulation of users' emotional states without consent — fell outside the IRB's purview because it was conducted by a private company.
Potential for harm. Critics noted that artificially inducing negative emotional states in research subjects raises direct harm concerns, particularly for users who may have been at risk for depression or other mental health conditions. In a public response, the researchers stated that they "are very privacy conscious as scientists" and had "taken steps to minimize risks." They did not explain what those steps were.
The broader implications for platform ethics. The emotional contagion experiment is significant not just as an instance of ethically contested research but as a demonstration of capability. It showed, with published evidence, that a major social media platform could systematically alter the emotional states of hundreds of thousands of users without their knowledge, as part of routine product research. The experiment was notable because it was published. The capability it documented existed before the publication and continued afterward.
The experiment made explicit what had previously been implicit: that the architecture of social media platforms is not a neutral conduit for communication but an active shaper of emotional experience. Every design choice — which posts to surface, which to suppress, how to rank them, which notifications to send, when to send them — is a choice about what emotional experiences users will have. These choices are made continuously, at scale, by teams of engineers and product managers whose primary accountability is to engagement metrics. The ethical frameworks governing this process were, as the emotional contagion controversy made clear, underdeveloped.
This connects to Section 34.5's "who nudges the nudgers" problem. Platforms are the most powerful choice architects in the modern information environment. The accountability structures governing their design choices are, at minimum, inadequate.
34.8 Professional Ethics of Persuasion: Journalism, PR, and Advertising
Three major industries are professionally organized around the production of persuasive communication: journalism, public relations, and advertising. Each has developed professional ethical codes that attempt to manage the tension between the communicative goals of the profession and the ethical obligations to audiences and the broader public.
The Society of Professional Journalists Code of Ethics organizes its guidance around four principles: seek truth and report it, minimize harm, act independently, and be accountable and transparent. Its provisions relevant to persuasion ethics are primarily about honesty and disclosure: never plagiarize, never report inaccurately, disclose conflicts of interest, never distort facts or context. The code is explicitly concerned with the journalism/advocacy distinction: journalists who are advocates for causes or positions should disclose that advocacy, because it is relevant to how readers should evaluate their reporting.
The SPJ code does not prohibit the use of emotional appeals, narrative structure, or other techniques that make journalism more compelling; it does not demand that news be written drily or stripped of narrative craft. But it does hold that the emotional framing of a story must be grounded in accurate information, and that techniques designed to produce impressions of fact beyond what the evidence supports violate journalistic ethics.
The Public Relations Society of America Member Code of Ethics is organized around values including honesty, expertise, advocacy, loyalty, and fairness. Its core provision relevant to persuasion ethics is what it calls "honest advocacy": PR practitioners can legitimately advocate for the interests of their clients, but they must do so through truthful communication. They must not generate "disinformation" or "misleading information," must disclose their client relationships when conducting advocacy, and must not engage in deceptive practices.
The "honest advocacy" concept has significant limitations. PR practitioners are hired specifically to present their clients' interests in the most favorable possible light. They are not required to present all relevant information, only to avoid stating falsehoods. The line between truthful advocacy and misleading through selective omission is one that PR ethics has not resolved satisfactorily.
The American Advertising Federation Principles of Advertising prohibit advertising that is "false, misleading, deceptive or fraudulent" and require that advertising be "clearly recognizable as such." The disclosure requirement addresses the consent problem discussed in Section 34.3: advertising that is not recognizable as advertising — native advertising that mimics editorial content, influencer posts that do not disclose commercial relationships — violates the consent framework by preventing audiences from applying the appropriate discount to the message.
What these codes share. All three codes share a commitment to truthfulness as the minimum ethical standard. They do not require that professional communicators be neutral — journalism's "seek truth" norm involves judgment about what matters and how to present it; PR is explicitly advocacy; advertising is explicitly in the business of making things seem desirable. But all three codes prohibit deception, and all three include provisions about transparency of source and intent.
Where they fall short. Professional ethical codes are enforced primarily through professional reputation and, to a limited extent, through professional association membership. They have no legal force in most democracies (with the partial exception of advertising, which is regulated by the FTC in the United States and similar agencies elsewhere). They do not address the practices that have become most ethically contested in the digital advertising ecosystem: micro-targeting based on psychological profiles, algorithmic amplification of emotionally engaging content, A/B testing to optimize for the most persuasive version of a message. These practices did not exist in the form they currently take when the codes were written, and the codes have not been substantially updated to address them.
34.9 The Propagandist's Responsibility
Does the individual who produces propaganda bear moral responsibility for its effects? This is not a hypothetical question. It has been litigated — literally — in the Nuremberg trials and in subsequent war crimes prosecutions, and it continues to arise in contemporary contexts wherever communications professionals have enabled mass harm through their work.
The Nuremberg precedents. The Nuremberg Tribunal addressed the ethics of propaganda directly in its consideration of Julius Streicher, publisher of the antisemitic newspaper Der Stürmer, and of Hans Fritzsche, head of the Radio Division in the Reich Ministry of Public Enlightenment and Propaganda. The Tribunal found Streicher guilty of crimes against humanity, holding that his decades of virulent antisemitic propaganda had constituted incitement to murder and extermination and thus was inextricably linked to the Holocaust. Fritzsche was acquitted — a decision that was and remains controversial — on the grounds that his radio broadcasts, while propagandistic, did not specifically incite genocide in a way the Tribunal found sufficient.
The Streicher verdict established an important principle: that propaganda can constitute criminal incitement when it is designed to produce specific violent outcomes, when it dehumanizes groups in ways that constitute calls to action, and when the connection between the propaganda and the violence is sufficiently direct. The acquittal of Fritzsche suggests the limits of this principle: it is extremely difficult to demonstrate that a specific propagandist's communications caused specific crimes, particularly when those communications were part of a broader systematic effort.
The spectrum of individual responsibility. Individual responsibility in propaganda systems is not binary — either fully responsible or not responsible at all. It operates on a spectrum that varies with the degree of authorship, the degree of knowledge, and the degree of available alternatives.
Maximum responsibility: the architect. Joseph Goebbels, as the designer and director of the Nazi propaganda apparatus, bore maximum individual responsibility. He chose the techniques, set the strategic goals, monitored the outcomes, and continuously adjusted the system to increase its effectiveness. He was not following orders; he was giving them. The fact that the system he built was embedded in a totalitarian state does not reduce his individual responsibility, because his own exercise of substantial creative agency was an essential component of the system's functioning.
High responsibility: the willing professional. The advertising copywriter who designs a political campaign for a client they know is advancing a cause through disinformation — and who does so skillfully, with full understanding of what the campaign is doing — bears high individual responsibility. The professional distance between creating the message and deploying it does not create moral insulation. A skilled communicator who chooses to put their skills in service of manipulation has made that choice.
Intermediate responsibility: the institutional participant. A civil servant who produces propaganda materials as part of their job function, in a system that does not permit refusal, bears reduced but not negligible responsibility. The concept of "just following orders" was rejected at Nuremberg, but that rejection was not an assertion that institutional pressure counts for nothing in moral evaluation. It was an assertion that institutional pressure does not eliminate individual responsibility — particularly when the individual has competence, exercises judgment in their professional role, and has some capacity for non-compliance.
Minimal but nonzero responsibility: the distant enabler. A schoolteacher who delivers Nazi curriculum in 1938 Germany, who has not designed the curriculum, has no ability to change it, and faces serious consequences for refusal, bears minimal individual responsibility for the propaganda effect of that curriculum. But minimal is not zero. Moral responsibility persists at some level because the teacher's participation, however constrained, is an element of the system's functioning.
Contemporary applications. The spectrum of responsibility applies in contemporary contexts that are less extreme but structurally similar.
The data scientist who builds the targeting algorithm that delivers psychologically customized disinformation to vulnerable populations has made choices — about what to build, how to build it, and for whom — that have significant ethical dimensions. "I was just doing technical work" does not fully excuse participation in a system that produces manipulation at scale, when the technical work is integral to the manipulation.
The content moderation employee who, under pressure from above to maximize engagement, declines to remove demonstrably false but highly engaging content has made a choice. That choice has consequences. The individual's ability to resist is constrained, but the constraint does not eliminate the moral dimension.
The social media copywriter who writes a political advertisement they know relies on psychological micro-targeting to exploit documented emotional vulnerabilities has contributed to the manipulation. The fact that they are one person in a large system does not make that contribution morally irrelevant.
The Nuremberg principle — that "I was just following orders" is not a complete defense — applies beyond its original context. Its core claim is that individuals who exercise professional judgment in their work cannot disclaim responsibility for the uses to which that judgment is put simply because the decision to use it that way was made by someone above them. The claim must be proportionate: it is unreasonable to hold a low-level employee to the same standard as a system architect. But the claim that individual professionals in propaganda and disinformation systems bear some individual responsibility, proportionate to their agency and authorship, is sound.
34.10 Counter-Propaganda Ethics
We come now to Sophia's question.
Counter-propaganda — communication designed to contest, expose, or inoculate against propaganda — raises a specific ethical problem: it frequently uses the same techniques as the propaganda it is countering. Narrative structure, emotional appeal, psychological framing, repetition, source credibility — these are not inherently propagandistic techniques. They are general features of effective communication. But they are also the techniques that, when used in service of false or misleading content, constitute propaganda. The use of these techniques by counter-propagandists creates what we might call the mirror problem: if you look closely enough at counter-propaganda, it can start to look like propaganda aimed in the other direction.
Four criteria for counter-propaganda ethics. Drawing on the frameworks developed in this chapter and on the analysis from Chapter 29, a legitimate counter-propaganda effort must satisfy four criteria:
First, factual accuracy. Counter-propaganda must be grounded in accurate information. Emotional appeals, narrative structures, and psychological framing can all be used legitimately, but they must be used to illuminate true things, not to create false impressions. If a counter-propaganda campaign uses fear, the fear must correspond to a genuine risk. If it uses indignation, the injustice it references must be real.
Second, proportionality. The emotional and rhetorical intensity of a counter-propaganda campaign should be proportionate to the evidence it is presenting. It is not automatically ethical to match the emotional intensity of propaganda one is countering if that intensity would be disproportionate to accurately represented facts. Counter-propaganda that inflames audience emotions beyond what the evidence warrants, in the name of fighting disinformation, has crossed the line.
Third, transparency of source and intent. A counter-propaganda campaign must disclose who is behind it. Astroturfed counter-campaigns — those that simulate organic grassroots responses while being centrally organized and funded — reproduce the deception problem of the propaganda they are countering. Transparency is a condition of ethical counter-propaganda, not a nice-to-have.
Fourth, respect for audience agency. The goal of counter-propaganda must be to strengthen the audience's capacity for autonomous evaluation, not to substitute the counter-propagandist's conclusions for the propagandist's. Inoculation theory (Chapter 33) is a useful model here: the goal of an inoculation intervention is not to create immunity to the specific conclusion of the propaganda being countered, but to build the critical evaluation skills that allow audiences to assess propaganda arguments for themselves. Counter-propaganda that pre-loads specific conclusions, rather than building evaluation skills, risks becoming just another influence operation with better intentions.
Is Sophia's campaign ethical? Using the four criteria: Does her inoculation message use accurate information? She believes so, and the examples she has drawn are factual. Is the emotional intensity proportionate to the evidence? This requires honest self-assessment — is the mild indignation she has designed proportionate to the actual severity of the false expert appeal phenomenon? It may well be. Is the source transparent? Her campaign has a disclosed origin as a media literacy initiative. Does it respect audience agency? The inoculation format explicitly aims to build evaluation skills rather than just transmit conclusions: it shows how the false expert technique works, rather than telling audiences what to believe about climate change.
Sophia's campaign, on honest application of these criteria, is in the ethical zone — but barely, and the nearness is instructive. The techniques she is using are the same techniques propagandists use. The difference is entirely in the accuracy, proportionality, transparency, and respect for agency with which she is using them. If any of those four criteria were to slip — if she exaggerated the evidence, amplified the emotion beyond what the facts warrant, concealed her organization's involvement, or designed the campaign to produce a specific conclusion rather than build critical capacity — the campaign would become manipulative.
The ethical line in counter-propaganda is not where most people assume it is. It is not between us and them, between good intentions and bad ones, between true causes and false ones. It is between methods that respect the audience's capacity for rational self-governance and methods that circumvent it — regardless of whose side you are on.
34.11 Research Breakdown: The Emotional Contagion Paper
Full citation: Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks." Proceedings of the National Academy of Sciences 111, no. 24 (2014): 8788–8790.
What the study found. The paper reported that manipulating the emotional valence of content shown in Facebook News Feeds produced measurable changes in the emotional content of users' subsequent posts. Users exposed to less positive content (having positive posts from friends removed from their feeds) subsequently wrote posts with reduced positive emotional content. Users exposed to less negative content wrote posts with increased positive content. The effect was statistically significant but substantively modest — the researchers described it as "consistent with … small effect sizes."
How the study was conducted. The manipulation involved algorithmic adjustment of the ranking function Facebook uses to select posts for each user's News Feed. Specific posts from friends were suppressed — not deleted, but not shown — based on whether their emotional content was positive or negative. The study analyzed a sample of 689,003 accounts. Emotional content of posts was assessed using the Linguistic Inquiry and Word Count (LIWC) text analysis tool.
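The LIWC measure used in the study is, at its core, a dictionary lookup: each post is scored by the percentage of its words that appear in predefined positive- and negative-emotion word lists. A minimal sketch of that scoring logic follows — the toy word sets here stand in for LIWC's proprietary dictionaries and are purely illustrative:

```python
import re

# Toy stand-ins for LIWC's proprietary emotion dictionaries (illustrative only).
POSITIVE_WORDS = {"happy", "love", "great", "good", "wonderful"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "bad", "awful"}

def emotion_word_rates(post: str) -> dict:
    """Return the percentage of words in a post that are positive / negative.

    This mirrors the study's dependent variable: the proportion of emotion
    words in users' status updates, not a per-post sentiment label.
    """
    words = re.findall(r"[a-z']+", post.lower())
    if not words:
        return {"positive": 0.0, "negative": 0.0}
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return {
        "positive": 100.0 * pos / len(words),
        "negative": 100.0 * neg / len(words),
    }

rates = emotion_word_rates("What a wonderful day, I love this")
```

The analysis then compared the average of these per-post rates across experimental groups — which is why even very small per-user shifts could be detected across hundreds of thousands of accounts.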
The methodological significance. The study's finding that emotional states can be "transferred" through algorithmic curation — that the emotional texture of the content you see on social media systematically shapes your emotional state, which in turn shapes what you communicate — has been described as one of the most important empirical findings in the social psychology of digital media. It provides direct evidence for what critics of social media algorithms had long argued: that algorithmic amplification of emotionally engaging content is not a neutral technical decision but an intervention in users' psychological states and, through them, in public emotional culture.
The ethics controversy. As noted in Section 34.7, the study attracted sustained criticism on ethical grounds. The critiques fell into several categories:
Informed consent. The most widely cited objection was that 689,003 users had been enrolled in a psychological experiment without informed consent. The terms-of-service consent claim was widely rejected by ethicists. An Editorial Expression of Concern published by PNAS after the paper acknowledged that questions had been raised about the informed consent process and about whether the research should have received more intensive ethical review; the absence of explicit informed consent or debriefing of participants was widely characterized as an ethical lapse.
Risk of harm. Multiple commentators raised concerns about the possibility that deliberately inducing negative emotional states could cause harm to users with mental health vulnerabilities. The researchers' response acknowledged this concern without providing evidence that adequate steps had been taken.
The retraction debate. Some researchers called for retraction of the paper. Others argued that retraction was not warranted because the findings themselves were not in question — only the ethics of the research design. The paper was not retracted, but the controversy produced significant subsequent discussion in the research community about IRB jurisdiction over corporate research and about the ethical standards applicable to research conducted on platform users.
What the controversy revealed about platform capabilities. The most lasting significance of the emotional contagion study may be less the specific finding than the capability it documented. Facebook, in 2012, had the ability to systematically manipulate the emotional states of hundreds of thousands of users through algorithmic feed curation. This capability was used routinely — not just in this one experiment, but as a feature of ordinary platform operation, since every algorithmic ranking decision selects for some content over other content and thereby shapes the emotional texture of users' experience. The experiment simply made explicit, and measurable, what platform architecture does continuously.
34.12 Primary Source Analysis: The APA Ethics Code, Section 5
Source: American Psychological Association. Ethical Principles of Psychologists and Code of Conduct. Washington, DC: APA, 2002 (amended 2010, 2016, 2017). Section 5: Advertising and Other Public Statements.
Section 5 of the APA Ethics Code governs the public communications of psychology professionals. It is a model case of professional persuasion ethics — not because it is perfect, but because it grapples seriously with a set of tensions that are representative of the broader field.
What it prohibits. Section 5.01 prohibits "false or deceptive statements" in public communications, including "false, misleading, or fraudulent" claims about qualifications, products, services, or research results, and bars advertising that contains "misrepresentations of fact." Section 5.05 separately prohibits soliciting testimonials from current therapy clients, whose therapeutic relationships might compromise testimonial independence.
Section 5.03 requires that descriptions of psychological workshops, non-degree educational programs, and similar offerings accurately describe their content and the credentials of their presenters. Section 5.04 addresses "media presentations" — psychologists who speak to media must provide accurate information, even when that information is complex, and must "take reasonable precautions to ensure that statements are based on appropriate psychological literature and practice."
What it permits. The code explicitly permits advocacy — psychologists can publicly advocate for positions, policies, and practices, and can use the tools of persuasion to do so. It does not require psychologists to be neutral communicators. The limitation is on how they advocate: truthfully, without false credentials, without misrepresenting research.
What standards it sets. The APA code's contribution to persuasion ethics is the "appropriate psychological literature" standard: that public statements by psychological professionals should be grounded in the research literature, even when simplified for public communication. This is not a prohibition on simplification — the code recognizes that communication for non-specialist audiences requires translation — but it is a prohibition on simplification so extreme that it misrepresents the actual state of knowledge.
What enforcement mechanisms exist. APA ethics violations are investigated by the APA Ethics Committee and can result in sanctions ranging from reprimand to loss of APA membership. Some violations may be referred to state licensing boards, which can affect professional licensure. However, the code is enforced primarily through professional reputation and peer norms, not through legal mechanisms. APA membership is voluntary, and non-members are not subject to the code at all.
Where it falls short. The code's most significant limitation in the contemporary information environment is that it was designed for individual professional communicators, not for the platforms and aggregators that now mediate most public communication. A psychologist who provides a media interview must comply with Section 5.04. The platform that algorithmically amplifies that interview a hundredfold, because it has high engagement metrics, is not subject to the APA code or to any professional ethics code at all. The code addresses the ethics of individual professional communicators in ways that do not extend to the architectures through which those communications are amplified and contextualized.
34.13 Debate Framework: Is Persuasion Using Emotional Techniques Ethically Different from Propaganda?
The framing question for this chapter's formal debate exercise cuts to the heart of the ethics of persuasion.
Position A: Persuasion using emotional techniques is not ethically different from propaganda. Both use emotional manipulation to bypass rational agency. The difference between a well-funded public health campaign and Nazi propaganda is not in the ethical status of the techniques used but only in the accuracy of the content and the alignment of the goals with the audience's interests. And those latter factors, while important for evaluating the consequences of the communication, do not change the method — both are still using emotions to move people to conclusions that rational deliberation might not produce. The autonomy-based account of manipulation applies equally to both: both communications are trying to produce assent without fully engaging rational evaluation.
The strongest version of Position A goes further: if we accept that emotional appeals in public health, environmental advocacy, or counter-disinformation campaigns are ethical, we have accepted that the ethics of persuasion is entirely about the content and ends, not the methods. But this conclusion is dangerous: it provides a justification structure available to anyone who believes their cause is just. Every propagandist believes their cause is just. If "good ends + emotional manipulation = ethical communication," then the ethics of persuasion reduces to agreement about ends, and there is nothing left for the methods to answer for.
Position B: Persuasion using emotional techniques is ethically different from propaganda, because the ethical status of a persuasive method depends not just on whether it is emotional but on whether the emotional appeal is grounded in accurate information, proportionate to the evidence, transparent in its source, and aligned with the audience's genuine interests. A fear appeal that invokes a genuine and accurately characterized risk is not ethically equivalent to a fear appeal that invokes a manufactured or exaggerated threat. The techniques may look similar from outside, but they are performing different cognitive operations on the audience: one is making real danger vivid; the other is creating a false impression of danger. These are not the same thing.
The strongest version of Position B draws on the distinction between engaging and bypassing rational agency. Emotional appeals that are grounded in real features of a situation do not bypass rational evaluation — they make the evidence for rational evaluation more vivid and salient. A person who sees a graphic anti-smoking advertisement and subsequently decides to quit smoking has not been manipulated into a choice they would otherwise reject on reflection; they have been helped to viscerally connect abstract knowledge they already had to a decision that the evidence supports. This is different from emotional manipulation in the propaganda sense, where the goal is specifically to produce a response that the audience would reject if they evaluated the evidence clearly.
The unresolved core. What Position A correctly identifies, and what should make students of persuasion ethics genuinely uncomfortable, is that the four criteria — accuracy, proportionality, transparency, respect for agency — are conditions that are easy to claim and difficult to verify. Every communicator believes their emotional appeals are proportionate. Every advocate believes their cause is well-grounded in evidence. The criteria do not automatically protect against well-intentioned people who have convinced themselves that their manipulation is justified. This is precisely why the ethics of persuasion requires ongoing critical self-examination rather than a one-time compliance checklist — and why Webb does not simply give Sophia the answer.
34.14 Action Checklist: Ethical Persuasion Self-Audit
Before distributing any persuasive communication, ask:
Category 1: Accuracy
1. Are all factual claims in this communication accurate to the best of my knowledge?
2. Have I omitted information that would materially change the audience's assessment if they knew it?
3. Are the sources I have cited reliable, relevant, and fairly represented?
Category 2: Transparency
4. Does this communication clearly identify its source and, where relevant, its funding?
5. Does the format of this communication make clear that it is a persuasive rather than a neutral informational communication?
6. If this communication uses emotional framing or narrative structure, am I transparent about that choice when describing the campaign to others?
Category 3: Respect for Rational Agency
7. Does this communication provide the audience with enough information to evaluate its claims independently?
8. Is the emotional intensity of this communication proportionate to what a well-informed, reflective person would consider appropriate given the accurately represented facts?
9. Is any emotional appeal I am using tracking a real feature of the situation, or am I using emotion as a substitute for evidence I cannot provide?
Category 4: Alignment with Audience Interests
10. Does this communication, if it succeeds in persuading the audience, leave them better or worse equipped to evaluate future communications on this topic?
11. Am I designing this communication to strengthen the audience's evaluative capacity, or to produce a specific pre-determined conclusion regardless of whether the audience has evaluated the evidence?
12. Is the outcome I am trying to produce one that the audience would endorse if they had access to all the relevant information and were reasoning clearly?
A "no" or "uncertain" answer to any of these questions is a reason to revise the communication before distributing it.
34.15 Inoculation Campaign: Ethical Audit — Progressive Project
Context. In Chapter 33, you designed an inoculation campaign targeting a specific disinformation technique. In this chapter, you will conduct a formal ethical audit of that design using the frameworks developed here.
The Audit Process
Step 1: Apply the spectrum analysis (Section 34.2). Where on the spectrum from rational argument to manipulation does your campaign's core technique fall? Apply the four criteria:
- Truthfulness of claims
- Relevance and proportionality of emotional appeal
- Respect for rational agency
- Transparency of intent
Write a 200-word assessment.
Step 2: Apply the consent framework (Section 34.3). Who is your target audience? Do they know they are receiving a persuasion communication? Do they have the cognitive and contextual capacity to recognize and evaluate what you are doing? Are there any consent concerns with the channels or contexts in which you plan to deploy the campaign?
Write a 150-word assessment.
Step 3: Apply one manipulation framework (Section 34.4). Choose one of the three philosophical accounts (autonomy-based, welfare-based, or procedural) and apply it to your campaign's core technique. Does the technique meet the ethical standard on this account? Where does it fall short?
Write a 150-word assessment.
Step 4: Emotional appeal assessment (Section 34.6). Identify each emotional appeal in your campaign. For each:
- What emotion is invoked?
- What real feature of the situation grounds it?
- Is the intensity proportionate?
- Would a well-informed, reflective person find the appeal appropriate?
Write 50–75 words per emotional appeal identified.
Step 5: Apply the counter-propaganda ethics criteria (Section 34.10). Does your campaign meet the four criteria for ethical counter-propaganda?
- Factual accuracy
- Proportionality
- Transparency of source and intent
- Respect for audience agency
Where are the gray areas? What revisions would address them?
Write a 200-word assessment.
Step 6: Action Checklist. Complete the twelve-question action checklist from Section 34.14. For any "no" or "uncertain" answer, write one sentence describing the revision you would make.
Step 7: Revised campaign element. Based on the audit, identify the single element of your campaign that is most ethically vulnerable. Revise it. Write the revised element alongside the original and explain the change in 100 words.
Chapter Summary
The ethics of persuasion resists simple summary because it resists simple answers. But the chapter's core arguments can be stated clearly.
Not all persuasion is manipulation. The spectrum from rational argument through legitimate emotional appeal to psychological manipulation to coercive control is real, and the distinctions between points on it are meaningful and important. The key criteria for locating a practice on this spectrum — truthfulness, emotional proportionality, respect for rational agency, and transparency — are not algorithmic, but they are not arbitrary either.
The consent framework adds an important dimension: who knows what, and does the audience have the capacity to recognize and evaluate the persuasion they are receiving? The digital advertising ecosystem presents consent problems that existing frameworks were not designed to handle.
Three philosophical accounts of manipulation — autonomy-based, welfare-based, and procedural — each capture something real. The autonomy-based account reminds us that good ends do not automatically justify methods that bypass rational evaluation. The welfare-based account reminds us that paternalism is troubling even when the paternalist is right. The procedural account gives us a practical test: does the technique depend for its effectiveness on the audience not knowing about it?
The nudge debate illustrates that even well-intentioned, evidence-based attempts to improve choices through behavioral design raise genuine ethical concerns — and that those concerns become acute when the tools of behavioral design are available to anyone with access to platform infrastructure.
Emotional appeals are not inherently manipulative. They are manipulative when they create emotional responses that are decoupled from accurate information, or when they exceed in intensity what accurate information would warrant. Counter-propaganda can legitimately use emotional appeals, but it cannot exempt itself from these standards by pointing to the sins of the propaganda it is countering.
Individual professionals in propaganda and disinformation systems bear real moral responsibility, proportionate to their agency and authorship. "I was just following orders" or "I was just doing technical work" does not fully discharge that responsibility.
And Sophia's campaign — if it meets the four criteria — is ethical. But meeting the four criteria is not a one-time achievement. It requires ongoing scrutiny, honest self-assessment, and the willingness to hear uncomfortable answers to uncomfortable questions.
That, as Webb would say, is the work.
Key Terms
autonomy-based account of manipulation — The view that manipulation is wrong because it bypasses or subverts the target's rational agency, producing assent through non-rational means.
choice architecture — The design of the environment within which choices are made; the configuration of defaults, options, and presentation that shapes decision-making without restricting formal freedom.
consent framework — An approach to persuasion ethics that focuses on whether the target knows they are being persuaded and has voluntarily exposed themselves to the communication.
emotional contagion — The transfer of emotional states through exposure to others' emotional expressions; the 2014 Facebook study documented this effect through algorithmic feed manipulation.
four criteria — The cluster of standards used in this chapter to assess persuasion ethics: truthfulness of claims, relevance and proportionality of emotional appeal, respect for rational agency, and transparency of intent.
honest advocacy — The PRSA ethical standard that PR practitioners can legitimately advocate for clients but must do so through truthful communication.
informed consent — A disclosure and agreement process in which research subjects are told the nature, procedures, and potential risks of research before agreeing to participate; notably absent in the Facebook emotional contagion experiment.
IRB (Institutional Review Board) — An ethics review body charged with protecting the rights and welfare of human research subjects; the Facebook emotional contagion study exposed significant gaps in IRB jurisdiction over corporate research.
libertarian paternalism — Thaler and Sunstein's framework for nudge design: guiding people toward welfare-improving choices while preserving their formal freedom to choose otherwise.
nudge — A choice-architecture intervention that makes a desirable option easier to reach without restricting formal freedom, typically by exploiting cognitive biases to guide behavior.
procedural account of manipulation — The view that manipulation uses techniques designed to circumvent rather than engage rational evaluation — techniques that would lose their effectiveness if fully disclosed to the audience.
proportionality — The requirement that the emotional intensity of a persuasive communication correspond to what accurately represented facts would warrant.
welfare-based account of manipulation — The view that manipulation is wrong when it produces outcomes contrary to the target's genuine interests.