Appendix F: Propaganda Techniques Reference

A Field Guide for Critical Analysis


This appendix consolidates every propaganda technique covered in the textbook into a single, scannable reference. Entries are organized by category and formatted for rapid field use: when you encounter a piece of media, a speech, or a social-media campaign and want to name what you are seeing, this guide is your starting point. For deeper theoretical treatment, follow the chapter references at the end of each entry.

Each entry follows a consistent format: definition, key markers (what to look and listen for), one historical and one contemporary example, a counter-inoculation message to internalize, and the chapter that covers the technique in depth.


Part One: Emotional Appeal Techniques


Fear Appeal

Definition: Fear appeals activate threat-related emotions to motivate compliance, acceptance, or action. They work by making an audience feel that danger is imminent and that the propagandist's preferred course of action is the only reliable escape from that danger.

Key markers:
- Exaggerated or unverifiable threat language ("existential," "invasion," "extinction")
- Pairing the threat image directly with the proposed solution in the same message
- Time pressure: urgency framing that discourages deliberation ("we must act now")
- Dehumanizing or monstrous imagery associated with the threat source
- Suppression of information about actual probability or magnitude of the threat

Historical example: Nazi propaganda systematically used fear of "racial contamination" and "Jewish Bolshevism" in posters, film, and radio broadcasts throughout the 1930s to manufacture existential dread and normalize persecution.

Contemporary example: During the 2015–2016 European refugee crisis, several far-right media outlets and political campaigns circulated manipulated or mislabeled images of crime scenes alongside refugee arrival footage, framing ordinary migration as imminent physical danger to native citizens.

Counter-inoculation message: Ask: what is the actual documented probability of this threat, who benefits from my fear, and what am I being prevented from doing while I am frightened?

Chapter reference: Chapter 7


Hope Appeal

Definition: Hope appeals present an idealized future state that is attainable only through alignment with the propagandist's agenda. Unlike fear, they pull rather than push, offering a positive vision that binds audiences emotionally to a cause or leader.

Key markers:
- Utopian or redemptive language ("great again," "new dawn," "we can")
- Vague specificity: emotionally vivid imagery paired with underspecified policy
- Personal identification: the audience is cast as the agent of the hopeful future
- Contrast with a degraded present that the speaker claims to uniquely understand
- Messianic or savior framing centered on a single leader or movement

Historical example: Franklin Roosevelt's "New Deal" communication campaign used radio fireside chats and optimistic visual iconography to rebuild public confidence during the Depression — a politically motivated use of hope that shaped legislative consent.

Contemporary example: The Barack Obama 2008 campaign's "Hope" poster by Shepard Fairey became one of the most widely reproduced political images in American history, intentionally using a minimalist style derived from socialist realist poster art to attach aspirational emotion to a candidate.

Counter-inoculation message: Ask: what specific, verifiable commitments back up this vision, and who is excluded from or harmed by this imagined future?

Chapter reference: Chapter 7


Pride and Nationalist Appeal

Definition: Nationalist propaganda harnesses group pride — in nation, ethnicity, religion, or culture — to bind individuals to a collective and make criticism of that collective feel like betrayal. It weaponizes identity to suppress dissent and justify in-group favoritism.

Key markers:
- Invocation of a glorious historical past as template for the present
- "Our people" language that elides internal difference
- Equation of criticism with treason or shame
- Use of military imagery, national symbols, and heritage aesthetics
- Implied or explicit hierarchy placing the in-group above others

Historical example: Imperial Japanese propaganda of the 1930s–1940s constructed "Yamato-damashii" (Japanese spirit) as a racially unique quality that made sacrifice for the Emperor both noble and obligatory, suppressing war-weariness and dissent.

Contemporary example: Russian state media's "Russkiy Mir" (Russian World) concept, promoted aggressively after 2014, frames ethnic Russians anywhere as members of a civilizational community whose protection justifies military intervention — a nationalist pride appeal used to manufacture consent for actions in Ukraine.

Counter-inoculation message: Ask: whose version of this identity is being promoted, what behaviors are being justified in its name, and who inside the group is being erased?

Chapter reference: Chapter 7


Disgust Appeal

Definition: Disgust appeals trigger the contamination-avoidance emotion — one of the most evolutionarily primitive and hard-to-override responses — to generate rejection of a person, group, or idea without rational evaluation.

Key markers:
- Vermin, parasite, filth, or disease metaphors applied to human groups
- Imagery or language that triggers physical revulsion
- Association of the target with bodily contamination (dirt, sewage, infection)
- Conflation of moral impurity with physical uncleanliness
- Implicit or explicit calls for "cleansing," "purging," or "purification"

Historical example: Rwandan state-sponsored radio RTLM referred to Tutsi people as "inyenzi" (cockroaches) repeatedly in the months before the 1994 genocide, using disgust framing to lower the psychological barrier to mass killing.

Contemporary example: Anti-immigration rhetoric in several European countries through the 2010s–2020s consistently described migrants as bringing disease, crime, or cultural "contamination," using health metaphors that activated disgust rather than policy analysis.

Counter-inoculation message: Notice when disgust emotion is being activated toward people rather than behaviors, and treat that activation as a signal to slow down rather than to act.

Chapter reference: Chapter 7


Moral Outrage Appeal

Definition: Moral outrage appeals deliberately provoke anger at perceived injustice, norm violation, or moral transgression. Unlike fear, outrage energizes rather than paralyzes, making audiences more likely to share content, donate, or act — often before verifying the underlying claim.

Key markers:
- Moral violation framing: someone has done something deeply wrong to someone innocent
- High-arousal language ("outrageous," "disgusting," "how dare they")
- Decontextualized clips or quotes stripped of surrounding circumstances
- Clear villain and victim structure with little moral complexity
- Implicit or explicit call to public condemnation or punitive action

Historical example: The "Remember the Maine!" press campaign of 1898, driven partly by William Randolph Hearst's newspapers, used moral outrage at a ship explosion to manufacture public support for the Spanish-American War before the cause was established.

Contemporary example: Social-media "cancel" campaigns frequently begin with decontextualized clips that maximize moral outrage; the 2019 Covington Catholic High School viral video was widely shared with outrage framing before context reversed the dominant interpretation — illustrating how outrage travels faster than correction.

Counter-inoculation message: When you feel intense moral outrage, treat it as a reason to verify before sharing, not as confirmation that the story is true.

Chapter reference: Chapter 7


Part Two: Simplification and Totalizing Techniques


False Dichotomy (Black-and-White Thinking)

Definition: False dichotomy artificially reduces a complex issue to two mutually exclusive options, typically one acceptable and one monstrous, eliminating middle ground, nuance, and alternative possibilities. It is the structural backbone of much political propaganda.

Key markers: - "Either/or," "with us or against us," "you choose" framing - Elimination of moderate or hybrid positions without argument - Both options presented as exhaustive (no third path acknowledged) - Moral valence: one option coded good, the other evil or weak - Urgency framing to prevent deliberation about alternatives

Historical example: George W. Bush's post-September 11 declaration "Either you are with us, or you are with the terrorists" (September 20, 2001) is a textbook false dichotomy that was used to suppress diplomatic nuance and justify a broad military response.

Contemporary example: Brexit campaigners on both sides repeatedly framed the choice as binary — "global Britain" versus "EU control" or "economic security" versus "xenophobia" — suppressing discussion of partial integration models or negotiated arrangements.

Counter-inoculation message: When presented with exactly two options, immediately ask: what is the third option, and why am I not being shown it?

Chapter reference: Chapter 8


Scapegoating

Definition: Scapegoating attributes complex social problems — economic hardship, crime, cultural change, military defeat — to a single identifiable group, providing a simple explanation that concentrates anger and deflects from structural causes.

Key markers:
- A diffuse social problem paired with a specific, often minority, group as cause
- Evidence presented is anecdotal or cherry-picked; statistical context is absent
- The group chosen is already socially vulnerable or culturally legible as "other"
- Problem-solution pairing: remove the group, solve the problem
- Escalating rhetoric as social conditions worsen

Historical example: Weimar Germany's economic collapse after WWI and the punishing terms of Versailles were systematically attributed by Nazi propagandists to Jewish bankers and cultural influences, providing a politically useful but causally false explanation.

Contemporary example: In the aftermath of the 2008 financial crisis, immigrant communities in several countries were blamed for unemployment and wage depression — a scapegoating pattern documented by researchers examining far-right electoral gains in Europe between 2008 and 2016.

Counter-inoculation message: Ask: is the proposed cause actually capable of producing the claimed effect at the scale claimed, and who benefits from this explanation?

Chapter reference: Chapter 8


The Big Lie (Grosse Lüge)

Definition: The Big Lie is a claim so large and so frequently repeated that audiences find it difficult to believe anyone could fabricate something at such a scale. The technique exploits the assumption that elaborate lies require more cognitive effort to maintain than ordinary deception.

Key markers:
- Claims that are sweeping, total, and difficult to verify or falsify quickly
- Confidence and repetition rather than evidence-based argument
- Dismissal of counter-evidence as itself part of the conspiracy or lie
- The speaker positions themselves as the only source willing to tell the truth
- Social costs for publicly doubting the claim within in-group settings

Historical example: Adolf Hitler described the technique in Mein Kampf (1925), attributing it to his enemies, but Nazi propaganda employed it in the claim that Germany's defeat in WWI was caused by an internal "stab in the back" rather than by military failure.

Contemporary example: The "Stop the Steal" campaign following the 2020 U.S. presidential election asserted, without credible evidence, that the election was stolen through widespread systematic fraud — a claim of such sweeping scope that its sheer scale made it difficult for some audiences to dismiss, despite its rejection by 60+ courts and nonpartisan election officials.

Counter-inoculation message: The larger and more sweeping a claim is, the more — not less — evidence you should require before accepting it.

Chapter reference: Chapter 8


Part Three: Social Proof and Authority Techniques


Bandwagon and Social Proof

Definition: Bandwagon propaganda exploits the human tendency toward conformity by suggesting that "everyone" or "most people" already hold a view or take an action, making dissent feel socially risky and agreement feel like alignment with the winning side.

Key markers:
- Vague quantifiers: "millions of people," "everyone knows," "the American people believe"
- Crowd imagery: rallies, marches, queues presented as evidence of correctness
- Popularity framed as validity ("the best-selling," "the most-watched")
- Social risk language: being left behind, missing out, being isolated
- Manufactured metrics: inflated view counts, purchased followers, fake petition numbers

Historical example: Soviet May Day parades were choreographed on a massive scale specifically to communicate that the Communist project had the unanimous enthusiastic support of the population — a visual bandwagon argument.

Contemporary example: Social-media campaigns routinely use purchased bot traffic to create the appearance of viral momentum before organic spread begins, a practice documented in multiple platform transparency reports from Meta and Twitter/X between 2018 and 2023.

Counter-inoculation message: Popularity tells you about social consensus, not about truth — ask what evidence independent of other people's acceptance supports the claim.

Chapter reference: Chapter 9


Appeal to Authority and False Expertise

Definition: Appeal to authority uses the credibility of an expert, institution, or prestigious source to support a claim. False expertise manufactures the appearance of authoritative endorsement — through credentials, titles, or institutional affiliation — where genuine expertise does not exist or has been distorted.

Key markers:
- Expert cited outside their domain of actual expertise
- Credentials presented without source verification or peer context
- "Studies show" or "scientists agree" without citation of specific studies
- Manufactured institutes or think tanks with neutral-sounding names but advocacy funding
- Genuine expert consensus misrepresented as divided or uncertain

Historical example: R.J. Reynolds' "More Doctors Smoke Camels" campaign of the late 1940s and early 1950s used physician imagery and surveys conducted by the company's own advertising agency to attach medical authority to cigarettes, casting doubt on accumulating evidence of health harm.

Contemporary example: Climate-change denial campaigns between 2000 and 2020 repeatedly cited a small number of contrarian scientists — often petroleum-industry-funded — as evidence that expert opinion was divided, despite documented 97%+ consensus in the peer-reviewed literature.

Counter-inoculation message: Ask whether the authority cited has relevant expertise, whether their view is representative of informed opinion in their field, and who funds them.

Chapter reference: Chapter 9


Testimonial (Celebrity and Authority Endorsement)

Definition: Testimonial propaganda uses the popularity, likability, or perceived moral authority of a well-known person to transfer positive affect to a product, candidate, or cause — regardless of whether the endorser has relevant knowledge.

Key markers:
- Celebrity, athlete, or cultural figure deployed outside their domain
- Personal story ("I use this / I believe this") used as evidence rather than argument
- Emotional identification with the endorser substitutes for evaluation of the claim
- Endorser's image carefully curated to match target audience values
- Lack of specificity: endorser rarely provides verifiable evidence or argument

Historical example: During WWII, Hollywood actors including Clark Gable and James Stewart enlisted, and their military service was heavily publicized by the U.S. government as part of a deliberate testimonial strategy to normalize enlistment.

Contemporary example: Anti-vaccine content spread across social media platforms between 2019 and 2023 frequently featured celebrity testimonials from athletes and influencers with no medical training, framed as personal freedom stories that substituted emotional authority for immunological evidence.

Counter-inoculation message: A person's fame or likability tells you nothing about whether their factual claims are accurate — ask what evidence they are citing, not who they are.

Chapter reference: Chapter 9


Plain Folks (Everyman Appeal)

Definition: Plain folks propaganda presents leaders, ideas, or institutions as ordinary, relatable, and unpretentious, building trust through performed authenticity and suppressing scrutiny by framing elitism as the real threat.

Key markers:
- Conspicuous displays of common-man behavior (eating at diners, wearing work clothes)
- Anti-intellectual rhetoric positioning expertise as arrogance
- Regional accent, colloquial language, or grammatical informality as trust signals
- Contrast with opponent framed as out-of-touch elite
- Media management that controls "candid" moments

Historical example: Franklin Roosevelt's fireside chats used intimate, conversational radio language to make a patrician Harvard-educated president sound like a neighbor explaining the New Deal over the kitchen table.

Contemporary example: Numerous populist political campaigns of the 2010s–2020s (in the U.S., U.K., Brazil, Hungary, and elsewhere) featured wealthy or elite-educated leaders performing working-class authenticity as a rhetorical strategy, documented extensively by political communication researchers.

Counter-inoculation message: Ask whether the plainness is authentic or performed, and whether it is being used to substitute relatability for policy accountability.

Chapter reference: Chapter 9


Part Four: Repetition and Symbolic Techniques


Repetition and the Illusory Truth Effect

Definition: The illusory truth effect is the empirically demonstrated tendency for people to rate repeated statements as more probably true than novel ones, independent of their actual accuracy. Propaganda exploits this by saturating information environments with consistent messaging.

Key markers:
- The same claim or slogan appears across multiple platforms and formats
- Repetition without new evidence or argument each time
- Slight variation in wording to prevent tune-out while preserving core message
- High-frequency, low-depth exposure preferred over low-frequency, high-depth
- No clear originating source: message appears to arise spontaneously everywhere

Historical example: Goebbels' Reich Ministry of Public Enlightenment and Propaganda coordinated all German media to repeat consistent messaging about Jewish culpability and German victimhood, understanding that saturation was more important than argumentation.

Contemporary example: Research on Russian Internet Research Agency operations (Mueller Report, 2019; Senate Intelligence Committee Report, 2020) documented the systematic repetition of divisive narratives across thousands of accounts and platforms, designed to normalize those narratives through sheer frequency.

Counter-inoculation message: Familiarity is not evidence — ask when you first encountered this claim and how many independent sources (not just separate outlets repeating the same original source) actually support it.

Chapter reference: Chapter 10


Symbols, Flags, and Visual Propaganda

Definition: Visual propaganda uses images, colors, flags, uniforms, and symbols to communicate identity, authority, and emotion more rapidly and durably than text, bypassing analytical processing and activating associative memory.

Key markers:
- Consistent color palettes, imagery, and iconography across all materials
- Symbols repurposed from positive associations (religion, nature, history) and attached to the movement
- Scale: giant rallies, oversized banners, monumental architecture designed to convey power
- Uniforms that erase individual identity and signal collective belonging
- Images of strength, purity, or historical greatness paired with present-day political figures

Historical example: Albert Speer's "Cathedral of Light" at the Nuremberg rallies, created with some 130 anti-aircraft searchlights, was explicitly designed to create a quasi-religious atmosphere that made Nazi power feel transcendent and inevitable.

Contemporary example: ISIS's Al-Hayat Media Center extensively borrowed visual grammar from Hollywood action films and video game aesthetics in its recruitment videos between 2014 and 2017, deliberately using high-production-value symbolic imagery to signal modernity and attract young Western recruits.

Counter-inoculation message: Ask what emotions the visual is designed to activate and whether those emotions are doing the argumentative work that evidence should be doing.

Chapter reference: Chapter 11


Glittering Generalities (Virtue Words)

Definition: Glittering generalities use words with strong positive connotations — freedom, democracy, family, progress, faith — that are universally admired but so vague as to carry almost no specific content, allowing audiences to project their own preferred meanings onto a message.

Key markers:
- High abstraction, low specificity: many ideals, few facts
- Words chosen for emotional resonance rather than descriptive precision
- The audience supplies meaning; the speaker supplies affect
- Critique is made difficult because the words name things everyone values
- Often stacked: "freedom-loving, family-centered, faith-driven values"

Historical example: Cold War American propaganda consistently paired "democracy" and "freedom" with capitalism and anti-communism without defining terms, making opposition to specific U.S. policies appear equivalent to opposition to democracy itself.

Contemporary example: Political advertising across the ideological spectrum in the 2010s–2020s consistently tested virtue-word density over policy specificity, as documented in political advertising research — candidates of all parties used terms like "values," "strength," and "community" at high frequency while minimizing verifiable commitments.

Counter-inoculation message: Ask: what specific, verifiable action or policy does this word refer to, and would the speaker accept any definition of this word that didn't support their agenda?

Chapter reference: Chapter 12


Part Five: Supplementary Classical Techniques


Card Stacking / Cherry Picking

Definition: Card stacking selects only evidence that supports one's position while systematically omitting contradictory evidence, creating a false impression of the overall evidential landscape. It is one of the core techniques in the FLICC taxonomy of science denial.

Key markers:
- Evidence is presented as comprehensive when it is selective
- Negative studies, qualifications, or counterexamples are absent
- Anecdotes substituted for representative data
- Statistical base rates omitted; only confirming instances shown
- Footnotes or citations that, on inspection, do not say what is claimed

Historical example: Tobacco industry internal documents revealed in the 1990s showed systematic suppression of internal research showing health harms, while only funding and publicizing studies that could be used to suggest uncertainty.

Contemporary example: Anti-vaccine websites between 2015 and 2023 routinely presented VAERS adverse-event reports (a passive surveillance system that captures unverified self-reports) as proof of vaccine harm while omitting the large-scale controlled studies showing safety — a textbook FLICC cherry-pick.

Counter-inoculation message: Ask: what would the other side's evidence look like, and has the speaker engaged with the strongest version of it?

Chapter reference: Chapter 12


Transfer (Symbol Association)

Definition: Transfer propaganda works by associating a person, idea, or product with a respected (or despised) symbol, institution, or figure, causing the audience to transfer their existing feelings toward that symbol onto the new target.

Key markers:
- Deliberate visual or textual proximity between the target and the symbol
- The association is asserted or implied, not argued
- Positive transfer: flags, religious imagery, respected leaders behind the candidate
- Negative transfer: opponent associated with enemies, criminals, or disasters
- No logical connection between symbol and target is required

Historical example: Soviet propaganda consistently depicted Lenin and later Stalin alongside peasants and workers in idealized imagery, transferring the moral authority of the working class onto the party leadership.

Contemporary example: Political advertising routinely places opponents alongside unflattering imagery — crime scenes, foreign adversaries, economic disaster footage — using transfer to generate negative affect independent of logical connection, a practice documented in political advertising analysis across multiple election cycles.

Counter-inoculation message: Ask whether any logical connection actually exists between this person and the symbol beside them, or whether you are being asked to feel something rather than to reason.

Chapter reference: Chapter 11


Name-Calling and Labeling

Definition: Name-calling applies a negative label — slur, insult, or ideologically loaded term — to a person or group to trigger rejection without engagement, making the label do the argumentative work that evidence should perform.

Key markers:
- Pejorative terms substituted for descriptive ones
- Label applied broadly to cover a wide range of actors
- Refusal to engage with the substance of the labeled person's argument
- Label chosen for maximum emotional charge within the target audience
- Repetition until the label precedes any other information about the target

Historical example: McCarthyism in the United States (1950–1954) used "communist" and "Red" as labels to destroy careers and suppress political dissent without requiring evidence of actual party membership or subversive activity.

Contemporary example: "Fake news" shifted from a descriptive term for demonstrably false viral content into a generic dismissal label applied to mainstream journalism critical of political figures, beginning around 2016 and documented in media analysis of U.S. and international political communication.

Counter-inoculation message: When you encounter a label, ask what specific, verifiable claim it is substituting for, and whether the underlying argument can withstand scrutiny without the label's emotional charge.

Chapter reference: Chapter 12


Part Six: Cognitive Bias Exploitation Techniques


Confirmation Bias Exploitation

Definition: Propaganda designed to exploit confirmation bias feeds audiences information that matches pre-existing beliefs while insulating them from contradictory evidence, making existing convictions feel more certain and better-supported than they are.

Key markers:
- Content tailored to existing audience beliefs rather than designed to persuade skeptics
- Outrage at perceived bias in opposing information sources, not at inaccuracy per se
- Echo-chamber amplification: content travels primarily within ideologically homogeneous networks
- Framing corrections as themselves biased or politically motivated
- Emotional satisfaction from confirmation substitutes for critical evaluation

Historical example: Nazi-controlled German press of the 1930s produced a closed information environment in which every news item confirmed regime narratives — audiences had limited access to contradictory information and significant social cost for seeking it.

Contemporary example: Algorithmic content recommendation systems on platforms including YouTube and Facebook have been documented, in independent research published between 2018 and 2022, as amplifying confirmation-bias effects by optimizing for engagement, which tends to cluster users in ideologically similar content streams.

Counter-inoculation message: Deliberately seek the best-argued version of the opposing position — if you cannot steelman it, you may not understand the issue as well as you feel you do.

Chapter reference: Chapter 3


Availability Heuristic Exploitation

Definition: The availability heuristic causes people to estimate the probability of an event based on how easily examples come to mind. Propaganda exploits this by saturating media environments with vivid, memorable instances of rare events to make them feel common.

Key markers:
- Repeated, vivid coverage of statistically rare events (terrorist attacks, immigrant crime)
- Emotional intensity of examples, not frequency, drives perceived probability
- Base-rate information (overall crime rates, actual probabilities) is absent
- Memorable images substituted for representative statistics
- "One is too many" rhetoric that blocks statistical context

Historical example: Post-9/11 U.S. media coverage of terrorism was so saturating and vivid that polls consistently showed Americans dramatically overestimated both the frequency of attacks and their personal risk — a documented availability effect that shaped policy support.

Contemporary example: Coverage of migrant crime in several European countries between 2015 and 2019 was documented by researchers including those at the Reuters Institute to be dramatically disproportionate to actual crime rates, creating availability-driven overestimation of the risk.

Counter-inoculation message: Ask: how frequent is this actually, not just how vivid and recent the examples are?

Chapter reference: Chapter 3


In-Group / Out-Group (Us vs. Them)

Definition: Us-versus-them propaganda exploits the evolved tendency toward in-group favoritism and out-group suspicion, dividing complex social landscapes into binary moral communities where in-group members are presumptively good and out-group members are presumptively threatening.

Key markers:
- Collective pronouns used to create group boundary ("we," "our people," "they")
- In-group depicted as under threat from out-group
- Individual out-group members treated as representatives of group characteristics
- In-group failings minimized or explained; out-group failings generalized
- Empathy blocked: out-group suffering is reframed as justified or irrelevant

Historical example: Every major genocidal campaign in the 20th century — Armenia, the Holocaust, Rwanda, Bosnia — was preceded by systematic propaganda establishing a sharp us-versus-them boundary and attributing existential threat to the out-group.

Contemporary example: Research on political polarization in the United States between 2010 and 2024 (including Pew Research Center longitudinal studies) documents a dramatic increase in affective polarization — negative feeling toward the out-party — driven partly by media and social-media content that systematically amplifies us-versus-them framing.

Counter-inoculation message: Ask: am I evaluating this person or group based on actual individual behavior, or based on group membership?

Chapter reference: Chapter 4


Anchoring

Definition: Anchoring exploits the cognitive tendency to rely too heavily on the first piece of information encountered when making judgments. Propagandists set anchors — extreme initial claims, numbers, or framings — to pull subsequent evaluation in a favorable direction.

Key markers:
- Extreme initial offer or claim before negotiation or deliberation begins
- The anchor dominates even when participants know it is arbitrary
- Corrections rarely move judgment back to pre-anchor baseline
- Used in price-anchoring (retail), negotiation, and political framing
- First-mover advantage: whoever sets the frame controls the terms of debate

Historical example: Reparations demands in the Treaty of Versailles negotiations were shaped significantly by initial anchor proposals: the extremely high figures set by France anchored the final figure far above what economic analysis suggested Germany could pay.

Contemporary example: Political campaigns routinely use extreme policy proposals — "abolish ICE," "build the wall and make Mexico pay" — as anchoring devices that make subsequent moderate positions appear more reasonable by comparison, a strategy documented in political psychology research.

Counter-inoculation message: When evaluating a claim or proposal, ask what the appropriate baseline is independent of what you heard first.

Chapter reference: Chapter 3


Backfire Effect and Identity-Protective Cognition

Definition: Early research suggested that corrections sometimes "backfire," strengthening false beliefs among committed believers. More recent work (Wood and Porter, 2019) has questioned the robustness of this effect — but identity-protective cognition is well-documented: people process information differently when it threatens their social identity, selectively discounting threatening evidence.

Key markers:
- Corrections are rejected when the source is seen as ideologically opposed
- The act of correction triggers defensive processing rather than updating
- Group membership mediates receptiveness: same evidence is processed differently based on identity
- Higher factual knowledge sometimes predicts greater polarization (identity-protective reasoning)
- Emotion, not ignorance, is the driver of resistance to correction

Historical example: The persistence of "stab-in-the-back" mythology in Weimar Germany despite extensive documentary evidence of military collapse illustrates how identity-protective cognition can sustain false narratives when they protect national and military self-concept.

Contemporary example: Repeated debunking of specific COVID-19 misinformation claims between 2020 and 2022 showed — consistent with Wood and Porter's findings — that corrections generally reduced false belief, but the effect was smaller when the misinformation was identity-relevant to the audience, documented in multiple preregistered experiments.

Counter-inoculation message: Notice when a piece of information feels threatening to your identity rather than just factually wrong — that feeling is a cue to engage more carefully, not less.

Chapter reference: Chapter 4


Part Seven: Structural and Media Techniques


Agenda-Setting

Definition: Agenda-setting is the media's documented capacity to shape what the public considers important by determining which issues receive coverage, regardless of whether the coverage is favorable or critical. The press may not tell people what to think, but it powerfully shapes what they think about.

Key markers:
- Selective allocation of coverage volume to certain issues over others
- "Missing" stories: significant events that receive no coverage disappear from public awareness
- Issues covered consistently are rated as more important by audiences in polling
- Agenda can be set by governments through access restrictions, press releases, and timing
- Social media platforms exercise agenda-setting through algorithmic amplification decisions

Historical example: Maxwell McCombs and Donald Shaw's foundational Chapel Hill study of the 1968 election, published in 1972, documented empirically that undecided voters' issue priorities closely tracked news coverage priorities, establishing the scientific basis for agenda-setting theory.
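
As a worked illustration of the comparison at the heart of such studies, the sketch below ranks issues by coverage volume and by stated public priority, then computes a Spearman rank correlation between the two orderings. The issue names and figures are invented for illustration only; they are not the Chapel Hill data.

```python
# Illustrative only: invented coverage and polling figures, not the Chapel Hill data.
coverage_volume = {   # hypothetical column inches of coverage per issue
    "foreign policy": 320, "law and order": 250, "economy": 180,
    "civil rights": 120, "environment": 40,
}
public_priority = {   # hypothetical % of respondents naming the issue most important
    "foreign policy": 34, "economy": 27, "law and order": 21,
    "environment": 11, "civil rights": 7,
}

def ranks(scores):
    # Rank 1 = highest value; this sketch assumes no ties.
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {issue: rank for rank, issue in enumerate(ordered, start=1)}

def spearman(rank_a, rank_b):
    # Spearman's rho for untied ranks: 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    n = len(rank_a)
    d_squared = sum((rank_a[k] - rank_b[k]) ** 2 for k in rank_a)
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

rho = spearman(ranks(coverage_volume), ranks(public_priority))
print(f"Rank correlation between coverage and public priority: {rho:.2f}")
# Prints 0.80 for these invented figures; identical orderings would give 1.00.
```

Agenda-setting research asks how close to 1.0 such correlations come across issues, outlets, and time, and how the direction of influence can be established.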

Contemporary example: Research on news coverage of the 2016 U.S. election documented that Hillary Clinton's emails received more column inches and airtime than all substantive policy topics combined — a documented agenda-setting effect that researchers argue shaped voter concern priorities.

Counter-inoculation message: Ask: what issues are not being covered, and whose interests are served by their absence from public conversation?

Chapter reference: Chapter 5


Framing

Definition: Framing shapes not what audiences think about but how they think about it — which aspects of an issue are highlighted, which causal stories are implied, and which values are invoked to evaluate it. The same fact can be framed in multiple ways to produce different evaluations.

Key markers:
- Issue framed as crime vs. public health (drug use), threat vs. humanitarian crisis (immigration)
- Metaphor choice: immigration as "wave," "flood," or "stream" carries different implications
- Who is in the story: whose perspective is centered, who is quoted, who is absent
- Emphasis framing: what comes first and what gets the most space
- Solution implied by frame: crime frame implies punishment; public health frame implies treatment

Historical example: Robert Entman's analysis of contrasting U.S. media coverage of KAL 007 (Soviet-shot Korean airliner, 1983) versus Iran Air 655 (U.S.-shot Iranian airliner, 1988) showed how identical events were framed as atrocity versus tragic mistake based on national alignment.

Contemporary example: Research on COVID-19 media coverage documented systematic framing differences: conservative-leaning outlets consistently used economic impact frames while public-health-oriented outlets used epidemiological risk frames — shaping audiences' policy preferences, not just their information, according to framing effect studies published 2020–2022.

Counter-inoculation message: Ask: how would a different value or metaphor frame this same set of facts, and what would change about my evaluation?

Chapter reference: Chapter 5


Priming

Definition: Priming activates certain concepts, associations, or standards of evaluation in memory just before a judgment is required, making those activated concepts disproportionately influential in that judgment. Political communicators use priming to control which values voters apply when evaluating candidates or policies.

Key markers:
- Crime stories placed just before candidate evaluations in news broadcasts
- National security topics raised immediately before questions about a candidate's fitness
- Images, words, or concepts introduced that are not themselves the subject of the message
- Subtle and often not consciously recognized by audiences
- Effects are generally short-lived but can be powerful at decision moments

Historical example: The 1988 "Willie Horton" advertisement, run by a political action committee supporting George H.W. Bush, primed racial fear and crime concerns immediately before asking audiences to evaluate Michael Dukakis's fitness for the presidency; the case is now a standard reference in political psychology.

Contemporary example: Research on televised news sequencing in the 2010s replicated Iyengar and Kinder's foundational findings: audiences consistently rated candidates by whatever standard the preceding news segment had made salient, demonstrating that priming effects are robust and reproducible.

Counter-inoculation message: Ask: what concept or emotion was I just exposed to, and is it actually relevant to the judgment I am now being asked to make?

Chapter reference: Chapter 5


Strategic Omission

Definition: Strategic omission shapes public understanding by systematically excluding information — context, counterevidence, alternative perspectives, or inconvenient facts — that would complicate or contradict the desired interpretation of events.

Key markers:
- Technically accurate statements that are misleading by incompleteness
- Missing base rates, time horizons, or comparison groups
- Absent perspectives from affected communities
- No follow-up coverage when initial framing is corrected
- Selective use of quotation that removes disqualifying context

Historical example: U.S. government and military briefings during the Vietnam War systematically omitted casualty figures, strategic failures, and South Vietnamese government corruption, creating a public picture of a war being won that was irreconcilable with conditions on the ground (documented in the Pentagon Papers, 1971).

Contemporary example: Platform transparency reports between 2018 and 2023 from major social media companies were documented by independent researchers (including the Stanford Internet Observatory) to omit key data about content-moderation effectiveness and algorithm amplification of misinformation.

Counter-inoculation message: Ask: what would I need to know to falsify this claim, and has the speaker given me that information?

Chapter reference: Chapter 15


Dog Whistle (Coded In-Group Language)

Definition: Dog whistles are coded phrases or symbols that carry one meaning for a general audience and a second, more specific meaning for an in-group, allowing propagandists to communicate exclusionary or inflammatory content while maintaining plausible deniability.

Key markers:
- Phrases that have an innocent surface meaning but a charged in-group meaning
- Historical resonance: terms that echo earlier openly racist, nativist, or extremist language
- In-group recognition: the phrase circulates heavily in extremist forums before entering mainstream discourse
- Speaker denies the in-group meaning if challenged, citing the surface meaning
- Effectiveness depends on only the target audience decoding the full message

Historical example: "States' rights" as a political slogan became a dog whistle in post-Civil Rights Act America, signaling opposition to racial integration to Southern white audiences while maintaining a federalist-principle surface meaning for general audiences.

Contemporary example: Research on online extremist language documented the use of seemingly neutral terms ("globalist," "replacement") that carried specific in-group meanings in white-nationalist communities while providing surface deniability, with terms transitioning from fringe forums to mainstream political discourse between 2015 and 2020.

Counter-inoculation message: Ask: where did this phrase originate, how does it function in extremist communities, and does the speaker's denial explain the enthusiasm with which the phrase is received by those communities?

Chapter reference: Chapter 17


Part Eight: Disinformation-Specific Techniques


Fake Experts (FLICC)

Definition: Fake experts manufacture the appearance of credentialed scientific or scholarly disagreement with consensus positions by promoting individuals with nominal credentials who do not represent the views of relevant expert communities — a core technique in the FLICC taxonomy of science denial.

Key markers:
- Expert cited holds credentials in an unrelated field
- Expert's views are not represented in the peer-reviewed literature in their claimed area
- Expert is associated with industry-funded think tanks or advocacy organizations
- Small number of contrarians cited to imply large-scale expert disagreement
- Phrase "scientists disagree" or "the science is unclear" used to describe settled questions

Historical example: The tobacco industry's "Project Whitecoat" (1980s–1990s) recruited physicians and scientists to speak publicly about uncertainty in the link between second-hand smoke and health, manufacturing the appearance of expert disagreement that did not exist in the literature.

Contemporary example: The Global Climate Coalition (1989–2002), funded by major oil and automobile companies, systematically placed contrarian scientists in media as representatives of expert opinion on climate change despite their isolation from mainstream climate science.

Counter-inoculation message: Ask whether the expert's views are represented in the peer-reviewed literature and whether they are speaking within or outside their credentialed domain.

Chapter reference: Chapter 24


Logical Fallacies (FLICC)

Definition: Propaganda and science denial frequently rely on formally invalid arguments — non sequiturs, false equivalences, ad hominem attacks, slippery slopes — that appear to reason toward a conclusion without actually doing so. Recognizing logical form is a primary defense.

Key markers:
- Ad hominem: attacking the person making the argument rather than the argument
- Slippery slope: claiming that one step inevitably leads to an extreme outcome without evidence
- False equivalence: treating unequal things as equivalent ("both sides")
- Gish gallop: overwhelming opponents with many weak arguments faster than they can be rebutted
- Motte and bailey: defending an extreme claim by retreating to a modest one when challenged

Historical example: McCarthyite rhetoric frequently used guilt-by-association arguments — formal logical fallacies — to imply communist connections from circumstantial personal associations.

Contemporary example: Climate-change denial debates in U.S. legislative hearings and media between 2010 and 2020 were extensively documented as employing the Gish gallop technique: presenting dozens of minor objections in rapid succession to create an impression of overwhelming counter-evidence that would take hours to rebut individually.

Counter-inoculation message: Evaluate the argument's structure independently of who is making it and how many objections they have raised — volume of objections is not equivalent to strength of objection.

Chapter reference: Chapter 24


Impossible Expectations (FLICC)

Definition: Impossible expectations demand a standard of evidence or certainty from a scientific consensus that is never required for other accepted knowledge, effectively making any level of evidence insufficient to overcome denial.

Key markers: - "Prove it 100%" demands for inherently probabilistic scientific claims - Isolated anomalies presented as invalidating entire research programs - Uncertainty in projections used to dismiss the underlying measured findings - Models criticized for not being perfect rather than evaluated for being useful - Double standard: the same rigor is never applied to the denial argument itself

Historical example: Tobacco industry legal strategy through the 1970s–1980s consistently demanded epidemiological certainty that was definitionally impossible — a direct causal link at the individual level — as the condition for accepting the population-level statistical evidence.

Contemporary example: Vaccine-safety skeptics between 2019 and 2023 consistently demanded "long-term studies" as a reason to reject existing safety data, while not applying the same long-term-study requirement to the alternative interventions or to the disease itself.

Counter-inoculation message: Ask: what standard of evidence would actually be sufficient to change this person's mind, and is that standard applied consistently?

Chapter reference: Chapter 24


Conspiracy Framing (FLICC)

Definition: Conspiracy framing attributes a scientific or political consensus not to evidence but to a coordinated secret effort by powerful actors to suppress truth, making the conspiracy claim unfalsifiable by reinterpreting all contrary evidence as further proof of the conspiracy.

Key markers: - "They don't want you to know" framing - All counter-evidence reinterpreted as cover-up - Implausibly large coordination required but never demonstrated - Whistleblower claims treated as more credible than institutional consensus - Pattern-matching substituted for mechanism: coincidences are treated as evidence

Historical example: Anti-vaccine conspiracy theories circulating after the 1998 Wakefield paper — itself fraudulent and retracted — evolved into a comprehensive conspiracy frame in which the pharmaceutical industry, governments, and medical journals were all participants in suppressing the "truth" about vaccine harms.

Contemporary example: QAnon (active 2017–present) represents a fully developed conspiracy framing system in which any event can be incorporated as confirming evidence and any disconfirmation is reinterpreted as part of the cover-up — a structurally unfalsifiable belief system studied extensively in radicalization research.

Counter-inoculation message: Ask: what evidence would falsify this conspiracy theory, and if none exists, what does that tell you about its epistemic status?

Chapter reference: Chapter 24


Firehose of Falsehood

Definition: The firehose of falsehood (identified by RAND researchers Christopher Paul and Miriam Matthews in their 2016 analysis of Russian propaganda) is a high-volume, multi-channel, multi-narrative disinformation strategy that does not seek to persuade so much as to overwhelm, confuse, and erode trust in all information sources.

Key markers:
- Extremely high volume: many different false or contradictory claims released simultaneously
- Speed prioritized over accuracy or internal consistency
- Multiple contradictory narratives deployed about the same event
- Goal is confusion and cynicism, not belief in any specific alternative narrative
- Denial of obvious facts: claiming things did not happen that clearly did

Historical example: Soviet "active measures" propaganda campaigns during the Cold War used multiple contradictory cover stories for operations like the assassination of dissidents, aiming to confuse rather than to construct a single believable alternative narrative.

Contemporary example: Russian state media's coverage of the MH17 airliner shoot-down (July 2014) included at least four mutually contradictory explanations released in rapid succession — a documented firehose-of-falsehood response that RAND researchers used as a primary case study.

Counter-inoculation message: When confronted with a cascade of contradictory claims about the same event, recognize that confusion itself may be the goal — anchor to the best-documented account and treat subsequent counter-narratives as requiring proportionally high evidence.

Chapter reference: Chapter 37


Liar's Dividend

Definition: The liar's dividend is the strategic benefit that propagandists gain from the existence of synthetic media (deepfakes, audio clones, AI-generated images) — not necessarily by producing such media, but by using the possibility of fakery to dismiss authentic evidence as potentially fabricated.

Key markers:
- Authentic documentation dismissed with "that could be AI-generated"
- Deepfake accusations deployed against real footage before technical verification
- The mere existence of synthetic media used to cast doubt on all visual and audio evidence
- Creates epistemic asymmetry: fabrication is easier to allege than to disprove
- Benefits the party whose actions are documented more than the party doing the documenting

Historical example: While the specific term is recent, the strategic logic is not: during the Watergate investigation, Nixon's team attempted to cast doubt on authentic recordings by raising questions about their completeness and authenticity — a proto-liar's-dividend strategy.

Contemporary example: When authentic video evidence of atrocities in the Russia-Ukraine conflict emerged between 2022 and 2025, Russian state media and affiliated accounts routinely claimed the footage was staged or AI-generated — a documented liar's dividend strategy analyzed by the EU DisinfoLab and Bellingcat.

Counter-inoculation message: Ask for forensic verification from independent technical analysts — AI detection tools, metadata analysis, and corroborating documentation — rather than accepting either the content or the dismissal at face value.

Chapter reference: Chapter 38


Astroturfing (Fake Grassroots)

Definition: Astroturfing creates the manufactured appearance of spontaneous popular support for a cause, policy, or candidate by disguising organized, often commercially or politically funded, activity as citizen-driven grassroots mobilization.

Key markers (see the detection sketch after this list):
- Organizations with citizen-sounding names funded by corporations or political operatives
- Public comment campaigns using form letters or bot-generated variations
- Protest events with manufactured materials lacking genuine community organization
- Social media campaigns where engagement metrics don't match genuine organic patterns
- Disclosure of funding reveals institutional origin of "grassroots" activity
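
One of these markers, the form-letter comment campaign, can be illustrated with a minimal sketch: flag submissions whose text is nearly identical after trivial normalization. The comment texts and the 0.8 similarity threshold below are invented for illustration; real investigations use far more scalable and careful methods.

```python
# Minimal sketch: surface near-duplicate "public comments" that suggest a form-letter
# campaign. Comment texts and the 0.8 threshold are invented for illustration.
from difflib import SequenceMatcher
from itertools import combinations

def normalize(text):
    # Lowercase and collapse whitespace so trivial edits don't hide duplication.
    return " ".join(text.lower().split())

def near_duplicate_pairs(comments, threshold=0.8):
    # Compare every pair of comments and flag pairs above the similarity threshold.
    # O(n^2) pairwise comparison is fine for a sketch; real pipelines use hashing.
    normalized = [normalize(c) for c in comments]
    flagged = []
    for i, j in combinations(range(len(normalized)), 2):
        ratio = SequenceMatcher(None, normalized[i], normalized[j]).ratio()
        if ratio >= threshold:
            flagged.append((i, j, round(ratio, 2)))
    return flagged

comments = [
    "I oppose this rule because it stifles innovation and hurts consumers.",
    "I oppose this rule because it stifles innovation and harms consumers.",
    "As a small business owner, this rule would make it impossible to compete.",
    "i OPPOSE this rule because it stifles innovation and hurts consumers!",
]
print(near_duplicate_pairs(comments))  # flags pairs among comments 0, 1, and 3; comment 2 stands alone
```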

Historical example: The tobacco industry created and funded front groups in the 1980s and 1990s, including the National Smokers Alliance (created for Philip Morris in 1993), that appeared to be independent citizen groups but existed to oppose smoking regulations.

Contemporary example: The FCC's 2017 net neutrality comment period was flooded with approximately 22 million comments, the majority of which were found by subsequent analysis (New York Attorney General investigation, completed 2021) to be fake, generated by bots or purchased comment-mill services — an astroturfing operation that used volume to simulate public opposition.

Counter-inoculation message: Ask who funds the organization, when it was founded, and whether its "members" have any independent existence outside its activities.

Chapter reference: Chapter 37


Sockpuppet Networks

Definition: Sockpuppet networks use multiple fake personas, typically operated by a small number of actual people or automated systems, to simulate the existence of a broad community of independent individuals who all hold the same views — creating manufactured social proof.

Key markers (see the detection sketch after this list):
- Accounts created in clusters at similar times with minimal history
- Coordinated posting: similar messages posted within narrow time windows
- Cross-amplification: accounts in the network consistently boost each other
- Thin account history: few personal details, interactions, or non-political content
- Language patterns consistent across accounts (shared templates, similar grammatical errors)
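
A minimal sketch of the coordinated-posting marker follows. The account names, timestamps, posts, and the 10-minute window are invented for illustration; genuine investigations combine this signal with creation-date clustering, shared infrastructure, and cross-amplification rather than relying on any single heuristic.

```python
# Minimal sketch: flag identical text posted by several distinct accounts within a
# narrow time window. All accounts, posts, and thresholds are invented for illustration.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [  # (account, timestamp, text)
    ("user_a812", datetime(2024, 3, 1, 14, 2), "The vote was rigged, everyone knows it"),
    ("freedom_fan_33", datetime(2024, 3, 1, 14, 5), "The vote was rigged, everyone knows it"),
    ("jane_doe", datetime(2024, 3, 2, 9, 10), "Looking forward to the weekend!"),
    ("patriot_9921", datetime(2024, 3, 1, 14, 9), "The vote was rigged, everyone knows it"),
]

def coordinated_clusters(posts, window=timedelta(minutes=10), min_accounts=3):
    # Group posts by normalized text, then flag any text posted by at least
    # `min_accounts` distinct accounts within one `window`-sized time span.
    by_text = defaultdict(list)
    for account, timestamp, text in posts:
        by_text[" ".join(text.lower().split())].append((timestamp, account))
    clusters = []
    for text, entries in by_text.items():
        entries.sort()
        accounts = sorted({account for _, account in entries})
        span = entries[-1][0] - entries[0][0]
        if len(accounts) >= min_accounts and span <= window:
            clusters.append((text, accounts, span))
    return clusters

for text, accounts, span in coordinated_clusters(posts):
    print(f"{len(accounts)} accounts posted identical text within {span}: {accounts}")
```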

Historical example: Soviet-era "active measures" used human-operated fake personas in Western European media and political organizations to simulate grassroots support for Soviet-aligned positions — an analog-era forerunner of digital sockpuppet networks.

Contemporary example: Meta's Coordinated Inauthentic Behavior reports (published quarterly from 2018 onward) have documented hundreds of sockpuppet networks operated by state and non-state actors across multiple countries, with the largest networks comprising thousands of fake accounts operating in coordination.

Counter-inoculation message: Before treating social-media apparent consensus as real consensus, look at the account history, creation date, and posting pattern of those expressing the view.

Chapter reference: Chapter 37


Coordinated Inauthentic Behavior

Definition: Coordinated inauthentic behavior (CIB) is the platform-policy term for organized efforts in which multiple actors work together to deceive about the origin, identity, or popularity of political content, typically at a scale that suggests state or institutional organization.

Key markers (see the detection sketch after this list):
- Networks of accounts operating in coordination that conceal their relationship
- Consistent message targeting across platforms simultaneously
- Activity patterns inconsistent with organic user behavior (posting at 3am local time, identical scheduling)
- Infrastructure shared across accounts (same IP addresses, phone numbers, account creation patterns)
- Content designed to appear locally authentic while originating externally
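
The shared-infrastructure marker can be sketched minimally as below: index accounts by registration attributes and report any value used by more than one account. The account records and the two attributes (registration IP and phone number) are invented for illustration; platform investigators rely on much richer attribute sets and corroborating behavioral evidence before attributing coordination.

```python
# Minimal sketch: report registration infrastructure shared by multiple accounts.
# All records are invented for illustration (documentation-range IPs, 555 numbers).
from collections import defaultdict

accounts = [
    {"handle": "local_voice_ohio", "ip": "203.0.113.7",  "phone": "+1-555-0101"},
    {"handle": "real_mom_akron",   "ip": "203.0.113.7",  "phone": "+1-555-0102"},
    {"handle": "buckeye_truth",    "ip": "198.51.100.4", "phone": "+1-555-0101"},
    {"handle": "cleveland_sports", "ip": "192.0.2.55",   "phone": "+1-555-0199"},
]

def shared_infrastructure(accounts, keys=("ip", "phone")):
    # Index each (attribute, value) pair to the handles that use it, then keep
    # only values shared by more than one account.
    index = defaultdict(set)
    for record in accounts:
        for key in keys:
            index[(key, record[key])].add(record["handle"])
    return {attr: sorted(handles) for attr, handles in index.items() if len(handles) > 1}

for (key, value), handles in shared_infrastructure(accounts).items():
    print(f"Shared {key} {value}: {handles}")
```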

Historical example: The Internet Research Agency's operations documented in the Mueller Report (2019) are the primary reference case: a St. Petersburg-based organization operating American-appearing accounts to influence U.S. political discourse, confirmed through infrastructure analysis.

Contemporary example: The EU's European External Action Service (EEAS), in its 2024 report on foreign information manipulation and interference (FIMI), documented over 750 incidents targeting European elections, referendums, and policy debates between 2022 and 2024.

Counter-inoculation message: When a message appears to be everywhere simultaneously, ask whether the apparent diversity of sources represents genuine independent agreement or coordinated amplification from a single origin.

Chapter reference: Chapter 37


Part Nine: Coercive Persuasion Techniques


Love Bombing

Definition: Love bombing is an intense, overwhelming deployment of positive attention, affirmation, and belonging directed at a new or potential member of a group — typically a high-control group, cult, or authoritarian movement — designed to create emotional dependency before critical evaluation is possible.

Key markers:
- Sudden, intensive affection and attention from multiple group members simultaneously
- Idealization of the new member: they are special, uniquely understood by this group
- Rapid social integration: new relationships constructed before existing ones can compete
- Positive affect precedes information: emotional bonding is established before full disclosure
- Withdrawal of positive attention as a control mechanism once dependency is established

Historical example: Unification Church ("Moonies") recruitment in the 1970s was documented in sociological studies (Lofland and Stark) as employing systematic love bombing: recruits were showered with attention, communal warmth, and flattery at initial contact, creating emotional attachment before ideological commitments were made explicit.

Contemporary example: Research on online radicalization pathways between 2015 and 2022 (including studies by the Global Network on Extremism and Technology) documented love bombing as a deliberate recruitment technique in online far-right communities: new members are welcomed enthusiastically and affirmed before more extreme content is gradually introduced.

Counter-inoculation message: When a group's affection and sense of specialness feel too intense, too immediate, or contingent on staying engaged with the group, treat this as a signal to slow down and consult people outside the group.

Chapter reference: Chapter 28


Loaded Language

Definition: Loaded language uses words that carry strong emotional and ideological connotations within a specific group, creating an in-group vocabulary that reinforces group identity, accelerates communication, and gradually replaces critical thinking with reflexive categorization.

Key markers:
- Group-specific vocabulary that outsiders find difficult to engage with on its own terms
- Words that foreclose discussion by pre-categorizing events as good or evil
- Jargon that makes questioning the underlying concept feel like a linguistic error
- Members are socialized to use the vocabulary automatically
- Alternative vocabularies are associated with the out-group or with betrayal

Historical example: George Orwell's essay "Politics and the English Language" (1946) and the fictional Newspeak of Nineteen Eighty-Four (1949) described how a controlled vocabulary narrows the range of thinkable thoughts.

Contemporary example: Research on high-control groups and online political communities between 2015 and 2022 (including work by cult exit counselors and deradicalization practitioners) consistently identified loaded language — group-specific vocabularies that members used to categorize all experience — as a primary mechanism of thought control across diverse ideological contexts.

Counter-inoculation message: Translate group-specific language into plain descriptive terms before evaluating the underlying claim — if the translation reveals that the claim is less compelling, the language was doing argumentative work that the evidence was not.

Chapter reference: Chapter 28


Thought-Stopping and Milieu Control

Definition: Thought-stopping techniques interrupt critical or dissenting thinking through ritualistic, repetitive, or behavioral mechanisms such as chanting, prayer, and activity saturation that prevent sustained critical evaluation of group claims. Milieu control, Robert Lifton's term for control of a member's information environment, is the structural counterpart: restricting what members may read, hear, and discuss so that doubt has little outside material to draw on.

Key markers:
- Prescribed responses to doubt: prayer, meditation, chanting, or consultation within the group
- Information restriction: outside sources are portrayed as contaminated, biased, or dangerous
- Saturation scheduling: members have little unstructured time for independent reflection
- Doubt is framed as weakness, faithlessness, or susceptibility to outside manipulation
- Community reinforcement: other members model non-questioning behavior

Historical example: Robert Lifton's foundational study "Thought Reform and the Psychology of Totalism" (1961) documented systematic milieu control in Chinese Communist re-education programs: physical isolation, information restriction, and collective confession sessions designed to prevent independent thinking.

Contemporary example: Survivors' accounts documented in academic and journalistic studies of high-control organizations between 2010 and 2024 — including NXIVM, the Jehovah's Witnesses' shunning policies, and several online radicalization communities — consistently describe information restriction and activity saturation as mechanisms that prevented critical evaluation until exit was attempted.

Counter-inoculation message: Any group that discourages you from evaluating its claims using outside information or independent reflection is prioritizing your dependence over your autonomy.

Chapter reference: Chapter 28


Deception Gradient

Definition: The deception gradient describes the practice of introducing recruits or audiences to an organization's ideology, demands, or extreme content gradually, ensuring that each incremental step feels small and reasonable compared to the ground already covered.

Key markers:
- Initial presentation of the group or ideology is moderate and publicly acceptable
- Core beliefs, demands, or extreme content disclosed only after initial commitment is established
- Foot-in-the-door escalation: each small commitment makes the next larger one easier
- Sunk cost framing: "you've come this far" used to maintain commitment as demands escalate
- Information about what the group ultimately requires is withheld from outsiders and new members

Historical example: The sequential disclosure structure of Scientology's "Operating Thetan" levels is a documented deception gradient: members are not informed of the content of upper levels until they have made substantial financial and social commitments to the organization.

Contemporary example: Online radicalization research (including work by Kathleen Blee, J.M. Berger, and the Global Network on Extremism and Technology, published 2015–2023) consistently documents a deception gradient structure in far-right recruitment: entry points are grievance-based and moderate; extreme ideology is introduced incrementally after community belonging is established.

Counter-inoculation message: Ask: what does this group or movement require of members at its most committed level, and would I have joined if I had known that at the outset?

Chapter reference: Chapter 28


Summary Reference Table

Technique | Category | Chapter | Primary Mechanism
Fear appeal | Emotional | 7 | Threat activation
Hope appeal | Emotional | 7 | Aspirational projection
Pride/nationalist appeal | Emotional | 7 | Identity mobilization
Disgust appeal | Emotional | 7 | Contamination avoidance
Moral outrage appeal | Emotional | 7 | Norm-violation response
False dichotomy | Simplification | 8 | Option elimination
Scapegoating | Simplification | 8 | Causal misdirection
The Big Lie | Simplification | 8 | Scale-induced credibility
Bandwagon / social proof | Social | 9 | Conformity pressure
Appeal to authority | Social | 9 | Credibility transfer
False expertise (FLICC) | Social / FLICC | 9, 24 | Manufactured consensus
Testimonial | Social | 9 | Affect transfer
Plain folks | Social | 9 | Authenticity performance
Repetition / illusory truth | Structural | 10 | Familiarity bias
Symbols and visual propaganda | Structural | 11 | Associative activation
Glittering generalities | Language | 12 | Semantic vacuity
Card stacking (FLICC) | Language / FLICC | 12, 24 | Evidence selection
Transfer | Language | 11 | Symbol association
Name-calling | Language | 12 | Label substitution
Confirmation bias exploitation | Cognitive | 3 | Motivated reasoning
Availability heuristic exploit | Cognitive | 3 | Frequency estimation
In-group / out-group | Cognitive | 4 | Identity-based processing
Anchoring | Cognitive | 3 | Reference-point bias
Identity-protective cognition | Cognitive | 4 | Identity threat response
Agenda-setting | Structural | 5 | Salience control
Framing | Structural | 5 | Interpretive schema
Priming | Structural | 5 | Concept activation
Strategic omission | Structural | 15 | Information suppression
Dog whistle | Language | 17 | Coded signaling
Logical fallacies (FLICC) | FLICC | 24 | Invalid inference
Impossible expectations (FLICC) | FLICC | 24 | Standard asymmetry
Conspiracy framing (FLICC) | FLICC | 24 | Unfalsifiability
Firehose of falsehood | Disinformation | 37 | Epistemic overwhelm
Liar's dividend | Disinformation | 38 | Synthetic media doubt
Astroturfing | Disinformation | 37 | Fake grassroots
Sockpuppet networks | Disinformation | 37 | Manufactured diversity
Coordinated inauthentic behavior | Disinformation | 37 | Coordinated deception
Love bombing | Coercive | 28 | Emotional dependency
Loaded language | Coercive | 28 | Vocabulary control
Thought-stopping / milieu control | Coercive | 28 | Reflection prevention
Deception gradient | Coercive | 28 | Incremental commitment

A Note on Using This Guide

No single technique operates in isolation. Effective propaganda typically combines multiple techniques simultaneously: a fear appeal delivered through symbolic visual imagery, repeated across platforms to exploit the illusory truth effect, supported by fake experts, and insulated from correction by conspiracy framing. The analytical task is not just to name individual techniques but to map the architecture of the entire persuasion system — how the techniques reinforce each other, which cognitive vulnerabilities they target in sequence, and what structural conditions (media ownership, platform algorithms, social isolation) make them effective.

The counter-inoculation messages provided here are starting points, not cures. Research on inoculation theory (see Chapter 33 and Appendix E) demonstrates that advance exposure to weakened forms of propaganda techniques, paired with explicit refutation, builds measurable resistance. The goal of this reference guide is not passive recognition but active pre-emption: knowing what to expect before you encounter it.

For primary sources on each technique, see Appendix E (Key Studies and Cases). For the theoretical framework underlying classification, see Chapters 1–6.


End of Appendix F