Chapter 29 Quiz: Counter-Propaganda, Strategic Communication, and Prebunking
Instructions: Answer all ten questions. Questions 1-7 are multiple choice; questions 8-10 are short answer (3-5 sentences each). The quiz covers core concepts from Chapter 29 and should take approximately 25-35 minutes.
Multiple Choice
Question 1
The "correction paradox" refers to which of the following phenomena?
A. The finding that corrections are always effective at changing beliefs, making fact-checking the most important counter-propaganda tool.
B. The risk that repeating a false claim in order to correct it may strengthen the false claim's perceived truth through the illusory truth effect.
C. The paradox that corrections are more effective when issued by partisan sources than by neutral ones.
D. The finding that audiences who seek out fact-checks are paradoxically more resistant to correction than those who encounter fact-checks passively.
Correct answer: B
Rationale: The correction paradox is the risk that restating a false claim, which any correction must do, increases the claim's processing fluency, and fluency in turn raises perceived truth (the illusory truth effect). Each repetition of the false claim, even in a corrective context, increases familiarity and therefore perceived truth. The truth sandwich is a structural response to this paradox.
Question 2
The "truth sandwich" (Lakoff; Jamieson) recommends which structural sequence for counter-messaging?
A. False claim → correction → truth → sources → conclusion
B. Emotional appeal → truth → false claim → return to truth
C. Truth (lead) → brief acknowledgment of false claim → return to truth (close)
D. Sources → truth → false claim → correction → emotional appeal
Correct answer: C
Rationale: The truth sandwich structure specifically recommends leading with the accurate claim (not the false one), briefly and non-prominently acknowledging that the false claim exists (so the audience knows what is being corrected), and returning to the accurate claim as the closing element. This structure minimizes the structural prominence of the false claim and maximizes the fluency of the accurate claim.
Question 3
William McGuire's inoculation theory (1964) is most directly analogous to which of the following?
A. Chemotherapy: using controlled doses of a harmful substance to destroy a false belief
B. Medical inoculation: pre-emptively exposing the system to a weakened form of an attack, building resistance before full-strength exposure
C. Surgery: removing a false belief that has already been fully integrated
D. Quarantine: preventing the false belief from reaching the audience at all
Correct answer: B
Rationale: McGuire explicitly designed inoculation theory as an analogy to medical inoculation. Just as exposing the immune system to a weakened pathogen builds resistance to the full-strength pathogen, exposing people to weakened persuasive attacks (accompanied by refutation) builds resistance to more powerful future persuasion attempts. The key mechanism is developing counterargument capacity before it is needed.
Question 4
Sander van der Linden's research distinguishes between "claim-based inoculation" and "technique-based inoculation." Which of the following accurately describes the key advantage of technique-based inoculation?
A. It is cheaper to produce, requiring less research than claim-specific inoculation.
B. It is more emotionally engaging for audiences, producing stronger immediate attitude change.
C. It builds broad-spectrum resistance to an entire class of disinformation that uses a given technique, rather than resistance only to the specific claim addressed.
D. It is more legally defensible for governmental communicators than claim-specific correction.
Correct answer: C
Rationale: The central advantage of technique-based inoculation is breadth. By explaining the mechanism (how, for example, "fake experts" work as a persuasion technique), rather than correcting a specific claim, technique-based inoculation builds resistance to all disinformation that employs that technique, including novel future instances that the inoculated person has never encountered. This is the equivalent of teaching someone what poisonous mushrooms look like in general, rather than identifying one specific poisonous mushroom.
Question 5
The FLICC framework identifies five categories of disinformation technique. Which of the following correctly lists all five?
A. Fear, Lies, Ideology, Censorship, Conspiracy
B. Fake experts, Logical fallacies, Impossible expectations, Cherry picking, Conspiracy theories
C. False framing, Loaded language, Innuendo, Cherry picking, Circular reasoning
D. Fabrication, Lobbying, Institutional manipulation, Credentialing, Conspiracy
Correct answer: B
Rationale: FLICC stands for Fake experts (manufactured authority), Logical fallacies (formally invalid reasoning), Impossible expectations (demanding an unachievable standard of certainty), Cherry picking (selective evidence presentation), and Conspiracy theories (explaining events or inconvenient evidence as the product of secret, coordinated malevolent plots). The framework was developed by John Cook, initially in the context of climate disinformation, and subsequently generalized across disinformation domains.
Question 6
Finland consistently ranks first or second in Europe on the Media Literacy Index. Which of the following most accurately describes the feature of Finland's national model that researchers identify as most important to its effectiveness?
A. Finland has the most robust independent fact-checking infrastructure in Europe, with multiple organizations funded by public broadcasting.
B. Finland's media literacy education is a cross-curricular competency integrated into the core curriculum from primary through secondary school, taught by trained teachers, and explicitly addresses propaganda and disinformation techniques.
C. Finland has legal requirements mandating that all advertising disclose its political and commercial sponsors, producing a more transparent information environment.
D. Finnish public broadcasting operates without commercial advertising, producing a media environment less dependent on engagement-maximizing content.
Correct answer: B
Rationale: While Finland does have strong public broadcasting and fact-checking institutions, the feature most directly responsible for its media literacy rankings is the comprehensive integration of media literacy education into the core curriculum — not as an elective, not as a separate subject, but as a cross-curricular competency at every level from primary school through the end of secondary school. Crucially, the curriculum includes explicit instruction in propaganda and disinformation techniques, which is a form of population-level technique-based inoculation.
Question 7
The Roozenbeek, Traberg, Basol, and van der Linden (2022) study published in Science Advances is most notable for which of the following findings?
A. The study demonstrated that fact-checking by established news organizations reduces sharing of false claims by approximately 30% across social media platforms.
B. The study demonstrated that inoculation games like Bad News and Go Viral! produce significantly larger effect sizes than classroom-based media literacy education.
C. The study demonstrated that short inoculation videos distributed through social media platforms (including as pre-roll advertising) produced measurable resistance to disinformation, establishing that inoculation can be delivered at scale through the same platforms that deliver disinformation.
D. The study demonstrated that inoculation effects persist for an average of 18 months without booster exposure, establishing that a single population-level inoculation campaign is sufficient for long-term resistance.
Correct answer: C
Rationale: The key finding of the 2022 Science Advances paper is scalability: inoculation delivered as pre-roll advertising (content users encounter passively, not content they actively seek) produced measurable resistance to disinformation. This establishes that inoculation does not require special conditions (classrooms, motivated participants, extended engagement) — it can be embedded in the same distribution infrastructure that delivers disinformation campaigns. The study did not claim 18-month persistence; inoculation effects decay and require booster exposure.
Short Answer
Question 8
NATO StratCom (Strategic Communications Centre of Excellence) and the EU External Action Service's East StratCom Task Force (EUvsDisinfo) are two institutional counter-disinformation actors. Briefly describe the function of each. What can governmental counter-propaganda legitimately do, and what does the evidence suggest it cannot do effectively?
Expected response elements:
NATO StratCom CoE (Riga): Produces research, analysis, and practitioner guidance on strategic communication and disinformation, including training on state-sponsored influence operations, troll farms, and coordinated inauthentic behavior. It is a research and training body, not part of the NATO command structure, and issues no orders.
EUvsDisinfo: Maintains a publicly accessible database of documented cases of disinformation originating from Kremlin-connected sources. Provides source analysis, factual correction, and context for each case. Over 15,000 documented cases as of 2024.
What governmental counter-propaganda can legitimately do: attribution (identifying state-sponsored campaigns), monitoring and documentation, calling out specific disinformation operations, funding media literacy education.
What it cannot do effectively: produce counter-messaging that does not itself look like propaganda (credibility depends on audience trust in the government, which is itself contested); run operations with concealed sources (which violate the transparency criterion and backfire when discovered); reach and emotionally engage the most vulnerable audiences (institutional tone primarily reaches already-skeptical journalists and researchers).
Question 9
Chapter 29 identifies four systemic limits of counter-propaganda. Name all four, and explain one of them in a single clear paragraph. Your explanation should identify why the limit exists and what it implies for counter-propaganda strategy.
Expected response elements:
The four limits: (1) identity-protective cognition; (2) the algorithmic problem; (3) the reach problem; (4) the timing problem.
Acceptable explanations of each:
Identity-protective cognition (Kahan): When false claims are embedded in a community's shared identity, correction carries a social cost — updating the belief risks social penalty within the group. Corrections that address only the epistemic dimension leave the social dimension untouched. Implies: counter-propaganda for identity-embedded beliefs must address the social environment, not just the claim; peer-to-peer inoculation by trusted community members is more effective than institutional correction.
Algorithmic problem: Algorithms that optimize for engagement systematically advantage emotionally engaging content; false claims designed for outrage or fear are amplified more than emotionally moderated corrections. Implies: counter-propaganda must compete algorithmically, which requires emotional engagement (SUCCES) and distribution resources; ultimately requires platform-level structural intervention.
Reach problem: Fact-checks and corrections consistently reach far fewer people than the original false claim. Implies: technique-level counter-propaganda cannot fully repair the epistemic damage of high-virality disinformation; must be combined with structural platform interventions.
Timing problem: Corrections are most effective when they precede the emotional response. After emotional absorption, correction must also address emotional state. Implies: prebunking has structural advantage over debunking; reactive institutional counter-propaganda faces systematic timing disadvantage against novel disinformation deployed at speed.
Question 10
Chapter 29 presents three positions on the ethical question of whether counter-propaganda can use emotional appeals and narrative techniques. Briefly describe Position C (the Accuracy Constraint) and explain what it permits and what it prohibits. Do you find Position C persuasive? Why or why not?
Expected response elements:
Position C (Accuracy Constraint): The ethical boundary for counter-propaganda is accuracy — but accuracy understood to include completeness, context, and the absence of strategic omission. The test is whether the communication leaves a fully informed audience with an accurate picture of the world.
What Position C permits: Emotional appeals when grounded in accurate information that genuinely warrants the emotion; narrative structure when it does not omit material information; vivid examples when they are representative, not cherry-picked; in-group appeals when they do not require demonizing out-groups.
What Position C prohibits: Manufactured authority (fake experts cited even for accurate claims); strategic omission of material information; emotional escalation disproportionate to the evidence; framing that suppresses complexity in ways that mislead.
Student evaluation: Responses should engage seriously with the competing positions. A student who accepts Position C should articulate why they find it more persuasive than Position A (effectiveness-first) or Position B (technique-purity). A student who is skeptical of Position C should identify specific cases where the "accuracy" standard is ambiguous (e.g., what counts as "material" information, who decides what a "fully informed" audience would conclude).
Scoring Guide
| Section | Points |
|---|---|
| Questions 1-7 (multiple choice) | 2 points each = 14 points |
| Question 8 (short answer) | 8 points |
| Question 9 (short answer) | 10 points (2 for naming all four limits, 8 for quality of explanation) |
| Question 10 (short answer) | 8 points |
| Total | 40 points |
Note for instructors: Question 10 is designed to be open-ended; credit should be given for quality of reasoning and engagement with the text, not for arriving at a predetermined conclusion. Students who reject Position C and argue for Position A or B should receive full credit if their argument is well-reasoned and engages substantively with the competing positions.