Chapter 38 Exercises: Deepfakes, Computational Propaganda, and Influence Operations


Comprehension Exercises

Exercise 38.1 — The Liar's Dividend

Chesney and Citron's "liar's dividend" concept describes a counter-intuitive way that deepfake technology threatens democratic accountability — not by producing convincing fakes, but by enabling the dismissal of authentic footage.

(a) Explain the liar's dividend in your own words. What makes it strategically valuable compared to simply producing a deepfake?

(b) Identify two specific examples from the chapter where the liar's dividend was invoked — one involving state-level documentation of misconduct, and one involving a domestic political case. For each, explain how the liar's dividend operated and what its political effect was.

(c) The liar's dividend works partly through an asymmetry between creating doubt and establishing truth. Explain that asymmetry. Why is raising a doubt cheaper, epistemically and argumentatively, than establishing a fact? What implications does this have for accountability journalism in the deepfake era?


Exercise 38.2 — Taxonomy of Synthetic Media Threats

The chapter identifies four categories of deepfake threat: identity theft deepfakes, non-consensual intimate imagery (NCII), political manipulation deepfakes, and fraud deepfakes.

(a) For each category, briefly describe the mechanism (what is produced, who is targeted, what effect is sought).

(b) The chapter argues that the greatest propaganda threat comes from combining deepfake capability with computational amplification, targeting, and liar's dividend exploitation — not from sophisticated individual deepfakes. Explain this argument. Do you find it persuasive? What evidence does the chapter offer in support?

(c) The chapter notes that NCII deepfakes are the most prevalent by volume but are not the primary propaganda focus. However, it argues they are relevant to the information environment in two ways. What are those two ways? Do you agree that NCII deepfakes should be considered within the propaganda framework, or is this category categorically distinct?


Exercise 38.3 — The Gabon Case as Analytical Test

The 2019 video of Gabonese President Ali Bongo is described as a case where "the resolution of the authenticity question remains genuinely contested."

(a) Explain why the chapter identifies this case as significant for propaganda analysis even though the authenticity question is unresolved.

(b) If the video was authentic (genuine footage of Ali Bongo recovering from a stroke), what does the episode demonstrate about the liar's dividend? If the video was in fact a deepfake, what does it demonstrate about the use of synthetic media by state actors?

(c) The chapter notes that "the deepfake possibility was weaponized" regardless of the actual facts. What does "weaponizing a possibility" mean in a propaganda context? How does this differ from deploying a claim that is known to be true or false?


Application Exercises

Exercise 38.4 — Comparing Influence Operation Models

The chapter contrasts the Russian Internet Research Agency's approach with the Chinese Spamouflage model.

(a) Create a comparison table with at least six criteria (e.g., primary method, scale, target audiences, content strategy, personnel intensity, primary objective). For each criterion, describe how the IRA and Spamouflage differ.

(b) The chapter notes that Spamouflage "shows limited evidence of successfully shifting public opinion" but continues to operate. What alternative success criteria might explain this continued operation? What does this suggest about the definition of a "successful" influence operation?

(c) Ingrid Larsen observes that Nordic researchers have found that takedown reports undercount actual activity, and that the operational intent may not be to escape detection but to "operate at a scale where what gets caught doesn't significantly reduce the overall effect." Explain this observation. What model of influence operation success does it imply? How does it differ from a model focused on specific narrative goals?
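
To make Larsen's scale model concrete, here is a back-of-envelope calculation in Python. Every figure is hypothetical, chosen for this exercise rather than taken from the chapter; the point is only the structure of the arithmetic.

```python
# Hypothetical illustration of a volume-based success model.
# All numbers are invented for this exercise.

total_posts = 1_000_000   # assumed monthly output of an automated operation
detection_rate = 0.30     # assumed share of posts caught and removed
cost_per_post = 0.002     # assumed marginal cost per post in USD

surviving_posts = total_posts * (1 - detection_rate)
cost_of_removed = total_posts * detection_rate * cost_per_post

print(f"Surviving posts:        {surviving_posts:,.0f}")   # 700,000
print(f"Spend lost to removal:  ${cost_of_removed:,.2f}")  # $600.00

# Even a 30% takedown rate leaves 70% of the volume in circulation, and
# the spend "wasted" on removed posts is trivial at automated marginal
# cost. If success is measured in surviving volume rather than in any
# specific narrative landing, detection barely reduces the effect.
```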


Exercise 38.5 — Platform Transparency Reports: Reading the Evidence

Meta publishes Coordinated Inauthentic Behavior (CIB) transparency reports; Twitter and YouTube have published comparable disclosures of influence operations on their platforms.

(a) Locate Meta's most recent available CIB transparency report (the Graphika and Stanford Internet Observatory websites maintain archives). Identify one specific influence operation described in the report. For that operation, describe: (i) the attributed origin; (ii) the target audience; (iii) the primary platforms used; (iv) the content themes; (v) the approximate scale (number of accounts/pages removed).

(b) What does the report tell you about how this operation was detected? What signals or patterns allowed the platform to identify coordinated behavior? (A simplified sketch of one such signal follows part (c).)

(c) The chapter identifies both what CIB transparency reports reveal and what they conceal. For the operation you analyzed, identify at least two things the report cannot tell you: information that would be necessary for a complete analysis of the operation's intent and effect, but that the report does not provide.
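
For part (b), it helps to see what one coordination signal looks like computationally. The sketch below is a toy heuristic, not any platform's actual pipeline: it flags identical text posted by several distinct accounts within a short time window, one of the simpler behavioral patterns transparency reports describe. Real detection combines many signals (shared infrastructure, account-creation clustering, behavioral fingerprints); the data and thresholds here are invented for illustration.

```python
from collections import defaultdict

# Invented sample data: (account_id, unix_timestamp, post_text)
posts = [
    ("acct_01", 1000, "Candidate X rigged the vote!"),
    ("acct_02", 1004, "Candidate X rigged the vote!"),
    ("acct_03", 1009, "Candidate X rigged the vote!"),
    ("acct_04", 5000, "Nice weather in Oslo today."),
]

WINDOW_SECONDS = 60   # assumed: max time spread for a suspicious cluster
MIN_ACCOUNTS = 3      # assumed: distinct accounts needed to raise a flag

# Group posts by exact text (real systems use fuzzy/near-duplicate hashing).
by_text = defaultdict(list)
for account, ts, text in posts:
    by_text[text].append((ts, account))

for text, entries in by_text.items():
    entries.sort()
    timestamps = [ts for ts, _ in entries]
    accounts = {acct for _, acct in entries}
    spread = timestamps[-1] - timestamps[0]
    if len(accounts) >= MIN_ACCOUNTS and spread <= WINDOW_SECONDS:
        print(f"FLAG: {len(accounts)} accounts posted {text!r} within {spread}s")
```

Running this flags the first cluster (three accounts, nine seconds apart) while ignoring the organic post, which is the basic logic behind "coordinated" in CIB: the giveaway is the behavior across accounts, not the content of any single post.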


Exercise 38.6 — Detection and Authentication Comparison

The chapter describes two distinct approaches to the deepfake problem: detection (identifying fakes) and authentication (verifying authentic content).

(a) For each approach, identify: (i) the basic mechanism; (ii) two specific methods or tools; (iii) a key limitation.

(b) The chapter argues that detection is "a losing race." Explain the structural reason for this claim. Is this a claim about current technology or about the fundamental dynamics of the generator-detector relationship?

(c) The authentication approach (C2PA, watermarking) is limited by adoption. What would be required for widespread adoption, and which actors — platforms, hardware manufacturers, governments, users — would need to cooperate for authentication infrastructure to become effective? What structural barriers might prevent that cooperation?
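
A toy model of the authentication approach clarifies parts (a) and (c). The sketch below uses the third-party cryptography package and a bare Ed25519 signature as a simplified stand-in for C2PA; it is not the actual C2PA manifest format. It shows the sign-then-verify pattern and the approach's structural limitation: unsigned content yields no verdict at all, which is why the scheme's value depends on adoption breadth.

```python
# Requires: pip install cryptography
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# At capture time: the device or publishing tool signs a digest of the media.
signing_key = Ed25519PrivateKey.generate()
media = b"...raw video bytes..."          # placeholder for real content
digest = hashlib.sha256(media).digest()
signature = signing_key.sign(digest)

# At verification time: anyone with the public key checks the claim.
public_key = signing_key.public_key()

def verify(content: bytes, sig: bytes) -> bool:
    """Return True only if content matches the signed digest."""
    try:
        public_key.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(verify(media, signature))                 # True: provenance intact
print(verify(media + b"tampered", signature))   # False: content altered

# Note what verify() cannot say: content that simply arrives without any
# signature produces no verdict either way. Authentication proves the
# positive case but cannot brand unsigned content as fake, so its value
# grows only as cameras, editing tools, and platforms all participate.
```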


Exercise 38.7 — The Slovak Election Deepfake: Timing as Weapon

The 2023 Slovak election deepfake was distributed two days before the election, during the electoral blackout period.

(a) Explain why the electoral blackout period was operationally significant for the influence operation. What does this specific timing reveal about the operational sophistication of the deployers?

(b) Fact-checking organizations identified the deepfake within hours, yet the chapter indicates that identification did not neutralize its effect. Why not? What does this tell us about the relationship between rapid debunking and actual information environment outcomes?

(c) Design a specific intervention that could reduce the effectiveness of the "deploy during blackout period" strategy. Your intervention should be realistic — it should not require resources or infrastructure that democratic governments currently lack. Consider regulatory, technical, and educational approaches.


Analysis and Synthesis

Exercise 38.8 — Historical Parallel: The Nazi Propaganda Ministry's International Operations

The chapter draws a parallel between Nazi Germany's international propaganda operations and contemporary state-sponsored influence operations.

(a) Identify three specific operational similarities between the Nazi Propaganda Ministry's international operations and the documented IRA or Spamouflage operations. For each similarity, explain what it reveals about the fundamental logic of state-sponsored information operations.

(b) What are the most significant differences between the 1930s–1940s operations and contemporary digital influence operations? Focus on how differences in distribution infrastructure, speed, cost, and scalability change the nature of the threat.

(c) The chapter describes Nazi film manipulation as "distinct from the more obviously theatrical internal propaganda, precisely because they were designed for audiences who needed to believe they were seeing unmanipulated documentary evidence." Explain this distinction and apply it to the contemporary context. How does the same logic shape the design of contemporary political deepfakes?


Exercise 38.9 — The Big Tobacco Parallel

The chapter identifies Big Tobacco's manufacture of apparent scientific controversy as an analog to computational consensus manufacturing.

(a) Describe the Big Tobacco strategy for creating doubt about smoking's health effects. What were the specific mechanisms used to manufacture the impression of scientific uncertainty?

(b) The chapter argues that contemporary computational influence operations can deploy the same "flooding the information environment with contradictory signals" strategy at scales and speeds that the tobacco industry's research-funding model could not match. What specific computational capabilities make this acceleration possible? What implications does scale have for the effectiveness and durability of doubt creation?

(c) The Big Tobacco case eventually reached a legal and regulatory resolution. What factors made that resolution possible? Are those factors present in the contemporary computational influence operation context? Why or why not?


Exercise 38.10 — The Debate Framework Applied

The chapter presents two positions on whether deepfakes of public figures should be prohibited.

(a) Write a 400-word summary of the strongest version of each position. For each position, identify the two arguments you consider most compelling and the two you consider weakest.

(b) The chapter's Position B argues that "the responses that might work are technical (C2PA authentication infrastructure) and educational (prebunking/inoculation campaigns)." Assess this claim. Are these alternatives genuinely sufficient? What would they fail to address that prohibition might address?

(c) Position A argues that the state actor enforcement gap "is irrelevant to the domestic case." Evaluate this argument. Is a regulatory framework that deters domestic bad actors while failing to address state actors a meaningful success? What proportion of the total deepfake propaganda threat comes from domestic vs. state actors?


Extended Research Exercise

Exercise 38.11 — Influence Operation Case Documentation

Select one documented influence operation not discussed in this chapter from the following sources: Stanford Internet Observatory research archive; Graphika published reports; EU DisinfoLab investigations; or Atlantic Council Digital Forensic Research Lab (DFRLab) reports.

Write a 1,000-word analysis of your selected case that includes:

(a) Operational description: Who conducted the operation (attributed to whom, with what confidence), when, using what platforms and account infrastructure, producing what types of content.

(b) Objectives analysis: What were the operation's apparent objectives? Apply the framework from Section 38.4 — was the primary goal persuasion, confusion, polarization, erosion of trust, or information environment preparation?

(c) Propaganda technique identification: Which techniques from earlier chapters in this textbook did the operation employ? (Consider emotional appeals, false authority, repetition, tribal signals, fear appeals — the toolkit from Parts Two and Three.)

(d) Detection and response: How was the operation detected? What was the platform or researcher response? How effective was the response in limiting the operation's effect?

(e) Implications: What does this case add to the chapter's analysis? Does it confirm or complicate the chapter's frameworks?


Chapter 38 of Propaganda, Power, and Persuasion: A Critical Study of Influence, Disinformation, and Resistance