Chapter 38 Quiz: Deepfakes, Computational Propaganda, and Influence Operations
Part I: Multiple Choice
Select the best answer for each question.
1. A Generative Adversarial Network (GAN) produces synthetic images through which process?
(A) A single neural network trained on labeled examples of real vs. fake images
(B) Two neural networks — a generator and a discriminator — competing against each other
(C) A statistical model that interpolates between existing photographs in a training dataset
(D) A rule-based system that applies facial manipulation algorithms to existing video frames
2. Chesney and Citron's "liar's dividend" refers to:
(A) The financial profits that deepfake producers earn by selling synthetic media to influence operations
(B) The advantage state actors gain from producing large volumes of deepfakes that overwhelm fact-checkers
(C) The strategic benefit of being able to dismiss authentic documentary evidence as a possible deepfake
(D) The phenomenon in which audiences who know about deepfakes become more credulous toward written disinformation
3. Which of the following best characterizes the distinction between the Russian Internet Research Agency's approach and the Chinese Spamouflage model?
(A) The IRA focused on visual deepfake content; Spamouflage focused on text-based disinformation
(B) The IRA built elaborate fake personas with organic audience relationships; Spamouflage prioritized automated high-volume distribution
(C) The IRA targeted foreign governments; Spamouflage targeted domestic Chinese audiences
(D) The IRA operated on social media platforms; Spamouflage operated exclusively through state media outlets
4. The term "Coordinated Inauthentic Behavior" (CIB) was developed by which organization as a framework for detecting and removing influence operation networks?
(A) The Stanford Internet Observatory
(B) The U.S. Senate Intelligence Committee
(C) Meta (Facebook)
(D) The European Union's External Action Service
5. In the 2023 Slovak election deepfake incident, why was the deployment timing operationally significant?
(A) The deepfake was released during a period when domestic fact-checking organizations were not operating
(B) The electoral blackout period prevented the targeted political party from advertising a response
(C) Slovak law prohibits removing content from social media during the 48 hours before an election
(D) The deepfake was released simultaneously with authentic damaging revelations, creating maximum confusion
6. The C2PA (Coalition for Content Provenance and Authenticity) standard takes which approach to the deepfake problem?
(A) Training detection algorithms on large datasets of known deepfakes to identify new ones
(B) Requiring AI companies to embed watermarks in all synthetic content they produce
(C) Embedding cryptographic authentication signatures in media at the moment of capture to verify authentic provenance
(D) Establishing an international registry of deepfake producers that platforms must consult before hosting video
7. According to the chapter, research by Goldstein and colleagues found that at the time of writing, most documented AI-enabled influence operations were primarily using AI for:
(A) Creating sophisticated individual deepfakes of political figures
(B) Quantity amplification — producing larger volumes of content at lower cost
(C) Voice cloning to impersonate journalists and public officials
(D) Training audience-targeting models to identify psychologically vulnerable individuals
8. China's domestic deepfake regulations (2023) do which of the following?
(A) Prohibit all AI-generated content that does not bear a visible watermark
(B) Require labeling of AI-generated content and restrict the creation of "fake news" using deepfake technology
(C) Establish criminal penalties for any organization that produces deepfakes of government officials
(D) Require all social media platforms operating in China to submit content to a government review board before publication
9. The chapter describes the "Secondary Infektion" operation as characterized by:
(A) A network of fake personas posing as American political activists on mainstream social media
(B) Placement of forged and authentic documents in smaller forums and comment sections, then circulation of screenshots as evidence
(C) High-volume automated posting across TikTok, YouTube, and Facebook by Chinese state-linked accounts
(D) Deepfake audio recordings of NATO officials distributed through encrypted messaging apps
10. Which of the following deepfake cases involved a video of a sitting head of state that was used as partial justification for a military coup attempt?
(A) The Zelensky surrender deepfake (Ukraine, 2022)
(B) The Slovak election deepfake (Slovakia, 2023)
(C) The Ali Bongo video (Gabon, 2019)
(D) The Jordan Peterson deepfake (United Kingdom, 2024)
11. Why does the chapter describe deepfake detection as "a losing race"?
(A) Detecting deepfakes requires computational resources that only governments and large platforms can afford
(B) Detection models are always calibrated against existing generation capabilities, and published detection research enables generation models to eliminate the vulnerabilities identified
(C) Detection algorithms have been shown to misidentify authentic content as deepfake at rates too high to be practically useful
(D) Deep learning models cannot be trained to distinguish synthetic from authentic images because the features that differentiate them are too subtle to operationalize
12. The chapter's analysis of the Big Tobacco parallel focuses on which strategic objective?
(A) Persuading the public that smoking is beneficial to health
(B) Creating the impression of scientific uncertainty to prevent the acceptance of a consensus finding
(C) Manufacturing a false documentary record through forged research publications
(D) Amplifying fringe scientific voices through paid advertising to create the appearance of broad disagreement
Part II: Short Answer
Answer each question in 2–4 sentences.
13. What is "face-reenactment synthesis" and how does it differ from face-swapping deepfakes? Which technique poses a greater risk for political influence operations, and why?
14. The chapter identifies four objectives of state-sponsored influence operations: confusion, polarization, erosion of trust, and information environment preparation. Briefly explain one of these objectives and describe one documented operational technique from the chapter that serves it.
15. The chapter notes that Spamouflage specifically targets Chinese diaspora communities in Western countries. Why might diaspora communities be a strategically valuable target for influence operations? What characteristics of diaspora information environments might make them particularly susceptible?
16. Explain why the existence of deepfakes targeting NCII (non-consensual intimate imagery) is considered relevant to the political propaganda analysis in this chapter, even though NCII is not itself a propaganda category.
17. Sophia Marin's experience — watching a deepfake three times, knowing something was wrong, and still feeling the emotional response — is described as evidence of "how human perceptual systems work." What does this mean? Why is this experience pedagogically significant for propaganda analysis?
18. What is the definitional problem in deepfake legislation? Why does this problem make narrowly targeted prohibition difficult in practice?
Part III: Analysis
Answer in 150–250 words.
19. Tariq Hassan presents the Spamouflage finding as evidence that "the threat is already operational," while Prof. Webb's framework treats Spamouflage's limited persuasion success as relevant to understanding its design. Using both perspectives, explain what Spamouflage tells us about the objectives of Chinese computational influence operations. What would we need to know to determine whether the operation has succeeded by its own criteria?
20. The debate framework presents two positions on prohibiting political deepfakes. Position B argues that technical and educational responses are more effective than legal prohibition. Evaluate this claim using the chapter's evidence. What would Position A's proponents say about the limits of C2PA and prebunking as substitutes for legal deterrence?
Part IV: Conceptual Integration
21. The chapter connects three historical anchor examples: Nazi Germany's film manipulation for international audiences; the IRA 2016 operation as precursor to computational influence operations; and Big Tobacco's manufactured scientific consensus. Write a paragraph identifying one structural principle that all three examples share, and explain how that principle applies to the contemporary deepfake and computational influence operation landscape.
Answer Key available in Appendix B.
Chapter 38 of Propaganda, Power, and Persuasion: A Critical Study of Influence, Disinformation, and Resistance