Further Reading: Chapter 38
Deepfakes, Computational Propaganda, and Influence Operations
Foundational Scholarship
Chesney, Robert, and Danielle Citron. "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." California Law Review 107, no. 6 (2019): 1753–1820.
The paper that introduced the "liar's dividend" concept to the academic literature. Chesney and Citron's analysis remains the essential starting point for understanding deepfakes not merely as a production threat but as a threat to the evidentiary foundations of democratic accountability. The legal framework analysis, though predating subsequent regulatory developments, provides the doctrinal scaffolding for understanding the challenges of deepfake legislation. Required reading.
Goldstein, Josh A., Girish Sastry, Micah Musser, Renée DiResta, Matthew Gentzel, and Katerina Sedova. "Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations." arXiv preprint arXiv:2301.04246 (2023).
One of the most analytically rigorous assessments of how large language models and generative AI intersect with documented influence operation methodology. The distinction between quantity amplification, quality enhancement, and novel capability — developed in this paper — provides the framework for the chapter's analysis of where AI-enabled operations actually stand in their development. The finding that current operations are primarily using AI for volume rather than sophisticated deepfake production has significant implications for resource allocation in the defensive response.
Wardle, Claire, and Hossein Derakhshan. "Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making." Council of Europe Report (2017).
The foundational framework for distinguishing misinformation, disinformation, and malinformation — a taxonomy essential for placing deepfakes within the broader information disorder landscape. Wardle and Derakhshan's work provides the conceptual vocabulary that subsequent researchers have used to classify synthetic media threats. Their framework's insistence on considering both the content and the intent of false information is particularly valuable for deepfake analysis, where the same synthetic media can serve different purposes depending on who deploys it and how.
Influence Operations Research
Graphika and Stanford Internet Observatory. "Unheard Voice: Evaluating Five Years of Pro-Western Covert Influence Operations." Joint Report (2022).
Provides analytical context for Chinese and Russian operations by examining Western-linked covert influence operations, a comparison essential for avoiding asymmetric analytical treatment of the topic. The report's documentation of pro-Western operations using coordinated inauthentic behavior (CIB) signatures similar to those found in Chinese and Russian operations complicates any analysis that frames influence operations as exclusively adversarial.
Graphika. "Spamouflage Dragon: A Deep Dive into the World's Most Prolific Influence Operation." (2022).
The most comprehensive single-source analysis of the Spamouflage network available through the end of 2022. Covers the network's operational evolution from 2019 to 2022, the expansion of cross-platform scope, the content strategy and thematic evolution, and the detection methodologies that identified successive clusters. Essential primary source for Case Study 38-2.
Nimmo, Ben, et al. "Exposing Secondary Infektion." Graphika (2020).
The comprehensive analysis of the Russian Secondary Infektion operation, covering its seven-year operational history across more than 300 platforms. The case is instructive both for what it reveals about Russian operation tradecraft (placement in small forums and comment sections, use of authentic documents alongside fabricated ones) and for what it reveals about the limits of platform-focused research: an operation that explicitly avoided major platforms was visible only to researchers who monitored the full landscape of smaller forums and sites.
DiResta, Renée, et al. "The Tactics and Tropes of the Internet Research Agency." New Knowledge (2018).
The New Knowledge report, commissioned alongside the Senate Intelligence Committee's investigation, provides the most detailed public analysis of the IRA's 2016 operation: the target audience segmentation, the content themes developed for each audience, the platform-specific strategies, and the internal metrics the operation used to measure success. It is particularly valuable when read alongside the IRA operational documents recovered through the Mueller investigation, which allow the operation to be analyzed from the inside.
Stanford Internet Observatory. Influence Operations Reports Archive. (Various reports, 2019–2024).
The Stanford Internet Observatory's archive of influence operation analyses provides the broadest available empirical record of documented state-linked CIB, covering operations attributed to Russia, China, Iran, Saudi Arabia, and other actors. The archive allows comparative analysis across operations, which is essential for identifying which features are specific to particular actors and which are structural features of state-sponsored influence operations generally. Available at stacks.stanford.edu/node/archives.
Deepfake Technology and Detection
Farid, Hany. "Creating, Using, Misusing, and Detecting Deep Fakes." Journal of Online Trust and Safety 1, no. 4 (2022).
Farid is the leading researcher on deepfake forensics, and this review article provides the clearest available account of where detection methodology stands and why detection is structurally limited. The paper's framework for distinguishing detection from authentication is central to the chapter's analysis of the response landscape. Farid's advocacy for C2PA and content authentication approaches provides the technical grounding for the authentication discussion in Section 38.9.
Tolosana, Ruben, Ruben Vera-Rodriguez, Julian Fierrez, Aythami Morales, and Javier Ortega-Garcia. "Deepfakes and Beyond: A Survey of Face Manipulation and Fake Detection." Information Fusion 64 (2020): 131–148.
A technically accessible survey of the detection research landscape, covering the principal forensic methods (artifact analysis, frequency domain analysis, physiological signal analysis, metadata examination) and their limitations. The survey's documentation of the generator-detector arms race — the consistent finding that detection models trained on older generation methods fail on newer ones — provides empirical grounding for the chapter's claim that detection is structurally a losing race.
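The intuition behind one of the survey's method families, frequency-domain analysis, can be sketched in a few lines: periodic artifacts left by generator upsampling show up as excess high-frequency energy in an image's 2D Fourier spectrum. The snippet below is a toy illustration on synthetic arrays, not a method from the survey; the function names and the checkerboard stand-in for an upsampling artifact are assumptions made for the example.

```python
# Toy sketch of frequency-domain artifact analysis (illustrative only).
# A periodic pattern mimicking a generator upsampling artifact raises
# high-frequency energy in the image's 2D Fourier spectrum.
import numpy as np

def log_spectrum(image: np.ndarray) -> np.ndarray:
    """Log-magnitude 2D Fourier spectrum, DC component shifted to the center."""
    f = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(f))

def high_freq_energy(spec: np.ndarray, margin: int = 16) -> float:
    """Mean log-magnitude outside the low-frequency block at the center."""
    mask = np.ones_like(spec, dtype=bool)
    c = spec.shape[0] // 2
    mask[c - margin:c + margin, c - margin:c + margin] = False
    return float(spec[mask].mean())

# Synthetic demo: a smooth gradient vs. the same gradient with a faint
# checkerboard pattern standing in for a periodic upsampling artifact.
x = np.linspace(0, 1, 64)
natural = np.outer(x, x)
checker = np.indices((64, 64)).sum(axis=0) % 2
artifact = natural + 0.05 * checker

# The artifacted image carries more energy away from the spectrum center.
assert high_freq_energy(log_spectrum(artifact)) > high_freq_energy(log_spectrum(natural))
```

The survey's central caveat applies even at this toy scale: a detector keyed to one artifact family (here, a fixed-period pattern) says nothing about images produced by a newer generator that does not leave it.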
Citron, Danielle K. The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age. New York: W.W. Norton, 2022.
Citron's book extends her deepfake analysis into the broader privacy and dignity context, with substantial coverage of NCII deepfakes and their effects on women in public life. The book is valuable for Chapter 38's analysis of the NCII category's relevance to the propaganda framework, specifically the intersection of NCII deepfakes and political intimidation. Citron's treatment of the legal landscape for synthetic intimate imagery is the clearest available account of the state-level patchwork and its limitations.
Historical Context and Comparative Analysis
Nance, Malcolm. The Plot to Hack America: How Putin's Cyberspies and WikiLeaks Tried to Steal the 2016 Election. New York: Skyhorse, 2016.
Despite its publication before the full Senate Intelligence Committee investigation, Nance's early analysis of Russian information operations provides useful context for understanding the pre-digital doctrine that shaped IRA methodology. The connections between Soviet active measures doctrine and contemporary digital influence operations are a running thread that helps readers understand why these operations take the forms they do.
Benkler, Yochai, Robert Faris, and Hal Roberts. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford: Oxford University Press, 2018.
The most data-intensive analysis of the U.S. political information environment in the 2016 election period. Benkler, Faris, and Roberts' central finding — that foreign influence operations were less consequential than domestic right-wing media network dynamics — provides essential comparative context for assessing the relative importance of foreign computational influence operations versus domestic information environment pathologies. The empirical methodology, based on comprehensive analysis of link-sharing and content networks, is a model for the field.
Walker, Shaun. The Long Hangover: Putin's New Russia and the Ghosts of the Past. Oxford: Oxford University Press, 2018.
Walker's journalistic analysis of Russian political culture provides the context necessary for understanding the strategic logic of Russian information operations: not as isolated operational decisions but as expressions of a broader doctrine about Russia's relationship to the international order, the West, and the manipulation of perception in statecraft. Without this context, influence operation analysis can mistake tactical creativity for strategic incoherence.
Platform Governance and Response
Kreiss, Daniel, and Shannon C. McGregor. "The Arbiters of What Our Voters See: Facebook and Google's Struggle with Policy, Process, and Enforcement around Political Advertising." Political Communication 36, no. 4 (2019): 499–522.
Examines the institutional processes by which platforms make content moderation and CIB enforcement decisions, the tensions between commercial and civic responsibilities, and the organizational barriers to consistent enforcement. Essential context for evaluating CIB transparency reports and understanding why platform responses to documented influence operations have been inconsistent.
Roberts, Sarah T. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press, 2019.
Roberts' analysis of content moderation labor — who actually makes the decisions about what content is removed, under what conditions, with what training — provides critical grounding for evaluating platform enforcement claims. The structural limitations of human content review and the impossible scale of moderation tasks explain gaps in CIB enforcement that cannot be attributed simply to insufficient will.
Policy and Regulation
European Parliament and Council of the European Union. "Artificial Intelligence Act." Regulation (EU) 2024/1689 (2024).
The EU AI Act's provisions on synthetic media and high-risk AI applications, now in effect, represent the most comprehensive regulatory framework currently applicable to deepfake technology in any major jurisdiction. The Act's transparency-focused approach (disclosure requirements rather than prohibition) and the definition frameworks it employs are the leading edge of what deepfake regulation looks like in practice.
U.S. Senate Intelligence Committee. "Report of the Select Committee on Intelligence, United States Senate, on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election." Volumes I–V (2019–2020).
The authoritative public record of the IRA 2016 operation. Volume II (Russia's Use of Social Media) is the most relevant for Chapter 38's analysis. The committee's methodology — which included direct examination of platform-provided data not available to independent researchers — provides a level of completeness unavailable elsewhere. Available at intelligence.senate.gov.
Journalism and Investigative Resources
EU DisinfoLab. Research Archive. disinfo.eu.
The EU DisinfoLab's research archive includes the documentation of Indian Chronicles (the longest-running documented influence operation in Europe), the secondary analysis of Spamouflage in European contexts, and original investigations into influence operation infrastructure. The archive is particularly valuable for its documentation of operations targeting European democracies, which receive less coverage in U.S.-focused research.
Bellingcat. bellingcat.com.
Bellingcat's open-source investigative journalism, drawing on OSINT methodology, has documented a range of deepfake and influence operation cases. Their methodology guides — covering reverse video search, geolocation, facial recognition, metadata analysis, and social media investigation — are practical companions to the chapter's Action Checklist. The guides are freely available and are regularly updated to reflect new tools and techniques.
First Draft News. firstdraftnews.org.
The First Draft resource center, maintained by the journalistic organization focused on information disorder, provides practitioner-oriented guides to verification methodology, synthetic media detection, and influence operation identification. The guides are designed for journalists and researchers working in real time rather than in academic settings, and reflect operational realities that academic literature sometimes abstracts away.
Chapter 38 of Propaganda, Power, and Persuasion: A Critical Study of Influence, Disinformation, and Resistance