Chapter 39 Further Reading: AI, Generative Models, and the Future of Synthetic Media


Foundational Research

1. Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. Georgetown University Center for Security and Emerging Technology.

The most influential early analysis of LLM capability for influence operations. Goldstein et al. map how generative language models could lower the cost and expand the scale of influence campaigns, and survey potential mitigations across the AI pipeline; related experimental work by several of the same authors found that GPT-3-generated propaganda messages were roughly as persuasive as human-written ones across multiple political topics. The report is essential reading for understanding what the evidence does and does not support about AI's persuasion threat. Critically, it frames the threat in terms of scale and cost reduction rather than per-message superiority. Available at: https://cset.georgetown.edu


2. Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023). "Co-Writing with Opinionated Language Models Affects Users' Views." CHI 2023.

Examines how AI-generated opinionated text affects user beliefs and writing. Combined with related Jakesch et al. work on AI misinformation detectability, this research is central to understanding how AI-generated text operates in cognitive and belief-formation contexts. The research demonstrates that exposure to AI-generated opinionated content influences user writing in the direction of the AI's views, with implications for understanding how AI-generated information shapes discourse.


3. Nightingale, S. J., & Farid, H. (2022). "AI-synthesized faces are indistinguishable from real faces and more trustworthy." Proceedings of the National Academy of Sciences, 119(8).

A landmark empirical study demonstrating that human observers cannot reliably distinguish AI-generated faces from real photographs — and that AI-generated faces are rated as more trustworthy on average than real faces. This research established an empirical basis for concerns about synthetic image credibility and has been widely cited in subsequent media literacy and policy work. The "more trustworthy" finding is particularly counterintuitive and important for understanding the credibility dynamics of synthetic media.


4. Citron, D. K., & Chesney, R. (2019). "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." California Law Review, 107(6), 1753–1820.

The foundational legal and policy analysis of deepfake threats, introducing key concepts including the "liar's dividend." Citron and Chesney's framework has shaped subsequent legal scholarship and policy work on synthetic media. While written before the current generation of accessible AI tools, the conceptual analysis remains highly relevant. The article's identification of specific legal vulnerabilities — including privacy torts, defamation, and election law — provides a still-useful framework for the legal analysis of synthetic media harm.


5. Hancock, J. T., & Bailenson, J. N. (2021). "The Social Impact of Deepfakes." Cyberpsychology, Behavior, and Social Networking, 24(3), 149–152.

A compact but important review of the social and psychological mechanisms through which deepfakes affect trust, social cohesion, and individual psychology. Hancock and Bailenson's framework, drawing on social presence theory and the psychology of deception, provides useful conceptual tools for thinking about why synthetic media is harmful beyond the immediate spread of false information. The paper addresses how deepfakes affect the social context of communication, not just the content of specific messages.


Technical References

6. Kirchenbauer, J., Geiping, J., Wen, Y., Katz, J., Miers, I., & Goldstein, T. (2023). "A Watermark for Large Language Models." ICML 2023.

The primary technical paper on LLM watermarking, providing the statistical and cryptographic foundations for embedding detectable signals in LLM outputs. Essential reading for understanding both the promise and limitations of watermarking as a provenance mechanism. The paper's discussion of robustness to paraphrasing and the detectability/quality tradeoffs is particularly valuable for policy and practical applications. Available on arXiv at https://arxiv.org/abs/2301.10226
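The statistical core of the scheme can be sketched briefly. A "green list" covering a fraction of the vocabulary is pseudorandomly derived from each preceding token; generation softly favors green tokens, and a detector counts green tokens and computes a z-score against the no-watermark null hypothesis. The following toy sketch illustrates only the detection statistic, not the paper's actual implementation; the vocabulary, hash-based green-list rule, and gamma value are illustrative assumptions.

```python
import hashlib

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary, not a real tokenizer
GAMMA = 0.5  # illustrative fraction of the vocabulary on the green list

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign `token` to the green list, seeded by the previous
    # token (the paper keys this with a secret; here it is an unkeyed hash).
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return (h[0] / 255.0) < GAMMA

def z_score(tokens: list) -> float:
    # Under the null hypothesis (no watermark), each token lands on the green
    # list with probability GAMMA, so the green count is ~Binomial(n, GAMMA).
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    mean, var = GAMMA * n, GAMMA * (1 - GAMMA) * n
    return (greens - mean) / var ** 0.5
```

A watermarked text (one that consistently favors green tokens) yields a large positive z-score, while unwatermarked text hovers near zero; this is also why paraphrasing, which replaces tokens without regard to the green list, erodes detectability.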


7. Coalition for Content Provenance and Authenticity (C2PA). C2PA Technical Specification. (Current version available at https://c2pa.org/specifications/)

The authoritative technical documentation for the C2PA standard. The specification defines the manifest format, cryptographic mechanisms, binding approaches, and implementation guidance. The technical documentation is readable by non-specialists in its overview sections and provides the authoritative basis for assessing what C2PA can and cannot do. Essential for understanding content provenance solutions at a technical level.
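The central mechanism the specification defines is a cryptographically signed manifest bound to the asset, typically by embedding a hash of the asset's bytes, so that any modification to the content invalidates the binding. The sketch below shows only that hash-binding idea in miniature; the field names and JSON layout are illustrative assumptions, not the actual C2PA manifest schema, and a real verifier also validates the manifest's signature and certificate chain.

```python
import hashlib
import json

def make_manifest(asset_bytes: bytes, claims: dict) -> str:
    # Bind the provenance claims to the asset by embedding a hash of its bytes.
    manifest = dict(claims, asset_sha256=hashlib.sha256(asset_bytes).hexdigest())
    return json.dumps(manifest)

def verify_binding(asset_bytes: bytes, manifest_json: str) -> bool:
    # Check only that the asset still matches the bound hash; C2PA proper
    # additionally verifies the signature over the manifest itself.
    manifest = json.loads(manifest_json)
    return manifest["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()
```

Even this toy version makes the key limitation visible: the binding proves the asset is unchanged since signing, not that the signed claims about its origin are true.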


8. Weber-Wulff, D., et al. (2023). "Testing of Detection Tools for AI-Generated Text." International Journal of Educational Integrity, 19(1).

The most comprehensive independent evaluation of AI text detection tools published to date. Weber-Wulff and colleagues tested fourteen detection tools and found that all had significant error rates, with false positive rates sufficient to cause serious problems for consequential decisions such as academic integrity judgments. The paper concludes that current detection tools are "not suitable for high-stakes decision making." An essential empirical baseline for discussions of AI content detection.
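The base-rate arithmetic behind that conclusion can be made concrete. Assuming, purely for illustration, a detector with a 5% false positive rate and an 80% true positive rate, applied in a setting where 10% of submissions are actually AI-generated, Bayes' rule gives the probability that a flagged text really is AI-generated:

```python
def positive_predictive_value(fpr: float, tpr: float, base_rate: float) -> float:
    # P(text is AI-generated | detector flags it), by Bayes' rule
    true_pos = tpr * base_rate          # flagged and actually AI-generated
    false_pos = fpr * (1 - base_rate)   # flagged but human-written
    return true_pos / (true_pos + false_pos)

# With the illustrative rates above: 0.08 / (0.08 + 0.045) = 0.64
ppv = positive_predictive_value(fpr=0.05, tpr=0.80, base_rate=0.10)
```

Under these assumed rates, more than a third of flagged texts would be human-written, which is why even seemingly modest false positive rates make such tools unsuitable for consequential decisions like academic misconduct findings.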


Policy and Governance

9. European Commission. (2024). Artificial Intelligence Act (Regulation (EU) 2024/[number]). Official Journal of the European Union.

The primary text of the EU AI Act — the most comprehensive enacted AI governance framework. Key sections for synthetic media include: the risk-based framework definitions, the general-purpose AI model requirements (Chapter V), and the transparency obligations covering deepfake content (Article 50). The Act's preamble recitals provide important context for the legislative intent behind the synthetic media provisions. Available at https://eur-lex.europa.eu


10. Federal Communications Commission. (2024). Declaratory Ruling: Artificial Intelligence Voice Cloning in Robocalls. FCC-24-17.

The FCC's February 2024 declaratory ruling clarifying that robocalls using AI-generated voices are subject to the Telephone Consumer Protection Act. The ruling, issued in the wake of the AI voice-cloned robocalls targeting the 2024 New Hampshire primary, established the first federal regulatory response specifically addressing AI-generated audio in telephone communications, and its reasoning and scope set important precedents for subsequent regulatory interpretation. Available at https://www.fcc.gov


Investigative Journalism and Industry Reports

11. NewsGuard. (2023–2024). AI-Generated News Tracking Center. NewsGuard Technologies.

NewsGuard's ongoing tracking of AI-generated news websites, including the 2023 audit that identified over 200 AI-generated news sites and subsequent updates. The tracking center provides documented examples, detection methodologies, and trend data. Essential for understanding the operational reality of AI-generated fake news farms as a current, not merely theoretical, phenomenon. Available at https://www.newsguardtech.com/special-report/ai-tracking-center/


12. Nimmo, B., & Francois, C. (2020). Inauthentic Behavior: Detecting Coordinated Inauthentic Behavior in Social Networks. Atlantic Council Digital Forensic Research Lab.

While predating the current generative AI era, this research from the DFRLab provides foundational frameworks for identifying coordinated inauthentic behavior — synthetic personas, fake accounts, coordinated posting — that remain essential for analyzing AI-augmented information operations. The detection methodologies developed for pre-AI influence operations form the baseline against which AI-assisted influence operations must be assessed.


13. Chesney, R., & Citron, D. (2019). "Disinformation on Steroids: The Threat of Deep Fakes." Council on Foreign Relations Cyber Brief.

A concise policy-oriented companion to Citron and Chesney's longer California Law Review article, written for a policy audience and focused on practical governance recommendations. The brief's recommendations — disclosure requirements, research investment, platform accountability, and legal reforms — provide a useful policy agenda framework that can be evaluated against subsequent developments. Available at https://www.cfr.org


14. Benkler, Y., Faris, R., & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford University Press.

While focused on the pre-generative-AI information environment, this book provides essential empirical grounding for understanding how political misinformation actually spreads, which partisan ecosystems are most vulnerable, and what structural features of media systems amplify or dampen false information. Understanding the pre-AI baseline is essential for assessing what AI adds to the threat. The book's documented finding that propaganda spreads most effectively through asymmetric partisan media ecosystems, not random social sharing, has important implications for assessing where AI-generated disinformation will have the greatest impact.


15. Rid, T. (2020). Active Measures: The Secret History of Disinformation and Political Warfare. Farrar, Straus and Giroux.

A historical study of disinformation operations from the Cold War to the present, providing essential context for situating AI-generated disinformation within a longer tradition of state-sponsored information manipulation. Rid's historical perspective demonstrates both that political disinformation is not new and that technological capability changes have historically transformed what is possible. Particularly valuable for understanding Russian disinformation traditions that AI tools may accelerate and extend.