Chapter 40 Further Reading: AI, Automation, and the Future of Political Analytics
Large Language Models and Political Communication
Goldstein, Josh A., et al. "Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations." arXiv preprint arXiv:2301.04246 (2023). A comprehensive early analysis of how LLMs could be used for political influence operations, including disinformation, astroturfing, and persona creation. The framework for threat categorization is directly useful for practitioners evaluating AI tools.
Bai, Haohan, et al. "AI Can Now Write Persuasive Political Messaging — And It May Change Democracy." Working paper, 2023. The study referenced in the chapter's opening pages, documenting that LLM-generated persuasion messages can be as effective as those written by experienced human consultants. Understanding the methodology and limitations of this finding is important before drawing sweeping policy conclusions.
Kreps, Sarah, and R. Miles McCain. "Not All AI Is Created Equal: The Role of Human Oversight in AI-Assisted Political Communication." Political Communication (forthcoming). Examines the role of human review in mitigating the harms of AI-generated political content. The paper's central argument — that the question is not whether to use AI but what human oversight architecture surrounds it — is a practical guide for campaign analytics directors.
Deepfakes and Synthetic Media
Citron, Danielle Keats, and Robert Chesney. "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." California Law Review 107 (2019): 1753–1820. The foundational legal and policy analysis of deepfakes, including the original articulation of the liar's dividend concept. Despite its 2019 publication date, the conceptual framework has aged extremely well. Essential reading.
Paris, Britt, and Joan Donovan. Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence. Data & Society Research Institute, 2019. A practically oriented analysis of synthetic media that distinguishes "deepfakes" (AI-generated) from "cheap fakes" (lower-tech manipulations like slowing down video). The distinction matters for regulation and detection: cheap fakes are currently more prevalent in political disinformation than true deepfakes.
Bateman, Jon. Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios. Carnegie Endowment for International Peace, 2020. Although focused on financial fraud, the threat assessment methodology and the analysis of detection limitations transfer directly to political contexts. The section on institutional responses is particularly valuable.
Partnership on AI. Responsible Practices for Synthetic Media: A Framework for Collective Action. 2023. The multi-stakeholder framework for responsible synthetic media practices developed by the Partnership on AI, which complements content-provenance standards such as those developed by the C2PA coalition. Available at partnershiponai.org.
AI in Survey Research
Bail, Chris. Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing. Princeton University Press, 2021. While focused on social media rather than AI polling specifically, Bail's methodology for large-scale online survey research — including the challenges of representative sampling in digital environments — is directly relevant to evaluating AI-assisted survey methods.
Argyle, Lisa P., et al. "Out of One, Many: Using Language Models to Simulate Human Samples." Political Analysis 31, no. 3 (2023): 337–351. The most rigorous published study of synthetic respondents, which finds that LLMs can approximate aggregate survey results for well-documented populations but diverge significantly for specific local contexts and underrepresented populations. The methodological appendix is essential for understanding the limitations.
Bisbee, James, and Joshua Clinton. "Synthetic Replacements for Human Survey Data? The Perils of Large Language Models." Political Analysis (forthcoming). A critical assessment of synthetic respondent polling that emphasizes the conditions under which aggregate accuracy conceals individual-level failures — directly relevant to the democratic accountability argument in Chapter 40.
Platform Algorithms and Political Information
Guess, Andrew, et al. "How Do Social Media Feed Algorithms Affect Attitudes and Behavior in an Election Campaign?" Science 381, no. 6656 (2023): 398–404. One of the studies from the large-scale 2020 Facebook collaboration, documenting that algorithmically curated feeds shaped what content users consumed but produced no detectable change in political attitudes or polarization over the study period. Required reading for anyone discussing platform polarization effects.
Nyhan, Brendan, et al. "Like-Minded Sources on Facebook Are Prevalent but Not Polarizing." Nature 620 (2023): 137–144. A companion study from the same collaboration, documenting that exposure to like-minded content on Facebook was common but did not systematically increase polarization. The implications for platform algorithmic governance are significant.
Tufekci, Zeynep. Twitter and Tear Gas: The Power and Fragility of Networked Protest. Yale University Press, 2017. Though predating the LLM era, Tufekci's analysis of how platform affordances shape political communication — including the interaction between organic and algorithmic content distribution — provides essential conceptual grounding.
AI Regulation and Disclosure
European Parliament and Council. Regulation (EU) 2024/1689 of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). 2024. The landmark EU AI regulatory framework, which includes provisions relevant to political advertising and high-risk AI systems. Understanding the EU approach — even for US practitioners — is important because it sets a global regulatory reference point.
Federal Election Commission. Internet Communication Disclaimers. Advisory Opinion Archive. The FEC's advisory opinions on digital advertising disclaimers, which represent the closest current federal guidance on AI disclosure in political advertising. The gap between existing disclaimer requirements and the questions raised by AI-generated content is visible in these documents.
Democratic Theory and AI
Runciman, David. How Democracy Ends. Basic Books, 2018. An analysis of how democratic systems fail, particularly useful for situating the AI disinformation threat within broader democratic theory. Runciman's discussion of technological disruption to political epistemology is directly relevant.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019. A comprehensive analysis of the commercial surveillance infrastructure that underlies political targeting, AI personalization, and platform recommendation systems. Zuboff's framework for understanding the "behavioral modification" purpose of surveillance capitalism provides essential context for evaluating AI in political analytics.