Chapter 29: Further Reading — AI and Democratic Processes

Foundational Works on the Attention Economy and Algorithmic Politics

1. Pariser, E. (2011). The Filter Bubble: What the Internet Is Hiding from You. Penguin Press. The book that introduced "filter bubble" into mainstream discourse. Essential reading as a foundational argument, though the empirical evidence since publication is more mixed. Read with Guess et al. (below) for a balanced view.

2. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs. Comprehensive theoretical framework for understanding how data collection for behavioral prediction creates structural political power. Dense but foundational for understanding the attention economy's political dimensions.

3. Bail, C. A., et al. (2018). "Exposure to Opposing Views on Social Media Can Increase Political Polarization." PNAS, 115(37), 9216–9221. The key empirical paper showing that Twitter exposure to opposing political views increased rather than decreased political polarization — essential for understanding the limits of the "more exposure" solution to echo chambers.

4. Guess, A., Nyhan, B., & Reifler, J. (2020). "Exposure to Untrustworthy Websites in the 2016 US Election." Nature Human Behaviour, 4(5), 472–480. Careful study of actual exposure to misinformation during the 2016 election, finding that exposure was heavily concentrated, that older Americans were disproportionately exposed, and that individual selection behaviors mattered more than algorithmic curation.


Disinformation and AI-Generated Content

5. Chesney, R., & Citron, D. K. (2019). "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." California Law Review, 107(6), 1753–1820. Foundational legal academic treatment of deepfake risks, introducing the "liar's dividend" concept and providing the analytical framework for most subsequent legal discussion. Essential reading.

6. Schiff, M., & Ferrara, E. (2023). "AI-Generated Political Disinformation." AI & Society, 38, 449–464. Systematic review of documented cases of AI-generated political disinformation, methodologies, and detection approaches. Up-to-date and empirically grounded.

7. Vaccari, C., & Chadwick, A. (2020). "Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News." Social Media + Society, 6(1). Experimental research on actual effects of deepfakes on viewer uncertainty and trust — important empirical grounding for what deepfakes actually do to information environments.


Political Advertising and Micro-Targeting

8. Cadwalladr, C., & Graham-Harrison, E. (2018). "Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach." The Guardian, March 17, 2018. The original investigative reporting that broke the Cambridge Analytica story. Essential primary document for understanding what occurred.

9. Karpf, D. (2019). "On Digital Disinformation and Democratic Myths." Mediawell, Social Science Research Council. Careful empirical assessment of Cambridge Analytica's actual capabilities, arguing that its boasts were substantially exaggerated. Important corrective to narrative inflation.

10. Zuiderveen Borgesius, F. J., Möller, J., Kruikemeier, S., Ó Fathaigh, R., Irion, K., Dobber, T., ... & de Vreese, C. H. (2018). "Online Political Microtargeting: Promises and Threats for Democracy." Utrecht Law Review, 14(1), 82–96. Balanced academic treatment of political micro-targeting — what the evidence shows about its effectiveness, and the genuine accountability problems it creates regardless of whether it works as advertised.


Platform Governance and Democracy

11. Zittrain, J. (2019). "The Hidden Costs of Internet Centralization." Harvard Law Review Forum, 133. Theoretical framework for understanding platforms as governance institutions and the democratic accountability problems this creates.

12. Klonick, K. (2018). "The New Governors: The People, Rules, and Processes Governing Online Speech." Harvard Law Review, 131(6), 1598–1670. Detailed examination of how major platforms actually govern speech — essential for understanding content moderation as a political process.

13. Facebook/Meta. (2021). "Adversarial Threat Reports." Meta Transparency Center. Meta's periodic reports documenting coordinated inauthentic behavior networks removed from its platforms. Essential primary source for understanding documented disinformation operations.


Case Studies: Myanmar and Electoral Disinformation

14. UN Human Rights Council. (2018). "Report of the Independent International Fact-Finding Mission on Myanmar." A/HRC/39/64. The UN's definitive investigation, which specifically identified Facebook's role as a contributing factor to the violence. Primary source material.

15. Mozur, P. (2018). "A Genocide Incited on Facebook, With Posts From Myanmar's Military." The New York Times, October 15, 2018. Key investigative reporting on the Myanmar genocide and Facebook's role. Accessible primary documentation.


Regulatory Frameworks

16. European Union. (2022). "Digital Services Act" (Regulation (EU) 2022/2065). Official Journal of the European Union. The full regulatory text of the DSA. Essential reference for understanding the legal framework and its election-integrity requirements for very large online platforms (VLOPs).

17. Tambini, D. (2018). "Social Media Power and Election Legitimacy." In Moore, M., & Tambini, D. (Eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple. Oxford University Press. Thoughtful analysis of the normative questions about platform power and electoral legitimacy.


AI for Democracy

18. Hsiao, A. (2022). "How Taiwan Is Pioneering a Digital Democracy." Foreign Policy, January 3, 2022. Accessible overview of Taiwan's digital democracy experiments, including vTaiwan and Audrey Tang's approach. Good introduction to the AI-for-democracy use cases.

19. Landemore, H. (2020). Open Democracy: Reinventing Popular Rule for the Twenty-First Century. Princeton University Press. Political theory treatment of how digital tools including AI can enable more genuinely participatory democracy. Provides the normative framework for evaluating AI-for-democracy applications.

20. Center for AI Safety. (2024). "AI and Elections: A Risk Assessment." CAIS. Risk assessment from a leading AI safety organization, synthesizing the evidence on AI election disinformation risks. Updated regularly and freely available online.