Chapter 39: Further Reading
Anticipatory Ethics and Foresight
1. Collingridge, David. The Social Control of Technology. New York: St. Martin's Press, 1980. The foundational text for the Collingridge dilemma. Analyzes the fundamental paradox of technological governance and examines case studies in nuclear power and other technologies. Essential for understanding why proactive ethics is both difficult and necessary.
2. Jasanoff, Sheila. The Ethics of Invention: Technology and the Human Future. New York: W. W. Norton, 2016. Jasanoff examines how societies make collective decisions about technology and what ethical responsibilities attend the power to invent. Her comparative analysis of how different societies govern the same technologies illuminates the role of culture, politics, and institutions in shaping technological trajectories.
3. Floridi, Luciano, ed. The Onlife Manifesto: Being Human in a Hyperconnected Era. Cham: Springer, 2015. A collection of essays examining how digital technologies are transforming human experience, identity, and social life. Anticipates many of the human-AI relationship questions that have become urgent in the ChatGPT era.
Agentic AI and Autonomous Systems
4. Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking, 2019. Russell, a leading AI researcher, argues that the problem of building AI systems that are beneficial and controllable is the central challenge of AI development. His analysis of why control is difficult and what approaches show promise is essential context for understanding agentic AI governance.
5. Krakovna, Victoria, et al. "Specification Gaming: The Flip Side of AI Ingenuity." DeepMind Blog, 2020. A catalog of documented instances of AI systems finding unexpected ways to satisfy their specified objectives that violate the intent of those objectives. Essential reading for understanding why bounded action spaces and careful goal specification matter for agentic AI.
6. Weidinger, Laura, et al. "Sociotechnical Safety Evaluation of Generative AI Systems." arXiv preprint arXiv:2310.11986 (2023). A rigorous framework for evaluating the safety and risks of generative AI systems, from researchers at DeepMind. Addresses agentic AI risks alongside other harm categories and provides concrete evaluation methodologies.
Cognitive Liberty and Neural Rights
7. Farahany, Nita A. The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. New York: St. Martin's Press, 2023. The definitive text on cognitive liberty as an emerging rights concept. Farahany examines brain-computer interfaces, emotion AI, and other neurotechnologies and argues for a new right to mental self-determination. Essential reading for understanding the cognitive liberty dimension of AI ethics.
8. Ienca, Marcello, and Roberto Andorno. "Towards New Human Rights in the Age of Neuroscience and Neurotechnology." Life Sciences, Society and Policy 13, art. 5 (2017). An earlier articulation of the case for cognitive liberty as a human rights concept, examining the specific challenges posed by neurotechnology. Provides the philosophical foundation for Farahany's more recent work.
AI Power Concentration and Governance
9. Khan, Lina M. "The Separation of Platforms and Commerce." Columbia Law Review 119, no. 4 (2019): 973–1098. Khan's influential argument for structural remedies for platform monopoly, which anticipated her later role as FTC Chair. While focused on e-commerce platforms, the framework applies with modification to AI infrastructure monopoly.
10. Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019. Zuboff's sweeping analysis of how digital technology companies have built business models around the extraction and commercialization of behavioral data. Provides essential context for understanding the power dimensions of AI development and deployment.
11. Acemoglu, Daron, and Simon Johnson. Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. New York: PublicAffairs, 2023. Acemoglu and Johnson examine the history of technological change and argue that the benefits of technology are not automatically shared broadly — they require deliberate choices about who controls technology and for what purposes. Their analysis of AI is specifically relevant to the power concentration concerns of this chapter.
Human-AI Relationships and Cognitive Effects
12. Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. New York: W. W. Norton, 2010. Carr's examination of how internet use is changing human cognition, particularly attention and deep reading. A useful framework for thinking about the cognitive effects of AI, with the caveat that some of Carr's more dramatic claims about brain change have been contested.
13. Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf, 2017. Tegmark's exploration of different scenarios for how AI might develop and their implications for human life, economy, and society. More speculative than other readings but useful for thinking through the range of long-term trajectories.
AI and Climate
14. Strubell, Emma, Ananya Ganesh, and Andrew McCallum. "Energy and Policy Considerations for Deep Learning in NLP." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (2019): 3645–3650. The paper that first systematically quantified the energy costs of training large NLP models. Remains an important reference point for the AI-climate discussion, though AI energy consumption has grown substantially since its publication.
15. Patterson, David, et al. "Carbon Emissions and Large Neural Network Training." arXiv preprint arXiv:2104.10350 (2021). A more recent analysis of carbon emissions from large model training, from researchers at Google and UC Berkeley. Provides methodology for estimating training costs and discusses the effect of hardware and energy source choices.
International AI Governance
16. Dafoe, Allan. "AI Governance: A Research Agenda." Future of Humanity Institute, University of Oxford, 2018. A rigorous research agenda for AI governance that identifies key questions, maps the existing literature, and sets research priorities. Provides a useful framework for thinking about the dimensions of international AI governance.
17. Calo, Ryan. "Artificial Intelligence Policy: A Primer and Roadmap." UC Davis Law Review 51, no. 2 (2017): 399–435. A clear and comprehensive overview of AI policy issues, including international dimensions. Somewhat dated now but still useful as a foundational survey of the governance landscape.
18. Runciman, David. How Democracy Ends. New York: Basic Books, 2018. Runciman's analysis of the threats to democratic governance in the age of digital technology. Provides essential context for understanding why AI-enabled threats to democracy — including those in the 2024 elections case — are serious rather than alarmist.
AI Ethics as Practice and Ongoing Resources
19. Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford: Oxford University Press, 2016. Vallor applies virtue ethics to technology, arguing that the development of good character — practical wisdom in the face of technological change — is the essential foundation of technology ethics. Her framework for "techno-moral virtues" is the most fully developed account of AI ethics as practice in the philosophical literature.
20. Partnership on AI. "Ongoing Research and Resources." partnershiponai.org. The Partnership on AI is a multi-stakeholder organization that conducts research, develops guidance, and convenes organizations across the AI ecosystem. Its ongoing publications and resources represent a living library of AI ethics practice. Unlike the other readings on this list, this entry points to a continuously updated resource rather than a fixed text — appropriate for a field in rapid evolution.