Chapter 3: Further Reading and Resources

An annotated bibliography

Page numbers given for edited volumes are approximate.


Foundational Ethical Theory

1. Mill, John Stuart. (1863). Utilitarianism. Parker, Son, and Bourn.

The foundational text of utilitarian ethics, written by the 19th-century philosopher who refined Bentham's original formulation. Mill's version of utilitarianism introduced important distinctions between higher and lower pleasures and responded to objections that utility is a doctrine worthy only of swine. Essential for understanding why utilitarian reasoning feels intuitive in business contexts, and what its internal tensions are. Available freely via Project Gutenberg.


2. Kant, Immanuel. (1785). Groundwork for the Metaphysics of Morals. (Translated by Mary Gregor, Cambridge University Press, 1998.)

The source text for the categorical imperative in both its formulations. Kant's argument that morality must be derived from reason alone, independent of consequences, is the foundation of deontological ethics. Challenging reading, but the core arguments are accessible; the Cambridge edition's introduction by Christine Korsgaard provides essential context. For students wanting a more accessible introduction, Korsgaard's Creating the Kingdom of Ends (Cambridge, 1996) offers excellent secondary engagement.


3. Rawls, John. (1971). A Theory of Justice. Harvard University Press.

The most influential work of political philosophy of the twentieth century. Part I, which develops the original position and the veil of ignorance thought experiment, is directly relevant to AI ethics. The difference principle — that inequalities are permissible only if they benefit the least-advantaged — provides a powerful framework for evaluating AI systems whose benefits are not evenly distributed. For a more accessible introduction, Rawls's 2001 Justice as Fairness: A Restatement (Harvard University Press) covers the core arguments more concisely.


4. Scanlon, T.M. (1998). What We Owe to Each Other. Harvard University Press.

Scanlon's contractualism — the view that an act is wrong if it would be disallowed by principles that no one could reasonably reject — is one of the most sophisticated frameworks for AI ethics because it centers the standpoint of those affected. Chapter 4, on the structure of contractualism, is the most directly applicable. Contractualism is particularly well suited to evaluating AI systems that affect specific populations, because it asks: could the people in this population reasonably reject the principles on which the system operates?


5. Nussbaum, Martha C. (2011). Creating Capabilities: The Human Development Approach. Harvard University Press.

The most accessible presentation of the capabilities approach, developed for a general audience. Nussbaum presents and defends her list of ten central human capabilities, responds to critics of the approach, and applies it to questions of global justice. For AI ethics readers, the chapters on capabilities and human dignity are most directly relevant. Nussbaum's longer Women and Human Development (Cambridge, 2000) develops the philosophical foundations in greater depth.


Applied AI Ethics — Foundational Texts

6. Mittelstadt, Brent Daniel, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, and Luciano Floridi. (2016). "The Ethics of Algorithms: Mapping the Debate." Big Data & Society, 3(2), 1–21.

One of the first systematic academic surveys of algorithmic ethics, mapping the conceptual landscape of concerns — including unfairness, opacity, and bias — and connecting them to existing ethical frameworks. An essential orientation paper for understanding how philosophical ethics maps onto AI-specific concerns. Freely available online.


7. Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. (2018). "The Moral Machine Experiment." Nature, 563(7729), 59–64.

The primary source for Case Study 3.1. A landmark empirical paper reporting the results of the MIT Moral Machine experiment — roughly 2.3 million participants across 233 countries and territories, contributing some 40 million moral decisions about autonomous vehicle collision scenarios. Reveals dramatic cross-cultural variation in moral preferences. The supplementary data files are freely available and provide granular country-level analysis. A follow-up paper — Awad et al. (2020) in PNAS — responds to critiques and provides additional analysis.


8. Vallor, Shannon. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.

The most rigorous application of virtue ethics to technology, including AI. Vallor develops a "technomoral" virtue ethics that addresses what it means to develop good character in technologically mediated environments. Particularly relevant to Section 3.4 and the organizational virtue discussion in Case Study 3.2. Vallor is one of the most important applied AI ethics philosophers working today; this book is a genuine contribution to both philosophy and technology studies.


9. Floridi, Luciano, and Josh Cowls. (2019). "A Unified Framework of Five Principles for AI in Society." Harvard Data Science Review, 1(1).

Argues for five key principles — beneficence, non-maleficence, autonomy, justice, and explicability — as a unified framework synthesizing existing AI ethics guidance documents. Useful for understanding how ethical frameworks have been translated into AI-specific guidelines, and for comparing the philosophical frameworks in this chapter with practical governance documents. Freely available online.


Consequentialism, Utilitarianism, and AI

10. Singer, Peter. (2011). Practical Ethics (3rd ed.). Cambridge University Press.

Singer is the most influential contemporary utilitarian philosopher. This accessible text applies utilitarian reasoning to a wide range of practical ethical questions. Chapter 2, on equality and its basis, is particularly relevant for AI ethics: Singer's argument that interests must count equally regardless of who holds them challenges AI systems that systematically discount the interests of certain populations. Singer's impartial utilitarianism provides the strongest version of the consequentialist case for distribution-sensitive AI design.


11. Nyholm, Sven, and Jilles Smids. (2016). "The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?" Ethical Theory and Moral Practice, 19(5), 1275–1289.

One of the most careful philosophical analyses of whether autonomous vehicle crash algorithms really constitute an applied trolley problem. Nyholm and Smids argue that the analogy is limited: accident-algorithm design involves prospective decisions made under risk and uncertainty by many stakeholders who bear legal and moral responsibility, not a split-second choice by a lone bystander. Essential reading alongside the Moral Machine paper for a balanced understanding of the autonomous vehicle trolley problem. Directly engages the academic debate that Case Study 3.1 introduces.


Non-Western and Relational Frameworks

12. Metz, Thaddeus. (2007). "Toward an African Moral Theory." Journal of Political Philosophy, 15(3), 321–341.

One of the most rigorous philosophical elaborations of Ubuntu as a moral theory. Metz argues that Ubuntu provides a coherent and distinctive moral framework — not merely a cultural sentiment — with implications for how we evaluate political and social arrangements. For AI ethics readers, the discussion of communal versus individual approaches to rights and obligations is directly relevant. Metz has continued this project in later work, including the 2011 article "Ubuntu as a Moral Theory and Human Rights in South Africa" (African Human Rights Law Journal).


13. Nakagawa, Keiichi. (2004). "Confucian Ethics and Contemporary Applied Ethics: Towards a Relational Ethical Theory." Journal of Applied Ethics, 22, 1–22.

Engages Confucian ethics in dialogue with contemporary applied ethics, arguing that relational thinking offers important correctives to individualist Western approaches. For AI ethics, the discussion of role-specific duties and the ethics of hierarchy is particularly relevant. Note that Confucian ethics is a vast and contested tradition; this article provides a useful academic introduction, but readers should be aware that multiple competing interpretations exist.


14. Carroll, Stephanie Russo, Ibrahim Garba, Oscar L. Figueroa-Rodríguez, Jarita Holbrook, Raymond Lovett, Simeon Materechera, Mark Parsons, Kay Raseroka, Desi Rodriguez-Lonebear, Robyn Rowe, Rodrigo Sara, Jennifer D. Walker, Jane Anderson, and Maui Hudson. (2020). "The CARE Principles for Indigenous Data Governance." Data Science Journal, 19(1), 43.

The primary source for the CARE Principles — Collective Benefit, Authority to Control, Responsibility, and Ethics — which provide a framework for Indigenous data governance. Written by an international team of Indigenous data sovereignty researchers, this paper argues that existing data principles (like FAIR) are insufficient for protecting Indigenous communities' interests in data. Essential reading for Section 3.8 on Indigenous ethics and for Exercise 3.18.


15. Gilligan, Carol. (1982). In a Different Voice: Psychological Theory and Women's Development. Harvard University Press.

The founding text of care ethics. Gilligan's argument that women's moral reasoning tends to center relationships and context rather than abstract rules was both a critique of Kohlberg's developmental psychology and a contribution to feminist ethics. Essential background for Section 3.7. Note that Gilligan's empirical claims about gender differences in moral reasoning have been both supported and contested in subsequent research; the philosophical contribution to care ethics stands independently of the empirical claims.


Ethics in Organizations and AI Governance

16. Jobin, Anna, Marcello Ienca, and Effy Vayena. (2019). "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence, 1(9), 389–399.

A systematic analysis of 84 AI ethics guidelines published by governments, companies, and civil society organizations, identifying both convergence (on principles like transparency, fairness, and accountability) and divergence (on implementation, enforcement, and priority). The paper reveals that most guidelines focus on aspirational principles without operational detail or enforcement mechanisms — a finding directly relevant to the ethics washing discussion in Section 3.4 and Case Study 3.2.


17. Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, Rashida Richardson, Jason Schultz, and Oscar Schwartz. (2018). AI Now Report 2018. AI Now Institute at New York University.

One of the most comprehensive annual reports on the social implications of AI, integrating technical, policy, and ethical analysis. The 2018 report, in particular, documents the gap between AI companies' ethical commitments and their organizational practices — providing empirical grounding for the virtue ethics and ethics washing discussions. The AI Now Institute reports are freely available online and updated annually, making them essential for current awareness in AI ethics.


18. Benjamin, Ruha. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.

Benjamin's book is not primarily a work of ethical theory, but it provides the most compelling empirical case for why the ethics washing critique is urgent. Through case studies of algorithmic systems in healthcare, criminal justice, and social services, Benjamin documents how technologically sophisticated systems can reproduce and entrench racial inequality while appearing neutral. The concept of the "New Jim Code" — discriminatory designs that encode inequalities into technical systems — is essential vocabulary for AI ethics. Essential reading for anyone working on fairness and discrimination in AI.


19. O'Neil, Cathy. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishers.

A highly accessible account of how mathematical models — including AI systems — encode values, amplify biases, and can cause serious harm to vulnerable populations. O'Neil's concept of a "Weapon of Math Destruction" (a model that is opaque, widespread, and damaging) provides a practical framework for identifying harmful AI applications. The book covers consequentialist failures — systems that optimize for the wrong metrics — and the distribution problem in concrete, compelling terms. Written for a general business and policy audience.


20. European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 on Artificial Intelligence (the EU AI Act). Official Journal of the European Union.

The EU AI Act is the most comprehensive AI regulatory framework enacted to date, operationalizing ethical principles — including transparency, accountability, non-discrimination, and human oversight — into law. Understanding the AI Act is essential for business professionals because it applies to AI systems placed on the market or put into use in the EU regardless of where the provider is located. The regulation distinguishes between prohibited AI practices (such as social scoring and, with narrow exceptions, real-time remote biometric identification in publicly accessible spaces), high-risk AI applications (requiring conformity assessment), and lower-risk applications subject to lighter transparency obligations. Available at eur-lex.europa.eu.


For Further Exploration

Podcasts and audio resources:

- "Philosophize This!" (podcast) — Episodes on Kant, Mill, Rawls, and Aristotle provide accessible introductions to the philosophical frameworks covered in this chapter
- "The AI Ethics Brief" (newsletter, Montreal AI Ethics Institute) — Current AI ethics research and policy, with regular engagement with philosophical frameworks
- "Your Undivided Attention" (podcast, Center for Humane Technology) — Focuses on the ethics of persuasion technology and attention economics, highly relevant to Section 3.2

Online resources:

- Stanford Encyclopedia of Philosophy (plato.stanford.edu) — Free, academically rigorous entries on all major ethical frameworks. The entries on "Consequentialism," "Deontological Ethics," "Virtue Ethics," and "The Original Position" are excellent starting points.
- Moral Machine (moralmachine.net) — The platform on which the Awad et al. experiment was conducted; still active. Running through several scenarios provides direct intuitive experience of the autonomous vehicle dilemmas discussed in Case Study 3.1.
- AI Now Institute Annual Reports (ainowinstitute.org) — Annual interdisciplinary analysis of AI's social implications, freely available.


This reading list prioritizes depth over breadth. Students are encouraged to engage fully with two or three texts rather than superficially with many. Items 1–5 (foundational ethical theory) and items 6–9 (applied AI ethics) are the highest priority for readers who are new to philosophy. Items 12–15 (non-Western and relational frameworks) are particularly recommended for students with backgrounds in business or technical fields, where these perspectives are systematically underrepresented.