Further Reading
Chapter 34: Ethics in Automated Decision-Making
Essential Reading
O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishers. The most accessible and influential treatment of the harms caused by algorithmic decision-making systems in contexts ranging from criminal justice to education to finance. O'Neil's concept of "weapons of math destruction" — opaque, large-scale models with damaging feedback loops — provides the framework that much subsequent regulation implicitly draws on. Essential for any compliance professional working with algorithmic systems.
Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press. A deeply researched account of how algorithmic decision-making systems specifically harm economically marginalized communities. Eubanks traces case studies across welfare eligibility, homeless services, and child protective services — providing the human context for the abstract harms discussed in this chapter. Essential background for understanding why scale and demographic disparity matter ethically.
Sandel, M.J. (2012). What Money Can't Buy: The Moral Limits of Markets. Farrar, Straus and Giroux. An accessible treatment of the ethical limits of market reasoning, drawing on material from Sandel's philosophy courses. Particularly relevant for understanding why consequentialist (efficiency-maximizing) reasoning is insufficient for ethical analysis of financial systems. Sandel's argument that markets "crowd out" moral reasoning is directly applicable to the use of optimized algorithms to make decisions that were previously made through human judgment.
For Practitioners
Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society. A systematic academic review of the ethical issues raised by algorithmic decision-making — covering opacity, bias, autonomy, and the conditions under which algorithms can be held accountable. Available freely online. Provides the academic framework underlying much regulatory guidance.
Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology. The foundational paper on counterfactual explanations and their relationship to GDPR Article 22. Argues that the right to explanation under GDPR is best understood as a right to counterfactual explanations — "what would need to change for the decision to be different?" Directly applicable to the design of adverse action explanation systems.
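The counterfactual framing lends itself to a simple illustration: search for the smallest tried change to an input that flips the model's decision, and report that change as the explanation. The sketch below is purely illustrative — the toy `score` model, its weights, and the `counterfactual` helper are assumptions for demonstration, not anything drawn from Wachter et al.'s paper or from any real credit model.

```python
def score(applicant):
    # Toy linear credit model; the weights are illustrative only.
    return 0.4 * applicant["income"] / 100_000 + 0.6 * (1 - applicant["debt_ratio"])

def counterfactual(applicant, feature, steps, threshold=0.5):
    """Return the smallest tried change to `feature` that flips the
    decision to approve, or None if no tried step is sufficient."""
    for delta in sorted(steps):
        candidate = dict(applicant)
        candidate[feature] += delta
        if score(candidate) >= threshold:
            return {feature: delta}
    return None

applicant = {"income": 40_000, "debt_ratio": 0.6}
# "What would need to change for the decision to be different?"
print(counterfactual(applicant, "income", [10_000, 20_000, 50_000]))
# → {'income': 50000}
```

The output is exactly the kind of statement an adverse action notice can carry ("your application would have been approved with an income of $90,000") without disclosing the model's internals — which is the paper's central point.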
Floridi, L., et al. (2018). AI4People — An Ethical Framework for a Good AI Society. Minds and Machines. An influential ethical framework that informed European AI policy development, including the work of the European Commission's High-Level Expert Group on AI. Covers five ethical principles — beneficence, non-maleficence, autonomy, justice, and explicability — and their application to AI systems. Background reading for the EU AI Act's ethical foundations.
Philosophical Foundations
Mill, J.S. (1863). Utilitarianism. (Available in many modern editions.) The foundational text of consequentialist ethics. Understanding consequentialism properly requires engaging with Mill's original argument, not just a summary. His discussion of the "greatest happiness principle" and its relationship to justice is directly relevant to the aggregate-vs.-specific-harm tensions discussed in this chapter.
Kant, I. (1785). Groundwork for the Metaphysics of Morals. (Available in many modern editions.) The foundational text of deontological ethics. Kant's "categorical imperative" — act only according to principles you could universalize; treat persons as ends in themselves, never merely as means — underlies most rights-based analysis of algorithmic decision-making. The principle that people should not be treated as mere data points is a direct application of Kantian ethics.
Aristotle. Nicomachean Ethics. Books II and VI. (Available in many modern editions.) The foundational text of virtue ethics. Aristotle's account of practical wisdom (phronesis) in Book VI — the ability to discern the right action in particular circumstances — is directly relevant to the judgment that compliance professionals must exercise. The virtue ethics framework is not about following rules but about developing the capacity to make good judgments.
Regulatory Primary Sources
| Document | Jurisdiction | Key Ethical Relevance |
|---|---|---|
| EU AI Act (Regulation 2024/1689) | EU | Prohibited AI practices; high-risk requirements; human oversight |
| ICO guidance: AI and data protection | UK | Fairness requirements; data minimization; automated decisions |
| FCA Consumer Duty (PS22/9) | UK | Good outcomes for all customers; vulnerable customer considerations |
| NIST AI Risk Management Framework | US | Govern function; trustworthy AI characteristics |
| CFPB Circular 2022-03 on ECOA | US | Adverse action explanation for complex models |
| OECD Principles on Artificial Intelligence | International | Non-binding principles adopted by 40+ countries |
| IEEE Ethically Aligned Design | International | Technical standards for ethical AI design |
For the Curious
Broussard, M. (2018). Artificial Unintelligence: How Computers Misunderstand the World. MIT Press. A technically informed critique of AI's actual capabilities vs. its public image, with specific attention to the gap between what algorithms can do and what decision-making requires. Relevant for understanding the limits of automation in judgment-intensive compliance contexts.
Noble, S.U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. An empirically detailed account of how algorithmic systems encode and amplify racial biases — in the context of search engines but with direct implications for financial compliance systems. Relevant for understanding how "neutral" features become discriminatory proxies.
Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press. A provocative analysis of how technology encodes racial bias and serves as a vehicle for discrimination while appearing objective. Benjamin's concept of the "New Jim Code" — discriminatory technology disguised as neutral and efficient — is directly applicable to the credit scoring and AML scenarios discussed in this chapter.
Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs. An analysis of how behavioral data is used to predict and influence human behavior for commercial purposes. Relevant to Case C in this chapter (surveillance system expansion) and more broadly to questions about the ethics of data collection in financial services.
Online Resources
AI Now Institute (ainowinstitute.org): Academic research on the social implications of AI. Publishes annual reports on AI accountability, bias, and governance. Free access.
Partnership on AI (partnershiponai.org): Multi-stakeholder consortium (academics, civil society, tech companies) producing practical resources on AI ethics. Case studies and frameworks relevant to financial services AI governance.
Algorithmic Justice League (ajl.org): Research and advocacy on bias in facial recognition and other AI systems. Primary source for demographic performance disparities in biometric systems (relevant to KYC chapter). Free resources.
Data & Society (datasociety.net): Research on the social and cultural implications of data technology. Relevant research on algorithmic accountability, automated decision-making, and vulnerable populations.