Chapter 2: Further Reading and Resources
Chapter 2 | AI Ethics for Business Professionals
All citations are to real, published works. Annotations are provided to guide selection based on reader background, interest, and time available.
Foundational Texts
1. Turing, Alan M. "Computing Machinery and Intelligence." Mind 59, no. 236 (1950): 433–460.
The original paper, and still essential reading. Turing's imitation game is only the starting point; the real substance is his systematic engagement with objections to machine intelligence. Available freely online through Oxford Journals. Business readers who have heard of the Turing Test but never read the original will find it more philosophically nuanced, more historically situated, and more directly relevant to current AI debates than they expect. Read particularly the "Learning Machines" section, which foreshadows machine learning and its ethical implications.
2. Wiener, Norbert. The Human Use of Human Beings: Cybernetics and Society. Boston: Houghton Mifflin, 1950. (Second edition, 1954.)
Arguably the first AI ethics book, written by a mathematician and engineer who grasped the social implications of feedback systems before AI was named as a field. Wiener writes with unusual clarity about the alignment problem (though he does not use that term), about automation and labor, and about the conditions under which intelligent machines benefit or harm human societies. The second edition is preferred; it is more accessible than the first. Essential background for any serious engagement with AI ethics history. More readable than its vintage might suggest.
3. Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman, 1976.
Written by the creator of ELIZA, this is a sustained argument that human beings' tendency to anthropomorphize computers — to attribute understanding, empathy, and moral judgment to systems that have none — poses serious social risks that the AI research community was not adequately addressing. Weizenbaum's experience watching users of ELIZA form emotional attachments to a pattern-matching program shaped his concern. Chapters 1–3 and 6–7 are most directly relevant to AI ethics; the technical material in between can be skimmed. The book remains in print and is cited extensively in the human-computer interaction literature.
AI History and Governance
4. McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. 2nd ed. Natick, MA: A.K. Peters, 2004.
The standard popular history of AI, comprehensive and readable. McCorduck traces the field from its philosophical antecedents through the development of expert systems and the AI winters to the state of the field at the time of the second edition's publication. Particularly valuable for understanding the social and cultural context of AI research — the personalities, institutional dynamics, and public narratives that shaped the field's development. Business readers who want a thorough historical foundation without engaging primary sources will find this the most efficient path.
5. Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking, 2019.
Russell is one of the field's leading researchers, and this book offers a technically grounded account of why getting AI to do what we actually want — rather than what we specify — is genuinely hard. Russell's treatment of the alignment problem is the most readable technically informed account available. The book also traces AI history through an ethics lens and argues for a specific technical approach to the alignment challenge: machines that remain uncertain about human preferences, learn those preferences from human behavior, and defer to people as a result. Essential for understanding the technical dimensions of AI ethics as a field, explained in language that non-engineers can follow.
Algorithmic Bias and Accountability
6. O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.
O'Neil, a mathematician and former hedge fund quant, catalogs the ways that algorithmic systems — in credit, employment, education, criminal justice, and elsewhere — embed and amplify social inequality behind a facade of mathematical objectivity. The book predates the specific bias reckoning documented in Section 2.4 of this chapter but anticipates it with considerable precision. Readable, concrete, and directly applicable to business contexts. One of the most influential popular AI ethics texts of the 2010s and still among the most practically useful.
7. Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press, 2018.
Eubanks examines how automated decision systems are used in public benefit programs — child welfare, welfare eligibility, homeless services — and how they function to intensify surveillance and restrict access for people living in poverty. Her work is important because it documents AI ethics failures in government systems (as opposed to corporate systems) and because it centers the experiences of people directly affected by algorithmic decision-making. Essential reading for anyone working in government, healthcare, or social services AI deployment.
8. Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." In Proceedings of the 1st Conference on Fairness, Accountability and Transparency, edited by Sorelle A. Friedler and Christo Wilson, 77–91. PMLR 81, 2018.
The original Gender Shades paper, available freely through the Proceedings of Machine Learning Research. Essential technical reading that is accessible to non-technical readers. The paper's methodology — constructing a balanced dataset across skin tone and gender to enable valid comparison of error rates — is itself a contribution to how bias evaluation should be done. The paper's findings are described in this chapter; reading the original gives a sense of how rigorous empirical AI ethics research is conducted. Gebru and Buolamwini have both continued publishing important work; following their current research is recommended.
9. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, May 23, 2016.
The original investigative article documenting the COMPAS recidivism prediction tool's racial disparities. Freely available online. Essential primary source reading; the article is carefully written, technically responsible, and accessible to non-technical readers. Reading this alongside the Northpointe rebuttal (also freely available online) gives a concrete sense of how bias disputes are conducted in practice. The methodological debate that followed this article — about which fairness criteria should apply — is documented in subsequent academic work by Chouldechova, Kleinberg et al., and others.
AI Ethics Frameworks and Institutions
10. Jobin, Anna, Marcello Ienca, and Effy Vayena. "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence 1 (2019): 389–399.
The systematic analysis of 84 AI ethics principles documents referenced in Section 2.5. The paper identifies both the apparent consensus (five common principles) and the deep disagreements concealed beneath it. Essential reading for anyone working on organizational AI ethics governance: it demonstrates why adopting principles is easier than implementing them and provides a framework for evaluating what any given principles document actually commits its issuer to. The full paper requires institutional access, but a preprint version is available through arXiv.
11. Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, Rashida Richardson, Jason Schultz, and Oscar Schwartz. "AI Now Report 2018." AI Now Institute, New York University, 2018.
One of the AI Now Institute's annual reports, which provide systematic documentation of AI ethics failures, policy developments, and governance gaps across sectors. The 2018 report is particularly strong on employment, healthcare, criminal justice, and welfare — areas where AI deployment was actively causing harm at the time of writing. The AI Now Institute reports (available freely at ainowinstitute.org) are among the most useful ongoing resources for business professionals who want to stay current on AI ethics developments.
12. Crawford, Kate, and Trevor Paglen. "Excavating AI: The Politics of Images in Machine Learning Training Sets." September 19, 2019. https://excavating.ai.
A research investigation into the ImageNet dataset documenting the cultural assumptions, biases, and dehumanizing classifications embedded in one of the most widely used AI training datasets. The piece is written accessibly for non-technical audiences and combines technical investigation with cultural analysis. Essential reading for understanding how training data is not neutral, how the labor of data curation encodes values, and how the choices made in building training datasets shape the systems trained on them.
Labor, Data, and AI's Human Infrastructure
13. Gray, Mary L., and Siddharth Suri. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt, 2019.
The definitive scholarly treatment of the hidden human labor powering AI and technology platforms. Gray and Suri combine ethnographic research — years of interviews and observation with on-demand platform workers — with structural analysis of the labor market dynamics that make ghost work economically attractive to technology companies. The book provides both the human stories that make abstract ethical concerns concrete and the systemic analysis that connects individual experiences to broader patterns. Essential reading for anyone involved in AI procurement, supply chain management, or labor policy.
14. Hao, Karen. "How OpenAI Is Trying to Make ChatGPT Safer and Less Biased." MIT Technology Review, February 2, 2023; and follow-up reporting on Sama content moderation, 2022.
Hao's reporting in MIT Technology Review on the content moderation labor that makes AI safety systems function — including the specific investigation of conditions for Sama workers in Nairobi reviewing content for OpenAI — is the most important primary source journalism on this topic. MIT Technology Review is a subscription publication, but Hao's AI coverage is widely cited and key articles are often accessible through institutional subscriptions. For business professionals who read only one journalistic source on AI ethics, Hao's body of work in MIT Technology Review is the recommendation.
Regulation and Governance
15. Diakopoulos, Nicholas. "Algorithmic Accountability: Journalistic Investigation of Computational Power Structures." Digital Journalism 3, no. 3 (2015): 398–415.
An early and influential academic article that coined and defined "algorithmic accountability" as both a journalistic and governance concern. Diakopoulos argues that algorithms making consequential decisions about people should be subject to the same accountability demands as other decision-making power structures. The article is technically accessible and provides a framework for thinking about what accountability for algorithmic systems requires. Important background for understanding the field that grew around this concept in subsequent years.
16. European Commission. "Ethics Guidelines for Trustworthy AI." High-Level Expert Group on Artificial Intelligence. Brussels, April 8, 2019.
The EU's official AI ethics guidelines, freely available on the European Commission website. Represents one of the most developed governance frameworks produced by a major jurisdiction and is directly relevant to any organization operating in or serving European markets. Particularly useful for the "Assessment List for Trustworthy AI" (ALTAI), which operationalizes the guidelines into specific questions organizations should ask about their AI systems. Business professionals implementing AI ethics governance programs will find ALTAI a useful starting checklist, with the caveat that checklists are not substitutes for substantive ethical judgment.
17. Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press, 2015.
Pasquale, a legal scholar, argues that the opacity of algorithmic systems — used in finance, healthcare, and information management — is a governance crisis that prevents meaningful accountability. His analysis of the tension between transparency (necessary for accountability) and intellectual property protection (claimed by corporations) provides an essential legal and institutional framework for AI ethics. The book predates many of the specific AI ethics cases discussed in this chapter but anticipates their governance dimensions with notable foresight.
18. Reinhart, Carmen M., and Kenneth S. Rogoff. This Time Is Different: Eight Centuries of Financial Folly. Princeton: Princeton University Press, 2009.
Not an AI book, but essential context for the "this time is different" argument that recurs in every phase of AI development. Reinhart and Rogoff document how financial crisis after financial crisis was preceded by confident assertions, made by sophisticated actors, that the conditions producing past crises no longer applied. Their ironic title names the syndrome, which appears in analogous form throughout AI development. Business professionals who want to understand why claims that AI is fundamentally different from previous technologies and previous AI waves deserve skepticism will benefit from this historical analysis of how similar arguments have functioned in other domains.
This reading list reflects sources available through 2024. The field of AI ethics moves rapidly; readers are encouraged to supplement this list with current annual reports from the AI Now Institute, current proceedings from the ACM FAccT conference, and current investigative coverage in MIT Technology Review, The Guardian, and ProPublica.