Chapter 1: Further Reading and Resources

What Is AI Ethics? Framing the Challenge


This annotated bibliography provides curated resources for readers who wish to deepen their understanding of the topics introduced in Chapter 1. Sources are organized by type and annotated with notes on their content and their relevance to the chapter's themes.


Books

Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.

Crawford, a senior principal researcher at Microsoft Research and co-founder of the AI Now Institute, offers one of the most comprehensive and original critiques of AI as a sociotechnical system. The book traces the physical supply chains behind AI — the lithium mines, the data centers, the warehouse workers — and argues that AI is fundamentally a system of extraction and power, not merely a set of algorithms. Particularly relevant to this chapter's discussions of environmental impact, power concentration, and who bears the costs of AI development. Crawford's framing of AI as infrastructure rather than magic is essential context for anyone approaching AI ethics seriously.


Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press, 2018.

A rigorous and deeply reported examination of how automated decision systems are used in American social services — welfare eligibility, child welfare, public housing — with particular focus on the harms these systems impose on low-income people. Eubanks conducted extensive fieldwork in Indiana, Pennsylvania, and Los Angeles, speaking with people directly affected by algorithmic assessments. This book is indispensable for understanding the SyRI case and analogous systems in the United States. It makes the abstract concern about algorithmic discrimination concrete through specific, documented stories of real people.


Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press, 2018.

Noble, a professor at UCLA, documents how search engine algorithms have encoded and amplified racial and gender bias, particularly through the results returned for searches related to Black women. The book is broader than its title suggests: it is a sustained argument that the design of information systems reflects the values and perspectives of those who build them, and that the absence of diverse perspectives in technology development produces systems that systematically disadvantage already marginalized groups. Directly relevant to this chapter's discussions of bias, diversity, and who is at the table.


O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishers, 2016.

O'Neil, a mathematician and data scientist, provides an accessible and compelling survey of algorithmic systems that cause harm across multiple domains — college rankings, criminal risk assessment, credit scoring, online advertising, and more. She introduces the concept of a "weapon of math destruction" — an algorithm that is opaque, widely used, and damages the interests of those it evaluates — and applies it across a range of sectors. This is one of the founding texts of mainstream AI ethics discourse and remains essential reading for understanding the breadth of concerns the field addresses.


Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015.

Pasquale, a law professor, examines the opacity of the algorithmic systems that govern finance and information — particularly credit scoring, search engines, and social media ranking. The book makes a sustained legal and ethical argument for greater transparency requirements, anticipating many of the debates that would eventually produce the GDPR's right to explanation and the EU AI Act's transparency obligations. Its analysis of why opacity is not merely an inconvenience but a structural mechanism for maintaining power remains relevant and underappreciated.


Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press, 2019.

Benjamin, a professor of African American Studies at Princeton, introduces the concept of the "New Jim Code" — the way in which new technologies can reinforce and reproduce racial hierarchies under the cover of scientific objectivity and technical neutrality. Drawing on examples from healthcare, criminal justice, and consumer technology, she argues that the default settings of many AI systems encode racial inequity. The book's analytical framework — attending to how discrimination is reproduced through apparently neutral technical systems — is essential for understanding the bias and fairness discussions throughout this textbook.


Academic Articles

Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 1st Conference on Fairness, Accountability and Transparency (FAT*), Proceedings of Machine Learning Research, vol. 81, 2018, pp. 77–91.

This landmark empirical paper documents significant disparities in the accuracy of commercial facial recognition systems from IBM, Microsoft, and Face++ across gender and skin tone, with particularly poor performance for darker-skinned women. The paper is a model of rigorous empirical AI ethics research: it establishes clear findings, uses a transparent methodology, and has direct policy implications. Buolamwini's "Gender Shades" project directly precipitated Microsoft's and IBM's revision of their facial recognition systems and contributed to IBM's eventual decision to exit the facial recognition market. Directly relevant to this chapter's discussion of bias and fairness.


Barocas, Solon, and Andrew D. Selbst. "Big Data's Disparate Impact." California Law Review, vol. 104, no. 3, 2016, pp. 671–732.

This influential law review article examines the mechanisms by which machine learning systems produce discriminatory outcomes and analyzes how existing antidiscrimination law — particularly Title VII's disparate impact doctrine — applies to algorithmic discrimination. The article identifies five distinct pathways through which AI systems can produce discriminatory outcomes even without discriminatory intent, making it essential reading for understanding the relationship between technical design choices and legal liability. Relevant to this chapter's discussions of bias, accountability, and regulatory exposure.


Ribeiro, Manoel Horta, Raphael Ottoni, Robert West, Virgílio Almeida, and Wagner Meira Jr. "Auditing Radicalization Pathways on YouTube." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT), 2020, pp. 131–141.

The paper described in Case Study 2 of this chapter. The researchers audited YouTube's recommendation pathways across channels spanning from mainstream political commentary to the "alt-right," documenting empirical evidence of a recommendation-driven migration path toward increasingly extreme content. Although the causal interpretation of these findings remains contested, the paper is the most rigorous empirical investigation of the radicalization pipeline hypothesis to date and directly influenced YouTube's policy responses. Essential context for understanding the optimization trap and engagement-based recommendation systems.


Jobin, Anna, Marcello Ienca, and Effy Vayena. "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence, vol. 1, no. 9, 2019, pp. 389–399.

This systematic review analyzed 84 AI ethics guidelines published by governments, technology companies, and international organizations between 2016 and 2019. The authors identified substantial convergence around five principles (transparency, justice/fairness, non-maleficence, responsibility, and privacy) but considerable divergence in how those principles are interpreted and operationalized. The paper provides an essential empirical foundation for understanding the global variation theme of this book and for critically evaluating whether published AI ethics principles translate into practice — a central concern of the ethics washing discussion.


Eubanks, Virginia, and Rashida Richardson. "What We Talk About When We Talk About Artificial Intelligence." Data & Society Points, September 2019.

A shorter, accessible piece by two leading scholars that addresses the definitional problems at the heart of AI ethics discourse — including the tendency to conflate very different phenomena under the "AI" label in ways that obscure more than they illuminate. The piece is particularly useful for the Section 1.1 discussion of what AI ethics is and is not, and for students who want a rigorous but readable introduction to the field's definitional debates.


Reports

AI Now Institute. AI Now Report 2019. New York University, 2019.

The annual reports produced by the AI Now Institute are among the most comprehensive and rigorous annual surveys of AI policy, ethics, and social impact. The 2019 report covers issues including algorithmic accountability, labor impacts, the use of AI in public services, and the governance of facial recognition. The Institute's reports, produced annually since 2016 by co-founders Kate Crawford and Meredith Whittaker and their colleagues, have become essential reference points in AI ethics policy discourse. Available free at ainowinstitute.org.


OECD. OECD Principles on AI. Organisation for Economic Co-operation and Development, 2019.

The OECD's AI Principles were the first intergovernmental standard on AI, adopted in 2019 by OECD members and subsequently endorsed by additional countries. The principles address inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability. While non-binding, the principles have substantially influenced national AI strategies and regulatory frameworks. Essential context for the global variation theme and for understanding the normative landscape in which national AI regulation has developed.


UNESCO. Recommendation on the Ethics of Artificial Intelligence. United Nations Educational, Scientific and Cultural Organization, 2021.

UNESCO's Recommendation on AI Ethics is the first global AI ethics framework adopted by all member states of a major international organization (193 states). The Recommendation covers values and principles spanning human rights, social justice, environmental protection, data governance, and inclusive AI development. It is particularly notable for its attention to gender equality, indigenous peoples' rights, and the interests of the Global South — areas that have received insufficient attention in AI ethics frameworks produced by wealthy-country organizations. Essential reading for the global variation and diversity themes.


Raji, Inioluwa Deborah, et al. "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT), 2020.

This paper, produced by a team including researchers from Google and the Partnership on AI, proposes a framework for internal AI auditing — the process by which organizations evaluate their own AI systems for ethical and safety concerns. The framework draws on practices from financial auditing and product safety review and adapts them to AI contexts. Particularly relevant to the institutional dimension of AI ethics and the practical question of how organizations can move beyond principles to practice. Available at https://arxiv.org/abs/2001.00973.


Journalism

Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, May 23, 2016.

The ProPublica investigation into COMPAS, the criminal risk assessment algorithm, that documented the system's racially disparate error rates — higher false positive rates for Black defendants, higher false negative rates for white defendants. This investigation launched one of the most productive debates in AI ethics, prompted significant academic research on competing definitions of algorithmic fairness, and remains one of the most important pieces of AI ethics journalism produced. The investigation methodology, the company's response, and the subsequent academic debate are all instructive. Available at propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.


Metz, Cade, and Daisuke Wakabayashi. "Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I." The New York Times, December 3, 2020.

This piece covers the departure of Timnit Gebru from Google and the broader internal dynamics that shape how AI ethics functions (and malfunctions) inside large technology companies. It is an important piece of business journalism for understanding the structural pressures on internal AI ethics practitioners — the precarious institutional position of people who raise ethical concerns inside organizations oriented toward speed and competitive advantage. The Gebru episode is relevant to the chapter's discussion of technologists within companies and the conditions required for internal ethics work to be genuinely effective.


Dwoskin, Elizabeth. "The Dutch Tried to Root Out Welfare Fraud With an Algorithm. It Got Messy." The Washington Post, October 26, 2021.

A detailed journalistic account of both the SyRI case and the broader childcare benefit scandal in the Netherlands, providing context that complements the academic and legal analysis available in other sources. Dwoskin's reporting places both cases in the context of a broader policy debate in the Netherlands about automated government decision-making and provides accessible accounts of how the systems operated and how they harmed specific individuals. Essential context for Case Study 1.


Online Resources and Tools

Partnership on AI (partnershiponai.org)

A multi-stakeholder organization that brings together AI researchers, technology companies, civil society organizations, and academic institutions to develop best practices and advance the public understanding of AI. PAI's website includes working groups, case studies, and resources on topics including fairness, transparency, accountability, worker rights, and AI safety. Their Tenets and their ABOUT ML project (documenting machine learning systems) are particularly useful reference points.


AI Now Institute (ainowinstitute.org)

The AI Now Institute at New York University is one of the leading academic research centers on the social implications of AI. The Institute's website provides access to all of its annual reports, policy briefs, and research papers, as well as resources for advocates and practitioners. For readers interested in the governance and accountability dimensions of AI ethics — who makes decisions about AI, who holds power, who is held accountable — this is an essential ongoing resource.


Algorithm Watch (algorithmwatch.org)

A non-profit research and advocacy organization based in Germany that investigates the social effects of algorithmic decision-making in Europe and beyond. Algorithm Watch's website includes policy analysis, investigative reporting, and case studies from a specifically European perspective. Their coverage of the SyRI case, the EU AI Act's development, and the DSA's implementation is among the most detailed available in English. Particularly valuable for readers engaged with the global variation theme and with European AI governance specifically.


Note on citations: AI ethics is a rapidly evolving field; publication dates are noted above, and readers should seek more recent sources for developments since those dates. The ACM Conference on Fairness, Accountability, and Transparency (FAccT) proceedings are an important annual source of new empirical and theoretical research and are freely available at dl.acm.org.