Chapter 4: Further Reading and Resources

Annotated Bibliography — Stakeholders in the AI Ecosystem

The sources below are organized thematically. Each annotation describes the work's content, its significance, and the context in which it is best read.


Foundational Stakeholder Theory

1. Freeman, R. Edward. Strategic Management: A Stakeholder Approach. Pitman, 1984. (Reissued Cambridge University Press, 2010.)

The founding text of stakeholder theory in business ethics. Freeman argues that the purpose of a firm is to create value for all stakeholders — not solely shareholders — and develops frameworks for stakeholder identification, analysis, and engagement. The theoretical apparatus in Chapter 4 draws directly on Freeman's work. The 2010 Cambridge reissue includes a new preface reflecting on the theory's development and its ongoing relevance. Essential reading for anyone who wants to understand the intellectual lineage of stakeholder frameworks. Chapter 3 (on stakeholder analysis methodology) and Chapter 5 (on stakeholder strategy) are most directly relevant to the chapter's practical methodology sections.


2. Donaldson, Thomas, and Lee E. Preston. "The Stakeholder Theory of the Corporation: Concepts, Evidence, and Implications." Academy of Management Review 20, no. 1 (1995): 65–91.

An essential synthesis of the stakeholder theory literature that distinguishes three different uses of stakeholder concepts: descriptive (a description of how firms actually behave), instrumental (a prediction that firms that manage stakeholders well will perform better financially), and normative (an ethical argument that firms should create value for all stakeholders). This distinction matters practically: corporate stakeholder programs often invoke instrumental rationales ("stakeholder engagement is good for business") while the underlying ethical claim is normative ("stakeholder engagement is the right thing to do, regardless of whether it produces financial returns"). Understanding the difference helps in evaluating whether a particular engagement program is doing genuine ethics or sophisticated stakeholder management.


Power, Representation, and the Politics of AI Development

3. Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press, 2018.

One of the most important books on AI and social justice of the past decade. Eubanks investigates three cases (an automated eligibility system for public benefits in Indiana; a coordinated entry system for allocating housing resources to unhoused people in Los Angeles; and a predictive risk model used in child protective services in Allegheny County, Pennsylvania) and documents how these systems systematically harm low-income and vulnerable communities. The book is written for a general audience but maintains rigorous empirical standards. Particularly valuable for Chapter 4's discussion of affected communities as stakeholders and the "low power, high interest" dynamic.


4. Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press, 2019.

Benjamin develops the concept of the "New Jim Code" — the ways in which digital technologies, including AI, encode and reinforce racial hierarchy while presenting themselves as objective and race-neutral. The book is essential for understanding why demographic blindness in AI inputs does not produce demographic neutrality in outputs, and for understanding the structural relationship between the demographics of AI builders and the demographics of AI's victims. The introduction (which defines the "New Jim Code") and Chapter 3 ("Coded Exposure") are most directly relevant to Chapter 4. Benjamin's framework connects the representation problem in AI development to longer histories of racialized technology.


5. West, Sarah Myers, Meredith Whittaker, and Kate Crawford. "Discriminating Systems: Gender, Race, and Power in AI." AI Now Institute, 2019. Available at: ainowinstitute.org

A research report documenting the demographic composition of the AI industry and its consequences for AI development. Provides specific data on the underrepresentation of women and of Black and Latino workers in AI research and engineering, and analyzes the relationship between industry demographics and the character of the AI systems produced. The report predates the 2020–2021 departures of Timnit Gebru and Margaret Mitchell from Google, which later became emblematic of precisely the organizational challenges it describes for women of color working on AI ethics in the technology industry. Essential reading alongside Section 4.1's discussion of the representation problem.


6. Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks." Proceedings of the National Academy of Sciences 111, no. 24 (2014): 8788–8790.

The primary source for Case Study 4.2. The paper is brief (three pages) and technically accessible. Reading the original is recommended: it allows students to evaluate the authors' own framing of their methodology and findings, including their treatment of the consent issue in the supplementary materials (they note that the study was determined to be "consistent with Facebook's Data Use Policy" — which is the terms-of-service defense discussed in the case study). The "Editorial Expression of Concern" that PNAS published shortly after the paper appeared is also worth reading.


7. Solove, Daniel J. "Privacy Self-Management and the Consent Dilemma." Harvard Law Review 126, no. 7 (2013): 1880–1903.

A rigorous legal and philosophical analysis of why consent-based frameworks for privacy protection are structurally inadequate for governing data-driven systems. Solove argues that requiring individuals to manage their own privacy through consent choices imposes cognitive burdens that individuals cannot realistically bear, produces decisions that do not reflect individuals' actual values when confronted with specific practices, and fails to address collective harms that no individual consent can govern. The argument applies directly to the data subject consent issues raised in Chapter 4 and Case Study 4.2. Students interested in the theoretical underpinning of why TOS-based consent fails should read this alongside the case study.


Predictive Policing and Algorithmic Criminal Justice

8. Richardson, Rashida, Jason Schultz, and Kate Crawford. "Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice." New York University Law Review Online 94 (2019): 15–55.

The foundational academic paper on the "dirty data" problem in predictive policing. Richardson and colleagues document 13 police departments where systemic civil rights violations — including falsification of evidence, illegal stop-and-frisk practices, and racially discriminatory patrolling — contaminated the historical crime data used to train predictive policing algorithms. The paper provides the technical and empirical foundation for understanding why predictive policing systems trained on historical arrest data systematically reproduce discriminatory patterns. Essential reading alongside Case Study 4.1.


9. Stop LAPD Spying Coalition and Free Radical Collective. "Before the Bullet Hits the Body: Dismantling Predictive Policing in Los Angeles." 2018. Available at: stoplapdspying.org

A community-produced report documenting the deployment of PredPol and related surveillance systems in Los Angeles from the perspective of affected communities. This document provides a useful counterpoint to the LAPD's own framing of the technology: it documents community members' experiences, presents evidence of the geographic concentration of prediction-box activity in Black and Latino neighborhoods, and makes the case for abolishing predictive policing rather than reforming it. Reading this alongside the academic literature illustrates the difference between technical analysis and community knowledge — and why both are necessary for complete stakeholder analysis.


Stakeholder Analysis Methods

10. Grimble, Robin, and Man-Kwun Chan. "Stakeholder Analysis for Natural Resource Management in Developing Countries." Natural Resources Forum 19, no. 2 (1995): 113–124.

Stakeholder analysis methodologies were developed primarily in the natural resource management and development policy fields before being adopted in business strategy and, more recently, AI ethics. This paper provides a rigorous introduction to the methodology and its theoretical foundations, including the Power-Interest matrix that Chapter 4 uses as its organizing framework. Reading this alongside the chapter's practical methodology section situates AI ethics stakeholder analysis within its intellectual lineage and reveals aspects of the methodology (particularly on dynamic stakeholder relationships and coalition formation) that the chapter does not have space to cover.


11. Metcalf, Jacob, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish. "Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts." Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2021): 735–746.

An empirical study of how algorithmic impact assessments — structured analyses of AI systems' potential effects on stakeholders — are designed and used in practice. The paper finds that the construction of an assessment is itself a political and organizational process that shapes what counts as a relevant impact, who counts as an affected stakeholder, and what mitigation measures are considered acceptable. A valuable read for understanding the gap between stakeholder analysis methodology as described in textbooks and as practiced in organizations.


Regulatory and Governance Frameworks

12. European Commission. "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)." Official Journal of the European Union, 2024.

The full text of the EU AI Act. For chapter purposes, the most relevant provisions are: Article 5 (prohibited AI practices), Article 6 (classification of high-risk AI systems), Articles 9–15 (requirements for high-risk AI systems), and Article 26 (obligations of deployers of high-risk AI systems). The Act is long (over 150 pages with recitals) but the operative provisions are accessible. Most business professionals will work with summaries, but direct engagement with the primary text is valuable for understanding the scope and limits of the regulatory framework.


13. Reisman, Dillon, Jason Schultz, Kate Crawford, and Meredith Whittaker. "Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability." AI Now Institute, 2018. Available at: ainowinstitute.org

A practical framework for algorithmic impact assessments (AIAs) — systematic pre-deployment analyses of how AI systems will affect different stakeholder groups — developed for public-sector use. The framework draws on environmental impact assessment methodology and adapts it for AI governance. Particularly valuable for Chapter 4's discussion of stakeholder engagement in practice and for the exercises that require designing governance frameworks. The AI Now Institute's website makes this report available as a free download.


14. OECD. "OECD Principles on AI." 2019. Available at: oecd.ai/en/ai-principles

The OECD AI Principles are the most widely adopted international framework for AI governance, endorsed by 46 countries plus the G20. They establish five principles for responsible stewardship of trustworthy AI: inclusive growth, sustainable development, and wellbeing; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability. The principles have been influential in shaping national AI policies. Reading them alongside the chapter's discussion of international bodies as stakeholders (Section 4.4) provides context for understanding how international norms develop and propagate, even without binding enforcement mechanisms.


Journalistic Investigations

15. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. "Machine Bias." ProPublica, May 23, 2016. Available at: propublica.org

The landmark ProPublica investigation of the COMPAS recidivism risk assessment algorithm used in criminal sentencing across the United States. The investigation found that the algorithm was twice as likely to falsely flag Black defendants as future criminals and twice as likely to falsely clear white defendants. It sparked a major academic and policy debate about algorithmic fairness, the definition of fair risk assessment, and the appropriate use of AI in criminal justice. The original article is essential reading for any AI ethics course, and the subsequent debate — particularly the exchange among Northpointe (the algorithm's developer), ProPublica, and academic researchers over whether competing fairness criteria can be satisfied simultaneously — is valuable for understanding the contested nature of fairness metrics. All materials are available free at propublica.org/series/machine-bias.


16. Dastin, Jeffrey. "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters, October 10, 2018.

A news report on Amazon's internal development and eventual abandonment of an AI hiring tool that systematically downgraded resumes from women. The tool had been trained on historical hiring data reflecting Amazon's own male-dominated workforce and had learned to penalize resumes that included words like "women's" (as in "women's chess club") and to downgrade graduates of all-women's colleges. The case is one of the most widely cited examples of how AI systems learn to reproduce historical discrimination from biased training data. It directly illustrates the "dirty data" and feedback loop dynamics discussed in Chapter 4 and Case Study 4.1, applied to the employment context rather than policing.


17. Hill, Kashmir. "The Secretive Company That Might End Privacy as We Know It." The New York Times, January 18, 2020.

The investigation that brought Clearview AI's facial recognition database to widespread public attention. Clearview had scraped billions of photographs from social media platforms without consent and built a facial recognition system sold primarily to law enforcement agencies. The story illustrates the "invisible stakeholder" dynamic at its most extreme: billions of people whose photographs were scraped and used to train a surveillance system had no knowledge of their status as data subjects and no ability to consent or object. The subsequent regulatory and legal actions against Clearview (a settlement with the ACLU under Illinois's biometric privacy law, GDPR enforcement in Europe, bans in several US jurisdictions) are valuable case material for understanding how accountability mechanisms can be mobilized for data subject protection.


Landmark Studies and Broader Analyses

18. Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research 81 (2018): 77–91.

The study that documented large accuracy disparities in commercial facial recognition and gender classification systems across demographic groups, with the worst performance on dark-skinned women (error rates of up to 34.7 percent, compared with less than 1 percent for light-skinned men). This paper is foundational for understanding the representation problem in AI development — the connection between who builds AI systems and whose experiences those systems are designed to handle — and for understanding the empirical methodology of algorithmic auditing. The paper is freely available and accessible to non-technical readers. Joy Buolamwini's TED talk ("How I'm fighting bias in algorithms") provides an accessible introduction for students who prefer video.


19. Taddeo, Mariarosaria, and Luciano Floridi. "The Debate on the Moral Responsibilities of Online Service Providers." Science and Engineering Ethics 22, no. 6 (2016): 1575–1603.

A philosophical analysis of how responsibility should be attributed in multi-party online service ecosystems — including the relationships between platform providers, third-party service providers, and end users that characterize AI value chains. The paper applies traditional concepts of moral responsibility (foreseeability, causation, control) to distributed digital systems and develops a framework for thinking about shared and graduated responsibility. Useful for students who want to go deeper on the philosophical foundations of the accountability questions raised in Chapter 4, and as preparation for Chapter 18 (Who Is Responsible).


20. Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs, 2019.

A comprehensive and ambitious analysis of the economic logic that drives data collection and behavioral prediction at scale by major technology platforms. Zuboff's central argument is that "surveillance capitalism" — the extraction of behavioral data as raw material for predicting and modifying human behavior in service of advertising markets — represents a fundamental challenge to human autonomy and democratic society. While the scope of the book extends well beyond Chapter 4's territory, Chapters 2-4 (on the development of surveillance capitalism at Google) and Chapter 11 (on behavioral modification) are directly relevant to the chapter's discussion of data subjects, the principal-agent problem in platform AI, and the relationship between commercial incentives and the treatment of users as data sources rather than as stakeholders with rights.


All sources listed in this Further Reading are available through university library systems. Items marked with a URL are freely available online without subscription. Students in jurisdictions where access to any of these sources is restricted should contact the course librarian for assistance.