Chapter 5: Further Reading and Resources

Annotations focus on accessibility, relevance to business professionals, and practical application. Sources are organized by topic area. All sources were publicly available as of the chapter's writing date.


Foundational Business Case Arguments

1. "Artificial Intelligence — the Next Digital Frontier?" McKinsey Global Institute (2017) Annotation: While somewhat dated in its AI market projections, this report remains a foundational document for understanding how AI investment intersects with business performance. The sections on organizational factors that differentiate AI leaders from laggards — including data governance, talent investment, and leadership engagement — are directly relevant to AI ethics as a business practice. Business professionals will find the ROI framing familiar and accessible. Freely available at the McKinsey Global Institute website.

2. Deloitte Insights, "State of AI in the Enterprise" (published annually) Annotation: Deloitte's annual survey of enterprise AI adoption is among the most comprehensive practitioner-focused research on AI in business. Recent editions have included dedicated sections on AI ethics and governance as factors in AI deployment success. The survey data on the business impacts of AI failures — including reputational and regulatory consequences — provides quantitative grounding for the arguments in Section 5.2. Freely available at Deloitte Insights.

3. IBM Institute for Business Value, "Trust and Transparency in AI" (2021) Annotation: IBM's research arm has produced a series of reports on the business dimensions of AI trust. This report, drawing on surveys of more than 5,000 executives globally, documents the relationship between AI transparency practices and customer trust, enterprise sales outcomes, and regulatory standing. Directly supports the arguments in Section 5.6. Note that IBM is a vendor in this space, and the report should be read with awareness of that positioning. Freely available at IBM Institute for Business Value.


Regulatory and Legal Frameworks

4. Federal Trade Commission, "Aiming for Truth, Fairness, and Equity in Your Company's Use of AI" (2021) Annotation: This FTC guidance document provides a concise, authoritative account of the consumer protection and anti-discrimination legal frameworks that apply to AI systems under existing US law. Written for a business audience, it is accessible and practical. It addresses the FTC's interpretation of Section 5 of the FTC Act as applied to AI, and identifies the categories of AI practice that raise the most significant consumer protection concerns. Essential reading for any organization deploying AI in consumer-facing contexts. Freely available at ftc.gov.

5. EU AI Act — Official Text (2024) Annotation: The complete text of the EU AI Act, available from the European Commission's website, is the definitive primary source for understanding the requirements that apply to AI systems deployed in or affecting EU residents. Business professionals do not need to read the full text, but should engage with the risk classification framework (Annex III for high-risk applications) and the requirements applicable to high-risk systems (Chapter III). For a more accessible guide, the AI Act Explorer tool maintained by the Future of Life Institute provides a searchable, annotated version. Available at artificialintelligenceact.eu.

6. National Institute of Standards and Technology, "AI Risk Management Framework" (AI RMF 1.0, 2023) Annotation: NIST's AI Risk Management Framework is the US government's primary voluntary standard for AI governance. It provides a structured, non-prescriptive approach organized around four functions: Govern, Map, Measure, and Manage. The framework is explicitly designed to be applicable across sectors, use cases, and organizational sizes. It is the most practical operational resource for organizations beginning to build AI ethics programs, and its vocabulary is increasingly standard in regulatory and procurement conversations. Freely available at nist.gov/system/files/documents/2023/01/26/NIST.AI.100-1.pdf.
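The framework's four functions can be made concrete as a lightweight tracking structure. The sketch below is illustrative only: the function names come from NIST AI RMF 1.0, but the activities, the `RmfActivity` class, and the `coverage` helper are hypothetical inventions for this example, not part of the framework itself.

```python
# Minimal sketch of tracking activities against the four NIST AI RMF
# functions. The subcategories shown are illustrative, not official.
from dataclasses import dataclass

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RmfActivity:
    function: str          # one of the four AI RMF functions
    description: str
    complete: bool = False

    def __post_init__(self):
        if self.function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown AI RMF function: {self.function}")

def coverage(activities):
    """Fraction of the four functions with at least one completed activity."""
    done = {a.function for a in activities if a.complete}
    return len(done) / len(RMF_FUNCTIONS)

activities = [
    RmfActivity("Govern", "Adopt an enterprise AI use policy", complete=True),
    RmfActivity("Map", "Inventory deployed AI systems", complete=True),
    RmfActivity("Measure", "Run quarterly bias audits"),
    RmfActivity("Manage", "Define incident response for AI failures"),
]
print(f"RMF function coverage: {coverage(activities):.0%}")  # prints: RMF function coverage: 50%
```

The point of the sketch is the framework's shape, not the code: the RMF asks organizations to show activity under all four functions, which is why a simple coverage metric is a natural first self-assessment.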

7. EEOC, "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees" (2022) Annotation: This technical assistance document from the Equal Employment Opportunity Commission addresses the application of the ADA to AI-based employment tools. It is directly relevant to organizations using AI in hiring, performance management, or workforce analytics. The document clarifies that ADA liability can attach to AI tools that produce discriminatory outcomes against people with disabilities, even when the tool is purchased from a vendor. A companion EEOC document addressing Title VII and AI-driven selection procedures followed in 2023. Freely available at eeoc.gov.


Reputational Risk and Brand Value

8. Edelman Trust Barometer (published annually) Annotation: The Edelman Trust Barometer, now in its third decade, tracks public trust in institutions — including technology companies — across 28 countries. Recent editions have addressed AI trust specifically, including consumer preferences for AI transparency and the trust gap between technology company claims and public perception. The data is widely cited in business contexts and is directly relevant to the arguments in Sections 5.2 and 5.6. The annual report is freely available at edelman.com/trust/trust-barometer.

9. Kashmir Hill, "The Secretive Company That Might End Privacy as We Know It," The New York Times (January 18, 2020) Annotation: The investigative article that publicly exposed Clearview AI is required reading for understanding how investigative journalism functions as a mechanism for AI ethics accountability. Hill's methodology — gaining access to the technology, having her own face searched, and documenting the technology's implications — is a model for the genre. The article's immediate impact (regulatory inquiries, cease-and-desist letters from platforms, and congressional attention, all within days of publication) illustrates the velocity of reputational damage described in Section 5.2. Available in the New York Times archive.


Talent and Organizational Culture

10. Meredith Whittaker et al., "AI Now Report" (AI Now Institute, published annually) Annotation: The AI Now Institute produces the most comprehensive annual assessment of AI's social impacts from a critical perspective. The reports address labor conditions in AI development, the intersection of AI systems and marginalized communities, and the dynamics of power and accountability in the AI industry. The talent sections of recent reports document the employee advocacy movements at major technology companies (Project Maven, Project Dragonfly, and related cases) with more detail and critical depth than other sources. Freely available at ainowinstitute.org.

11. Scott E. Page, The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy (Princeton University Press, 2017) Annotation: Page's research on cognitive diversity and collective problem-solving is the primary academic foundation for the "diversity bonus" argument made in Section 5.8. The book is accessible to a general business audience, and the core empirical argument — that diverse teams outperform homogeneous ones on complex tasks because they access a broader range of mental models — is well-documented and practically relevant. The argument applies directly to AI development teams, though Page does not focus on AI specifically. Widely available.


ESG and Investment

12. BlackRock, "Our Approach to Engagement on Artificial Intelligence" (2023) Annotation: BlackRock's published guidance on AI engagement with portfolio companies provides the clearest public account of what the world's largest asset manager expects from companies on AI governance. The document addresses board oversight, disclosure, and risk management expectations. Business professionals working on investor relations or ESG reporting will find it directly applicable. Available at blackrock.com/corporate/investor-relations/corporate-governance.

13. SASB (Sustainability Accounting Standards Board, now part of the IFRS Foundation), Technology & Communications Sector Standards Annotation: The SASB standards for the technology sector include specific metrics for data privacy and data security that are directly relevant to AI ethics reporting. These standards are increasingly used as a basis for ESG disclosures by technology companies and are referenced in institutional investor engagement. The privacy and security metrics provide a useful starting point for organizations building AI ethics measurement frameworks. Available at sasb.org.


Bias, Fairness, and Operational Risk

14. Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S., "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations," Science 366 (2019) Annotation: The academic paper that exposed the Optum healthcare algorithm and its disparate impact on Black patients. This paper is a methodological model for algorithmic auditing: it identifies a proxy variable (healthcare spending) that encodes racial disparity, quantifies the disparate impact with statistical rigor, and proposes a corrective approach. It is also a case study in how academic research can generate business consequences — regulatory scrutiny and enterprise customer concern — for a company that believed its AI system was sound. Freely available at science.org.
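The paper's audit logic (compare true need across groups at equal algorithm scores; a persistent gap means the score is a biased proxy) can be sketched in a few lines. Everything below is invented for illustration: the field names, the synthetic data, and the 0.2 bias term are hypothetical, and this is not the paper's code or data.

```python
# Toy illustration of the proxy-bias audit: within each risk-score bin,
# compare average true need across groups. Synthetic data only.
import random

random.seed(0)

def audit_by_score_bin(records, n_bins=10):
    """Mean true need per (score_bin, group). A gap between groups at
    the same score bin signals the score is a biased proxy for need."""
    bins = {}
    for r in records:
        b = min(int(r["score"] * n_bins), n_bins - 1)
        bins.setdefault((b, r["group"]), []).append(r["need"])
    return {k: sum(v) / len(v) for k, v in bins.items()}

# Synthetic records: group B has systematically higher need at the same
# score, mimicking the spending-as-proxy bias the paper documents.
records = []
for _ in range(5000):
    group = random.choice(["A", "B"])
    score = random.random()
    need = score + (0.2 if group == "B" else 0.0) + random.gauss(0, 0.05)
    records.append({"group": group, "score": score, "need": need})

means = audit_by_score_bin(records)
gap = means[(5, "B")] - means[(5, "A")]
print(f"need gap at mid-range scores: {gap:.2f}")
```

On this construction the gap shows up in every score bin, which is exactly the audit signal: two patients with the same score but different group membership have different expected need.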

15. Buolamwini, J. and Gebru, T., "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," Proceedings of Machine Learning Research (2018) Annotation: The Gender Shades study is one of the most influential AI fairness papers of the past decade. It documented significant accuracy disparities in commercial facial recognition systems across gender and skin tone, with the highest error rates for darker-skinned women. The paper generated substantial commercial consequences: IBM and Microsoft publicly responded to the findings and updated their products, and a 2019 follow-up audit extended the analysis to Amazon's Rekognition. For business professionals, it illustrates how academic research on AI performance disparities translates into commercial and reputational pressure. Freely available at proceedings.mlr.press.
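The study's core evaluation move, disaggregating error rates by intersectional subgroup rather than reporting a single accuracy number, can be sketched as follows. The field names and the tiny sample are invented for illustration.

```python
# Sketch of intersectional error-rate disaggregation: report accuracy
# per (gender, skin_tone) subgroup, not one aggregate number.
from collections import defaultdict

def subgroup_error_rates(samples):
    """Error rate per (gender, skin_tone) subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for s in samples:
        key = (s["gender"], s["skin_tone"])
        totals[key] += 1
        errors[key] += s["pred"] != s["label"]
    return {k: errors[k] / totals[k] for k in totals}

samples = [
    {"gender": "female", "skin_tone": "darker",  "pred": "male",   "label": "female"},
    {"gender": "female", "skin_tone": "darker",  "pred": "female", "label": "female"},
    {"gender": "female", "skin_tone": "lighter", "pred": "female", "label": "female"},
    {"gender": "male",   "skin_tone": "darker",  "pred": "male",   "label": "male"},
    {"gender": "male",   "skin_tone": "lighter", "pred": "male",   "label": "male"},
]
rates = subgroup_error_rates(samples)
print(rates[("female", "darker")])  # 0.5
```

A system can report high overall accuracy while one subgroup's error rate is many times another's; disaggregation is what makes that disparity visible.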


Corporate Governance and Implementation

16. Salesforce, Office of Ethical and Humane Use — Annual Reports (2020–present) Annotation: Salesforce's annual reports from its Office of Ethical and Humane Use provide a real-world example of AI ethics program reporting. The reports include data on cases reviewed, training conducted, policy updates, and challenges encountered. They are useful both as a governance model and as a benchmark for what public transparency about AI ethics programs looks like in practice. Available at salesforce.com/company/ethical-humane-use.

17. NACD (National Association of Corporate Directors), "Director's Handbook on Cyber-Risk Oversight" (most recent edition) Annotation: While focused on cybersecurity, the NACD's director handbook provides the governance framework through which AI risk is increasingly understood at the board level. The five principles for board oversight of cyber risk — understanding and approaching it as an enterprise-wide risk, understanding legal implications, having adequate access to expertise, establishing an oversight framework, and incorporating discussion into board agendas — translate directly to AI risk. Business professionals working on board education will find this framework useful. Available at nacdonline.org.

18. ISO/IEC 42001:2023 — Artificial Intelligence Management Systems Annotation: The first international standard specifically for AI management systems, ISO/IEC 42001 provides a certification pathway for organizations that want external validation of their AI governance practices. The standard addresses governance structures, risk management, transparency, and human oversight. While the full text requires purchase through ISO, summaries and implementation guides are available from national standards bodies. This standard is increasingly referenced in enterprise procurement requirements and regulatory guidance. Information available at iso.org.


The Ethics Washing Problem

19. Floridi, L., et al., "AI4People — An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations," Minds and Machines (2018) Annotation: This paper, the principal output of the AI4People initiative led by Luciano Floridi, provides the foundational academic analysis of the emerging AI ethics principles landscape. Its discussion of the gap between stated principles and institutional accountability mechanisms is directly relevant to the ethics washing analysis in Section 5.9. It also identifies the specific features that distinguish substantive ethical frameworks from performative ones. Freely available at link.springer.com.

20. AlgorithmWatch, "AI Ethics Guidelines Global Inventory" (2020) Annotation: AlgorithmWatch's inventory of AI ethics guidelines — tracking more than 160 documents published between 2016 and 2020 by governments, companies, and civil society organizations — provides an empirical basis for the ethics washing analysis in this chapter. The key finding: the vast majority of AI ethics guidelines published by companies had no enforcement mechanism, no accountability structure, and no evidence of influencing actual product decisions. The inventory is a valuable research resource and a sobering counterpoint to optimistic accounts of the AI ethics movement. Available at inventory.algorithmwatch.org.


Note on source currency: The AI ethics regulatory and corporate landscape changes rapidly. Readers are encouraged to check for versions newer than those cited here, including updates to regulatory documents (EU AI Act implementation guidance, NIST AI RMF revisions, EEOC AI guidance), company reports (Salesforce annual reports, IBM FactSheets), and research outputs (AI Now annual reports, NIST AI standards updates).