Chapter 2: Exercises

Chapter 2 | AI Ethics for Business Professionals

Difficulty levels: ⭐ Foundational | ⭐⭐ Intermediate | ⭐⭐⭐ Advanced | ⭐⭐⭐⭐ Integrative
† = Recommended for group discussion or classroom use


Foundational Exercises (⭐)

Exercise 2.1 ⭐ Norbert Wiener wrote The Human Use of Human Beings in 1950 — the same year as Turing's "Computing Machinery and Intelligence." Without looking it up, write a brief paragraph (150–200 words) explaining why Wiener's book might be considered the first AI ethics text, based solely on what you learned in this chapter. Then identify one question Wiener raised about the social implications of AI that remains unresolved today.


Exercise 2.2 ⭐ The AI winters (1974–1980 and 1987–1993) resulted from overconfident promises about AI capabilities. Make a list of five specific claims you have heard or read about current AI capabilities in the past year — from news coverage, company announcements, or product marketing. For each claim, note: (a) what the claim promises, (b) what evidence supports or undermines it, and (c) what the potential consequences would be if the claim proved to be overstated.


Exercise 2.3 ⭐ Define "disparate impact" in your own words, using an example from the chapter. Then provide one original example — not from the chapter — of how an AI system used in a context you are familiar with might produce disparate impact without explicitly considering protected characteristics like race or gender.
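For readers who want to see disparate impact as a concrete calculation, the sketch below applies the "four-fifths rule," a common regulatory rule of thumb for flagging disparate impact. The selection counts are hypothetical, invented for illustration; they are not drawn from any case in this chapter.

```python
# Minimal sketch: the "four-fifths rule" check for disparate impact.
# All selection counts below are hypothetical, for illustration only.

def selection_rate(selected, total):
    """Fraction of applicants from a group who received a favorable outcome."""
    return selected / total

# Hypothetical screening outcomes for two demographic groups
group_a = selection_rate(selected=60, total=100)   # 0.60
group_b = selection_rate(selected=30, total=100)   # 0.30

# Disparate impact ratio: lower group's rate divided by higher group's rate.
ratio = min(group_a, group_b) / max(group_a, group_b)

# Under the four-fifths rule of thumb, a ratio below 0.8 flags possible
# disparate impact -- even if the system never saw a protected attribute.
print(f"impact ratio = {ratio:.2f}, flagged = {ratio < 0.8}")
```

Note that the check operates purely on outcomes: nothing in the calculation requires the system to have used race or gender as an input, which is exactly why disparate impact can arise from proxy variables.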


Exercise 2.4 ⭐ † Match each item in Column A with the best description in Column B.

Column A:
1. Asilomar AI Principles (2017)
2. COMPAS
3. Gender Shades (2018)
4. GDPR (2018)
5. The filter bubble
6. Ghost work
7. ELIZA (1966)
8. Tay (2016)

Column B:
A. MIT Media Lab research demonstrating significant racial and gender disparities in commercial facial analysis systems
B. A learning chatbot deployed by Microsoft on Twitter that was manipulated into producing hateful content within 24 hours
C. The invisible, platform-mediated human labor that powers AI training and content moderation
D. A set of 23 principles for AI development produced at a Future of Life Institute conference, focused partly on long-term AI risks
E. The first major data protection regulation with significant enforcement mechanisms, including penalties up to 4% of global revenue
F. A recidivism prediction tool used in US criminal sentencing found by ProPublica to produce racially disparate predictions
G. A 1966 MIT chatbot that revealed human beings' tendency to anthropomorphize AI systems despite knowing they are not human
H. A concept describing how personalization algorithms create individual information environments that reduce exposure to divergent views


Exercise 2.5 ⭐ Write a one-paragraph explanation of "ethics washing" suitable for explaining it to a business colleague who has never heard the term. Your explanation should include: what ethics washing is, why organizations engage in it, and how you might distinguish it from genuine ethical commitment.


Exercise 2.6 ⭐ † The chapter describes five recurring AI ethics failure modes: (1) homogeneous development teams, (2) training data that reflects historical inequity, (3) opacity as a business strategy, (4) consequential deployment without adequate testing, and (5) ethics infrastructure without authority. For each failure mode, write one sentence explaining why it produces the kind of harms the chapter describes. Then rank the five failure modes by how difficult they are to correct, and explain your ranking.


Exercise 2.7 ⭐ Identify three specific AI systems or applications that are currently in widespread use (in healthcare, finance, hiring, law enforcement, education, or another sector you know). For each one, identify at least one historical parallel from the cases discussed in this chapter. What does the parallel suggest about the risks of the current system?


Exercise 2.8 ⭐ The chapter describes Turing's "blank sheets" metaphor for a learning machine's mind. Write a two-paragraph response to this question: If AI systems are blank sheets whose values are substantially determined by their training inputs, what responsibilities does that place on organizations that design training processes? Be specific about who in an organization holds these responsibilities.


Intermediate Exercises (⭐⭐)

Exercise 2.9 ⭐⭐ † Stakeholder Mapping: The COMPAS Case

The COMPAS recidivism prediction tool was used in criminal sentencing decisions in multiple US states. Create a stakeholder map for COMPAS that identifies:

(a) All parties who had a stake in the system's performance — including defendants, judges, prosecutors, defense attorneys, the company (Northpointe), the court systems, victims of crime, and the broader public.

(b) For each stakeholder, what was their interest in the system? What did they stand to gain or lose from its deployment?

(c) Which stakeholders had the power to influence how the system was built, deployed, or used? Which had no power?

(d) Which stakeholders were harmed by the system's documented bias? Did those stakeholders have recourse?

Write a 500–600 word analysis based on your stakeholder map, concluding with a recommendation about what a responsible deployment process for a tool like COMPAS should have included.


Exercise 2.10 ⭐⭐ Timeline Analysis: From Harm to Response

Using the cases discussed in this chapter, construct a timeline that maps the gap between: (a) the date when a specific AI harm was deployed or when the harm-producing practice began, (b) the date when the harm was publicly documented, and (c) the date when a substantive organizational or regulatory response occurred.

Do this for at least four cases. Then write a 300-word analysis: What does your timeline reveal about the pace of AI accountability? What factors explain the gaps you observe? What would shorten those gaps?


Exercise 2.11 ⭐⭐ Principles Evaluation Exercise

Choose one published AI ethics principles document — from Google, Microsoft, the EU Ethics Guidelines, the Asilomar Principles, or another organization — and evaluate it against Anna Jobin et al.'s critique that such documents feature "superficial consensus masking deep disagreement."

Your evaluation should address:
- What principles does the document articulate?
- For each principle, what does the document say about implementation — how the principle will actually be operationalized?
- Where does the document decline to specify trade-offs between competing principles?
- What enforcement mechanisms, if any, does the document include?
- Based on your analysis, where would you place the document on a spectrum from "ethics washing" to "genuine commitment"?

Write 500–700 words.


Exercise 2.12 ⭐⭐ † The Amazon Hiring Algorithm Counterfactual

Amazon's hiring algorithm was trained on 10 years of resumes submitted to Amazon — a period during which Amazon's workforce was predominantly male — and learned to penalize resumes that indicated female applicants. Amazon disbanded the team when the bias was discovered.

Write a 400–500 word analysis addressing the following: (a) At what stage in the development process could this bias have been detected? What specific tests or evaluations would have caught it? (b) Why do you think those tests were not conducted? What organizational factors would explain the failure? (c) If you were the chief ethics officer at Amazon and had been told about this project during its development, what would you have required before approving its deployment?


Exercise 2.13 ⭐⭐ Global Variation in AI Regulation

The chapter discusses regulatory responses to AI from the European Union and the United States. Using the information in this chapter and your general knowledge, compare the regulatory approaches of three different jurisdictions — the EU, the United States, and one country of your choice — on the following dimensions:
- General philosophy (precautionary vs. permissive)
- Primary regulatory mechanism (binding law vs. soft law vs. self-regulation)
- Enforcement capacity
- Areas of particular focus

Then write a 300-word reflection: If you were advising a multinational company on AI governance strategy, what would the regulatory variation across jurisdictions imply for how the company structures its global AI compliance program?


Exercise 2.14 ⭐⭐ Annotation Labor Supply Chain Audit

Imagine you are the Chief Procurement Officer of a company that has just signed a contract to deploy a large language model from a major AI vendor. You are committed to ethical supply chain management. Write a formal memo (400–500 words) to your CEO outlining: (a) The specific ethical risks in the AI annotation and content moderation supply chain, as identified in this chapter. (b) The specific questions you would ask your AI vendor about its annotation and content moderation labor practices. (c) The minimum standards you would require vendors to meet as a condition of contract. (d) How you would verify compliance with those standards on an ongoing basis.


Exercise 2.15 ⭐⭐ † Tay Post-Mortem Report

You have been hired by a technology industry oversight organization to write a brief post-mortem report on the Microsoft Tay incident. Your report should be structured as a genuine post-mortem: identifying what happened, what caused it, what could have prevented it, and what the lessons are for future AI chatbot deployments.

Your report should be 500–600 words and should draw on both the case study in this chapter and Section 2.4 of the main text. Conclude with three specific, actionable recommendations for any organization deploying a learning chatbot in a public social media environment.


Advanced Exercises (⭐⭐⭐)

Exercise 2.16 ⭐⭐⭐ The Impossibility Theorem for Fairness

The chapter references the mathematical result showing that it is impossible to simultaneously satisfy multiple intuitive fairness criteria when base rates differ across groups. Research this result in more depth (it is sometimes called the COMPAS fairness impossibility and is associated with work by Chouldechova, Kleinberg et al., and others), and write a 600–800 word essay addressing the following:

(a) State the result in plain language, with a concrete example using the criminal justice context. (b) Explain why this is a values question, not merely a technical one. (c) If you were advising a court system that wanted to use a risk assessment tool and had to choose among competing fairness criteria, what process would you recommend for making that choice? Who should be involved in the decision? What values should be weighed?
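As a starting point for part (a), the impossibility can be seen with nothing more than arithmetic. The sketch below uses hypothetical counts (not COMPAS data): hold the classifier's true positive rate and precision fixed for two groups with different base rates, and the false positive rates are forced apart.

```python
# Minimal numeric sketch of the fairness impossibility result
# (associated with Chouldechova and with Kleinberg et al.).
# All counts and rates below are hypothetical, for illustration only.

def false_positive_rate(base_rate, n, tpr, ppv):
    """Given a group's base rate and a classifier with fixed TPR (sensitivity)
    and PPV (precision), the false positive rate follows by arithmetic alone."""
    positives = base_rate * n              # people who will reoffend
    negatives = n - positives              # people who will not
    tp = tpr * positives                   # correctly flagged as high risk
    fp = tp * (1 - ppv) / ppv              # implied by the fixed precision
    return fp / negatives

# Identical classifier quality for both groups: 80% TPR, 75% PPV.
fpr_a = false_positive_rate(base_rate=0.6, n=1000, tpr=0.8, ppv=0.75)
fpr_b = false_positive_rate(base_rate=0.3, n=1000, tpr=0.8, ppv=0.75)

print(f"Group A FPR: {fpr_a:.3f}")   # 0.400
print(f"Group B FPR: {fpr_b:.3f}")   # ~0.114
# Equal TPR and equal PPV across groups, yet unequal FPR: with different
# base rates, one group is wrongly flagged as high risk far more often.
```

The point of the sketch is that no tuning escapes this: once base rates differ, choosing which error rates to equalize is a values decision, which is what part (b) asks you to argue.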


Exercise 2.17 ⭐⭐⭐ † Organizational Design for Genuine Ethics Commitment

The chapter argues that "ethics infrastructure without authority" is a recurring failure mode — ethics teams, principles documents, and review boards that provide the appearance of governance without the substance. Design a genuine AI ethics governance structure for a mid-sized financial services company (approximately 5,000 employees, significant use of AI in credit decisions, fraud detection, and customer service).

Your design should specify:
- What organizational structures will carry ethical responsibility for AI systems
- What authority those structures will have (including the ability to delay or stop deployments)
- How ethics review will be integrated into the product development process
- How the organization will handle situations where ethics review conflicts with business objectives
- What accountability mechanisms will ensure the ethics function is not captured by the business units it oversees
- How you will measure whether the ethics function is actually effective

Write 700–900 words. Be specific: do not describe principles; describe structures, processes, and decision rights.


Exercise 2.18 ⭐⭐⭐ Historical Pattern Recognition in Current Deployment

Select a currently deployed AI system — in hiring, credit, criminal justice, healthcare, education, or another consequential domain — that has not been discussed in this chapter. Research it using publicly available sources.

Write a 700–900 word analysis that: (a) Describes the system and its deployment context (b) Applies the five recurring failure modes described in Section 2.8 of this chapter to evaluate the system's development and deployment (c) Identifies any evidence of ethics washing in how the deploying organization has communicated about the system (d) Evaluates what accountability mechanisms exist for the system (e) Concludes with a judgment: does this system represent a genuine improvement on historical AI ethics failures, a repetition of those failures, or something in between? Support your judgment with specific evidence.


Exercise 2.19 ⭐⭐⭐ † The Speed Problem: A Policy Proposal

The chapter identifies the speed of generative AI deployment relative to governance capacity as a fundamental challenge. Some policy thinkers have proposed mandatory pre-deployment review for high-risk AI systems, analogous to drug approval processes or financial product review. Others argue such requirements would stifle innovation and could be captured by incumbents to block competition.

Write a 700–900 word policy proposal that takes a specific position on this debate. Your proposal should: (a) Define what "high-risk AI systems" should require pre-deployment review and justify that definition (b) Describe what the review process should assess and who should conduct it (c) Estimate the timeline and cost implications and argue that they are or are not acceptable (d) Address the strongest objection to your position (e) Propose how to prevent regulatory capture of the review process


Exercise 2.20 ⭐⭐⭐ Comparative Case Study: Google Photos (2015) and Gender Shades (2018)

Both the Google Photos gorilla mislabeling incident (2015) and Joy Buolamwini's Gender Shades research (2018) documented racial bias in AI image recognition systems. Compare and contrast these two cases on the following dimensions:

(a) Discovery mechanism: How was the bias discovered in each case? What does the discovery mechanism reveal about the accountability infrastructure for AI systems? (b) Organizational response: How did the deploying organizations respond? Was the response adequate? Why or why not? (c) Depth of remedy: What changes, if any, were made to the systems? Did the changes address the underlying cause or the surface symptom? (d) Broader impact: What effect, if any, did each case have on industry practice, policy, or public understanding?

Write 700–900 words. Conclude with a judgment about what these two cases together teach about the conditions under which AI bias documentation leads to meaningful improvement.


Exercise 2.21 ⭐⭐⭐ † Wiener's Genie Problem Applied

Norbert Wiener warned in The Human Use of Human Beings about machines that could achieve specified objectives in ways that violated human values — a danger he illustrated with the genie-wish metaphor. Identify three current AI deployments in which the genie problem is observable: cases where AI systems are achieving their specified objectives in ways that have negative consequences for human values that the objective specification did not capture.

For each case, write 200 words analyzing: What was the specified objective? What human values did it fail to capture? What harm resulted? What would a better-specified objective have looked like?

Then write a 300-word conclusion about what the Wiener genie problem implies for how organizations should specify objectives for AI systems — not just what they want the system to do, but what they do not want it to do in pursuit of that goal.


Integrative Exercises (⭐⭐⭐⭐)

Exercise 2.22 ⭐⭐⭐⭐ † The AI Ethics Audit: A Simulated Board Presentation

Your company is a financial services firm that has been using an AI credit scoring system for three years. The system was purchased from a vendor, deployed with limited pre-deployment testing for demographic bias, and has since been used in millions of credit decisions. An independent researcher has just published a paper showing that the system produces statistically significant disparate impact for Black and Hispanic applicants, denying them credit at higher rates than their creditworthiness, as estimated in the paper's analysis, would warrant.

You are presenting to the board of directors. Prepare a 20–30 slide presentation outline (slide titles and 3–4 bullet points per slide) that addresses:
- What happened and what the company's legal exposure is
- What the ethical responsibility of the company is, independent of legal exposure
- What the company's response should be (short-term and long-term)
- What governance changes are needed to prevent recurrence
- How the company should communicate with affected customers
- What role the board itself should play in AI governance going forward

Write the outline (slide titles and bullets), then write a 400-word "speaker notes" section for the most difficult slide — the one where you tell the board what remediation the company owes to affected customers.


Exercise 2.23 ⭐⭐⭐⭐ The Long Arc: A Historical Prediction

The chapter traces AI ethics history from 1950 to the present and identifies recurring patterns. Based on that analysis, write a 1,000–1,200 word analytical essay predicting how the ethical challenges of generative AI will play out over the next 10 years (2025–2035).

Your essay should: (a) Apply the recurring pattern (capability deployment → harm documentation → organizational denial → public pressure → partial, delayed response) to current generative AI ethics concerns, predicting specifically what will be documented, when, and by whom (b) Predict which current AI ethics concerns will have produced genuine regulatory change by 2035 and which will still be unresolved, with reasoning (c) Identify one current concern that you believe will prove to be genuinely novel — not captured by the historical pattern — and explain why (d) Conclude with a recommendation for what a business professional reading this in 2025 should do to position their organization well for the governance landscape of 2035


Exercise 2.24 ⭐⭐⭐⭐ † Cross-Chapter Integration: Power, Accountability, and the AI Supply Chain

This exercise asks you to integrate the "power and accountability" theme that runs throughout the chapter across three levels of analysis: global, organizational, and individual.

Global level: The chapter documents how AI annotation labor is concentrated in the Global South while AI benefits accrue predominantly in the Global North. Using the lens of global political economy, write 300 words analyzing this pattern: What forces produce it? What would change it? What obligations does it create for organizations and governments in wealthier countries?

Organizational level: The chapter documents how ethics washing functions at the organizational level — how companies deploy ethics language to manage reputational risk without making substantive changes in practice. Write 300 words describing what genuine organizational accountability for AI ethics would require in terms of: governance structures, performance metrics, compensation incentives, and cultural norms.

Individual level: The chapter traces individual actors — Joy Buolamwini, Timnit Gebru, Joseph Weizenbaum, Karen Hao — who played critical roles in AI accountability that the organizations involved in developing and deploying AI systems were not playing. Write 300 words reflecting on the ethical responsibilities of individual professionals working in or adjacent to AI development. What do they owe to affected communities that organizations do not provide? What protections do they need to play this role?

Then write a 300-word synthesis connecting the three levels: How do global, organizational, and individual power dynamics interact to produce the accountability failures documented in this chapter?


Exercise 2.25 ⭐⭐⭐⭐ Curriculum Design: Teaching AI Ethics History to a Non-Technical Audience

You have been asked to design a two-hour module on the history of AI ethics for a group of C-suite executives at a healthcare organization that is considering deploying AI in clinical decision support, patient triage, and administrative functions. The executives have no technical background in AI but significant organizational authority and genuine interest in deploying AI responsibly.

Design the module, including: (a) A 30-minute lecture outline on the key historical themes from this chapter, tailored to the healthcare context (b) A 45-minute case discussion based on one of the cases in this chapter, adapted for the healthcare audience, with specific discussion questions (c) A 30-minute structured exercise in which the executives apply historical lessons to a hypothetical AI deployment decision at their own organization (d) A 15-minute synthesis and commitment section in which executives identify specific commitments to take back to their organizations

Write the full design, including the purpose of each section, the facilitation approach, the materials needed, and the expected learning outcomes.

Then write a 400-word facilitator's note addressing the most common resistances you would expect from this audience (likely including: "our situation is different from those historical cases," "we have ethics review already," and "AI will save lives, so the risks are worth it") and how you would address those resistances without being dismissive.