Chapter 1: Exercises
What Is AI Ethics? Framing the Challenge
Instructions: Exercises are organized by difficulty level. Star ratings indicate cognitive demand:
- One star: Recall — demonstrate understanding of key concepts and facts
- Two stars: Apply — use concepts to analyze a new scenario or context
- Three stars: Analyze — evaluate arguments, compare frameworks, assess trade-offs
- Four stars: Synthesize — design solutions, produce original analysis, lead structured discussions
Exercises marked with a dagger symbol (†) have suggested answers in the appendix. All other exercises are open-ended and intended for discussion or individual reflection.
One Star: Recall
Exercise 1.1 (One star) †
Define "AI ethics" in your own words, in no more than three sentences. Your definition should distinguish AI ethics from (a) AI safety and (b) corporate compliance.
Exercise 1.2 (One star)
List the eight categories of AI ethics concern identified in Section 1.2 of this chapter. For each category, provide a one-sentence description of the type of harm involved.
Exercise 1.3 (One star) †
What is "ethics washing"? Describe a hypothetical ethics-washing scenario in a corporate context. What would distinguish that scenario from genuine ethical commitment?
Exercise 1.4 (One star)
Identify the three dimensions of AI ethics described in Section 1.1 (technical, social, and institutional). For each dimension, provide one question that an ethics practitioner working in that dimension would ask about an AI system.
Exercise 1.5 (One star)
The chapter identifies five recurring themes that run through the entire textbook. List these five themes and write one sentence explaining each.
Exercise 1.6 (One star) †
Define "algorithmic decision-making" and identify three sectors or domains where it is commonly used. For each domain, name one type of decision that is frequently made or influenced by AI systems.
Exercise 1.7 (One star)
What was SyRI? What did it do, what data did it use, and why was it struck down by a Dutch court in 2020? Answer in three to five sentences.
Exercise 1.8 (One star)
The chapter describes an "accountability gap" in AI systems. What is it? What structural features of complex AI systems tend to produce it?
Two Stars: Apply
Exercise 1.9 (Two stars)
Read the following scenario and identify which AI ethics concern(s) from Section 1.2 it raises. Explain your reasoning.
A major bank deploys an AI system to screen mortgage applications. The system was trained on ten years of approved and denied loan applications. An internal audit reveals that the system approves applications from white applicants at a rate 23% higher than applications from Black applicants with comparable financial profiles. The bank's communications team notes that the system does not use race as an input variable.
Exercise 1.10 (Two stars) †
A logistics company is considering deploying an AI system to optimize delivery routes. The system would also monitor driver behavior — speed, hard braking, idle time — in real time and generate performance scores used in quarterly reviews. The company's HR team is enthusiastic because the current review process is widely considered inconsistent and subjective.
Apply the three-dimension framework (technical, social, institutional) from Section 1.1 to this scenario. What questions would an ethics practitioner ask at each dimension?
Exercise 1.11 (Two stars)
Consider the "optimization trap" concept from Case Study 2. Identify a business AI system outside of social media that you believe may be subject to the optimization trap — that is, a system that optimizes for a measurable business metric in ways that may produce harms not captured by that metric. Describe: (a) the system, (b) the metric it optimizes, (c) what harms might result, and (d) what values would need to be incorporated into the objective function to address those harms.
Exercise 1.12 (Two stars)
The chapter argues that AI ethics is not something that "only matters in high-risk domains." Choose a domain that is not typically classified as high-risk — for example, a retail recommendation system, a workplace productivity monitoring tool, or an AI-generated content moderation system for a local news publication — and explain what ethical concerns it raises and why those concerns matter even in the absence of formal risk classification.
Exercise 1.13 (Two stars) †
The chapter introduces the concept of the "accountability gap." Map the accountability gap for the following scenario:
A city contracts with a private vendor to deploy an AI system that assesses applications for a housing assistance program. The vendor's model is proprietary. The city's case workers use the model's recommendations in their final decisions. A community organization publishes a report showing that applicants from one neighborhood receive assistance at significantly lower rates than applicants from comparable neighborhoods. The city says the vendor is responsible for the model's fairness. The vendor says the city's case workers make the final decisions. The case workers say they were told the model had been validated.
Who might plausibly bear moral and legal responsibility for the disparity? What accountability mechanisms were absent?
Exercise 1.14 (Two stars)
Review the three stakeholder perspectives in Section 1.4 of the chapter (the data scientist, the community organizer, and the compliance officer). Identify a fourth stakeholder perspective that is not represented there — a person in a specific role whose experience of AI ethics would differ significantly from the three provided. Write a brief (150–200 word) first-person account of that perspective.
Exercise 1.15 (Two stars)
The chapter notes that "meeting regulatory requirements means doing the least that you are legally required to do." Is this characterization fair to compliance-oriented approaches to AI ethics? Write a short response (150–200 words) either defending the compliance approach or explaining its limitations, drawing on specific examples from the chapter.
Three Stars: Analyze
Exercise 1.16 (Three stars)
The SyRI ruling held that the opacity of the algorithm — not its existence — was the decisive legal problem. Evaluate this reasoning. Is opacity the right focus for legal and ethical analysis of government AI systems? What are the strongest arguments for the court's approach? What concerns does it leave unaddressed? Consider, in particular, whether a fully transparent but discriminatory AI system would pass the court's test.
Exercise 1.17 (Three stars) †
Compare the two case studies in this chapter — SyRI and YouTube's recommendation system — on the following dimensions:
(a) Who designed the system, and for what purpose? (b) Who was most harmed? (c) What accountability mechanisms existed, and were they effective? (d) What role did external pressure (litigation, regulation, public pressure) play in producing change? (e) What does the comparison reveal about AI ethics challenges that are common across public and private sector contexts?
Exercise 1.18 (Three stars)
The chapter describes several arguments for what AI ethics is not — including "not a technology problem with a technology solution" and "not a compliance checkbox." Select two of these "what AI ethics is not" claims and evaluate them critically. Are there conditions under which technical solutions or compliance frameworks could be genuinely adequate responses to AI ethics concerns? What would those conditions require?
Exercise 1.19 (Three stars)
The "Good Algorithm" dilemma box presents a housing algorithm that improves average outcomes but systematically places one demographic group in neighborhoods with inferior services. Apply three distinct ethical frameworks to this dilemma — utilitarian, Rawlsian, and rights-based — and explain what each framework would conclude. Then evaluate: do these frameworks yield compatible conclusions, or do they conflict? If they conflict, how would you adjudicate between them?
Exercise 1.20 (Three stars)
The chapter argues that "diversity and inclusion" is an AI ethics concern, not merely an HR concern — specifically, that the homogeneity of AI development teams is a structural cause of AI ethics failures. Critically evaluate this argument. What is the mechanism by which team diversity is supposed to reduce ethical failures? What evidence supports or complicates this claim? Are there limitations to diversity as an AI ethics intervention?
Exercise 1.21 (Three stars)
The chapter claims that the YouTube recommendation system's harmful outputs were a predictable result of optimizing for engagement, not an unforeseen side effect. Evaluate this claim. What would YouTube engineers have needed to know or foresee to recognize the ethical implications of engagement optimization before those harms were documented by external researchers? What does your answer suggest about how AI ethics responsibilities should be allocated within development organizations?
Four Stars: Synthesize
Exercise 1.22 (Four stars)
Design an AI ethics governance structure for a mid-sized regional bank (approximately 2,000 employees, operations in three states) that has recently deployed AI systems for the following purposes: (a) credit risk assessment, (b) customer service chatbot, (c) employee performance evaluation, (d) fraud detection.
Your governance design should address: who is responsible for AI ethics oversight; what processes govern the development and deployment of new AI systems; what mechanisms exist for detecting and addressing ethical problems in deployed systems; and how the bank would respond to an ethics failure. Write your response as a governance memo addressed to the bank's CEO and Board of Directors. A strong response will address the tension between institutional overhead and genuine accountability, will specify roles and processes rather than merely principles, and will acknowledge specific ethical risks associated with each of the four AI applications.
Exercise 1.23 (Four stars)
You have been asked to lead a one-hour workshop for the senior leadership team of a manufacturing company that is considering deploying AI systems for: predictive maintenance (identifying when equipment will fail), quality control (automated visual inspection of products), and workforce planning (scheduling and shift management). The leadership team is enthusiastic about efficiency gains but has no prior engagement with AI ethics.
Design the workshop. Specify: (a) the learning objectives for the session, (b) the structure and content of each segment, (c) the exercises or discussion questions you would use, and (d) the three key commitments you would want leadership to leave the session having made. A strong response will be realistic about what can be accomplished in one hour, will connect AI ethics concerns to business risks that resonate with a non-technical leadership audience, and will specify what follow-up would be needed to translate the session into organizational action.
Exercise 1.24 (Four stars)
Write a 500–700 word policy brief addressed to the mayor of a medium-sized American city (population: 350,000) recommending a policy framework for the city's use of AI in public services. The city currently uses AI in its 311 non-emergency services call center (chatbot and routing), in its public housing authority (application screening), and in its police department (license plate readers and a pilot predictive patrol program). The brief should: identify the primary ethical risks in each current application; recommend three specific policy requirements that should apply to all city AI use; address the question of public participation in AI governance; and acknowledge the resource constraints facing a mid-sized municipality. A strong response will be specific and actionable rather than aspirational, will acknowledge genuine trade-offs, and will distinguish between requirements that can be implemented immediately and those that require longer institutional development.
Exercise 1.25 (Four stars)
The chapter argues that AI ethics requires "ongoing organizational discipline rather than a one-time compliance exercise." An executive at a technology company responds: "We conducted a comprehensive ethics review of all our AI systems eighteen months ago. We identified and addressed twelve specific issues. We have published our AI principles. We have an ethics advisory board that meets twice a year. We have done more than most companies our size. What more do you expect us to do?"
Write a response to this executive of 400–500 words. Your response should: (a) acknowledge what the company has done well; (b) explain why these steps, while valuable, are not sufficient to constitute an ongoing ethical practice; (c) identify specifically what ongoing elements are missing; and (d) make the business case for why investing in those ongoing elements is worth it. A strong response will be respectful and specific — it will not moralize, but it will be honest about gaps; it will be grounded in the concepts introduced in this chapter; and it will demonstrate that genuine AI ethics practice is organizationally and strategically distinct from ethics compliance.