Chapter 3: Exercises

Difficulty legend:
- ⭐ Foundational — recall and basic application
- ⭐⭐ Intermediate — analysis and comparison
- ⭐⭐⭐ Advanced — synthesis and evaluation
- ⭐⭐⭐⭐ Capstone — original argument and extended analysis

† = Recommended for class discussion or group work


Foundational Exercises (⭐)

Exercise 3.1 ⭐ Match each ethical framework to its central evaluative question:

| Framework             | Central Question |
|-----------------------|------------------|
| Consequentialism      | ______           |
| Deontology            | ______           |
| Virtue ethics         | ______           |
| Contractualism        | ______           |
| Capabilities approach | ______           |

Options:
(a) "What can this person actually do and be?"
(b) "What kind of character does this action express?"
(c) "Would a rational person choose this rule not knowing their position?"
(d) "What are this action's consequences for welfare?"
(e) "Does this action violate any duty or right regardless of consequences?"


Exercise 3.2 ⭐ State Kant's categorical imperative in two formulations, in your own words. For each formulation, identify one AI application that fails the test and briefly explain why.


Exercise 3.3 ⭐ † The chapter opens with the hospital's AI triage system. List three reasons why a pure consequentialist analysis might favor deployment and three reasons why it might favor delay or modification. Do not yet evaluate which side is stronger — just identify the considerations.


Exercise 3.4 ⭐ Define "ethics washing" in your own words, drawing on the chapter's discussion of the virtue ethics framework. Provide two observable indicators — things an outside observer could look for — that might distinguish genuine ethical commitment from ethics washing.


Exercise 3.5 ⭐ Briefly explain Rawls's "veil of ignorance" thought experiment. Then identify one AI system you have personally interacted with (a recommendation algorithm, a credit decision system, a content moderation system, etc.) and describe how its design might change if its designers had been behind the veil of ignorance — not knowing whether they would end up as the system's designers or among its subjects.


Exercise 3.6 ⭐ Write a one-paragraph summary of the Ubuntu concept and explain how it differs from the individual-rights framework dominant in Western AI ethics discourse. Give one concrete example of an AI governance question that Ubuntu would approach differently from a rights-based framework.


Exercise 3.7 ⭐ The chapter identifies ten capabilities in Nussbaum's list. For each of the following AI applications, identify which capability or capabilities it most directly affects, and note whether the effect is expansion or contraction:
(a) AI-powered prosthetics that restore movement to people with limb loss
(b) Automated benefits termination that ends food assistance without human review
(c) AI tutoring systems that adapt to individual learning pace and style
(d) Predictive policing algorithms that increase surveillance in specific neighborhoods


Exercise 3.8 ⭐ † Describe the difference between a "conversation-starter" use of ethical frameworks and a "conversation-ender" use. Why does the chapter argue that frameworks should be conversation-starters? Do you agree? Can you think of situations where having a firm rule — a conversation-ender — is appropriate?


Intermediate Exercises (⭐⭐)

Exercise 3.9 ⭐⭐ A social media company's recommendation algorithm is found to dramatically increase the time that users with signs of anxiety spend on the platform, by serving them content that triggers anxiety responses that keep them scrolling. The algorithm was not designed to do this; it emerged from optimizing engagement.

Apply consequentialist reasoning to this scenario. In your analysis:
(a) Identify who the affected parties are and what their welfare interests are
(b) Identify the relevant aggregate and distributional consequences
(c) Specify what you would need to measure to make the consequentialist analysis rigorous
(d) State what consequentialism recommends and identify the analysis's limitations


Exercise 3.10 ⭐⭐ Compare and contrast how a deontologist and a consequentialist would evaluate the following scenario: A municipality installs a network of AI-powered surveillance cameras in a high-crime neighborhood. The cameras are shown to reduce violent crime by 18%, but 100% of the surveillance falls on a predominantly Black and Latino community, none of whom were consulted about the system's deployment.

Your comparison should: (a) state each framework's analysis clearly; (b) identify the specific point of disagreement; (c) note what additional information, if any, would change each analysis.


Exercise 3.11 ⭐⭐ † Return to the "Debate Box" in Section 3.3 — surveillance cameras in public housing. Add a third analytical voice: how would a virtue ethicist evaluate the decision to deploy surveillance in public housing? What questions would a virtue ethicist ask about the organization making the decision? What would a virtuous housing authority do?


Exercise 3.12 ⭐⭐ The chapter argues that care ethics is "especially relevant for AI in healthcare, childcare, eldercare, and mental health." Choose one of these domains and analyze a specific AI application in that domain through the care ethics lens. Your analysis should:
(a) Identify the relevant relationships and their power dynamics
(b) Identify who is vulnerable in the relationship and how
(c) Assess what genuine care would require in this context
(d) Evaluate whether the AI application supports or undermines genuine care


Exercise 3.13 ⭐⭐ The "Stakeholder Perspective Box" in Section 3.5 presents three voices applying the veil of ignorance to a transit authority's AI-driven performance monitoring system. Add a fourth voice: a driver from a racial minority group who is already subject to differential complaint rates based on the demographics of their route. How does their experience change the veil of ignorance analysis?


Exercise 3.14 ⭐⭐ Apply the five-step moral cross-examination method to the following scenario: A company develops an AI system for detecting depression from social media posts, without users' knowledge, for the purpose of targeted advertising for mental health products. The system is 74% accurate. No data is shared with third parties.

For each step, write 2–3 sentences applying it to this specific scenario. Your final step should include a clear recommendation.


Exercise 3.15 ⭐⭐ † The chapter describes three cultural clusters in the Moral Machine experiment data — Western, Eastern, and Southern — that showed different moral preferences for autonomous vehicle collision decisions. If you were advising a global autonomous vehicle manufacturer operating in all three markets, would you recommend (a) uniform global ethical standards, even if they conflict with local preferences; (b) culturally customized ethical standards; or (c) some combination? Defend your recommendation with reference to at least two ethical frameworks.


Advanced Exercises (⭐⭐⭐)

Exercise 3.16 ⭐⭐⭐ The chapter identifies the "aggregation problem" as consequentialism's central weakness: utilitarian calculus can justify harm to minorities if the aggregate majority benefit is large enough. Several responses to this problem have been proposed: (a) weighted utilitarianism, which counts the welfare of the worst-off more heavily; (b) rule utilitarianism, which optimizes at the level of rules rather than individual decisions; (c) prioritarianism, which builds distribution-sensitivity into the welfare function. Research one of these responses and write a 500–700 word analysis of whether it successfully addresses the aggregation problem in AI ethics contexts. Use at least one concrete AI application as your test case.


Exercise 3.17 ⭐⭐⭐ † The "competitive market" problem from Case Study 3.2 (Project Maven): Google's ethical decision to withdraw from certain defense AI markets means competitors with fewer ethical constraints win those contracts. Write a structured analysis of this problem from three perspectives:

(a) The strict virtue ethicist: What does virtue require, even if competitors are less virtuous?
(b) The consequentialist: Does Google's abstention make defense AI better or worse overall?
(c) The contractualist: What rules would rational parties choose for defense AI contracting, knowing they might be the company, the government, the soldiers in the field, or the civilians in the target area?

Conclude with your own reasoned position on what Google should have done.


Exercise 3.18 ⭐⭐⭐ Indigenous data sovereignty — the right of Indigenous communities to govern data about themselves — is presented in the chapter as an application of Indigenous ethical frameworks to AI. Research the CARE Principles for Indigenous Data Governance (Carroll et al., 2020) and write an analysis of:

(a) How the CARE Principles differ from the FAIR Principles in their conception of data
(b) What specific AI applications or data practices the CARE Principles would prohibit or modify
(c) What institutional mechanisms would be required to actually implement Indigenous data sovereignty in the context of a major AI development project
(d) Whether CARE-principle-based data governance is compatible with the scale at which contemporary AI operates


Exercise 3.19 ⭐⭐⭐ The chapter argues that virtue ethics applied to organizations must grapple with the difference between genuine virtue and ethics washing. Develop a five-question diagnostic framework that a board of directors could use to assess whether their company's AI ethics program reflects genuine ethical commitment or ethics washing. For each question, explain what a satisfactory answer would look like and what an unsatisfactory answer would reveal. Your framework should be operationalizable — the questions should have answers that can be assessed with observable evidence.


Exercise 3.20 ⭐⭐⭐ † The "Ethical Dilemma Box" at the end of Section 3.9 analyzes a loan approval algorithm with 15% higher error rates for rural applicants. The analysis concludes that all frameworks converge on remediation. But suppose the bank's technical team reports that improving the model's rural accuracy would require a 40% reduction in the model's overall accuracy across all applicants — a trade-off driven by limited rural training data. Does this technical constraint change the ethical analysis? Apply at least three frameworks to the revised scenario and explain whether the convergence toward remediation holds.


Exercise 3.21 ⭐⭐⭐ The Moral Machine experiment (Case Study 3.1) was criticized by several scholars on methodological grounds: the scenarios were unrealistic, the participants were self-selected, the framing imposed Western cultural assumptions, and the binary forced-choice format prevented participants from expressing "none of the above" preferences. Research at least two published critiques of the Moral Machine experiment and write an analysis of:

(a) Whether the methodological criticisms undermine the experiment's conclusions
(b) What the experiment can and cannot legitimately tell us about how to program autonomous vehicles
(c) What alternative research methods would produce more reliable data for informing the ethical design of autonomous vehicles


Capstone Exercises (⭐⭐⭐⭐)

Exercise 3.22 ⭐⭐⭐⭐ † The Framework Challenge: You are a member of an AI ethics review board at a company developing a medical AI system that predicts patient risk for hospital readmission. The system will be used by hospital administrators to make discharge decisions. A preliminary audit shows that the system performs significantly better for patients with extensive electronic health record histories — disproportionately white and insured patients — and significantly worse for patients with sparse records — disproportionately Black, Latino, and uninsured patients.

Write a structured ethics review memo (1,000–1,500 words) that:
(a) Applies all five primary frameworks covered in the chapter to this scenario
(b) Identifies convergences and genuine disagreements among the frameworks
(c) Gives special attention to the capabilities and perspectives of the most affected patients
(d) Proposes specific remediation steps, ranked by ethical priority
(e) Addresses the organization's accountability obligations to affected patients
(f) Specifies what conditions would need to be met before you would approve the system for deployment


Exercise 3.23 ⭐⭐⭐⭐ The Global Framework Problem: The chapter argues that global AI ethics has been dominated by Western frameworks, and that non-Western frameworks (Ubuntu, Confucian ethics, Indigenous ethics) offer genuine insights that the mainstream conversation misses.

Write a 1,500–2,000 word essay that:
(a) Argues for or against the proposition that a single, globally unified AI ethics framework is possible and desirable
(b) Engages seriously with at least three non-Western ethical traditions covered in the chapter
(c) Addresses the institutional question: what would genuinely inclusive global AI governance look like, and what barriers stand in the way?
(d) Discusses at least one concrete case where a Western and a non-Western ethical framework produce different evaluations of the same AI system
(e) Concludes with a specific recommendation for how international AI governance bodies should approach framework pluralism


Exercise 3.24 ⭐⭐⭐⭐ † Designing an Ethics Program: You have been hired as the chief ethics officer of a mid-sized AI company (approximately 500 employees) developing AI for employment screening. The company has no existing ethics program. Leadership has given you six months and a small budget to design and implement a genuine ethics program — not an ethics washing exercise.

Write a detailed implementation plan (1,200–1,800 words) that:
(a) Explains what ethical frameworks will anchor the program and why
(b) Specifies the organizational structures required (ethics board, review process, reporting lines)
(c) Describes how you will address the psychological safety problem — creating conditions in which employees can raise ethical concerns without career risk
(d) Explains how you will engage external stakeholders, including affected communities and independent ethicists
(e) Specifies what success looks like and how you will measure it
(f) Anticipates the three most likely resistance points and explains how you will address them


Exercise 3.25 ⭐⭐⭐⭐ Framework Integration Research Paper: Select an AI application currently in use in a domain of your choice (healthcare, criminal justice, employment, financial services, education, or another). Research the existing ethical debates about this application and write a 2,000–2,500 word analysis that:

(a) Describes the AI application concretely — what it does, who deploys it, who is affected
(b) Identifies the ethical frameworks that have been most prominent in existing debates about this application, and explains why those frameworks have dominated
(c) Identifies at least one framework covered in this chapter that has been underutilized in the existing debate and applies it to generate new insights
(d) Applies the five-step moral cross-examination method and identifies where frameworks converge and diverge
(e) Makes a specific, evidence-based recommendation about whether, and under what conditions, this application should be deployed
(f) Includes proper citations to at least six credible sources


Exercises are designed to be completed individually unless marked with †, which indicates exercises especially suited to group discussion or collaborative work. All analysis exercises should draw specifically on the frameworks and examples covered in Chapter 3.