Chapter 5: Exercises

Difficulty ratings:

  • ⭐ Foundational — recall and comprehension
  • ⭐⭐ Developing — application and analysis
  • ⭐⭐⭐ Advanced — synthesis and evaluation
  • ⭐⭐⭐⭐ Expert — original argument and professional deliverable

† = Exercises designed for class discussion, debate, or group work


Part A: Foundational Comprehension (⭐)

Exercise 1 ⭐ Define the following terms in your own words, and provide one concrete example of each drawn from the chapter or your own knowledge. Write 2–3 sentences per term.

a) Reputational risk
b) Ethics washing
c) Disparate impact
d) Privacy-by-design
e) Distribution shift


Exercise 2 ⭐ Section 5.1 distinguishes the "compliance framing" from the "strategic framing" for AI ethics. Fill in the table below with at least two characteristics, one example, and one limitation for each framing.

Dimension                       | Compliance Framing | Strategic Framing
--------------------------------|--------------------|------------------
Primary question asked          |                    |
Who owns it in the organization |                    |
Example                         |                    |
Key limitation                  |                    |

Exercise 3 ⭐ Match each AI ethics failure in Column A with its primary category of business consequence in Column B. Some consequences may apply to more than one case; choose the most prominent one.

Column A:

  1. Amazon's hiring algorithm downgrading women's resumes
  2. Facebook's Emotional Contagion experiment on 700,000 users
  3. Clearview AI scraping billions of photos without consent
  4. Google Photos labeling Black individuals as gorillas
  5. HireVue using facial analysis to score job candidates

Column B:

a) Legal/regulatory (BIPA, GDPR, or similar)
b) Reputational (brand damage from press coverage)
c) Operational (product had to be withdrawn or substantially changed)
d) Talent (employee pressure and departures)
e) Customer trust (adoption barrier)


Exercise 4 ⭐ The chapter states that "bias is a quality problem." In 3–4 sentences, explain what this means and why it matters for how business leaders should think about algorithmic fairness. Does framing bias as a quality problem strengthen or weaken the ethical argument for fixing it? Explain your reasoning.


Exercise 5 ⭐ List three specific features that distinguish a genuine AI ethics program from an ethics-washing program. For each feature, explain why its presence or absence matters.


Part B: Application and Analysis (⭐⭐)

Exercise 6 ⭐⭐ You are a senior product manager at a mid-sized financial technology company that uses AI to make automated lending decisions. Your company has no formal AI ethics program. Using the risk register framework introduced in Section 5.10, identify three specific AI ethics risks your company faces. For each risk, provide:

  • The specific failure mode
  • The likelihood (low/medium/high, with reasoning)
  • The magnitude of potential consequences (in at least two of: financial, reputational, operational, talent)
  • One specific mitigation measure
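If it helps to organize your answer, the four items above can be captured as a simple data structure with a rough screening score. This is an illustrative scaffold only — the field names and the three-point likelihood scale are assumptions for this exercise, not the exact schema from Section 5.10:

```python
from dataclasses import dataclass

# Illustrative risk-register entry; field names and the 3-point
# likelihood scale are assumptions, not the chapter's exact schema.
LIKELIHOOD_SCORE = {"low": 1, "medium": 2, "high": 3}

@dataclass
class RiskEntry:
    failure_mode: str   # what specifically goes wrong
    likelihood: str     # "low" | "medium" | "high", with reasoning kept elsewhere
    consequences: dict  # dimension -> qualitative magnitude (financial, reputational, ...)
    mitigation: str     # one specific mitigation measure

    def priority(self) -> int:
        # Crude screening score: likelihood rating times the number of
        # consequence dimensions with a stated magnitude.
        return LIKELIHOOD_SCORE[self.likelihood] * len(self.consequences)

entry = RiskEntry(
    failure_mode="Lending model exhibits disparate impact on a protected class",
    likelihood="medium",
    consequences={"financial": "regulatory fines", "reputational": "press coverage"},
    mitigation="Quarterly disparate-impact testing on approval rates",
)
print(entry.priority())  # medium (2) x 2 stated dimensions -> 4
```

A scoring scheme like this is only useful for ranking risks against each other; the narrative reasoning behind each likelihood and magnitude estimate is what the exercise actually asks you to produce.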

Exercise 7 ⭐⭐ Re-read the Clearview AI case study. Identify three decision points at which Clearview's founders could have made a different choice that would have reduced their subsequent legal and reputational exposure. For each decision point:

  • Describe the original decision
  • Describe the alternative
  • Estimate what the alternative would have cost at the time
  • Assess what the alternative would have prevented in terms of subsequent exposure

Is there a version of Clearview AI's technology that could have been built on a legally and ethically defensible foundation? What would that version look like?


Exercise 8 ⭐⭐ The chapter argues that ethical AI practices produce an "innovation dividend" — that they make AI technically better, not just ethically better. Evaluate this argument for each of the following practices:

a) Requiring diverse training data
b) Implementing explainability requirements
c) Conducting community engagement before deployment
d) Privacy-by-design

For each practice, identify: (1) a plausible technical benefit, (2) a plausible technical cost or trade-off, and (3) your overall assessment of whether the innovation dividend argument is convincing for that practice.


Exercise 9 ⭐⭐ Section 5.9 identifies three groups that can detect ethics washing: employees, regulators, and journalists. For each group:

a) Describe the specific mechanisms through which they detect the gap between stated ethics commitments and actual practices.
b) Describe the specific consequences that detection by this group generates.
c) Identify what an organization would need to change about its AI ethics practices to withstand scrutiny from this group.


Exercise 10 ⭐⭐ † The Salesforce case study documents the company's decision to maintain its CBP contract despite employee pressure. Divide the class into three groups:

  • Group 1: Argue that Salesforce made the right decision, drawing on the business case analysis in this chapter.
  • Group 2: Argue that Salesforce made the wrong decision, drawing on the ethical frameworks in Chapter 3.
  • Group 3: Identify the specific information you would need to evaluate this decision well, and explain why the information you have is insufficient for a confident judgment.

After the debate, each group should identify one consideration raised by another group that they found genuinely compelling.


Exercise 11 ⭐⭐ GDPR Article 22 gives individuals the right not to be subject to a decision based solely on automated processing that produces legal effects concerning them or similarly significantly affects them. Identify an AI system used by an organization in your industry (or a hypothetical industry of your choice) that appears to involve such automated decision-making. Then:

a) Describe the AI decision and its stakes.
b) Identify the specific GDPR requirements that would apply.
c) Assess whether current practices you are aware of meet those requirements.
d) Identify what changes would be needed for full compliance.


Exercise 12 ⭐⭐ The chapter notes that the EU AI Act imposes fines of up to 6% of global annual revenue for non-compliance, while BIPA statutory damages can reach $5,000 per violation per person. Select a company with significant AI deployment (a technology company, financial institution, or healthcare system you are familiar with). Estimate — using publicly available information — the potential financial exposure that company would face if:

a) It deployed a high-risk AI system in violation of the EU AI Act.
b) It collected biometric data from Illinois residents without BIPA consent.

What does this calculation imply about the appropriate level of investment in AI ethics compliance?
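The two statutory maxima cited in the exercise reduce to simple arithmetic once you have revenue and headcount inputs. The figures below are placeholders — replace them with your chosen company's actual numbers:

```python
# Back-of-envelope exposure estimates using the statutory maxima cited in
# the chapter. Revenue and affected-person counts are placeholder inputs.

def eu_ai_act_exposure(global_annual_revenue: float, fine_rate: float = 0.06) -> float:
    """Maximum fine: up to 6% of global annual revenue (per the chapter)."""
    return global_annual_revenue * fine_rate

def bipa_exposure(affected_people: int, violations_per_person: int = 1,
                  damages_per_violation: float = 5_000.0) -> float:
    """BIPA statutory damages: up to $5,000 per violation per person."""
    return affected_people * violations_per_person * damages_per_violation

# Hypothetical company: $20B revenue, biometric data on 1M Illinois residents.
print(f"EU AI Act: ${eu_ai_act_exposure(20e9):,.0f}")  # EU AI Act: $1,200,000,000
print(f"BIPA:      ${bipa_exposure(1_000_000):,.0f}")  # BIPA:      $5,000,000,000
```

Note that these are statutory ceilings, not expected fines; a defensible estimate weights each figure by the probability of violation and of enforcement.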


Part C: Synthesis and Evaluation (⭐⭐⭐)

Exercise 13 ⭐⭐⭐ † A colleague argues: "The business case for ethical AI is a double-edged sword. If we justify ethical AI solely on business grounds, then companies will abandon ethical AI the moment it becomes commercially inconvenient — and that moment will come. We should argue from values, not from ROI."

Write a 500–700 word response that: (a) identifies what is right about this argument, (b) identifies what is incomplete or mistaken about it, and (c) proposes a framework for using both business and values arguments in the appropriate contexts.


Exercise 14 ⭐⭐⭐ Section 5.11 identifies three specific trade-offs where the business case and the ethical case for AI may diverge:

  1. Privacy vs. personalization
  2. Fairness constraints vs. aggregate accuracy
  3. Explainability vs. model performance

For each trade-off:

a) Describe a specific real or hypothetical AI application where the trade-off is acute.
b) Identify what additional information would be needed to evaluate whether the trade-off is real or merely apparent in that context.
c) Argue for what you think the right resolution is — using both business and ethical reasoning.


Exercise 15 ⭐⭐⭐ The Salesforce case study notes that the company's Trusted AI Principles have been criticized for: (1) primarily governing Salesforce's own AI rather than customer-built AI, and (2) using process metrics rather than outcome metrics to evaluate program performance.

Design a revised measurement framework for Salesforce's AI ethics program that addresses both criticisms. Your framework should include:

  • At least four outcome metrics (not just process metrics)
  • A methodology for measuring each metric
  • An assessment of the trade-offs involved in measuring outcomes rather than just processes
  • A recommendation for how the results should be reported publicly

Exercise 16 ⭐⭐⭐ † A healthcare system is evaluating an AI diagnostic tool for detecting early-stage cancer. The tool performs with 91% accuracy overall, but its accuracy is 94% for patients with light skin tones and 82% for patients with dark skin tones. The clinical team wants to deploy the tool immediately, arguing that 82% accuracy is still better than the current standard of care for the affected population. The ethics committee wants to delay deployment until the performance gap is closed.

Evaluate this decision from multiple perspectives:

a) The clinical director's perspective: patient outcomes
b) The chief financial officer's perspective: cost, liability, and market position
c) The compliance officer's perspective: regulatory exposure under civil rights law
d) The affected community's perspective: what fairness requires
e) Your own perspective: what the right decision is and why

The class should then discuss: Is "better than the current standard of care" sufficient justification for deploying a system with demonstrated disparate performance? Under what conditions, if any?
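To ground the discussion, the accuracy figures in the scenario translate directly into error counts. The cohort size below is hypothetical; the accuracy values come from the exercise:

```python
# Expected diagnostic errors per cohort, using the accuracy figures from
# the exercise. The cohort size is hypothetical.
def expected_errors(accuracy: float, n_patients: int) -> int:
    return round((1 - accuracy) * n_patients)

light_accuracy, dark_accuracy = 0.94, 0.82
n = 1_000  # patients per group (hypothetical)

print(expected_errors(light_accuracy, n))  # 60 errors per 1,000 light-skinned patients
print(expected_errors(dark_accuracy, n))   # 180 errors per 1,000 dark-skinned patients
```

Framed this way, the 12-percentage-point gap means patients with dark skin tones face three times the error rate — a number worth keeping in view when weighing "better than the current standard of care" against the disparity.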


Exercise 17 ⭐⭐⭐ The chapter argues that employee advocacy groups — like the Salesforce Ethics Alliance — can be productive participants in AI ethics governance when the relationship between formal and informal ethics functions is managed well. Analyze the following scenario:

A major technology company has a formal AI ethics board with an annual budget of $2 million and a mandate to review high-risk AI deployments. It also has an informal employee ethics advocacy group that has been critical of several recent product decisions. The two groups have a tense relationship: the formal board resents what it perceives as the advocacy group's naivete about technical trade-offs; the advocacy group resents what it perceives as the formal board's capture by business interests.

a) What organizational interventions might improve the relationship? b) What are the risks of the two groups working too closely together? Too independently? c) How would you design a governance structure that maintains productive tension without organizational dysfunction?


Exercise 18 ⭐⭐⭐ Conduct a comparative analysis of the AI ethics positioning of two large technology companies. For each company:

a) Find and summarize their publicly stated AI ethics principles or commitments.
b) Identify at least one documented case where their AI systems generated ethical concerns.
c) Assess whether their stated principles are consistent with their documented practices.
d) Evaluate the strength of their ethics governance structure (authority, accountability, transparency).
e) Assess whether their ethics positioning would be convincing to a sophisticated enterprise customer conducting AI ethics due diligence.

Conclude with a 200-word comparative assessment of which company's ethics program is more credible and why.


Part D: Expert-Level Deliverables (⭐⭐⭐⭐)

Exercise 19 ⭐⭐⭐⭐ † Build a Business Case for AI Ethics Investment

You are the Chief Ethics Officer of a company that deploys AI in one of the following contexts (choose one):

a) A financial institution using AI for mortgage lending decisions
b) A hospital system using AI for patient triage and resource allocation
c) A retailer using AI for targeted advertising and personalization
d) A municipality using AI for criminal justice risk assessment

Your CFO has asked you to justify a $1.5 million annual investment in an AI ethics program. Prepare a business case document (1,000–1,500 words) that includes:

  1. An executive summary of the proposed investment
  2. A risk register identifying the top five AI ethics risks for your organization (with likelihood and magnitude estimates)
  3. A cost-avoidance analysis estimating the expected value of the risks you are proposing to mitigate
  4. A value-creation analysis identifying the revenue, talent, and market access benefits
  5. A proposed program structure with a breakdown of how the $1.5 million would be allocated
  6. Key performance indicators by which you will measure the program's success
  7. A section addressing the limits of the business case and the values-based arguments that supplement it
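For item 3, the cost-avoidance analysis is an expected-value calculation over your risk register. A minimal sketch of the arithmetic — every probability, cost, and mitigation fraction below is invented for illustration and should come from your own register:

```python
# Expected annual loss avoided = sum over risks of
#   P(incident) * cost(incident) * fraction of that risk the program mitigates.
# All numbers below are invented for illustration.
risks = [
    # (annual probability, cost if it occurs, fraction mitigated by the program)
    (0.10, 8_000_000, 0.5),   # e.g. regulatory fine for a biased lending model
    (0.25, 3_000_000, 0.4),   # e.g. reputational incident requiring remediation
    (0.05, 20_000_000, 0.3),  # e.g. class-action exposure
]

expected_loss_avoided = sum(p * cost * mitigated for p, cost, mitigated in risks)
print(f"${expected_loss_avoided:,.0f}")  # compare against the $1.5M program cost
```

The comparison your CFO will make is this total against the $1.5 million ask — which is why item 7 matters: risks with low probability but catastrophic magnitude, and values-based considerations, are poorly captured by expected value alone.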

Exercise 20 ⭐⭐⭐⭐ Draft an Acceptable Use Policy

Following the Salesforce model, draft an Acceptable Use Policy (AUP) for an AI platform company's technology. The platform provides AI capabilities (natural language processing, computer vision, predictive analytics) to enterprise customers across industries.

Your AUP should:

  1. Identify at least five categories of prohibited use, with clear rationale for each prohibition.
  2. Identify at least three categories of use that require enhanced review (approval from a designated review process) before proceeding.
  3. Establish the governance process for reviewing edge cases.
  4. Address the question of enforcement: how will violations be detected and what consequences will follow?
  5. Include a provision for updating the policy as technology and social norms evolve.

In an accompanying memo (300 words), explain the key trade-offs you faced in drafting the policy and the design choices you made.


Exercise 21 ⭐⭐⭐⭐ † AI Ethics Due Diligence: Vendor Assessment

Your organization (choose a context) is evaluating three AI vendors for a high-stakes application. You have been asked to develop an AI ethics due diligence questionnaire and then conduct a mock evaluation of one real AI vendor using publicly available information.

Part 1: Develop a 15-question AI ethics due diligence questionnaire covering:

  • Governance structure and authority
  • Fairness testing and documentation
  • Transparency and explainability features
  • Data practices and privacy
  • Incident response and remediation
  • Regulatory compliance posture
  • External auditing practices

Part 2: Apply your questionnaire to one real AI vendor using only publicly available information (published principles, technical documentation, press coverage, regulatory filings). Document: (a) what information you found, (b) what information you could not find (and what the absence suggests), and (c) your assessment of the vendor's ethics posture.

Part 3: Write a one-page memo recommending for or against proceeding with this vendor, based on your due diligence findings.


Exercise 22 ⭐⭐⭐⭐ Board Briefing on AI Ethics Risk

You have been asked to prepare a 10-minute briefing for the board of directors of a publicly traded company on AI ethics risk. The board has modest technical literacy but strong business acumen.

Prepare:

  1. A one-page briefing document (executive summary format) covering: the top three AI ethics risks the company faces, the regulatory landscape, and the recommended governance investments.
  2. Three specific questions that you would ask the board to discuss and decide.
  3. A metrics dashboard showing the five KPIs the board should receive quarterly to monitor AI ethics risk.

In a 300-word reflection, explain: what information is most likely to be lost or distorted when translating AI ethics concerns for a board audience? What should practitioners do to mitigate that translation loss?


Exercise 23 ⭐⭐⭐⭐ † Ethics Washing Audit

Select a company that has published AI ethics principles. Conduct an "ethics washing audit" using the following methodology:

  1. Document the company's stated AI ethics commitments (from published principles, reports, and public statements).
  2. Research the company's actual AI practices using: academic research, regulatory filings, investigative journalism, and employee reviews (e.g., Glassdoor, Blind).
  3. Identify three to five areas where stated commitments and documented practices appear consistent.
  4. Identify three to five areas where there appears to be a gap between stated commitments and documented practices.
  5. Evaluate the company's ethics governance structure against the criteria in Section 5.9: authority, accountability, measurable commitments, external review, transparency about failures.
  6. Produce a 500-word assessment of whether the company's AI ethics program is genuine, performative, or somewhere in between — with specific evidence for your conclusion.

Be prepared to present your findings to the class and to defend your methodology.


Exercise 24 ⭐⭐⭐⭐ The Divergence Problem: When Business and Ethics Conflict

Design a hypothetical AI system for a specific business application in which the business case and the ethical case for responsible design clearly diverge — that is, where building the AI system ethically would materially reduce short-term profitability.

Then:

  1. Describe the AI system, its application, and its business model.
  2. Identify specifically where and why the business case and the ethical case diverge.
  3. Analyze the decision using three ethical frameworks from Chapter 3: consequentialist, deontological, and virtue ethics.
  4. Analyze the decision from a long-term vs. short-term business perspective: what are the long-term business consequences of each choice?
  5. Make a recommendation, and defend it.

This exercise should demonstrate that you can hold both the business argument and the ethical argument simultaneously, without reducing one to the other.


Exercise 25 ⭐⭐⭐⭐ † Global Variation Simulation

The chapter notes that AI ethics regulation varies substantially across jurisdictions. This exercise explores the implications of that variation for a multinational company.

A US-based AI company is considering whether to deploy the same AI system — a predictive analytics platform for employee performance management — in three markets: the United States, the European Union, and India.

Working in groups of three, each group member takes responsibility for one jurisdiction and researches:

  1. The applicable legal requirements for AI in employment decisions in that jurisdiction.
  2. The cultural and social norms around workplace monitoring and AI-mediated employment decisions.
  3. The likely regulatory trajectory (what new regulations are coming?).
  4. The practical implications for the AI system's design (what features would need to change?).

The group then convenes to answer: Should the company deploy one system with jurisdiction-specific modifications, or separate systems for each market? What are the cost, fairness, and governance implications of each approach?

Produce a joint 800-word recommendation and present it to the class.