Chapter 5: Quiz — The Business Case for Ethical AI

Total questions: 20
Format: 8 Multiple Choice | 5 True/False | 4 Short Answer | 3 Applied Scenarios
Recommended time: 45–60 minutes


Part I: Multiple Choice

Select the single best answer for each question.


Question 1

Which of the following best describes the difference between the "compliance framing" and the "strategic framing" for AI ethics?

A) The compliance framing applies only to regulated industries; the strategic framing applies to technology companies.
B) The compliance framing asks "what must we do to avoid punishment?"; the strategic framing asks "how do we capture value from doing what's right?"
C) The compliance framing is more expensive; the strategic framing is more efficient.
D) The compliance framing is appropriate for legal teams; the strategic framing is appropriate for marketing teams only.


Question 2

The Optum healthcare algorithm controversy (referenced in the chapter's opening) is primarily an example of which type of bias?

A) The algorithm used protected characteristics as explicit inputs to the model.
B) The algorithm used a proxy variable (healthcare spending) that encoded existing racial disparities, producing discriminatory outcomes without explicit use of race.
C) The algorithm was trained on too little data to make accurate predictions.
D) The algorithm was intentionally designed to direct resources away from Black patients.


Question 3

Under Illinois' Biometric Information Privacy Act (BIPA), which of the following statements is most accurate?

A) Companies may collect biometric data from publicly available photos without consent because the photos are already public.
B) BIPA only applies to government agencies, not private companies.
C) BIPA requires informed written consent before collecting biometric data, regardless of whether the data was derived from publicly available sources, and provides a private right of action.
D) BIPA applies only to fingerprint data, not facial geometry.


Question 4

What is the primary business argument for investing in diverse training data for AI systems?

A) Diverse training data is required by the EU AI Act.
B) Diverse training data signals ethical commitment to regulators without requiring substantive governance changes.
C) AI systems trained on diverse data are more accurate across the full population of users, reducing performance disparities that constitute both ethical failures and technical failures.
D) Diverse training data reduces the cost of model training by reducing dataset size.


Question 5

In the context of AI ethics, "ethics washing" refers to:

A) The process of cleaning unethical data from AI training sets.
B) The practice of making public commitments to ethical AI without the accountability mechanisms, authority, or institutional investment required to deliver on those commitments.
C) The legitimate use of ethical arguments to justify business decisions that would otherwise be unpopular.
D) A regulatory audit process in which AI systems are reviewed for compliance with ethics standards.


Question 6

Which of the following features is most important for distinguishing a genuine AI ethics program from a performative one?

A) A publicly posted AI ethics principles document endorsed by the CEO.
B) An annual diversity report showing demographic statistics for the AI team.
C) An ethics function with authority to influence or block product decisions, accountability mechanisms, and public transparency about outcomes.
D) Membership in an industry AI ethics consortium.


Question 7

The "asymmetry of trust" argument in Section 5.2 refers to which of the following?

A) Large companies have more trust to lose than small companies from AI ethics failures.
B) Trust is built slowly through consistent behavior over time and can be destroyed quickly by a single visible failure.
C) Consumer trust in AI is systematically lower than consumer trust in human judgment.
D) The AI ethics community distrusts technology companies more than they distrust other industries.


Question 8

According to the chapter, Scott Page's research on the "diversity bonus" is relevant to AI development because:

A) Diverse AI teams are required by most AI governance frameworks.
B) Diverse teams are more likely to comply with anti-discrimination law during AI development.
C) Diverse teams find more bugs, test more edge cases, and anticipate more failure modes — producing more robust AI systems, not just more ethical ones.
D) Diverse teams develop AI systems faster because they have access to more data sources.


Part II: True/False

Write TRUE or FALSE and provide a one-sentence explanation of your answer.


Question 9

The EU AI Act's maximum fine for non-compliance is 4% of global annual turnover, the same as GDPR's maximum fine.


Question 10

The chapter argues that the business case for ethical AI is always sufficient to justify ethical behavior — that is, when the business case is weak, ethical behavior is not obligatory.


Question 11

An AI system can be fully compliant with all existing anti-discrimination law and still produce discriminatory outcomes under the disparate impact doctrine.


Question 12

Privacy-by-design refers to a set of technical tools that are added to AI systems after deployment to protect user privacy from external breaches.


Question 13

The Clearview AI BIPA settlement included a prohibition on selling its database to private (non-government) companies, effectively foreclosing most of the civilian market.


Part III: Short Answer

Answer in 100–200 words each.


Question 14

Explain the concept of "distribution shift" and describe one specific example from healthcare or criminal justice in which distribution shift produces both an ethical failure and a performance failure. Why does this example support the argument that bias is a quality problem, not just an ethical problem?


Question 15

The chapter identifies several ways in which ethics washing can backfire as a business strategy. Describe two specific mechanisms by which ethics washing is detected — one internal (within the organization) and one external (outside the organization) — and explain the business consequences that follow from each type of detection.


Question 16

Describe the "risk register" approach to building the internal business case for AI ethics investment. What are the four components of a risk register entry (as described in the chapter), and how does quantifying AI ethics risk enable comparison with other enterprise risks?


Question 17

The Salesforce case study identifies three specific structural features that made the Office of Ethical and Humane Use different from a typical corporate ethics statement. Describe those three features and explain why each one matters for the program's effectiveness.


Part IV: Applied Scenarios

Read each scenario carefully and respond as directed.


Question 18 (Applied Scenario)

The Hiring Algorithm Decision

MedTech Solutions, a medical device company, is considering deploying an AI-based resume screening tool to manage the high volume of applications for technical roles. The vendor's marketing materials claim the tool uses "objective, data-driven scoring." Your preliminary analysis reveals:

  • The tool was trained primarily on historical hire data from the past eight years.
  • The company's historical technical workforce is 78% male.
  • The vendor does not publish fairness testing results across demographic groups.
  • The vendor's terms of service say the tool "complies with applicable law."

Answer the following:

a) Identify three specific business risks this deployment generates, with the type of risk (legal, reputational, operational, talent) for each.

b) Identify the specific legal frameworks that would apply if the tool produces disparate impact against women or minority candidates.

c) What additional information would you request from the vendor before proceeding?

d) Would you recommend proceeding with the deployment as proposed, modifying it with conditions, or rejecting it? Defend your recommendation in 3–4 sentences.


Question 19 (Applied Scenario)

The Facial Recognition Pitch

A security technology startup has approached your company — a large retail chain — with a pitch for a facial recognition system that would identify known shoplifters as they enter stores by comparing their faces against a database of individuals previously caught shoplifting. The startup claims the system would reduce retail theft by 30% and is "already deployed successfully by 50 retailers nationwide."

The proposal raises several concerns among your team:

  • It is unclear how individuals end up in the "shoplifter database" and how errors are corrected.
  • The system has not been tested for accuracy across racial and demographic groups.
  • Your store locations include Illinois, Texas, and Washington — all states with biometric privacy laws.
  • A competitor retailer was sued after deploying a similar system.

Answer the following:

a) Identify the top three legal risks associated with deploying this system, citing the specific legal frameworks that apply.

b) Beyond legal risk, identify two additional categories of business risk (from the framework in this chapter) that this deployment generates.

c) Describe the specific due diligence steps you would require before making a deployment decision.

d) What conditions, if any, would need to be met for you to recommend proceeding with deployment? If no conditions would make deployment acceptable, explain why.


Question 20 (Applied Scenario)

The Board Presentation

You are Chief Ethics Officer of a publicly traded technology company. The company's CEO has asked you to present at the next board meeting on the topic of AI ethics risk and what the company is doing about it. The board has strong business credentials but limited technical AI knowledge. You have 10 minutes.

Answer the following:

a) What are the three most important points you would make to this board in 10 minutes? For each point, explain why it is important for a board audience specifically (as opposed to a technical or compliance audience).

b) The board chair asks: "Our legal team tells us we're compliant with all applicable AI regulations. Isn't that sufficient?" How do you respond in a way that is direct, evidence-based, and appropriately respectful of the legal team's contribution?

c) A board member asks: "How much should we be spending on this?" Describe the framework you would use to answer this question, including what information you would need and what methodology you would apply.

d) In a 3–5 sentence reflection: what is the primary risk of translating AI ethics concerns into board-level language? What is lost in that translation, and how do you minimize that loss?


Answer Key


Part I: Multiple Choice

  1. B — The compliance framing is reactive and asks what must be done to avoid punishment; the strategic framing is proactive and asks how value can be created through ethical practice. (Section 5.1)

  2. B — The Optum algorithm used healthcare spending as a proxy for health need, which encoded existing racial disparities because Black patients receive less care for equivalent health needs. No protected characteristic was explicitly used. (Opening section)

  3. C — BIPA requires informed written consent before collecting biometric data including facial geometry, regardless of whether the data derives from publicly available sources. The private right of action (allowing suits without demonstrating actual harm) is a distinctive and financially significant feature. (Section 5.3)

  4. C — The primary business argument for diverse training data is accuracy: AI systems trained on representative data perform better across the full user population, reducing the performance gaps that constitute both ethical failures and technical liability. (Section 5.4)

  5. B — Ethics washing is the AI equivalent of greenwashing: making public commitments without the structural features (authority, accountability, measurement, transparency) required to deliver on them. (Section 5.9)

  6. C — A principles document is a starting point, not a governance structure. The distinguishing features of genuine ethics programs are authority (the ability to influence decisions), accountability (consequences for violations), and transparency (public reporting on outcomes). (Section 5.9)

  7. B — The asymmetry of trust refers to the temporal asymmetry: trust is accumulated slowly through consistent ethical behavior and destroyed quickly through a single visible failure. (Section 5.2)

  8. C — Scott Page's diversity bonus research shows diverse teams outperform homogeneous ones on complex problem-solving; in AI development, this translates to more comprehensive testing, more edge case identification, and more robust systems. (Section 5.8)


Part II: True/False

  9. FALSE — The EU AI Act's maximum fine is 6% of global annual turnover (or €30 million, whichever is higher), which is higher than GDPR's 4% maximum. (Section 5.3)

  10. FALSE — The chapter explicitly acknowledges in Section 5.11 that the business case is sometimes insufficient to justify ethical behavior, and argues that ethical behavior remains obligatory for deontological reasons independent of its business benefit.

  11. TRUE — Under the disparate impact doctrine, facially neutral AI systems can violate anti-discrimination law if they produce discriminatory outcomes across protected groups, even without any protected characteristic being used as an explicit input. (Section 5.3)

  12. FALSE — Privacy-by-design refers to the practice of building privacy protection into system architecture during development — a proactive design philosophy, not a post-deployment technical add-on. (Section 5.8)

  13. TRUE — The BIPA settlement included a prohibition on selling to private (non-government) companies, effectively foreclosing most of the civilian market that was part of the company's original growth ambitions. (Case Study 5.2, Section 7)


Part III: Short Answer

Question 14 — Strong answers will: (1) define distribution shift as the difference between training data distribution and deployment data distribution; (2) provide a specific healthcare example (diagnostic AI trained predominantly on male or light-skinned patients performs worse on female or dark-skinned patients) or criminal justice example (recidivism AI trained on historical data in which minority defendants were over-represented in arrests, not just in actual crime); (3) connect the performance failure to the ethical failure — both exist because the training data was not representative of the full population.
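The performance half of this rubric can be made concrete with a toy simulation (purely synthetic, not from the chapter): a threshold "model" that is perfect on the population it was trained on degrades sharply on a shifted deployment population, illustrating why unrepresentative training data is a quality failure as well as an ethical one.

```python
import random

random.seed(0)

def sample(mean, n):
    # Feature x ~ Normal(mean, 1); the true label is whether x exceeds the
    # group's own boundary (its mean), so each group's labels are balanced.
    return [(x, x > mean) for x in (random.gauss(mean, 1.0) for _ in range(n))]

train  = sample(mean=0.0, n=5000)  # population the model was built on
deploy = sample(mean=2.0, n=5000)  # shifted deployment population

# "Model": the threshold that is optimal on the training data (x > 0.0).
def predict(x):
    return x > 0.0

def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

print(f"training accuracy:   {accuracy(train):.2f}")   # 1.00 by construction
print(f"deployment accuracy: {accuracy(deploy):.2f}")  # roughly 0.5, near chance
```

The same gap appears whether you call it a fairness problem (the deployed group is systematically misclassified) or an engineering problem (the model's accuracy claim does not hold in production), which is exactly the rubric's point (3).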

Question 15 — Strong answers will: (1) describe at least one internal detection mechanism (employees comparing stated values to product decisions, advocacy groups, ethical whistleblowing channels) and explain how it generates consequences (departures, internal pressure, leaked communications); (2) describe at least one external detection mechanism (regulatory investigation, investigative journalism, academic research) and explain its consequences (enforcement actions, press coverage, congressional attention); (3) note the specific aggravating factor: detection makes the stated principles themselves evidence against the company.

Question 16 — Strong answers will: (1) describe the risk register's components as: risk identification, likelihood assessment, magnitude assessment, and expected value calculation; (2) explain that expected value (probability × magnitude) enables comparison with other enterprise risks that management already quantifies; (3) note that the mitigation investment can be compared to the expected value of risk reduction to produce an ROI calculation that is legible to CFOs and boards.
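The rubric's expected-value arithmetic can be sketched in a few lines. All field names and figures below are hypothetical, chosen only to illustrate the probability × magnitude calculation and the mitigation ROI comparison it enables:

```python
def expected_value(probability, magnitude):
    """Annualized expected loss for one risk register entry."""
    return probability * magnitude

# Hypothetical entry: biometric-consent exposure in one jurisdiction.
risk = {
    "identification": "Biometric data collected without written consent",
    "likelihood": 0.10,        # estimated annual probability of enforcement
    "magnitude": 20_000_000,   # estimated loss in dollars if it materializes
}
risk["expected_value"] = expected_value(risk["likelihood"], risk["magnitude"])

# Mitigation ROI: compare mitigation cost to the expected risk reduction.
mitigation_cost = 500_000
residual_likelihood = 0.02   # estimated likelihood after mitigation
risk_reduction = risk["expected_value"] - expected_value(
    residual_likelihood, risk["magnitude"]
)
roi = (risk_reduction - mitigation_cost) / mitigation_cost

print(f"expected annual loss: ${risk['expected_value']:,.0f}")  # $2,000,000
print(f"mitigation ROI: {roi:.1f}x")                            # 2.2x
```

Expressing the entry this way is what makes the risk "legible to CFOs and boards": the output is in the same units (expected dollars, ROI) as every other quantified enterprise risk.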

Question 17 — Strong answers will identify: (1) cross-functional authority — the office could participate in contract reviews, not just advise; (2) a formal case review process with documented outcomes; (3) public reporting on activities and outcomes creating external accountability. Answers should explain that authority matters because without it the function is merely symbolic; the case review process matters because it creates institutional memory and consistency; public accountability matters because it creates external pressure for follow-through.


Part IV: Applied Scenarios

Question 18 — Strong answers will:

(a) identify at minimum: legal risk (EEOC/Title VII disparate impact liability from training on historically biased data), reputational risk (Amazon hiring algorithm precedent), and talent risk (female candidates and employees who perceive the company as discriminatory);
(b) cite Title VII, the EEOC's AI guidance on employment decisions, and potentially state anti-discrimination law;
(c) request: demographic performance testing across gender and race, documentation of training data composition, auditing methodology, and the vendor's regulatory compliance history;
(d) most defensible recommendation: conditional deployment (not outright rejection), requiring independent fairness audit, performance data across demographic groups, and a human review process for rejections before proceeding.

Question 19 — Strong answers will:

(a) identify at minimum: BIPA liability in Illinois, Texas, and Washington (with specific reference to the statutory damages structure); potential Title VI or consumer protection exposure; and Computer Fraud and Abuse Act issues if the database was assembled from scraped data;
(b) identify at minimum two of: talent risk (association with discriminatory technology), reputational risk (one adverse news story), customer trust risk (shoppers who are wrongly identified);
(c) due diligence should include: accuracy testing by demographic group, database inclusion criteria and error correction process, vendor's existing litigation exposure, legal review under BIPA in applicable states;
(d) there is no single correct answer; strong answers will engage seriously with the conditions under which deployment could be ethical (meaningful accuracy equity, consent-based database, robust error correction) and will acknowledge the case for rejection if those conditions cannot be met.

Question 20 — Strong answers will:

(a) identify three board-relevant points such as: the financial magnitude of AI ethics risk (using BIPA and EU AI Act figures), the gap between legal compliance and adequate risk management, and the governance expectation that boards have AI literacy;
(b) respond to the "compliance is sufficient" question directly: compliance is the floor, not the ceiling; regulations lag technology; a compliance-only posture leaves the organization exposed to risks that current law doesn't cover but future law, litigation, and regulatory innovation will;
(c) the framework should include: a risk register approach, a cost-avoidance analysis comparing mitigation cost to expected value of risk, and a value-creation component;
(d) the reflection should identify that board-level translation typically loses technical nuance, genuine uncertainty, and distributional harm concerns — these losses should be mitigated by presenting specific cases, using concrete financial estimates rather than abstract principles, and ensuring the board has a continuing education resource on AI ethics developments.