Chapter 34 Exercises
Ethics in Automated Decision-Making
Exercise 34.1: Ethical Framework Application
Difficulty: Introductory
Apply all three ethical frameworks (consequentialism, deontology, virtue ethics) to each of the following scenarios. For each scenario: (a) identify the ethically relevant facts; (b) apply each framework to reach a conclusion or identify a key question; (c) note where the frameworks converge and where they diverge.
Scenario A: A bank's automated overdraft system declines customers who have been overdrawn more than three times in the past six months. These customers are disproportionately younger customers with irregular income (gig workers, students). The system correctly predicts that declined customers have higher overdraft default rates.
Scenario B: An insurance company's claims assessment algorithm automatically flags certain postcode areas as "elevated fraud risk." Claims from these areas require additional documentation and take 3× longer to process. The company's data shows that claims from these areas are indeed fraudulent at higher rates. The areas also have higher proportions of recent immigrant populations.
Scenario C: A financial wellbeing app uses behavioral data (spending patterns, savings habits, time spent in the app) to identify customers at risk of financial distress and proactively offers them debt management services. Customers have consented to data use in the app's terms and conditions. Most customers don't read the terms. The service genuinely helps some customers but also exposes them to upselling of financial products.
Scenario D: A large employer uses an AI screening tool to assess job applications for its compliance team. The tool scores applications based on features including university attended, prior employers, and writing style in cover letters. The compliance team remains predominantly drawn from a narrow range of universities and backgrounds.
Exercise 34.2: Ethical Analysis — The Accountability Gap
Difficulty: Intermediate
A regional bank deploys a vendor-provided automated credit decisioning system. The system is a black-box AI; the vendor does not share the model's feature set or weights. Eighteen months after deployment, the bank receives an Equal Credit Opportunity Act (ECOA) complaint from a borrower claiming the decision was discriminatory.
The bank's investigation reveals:
- The decision was made automatically (no human review)
- The bank cannot explain why the applicant was declined
- The vendor says the model was validated for compliance before sale but declines to share validation documentation, citing trade secrecy
- The bank's contract with the vendor does not include a right to audit the model
a) Map the accountability chain for this decision. Who might be responsible: the bank, the vendor, the data scientist who trained the model, the sales team that sold it, the bank's compliance officer who approved the procurement?
b) Apply the three ethical frameworks to assess the bank's decision to deploy a black-box system it cannot explain.
c) Draft six contractual provisions that the bank should have required in its vendor contract to prevent this situation. For each, explain the accountability gap it addresses.
d) The bank is now facing an ECOA complaint it cannot defend because it cannot explain the decision. From an ethical perspective, should the bank proactively disclose to the regulator that it cannot explain the decision? Or should it let the complaint process run? Apply the virtue ethics framework specifically to this question.
e) Design an "ethics due diligence checklist" (minimum 8 items) that the bank's compliance team should apply to any AI vendor procurement going forward.
Exercise 34.3: Designing Contestability
Difficulty: Intermediate
GDPR Article 22 and the Consumer Duty both require, in different ways, that automated decisions affecting customers can be contested and receive human review. Design a contestability process for each of the following automated systems:
System A: An automated fraud detection system that blocks customer card payments in real time. Requirements for your design: (i) what triggers contestation eligibility?; (ii) how does the customer request review?; (iii) what information does the reviewer receive?; (iv) what is the timeline for resolution?; (v) what remedies are available if the block was wrong?
System B: An automated AML system that flags accounts for enhanced due diligence, resulting in transaction delays. Requirements: same five questions as above, plus (vi) how does the contestability process interact with the bank's legal obligations not to "tip off" customers about suspicious activity reports (SARs)?
System C: A credit application decisioning algorithm that automatically declines applications below a score threshold. Requirements: same five questions as above, plus (vi) what specific adverse action reasons must be provided?; (vii) how does re-application work?
For each system, identify one specific design challenge and propose a solution.
Exercise 34.4: Ethics by Design Review
Difficulty: Applied
You are conducting an ethics-by-design review for a new automated KYC onboarding system being built by a UK challenger bank. The system uses:
- Document verification (OCR of passport/ID document)
- Facial recognition (matching selfie to document photo)
- Database checks (credit bureau, sanctions, PEP lists)
- Behavioral signals (time to complete application, device type, application channel)
- An ensemble ML model combining all signals to produce an onboarding approval/decline recommendation
The bank plans to accept the recommendation automatically for scores > 0.75 (approve) or < 0.25 (decline), with human review for the middle band.
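The band-based routing described above can be sketched as a small decision function. This is a minimal illustration using the 0.75/0.25 thresholds from the scenario; the function name and return labels are hypothetical, not part of the bank's actual system.

```python
def route_onboarding(score: float) -> str:
    """Route a KYC onboarding recommendation based on the ensemble score.

    Per the scenario: scores above 0.75 are accepted automatically,
    scores below 0.25 are declined automatically, and the middle band
    is escalated for human review.
    """
    if score > 0.75:
        return "auto-approve"
    if score < 0.25:
        return "auto-decline"
    return "human-review"


# Boundary scores (exactly 0.75 or 0.25) fall into the human-review band,
# since the scenario specifies strict inequalities for automation.
print(route_onboarding(0.90))  # auto-approve
print(route_onboarding(0.75))  # human-review
print(route_onboarding(0.10))  # auto-decline
```

Note that where the boundary scores land is itself an ethics-by-design choice: routing them to human review keeps the automated bands strictly inside the thresholds.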
a) Conduct an ethical pre-deployment analysis using all three frameworks. Identify the top 3 ethical risks for each framework.
b) The chapter discusses four core ethical problems: opacity, scale, accountability gap, consent. For each problem, identify a specific manifestation in this KYC system and propose a design mitigation.
c) The facial recognition component has a documented false non-match rate that is 2.8× higher for darker-skinned applicants (based on the vendor's FRVT-equivalent testing). The vendor says this is "within industry norms." Evaluate this situation from all three ethical frameworks and make a specific recommendation.
d) The bank's product team wants to add "time of application submission" as a behavioral signal, citing data showing that 2–4 AM applications have higher default rates (referencing the Cornerstone case study). Apply the ethical analysis from the chapter and make a recommendation.
e) Design a monitoring framework that would detect emerging ethical problems (disparate impact, opacity failures, contestation process breakdowns) in the first 12 months of the system's operation.
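One building block a monitoring framework like this might include is a periodic disparate-impact screen comparing approval rates across groups. The sketch below uses the "four-fifths rule" (a common screening heuristic, not a legal test) and entirely illustrative figures; the function name and group labels are hypothetical.

```python
def adverse_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compute each group's approval rate relative to the highest-rate group.

    `outcomes` maps a group label to (approvals, total applications).
    A ratio below 0.8 is a common screening threshold (the "four-fifths
    rule") for flagging potential disparate impact for investigation.
    """
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}


# Illustrative monthly onboarding figures (invented for this sketch).
monthly = {"group_a": (850, 1000), "group_b": (600, 1000)}

ratios = adverse_impact_ratio(monthly)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_b's ratio is well below the 0.8 screen
print(flagged)
```

A screen like this only detects disparity, not its cause; in a real monitoring framework it would trigger investigation, alongside separate checks for opacity failures (e.g. unexplainable declines) and contestation-process metrics (volumes, resolution times, overturn rates).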
Exercise 34.5: The Professional Ethics Dilemma
Difficulty: Applied — Reflective
Read the following scenario and write a 600-word analysis addressing the questions below.
Scenario: You are a compliance manager at a large bank. The bank has built an automated credit scoring system that you helped design and implement over the past 18 months. You believe the system is technically sound and legally compliant.
Three weeks before the system is scheduled to go live, your data science colleague shares a SHAP analysis showing that the model's primary signal for declining applications from a specific demographic group is "time since last application". The feature is heavily weighted because, in the training data, members of this group tend to reapply at lower rates after a decline, possibly because historical experiences of discrimination discouraged persistence. In effect, the model has learned to decline repeat applicants from this group at higher rates because of a pattern that historical discrimination itself created.
The feature is not illegal — "time since last application" is not a protected characteristic. The aggregate model performance is strong. The legal team says the system is compliant with ECOA. The head of retail credit is eager to launch.
You have one week before the launch decision meeting.
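A group-level summary of the kind the colleague describes is often produced by averaging absolute SHAP attributions per feature within each group and comparing groups. The sketch below works from hypothetical precomputed attribution values (no real model, data, or SHAP library call is assumed; feature names and numbers are invented for illustration).

```python
def mean_abs_attribution(rows: list[dict], feature: str) -> float:
    """Mean absolute SHAP-style attribution for one feature across applicants."""
    return sum(abs(row[feature]) for row in rows) / len(rows)


# Hypothetical per-applicant attributions (feature -> contribution to decline).
group_rows = [  # applicants from the affected demographic group
    {"time_since_last_application": 0.42, "income_stability": 0.05},
    {"time_since_last_application": 0.38, "income_stability": 0.07},
]
other_rows = [  # all other applicants
    {"time_since_last_application": 0.06, "income_stability": 0.21},
    {"time_since_last_application": 0.04, "income_stability": 0.19},
]

# A large gap between the two columns for one feature is the kind of
# signal that would single out "time since last application" as the
# group's primary decline driver.
for feat in ("time_since_last_application", "income_stability"):
    g = mean_abs_attribution(group_rows, feat)
    o = mean_abs_attribution(other_rows, feat)
    print(f"{feat}: group={g:.2f} others={o:.2f}")
```
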
Questions to address:
1. Apply all three ethical frameworks to the "time since last application" feature specifically.
2. What is your professional obligation? What should you do at the launch decision meeting?
3. What are the professional risks of raising this concern explicitly? What are the professional risks of not raising it?
4. Is there a way to raise the concern constructively, proposing a solution rather than simply objecting to the feature?
5. How does this scenario relate to the chapter's discussion of the compliance professional's distinctive ethical role?