Chapter 34 Quiz

Ethics in Automated Decision-Making

14 questions. Answers follow.


1. The chapter argues that "compliance is necessary but not sufficient for ethical behavior." This means:

A) Compliance is irrelevant — ethics is all that matters
B) Legal compliance defines the minimum required; ethical analysis asks what is right above that minimum
C) Ethical firms never need to worry about compliance because good character guarantees legal compliance
D) Compliance and ethics are identical — if you are compliant, you are ethical by definition


2. Consequentialist ethical analysis evaluates automated decision-making systems by:

A) Whether the system respects the rights of affected individuals regardless of outcomes
B) Whether the people who built the system are of good character
C) The aggregate outcomes produced — benefits vs. harms across all affected parties
D) Whether the system's design process followed prescribed ethical procedures


3. A deontological critique of an automated credit scoring system would most likely focus on:

A) Whether the system's aggregate false positive rate is below a specified threshold
B) Whether applicants have a right to explanation, contestation, and not to be assessed through discriminatory proxies — independent of the system's overall accuracy
C) Whether the system produces better credit loss outcomes than the alternative
D) Whether the system was built by a team with diverse demographic representation


4. Virtue ethics asks, in the context of automated financial decision-making:

A) What does the law require in this situation?
B) What produces the best aggregate outcome across all affected parties?
C) What does an individual applicant's right to fair treatment require?
D) What does this decision reflect about the kind of institution we are and aspire to be?


5. The "opacity problem" in automated decision-making refers to:

A) The lack of racial diversity in the data science teams building these systems
B) The difficulty of auditing vendor systems because proprietary source code is not disclosed
C) The inability of algorithms to articulate the reasoning behind specific decisions in human-interpretable terms, undermining accountability
D) The use of encryption that prevents regulators from accessing transaction data


6. The "scale problem" in automated decision-making ethics means:

A) Large firms have more complex systems and therefore face more ethical risks
B) Decisions made by algorithms affect millions of people, so errors and biases that would be tolerable at individual scale cause systemic harm at algorithmic scale
C) The scale of regulatory requirements makes ethical analysis impractical
D) Algorithms should only be used by firms large enough to have dedicated ethics teams


7. A credit scoring model uses postcode as a feature. Research shows that the postcodes the model rates highest-risk correlate strongly with areas of historically concentrated ethnic minority populations. The model does not use race or ethnicity as features. This situation illustrates:

A) No ethical problem — postcode is a neutral geographic variable
B) The algorithmic amplification of historical discrimination through proxy variables, even when protected characteristics are not directly used
C) A legal violation that must be immediately reported to the FCA
D) A model validation failure — postcode should not be used in credit scoring under any circumstances


8. Case A in the chapter (AML alert closing a business account for 17 days) illustrates which ethical tension?

A) Whether AML compliance is more important than customer service
B) The tension between aggregate benefit (preventing money laundering across the system) and specific harm (a £180,000 loss for a legitimate business flagged as a false positive)
C) Whether technology firms are liable for the harms caused by their platforms
D) Whether regulators should pre-approve AML systems before deployment


9. Priya asks her client's CEO: "If your mother applied for credit through this system and was declined, would you be proud of how that decision was made?" This question represents which ethical approach?

A) Consequentialism — focusing on aggregate outcomes for all users
B) Deontology — identifying specific rights that must be respected
C) Virtue ethics — asking whether the system reflects an organization's values and character
D) Regulatory compliance — checking whether the system meets required standards


10. "Ethics by design" in the context of automated decision-making means:

A) Hiring an external ethics committee to review systems annually
B) Incorporating ethical analysis — potential harms, disparate impact testing, explainability, human oversight — into the design and development process, not as a retrospective check
C) Making all algorithm source code publicly available for ethical review
D) Requiring that all automated decisions be approved by a certified ethicist before implementation


11. The chapter identifies "human accountability" as a principle of ethical automated decision-making. This means:

A) Humans should make all financial decisions without algorithmic assistance
B) Automated systems should always defer to human judgment when the two disagree
C) For consequential decisions, there must be a specific person or function that is genuinely responsible and can be held accountable for the system's behavior
D) Automated decisions require the physical signature of a human approver


12. The chapter criticizes "reputation-based ethics" — grounding ethical behavior in business benefits. What is the specific problem with this approach?

A) It is illegal to market products on the basis of ethical behavior
B) Reputation-based ethics works in easy cases but fails when ethical behavior becomes costly — at which point the business rationale for ethical action disappears
C) Customers don't care about ethics; only regulators do
D) Reputation-based ethics produces better outcomes than principled ethics


13. An automated fraud detection model achieves 8% higher recall (catching more fraud) at the cost of a 3% higher false positive rate specifically for customers with certain South Asian names. The correct ethical analysis is:

A) The recall improvement justifies the tradeoff because the aggregate benefit outweighs the group-specific harm
B) The system must be rejected entirely because it produces any differential outcome
C) Multiple ethical frameworks must be applied: the consequentialist aggregate improvement, the deontological right of affected customers not to bear disproportionate false positive burden, and the virtue ethics question of whether deploying this system is consistent with the organization's values
D) The 3% differential is within legal thresholds and therefore requires no further ethical analysis


14. The chapter's key claim about the compliance professional's distinctive ethical role is:

A) Compliance professionals should become moral philosophers before advising on AI systems
B) Compliance professionals are uniquely positioned to raise ethical questions — who is this affecting, is it fair, can we explain it — in institutional contexts where those questions might otherwise not be asked
C) Compliance professionals should defer all ethical questions to external ethics consultants
D) The ethical role of compliance professionals is limited to identifying legal violations


Answer Key

Q1: B. Compliance is the legal floor. Ethical analysis asks what is right above that floor — recognizing that the most harmful practices in financial services are often technically legal.

Q2: C. Consequentialism evaluates by aggregate outcomes across all parties. Key questions: who benefits? Who is harmed? How are outcomes distributed? Does the aggregate calculation include all affected parties?

Q3: B. Deontology focuses on rights: explanation, contestation, non-discrimination. These rights apply regardless of whether the system's aggregate performance is good. A rights violation is not cured by good aggregate performance.

Q4: D. Virtue ethics asks what kind of institution this makes us — not what the law requires (compliance), not what maximizes aggregate outcomes (consequentialism), but what reflects organizational values and character.

Q5: C. The opacity problem: algorithms cannot explain specific decisions in human-interpretable terms. This undermines the accountability chain — a customer cannot contest what they cannot understand; an institution cannot take responsibility for what it cannot explain.

Q6: B. Scale transforms harm. A 1% error rate that harms 10 customers per day is individually manageable. The same 1% error rate applied to 10 million customers per day is a systematic harm affecting 100,000 people daily.

Q7: B. This illustrates proxy discrimination: protected characteristics are encoded through geographically correlated variables. The algorithm reproduces historical discrimination without directly using the protected variable. This is the central mechanism of algorithmic fairness problems (Chapter 29).
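The proxy mechanism behind Q7 can be made concrete with a minimal sketch. All data here is synthetic and the postcode areas, group proportions, and "high-risk" designations are illustrative assumptions, not figures from the chapter:

```python
import random

random.seed(0)

# Synthetic population: a protected attribute the model never sees
# is geographically concentrated in certain postcode areas.
population = []
for _ in range(10_000):
    minority = random.random() < 0.3
    # Historical segregation: minority applicants live mostly in areas A and B.
    area = random.choice(["A", "B"]) if minority else random.choice(["B", "C", "D"])
    population.append((area, minority))

# The model penalises "high-risk" postcode areas (A and B in this sketch).
HIGH_RISK_AREAS = {"A", "B"}

def flag_rate(group):
    """Share of a group that lands in a high-risk postcode area."""
    return sum(area in HIGH_RISK_AREAS for area, _ in group) / len(group)

minority_rate = flag_rate([p for p in population if p[1]])
majority_rate = flag_rate([p for p in population if not p[1]])

# Race/ethnicity is never an input, yet the postcode proxy yields
# sharply different high-risk rates for the two groups.
print(f"minority flagged high-risk: {minority_rate:.0%}")
print(f"majority flagged high-risk: {majority_rate:.0%}")
```

The point of the sketch is that the disparity is only visible if someone measures outcomes per group — which is exactly the disparate impact testing that "ethics by design" (Q10) builds into development.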
Q8: B. The AML case illustrates the aggregate-vs.-specific tension central to consequentialist analysis. Society benefits from AML detection (aggregate benefit); the specific legitimate business bears severe harm (specific harm). The ethical question is: how should institutions weigh and mitigate this tension?

Q9: C. "Would you be proud?" is a virtue ethics question — it asks about character and organizational values. It is not about calculating outcomes (consequentialism) or identifying specific rights (deontology).

Q10: B. Ethics by design means incorporating ethical analysis into the design process: identifying harms before deployment, testing for disparate impact during development, designing explainability as a requirement, building human oversight into the system architecture.

Q11: C. Human accountability requires a named person or function that is genuinely responsible for the system's behavior. Diffusing accountability across vendors, data scientists, and deployers means no one is genuinely accountable.

Q12: B. Reputation-based ethics is contingent on business benefit. When ethical behavior becomes costly (reducing model accuracy to address disparate impact; providing genuinely meaningful human oversight), the business rationale dissolves. Ethics grounded solely in reputation is not stable.

Q13: C. Complex ethical tradeoffs require multiple frameworks. Consequentialist analysis looks at aggregate benefit. Deontological analysis asks whether certain customers bear rights violations. Virtue ethics asks whether deploying this tradeoff is consistent with the organization's values. No single framework gives the complete answer.
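The group-wise false positive comparison underlying Q13 can be sketched as follows. The data is synthetic; the 5% and 8% rates are chosen only to reproduce the 3-point gap described in the question, and the `group` field stands in for whatever attribute is under test:

```python
from dataclasses import dataclass

@dataclass
class Case:
    group: str       # "A" = baseline group, "B" = affected group
    is_fraud: bool   # ground truth
    flagged: bool    # model decision

def false_positive_rate(cases):
    """Share of legitimate (non-fraud) cases the model wrongly flags."""
    legit = [c for c in cases if not c.is_fraud]
    return sum(c.flagged for c in legit) / len(legit)

# Toy outcomes reproducing the pattern in Q13: legitimate customers in
# group B are flagged more often than those in group A.
cases = (
    [Case("A", False, False)] * 95 + [Case("A", False, True)] * 5 +
    [Case("B", False, False)] * 92 + [Case("B", False, True)] * 8
)

fpr_a = false_positive_rate([c for c in cases if c.group == "A"])
fpr_b = false_positive_rate([c for c in cases if c.group == "B"])
gap = fpr_b - fpr_a

# An aggregate recall number would hide this gap entirely; computing the
# rate per group is what makes the deontological question askable.
print(f"FPR gap (B - A): {gap:.1%}")
```

Note the design point: the aggregate metric and the per-group metrics are both true, and the quiz's answer is that neither one alone settles the ethical question.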
Q14: B. The compliance professional's distinctive ethical role is to ask, systematically and institutionally, the questions that others are not structurally positioned to ask: who is this affecting? Is it fair? Can we explain and defend it? This is a function that the compliance profession is uniquely placed to perform.