Chapter 3: Assessment Quiz

Ethical Frameworks for AI Decision-Making

Total questions: 20
Suggested time: 50–60 minutes
Point values: Multiple Choice (2 pts each), True/False (1 pt each), Short Answer (5 pts each), Applied Scenario (10 pts each)
Total possible: 16 + 5 + 20 + 30 = 71 points


Part A: Multiple Choice (8 questions, 2 points each)

1. Which ethical framework holds that the rightness of an action is determined entirely by its consequences for aggregate welfare?

a) Deontology
b) Virtue ethics
c) Utilitarianism
d) Contractualism


2. Kant's categorical imperative includes the formulation: "Act so that you treat humanity, whether in your own person or in that of another, always as an end and never as a means only." Which of the following AI applications most directly violates this principle?

a) An AI system that recommends movies based on viewing history
b) A covert behavioral manipulation system designed to change users' political views without their knowledge
c) An AI content moderation system that removes hate speech with 95% accuracy
d) A medical AI system that assists radiologists in detecting tumors


3. Rawls's "veil of ignorance" thought experiment asks decision-makers to:

a) Ignore all available data and rely purely on moral intuition
b) Consider only the preferences of the majority
c) Choose rules for society without knowing their own position within it
d) Apply the principle of maximizing aggregate welfare


4. The "aggregation problem" in consequentialist ethics refers to:

a) The difficulty of measuring outcomes accurately
b) The fact that utilitarian calculus can justify harm to minorities if the majority benefit is large enough
c) The problem of determining which consequences are morally relevant
d) The challenge of aggregating data from multiple sources for AI training


5. According to the chapter's discussion of the MIT Moral Machine experiment (Awad et al., 2018), which of the following was a major finding?

a) Participants across all cultures showed identical moral preferences for autonomous vehicle decisions
b) A majority of participants preferred that autonomous vehicles protect passengers over pedestrians in all scenarios
c) Significant cross-cultural variation was found in moral preferences, particularly between Western, Eastern, and Southern country clusters
d) Participants preferred that autonomous vehicles be programmed to protect older individuals over younger ones


6. In virtue ethics, the concept of phronesis refers to:

a) The greatest happiness of the greatest number
b) Practical wisdom — the capacity for sound judgment in complex, context-dependent situations
c) The categorical rule that prohibits using people as means
d) The principle of choosing rules you would endorse from behind the veil of ignorance


7. The capabilities approach to justice, developed by Amartya Sen and Martha Nussbaum, evaluates social arrangements based on:

a) Their contribution to maximizing aggregate welfare
b) Whether they are consistent with duties and rights
c) The substantive freedoms people have to do and be what they have reason to value
d) Whether they reflect the values of the relevant cultural community


8. "Ethics washing" is best defined as:

a) An approach to AI ethics that applies multiple frameworks simultaneously
b) The use of ethical language and performative signals to deflect criticism without actually changing practices
c) The process of removing bias from AI training data
d) A contractualist method for evaluating AI systems from behind the veil of ignorance


Part B: True/False (5 questions, 1 point each)

9. True or False: Moral intuitions are unreliable and should be discarded when applying ethical frameworks to AI decisions.


10. True or False: According to the chapter, deontological ethics prohibits mass surveillance without consent regardless of whether the surveillance reduces crime.


11. True or False: The Ubuntu ethical framework from southern African philosophy primarily focuses on protecting individual rights and personal autonomy.


12. True or False: In the five-step moral cross-examination method, the chapter recommends giving special weight to the perspectives of the most vulnerable stakeholders when frameworks disagree.


13. True or False: Care ethics, developed by Carol Gilligan and Nel Noddings, holds that ethics is grounded in abstract universal principles rather than particular relationships and contexts.


Part C: Short Answer (4 questions, 5 points each)

Answer each question in 3–6 sentences.

14. Explain why the chapter argues that moral intuitions alone are insufficient for AI ethics decision-making. Identify two specific limitations of relying on intuition and explain how ethical frameworks address those limitations.


15. The hospital's AI triage tool in the chapter's opening hook improves average survival rates by 12% but performs worse for elderly patients with multiple comorbidities. Briefly explain how a capabilities approach analysis of this situation would differ from a pure utilitarian analysis. What different question does the capabilities approach ask, and why does it matter?


16. Case Study 3.2 (Project Maven) is described as a "character test" for Google. Using the vocabulary of virtue ethics, explain what the case reveals about the difference between organizational virtue and ethics washing. Identify at least two specific pieces of evidence from the case that inform your assessment.


17. The chapter argues that the global AI ethics conversation has been dominated by Western ethical frameworks, and identifies this as a problem. Explain the argument in your own words: what specifically has been lost by this domination, and what would a more genuinely inclusive AI ethics conversation require?


Part D: Applied Scenario (3 questions, 10 points each)

For each scenario, identify the relevant ethical frameworks, apply at least two of them, and state a clear, justified recommendation. Each answer should be 200–350 words.

18. The Hiring Algorithm Scenario

A large employer deploys an AI system to screen resumes for software engineering positions. A post-deployment audit reveals that the system rates resumes with traditionally male names 23% higher than equivalent resumes with traditionally female names. The system was not intentionally designed to do this — the pattern emerged from training data reflecting historical hiring patterns. The company argues that the system still produces better hiring outcomes on average than unassisted human screeners.

Apply consequentialist and deontological frameworks to this scenario. Do the frameworks agree or disagree about what the company should do? State your recommendation and explain how you resolved any disagreement between the frameworks.


19. The Mental Health Chatbot Scenario

A technology company launches a free mental health chatbot aimed at teenagers experiencing anxiety and depression. The chatbot provides evidence-based cognitive behavioral therapy techniques and is available 24/7. In clinical studies, it reduces self-reported anxiety scores by an average of 22% — comparable to brief therapy with a human counselor. However, the company has not disclosed to users that the chatbot collects conversation data that is used to train future versions of the system. The privacy policy mentions data collection in general terms but does not specify mental health conversations.

Apply care ethics and contractualist frameworks to this scenario. What does each framework identify as the most important ethical concern? Do they agree on a recommendation? What would you advise the company to do, and why?


20. The Predictive Policing Scenario

A city's police department deploys a predictive policing system that uses historical crime data, socioeconomic indicators, and location data to generate daily "hot spot" maps indicating areas where crime is most likely to occur. Officers are directed to increase patrol presence in flagged areas. An analysis of the system's performance over 18 months shows that it has increased arrest rates city-wide by 14%. However, 87% of the high-risk areas identified are in majority-Black and Latino neighborhoods, and community organizations report that increased patrol presence has resulted in harassment of residents and a deterioration of community trust in the police.

Apply three ethical frameworks to this scenario. Identify any points of agreement and disagreement among them. Then apply the full five-step moral cross-examination method and state your recommendation. Be explicit about how you handle the trade-off between the aggregate arrest rate benefit and the harm to affected communities.


Answer Key

Part A: Multiple Choice

  1. c — Utilitarianism (a form of consequentialism) holds that the right action maximizes aggregate welfare. (Section 3.2)

  2. b — Covert behavioral manipulation treats users as targets to be modified without their knowledge or consent, using them merely as means. Movie recommendations and content moderation involve transparent purposes; medical AI assists rather than deceives. (Section 3.3)

  3. c — The veil of ignorance asks you to choose rules for society without knowing your own position within it, forcing impartiality. (Section 3.5)

  4. b — The aggregation problem: utilitarian calculus adds up welfare across people, permitting harm to minorities if the majority benefit is sufficiently large. (Section 3.2)

  5. c — The Moral Machine experiment found dramatic cross-cultural variation in moral preferences, with three distinct clusters (Western, Eastern, Southern) showing meaningfully different patterns. (Case Study 3.1)

  6. b — Phronesis is Aristotle's term for practical wisdom — the judgment capacity that enables sound decisions in complex, context-specific situations. (Section 3.4)

  7. c — The capabilities approach evaluates arrangements by their effect on substantive freedoms — what people can actually do and be, not just their welfare or resources. (Section 3.6)

  8. b — Ethics washing is the use of ethical performance — principles documents, ethics teams, responsible AI language — to deflect criticism without producing genuine ethical change. (Section 3.4)

Part B: True/False

  9. False — The chapter explicitly argues that moral intuitions "are not nothing" and encode genuine moral wisdom. Frameworks discipline intuitions rather than replace them. (Section 3.1)

  10. True — Deontological analysis holds that mass surveillance without consent violates dignity and autonomy rights regardless of its crime-reduction effects. (Section 3.3)

  11. False — Ubuntu grounds moral identity in communal relationship and collective responsibility — "I am because we are" — which contrasts with individual-rights frameworks. (Section 3.8)

  12. True — Step 4 of the moral cross-examination method explicitly recommends weighting the perspectives of the most vulnerable when frameworks diverge. (Section 3.9)

  13. False — Care ethics holds that ethics is grounded in relationships and context-specific care, not abstract universal principles. Abstract principle-based ethics is precisely what care ethics critiques. (Section 3.7)

Part C: Short Answer — Grading Rubric

14. Full credit (5 pts): Identifies at least two of the following limitations — inconsistency across framings, cultural/demographic bias, poor calibration for large-scale statistical decisions. Explains how frameworks provide structure, common vocabulary, accountability, or explicit trade-off analysis that addresses these limitations.

15. Full credit (5 pts): Explains that utilitarian analysis asks "what is the average outcome?" while the capabilities approach asks "what can elderly patients with complex needs actually do and be?" Notes that the capabilities approach is distribution-sensitive and focused on concrete substantive freedoms, which reveals harms that average-outcome analysis obscures.

16. Full credit (5 pts): Uses virtue ethics vocabulary accurately (integrity, practical wisdom, honesty, courage). Identifies specific evidence from the case — e.g., principles published after (not before) the controversy, absence of ethics review in the contracting process, subsequent controversies suggesting principles did not create durable constraints.

17. Full credit (5 pts): Explains that Western frameworks center individual privacy, algorithmic transparency, and labor displacement while marginalizing collective data sovereignty, relational ethics, and intergenerational responsibility. Identifies specific losses — community consent mechanisms, Indigenous data sovereignty, and Ubuntu-based governance. Notes institutional requirements: genuine decision-making authority for non-Western communities, not just consultation.

Part D: Applied Scenario — Grading Rubric

18–20. Each applied scenario is worth 10 points:

  Framework application accuracy (5 pts): Does the student correctly apply the logic of each required framework to the specific scenario?
  Convergence/divergence identification (2 pts): Does the student correctly identify where the frameworks agree and disagree?
  Recommendation quality (2 pts): Is the recommendation specific, justified, and consistent with the analysis?
  Analytical sophistication (1 pt): Does the student handle complexity, trade-offs, and uncertainty appropriately?

Scenario 18 guidance: Consequentialism will focus on whether aggregate outcomes justify the gendered bias; deontology will identify a rights violation (sex discrimination) that is impermissible regardless of average performance. Frameworks disagree — the company should be advised to remediate the bias immediately, with a deontological rationale for rejecting the consequentialist defense.

Scenario 19 guidance: Care ethics will focus on the vulnerability of the teenager-chatbot relationship and the trust violation involved in undisclosed data collection from mental health conversations; contractualism (veil of ignorance) will ask whether teenagers would consent to this data use if they knew about it — the answer is almost certainly no. Both frameworks point toward: (a) explicit informed consent for mental health data use; (b) redesign of the privacy disclosure.

Scenario 20 guidance: Consequentialism faces a distribution problem — 14% aggregate arrest increase vs. concentrated harm in specific communities; deontology identifies rights violations (harassment, surveillance without consent); contractualism asks whether residents would choose this system not knowing whether they'd be police or community members. Moral cross-examination should identify the vulnerability of community residents as warranting extra weight, and should recommend at minimum: community oversight of the system, transparent reporting of disparity data, and genuine harm-monitoring — with suspension if community trust deterioration is documented.


This quiz aligns with the Chapter 3 learning objectives. It is recommended for use as a formative assessment after the Chapter 3 reading, or as part of a Part I (Foundations) cumulative exam.