Chapter 1: Quiz

What Is AI Ethics? Framing the Challenge

Total Questions: 20
Estimated Time: 35–45 minutes

Instructions: Answer all questions. For multiple choice questions, select the single best answer. For true/false questions, indicate whether the statement is True or False and briefly explain your reasoning. For short answer questions, write 2–3 sentences. For applied scenario questions, respond to each sub-question.


Part A: Multiple Choice (8 questions)

Question 1

Which of the following best distinguishes AI ethics from AI safety as fields of inquiry?

A) AI ethics focuses on preventing harm from AI systems; AI safety focuses on the business benefits of responsible AI.

B) AI ethics is primarily concerned with moral questions about currently deployed AI systems and their social effects; AI safety focuses primarily on preventing catastrophic or existential harm from future advanced AI systems.

C) AI safety is concerned with technical problems in AI design; AI ethics is concerned exclusively with legal compliance.

D) AI ethics and AI safety are different names for the same field, distinguished only by whether the researcher has a background in philosophy or engineering.


Question 2

In the SyRI case (Privacy First et al. v. State of the Netherlands, 2020), what was the Hague District Court's primary basis for striking down the system?

A) The system was found to be factually inaccurate and produced more false positives than true positives in fraud detection.

B) The system explicitly used race and national origin as input variables, which the court found discriminatory.

C) The system's opacity — the inability of citizens to understand or contest their risk scores — violated Article 8 of the European Convention on Human Rights.

D) The Dutch parliament had not authorized the collection of the data types SyRI used, making the legal basis insufficient.


Question 3

A company claims to be committed to "responsible AI" and has published a detailed set of AI principles on its website. An independent investigation later reveals that the company's AI hiring tool systematically disadvantages candidates from certain racial groups, a fact the company's internal teams were aware of for over a year. This scenario best illustrates which concept?

A) The accountability gap

B) Ethics washing

C) The optimization trap

D) Algorithmic discrimination


Question 4

Which of the following is an example of the "optimization trap" as described in this chapter?

A) A company intentionally programs its AI to discriminate against certain users to benefit its core customer demographic.

B) A government deploys an AI welfare fraud detection system without disclosing how it works.

C) A social media platform's recommendation algorithm, optimized for engagement (watch time), systematically recommends increasingly extreme content because extreme content reliably generates longer viewing sessions.

D) A hospital's AI diagnostic system performs less accurately on patients who speak English as a second language because they were underrepresented in the training data.


Question 5

Which of the following most accurately describes the "accountability gap" in AI systems?

A) The difference in capability between AI systems built by large technology companies and those built by smaller firms.

B) The difficulty of assigning clear moral and legal responsibility for harm when that harm results from a complex chain of automated decisions, institutional choices, and individual actions distributed across multiple actors.

C) The gap between the ethical principles an organization publishes and the practices it actually implements.

D) The absence of regulatory frameworks governing AI systems in most industries outside of healthcare and finance.


Question 6

The chapter argues that AI ethics requires attending to technical, social, and institutional dimensions. Which of the following situations is best understood primarily as an institutional AI ethics failure?

A) A facial recognition system has significantly higher error rates for individuals with darker skin tones because the training dataset was unrepresentative.

B) An AI credit scoring model learns to replicate historical redlining patterns because it was trained on discriminatory historical loan data.

C) A hospital deploys an AI diagnostic tool that clinicians have been told to follow without override authority, and no process exists for reporting errors or triggering a system review.

D) A natural language processing system fails to understand regional dialects because its training corpus was drawn predominantly from online text generated in major urban centers.


Question 7

The European Union's Artificial Intelligence Act (2024) establishes a framework for AI regulation. Which of the following best describes its approach?

A) It prohibits all AI use in public sector applications, restricting AI deployment to private businesses.

B) It creates a tiered risk-based framework that imposes the most significant obligations on "high-risk" AI systems including those used in employment, credit, education, and law enforcement.

C) It requires all AI systems sold in the EU to be open-source so that independent researchers can audit their behavior.

D) It creates a single uniform standard that applies equally to all AI systems regardless of their application or potential for harm.


Question 8

The chapter identifies several ways the environmental impact of AI is an ethics concern. Which of the following best captures the ethical dimension of AI's environmental footprint?

A) AI systems use energy, and energy companies have been criticized for unethical practices, creating a reputational risk for AI companies by association.

B) The computational costs of AI training are so high that they threaten to bankrupt smaller companies, concentrating AI capability in the largest corporations.

C) The benefits of AI development accrue primarily to well-resourced users and companies in wealthy countries, while environmental costs — carbon emissions, water consumption, e-waste — disproportionately burden communities that have contributed least to AI's development and benefited least from it.

D) Environmental concerns about AI are primarily a public relations problem; the actual carbon footprint of AI is smaller than commonly believed.


Part B: True or False (5 questions)

For each statement, indicate whether it is True or False and provide a brief explanation (1–2 sentences) of your reasoning.


Question 9

True or False: An AI system that does not explicitly use race, gender, or national origin as input variables cannot produce discriminatory outcomes.


Question 10

True or False: According to the chapter's analysis, an organization that complies with all applicable AI regulations has met its AI ethics obligations.


Question 11

True or False: The SyRI system was struck down primarily because the court found that its risk scores were inaccurate and produced too many false positive fraud identifications.


Question 12

True or False: The chapter argues that AI ethics concerns are primarily relevant to large technology companies and do not pose significant risks for organizations in other sectors.


Question 13

True or False: "Meaningful human control" over AI decisions — maintaining a human in the decision loop — is sufficient to guarantee ethical outcomes from an AI-assisted decision process.


Part C: Short Answer (4 questions)

Answer each question in 2–3 sentences.


Question 14

What is the difference between "AI ethics" and "AI policy," and why does the distinction matter for how organizations approach AI governance?


Question 15

Explain the concept of "disparate impact" in the context of AI systems. Why can disparate impact occur even when an AI system does not explicitly use protected characteristics as inputs?


Question 16

What does the YouTube recommendation system case (Case Study 2) illustrate about the relationship between business metrics and ethical outcomes? Use the concept of the "optimization trap" in your answer.


Question 17

The chapter argues that AI ethics requires genuine participation by affected communities, not just ethical analysis by technical experts. What is the justification for this claim? What does community participation add that expert analysis alone cannot provide?


Part D: Applied Scenarios (3 questions)

Read each scenario carefully and answer the sub-questions.


Question 18

An e-commerce company deploys an AI system to set prices dynamically for products in its marketplace. The system adjusts prices based on factors including time of day, device type, browsing history, and inferred location. An independent researcher publishes an analysis showing that the same products are priced, on average, 12% higher for users in low-income zip codes than for users in high-income zip codes. The company responds that its system does not use income or zip code as direct inputs.

Sub-question A: Which AI ethics concern(s) from Section 1.2 does this scenario primarily raise?

Sub-question B: How would you explain to the company's leadership why "we don't use income as an input" is not an adequate ethical response to the researcher's findings?

Sub-question C: What information would you need to assess whether the pricing disparity constitutes an ethical problem, as distinct from a legal one?


Question 19

A public school district deploys an AI tutoring system that adapts its instruction to individual student learning patterns. The company that sells the system claims it can "close achievement gaps" by providing personalized instruction at scale. A teacher at one of the pilot schools notices that the system consistently assigns lower-difficulty content to students in the English Language Learner (ELL) program, reinforcing rather than closing the achievement gap.

Sub-question A: Which dimensions of the three-dimension framework (technical, social, institutional) are relevant to diagnosing this problem?

Sub-question B: Who are the relevant stakeholders in this scenario, and what interest does each have in how the problem is resolved?

Sub-question C: What steps should the district take immediately upon learning of the teacher's observation?


Question 20

A financial technology company develops an AI system to assess creditworthiness for small business loans. The company's marketing emphasizes that the system "expands access to capital for underserved entrepreneurs" by using "alternative data" — including social media activity, online reviews, and website traffic — to assess businesses that lack long credit histories. A nonprofit advocacy organization publishes a report raising concerns that the system's use of social media data introduces proxies for race and religion that may produce discriminatory outcomes.

Sub-question A: What is meant by "proxy discrimination" in this context, and how does the use of "alternative data" create this risk?

Sub-question B: The company claims its mission is to "expand access" — and argues that any group disparities in its lending outcomes are smaller than the disparities in traditional credit systems it is replacing. Evaluate this argument. Is a system that produces less discrimination than its predecessor ethically satisfactory?

Sub-question C: What would meaningful transparency in this system look like for (i) loan applicants, (ii) regulators, and (iii) independent researchers?


Answer Key

Multiple Choice

  1. B — AI ethics focuses primarily on moral questions about currently deployed systems; AI safety focuses primarily on preventing catastrophic harm from future advanced AI.

  2. C — The court's primary basis was the system's opacity, which prevented citizens from understanding or contesting their risk scores, violating Article 8 ECHR.

  3. B — Ethics washing: deploying ethical language and principles without substantive organizational commitment, while knowingly allowing harms to continue.

  4. C — The YouTube recommendation algorithm case is the paradigm example of the optimization trap presented in the chapter.

  5. B — The accountability gap refers specifically to the diffusion of responsibility across multiple actors in complex AI systems.

  6. C — The absence of a process for reporting errors or triggering system review is a governance/institutional failure, not a technical or social one per se.

  7. B — The EU AI Act uses a tiered, risk-based approach with the greatest obligations for high-risk applications.

  8. C — The ethical dimension of AI's environmental footprint is the inequitable distribution of costs and benefits across communities.

True or False

  9. False. AI systems can produce discriminatory outcomes through proxy variables — inputs that are correlated with protected characteristics. An algorithm that does not use race as an input can still produce racially discriminatory outputs if it relies on variables (zip codes, browsing history, device type) that correlate with race.

  10. False. Regulatory compliance establishes a minimum legal standard, not an ethical ceiling. An organization can comply with all applicable regulations and still behave in ways that are ethically problematic — causing harms that regulation has not yet addressed, or that fall below the threshold of legal liability.

  11. False. The court did not make a finding about the system's accuracy or error rate. The decisive issue was opacity: the system's workings were secret, preventing citizens from understanding the basis of their risk scores or contesting them. The court would have required meaningful transparency even for an accurate system.

  12. False. The chapter explicitly argues that AI ethics is not uniquely a problem for large technology companies. AI systems are deployed throughout the economy — in healthcare, banking, insurance, manufacturing, logistics, and government — and the ethical obligations of those deployments do not scale with company size.

  13. False. The mere presence of a human in a decision loop does not guarantee meaningful oversight. A human who approves hundreds of algorithmic recommendations per hour may exercise little genuine independent judgment. Meaningful human control requires the combination of genuine authority, adequate information, sufficient time, and appropriate training — not just technical loop membership.

Short Answer

  14. Model answer: AI policy refers to formal rules — laws, regulations, standards — that translate ethical judgments into enforceable requirements. AI ethics is the underlying normative analysis that should inform those rules. The distinction matters because an organization can comply with every applicable AI regulation while still behaving unethically — addressing harms that law has reached while ignoring harms that law has not yet caught up to. Treating ethics and policy as equivalent leads organizations to chronically lag behind best practice.

  15. Model answer: Disparate impact occurs when a facially neutral policy or system produces significantly different outcomes for different demographic groups without adequate justification. In AI systems, disparate impact can occur even without explicit use of protected characteristics because other input variables — zip code, browsing history, device type, social network structure — may be statistically correlated with race, gender, or national origin. The algorithm learns from these correlations and effectively uses protected characteristics by proxy, even when those characteristics are not in the model.

  16. Model answer: The YouTube case illustrates that when an AI system is optimized for a single metric — engagement, measured as watch time — it will reliably learn to serve whatever content characteristics produce that metric, regardless of whether those characteristics correspond to user wellbeing or social benefit. Extreme, emotionally intense, and sensationalist content generates strong engagement signals; the optimization trap occurs when the system serves this content at scale, producing radicalization and harmful exposure as an externality of engagement maximization. The harm was not intended but was structurally predictable.

  17. Model answer: Community participation is justified by both epistemic and ethical arguments. Epistemically, people who are subject to an AI system's decisions have information about the lived experience of that system — what it actually does, who it harms in ways that may not be visible in aggregate data — that expert analysis alone cannot recover. Ethically, people who are significantly affected by decisions have a legitimate claim to participate in the governance of those decisions, independent of whether their participation improves technical outcomes. Expert analysis can identify problems; community participation can identify which problems matter most and to whom.
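
The proxy mechanism described in the model answers, in which a system that never sees a protected characteristic still reproduces group disparities through a correlated variable, can be illustrated with a short synthetic sketch. The population, the zip-code correlation, and the scoring rule below are all hypothetical:

```python
import random

random.seed(0)

# Hypothetical synthetic population: group membership is never given to the
# model, but zip code is correlated with it (e.g., residential segregation).
def make_applicant():
    group = random.choice(["A", "B"])
    # 80% of group A lives in zip 1; 80% of group B lives in zip 2.
    if group == "A":
        zip_code = 1 if random.random() < 0.8 else 2
    else:
        zip_code = 2 if random.random() < 0.8 else 1
    return group, zip_code

applicants = [make_applicant() for _ in range(10_000)]

# A "facially neutral" scoring rule that only looks at zip code.
def approve(zip_code):
    return zip_code == 1  # say zip 1 historically had better repayment data

# Approval rates by group, even though group was never an input.
for g in ("A", "B"):
    decisions = [approve(z) for grp, z in applicants if grp == g]
    print(g, round(sum(decisions) / len(decisions), 2))
# Group A is approved roughly 80% of the time, group B roughly 20%.
```

The model is "blind" to group membership, yet the outcome gap is large: the zip-code variable carries the group information for it. This is the sense in which removing a protected attribute from the inputs does not remove it from the decision.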

Applied Scenarios

18. A: Bias and fairness (the system appears to charge more to users in low-income areas); privacy (the system uses personal data to infer characteristics); autonomy (users may not know their data is being used to set prices).

B: The response "we don't use income as an input" is inadequate because disparate impact can occur through proxy variables — device type, browsing history, and inferred location can all correlate with income, allowing the system to effectively price-discriminate by income through indirect means. The relevant question is not what inputs the model explicitly uses but what outcomes the model produces and whether those outcomes are justifiable.

C: To assess the ethical problem, you would need: the magnitude of the disparity and whether it is statistically significant; the mechanism — which input variables drive the pricing difference; whether lower-income users are receiving inferior products/service (in addition to higher prices) or comparable ones; whether comparable products are available elsewhere at comparable prices; and whether the company's business model depends on this differential pricing in a way that represents an intentional design choice.
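
One way to approach the "magnitude and statistical significance" question in Sub-question C is a simple permutation test on observed prices. The price data below is synthetic and chosen only to loosely mirror the scenario's disparity; this is a sketch of the method, not an audit procedure:

```python
import random
import statistics

random.seed(1)

# Hypothetical observed prices (in currency units) for the same product,
# split by the zip-code tier of the viewing user.
low_income_prices = [random.gauss(56.0, 4.0) for _ in range(200)]
high_income_prices = [random.gauss(50.0, 4.0) for _ in range(200)]

observed_gap = (statistics.mean(low_income_prices)
                - statistics.mean(high_income_prices))

# Permutation test: if zip-code tier were unrelated to price, how often
# would a random relabeling produce a gap at least this large?
pooled = low_income_prices + high_income_prices
n = len(low_income_prices)
trials = 2_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    gap = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(gap) >= abs(observed_gap):
        extreme += 1

p_value = extreme / trials
print(f"gap = {observed_gap:.2f}, p = {p_value:.4f}")
```

A small p-value says the disparity is unlikely to be sampling noise; it says nothing about mechanism or justification, which is why the model answer also asks which input variables drive the difference and whether the outcomes are defensible.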

19. A: Technical dimension: the model may be performing as designed — learning that ELL students have engaged less successfully with higher-difficulty content and defaulting to lower difficulty, creating a self-reinforcing loop. Social dimension: ELL students face additional systemic disadvantages; an AI tool that reinforces rather than addresses those disadvantages amplifies existing inequality. Institutional dimension: the district did not establish a monitoring process to detect this pattern, relied on vendor claims about effectiveness, and lacked a feedback mechanism for teacher observations.

B: Students (especially ELL students, whose educational opportunities are directly affected); teachers (who observe the system and have professional responsibility for student outcomes); the district administration (which is accountable for educational outcomes and has contracted with the vendor); the vendor (which made effectiveness claims that appear to be false for at least one group); parents; and the company's designers who trained and validated the system.

C: Immediately: pause ELL students' use of the adaptive difficulty feature; conduct a formal audit of difficulty assignments across student groups; engage the vendor with specific data on the disparity; establish a teacher reporting mechanism for similar observations; and communicate proactively with ELL families about the concern.

20. A: Proxy discrimination occurs when a model uses variables that are not themselves protected characteristics but are statistically correlated with those characteristics, effectively incorporating protected attributes into the decision indirectly. "Alternative data" like social media activity, online review patterns, and geographic signals can function as proxies for race, religion, or national origin because these characteristics correlate with the kinds of social networks people belong to, the businesses they patronize, and the neighborhoods they operate in.

B: The "less bad than the alternative" argument has real force: if the system genuinely expands access to credit for groups that were previously excluded, that matters morally. But an improvement over a discriminatory status quo, while worth making, does not settle whether the system as designed treats applicants fairly. The relevant standard is not "does our system discriminate less than traditional credit systems?" but "does our system treat applicants fairly and meet applicable legal requirements?" Additionally, the claim that the system expands access needs to be demonstrated empirically, not assumed.

C: For loan applicants: disclosure that AI is used; a general description of the types of data and factors considered; an explanation of the outcome (approval or denial) in terms they can act on; and a mechanism to correct errors in the underlying data. For regulators: access to the model's technical specifications sufficient to assess disparate impact; results of fairness audits conducted before and during deployment; aggregate outcome data broken down by demographic group. For independent researchers: access to de-identified application and outcome data sufficient to replicate a disparate impact analysis; documentation of model architecture and training methodology; and a defined process for submitting findings and receiving responses.
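
As a sketch of what the disparate impact analysis mentioned for regulators and researchers might look like, the snippet below applies the "four-fifths rule" from the US EEOC's Uniform Guidelines (a selection rate under 80% of the highest group's rate is treated as prima facie evidence of adverse impact) to hypothetical aggregate approval counts:

```python
# Hypothetical aggregate outcome data a regulator or independent
# researcher might receive: loan decisions broken down by group.
outcomes = {
    "group_a": {"approved": 720, "denied": 280},
    "group_b": {"approved": 540, "denied": 460},
}

def approval_rate(counts):
    return counts["approved"] / (counts["approved"] + counts["denied"])

rates = {g: approval_rate(c) for g, c in outcomes.items()}
best = max(rates.values())

# Four-fifths rule: flag any group whose selection rate falls below
# 80% of the most-favored group's rate.
for group, rate in rates.items():
    ratio = rate / best
    flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

With these made-up counts, group_b's ratio is 0.54 / 0.72 = 0.75, below the 0.8 threshold, so it would be flagged for further investigation. The rule is a screening heuristic, not a verdict; a flagged disparity still requires the mechanism and justification analysis the model answers describe.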