Chapter 6: Assessment Quiz

Instructions: This quiz assesses comprehension and application of Chapter 6 material. Complete all sections. For applied scenarios, answers should demonstrate both conceptual understanding and analytical reasoning.


Part I: Multiple Choice (8 questions — 2 points each)

Question 1 Which of the following best describes the relationship between AI ethics and AI governance?

A) Ethics and governance are synonyms — both refer to the set of values that guide AI development.
B) Ethics provides normative content (what AI should do); governance provides operational machinery (how and who).
C) Governance is a subset of ethics — one ethical framework among several that apply to AI.
D) Ethics is concerned with individual decisions; governance is concerned only with organizational-level decisions.


Question 2 The NIST AI Risk Management Framework organizes AI risk management around four functions. Which of the following correctly lists all four, with the foundational function first?

A) Map, Measure, Manage, Monitor
B) Govern, Map, Measure, Manage
C) Identify, Protect, Detect, Respond
D) Assess, Design, Implement, Review


Question 3 Under the EU AI Act, which of the following AI applications would most likely be classified as "unacceptable risk" and therefore prohibited?

A) A credit-scoring algorithm used by a bank to assess loan applications
B) An AI system used by emergency services to prioritize calls by predicted severity
C) A government system that assigns citizens behavioral scores used to restrict their access to public services
D) An AI-powered chatbot used by a government agency to answer citizen queries


Question 4 The chapter describes the "capture problem" in AI governance. Which of the following best describes this problem?

A) AI systems are difficult for regulators to understand because they are technically complex.
B) AI systems cross national borders, making territorial regulation ineffective.
C) Industries with resources to influence regulatory processes tend to shape regulations in ways that reflect their own interests.
D) AI systems can be modified after regulatory assessment, making static regulations quickly obsolete.


Question 5 Which of the following would be the strongest evidence that an organization's AI governance is genuine rather than performative?

A) The organization has published a detailed set of AI principles on its website.
B) The organization is a member of the Partnership on AI and participates in its working groups.
C) The organization's ethics review process has required substantial redesign of a high-revenue AI product before allowing deployment.
D) The organization employs a Chief Ethics Officer who reports to the Chief Executive Officer.


Question 6 The Facebook Oversight Board, established in 2020, is described in Case Study 6.2 as a genuine governance innovation with structural limitations. Which of the following best characterizes the most significant structural limitation?

A) Board members are not sufficiently independent from Facebook's management.
B) The Board can review specific content cases but lacks jurisdiction over the algorithmic systems that determine which content users see.
C) The Board's decisions are non-binding and Facebook routinely ignores them.
D) The Board lacks sufficient funding to review more than a handful of cases per year.


Question 7 A company claims that it conducts AI ethics reviews for all AI projects. Which of the following would most call into question whether these reviews are genuine rather than performative?

A) The reviews are completed by a team of three people.
B) The reviews consistently occur after the AI system is fully developed and immediately before launch, with no instances of a project being delayed or redesigned as a result.
C) The reviews use a standardized checklist rather than open-ended deliberation.
D) The reviews are documented in internal records that are not publicly disclosed.


Question 8 Which statement most accurately describes the difference between "hard law" and "soft law" in the context of AI governance?

A) Hard law applies to large companies; soft law applies to small and medium enterprises.
B) Hard law covers all AI applications; soft law covers only high-risk applications.
C) Hard law is legally binding with state enforcement authority; soft law is non-binding but may shape behavior through reputational or market mechanisms.
D) Hard law is enacted by national legislatures; soft law is issued by international bodies.


Part II: True or False (5 questions — 2 points each)

For each statement, indicate whether it is True or False, and write 1–2 sentences explaining your reasoning.

Question 9 The United States has, as of 2026, enacted comprehensive federal AI legislation comparable to the EU AI Act, establishing a single unified framework for AI governance.

Question 10 Model cards are a form of AI governance documentation designed to provide transparency about what an AI model does, how it was trained, its limitations, and its recommended uses.

Question 11 Industry self-regulation is always preferable to government regulation because it can respond more quickly to technological change and benefits from deeper technical expertise.

Question 12 The OECD AI Principles are legally binding on all OECD member states, with enforcement mechanisms that allow the OECD to penalize member states that fail to implement them.

Question 13 According to the chapter, an organization can have genuinely good AI ethics and genuinely poor AI governance at the same time — the two are related but not identical.


Part III: Short Answer (4 questions — 5 points each)

Answers should be 150–250 words each. Clear, concise responses demonstrate stronger understanding than lengthy ones.

Question 14 The chapter identifies six dimensions of the "governance gap" that explain why AI governance struggles to keep pace with AI development. Name and briefly explain four of these dimensions. For each dimension, give a concrete example of how it manifests in practice.


Question 15 What is "red-teaming" in the context of AI governance, and why does the chapter argue that red-teaming must involve "genuine adversarial intent" rather than a ritualistic version of adversarial testing? What conditions make the difference between genuine and ritualistic red-teaming?


Question 16 Compare the EU and US approaches to AI governance in terms of: (a) the underlying governance philosophy each reflects, (b) the primary mechanisms each uses, and (c) the practical implications for a company operating AI systems in both jurisdictions. Your answer should identify at least one specific requirement or framework from each jurisdiction.


Question 17 The chapter argues that "governance as culture" is as important as "governance as structure." Explain what this means. Give two concrete, observable indicators that would allow you to distinguish an organization with a genuine AI governance culture from one where AI governance is primarily performative.


Part IV: Applied Scenarios (3 questions — 10 points each)

These questions require analysis and application of chapter concepts to realistic scenarios. Responses should be 300–500 words and demonstrate structured reasoning.

Question 18: The Governance Audit

A mid-size insurance company has approached you to evaluate its AI governance. During your review, you discover the following:

  • The company has published a one-page AI principles statement emphasizing "fairness, transparency, and accountability"
  • There is a "Responsible AI Committee" that meets quarterly, composed of the CTO, the General Counsel, and three senior engineers
  • The committee has reviewed 47 AI projects in the past two years and has approved all of them without requiring changes
  • The company's AI systems include an underwriting algorithm that sets premiums based on customer data; internal data shows the algorithm charges higher premiums to customers in predominantly minority zip codes
  • No model cards or impact assessments exist for any deployed AI system
  • The company's AI procurement standard for vendors is a one-paragraph clause requiring that vendors "comply with applicable law"

Apply the governance principles from Section 6.7 (authority, independence, diversity, documentation, accountability, iteration, transparency) to evaluate this company's governance. Identify the three most critical governance failures and explain what you would recommend to address each.


Question 19: The Regulatory Strategy

You are advising a startup building an AI system for clinical diagnosis support — the system analyzes patient symptoms, medical history, and imaging to suggest differential diagnoses for physician review. The company plans to launch in the EU within 18 months.

a) Under the EU AI Act, what risk tier would this system likely be classified in? What specific requirements does this impose on the company?
b) What governance structures must the company build before launch to meet these requirements?
c) Beyond legal compliance, what additional governance investments would you recommend to ensure the system is genuinely safe and trustworthy?
d) What is the most significant governance risk for a startup in this space — the factor most likely to cause governance failure — and what would you do to address it?


Question 20: The Governance Culture Problem

A technology company has invested significantly in AI governance structures: a Responsible AI Office with a dedicated team of eight specialists, an AI Ethics Committee with external members, mandatory impact assessments for all AI projects, and comprehensive model card requirements. Despite these structures, three serious AI-related harms have occurred in the past 18 months:

  • A hiring algorithm was found to consistently under-rank candidates from certain universities (later found to reflect the company's historical hiring patterns)
  • A customer-service AI was discovered to provide different information to users based on their perceived demographic characteristics
  • An internal productivity AI was flagging employees from certain national backgrounds for "performance concerns" at disproportionate rates

An internal investigation found that in each case the ethics review had been completed, the documentation requirements had been met, and the impact assessment had been filed. Yet in each case the relevant engineers and product managers had described the systems as lower-risk than they were, the impact assessments had been completed late in the development process, and the Responsible AI Office had been too resource-constrained to conduct independent verification.

Diagnose what is wrong with this company's governance. Is the problem structural (the structures are inadequate), cultural (the culture does not support genuine governance), or both? What specific changes — both structural and cultural — would you recommend? How would you ensure that the same dynamics do not produce a fourth harm?


Answer Key

Part I: Multiple Choice

  1. B — Ethics provides normative content; governance provides operational machinery. This is the central distinction the chapter establishes in Section 6.1.

  2. B — Govern, Map, Measure, Manage. "Govern" is explicitly identified as the foundational function. Options A, C, and D reflect other frameworks (NIST Cybersecurity Framework, etc.) rather than the AI RMF.

  3. C — Government systems scoring citizens' behavior to restrict access to public services are explicitly prohibited in the EU AI Act's unacceptable risk category ("social scoring"). Option A (credit scoring) is high risk; Options B and D are at most high risk or limited risk.

  4. C — Regulatory capture occurs when regulated industries have sufficient resources and access to shape the regulatory processes that govern them. Option A describes the expertise gap, B describes the jurisdiction problem, and D describes the pacing problem.

  5. C — A governance process that has required substantial redesign of a high-revenue product is the strongest evidence of genuine authority. Options A, B, and D are consistent with performative governance — they describe stated commitments and structures without evidence that the structures have constrained commercial decisions.

  6. B — The Board's most significant structural limitation is its case-level jurisdiction, which excludes the algorithmic systems that are most consequential for platform governance. Option C is incorrect (the Board's decisions on individual cases are binding, though policy recommendations are not). Option A is incorrect (Board members have genuine independence). Option D is a real limitation but not the most significant structural one.

  7. B — Ethics reviews that consistently occur late in the development cycle and never result in delays or redesigns are the clearest indicator of performative review. The timing problem is the most structurally significant: late review cannot shape fundamental design decisions. Options A, C, and D are consistent with genuine governance.

  8. C — Hard law is legally binding with state enforcement authority; soft law is non-binding but may influence behavior through other mechanisms. The other options incorrectly characterize the distinction.

Part II: True or False

  9. False. As of 2026, the United States has not enacted comprehensive federal AI legislation. The US approach remains a patchwork of executive action (Biden's 2023 Executive Order), sector-specific enforcement (FTC, EEOC), and state-level law, without a unified federal framework comparable to the EU AI Act.

  10. True. Model cards, introduced by Mitchell et al. (2019), are standardized documentation artifacts that describe what a model does, how it was trained, what evaluation datasets were used, performance across demographic groups, known limitations, and appropriate use cases. They are an important governance documentation tool.

  11. False. The claim fails on "always": industry self-regulation offers real advantages (speed, expertise, flexibility), but it also has structural weaknesses, particularly weak enforcement, capture risk, and lowest-common-denominator standards. The chapter weighs both cases seriously and concludes that self-regulation without independent enforcement is often insufficient on its own.

  12. False. The OECD AI Principles are soft law — they are not legally binding and there are no enforcement mechanisms. They represent intergovernmental normative consensus, not treaty obligations. Their significance is in establishing shared reference points that influence national legislation and industry frameworks.

  13. True. The chapter explicitly distinguishes ethics (normative content — what AI should do) from governance (operational machinery — how ethical commitments become consistent practice). An organization can have sophisticated ethical principles (strong ethics) without governance structures to implement them (weak governance), as the Google advisory board example illustrates.
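To make the model card answer (Question 10) concrete, the fields Mitchell et al. describe can be sketched as a simple data structure. This is an illustrative sketch only, not a standard schema: the field names, the model name, and all values below are hypothetical.

```python
# Hypothetical sketch of the kinds of fields a model card records
# (intended use, training/evaluation data, disaggregated performance,
# limitations). Names and values are illustrative, not a real schema.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    evaluation_data: str
    # Performance disaggregated by demographic group, as the paper recommends.
    metrics_by_group: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    out_of_scope_uses: list = field(default_factory=list)


card = ModelCard(
    model_name="premium-risk-v2",  # hypothetical
    intended_use="Decision support for human underwriters, not automated pricing",
    training_data="2018-2023 policy records (region-balanced sample)",
    evaluation_data="2024 held-out policies",
    metrics_by_group={"group_a": {"auc": 0.81}, "group_b": {"auc": 0.74}},
    known_limitations=["Performance gap across groups; see metrics_by_group"],
    out_of_scope_uses=["Automated denial of coverage"],
)

# A governance review can then flag large cross-group performance gaps
# directly from the documented metrics.
aucs = [m["auc"] for m in card.metrics_by_group.values()]
gap = max(aucs) - min(aucs)
print(f"cross-group AUC gap: {gap:.2f}")  # prints: cross-group AUC gap: 0.07
```

The point of the structure is governance, not engineering: because performance is recorded per group rather than as a single aggregate, a reviewer can see disparities (like the 0.07 AUC gap above) without access to the underlying system.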

Part III: Short Answer

  14. Sample answer should include four of: the pacing problem (AI evolves faster than regulatory processes), the expertise gap (regulators lack technical knowledge; technologists lack regulatory knowledge), the capture problem (industry shapes the regulations governing it), the jurisdiction problem (AI systems cross borders; governance frameworks are territorial), the definitional problem (defining "AI" in law is technically and politically contested), and the enforcement problem (regulators lack resources; AI systems are opaque; harms are diffuse). Each should have a concrete example.

  15. Sample answer should explain: Red-teaming is adversarial testing of AI systems by teams specifically trying to find failure modes, circumvent safety measures, and generate harmful outputs before deployment. "Genuine adversarial intent" means the team is actually trying to break the system, not performing the ritual while pulling punches. Conditions distinguishing genuine from ritualistic red-teaming: whether the team has independence and protection to report findings honestly; whether the team has time and resources proportionate to the system's risk level; whether findings are taken seriously and result in changes; and whether leadership has committed to delaying deployment if significant findings emerge.

  16. Sample answer should address: EU — precautionary, rights-based philosophy; EU AI Act as primary mechanism; four-tier risk framework with significant pre-market requirements for high-risk AI; prohibitions on specific applications; requires risk assessment, documentation, human oversight for high-risk systems. US — sectoral, innovation-permissive philosophy; Executive Order, FTC enforcement, EEOC guidance, state patchwork; no comprehensive framework; primarily reactive enforcement. Practical implications: companies in both jurisdictions must meet the more demanding EU requirements for products serving EU users; may face compliance gaps where US state law and EU requirements diverge; need to build documentation and assessment processes that satisfy EU high-risk requirements.

  17. Sample answer should explain: "Governance as culture" means that genuine AI accountability requires the informal, behavioral, and social dimensions of organizational life — not just formal structures. Observable indicators of genuine culture: (1) Engineers who raise ethical concerns are celebrated and visible, rather than marginalized or managed out — you could ask employees to name colleagues who raised concerns and what happened; (2) Senior leaders raise ethics questions in product review meetings, not just in ethics function meetings — you could attend product reviews and observe what questions leadership actually asks; (3) When business metrics and ethics concerns conflict, examples exist of the ethics concern winning. Indicators of performative culture: ethics language appears in public communications but not in internal performance reviews; ethics professionals report being ignored; engineers describe ethics review as a paperwork exercise.

Part IV: Applied Scenarios

  18. Scoring guidance: Full credit requires: identifying specific governance failures against specific principles (not just general criticism); analyzing the underwriting algorithm's discriminatory outcomes as the most serious compliance and ethical failure; identifying the "approve-all" track record as evidence of missing authority; identifying late-stage review as structural failure; and making specific, implementable recommendations (not generic advice to "take governance more seriously"). Look for: (1) Authority failure — committee has never required changes; needs genuine blocking authority; (2) Independence failure — committee composed of executives who report to each other; needs external or more independent members; (3) Documentation failure — no model cards or impact assessments; needs immediate documentation requirement; (4) The discriminatory pricing outcome — not identified through governance despite being internally visible; fundamental accountability failure.

  19. Scoring guidance: Full credit requires: (1) Correctly identifying this as a high-risk AI system under the EU AI Act (clinical decision support meets the high-risk healthcare category); (2) Specific requirements: conformity assessment, technical documentation, data governance, accuracy and robustness requirements, human oversight requirements, registration; (3) Governance structures: responsible AI function, ethics review process, clinical expert oversight, documentation infrastructure; (4) Additional governance: diverse evaluation datasets, clinical validation in relevant patient populations, ongoing monitoring in deployment, patient transparency, adverse event reporting. Most significant governance risk for a startup: resource constraints leading to inadequate safety testing and documentation before commercial pressure to launch.

  20. Scoring guidance: This scenario involves both structural and cultural failure. Full credit requires diagnosing both dimensions: Structural failures — impact assessments completed late; RAI office resource-constrained and cannot independently verify claims; no mechanism to detect misrepresentation of risk level by engineering teams. Cultural failures — engineers systematically describing systems as lower-risk than they are, which suggests either a psychological-safety problem (concerns cannot be raised honestly) or incentive misalignment (rewarded for shipping, not for accurate risk assessment). Recommended structural changes: move impact assessments to earlier in the development cycle; resource the RAI office for independent verification; create an audit mechanism. Cultural changes: leadership modeling; incentive alignment; anonymous reporting mechanism for ethics concerns; post-incident review that is genuinely honest about cultural factors.