Chapter 21: Quiz — Corporate Governance of AI
20 questions. Unless noted, each question has one best answer.
Question 1 When most members of the Axon AI Ethics Board resigned en masse in 2022, the primary reason stated was:
A) They disagreed with Axon's facial recognition technology on technical grounds
B) They believed their concerns would not translate into binding constraints on Axon's plans
C) They were concerned about conflicts of interest among board members
D) They had completed their advisory mandate and the board's work was done
Question 2 Which of the following best describes the distinction between the "oversight" and "enablement" pillars of AI governance?
A) Oversight involves external review; enablement involves internal review
B) Oversight provides independent review of AI systems; enablement provides practitioners with tools, training, and guidance
C) Oversight covers pre-deployment review; enablement covers post-deployment monitoring
D) Oversight applies to governance bodies; enablement applies to individual engineers
Question 3 An organization's AI ethics committee is composed entirely of internal employees, reports to the Chief Legal Officer, and has authority to recommend but not require changes before deployment. Which governance problem does this structure most clearly illustrate?
A) Insufficient expertise diversity
B) Inadequate meeting frequency
C) Insufficient independence and authority
D) Inappropriate reporting line for a legal function
Question 4 Microsoft's Responsible AI Standard is distinguished from most AI principles documents primarily because it:
A) Is publicly available and freely downloadable
B) Specifies operational requirements, testing standards, and review processes rather than only aspirational principles
C) Was developed in collaboration with external civil society organizations
D) Applies only to AI systems with direct consumer impact
Question 5 "Ethics washing" in the context of corporate AI governance refers to:
A) Removing ethical concerns from AI systems through technical mitigation
B) The process of reviewing AI systems for ethical compliance before deployment
C) Using ethics vocabulary and structures to signal commitment without making substantive operational changes
D) The practice of washing training data to remove biased examples
Question 6 When an organization deploys a third-party AI hiring tool that is later found to discriminate against protected groups, which of the following best describes the organization's legal and ethical position?
A) The organization bears no liability since it did not develop the discriminatory system
B) The organization's liability is limited to what it could reasonably have known at time of purchase
C) The organization is accountable for the discriminatory impact regardless of whether it developed the system
D) Liability is shared equally between the deploying organization and the AI vendor
Question 7 A company's responsible AI team identifies a significant bias problem in a model three weeks before its scheduled deployment. Management reviews the finding, determines that delay would be commercially costly, and proceeds with deployment over the team's objection. This scenario most directly illustrates which governance failure?
A) Insufficient technical expertise in the responsible AI team
B) Governance structures that lack authority to act on their findings
C) Inadequate pre-deployment review timeline
D) Absence of external oversight mechanisms
Question 8 GDPR's "right to erasure" creates a specific technical challenge for AI governance because:
A) It requires all AI systems to be retrained without the deleted data
B) Erasing personal data from storage does not necessarily erase the model's learned parameters encoding that data
C) It prevents organizations from using personal data in AI training at all
D) It creates conflicting obligations with US federal data retention requirements
Question 9 The concept of "red-teaming" in AI governance refers to:
A) Creating a separate review committee for high-risk AI applications
B) Structured adversarial testing designed to surface harmful or unintended AI outputs
C) A protocol for escalating ethics concerns to executive leadership
D) External auditing of AI systems by third-party reviewers
Question 10 In the governance maturity framework presented in this chapter, which level is characterized by governance being "measured and monitored" with regular reporting to leadership?
A) Level 2 — Developing
B) Level 3 — Defined
C) Level 4 — Managed
D) Level 5 — Optimizing
Question 11 Google's ATEAC (Advanced Technology External Advisory Council) failed in 2019 primarily because:
A) The council lacked technical expertise to evaluate AI systems
B) It was composed entirely of Google employees without external perspectives
C) The appointment of a controversial member generated protests that led to rapid dissolution
D) Google's board determined that external advisory councils were inappropriate governance structures
Question 12 Which of the following best describes the governance function of "model cards"?
A) Business cards distributed to responsible AI team members identifying their role and authority
B) Standardized documentation describing a model's intended use, performance characteristics, limitations, and ethical considerations
C) Playing cards used in ethics review exercises to simulate adversarial scenarios
D) Credit card-style access credentials for internal AI governance systems
Question 13 An organization publishes AI principles that include a commitment to "fairness." In the following year, an independent audit finds that three of its AI systems produce significantly worse outcomes for protected groups. Which analytical framework from this chapter best explains this gap?
A) The organization's AI systems are technically incapable of achieving fairness
B) The principles document lacked specificity, accountability, and enforcement mechanisms to translate aspiration into practice
C) Fairness is inherently impossible to achieve in AI systems at scale
D) The independent audit used methodological approaches the organization's principles did not anticipate
Question 14 The "capacity vs. authority trade-off" in responsible AI functions refers to the tension between:
A) The capacity of AI systems and the authority of ethics boards to review them
B) Having enough staff to review AI systems and having enough authority to act on review findings
C) Technical capacity requirements and organizational authority structures
D) Data storage capacity requirements and authorized data retention periods
Question 15 Which of the following is the strongest argument for including external members on an AI ethics committee?
A) External members are more technically knowledgeable about AI systems than internal employees
B) External members can provide compensation benchmarking for AI ethics practitioners
C) External members provide independence from the organization's commercial interests and perspectives that internal members cannot supply
D) External members protect the organization against regulatory enforcement by demonstrating good faith
Question 16 A company's AI governance review process has been described by participating engineers as a "speed bump" — adding delay without genuinely influencing product decisions. Which of the following interventions would most directly address this problem?
A) Increasing the frequency of ethics committee meetings
B) Publishing the company's AI principles externally for accountability
C) Giving the review process authority to require remediation or delay deployment, not merely to recommend it
D) Increasing the number of engineers assigned to the responsible AI team
Question 17 The concept of "datasheets for datasets" is most analogous to which other AI governance artifact?
A) Ethics board charters
B) Model cards
C) Red-team reports
D) Vendor due diligence questionnaires
Question 18 According to the chapter's analysis, what was the most significant governance lesson from Axon's pause of its taser drone program, which was announced after the ethics board's mass resignation rather than in response to the board's stated concerns?
A) The taser drone technology was not ready for deployment in 2022
B) The reputational consequences of the resignation, not the board's expert concerns, were the effective governance mechanism
C) External advisory boards are inherently less effective than internal governance bodies
D) The pause demonstrated that the board's authority had been appropriate all along
Question 19 Which of the following best describes the governance implication of most organizations using far more AI than they build?
A) Vendors bear primary governance responsibility for the AI systems they sell
B) Organizations can delegate AI governance to the vendor ecosystem, reducing internal governance requirements
C) AI governance frameworks must extend to vendor due diligence and procurement to be comprehensive
D) External AI systems present lower ethics risks than internally developed AI because they have been tested by the market
Question 20 An organization at Level 5 ("Optimizing") in the governance maturity model is primarily characterized by which of the following?
A) Having completed all governance requirements with no further improvement needed
B) AI governance driving innovation, contributing to industry standards, and being integrated with product strategy
C) Having no AI incidents or governance failures in the preceding 12-month period
D) Full regulatory compliance across all jurisdictions where the organization operates
Answer Key
1-B, 2-B, 3-C, 4-B, 5-C, 6-C, 7-B, 8-B, 9-B, 10-C, 11-C, 12-B, 13-B, 14-B, 15-C, 16-C, 17-B, 18-B, 19-C, 20-B