Chapter 27 Quiz: AI Governance Frameworks


Multiple Choice

Question 1. Which of the following best defines AI governance as described in the chapter?

  • (a) A set of ethical principles that guide AI development.
  • (b) The policies, processes, organizational structures, and accountability mechanisms that ensure AI is used safely, ethically, and effectively.
  • (c) A regulatory compliance program for AI systems.
  • (d) A technical testing framework for validating model performance.

Question 2. According to the Stanford HAI 2025 AI Index Report cited in the chapter, approximately what percentage of organizations that had deployed AI also had a formal AI governance framework?

  • (a) 15 percent
  • (b) 35 percent
  • (c) 55 percent
  • (d) 72 percent

Question 3. The NIST AI Risk Management Framework is organized around four core functions. Which of the following correctly lists all four?

  • (a) Plan, Build, Test, Deploy
  • (b) Govern, Map, Measure, Manage
  • (c) Identify, Protect, Detect, Respond
  • (d) Assess, Implement, Monitor, Review

Question 4. What distinguishes ISO/IEC 42001 from the NIST AI RMF?

  • (a) ISO/IEC 42001 is mandatory for all organizations using AI, while the NIST AI RMF is voluntary.
  • (b) ISO/IEC 42001 is a certifiable management system standard, while the NIST AI RMF is a voluntary framework.
  • (c) ISO/IEC 42001 applies only to financial institutions, while the NIST AI RMF is cross-industry.
  • (d) ISO/IEC 42001 focuses on AI ethics, while the NIST AI RMF focuses on AI risk.

Question 5. In Athena's risk classification framework, what level of governance oversight is required for an AI system that screens job applications?

  • (a) Documentation only
  • (b) Abbreviated impact assessment and peer review
  • (c) Full impact assessment and ethics board review
  • (d) Full impact assessment, ethics board review, and independent external audit

Question 6. The "three lines of defense" approach to model risk management is most closely associated with which regulatory framework?

  • (a) The EU AI Act
  • (b) Federal Reserve SR 11-7 (Supervisory Guidance on Model Risk Management)
  • (c) ISO/IEC 42001
  • (d) The OECD AI Principles

Question 7. Which of the following is NOT one of the five OECD AI Principles?

  • (a) Inclusive Growth, Sustainable Development, and Well-Being
  • (b) Transparency and Explainability
  • (c) Profit Maximization and Competitive Advantage
  • (d) Accountability

Question 8. An organization has adopted a governance operating model where a central AI governance function sets policies and standards, while each business unit has an embedded governance liaison who conducts initial risk assessments and facilitates reviews. This describes which governance model?

  • (a) Centralized governance
  • (b) Federated governance
  • (c) Hybrid governance
  • (d) Distributed governance

Question 9. According to the chapter, what is the most common failure mode for AI ethics committees?

  • (a) The committee is too large to make decisions.
  • (b) Committee members lack technical expertise.
  • (c) The committee becomes irrelevant — meeting too infrequently, reviewing too slowly, or lacking binding authority.
  • (d) The committee blocks too many projects, creating antagonism with development teams.

Question 10. In a RACI matrix, what does the "A" (Accountable) role signify, and why does the chapter emphasize its importance?

  • (a) The person who performs the actual governance work; important because the work must be done.
  • (b) The person who owns the outcome and has authority to enforce governance requirements; important because responsibility without authority is ineffective.
  • (c) The person who provides expert input on governance decisions; important because decisions require diverse perspectives.
  • (d) The person who receives updates about governance activities; important because transparency is essential.

Question 11. A company implements an AI governance framework with comprehensive policies, an ethics committee, and impact assessment templates. However, project teams routinely submit formulaic impact assessments copied from templates, the ethics committee rubber-stamps every submission, and no project has ever been delayed or blocked by governance review. Which governance pitfall does this describe?

  • (a) Innovation antagonism
  • (b) The last mile problem
  • (c) Governance theater
  • (d) Under-resourcing

Question 12. What triggered the establishment of Athena's AI Governance Framework?

  • (a) A regulatory requirement from the EU AI Act
  • (b) A board mandate following a routine governance review
  • (c) The discovery that the HR screening model discriminated against older applicants and candidates from non-traditional educational backgrounds
  • (d) A competitive benchmarking study showing that peer companies had governance frameworks

Question 13. Which of the following correctly describes the relationship between the NIST AI RMF's four functions?

  • (a) They are sequential — you complete Govern before beginning Map, Map before Measure, and Measure before Manage.
  • (b) They are concurrent and iterative — all four functions operate simultaneously, with insights from each feeding back into the others.
  • (c) They are independent — each function can be implemented without reference to the others.
  • (d) They are hierarchical — Govern sits above the other three and delegates work to them.

Question 14. In Athena's incident response playbook, what severity level triggers escalation to the CEO and the Board Risk Committee?

  • (a) Level 1 (Minor)
  • (b) Level 2 (Moderate)
  • (c) Level 3 (Significant)
  • (d) Level 4 (Critical)

Question 15. According to the chapter, the 2024 Holistic AI study found that organizations with formal AI governance frameworks deployed models to production how much faster than those without?

  • (a) 10 percent faster
  • (b) 23 percent faster
  • (c) 40 percent faster
  • (d) 65 percent faster

Short Answer

Question 16. In three to four sentences, explain the difference between compliance-driven governance and culture-driven governance. Provide one example of a behavior that distinguishes the two approaches.


Question 17. Grace Chen tells the board: "We move at the speed of trust, not the speed of code." In two to three sentences, explain what this statement means in the context of AI governance and why it is strategically significant.


Question 18. Describe the six components of the AI policy stack identified in the chapter. For each, write one sentence explaining its purpose.


Question 19. The chapter describes how one of Athena's business units voluntarily took a customer churn prediction model offline after an impact assessment revealed it systematically deprioritized customers in lower-income zip codes. In two to three sentences, explain why this example is significant for the value proposition of AI governance.


Question 20. Professor Okonkwo says: "Every high-performing system needs a governor — a regulating mechanism that prevents it from destroying itself." In two to three sentences, explain this analogy and how it reframes the relationship between governance and innovation.


True or False (with Justification)

For each statement, indicate whether it is true or false and provide a one-sentence justification citing evidence from the chapter.

Question 21. The NIST AI Risk Management Framework carries regulatory force and is mandatory for US-based organizations.


Question 22. According to the chapter, organizations in unregulated industries generally have more mature AI governance than organizations in regulated industries.


Question 23. The chapter recommends that AI ethics committees should have advisory authority only, to avoid slowing down innovation.


Question 24. In the "unacceptable risk" tier, the governance framework's response is to implement maximum oversight and monitoring on the AI system.


Question 25. Lena Park argues that designing a governance framework proactively is both less expensive and more flexible than waiting for regulators to impose one.


Answer key available in Appendix B.