Chapter 27 Exercises: AI Governance Frameworks


Section A: Recall and Comprehension

Exercise 27.1 Define AI governance in your own words. Your definition should address all four components identified in the chapter: policies, processes, organizational structures, and accountability mechanisms.

Exercise 27.2 List and briefly explain the five pillars that establish the case for formal AI governance. For each pillar, provide a one-sentence example of what can go wrong in its absence.

Exercise 27.3 Describe the four functions of the NIST AI Risk Management Framework (Govern, Map, Measure, Manage). Explain why the chapter emphasizes that these functions are iterative rather than sequential.

Exercise 27.4 What distinguishes ISO/IEC 42001 from the NIST AI RMF? Under what circumstances would an organization pursue ISO/IEC 42001 certification? Identify three types of organizations for which certification would be strategically valuable.

Exercise 27.5 List the five OECD AI Principles. For each, write one sentence explaining how it translates into a specific organizational practice.

Exercise 27.6 Explain the "three lines of defense" model for AI model risk management. What is the role of each line, and why is independence between them critical?

Exercise 27.7 What is the governance gap, and what four factors contribute to it according to the chapter? Which factor do you believe is most significant, and why?


Section B: Application

Exercise 27.8: Risk Tier Classification
For each of the following AI systems, assign a risk tier (Low, Medium, High, Critical) and justify your classification in two to three sentences:
- (a) A chatbot that answers frequently asked questions about a company's return policy
- (b) A model that predicts which employees are most likely to resign within six months
- (c) An internal tool that summarizes meeting transcripts for team distribution
- (d) A credit scoring model that determines loan approval amounts
- (e) A recommendation engine that suggests training courses for employees based on their career goals
- (f) A facial recognition system that controls access to a secure facility
- (g) A demand forecasting model that determines how much inventory to order
- (h) An AI system that screens job applications and ranks candidates
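A starting point for this exercise can be sketched as a rules-based classifier. The two factors and the tier mapping below are illustrative assumptions for working through the exercise, not the chapter's classification criteria; replace them with the factors your own analysis identifies.

```python
# Hypothetical risk-tier rules for Exercise 27.8.
# The two factors and the mapping below are illustrative assumptions,
# not the chapter's official classification framework.

def risk_tier(consequential_to_individuals: bool, fully_automated: bool) -> str:
    """Assign a tier from two yes/no factors:
    - consequential_to_individuals: does the system affect a person's
      rights, livelihood, safety, or access to opportunity?
    - fully_automated: does the system act without routine human review?
    """
    if consequential_to_individuals and fully_automated:
        return "Critical"
    if consequential_to_individuals:
        return "High"
    if fully_automated:
        return "Medium"
    return "Low"

# Applying the rules to two systems from the exercise:
faq_chatbot = risk_tier(consequential_to_individuals=False, fully_automated=True)
resume_screener = risk_tier(consequential_to_individuals=True, fully_automated=True)
print(faq_chatbot, resume_screener)  # Medium Critical
```

A two-factor rule is deliberately crude; part of the exercise is discovering which additional factors (reversibility, scale, data sensitivity) the rule would need before you would trust it.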

Exercise 27.9: Ethics Board Composition
You are designing an AI Ethics Board for a mid-sized healthcare company (5,000 employees, $2 billion revenue) that uses AI for patient triage recommendations, claims processing, and provider network optimization.
- (a) Propose a committee composition of 6-8 members. For each member, specify the role, the perspective they bring, and why they are essential.
- (b) What perspectives are critical for a healthcare AI ethics committee that might not be necessary in other industries?
- (c) Draft a one-page charter for the committee, covering scope, authority, trigger criteria, decision-making process, and meeting cadence.

Exercise 27.10: Impact Assessment
Select one of the following AI systems and complete a full impact assessment using the seven-domain framework from the chapter:
- (a) A bank's AI system that determines credit card spending limits for individual customers
- (b) A university's AI system that predicts which students are at risk of dropping out and triggers advisor outreach
- (c) An insurance company's AI system that adjusts auto insurance premiums based on driving behavior data collected from a smartphone app
For each of the seven domains (system description, data assessment, stakeholder analysis, fairness and equity, transparency and explainability, risk analysis, monitoring and review), write two to three paragraphs of substantive analysis.

Exercise 27.11: Acceptable Use Policy
Draft a two-page AI Acceptable Use Policy for one of the following organizations:
- (a) A law firm with 200 attorneys
- (b) A manufacturing company with 10,000 factory workers and 2,000 office workers
- (c) A public school district with 500 teachers
Your policy should address: approved and prohibited AI tools, data handling requirements (especially for confidential and personal data), human review requirements, documentation obligations, consequences for violations, and a process for requesting exceptions.

Exercise 27.12: RACI Matrix
Create a RACI matrix for AI governance at a fictional financial services company with the following roles: Data Science Team, AI Governance Office, AI Ethics Committee, Business Unit Head, Chief Risk Officer, Chief Technology Officer, and Legal/Compliance. Your matrix should cover at least eight governance activities, including risk classification, impact assessment, fairness testing, model deployment (by tier), incident investigation, policy development, vendor AI assessment, and annual governance review.
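A RACI matrix can be represented as a simple data structure and sanity-checked programmatically. The sample assignments below are placeholders for you to replace with your own answers; the validation rules (exactly one Accountable party and at least one Responsible party per activity) are the standard RACI convention.

```python
# RACI matrix as a dict: activity -> {role: "R" | "A" | "C" | "I"}.
# The sample assignments are illustrative placeholders, not model answers.

raci = {
    "Risk classification": {
        "Data Science Team": "R",
        "AI Governance Office": "A",
        "Business Unit Head": "C",
        "Chief Risk Officer": "I",
    },
    "Incident investigation": {
        "AI Governance Office": "R",
        "Chief Risk Officer": "A",
        "Legal/Compliance": "C",
        "Data Science Team": "C",
    },
}

def validate(matrix: dict) -> list[str]:
    """Return a list of problems: each activity needs exactly one
    Accountable ("A") and at least one Responsible ("R")."""
    problems = []
    for activity, roles in matrix.items():
        codes = list(roles.values())
        if codes.count("A") != 1:
            problems.append(f"{activity}: needs exactly one 'A'")
        if codes.count("R") < 1:
            problems.append(f"{activity}: needs at least one 'R'")
    return problems

print(validate(raci))  # [] -- the sample matrix passes both checks
```

Running the validator against your completed eight-activity matrix is a quick way to catch the most common RACI mistake: an activity with zero or multiple Accountable parties.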


Section C: Analysis and Evaluation

Exercise 27.13: Centralized vs. Federated Governance
A global consumer goods company with $50 billion in revenue operates in 40 countries. Its AI use spans marketing personalization, supply chain optimization, HR analytics, and customer service automation. Each regional division has significant operational autonomy.
- (a) Evaluate the advantages and disadvantages of each governance operating model (centralized, federated, hybrid) for this organization.
- (b) Recommend a specific operating model and justify your recommendation.
- (c) What specific challenges would arise from applying a single governance framework across 40 countries with different regulatory requirements and cultural expectations?
- (d) How would you handle a situation in which a model classified as "medium risk" in one jurisdiction is classified as "high risk" in another because of differing regulatory requirements?

Exercise 27.14: The Cost of Governance
A technology startup's CEO argues: "We're a 50-person company. We can't afford the overhead of an AI ethics board, impact assessments, and a model registry. That's for big companies. We need to move fast."
- (a) Evaluate this argument. What are its strengths?
- (b) What are its weaknesses? What risks does the CEO's position create?
- (c) Propose a "lightweight governance" framework appropriate for a 50-person AI startup: one that provides meaningful oversight without the full organizational infrastructure described in this chapter.
- (d) At what organizational size or AI maturity level would you recommend transitioning from lightweight to full governance? What triggers should indicate that the transition is needed?

Exercise 27.15: Governance Failure Analysis
The chapter describes six common governance pitfalls: governance theater, one-size-fits-all, the last mile problem, innovation antagonism, static governance, and under-resourcing.
- (a) For each pitfall, describe a specific organizational behavior that would indicate the problem is occurring.
- (b) For each pitfall, propose one concrete countermeasure.
- (c) Which two pitfalls do you believe are most common, and why?
- (d) Are there circumstances under which two pitfalls could reinforce each other, creating a compound failure? Describe one such scenario.

Exercise 27.16: SR 11-7 and AI
The chapter describes how the Federal Reserve's SR 11-7 model risk management guidance, originally designed for traditional statistical models, can be applied to AI systems.
- (a) Identify three specific ways in which AI models are harder to validate than traditional statistical models.
- (b) For each challenge, propose an approach that extends traditional model validation practices to address it.
- (c) Should all organizations apply SR 11-7-style model risk management to their AI systems, or should it be limited to regulated financial institutions? Argue your position.


Section D: Research

Exercise 27.17: Framework Comparison
Research two AI governance frameworks not covered in depth in this chapter (e.g., Singapore's Model AI Governance Framework, Canada's Algorithmic Impact Assessment, the World Economic Forum's AI Governance Alliance recommendations, or a major technology company's published responsible AI framework).
- (a) Summarize each framework's key components, scope, and intended audience.
- (b) Compare them to the NIST AI RMF and ISO/IEC 42001 on three dimensions: scope, prescriptiveness, and enforceability.
- (c) Identify one innovative element in each framework that organizations should consider adopting even if they are not subject to the framework's jurisdiction.

Exercise 27.18: Industry Benchmarking
Research the AI governance practices of two companies that have publicly disclosed their frameworks (e.g., Microsoft, Google, IBM, Salesforce, or a company in your industry of interest).
- (a) Describe each company's governance structure: committee composition, risk classification system, and key policies.
- (b) Identify three strengths of each company's approach and one area for improvement.
- (c) How do the companies' governance frameworks compare to Athena's framework as described in the chapter?
- (d) What elements of their frameworks could be adapted for a smaller organization (under 1,000 employees)?

Exercise 27.19: Regulatory Alignment
Select a specific jurisdiction (EU, US, UK, Canada, Singapore, or another country with published AI governance guidance).
- (a) Identify the key regulatory requirements or guidelines for AI governance in that jurisdiction.
- (b) Map those requirements to the NIST AI RMF functions (Govern, Map, Measure, Manage). Which requirements align with which functions?
- (c) Identify any regulatory requirements that are not adequately addressed by the NIST AI RMF.
- (d) How would an organization operating in this jurisdiction need to customize its governance framework to meet local requirements?


Section E: Discussion and Debate

Exercise 27.20: Speed vs. Safety
For classroom debate or written argument. Patricia Huang's question to Ravi ("Is this going to slow down our AI projects?") reflects a tension at the heart of AI governance.
- Position A: Governance necessarily slows AI development, and this is acceptable because the cost of ungoverned AI failure far exceeds the cost of slower development.
- Position B: Effective governance should not slow AI development at all. If governance slows things down, the governance framework is poorly designed and needs to be streamlined.
Choose a position and argue it, using evidence from the chapter and your own reasoning.

Exercise 27.21: Who Should Serve on an Ethics Committee?
For classroom discussion. The chapter argues for diverse committee composition, including an external ethics advisor and an employee representative. Some organizations resist this, arguing that external members create confidentiality risks and that employee representatives create power dynamics that complicate decision-making.
- (a) What are the strongest arguments for including external members on an AI ethics committee?
- (b) What are the strongest arguments against?
- (c) How would you address the confidentiality concern?
- (d) Should affected communities (e.g., customers whose data is used, applicants screened by AI) have direct representation on ethics committees? Why or why not?

Exercise 27.22: The Unacceptable Risk Category
For classroom discussion. The chapter argues that the "unacceptable risk" category is the most important in the risk classification framework, and that if nothing ever lands in it, governance is performative.
- (a) Describe a scenario in which a clearly valuable AI system should be classified as "unacceptable risk." What makes it unacceptable despite its value?
- (b) How should an organization handle the revenue or efficiency loss from declining to deploy an unacceptable-risk system?
- (c) Who, if anyone, should have the authority to override an "unacceptable risk" classification? What safeguards would be needed?

Exercise 27.23: Governance and Competitive Advantage
For written reflection. Ravi argues that governance enables sustainable innovation. Tom initially sees governance as overhead, then comes to view it as the foundation of sustainable speed.
- (a) Under what market conditions is strong AI governance most likely to be a competitive advantage?
- (b) Under what conditions might strong governance be a competitive disadvantage?
- (c) Is there a first-mover advantage in AI governance? Do organizations that build governance early gain benefits that latecomers cannot replicate? Why or why not?


Section F: Integrative Application

Exercise 27.24: AI Governance Proposal
You have been hired as the first AI Governance Director at a mid-sized financial services firm ($5 billion AUM, 3,000 employees, 25 AI models in production). The CEO is supportive, but the CTO is skeptical, calling governance an "innovation tax." Prepare a comprehensive proposal (5-7 pages) including:
- (a) An executive summary making the business case for governance (not just the ethical case)
- (b) A proposed governance structure (committee, office, liaisons)
- (c) A risk classification framework tailored to financial services
- (d) A phased implementation plan (12 months)
- (e) A resource request (staff, budget, tools)
- (f) Metrics for measuring governance effectiveness after 12 months
- (g) A strategy for winning over the skeptical CTO

Exercise 27.25: Governance Maturity Assessment
Using the concepts from this chapter, develop a five-level AI Governance Maturity Model (analogous to the AI Maturity Model from Chapter 1). For each level:
- (a) Define the level with a descriptive name and a two-sentence summary
- (b) Describe the governance structures, policies, and practices characteristic of that level
- (c) Identify the key indicators that an organization is at that level
- (d) Describe what is required to advance to the next level
Apply your maturity model to Athena Retail Group: at which level was Athena before the HR screening crisis? At which level is it now, after implementing Ravi's framework? What would it take to reach the next level?
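One way to make a maturity model concrete is to express each level as a checklist of required practices and score an organization against it. The level names and practice identifiers below are illustrative assumptions; substitute the five levels and indicators from your own model.

```python
# A checklist-style maturity scorer for Exercise 27.25.
# Level names and required practices are illustrative assumptions,
# not the model the exercise asks you to design.

LEVELS = [
    ("1 - Ad hoc", set()),
    ("2 - Emerging", {"ai_inventory", "acceptable_use_policy"}),
    ("3 - Defined", {"ai_inventory", "acceptable_use_policy",
                     "risk_classification", "ethics_committee"}),
    ("4 - Managed", {"ai_inventory", "acceptable_use_policy",
                     "risk_classification", "ethics_committee",
                     "impact_assessments", "model_monitoring"}),
    ("5 - Optimizing", {"ai_inventory", "acceptable_use_policy",
                        "risk_classification", "ethics_committee",
                        "impact_assessments", "model_monitoring",
                        "governance_metrics", "continuous_improvement"}),
]

def maturity_level(practices: set[str]) -> str:
    """Return the highest level whose required practices are all present.
    Requirements are cumulative, so the levels are checked in order."""
    achieved = LEVELS[0][0]
    for name, required in LEVELS:
        if required <= practices:  # subset check
            achieved = name
    return achieved

print(maturity_level({"ai_inventory", "acceptable_use_policy"}))  # 2 - Emerging
```

Making the requirements cumulative enforces the intuition that an organization cannot reach level 4 while missing a level 3 practice, which is also a useful property to argue for (or against) in part (d).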

Exercise 27.26: Cross-Chapter Integration
Trace the connections between AI governance (this chapter) and three earlier chapters:
- (a) Chapter 22 (No-Code/Low-Code AI and Shadow AI): How does the governance framework address the shadow AI problem? What specific governance mechanisms are most relevant?
- (b) Chapter 25 (Bias in AI Systems): How would the governance framework described in this chapter have changed the trajectory of Athena's HR screening crisis? Walk through the specific governance mechanisms that would have intervened.
- (c) Chapter 26 (Fairness, Explainability, and Transparency): How does the governance framework incorporate the fairness and explainability tools discussed in Chapter 26? Where do they fit in the impact assessment, risk classification, and monitoring processes?
Then project forward:
- (d) Chapter 28 (AI Regulation — Global Landscape): How does internal governance relate to external regulation? Can strong internal governance reduce regulatory risk? Under what circumstances might internal governance and regulatory requirements conflict?


Answers to selected exercises are available in Appendix B.