Chapter 6: Exercises

Difficulty Scale:
- ⭐ Foundational — tests comprehension of key concepts
- ⭐⭐ Developing — applies concepts to new contexts
- ⭐⭐⭐ Challenging — requires synthesis and critical evaluation
- ⭐⭐⭐⭐ Advanced — requires original analysis, design, or extended argument

† Denotes exercises particularly suited for in-class discussion or team-based work


Part A: Comprehension and Concept Application

Exercise 1 ⭐ Define "AI governance" in your own words, and explain how it differs from "AI ethics." Use a concrete example to illustrate the distinction: what would a company have if it had strong AI ethics but weak AI governance? What would it have if it had strong AI governance but weak AI ethics?


Exercise 2 ⭐ The chapter identifies three levels at which AI governance operates. For each of the following governance mechanisms, identify which level it operates at (organizational, industry, or societal) and briefly explain your reasoning:

a) The NIST AI Risk Management Framework
b) The EU AI Act's prohibition on social scoring systems
c) An internal AI project ethics checklist
d) IEEE's Ethically Aligned Design framework
e) The FTC's enforcement actions against deceptive AI claims
f) A company's responsible AI team
g) The OECD AI Principles
h) New York City's Local Law 144 requiring bias audits of hiring AI


Exercise 3 ⭐ Match each term on the left with its correct definition on the right:

Term             Definition
Hard law         Adversarial testing of AI systems to find failure modes
Soft law         The gap between AI's pace and governance capacity
Red-teaming      Legally binding rules with state enforcement
Model card       Non-binding governance instruments
Governance gap   Documentation of a model's design, data, and limitations
Ethics washing   Performance of ethical commitment without operational substance

Exercise 4 ⭐ The EU AI Act classifies AI systems into four risk tiers. For each of the following AI applications, identify the most appropriate risk tier and briefly justify your answer:

a) A spam filter for email
b) A system that ranks job applications for human review
c) A government system that scores citizens' social behavior to determine access to services
d) A chatbot for a retail company's customer service
e) An AI system used in clinical diagnosis
f) A recommendation algorithm for a streaming service
g) A predictive policing algorithm used by law enforcement
h) An AI-powered grammar checker in word processing software


Exercise 5 ⭐⭐ † The chapter proposes a test of whether your organization's AI principles are substantive: "has the principles review process ever blocked a revenue-generating project?"

a) Evaluate this test: is it a fair and sufficient test of whether AI principles are substantive? What are its limitations?
b) Can you think of scenarios where governance processes never blocking a project would be consistent with genuine governance? What would those scenarios look like?
c) What alternative tests might complement this one for assessing whether AI governance is genuine or performative?


Exercise 6 ⭐⭐ Compare Microsoft's AI governance approach (Case Study 6.1) with Facebook's stated governance approach (Case Study 6.2). Use a structured comparison:

a) What formal governance structures did each company have?
b) What authority did those structures have?
c) What transparency did each company provide about governance outcomes?
d) What evidence exists that the governance structures constrained commercial decisions?
e) What is your overall assessment of the governance quality in each case, and why?


Part B: Analysis and Evaluation

Exercise 7 ⭐⭐ † The chapter presents arguments both for and against industry self-regulation. Consider the following self-regulatory initiative: the Partnership on AI's tenets and research outputs.

a) Apply the "case for" arguments to PAI: how does PAI score on speed, technical expertise, and flexibility? b) Apply the "case against" arguments: how does PAI score on capture risk, enforcement, and standard quality? c) What would PAI need to change to address the most serious objections to industry self-regulation? d) Is PAI genuinely better than nothing, or is it primarily ethics washing? Support your position with specific reasoning.


Exercise 8 ⭐⭐ The governance gap has six structural dimensions identified in the chapter: the pacing problem, the expertise gap, the capture problem, the jurisdiction problem, the definitional problem, and the enforcement problem.

For each dimension, identify:
a) One concrete example of that dimension causing real harm in a documented AI governance failure
b) One proposed remedy and an honest assessment of that remedy's limitations

Present your analysis in a structured table or memo format.


Exercise 9 ⭐⭐ Read the following scenario and apply the EU AI Act's risk framework:

A large retailer has deployed an AI system that analyzes customer purchase history, in-store movement patterns captured by cameras, facial recognition, and social media data to predict customer churn risk and personalize in-store staff interactions. The system assigns each customer a "value score" that determines whether they receive priority service, promotional offers, or targeted upselling approaches.

a) What risk tier would this system most likely fall into under the EU AI Act? Justify your classification.
b) What specific requirements would that classification impose on the retailer?
c) Are any elements of this system potentially in the "unacceptable risk" category? Which elements, and why?
d) How would this system be regulated under the current US approach? What gaps do you identify?


Exercise 10 ⭐⭐⭐ † The Myanmar case (Case Study 6.2) showed that Facebook's platform amplified anti-Rohingya content for years before generating sufficient regulatory or reputational consequences to prompt action. The chapter argues this reflects governance designed around legal and reputational risk rather than human rights risk.

a) What would a human rights risk assessment of Facebook's launch in Myanmar have looked like, conducted in 2012 when Facebook was beginning to expand aggressively in the region?
b) What information was available at that time about the ethnic tensions in Myanmar, the role of hate speech in generating violence, and the likely amplification dynamics of Facebook's algorithm?
c) What governance actions would a company that had conducted such an assessment, and taken it seriously, have taken?
d) What organizational and structural conditions would have needed to exist at Facebook to make (c) a realistic outcome, rather than a theoretical one?


Exercise 11 ⭐⭐⭐ The chapter notes that international AI governance faces the challenge that major AI powers have genuinely different values: the EU prioritizes fundamental rights and democratic accountability; the US emphasizes market freedom and national security; China pursues state-directed development and social stability.

a) Identify three specific governance requirements that the EU would likely insist on, the US would likely resist, and China would likely oppose.
b) Identify two governance requirements that you think all three could agree to, and analyze why agreement would be possible in those cases.
c) What does the pattern of disagreements and potential agreements reveal about the underlying political dynamics of international AI governance?
d) Is there a principled basis for determining which values should prevail in international AI governance, or is this simply a question of which actors have more power?


Exercise 12 ⭐⭐⭐ The NIST AI RMF identifies "Govern" as the foundational function that makes "Map," "Measure," and "Manage" possible.

a) In what sense is "Govern" foundational? What would happen if an organization implemented Map, Measure, and Manage without first addressing Govern? b) Design an implementation roadmap for a mid-size financial services company (1,500 employees, significant use of algorithmic credit decisioning) to implement the four NIST AI RMF functions over 18 months. Specify: what actions are required in each phase, who is responsible, what resources are required, and what success looks like. c) What are the three biggest risks to successful implementation of your roadmap, and how would you mitigate them?


Part C: Design and Application

Exercise 13 ⭐⭐⭐ † Design Exercise: AI Ethics Review Process

You have been hired as a consultant to design an AI ethics review process for a healthcare system (a large regional hospital network with 12,000 employees) that is deploying AI for clinical decision support, administrative automation, and patient communication.

Design a review process that addresses:
a) Which AI projects trigger formal ethics review (criteria for inclusion)
b) Who sits on the review body (composition, expertise, independence)
c) What the review process actually involves (stages, timeline, documentation)
d) What authority the review body has (advisory, approval, blocking authority)
e) What happens when the review identifies serious concerns
f) How the process is monitored and evaluated over time

Present your design in a structured format suitable for presentation to the hospital's board of directors.


Exercise 14 ⭐⭐⭐ † Design Exercise: Vendor Procurement Standards

Your organization is evaluating three AI vendors for a new employee performance management system that will provide algorithmic assessments of employee productivity, flag performance concerns to managers, and inform compensation decisions.

Design a responsible AI procurement standard that you would require all three vendors to meet. Your standard should address:
a) Documentation requirements (what must vendors provide about the system)
b) Fairness and non-discrimination requirements (what testing, at what thresholds)
c) Transparency requirements (what information employees are entitled to)
d) Audit rights (what your organization can access and verify)
e) Incident reporting obligations (what vendors must tell you and when)
f) Contractual remedies if the vendor fails to meet standards

Critically assess: are there circumstances under which you should decline to deploy this type of system entirely, regardless of how good the vendor's governance is?


Exercise 15 ⭐⭐⭐⭐ † Extended Design Exercise: Organizational AI Governance System

You are the newly appointed Chief AI Ethics Officer of a major e-commerce platform (50 million monthly active users, 8,000 employees, significant use of AI in product recommendations, dynamic pricing, fraud detection, customer service, and supply chain management).

The company has:
- Published AI principles (fairness, transparency, accountability) but no operational governance structures
- No dedicated responsible AI function
- No ethics review process for AI projects
- No documentation standards for AI systems
- Several documented customer complaints about discriminatory pricing and biased search results

Design a comprehensive organizational AI governance system. Your design should address:
a) What governance structures you will build, and why
b) What the reporting lines, authority, and composition of those structures will be
c) What processes you will implement (ethics review, documentation, incident response, red-teaming)
d) What the phased implementation timeline will look like (you cannot build everything at once)
e) How you will demonstrate that governance is genuine rather than performative
f) What metrics you will use to evaluate governance effectiveness
g) How you will handle the cultural change required alongside the structural change

Submit as a governance strategy memo to the CEO and Board of Directors.


Exercise 16 ⭐⭐⭐⭐ Comparative Regulatory Analysis

You are advising a startup building an AI-powered hiring platform (CV screening, video interview analysis, and candidate ranking) that will be sold to employers in the US, EU, and UK.

Conduct a comparative regulatory analysis:
a) What requirements does each jurisdiction impose on this type of AI system?
b) Where do the requirements conflict, and how would you resolve conflicts?
c) What is the most demanding standard across all three jurisdictions, and what would compliance with that standard require?
d) What recommendations would you make for the company's product design and governance to enable responsible multi-jurisdictional operation?
e) Are there markets you would advise the company to exit or not enter based on regulatory or ethical grounds? Justify your recommendation.


Exercise 17 ⭐⭐ Consider the following statement from a fictional company's annual report:

"We are committed to the responsible development of AI. Our AI Ethics Council met four times last year and reviewed 23 AI projects. We participate in the Partnership on AI and have adopted the NIST AI RMF. Our AI principles are embedded in our engineering processes. We believe AI has the potential to benefit humanity and we take our governance responsibilities seriously."

a) What information is present that is consistent with genuine governance?
b) What information is absent that would be necessary to evaluate whether the governance is genuine?
c) Write a list of 10 specific questions you would ask this company's leadership to determine whether the governance is substantive or performative.
d) What would constitute a satisfactory answer to each of your questions?


Exercise 18 ⭐⭐ The chapter argues that governance bodies need genuine independence — reporting lines that bypass business units under review, employment protections, and external credibility.

a) What are the practical objections to full independence of a governance function within a corporation? Who would raise those objections, and why?
b) Design a compromise governance structure that provides meaningful (if not complete) independence for an AI ethics committee at a mid-size technology company, while remaining organizationally feasible.
c) At what point does the compromise become so significant that the governance body is effectively co-opted? Where is the line?


Exercise 19 ⭐⭐⭐ † The Governance Culture Diagnostic

The chapter argues that governance culture — psychological safety, leadership modeling, aligned incentives — matters as much as governance structure. Design a diagnostic tool for assessing the governance culture of an organization with respect to AI.

Your diagnostic should:
a) Identify 8-12 observable indicators of genuine governance culture (not structure)
b) For each indicator, describe how you would assess it (what questions would you ask, what documents would you review, what behaviors would you observe)
c) Develop a simple scoring methodology that could be used to compare governance culture across organizations or across time
d) Identify the three indicators you believe are most predictive of genuine vs. performative governance, and explain why
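As a starting point for the scoring methodology, a weighted rubric is often enough. The Python sketch below is a hypothetical illustration only: the indicator names, weights, and 0-3 rating scale are invented for this example, not drawn from the chapter, and your diagnostic should substitute the indicators you develop in parts (a) and (b).

```python
# Hypothetical starter rubric for a governance-culture score.
# Indicator names and weights are illustrative, not from the chapter.
INDICATORS = {
    "staff_raise_concerns_without_retaliation": 3,  # weight
    "leadership_discusses_governance_tradeoffs": 2,
    "ethics_review_findings_change_roadmaps": 3,
    "incentives_reward_flagging_risk": 2,
}

def score_org(ratings: dict) -> float:
    """Each rating runs 0 (absent) to 3 (consistently observed).
    Returns a weighted score normalized to 0-100 so that
    organizations (or the same organization over time) are comparable."""
    total = sum(weight * ratings[name] for name, weight in INDICATORS.items())
    max_total = sum(weight * 3 for weight in INDICATORS.values())
    return round(100 * total / max_total, 1)

# Example assessment of one (fictional) organization:
print(score_org({
    "staff_raise_concerns_without_retaliation": 2,
    "leadership_discusses_governance_tradeoffs": 1,
    "ethics_review_findings_change_roadmaps": 3,
    "incentives_reward_flagging_risk": 0,
}))  # → 56.7
```

A single number hides as much as it reveals, so a real diagnostic would report per-indicator ratings alongside the aggregate and justify each weight.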


Exercise 20 ⭐⭐ The Facebook Oversight Board has genuine independence but limited jurisdiction — it can review specific content cases but not algorithmic design decisions.

a) Draft a revised mandate for an oversight body that would have jurisdiction over Facebook's (Meta's) algorithmic systems, not just content case decisions.
b) What authority would this body need to be effective?
c) How would you ensure genuine independence given that Facebook would presumably need to fund or facilitate the body's operations?
d) What are the strongest objections to your proposed mandate, and how would you address them?


Exercise 21 ⭐⭐ The chapter notes that the definitional problem — defining "AI" in law — is harder than it appears.

Review the EU AI Act's definition of an AI system. Then:
a) Identify three systems that are clearly covered by this definition and three that are clearly not covered.
b) Identify two "edge cases" — systems where it is genuinely unclear whether the definition applies.
c) What governance implications follow from the definitional ambiguity? Who benefits from ambiguity, and who is harmed by it?


Exercise 22 ⭐⭐⭐ † Stakeholder Representation Exercise

The chapter argues that AI governance bodies need diverse representation, particularly of people likely to be affected by the AI systems being governed.

Imagine you are designing the AI governance committee for a company that provides AI-powered credit scoring to banks and mortgage lenders. The committee needs to include both technical/business expertise and stakeholder representation.

a) Who are the affected stakeholders for this type of AI system?
b) How would you structure meaningful (not merely symbolic) representation of those stakeholders on a governance committee?
c) What structural protections would affected community representatives need to have genuine influence?
d) What are the practical challenges of including non-expert community members on a technically complex governance body? How would you address them?


Exercise 23 ⭐⭐ The chapter discusses AI incident response as a key governance mechanism. Design an AI incident response protocol for a bank that uses algorithmic loan decisioning.

Your protocol should address:
a) Detection: how will the bank become aware that its system is causing harm?
b) Classification: how will the bank assess the severity and scope of an incident?
c) Escalation: who is notified, in what order, with what urgency?
d) Response: who has authority to modify or suspend the system?
e) Remediation: how are harms to affected borrowers addressed?
f) Post-incident review: what process determines systemic changes?
g) Disclosure: what obligations exist to inform regulators, affected parties, and the public?
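One common way to make the classification step concrete is a severity matrix that drives the escalation path. The Python sketch below is a hypothetical illustration only: the severity levels, the 100-borrower threshold, and the input signals are invented for this example, and your protocol should define its own.

```python
# Hypothetical severity matrix for an algorithmic-lending incident.
# Levels, thresholds, and signals are illustrative, not from the chapter.
def classify_incident(borrowers_affected: int,
                      adverse_decisions: bool,
                      protected_class_disparity: bool) -> str:
    """Map incident facts to a severity level that determines
    who is notified and how fast (the escalation step)."""
    if protected_class_disparity and adverse_decisions:
        return "SEV1"  # suspected discriminatory harm: immediate escalation
    if adverse_decisions and borrowers_affected >= 100:
        return "SEV2"  # confirmed material harm at scale
    if adverse_decisions:
        return "SEV3"  # confirmed but limited harm
    return "SEV4"      # anomaly detected, no confirmed harm yet

print(classify_incident(500, True, False))  # → SEV2
```

A real protocol would also record who applies the classification, how contested classifications are appealed, and how a SEV level maps to notification deadlines.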


Exercise 24 ⭐⭐⭐ † The chapter ends with the question of democratic legitimacy: who should decide how AI is governed?

a) Identify at least four different mechanisms through which democratic input into AI governance could be structured (beyond conventional electoral politics).
b) For each mechanism, assess: what are its strengths? What are its limitations? Who would it amplify, and who would it marginalize?
c) Which combination of mechanisms do you think would produce the most legitimate AI governance, and why?
d) Is "democratic legitimacy" even the right frame for AI governance, or are there alternative legitimacy frameworks that might be more appropriate for some governance domains?


Exercise 25 ⭐⭐⭐⭐ † Capstone Governance Audit

Working in teams, conduct a governance audit of a publicly accessible AI system of your choice (options include: a major platform's content recommendation system, an AI-powered hiring tool with a public-facing interface, a government-deployed AI system with public documentation, or another AI system with sufficient public information available).

Your audit should:
a) Describe the AI system: what does it do, who does it affect, and what harms could it cause?
b) Describe the governance infrastructure you can identify from public sources: what principles, policies, committees, documentation, and oversight mechanisms exist?
c) Apply the governance evaluation criteria from Section 6.7 (authority, independence, diversity, documentation, accountability, iteration, transparency): how does the governance score on each dimension?
d) Identify the three most significant governance gaps: what is missing, why does it matter, and what harm could it enable?
e) Make three specific, actionable governance recommendations.
f) Assess your confidence in your audit conclusions: what information would you need, but do not have, to make more definitive assessments?

Present as a structured audit report with findings and recommendations.