Chapter 4: Exercises

Difficulty Scale:
- ⭐ Recall and comprehension
- ⭐⭐ Application and analysis
- ⭐⭐⭐ Synthesis and evaluation
- ⭐⭐⭐⭐ Original design and creative problem-solving

† = Recommended for in-class discussion or group work


Part A: Recall and Comprehension (⭐)

Exercise 1 ⭐ In your own words, define each of the following terms and give a concrete example of each in an AI context:
- Stakeholder
- Data subject
- Affected community
- Power asymmetry
- Principal-agent problem

Exercise 2 ⭐ List the eight tiers of the AI value chain described in Section 4.2, in order from most upstream (closest to raw research) to most downstream (most directly affected). For each tier, name one specific organization that exemplifies that tier.

Exercise 3 ⭐ True or false, with one sentence of explanation:
a) An AI system that does not use demographic data as an explicit input cannot produce demographically discriminatory outputs.
b) Compliance with applicable law is sufficient to establish that an AI deployment is ethical.
c) Data subjects necessarily have a user account or formal relationship with the AI system that processes their data.
d) Freeman's stakeholder theory argues that the sole purpose of a firm is to maximize shareholder returns.
e) The EU AI Act classifies all AI systems as high-risk and subjects them to the same requirements.

Exercise 4 ⭐ Name three internal organizational stakeholders in an AI-deploying company whose roles were discussed in Section 4.3. For each, describe in two to three sentences the specific source of ethical tension in their role — that is, where the incentives and requirements of their job may conflict with ethical AI development or deployment.

Exercise 5 ⭐ What is the "dual newspaper test"? Describe a hypothetical AI deployment decision that passes both tests, one that fails the first test (would be reported as harmful), and one that fails the second test (would be reported as needlessly cautious).


Part B: Application and Analysis (⭐⭐)

Exercise 6 ⭐⭐ † A national bank is deploying an AI credit-scoring system for personal loan applications. Using the five-step stakeholder analysis methodology from Section 4.6, conduct a preliminary stakeholder analysis. Your response should:
- Identify at least 12 distinct stakeholders or stakeholder groups
- Classify each by power (H/M/L) and interest (H/M/L)
- For three stakeholders in the "low power, high interest" quadrant, describe a specific engagement mechanism that goes beyond simple notification
- Identify the most significant potential stakeholder conflict in this deployment and explain how you would recommend addressing it
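If you tabulate the classification step programmatically, it helps to keep quadrant assignments consistent across a long stakeholder list. The following is a minimal, illustrative Python sketch: the stakeholder names and H/M/L ratings are hypothetical, and rounding a Medium rating up to High is one possible convention rather than anything the chapter prescribes.

```python
# Illustrative helper for the Power-Interest classification step.
# All stakeholder names and H/M/L ratings here are hypothetical.

QUADRANTS = {
    ("H", "H"): "manage closely",   # high power, high interest
    ("H", "L"): "keep satisfied",   # high power, low interest
    ("L", "H"): "keep informed",    # low power, high interest
    ("L", "L"): "monitor",          # low power, low interest
}

def quadrant(power: str, interest: str) -> str:
    """Map an H/M/L power rating and an H/M/L interest rating onto
    one of the four classic Power-Interest quadrants. A Medium rating
    is rounded up to High -- a deliberately engagement-heavy convention."""
    p = "H" if power in ("H", "M") else "L"
    i = "H" if interest in ("H", "M") else "L"
    return QUADRANTS[(p, i)]

# Hypothetical entries for the credit-scoring scenario.
stakeholders = {
    "bank risk and compliance team": ("H", "H"),
    "loan applicants": ("L", "H"),
    "prudential regulator": ("H", "M"),
    "general public": ("L", "L"),
}

for name, (power, interest) in stakeholders.items():
    print(f"{name}: {quadrant(power, interest)}")
```

A script like this only organizes the worksheet; the substantive judgments about who holds power and who bears interest still have to be argued for in the written analysis.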

Exercise 7 ⭐⭐ The Section 4.4 discussion of external stakeholders lists regulators including the FTC, EEOC, HHS, ICO, CNIL, and EU AI Office. For each of the following AI deployment scenarios, identify which regulatory body or bodies would have primary jurisdiction and explain why:
a) A US hospital uses AI to prioritize patient triage in its emergency department.
b) A UK e-commerce retailer uses AI to dynamically price products for individual users based on their browsing behavior.
c) A US employer uses an AI system to analyze video job interviews and score candidates on personality traits.
d) A French bank uses AI to make automated decisions on mortgage applications with no human review.
e) A US social media platform uses AI to recommend political content to US users in the months before a federal election.

Exercise 8 ⭐⭐ The Facebook emotional contagion experiment (Case Study 4.2) was defended in part on the grounds that Facebook's terms of service provided consent for research use of user data. Write a 400-word argument for the position that the TOS did provide adequate consent, and then a 400-word argument for the position that it did not. Conclude with a 200-word assessment of which argument you find more persuasive and why.

Exercise 9 ⭐⭐ † Interview exercise: Identify someone in your professional network who works in a role with direct AI-related responsibilities — data scientist, product manager, software engineer at an AI company, compliance officer at a company deploying AI, or similar. Conduct a 30-minute interview focused on the following questions:
- What stakeholders does your organization formally consider in AI development or deployment decisions?
- What stakeholder groups do you believe are underrepresented in those decisions?
- Have you ever experienced a conflict between what you believed was ethically right and what organizational incentives pushed you toward? How did you navigate it?

Write a 600-word reflection on what you learned from the interview, with reference to concepts from Chapter 4.

Exercise 10 ⭐⭐ Compare and contrast the stakeholder frameworks implicit in the EU GDPR and the US Federal Trade Commission's approach to AI governance. Your comparison should address: Who counts as a stakeholder with legally enforceable rights? What accountability mechanisms exist for each category of stakeholder? What happens when an AI system causes harm to a user — what are their practical options for recourse?


Part C: Synthesis and Evaluation (⭐⭐⭐)

Exercise 11 ⭐⭐⭐ † A city government is considering deploying an AI system to optimize the routing of its social services — home care visits, food assistance delivery, mental health outreach — across the city. The system would analyze demographic data, service utilization history, and need assessments to predict which residents require which services and prioritize resource allocation accordingly.

Conduct a comprehensive stakeholder analysis for this deployment, including:
- A complete stakeholder identification process with at least 15 distinct stakeholder groups
- A completed Stakeholder Analysis Worksheet (using the template format from Section 4.6)
- An analysis of the three most significant potential stakeholder conflicts
- A proposed community engagement process with specific mechanisms for the "low power, high interest" stakeholders
- An assessment of which global regulatory framework (EU, US, or other) would provide the strongest protections for affected community members and why

Your response should be 800-1,200 words.

Exercise 12 ⭐⭐⭐ The chapter introduces the concept of "ethics washing" — performing the signifiers of ethical commitment without implementing governance structures that actually constrain AI behavior. Using the structural indicators identified in the chapter (reporting structure, authority, resources, independence, transparency):

a) Design a fictional "ethics washing" responsible AI team — one that looks credible on paper but provides no genuine governance. Describe its structure, reporting lines, mandate, and typical activities.

b) Design a genuinely effective responsible AI team with real governance authority. How does it differ on each of the five structural indicators?

c) An organization's senior leadership is willing to invest in a responsible AI function but has not decided whether to create a genuine governance structure or an ethics-washing equivalent. Write a one-page memo from the perspective of the general counsel advising the CEO on which structure to adopt and why. The memo should address both ethical and legal/reputational risk dimensions.

Exercise 13 ⭐⭐⭐ The concept of "future generations" as stakeholders in AI governance raises genuine philosophical challenges. Future people do not exist yet and cannot represent themselves. Yet decisions made now about AI systems, AI infrastructure, and AI governance norms will significantly shape the world future generations inherit.

Write a structured analysis (600-900 words) addressing the following:
a) What are the strongest philosophical arguments for treating future generations as current stakeholders in AI governance decisions? What ethical frameworks support these arguments?
b) What are the strongest philosophical arguments against treating non-existent parties as current stakeholders? What frameworks support these arguments?
c) Drawing on precedents from environmental law, constitutional law, and long-term fiscal policy, propose at least two specific institutional mechanisms that could give future generations a meaningful (if indirect) voice in current AI governance decisions.
d) Apply your proposed mechanisms to a concrete AI governance decision — for example, the decision to develop autonomous weapons systems, or the decision to deploy persistent biometric surveillance infrastructure in public spaces.

Exercise 14 ⭐⭐⭐ The chapter describes the "feedback loop" problem in predictive policing: an algorithm trained on historically racially patterned arrest data produces predictions that direct police to the same neighborhoods, generating more arrests that reinforce the model's assessments in a self-perpetuating cycle.

Identify two other domains (not criminal justice) where this feedback loop dynamic is likely to occur in AI systems, and for each:
a) Describe the specific mechanism by which historical data patterns would be reproduced and amplified by an AI system trained on that data.
b) Identify the specific communities who would bear the costs of this amplification.
c) Propose a specific technical or governance intervention that could interrupt the feedback loop.
d) Explain what stakeholder engagement would be required to implement your proposed intervention effectively.

Exercise 15 ⭐⭐⭐ † The chapter notes that Global South communities are often "subject to AI systems designed elsewhere without representation in their design." This raises questions that go beyond domestic stakeholder analysis to questions of global power and technological sovereignty.

Case scenario: A major US-based fintech company is deploying an AI-powered mobile lending platform in three Sub-Saharan African countries. The platform uses alternative data — mobile phone usage patterns, social network characteristics, purchasing behavior — to make lending decisions for users who lack traditional credit histories. The system was designed and trained primarily on data from US and Latin American users.

Analyze this scenario using the chapter's frameworks:
a) Who are the key stakeholders, and how does the global variation in regulatory regimes (Section 4.8) affect their rights and recourse?
b) Apply the concept of "power asymmetry" to describe the relationship between the fintech company and the communities it is entering.
c) What specific risks does deploying a model trained on US/Latin American data in Sub-Saharan African contexts create for the users of this system?
d) Design a stakeholder engagement process that would be appropriate for this deployment, acknowledging the practical constraints of operating across international borders with limited local infrastructure.


Part D: Original Design and Creative Problem-Solving (⭐⭐⭐⭐)

Exercise 16 ⭐⭐⭐⭐ † Design a "Stakeholder Rights Charter" for AI deployments affecting more than 10,000 people in a single jurisdiction. Your Charter should:
- Define which categories of stakeholders have enforceable rights (not merely consultation rights)
- Specify what those rights are: to notice, to explanation, to challenge, to compensation, to representation in governance
- Establish governance mechanisms through which rights can be exercised
- Define minimum engagement standards that deploying organizations must meet before deployment
- Identify who administers the Charter and what enforcement mechanisms apply

Your Charter should be specific enough to be actionable — not a list of aspirational principles but a set of concrete requirements. It should be 600-800 words. Be prepared to defend your choices in class.

Exercise 17 ⭐⭐⭐⭐ Imagine you are the newly appointed Chief AI Ethics Officer at a Fortune 500 financial services company. Your mandate is to redesign the company's approach to AI ethics from an ethics-washing function (which is what your predecessor ran) to a genuine governance function. You have a budget of $5 million per year and the support of the CEO but significant resistance from the Chief Product Officer and the CTO, who view the ethics function as a bottleneck.

Write a 12-month transformation plan that:
- Defines the organizational structure and reporting lines you would establish
- Identifies the five most important governance mechanisms you would implement in year one
- Describes how you would address resistance from the CPO and CTO
- Specifies what success looks like at the end of year one — what metrics you would use and what changes you would point to as evidence that the function is genuine rather than performative
- Acknowledges the most significant risks to your plan and how you would mitigate them

Your plan should be 800-1,000 words.

Exercise 18 ⭐⭐⭐⭐ The chapter notes that the most consequential behavioral research is now conducted by technology companies without the IRB oversight that governs academic research. Design a proposed regulatory framework for AI behavioral research — research conducted by companies using AI systems to study the behavior of their users.

Your framework should address:
- What activities constitute "AI behavioral research" subject to your framework (defining the scope carefully to avoid capturing normal product development)
- What oversight body would review AI behavioral research protocols (government agency? independent board? industry body? academic-industry hybrid?)
- What standards would govern review: how would you adapt the Belmont Report's principles of respect for persons, beneficence, and justice to AI research contexts?
- What protections would be required for vulnerable populations (minors, people with mental illness, people in crisis situations)?
- What enforcement mechanisms would apply for violations?
- How your framework would interact with existing data privacy regulations

Your framework should be 1,000-1,500 words and should engage with specific examples from the chapter (including the Facebook emotional contagion case) to illustrate how it would apply.

Exercise 19 ⭐⭐⭐⭐ † Roleplay exercise (requires group of 5-8 participants):

Scenario: A mid-sized US city is holding a public hearing on a proposal to deploy AI-powered traffic surveillance cameras that would collect continuous video footage of city streets, use computer vision to identify traffic violations (running red lights, illegal parking, speeding), and use facial recognition capabilities to potentially identify individuals involved in serious traffic incidents.

Assign participants to the following roles:
- City traffic department commissioner (proponent)
- Civil liberties attorney representing the ACLU
- Community organizer from a neighborhood that would be heavily surveilled
- Small business owner from the business improvement district
- Academic researcher specializing in computer vision bias
- City council member facing election in the affected district
- Representative of the technology vendor (Motorola Solutions)
- Privacy advocate representing a digital rights organization

Each participant should prepare a 3-minute opening statement for the hearing, then participate in 15 minutes of structured dialogue. After the roleplay:
a) Each participant writes a 300-word reflection on what they learned from occupying their assigned stakeholder's perspective.
b) The group collectively identifies which stakeholder's perspective was most underrepresented in the hearing structure and why.
c) The group proposes three changes to the hearing format that would better represent the full range of affected stakeholders.

Exercise 20 ⭐⭐⭐⭐ Research project: Identify an AI system currently deployed in your local community — your city government, your employer, your university, a major local employer, a healthcare system, or a public utility. Conduct a complete stakeholder analysis of this AI deployment drawing on publicly available information, including any public records requests you can practically make.

Your analysis should include:
- A description of the AI system, its stated purpose, and its actual deployment context
- A complete stakeholder map using the Power-Interest matrix
- An assessment of who was engaged in the deployment decision, based on available public records
- Documentation of any stakeholder groups that appear to have been excluded
- An assessment of whether available information about the system's impacts is sufficient to evaluate its effects on affected communities
- A specific set of recommendations for how the deploying organization should improve its stakeholder engagement

This is a substantial research project. Allocate 8-12 hours for research and 4-6 hours for writing. Your final report should be 1,500-2,500 words.


Part E: Integrative Exercises

Exercise 21 ⭐⭐ Create a visual stakeholder map for the predictive policing scenario from the chapter's opening (or from Case Study 4.1). Your map should:
- Use the Power-Interest matrix as the organizing framework
- Identify at least 12 stakeholders
- Use visual conventions (size, color, connecting lines) to indicate the direction and nature of relationships between stakeholders
- Include a legend explaining your visual conventions
- Include a 200-word narrative summary of the most important relationships and tensions your map reveals

Exercise 22 ⭐⭐ † The chapter discusses four forms of AI stakeholder engagement: notification, consultation, participation, and co-design. For each of the following AI deployment scenarios, recommend the minimum appropriate level of stakeholder engagement for the most directly affected community, and justify your recommendation:
a) An online retailer deploys AI-powered product recommendations.
b) A school district deploys AI to predict which students are at risk of dropping out.
c) A hospital deploys AI to triage emergency room patients.
d) A state government deploys AI to make child protective services case routing decisions.
e) A private prison company deploys AI to assess inmates' risk levels for parole decisions.

Exercise 23 ⭐⭐⭐ The chapter presents an "Ethical Dilemma Box" about a company deploying an AI system that affects 500 customers and 50,000 community members who are not customers. Write a policy memo (400-600 words) from the perspective of the company's General Counsel advising the CEO on:
- The legal obligations (if any) to the 50,000 non-customer affected parties
- The ethical obligations (which may exceed legal obligations)
- A recommended approach to community engagement that is both ethically substantive and operationally realistic
- The reputational and legal risk of proceeding without engagement vs. the cost of the engagement process

Exercise 24 ⭐⭐ Using the vocabulary from this chapter, analyze the following statement: "Our AI system is completely objective because it uses only data, not human judgment."

Your analysis should:
- Identify the specific conceptual errors in this statement
- Explain, using the concept of "dirty data" and the feedback loop dynamic, why data-driven systems are not automatically objective
- Describe at least two scenarios in which an AI system that uses "only data" would produce outcomes that reflect human values and judgment embedded in that data
- Explain what a more defensible claim about AI objectivity or neutrality would look like

Exercise 25 ⭐⭐⭐⭐ † Capstone project (may be assigned as group project or final paper):

Design a complete stakeholder governance framework for the following scenario:

A regional health insurance company (covering approximately 800,000 members across three states) is developing an AI system that will:
1. Analyze member health data, claims history, and social determinants of health to identify members at high risk for chronic disease development.
2. Automatically enroll high-risk members in outreach programs and assign them care coordinators.
3. Flag certain high-cost claims for additional clinical review before authorization.
4. Provide members with AI-generated personalized health recommendations through the company's mobile app.

Your governance framework should address:
- Complete stakeholder identification and mapping (minimum 20 stakeholders)
- Governance structure: who has authority over what decisions?
- Community engagement processes for affected populations, with special attention to vulnerable groups (elderly, low-income, people with disabilities, racial and ethnic minority populations who may be underrepresented in training data)
- Data subject rights: what rights do members and non-member data subjects have, and how are those rights exercised?
- Monitoring and accountability: how will the organization detect and respond to emerging harms?
- External accountability mechanisms: what third-party oversight would you recommend?
- Compliance framework: which regulatory bodies have jurisdiction, and how does your governance framework satisfy their requirements?

Your framework should be presented as a professional document suitable for presentation to a board of directors — clear, structured, and actionable. Length: 2,500-4,000 words.