Chapter 15: Exercises

Communicating AI Decisions to Stakeholders

25 exercises ranging from individual reflection to group analysis to applied design projects.


Part A: Comprehension and Analysis (Exercises 1–8)

Exercise 1: Stakeholder Mapping Identify an AI system used in a domain you are familiar with (healthcare, finance, employment, education, public benefits, or another). Map all three levels of stakeholders: professional intermediaries, directly affected individuals, and affected communities. For each group, describe what meaningful communication about the AI system's outputs would require, applying the four criteria: clarity, accuracy, actionability, and dignity.

Exercise 2: Legal Requirements Review Research the current CFPB guidance on adverse action notices for machine learning credit models. Write a two-page summary that addresses: What does the CFPB currently require? Where does its guidance acknowledge gaps? What does the CFPB suggest as best practice beyond the minimum? How far does current guidance fall short of meaningful communication for ML credit decisions?

Exercise 3: Automation Bias Detection Read the original research by Mosier and Skitka (1996) on automation bias in aviation, and a more recent study applying automation bias concepts to clinical decision support. Synthesize the findings: What conditions increase automation bias? What conditions reduce it? What communication design features might reduce automation bias in a healthcare AI context?

Exercise 4: GDPR Article 22 Analysis Read GDPR Article 22 in full and the relevant recitals (particularly Recital 71). Then read the ICO's (UK Information Commissioner's Office) guidance on automated decision-making. Write an analysis of what "meaningful information about the logic involved" requires in practice for a machine learning credit scoring model. What information would satisfy the requirement? What information would not?

Exercise 5: Counterfactual Explanation Design An employer uses an AI resume screening system. A candidate, Priya, applied for a software engineering position and was screened out. The system weighted these factors: years of experience (30%), relevant project portfolio (25%), educational institution (20%), certifications (15%), other factors (10%). Design a counterfactual explanation for Priya that is: accurate about the system's assessment; actionable; plain-language; and respectful of her dignity. Then design a second version, this time assuming Priya is a non-native English speaker with limited familiarity with AI systems.
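To ground the counterfactual, it can help to work out what change would actually flip the decision under the stated weights. The sketch below uses the exercise's weighting scheme; the pass threshold and Priya's sub-scores are invented for illustration and are not part of the exercise:

```python
# Hypothetical weighted resume score and a simple one-factor counterfactual.
# The weights come from the exercise; THRESHOLD and Priya's sub-scores
# are assumptions made for illustration only.
WEIGHTS = {
    "experience": 0.30,
    "portfolio": 0.25,
    "institution": 0.20,
    "certifications": 0.15,
    "other": 0.10,
}
THRESHOLD = 0.70  # assumed pass mark

def score(features):
    """Weighted sum of sub-scores, each assumed to lie in [0, 1]."""
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

def counterfactual(features, factor, threshold=THRESHOLD):
    """Smallest increase in one factor's sub-score that clears the threshold,
    or None if even a perfect sub-score on that factor would not suffice."""
    gap = threshold - score(features)
    if gap <= 0:
        return 0.0
    needed = gap / WEIGHTS[factor]
    if features[factor] + needed > 1.0:
        return None  # this factor alone cannot flip the decision
    return round(needed, 3)

priya = {"experience": 0.7, "portfolio": 0.6, "institution": 0.8,
         "certifications": 0.5, "other": 0.6}

print(score(priya))                        # below the assumed 0.70 threshold
print(counterfactual(priya, "portfolio"))  # increase needed in portfolio
print(counterfactual(priya, "other"))      # None: "other" alone can't fix it
```

Note what the sketch exposes for the written explanation: a factor with low weight may be incapable of changing the outcome on its own, so an honest counterfactual should steer Priya toward the factors that can.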

Exercise 6: Uncertainty Translation A clinical AI tool for sepsis risk assessment reports: "Patient risk score: 82/100. Model AUC: 0.83. Sensitivity at this threshold: 73%. Specificity at this threshold: 79%." Your task: translate this information into: (a) a briefing for the attending physician, written for a clinical professional without statistical training; (b) a communication for the patient's family, who has no medical or technical background; (c) a question-and-answer guide for nurses using the tool. Each version should be accurate, appropriate for its audience, and complete.

Exercise 7: Appeal Process Audit Select a type of organization that uses AI for consequential decisions (a bank, an insurance company, a government benefits agency, or another). Research the appeal process it publicly describes for AI-influenced decisions. Evaluate it against the criteria for genuine appeal: accessibility, genuine human review, adequate timeline, transparent outcome. Write a two-page evaluation identifying where the process meets and fails these criteria, and proposing specific improvements.

Exercise 8: Communication Failure Investigation Research a documented case where failure to adequately communicate an AI decision caused harm (the Arkansas Medicaid case is one example; others include automated content moderation errors, hiring AI discrimination, or predictive policing). Write a case brief of 500–700 words that identifies: what was communicated; what should have been communicated; what harm resulted from the communication failure; and what organizational or regulatory changes might have prevented the harm.


Part B: Applied Design (Exercises 9–17)

Exercise 9: Adverse Action Notice Redesign The following is a fictional but representative adverse action notice: "Dear Applicant: We regret that we are unable to approve your application for a credit card at this time. The principal reasons for this decision are: (1) Too many recently opened accounts; (2) Number of recent inquiries; (3) Proportion of revolving accounts with balances." Redesign this notice to meet meaningful communication standards: specific reasons connected to the applicant's actual situation, actionable information, recourse options, and plain language. Write two versions: one for an applicant with strong financial literacy and one for an applicant with limited financial literacy.

Exercise 10: Clinical Communication Protocol You are the Chief Nursing Informatics Officer at a regional medical center that has recently deployed the Epic Deterioration Index. Design a clinical communication protocol that addresses: (a) what nurses should tell patients when a DI alert is triggered; (b) what training nurses should receive to communicate about the DI; (c) what documentation requirements should apply to DI-triggered clinical events; (d) what patient-facing material the hospital should produce about its use of clinical AI tools.

Exercise 11: Community Meeting Design A city government plans to deploy a predictive policing algorithm in three high-crime neighborhoods. Design a community engagement process that meets the participatory governance standard: What information would you present at a community meeting? How would you structure the meeting to enable genuine community input? What decisions would you allow community members to influence? How would you handle disagreement between community members and law enforcement about the value of the system?

Exercise 12: Training Module Outline Design a one-day training module for loan officers at a bank that uses AI-assisted credit decisions. The module should cover: what the AI system does and does not do; how to interpret confidence levels and feature importance; how to exercise genuine professional judgment alongside the AI; how to communicate with applicants about AI-influenced decisions; how to handle applicant challenges and escalations. For each component, specify learning objectives, instructional methods, and evaluation criteria.

Exercise 13: Disclaimer Reform Review five AI-related disclaimers from actual products or services (check terms of service for financial products, medical apps, or other AI-powered services). For each, evaluate: Is the disclaimer accurate? Is it meaningful to the intended user? Does it provide actionable information? Does it effectively communicate the system's limitations? Rewrite one of the disclaimers to be genuinely informative rather than merely protective of the organization's legal position.

Exercise 14: Accessibility Audit Select a real AI communication — an adverse action notice, a content moderation decision, a benefits determination letter — and audit it for accessibility. Does it meet plain-language standards (reading level, sentence complexity, vocabulary)? Is it available in multiple languages for communities the organization serves? Is it accessible to people with visual or cognitive disabilities? What specific changes would improve its accessibility?

Exercise 15: Feedback Loop Design You are the Head of AI Governance at a large insurance company. Design a feedback loop that connects customer communication outcomes back to model improvement. Specifically, address: How will you track which customer communications lead to challenges or appeals? How will you analyze patterns in challenges to identify potential model errors? What threshold of challenge rates should trigger model review? Who in the organization should receive feedback loop reports, and what authority should they have to act on them?
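A minimal sketch of the trigger logic such a feedback loop might use, to make the design questions concrete. The threshold, the minimum-volume guard, and the data shape are all assumptions for illustration, not a recommended policy:

```python
# Hedged sketch: flag decision segments whose challenge rate exceeds a
# review threshold. All constants and field names are assumptions.
from collections import Counter

REVIEW_THRESHOLD = 0.05   # assumed: review if >5% of decisions are challenged
MIN_DECISIONS = 200       # assumed: avoid noisy triggers on small segments

def segments_needing_review(decisions):
    """decisions: iterable of (segment, was_challenged) pairs, e.g. one
    record per customer communication. Returns segments to escalate."""
    totals, challenged = Counter(), Counter()
    for segment, was_challenged in decisions:
        totals[segment] += 1
        if was_challenged:
            challenged[segment] += 1
    return sorted(
        seg for seg, n in totals.items()
        if n >= MIN_DECISIONS and challenged[seg] / n > REVIEW_THRESHOLD
    )
```

Even this toy version surfaces the governance questions the exercise asks: who chooses the threshold, who sees the flagged segments, and whether a segment that is too small to trigger review can still conceal a systematic model error.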

Exercise 16: Vendor Contract Negotiation You are the General Counsel of a hospital preparing to purchase a clinical decision support AI tool from a vendor. The vendor's standard contract includes broad confidentiality clauses that would prevent the hospital from disclosing details about the algorithm's methodology, training data, or performance metrics to patients, regulators, or researchers. Draft the contract modifications you would demand. For each modification, explain the legal and ethical rationale.

Exercise 17: Public Policy Brief Write a 1,000-word policy brief addressed to the Consumer Financial Protection Bureau recommending specific regulatory changes to adverse action notice requirements for AI-based credit decisions. Your brief should: identify the gap between current requirements and meaningful communication; propose specific changes to regulatory text or guidance; address potential objections from the financial industry; and estimate the benefits to consumers and costs to industry of your proposed changes.


Part C: Case Discussion and Role Play (Exercises 18–22)

Exercise 18: Hospital Ethics Committee Role-play scenario: A patient's family has discovered that the hospital used an AI model to recommend transitioning their 78-year-old father to palliative care. The family is demanding: (a) access to the model's recommendation and the data it used; (b) an explanation of the model's accuracy and limitations; (c) a re-evaluation by physicians who are not using the model. The hospital ethics committee must decide how to respond. Divide into groups representing the family, the attending physician, the hospital administration, and the AI vendor. Each group presents its position and interests, and the full group seeks a resolution.

Exercise 19: Regulatory Enforcement Simulation Role-play scenario: A state benefits agency is using an AI system to determine eligibility for housing assistance. A legal aid organization has filed a complaint with the state attorney general alleging that the agency's decision notices are constitutionally inadequate — they do not disclose AI involvement, do not explain the factors driving decisions, and do not provide a meaningful appeal process. Divide into groups representing the legal aid organization, the agency, and the attorney general's office. Each group develops its legal and policy arguments, and the class debates what remedy the attorney general should seek.

Exercise 20: Community Forum Role-play scenario: A transit authority is deploying an AI system to monitor for "suspicious behavior" on public transit, using computer vision and behavioral analytics. A community forum has been convened at which members of the public can comment. Divide into roles: transit authority representatives, civil liberties advocates, community members from neighborhoods with high transit use, disability rights advocates, and city council members. Each group presents its perspective; the forum must reach at least partial consensus on disclosure and accountability requirements.

Exercise 21: Patient Communication Practice In pairs: one student plays a physician, one plays a patient. The "patient" has been told by the hospital's AI-assisted treatment planning tool that continuing chemotherapy is unlikely to improve her prognosis. The "physician" must communicate this information honestly, including: what the AI system found; what the physician's own clinical judgment is; what the patient's options are; what information the patient can request; and how to proceed if the patient wants a second opinion. After the role play, discuss: What was difficult? What communication strategies worked? What would need to change in a real clinical setting for this communication to occur consistently?

Exercise 22: AI Communication Audit Working in teams of three or four, conduct a communication audit of a real organization's AI disclosures. Select a company in financial services, healthcare, or employment that publicly discloses its use of AI (through terms of service, privacy policy, or published statements). Evaluate: What does the company disclose about AI's role in decisions? What does it disclose about how to challenge AI-influenced decisions? How does its disclosure compare to regulatory requirements? How does it compare to meaningful communication standards? Present your findings as a structured audit report.


Part D: Research and Extended Analysis (Exercises 23–25)

Exercise 23: Comparative Regulatory Analysis Compare the AI communication requirements in at least four jurisdictions: the United States (federal), the European Union, the United Kingdom, and one additional jurisdiction of your choice (Canada, Australia, China, or another). For each, identify: What are the primary AI communication requirements? What enforcement mechanisms exist? What have been the notable enforcement actions or regulatory decisions? What gaps remain? Write a 2,000-word comparative analysis that concludes with your assessment of which jurisdiction has the most effective AI communication regulatory framework and why.

Exercise 24: Empirical Research Design Design an empirical study to evaluate the effectiveness of different adverse action notice formats in enabling loan applicants to understand and act on AI credit decisions. Specify: the research question; the study population; the communication formats you would test (at least three); how you would measure comprehension, actionability, and dignity; the ethical considerations in your research design; and what results would tell you about policy recommendations. You do not need to conduct the study — design it rigorously enough that it could be submitted for IRB review.

Exercise 25: Strategic Recommendation You have been hired as an AI ethics consultant by a large US retail bank that is planning to expand its use of ML models in credit decisions. The bank currently provides the legal minimum adverse action notices. Leadership is divided: the Chief Risk Officer wants to expand disclosures to improve customer trust and reduce regulatory risk; the Chief Marketing Officer is concerned that more detailed disclosures will help customers "game" the credit system; the General Counsel is concerned about creating legal exposure by disclosing more information. Write a strategic recommendation of 1,500–2,000 words that addresses all three perspectives, recommends a specific disclosure policy, and explains how to implement it in a way that serves customer communication needs while managing the organization's legitimate concerns.