Chapter 36: Exercises — AI in Healthcare Decision-Making

25 Exercises for Business and Policy Professionals


Clinical AI Foundations

Exercise 1: Clinical AI Inventory Select a large academic medical center (use publicly available information; most major health systems announce their technology adoptions). Research what clinical AI tools the health system has announced deploying. For each tool you identify, attempt to determine: (a) the FDA authorization status; (b) the training data methodology; (c) any published independent validation studies; (d) any documented equity concerns. What governance gaps do you identify based on publicly available information alone?

Exercise 2: Augmentation vs. Automation Spectrum Analysis For each of the following clinical AI applications, assess where on the augmentation-to-automation spectrum the tool falls in its typical deployment: (a) AI triage in emergency departments; (b) AI-assisted chest X-ray reading with radiologist review; (c) fully automated diabetic retinopathy screening with no physician review; (d) AI sepsis alert requiring physician acknowledgment; (e) AI-generated medication dosing recommendation in the EHR. For each, identify what conditions would be necessary for the human oversight to be meaningful rather than nominal.

Exercise 3: The Automation Bias Interview With appropriate IRB guidance or in a classroom simulation context, design an interview study to assess automation bias in clinical decision-making. What questions would you ask clinicians about their use of AI tools? What behavioral patterns would you look for? If you are able to interview a clinician, nurse, or health informaticist with experience in AI-assisted care, conduct the interview and analyze the responses for evidence of automation bias. Write a 500-word reflection on what you found.

Exercise 4: Early Warning Score Comparison Research the National Early Warning Score (NEWS) — a widely used standardized early warning score based on simple vital sign measurements — and compare it to the description of Epic's Deterioration Index. What are the relative advantages and disadvantages of each approach? What would a rigorous head-to-head clinical trial comparing the two approaches need to demonstrate to justify replacing NEWS with an AI-based system in a clinical setting?
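One reason NEWS is useful as a comparator is that it is fully specified and auditable. The sketch below implements NEWS2-style scoring in Python; the thresholds are transcribed from the published Royal College of Physicians chart (oxygen saturation scale 1), but verify them against the official tables before relying on them.

```python
def news2_score(resp_rate, spo2, on_oxygen, sbp, pulse, alert, temp):
    """Toy NEWS2-style aggregate score (SpO2 scale 1).

    Thresholds transcribed from the published Royal College of Physicians
    chart; for illustration only, not clinical use.
    """
    def band(value, bands):
        # bands: list of (upper_bound_inclusive, points), ordered ascending;
        # the final band uses infinity as its upper bound.
        for upper, points in bands:
            if value <= upper:
                return points
        return bands[-1][1]  # unreachable given the infinite last bound

    inf = float("inf")
    score = 0
    score += band(resp_rate, [(8, 3), (11, 1), (20, 0), (24, 2), (inf, 3)])
    score += band(spo2, [(91, 3), (93, 2), (95, 1), (inf, 0)])
    score += 2 if on_oxygen else 0
    score += band(sbp, [(90, 3), (100, 2), (110, 1), (219, 0), (inf, 3)])
    score += band(pulse, [(40, 3), (50, 1), (90, 0), (110, 1), (130, 2), (inf, 3)])
    score += 0 if alert else 3  # any non-alert ACVPU state scores 3
    score += band(temp, [(35.0, 3), (36.0, 1), (38.0, 0), (39.0, 1), (inf, 2)])
    return score

# A patient with entirely normal observations scores 0.
print(news2_score(16, 98, False, 120, 72, True, 37.0))  # 0
```

The contrast worth noting for this exercise: every threshold above is public and inspectable, whereas the logic of a proprietary deterioration index is not, which changes what a head-to-head trial and subsequent governance must accomplish.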


IBM Watson for Oncology

Exercise 5: The Evidence Gap Mapping Based on the case study and any additional research you can conduct using publicly available sources, create a timeline mapping: (a) IBM's key marketing claims and announcements about Watson for Oncology; (b) the evidence available at each point about Watson's actual clinical performance; (c) the gap between what was claimed and what was evidenced. What specific information was publicly known at each stage? What information was kept internal?

Exercise 6: Clinical Procurement Due Diligence Protocol You are the Chief Medical Officer of a health system evaluating whether to purchase Watson for Oncology in 2015, before the STAT News investigation. Based on what would have been reasonable due diligence standards at the time, what questions should you have asked IBM? What information should you have demanded? With the benefit of hindsight, what additional standards should have applied? Draft a due diligence protocol for clinical AI procurement that would have been appropriate in 2015 and contrast it with the standards that should apply today.

Exercise 7: Training Data Ethics Watson was trained on hypothetical cases curated by MSKCC oncologists rather than on real patient outcomes data. Discuss the ethical dimensions of this training choice: (a) What assumptions about the validity of expert clinical judgment were embedded in this choice? (b) What alternative training methodologies were available? (c) What informed consent obligations would arise from training on real patient outcomes data? (d) Who owns the training data when it is derived from real patient records — the patient, the hospital, or the AI company?

Exercise 8: International Deployment Governance Watson for Oncology was deployed in hospitals in India, South Korea, Thailand, and multiple other countries without country-specific clinical validation. Design a governance framework that any AI clinical tool should be required to complete before international deployment. Your framework should address: local validation requirements; regulatory compliance in the destination country; assessment of availability of drugs and resources recommended by the AI; and communication to local clinicians about the tool's limitations in their specific context.


FDA Regulation and Clinical AI

Exercise 9: SaMD Classification Exercise Apply the FDA's SaMD risk classification framework to the following clinical AI applications: (a) a chatbot that provides general wellness information about diet and exercise; (b) an AI system that analyzes CT scans to detect pulmonary nodules and flags findings for radiologist review; (c) an AI system that automatically adjusts insulin dosing in a closed-loop glucose monitoring system; (d) an AI tool that predicts a patient's suicide risk from electronic health record data; (e) a natural language processing tool that extracts clinical data from physician notes for quality measurement. For each, identify the risk level and what regulatory pathway would apply.
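The IMDRF framework the FDA draws on crosses the state of the healthcare situation (critical / serious / non-serious) with the significance of the information the software provides (treat or diagnose / drive clinical management / inform clinical management). As a working aid for this exercise, that matrix can be written as a simple lookup; the category assignments below are transcribed from the IMDRF N12 document and should be confirmed against it.

```python
# IMDRF SaMD risk categories (I lowest .. IV highest), keyed by
# (state of healthcare situation, significance of information provided).
# Transcribed from IMDRF/SaMD WG/N12; confirm against the source document.
SAMD_CATEGORY = {
    ("critical", "treat_or_diagnose"): "IV",
    ("critical", "drive_management"): "III",
    ("critical", "inform_management"): "II",
    ("serious", "treat_or_diagnose"): "III",
    ("serious", "drive_management"): "II",
    ("serious", "inform_management"): "I",
    ("non-serious", "treat_or_diagnose"): "II",
    ("non-serious", "drive_management"): "I",
    ("non-serious", "inform_management"): "I",
}

# Example: closed-loop insulin dosing treats a critical condition.
print(SAMD_CATEGORY[("critical", "treat_or_diagnose")])  # IV
# Example: a tool that merely informs radiologist review of a serious
# condition sits much lower in the matrix.
print(SAMD_CATEGORY[("serious", "inform_management")])  # I
```

Note that the hard part of the exercise is not reading the matrix but justifying which cell each tool belongs in; the wellness chatbot, for instance, may fall outside SaMD entirely.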

Exercise 10: Predetermined Change Control Plan Design Design a predetermined change control plan (PCCP) for a hypothetical AI sepsis prediction model that will update quarterly based on new patient data. Your PCCP should specify: (a) what types of updates are anticipated; (b) what performance specifications must be maintained across updates; (c) how performance will be tested before an update is deployed; (d) what constitutes a significant performance change requiring full FDA review; (e) what notification is owed to deploying hospitals when an update is implemented. Reference the FDA's 2021 AI/ML Action Plan in your analysis.
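One way to make the "performance specifications" element of a PCCP concrete is as an automated gate that every candidate update must pass before deployment. The sketch below illustrates the idea; the metric (AUC), the thresholds, and the subgroup names are all illustrative assumptions, not FDA requirements.

```python
def update_passes_gate(baseline, candidate, subgroups,
                       min_auc=0.80, max_auc_drop=0.02, max_subgroup_gap=0.05):
    """Illustrative PCCP gate for a model update.

    A candidate update is deployable only if overall AUC stays above a
    specification floor, does not regress materially from the baseline
    model, and no demographic subgroup falls too far below the overall
    figure. All thresholds are hypothetical placeholders.
    """
    if candidate < min_auc:
        return False, "overall AUC below specification floor"
    if baseline - candidate > max_auc_drop:
        return False, "AUC regression exceeds allowed drift"
    for name, auc in subgroups.items():
        if candidate - auc > max_subgroup_gap:
            return False, f"subgroup '{name}' AUC gap exceeds limit"
    return True, "within predetermined specifications"

ok, reason = update_passes_gate(
    baseline=0.86, candidate=0.85,
    subgroups={"age>=65": 0.83, "Black": 0.81, "Medicaid": 0.82})
print(ok, reason)
```

A failure of any check would, under the PCCP logic, route the update out of the pre-authorized pathway and into full FDA review, which is exactly the boundary item (d) of the exercise asks you to define.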

Exercise 11: The Predicate Problem The 510(k) pathway requires demonstrating substantial equivalence to a predicate device. Research two clinical AI tools that received 510(k) authorization and identify the predicates they cited. Analyze: Was the predicate substantially equivalent to the cleared device? Does substantial equivalence to the predicate provide meaningful assurance of safety and effectiveness for the AI product? What would a more appropriate evidentiary standard look like for these products?

Exercise 12: Comparative Regulatory Analysis Research how clinical AI tools are regulated in the European Union under the Medical Device Regulation (MDR) and the AI Act. Compare the EU framework to the U.S. FDA SaMD approach on: (a) risk classification; (b) evidence requirements for market authorization; (c) post-market surveillance requirements; (d) transparency and explainability requirements. Which framework provides more robust governance for clinical AI? What elements of each framework should inform U.S. regulatory development?


Algorithmic Bias and Health Equity

Exercise 13: The Optum Proxy Analysis Obermeyer's 2019 Science paper found that using healthcare costs as a proxy for health need introduced racial bias into Optum's care management algorithm. Research and analyze three other potential proxy variables that might be used in clinical AI systems: (a) socioeconomic status proxies used to predict care compliance; (b) geographic proxies (zip code) used to identify social determinant risk; (c) insurance type as a proxy for care access barriers. For each, analyze what the proxy is intended to measure, how it might diverge from what it measures, and whether and how it might introduce discriminatory effects.
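The mechanism Obermeyer documented can be reproduced in a toy simulation: give two groups an identical distribution of true health need, let one group generate systematically lower costs for the same need (for example, because of access barriers), and then rank patients by cost. All numbers below are made up for illustration.

```python
import random

random.seed(0)

# Two groups with identical distributions of true health need (0-100),
# but group B incurs only 70% of the cost for the same need, e.g.
# because of barriers to accessing care. Figures are illustrative.
patients = []
for group, cost_factor in (("A", 1.0), ("B", 0.7)):
    for _ in range(1000):
        need = random.uniform(0, 100)
        cost = need * cost_factor * random.uniform(0.8, 1.2)
        patients.append({"group": group, "need": need, "cost": cost})

# "Algorithm": enrol the top 20% of patients ranked by cost (the proxy).
cutoff = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
enrolled = [p for p in patients if p["cost"] >= cutoff]

share_b = sum(p["group"] == "B" for p in enrolled) / len(enrolled)
print(f"Group B share of enrolment: {share_b:.0%}")  # well below the 50% parity share
```

Despite identical need, the proxy systematically under-selects group B. The same template can be adapted to probe the three proxies in the exercise: ask what the variable is supposed to measure, then model one plausible way it diverges.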

Exercise 14: eGFR Race Correction Case Analysis Research the history of race correction in the eGFR formula: when the correction was introduced, what evidence was cited for it, how it was used in clinical practice, and how it was eventually challenged and removed. Write a case analysis addressing: (a) What methodological errors led to the inclusion of race as a variable? (b) What harm did the correction cause and for how long? (c) What stakeholders could have raised concerns earlier and did not? (d) What does this case teach about the use of race as a biological variable in medical algorithms?
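The effect of the race coefficient is easiest to see by computing the 2009 CKD-EPI equation (which multiplied estimated GFR by 1.159 for Black patients) next to the race-free 2021 refit. The coefficients below are transcribed from the published equations; verify them against the source papers, and treat this strictly as an illustration, not a clinical calculator.

```python
def egfr_ckdepi_2009(scr, age, female, black):
    """CKD-EPI 2009 creatinine equation, including the race coefficient
    that was later removed. scr in mg/dL. Illustration only."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141 * min(scr / kappa, 1) ** alpha
                * max(scr / kappa, 1) ** -1.209
                * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def egfr_ckdepi_2021(scr, age, female):
    """CKD-EPI 2021 race-free refit. Illustration only."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142 * min(scr / kappa, 1) ** alpha
                * max(scr / kappa, 1) ** -1.200
                * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr

# Same labs, same patient: the 2009 equation reported a ~16% higher eGFR
# for a Black patient, which could move a patient above thresholds for
# specialist referral or transplant listing.
scr, age = 1.4, 55
print(round(egfr_ckdepi_2009(scr, age, female=False, black=True), 1))
print(round(egfr_ckdepi_2009(scr, age, female=False, black=False), 1))
print(round(egfr_ckdepi_2021(scr, age, female=False), 1))
```

Running the comparison across a range of creatinine values is a useful warm-up for item (b) of the exercise: it makes concrete how long a single multiplicative coefficient can shift clinical decisions.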

Exercise 15: Equity Impact Assessment Tool Design a health equity impact assessment tool for clinical AI procurement. Your tool should guide health system leaders through an assessment of: demographic composition of the training dataset relative to the health system's patient population; demographic subgroup performance data; potential proxy variable biases; likely differential effects on access to care for different patient groups; and planned monitoring to detect equity issues post-deployment. Pilot your tool on a publicly available description of a commercial clinical AI product.

Exercise 16: Pulse Oximeter Policy Response The pulse oximeter's inaccuracy across skin tones was documented in published research during the COVID-19 pandemic. Research what regulatory actions the FDA has taken or proposed in response. Evaluate the adequacy of the FDA's response: Was it timely? Did it address all relevant clinical contexts? What additional actions should have been taken? What clinical guidance should hospitals have issued to clinicians during the period between documented inaccuracy and regulatory response?


Informed Consent and Transparency

Exercise 17: AI Consent Form Drafting Draft an informed consent disclosure for patients receiving care at a hospital that uses AI tools including: a clinical deterioration prediction model, an AI radiology reading assistant, and an AI-assisted triage tool in the emergency department. Your disclosure should: be understandable to a patient with average health literacy; accurately describe the role of AI without overstating accuracy; enable the patient to make meaningful decisions about their care; and meet the ethical requirements of informed consent. Compare your draft against existing hospital consent forms.

Exercise 18: The Patient's Right to Explanation A patient is told that, based on an AI risk score, they will be discharged earlier than they expected. The patient asks why the AI assessed their risk as low. The clinical team cannot explain the AI's reasoning because it is a black-box model. Analyze: (a) Does the patient have a legal right to an explanation? (b) Does the patient have an ethical right to an explanation? (c) What response should the clinical team give? (d) What governance changes would be necessary to enable meaningful explanation of AI clinical decisions to patients?


Mental Health AI

Exercise 19: Digital Mental Health App Audit Select a widely used AI mental health application (Woebot, Wysa, or another app you can access). Read its privacy policy and terms of service. Analyze: (a) What data does it collect? (b) With whom does it share data? (c) What happens to data if a user discloses suicidal ideation? (d) What FDA regulatory status does it claim? (e) What clinical evidence is cited for its effectiveness? Write a consumer-facing report assessing the app's transparency and trustworthiness.

Exercise 20: Crisis Intervention Protocol Design Design a crisis intervention protocol for a consumer mental health AI application. Your protocol should address: (a) How will the system detect crisis disclosures (suicidal ideation, self-harm, acute distress)? (b) What response will the system provide? (c) When and how will it connect users with human crisis counselors? (d) What documentation will be maintained? (e) What are the liability implications if the crisis response fails? Reference relevant clinical guidelines (e.g., Zero Suicide framework) in your design.


End-of-Life Care

Exercise 21: Prognostic AI Ethics Framework Design an ethics framework for the use of AI mortality prediction tools in clinical care. Your framework should address: (a) Who should have access to a patient's AI mortality prediction score? (b) How should scores be communicated to patients and families? (c) What role should an AI prediction play in decisions about care intensity and code status? (d) What should happen when a patient's clinical presentation conflicts with the AI prediction? (e) What audit processes should exist for AI-influenced end-of-life decisions?


Governance Integration

Exercise 22: Clinical AI Governance Framework You are the CMO of a 500-bed community hospital that is implementing a comprehensive clinical AI governance framework for the first time. Your hospital uses Epic (including the Deterioration Index) and has recently agreed to pilot a commercial AI radiology tool. Draft a governance framework that addresses: (a) organizational roles and accountability for clinical AI; (b) procurement standards and vendor evaluation criteria; (c) pre-deployment validation requirements; (d) clinician training and disclosure requirements; (e) patient communication policies; (f) ongoing monitoring procedures; (g) incident response procedures for suspected AI errors; and (h) annual review processes.

Exercise 23: Hospital Board Presentation Using the material from this chapter, prepare a ten-minute presentation (slide deck outline with speaker notes) for a hospital board of directors on the governance risks of clinical AI. Your presentation should: provide a concise overview of the risk landscape using real examples; assess your hypothetical hospital's current governance maturity; propose three to five specific governance investments the board should authorize; and address how governance investments affect both patient safety and institutional risk management.

Exercise 24: Vendor Contract Negotiation Your health system is negotiating a contract with a vendor providing a commercial AI deterioration prediction tool (similar to the Epic Deterioration Index). Identify the ten most important contractual provisions to negotiate, covering the following issues: model transparency, performance disclosure, demographic performance data, model update notification, clinical validation obligations, data processing and privacy, incident reporting, termination rights, and liability. For each provision, describe what you want and why the vendor might resist it.

Exercise 25: Regulatory Horizon Planning The FDA's regulatory framework for clinical AI is actively evolving. Based on the proposed rules and guidelines described in this chapter, and any additional regulatory developments you can research, identify three regulatory changes that are likely to take effect within the next three years that will affect how your organization procures and governs clinical AI tools. For each change, describe: what the regulatory requirement is, what it will require your organization to do differently, what resources compliance will require, and what benefits it will provide for patient safety and equity.