Chapter 15: Key Takeaways

Communicating AI Decisions to Stakeholders


Core Concepts

  1. The communication gap is structural, not incidental. AI systems produce outputs in the language of statistics — scores, classifications, probabilities — that carry no inherent communicative value to the humans they affect. Bridging this gap requires deliberate organizational design, not just translation.

  2. Three types of stakeholders, three types of needs. Professional intermediaries (doctors, loan officers) need technical context to exercise genuine judgment. Affected individuals (patients, applicants) need plain-language explanation, reasons, and recourse. Communities need public accountability and participatory governance.

  3. The four requirements of meaningful communication are clarity (adapted to the recipient's background), accuracy (faithful representation of the system and its limitations), actionability (information the person can do something with), and dignity (treating the recipient as a subject of concern, not a data point).

  4. Legal compliance is not meaningful communication. ECOA adverse action notices, GDPR Article 22 disclosures, and FDA labeling requirements define minimum floors, not ethical standards. Organizations that treat regulatory compliance as a ceiling will systematically fail the people their AI affects.

  5. Automation bias and automation complacency are predictable hazards. When professionals over-trust AI recommendations (automation bias) or allow their independent skills to atrophy (automation complacency), the quality of human oversight degrades. Communication and training must actively counteract these dynamics.

  6. Meaningful human review is not nominal human review. A human reviewer who cannot identify or correct algorithmic errors — because they lack information, authority, time, or training — does not constitute the genuine human oversight that ethical AI deployment requires.

  7. Counterfactual explanations are particularly valuable for individuals. "What would have changed the outcome?" gives recipients actionable, specific information they can use to understand, challenge, or respond to a decision. For lay audiences, this format is superior to feature-importance descriptions.

  8. Uncertainty must be communicated, not hidden. Vendors and deployers have incentives to overstate AI capability. Honest communication requires disclosing confidence levels, error rates, and system limitations — in terms non-technical recipients can interpret.

  9. Community-level communication requires participatory governance. Public-facing AI deployments require more than transparency-as-disclosure; they require mechanisms for community input into whether and how AI is deployed.

  10. Communication design must happen before deployment, not after. Retrofitting explanation capacity onto a deployed AI system is technically difficult and often inadequate. Building communicability into model design from the start produces better results.


Regulatory Frameworks at a Glance

  • ECOA / Regulation B (US). Key requirement: adverse action notices with principal reasons. Limitation: inadequate for ML models with complex feature interactions.
  • GDPR Article 22 (EU). Key requirement: meaningful information about the logic involved; right to human review. Limitation: scope limited to "solely automated" decisions, and "meaningful information" is undefined.
  • EU AI Act (EU). Key requirement: transparency obligations for high-risk AI, including disclosure of AI involvement. Limitation: implementation is ongoing, and the detail of requirements is still developing.
  • FDA SaMD guidance (US). Key requirement: labeling with performance metrics and limitations. Limitation: patient-level communication is often not required.
  • NYC Local Law 144 (New York City). Key requirement: bias audits and disclosure for automated employment decision tools. Limitation: enforcement is limited, and audit quality is variable.

Organizational Checklist

Organizations deploying AI in consequential decisions should be able to answer "yes" to the following:

  • [ ] We have identified all groups affected by this AI system and mapped their communication needs.
  • [ ] Our communications have been tested with representative samples of recipients, including non-technical users.
  • [ ] Professionals using this AI tool receive training on its limitations, error rates, and how to maintain independent judgment.
  • [ ] Affected individuals receive plain-language explanations that include specific reasons, actionable information, and recourse options.
  • [ ] Our appeal process is genuine: human reviewers have information, authority, and time to make independent decisions.
  • [ ] Our communications disclose AI involvement, not just the decision outcome.
  • [ ] We document AI communications for accountability and audit purposes.
  • [ ] We have feedback loops that use challenge and appeal outcomes to improve both communication and model performance.
  • [ ] We communicate uncertainty and limitations honestly, not just as disclaimers in fine print.

Glossary of Key Terms

Automation bias — The tendency to over-weight automated recommendations, particularly under time pressure or cognitive load, leading to insufficient independent professional judgment.

Automation complacency — The gradual degradation of professional skill and vigilance that results from consistent reliance on automated systems for tasks within the professional's domain.

Counterfactual explanation — An explanation of an AI decision that specifies what would have been different about the situation to change the outcome. Example: "If your debt-to-income ratio had been 5 points lower, the decision would have been different."
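
The counterfactual format described above can be sketched in a few lines of code. The model, feature names, weights, threshold, and search step below are all hypothetical, chosen only to illustrate how a single-feature counterfactual message might be generated for a simple linear scorer; real systems would need to handle feature plausibility and interactions.

```python
# Illustrative linear credit-scoring model (all values hypothetical).
FEATURES = {"debt_to_income": 38.0, "credit_age_years": 4.0}
WEIGHTS = {"debt_to_income": -1.0, "credit_age_years": 2.0}
THRESHOLD = -28.0  # scores above this threshold are approved

def score(features):
    """Weighted sum of features under the toy linear model."""
    return sum(WEIGHTS[f] * v for f, v in features.items())

def counterfactual(features, feature, step=0.5, max_steps=100):
    """Find the smallest change to one feature that flips a denial."""
    if score(features) > THRESHOLD:
        return None  # already approved; no counterfactual needed
    # Move the feature in the direction that raises the score.
    direction = 1 if WEIGHTS[feature] > 0 else -1
    changed = dict(features)
    for _ in range(max_steps):
        changed[feature] += direction * step
        if score(changed) > THRESHOLD:
            delta = changed[feature] - features[feature]
            return (f"If your {feature.replace('_', ' ')} had been "
                    f"{abs(delta):g} points "
                    f"{'higher' if delta > 0 else 'lower'}, "
                    f"the decision would have been different.")
    return None  # no flip found within the search budget

print(counterfactual(FEATURES, "debt_to_income"))
# → If your debt to income had been 2.5 points lower,
#   the decision would have been different.
```

The message is phrased in terms of the recipient's own situation, which is what makes the counterfactual format actionable in a way that a list of feature weights is not.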

Meaningful human review — Human oversight of an AI decision that is genuine: the human reviewer has sufficient information, expertise, time, and organizational authority to identify and correct errors, not merely to ratify the AI's recommendation.

Participatory governance — An approach to AI deployment in which affected communities participate in decisions about whether and how AI is deployed, rather than simply receiving information about systems already deployed.

Positive predictive value (PPV) — The probability that an AI system's positive prediction is correct. PPV depends strongly on the base rate of the condition in the population, and is distinct from the model's sensitivity or confidence score.
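
A short worked example shows why PPV must be communicated separately from sensitivity: the same test accuracy yields very different PPVs at different base rates. The sensitivity, specificity, and base rates below are illustrative numbers, not figures from any system discussed in the chapter.

```python
def ppv(sensitivity, specificity, base_rate):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# A model with 90% sensitivity and 90% specificity:
print(f"PPV at 50% base rate: {ppv(0.90, 0.90, 0.50):.2f}")  # → 0.90
print(f"PPV at  1% base rate: {ppv(0.90, 0.90, 0.01):.2f}")  # → 0.08
```

At a 1% base rate, roughly 92% of the model's positive predictions are false alarms even though the model is "90% accurate" on both classes, which is exactly the kind of limitation honest communication must convey.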


Key Figures and Cases Mentioned

  • Ledgerwood v. Jobe (2016) — Arkansas Medicaid case in which algorithmic care-hour determinations were held to violate due process because of inadequate notice and a meaningless appeal process.
  • Epic Systems Deterioration Index — Clinical AI tool deployed in thousands of US hospitals to predict patient deterioration; rarely disclosed to patients.
  • GDPR Article 22 — EU provision establishing the right not to be subject to solely automated decisions with significant effects; requires meaningful information about logic.
  • NYC Local Law 144 (2023) — First US employment-specific AI disclosure and bias audit requirement.
  • New York City ADS Task Force (2019) — City task force examining automated decision systems, producing the first systematic inventory of city government AI use.