Chapter 15: Quiz

Communicating AI Decisions to Stakeholders

20 multiple-choice questions. Select the best answer for each.


Question 1. Which of the following BEST describes the "communication gap" in AI decision-making?

A) The inability of AI systems to communicate with human operators
B) The mismatch between what AI systems produce (statistical outputs) and what affected humans need (clear, accurate, actionable, dignified communication)
C) The lack of sufficient computing power to explain AI decisions in real time
D) The legal gap between what GDPR requires and what US law requires


Question 2. A professional intermediary in the context of AI decision-making is BEST described as:

A) A technology vendor who sells AI systems to organizations
B) A government regulator who oversees AI deployment
C) A professional (physician, loan officer, judge) who uses AI tools in their work
D) A data scientist who builds and maintains AI models


Question 3. The four requirements of meaningful AI communication are:

A) Speed, precision, consistency, and automation
B) Clarity, accuracy, actionability, and dignity
C) Compliance, disclosure, auditability, and transparency
D) Personalization, relevance, timeliness, and accessibility


Question 4. ECOA adverse action notices are BEST described as:

A) A fully adequate mechanism for explaining ML-based credit decisions
B) A regulatory requirement that works well for rule-based credit decisions but is strained by modern machine learning models
C) A standard that has been modernized specifically for AI-based credit decisions
D) A European Union requirement that has been adopted by the United States


Question 5. Under GDPR Article 22, individuals have the right not to be subject to solely automated decisions with significant effects. The term "solely automated" is significant because:

A) It means the right applies to all AI-assisted decisions
B) It creates a loophole through which any nominal human involvement, even rubber-stamp review, can exclude decisions from Article 22's scope
C) It only applies to decisions made by government agencies
D) It requires all automated systems to be replaced by human decision-makers


Question 6. Automation bias in AI-assisted professional decision-making refers to:

A) The tendency of AI systems to produce biased outputs due to biased training data
B) The tendency of professionals to over-weight AI recommendations, reducing independent judgment
C) The tendency of AI systems to favor automation over human review
D) Legal bias introduced into regulations governing automated systems


Question 7. What distinguishes automation complacency from automation bias?

A) Automation complacency involves outright rejection of AI recommendations; automation bias involves over-acceptance
B) Automation bias is acute (immediate), while complacency is chronic (gradual skill atrophy from sustained reliance on AI)
C) They are the same phenomenon described by different researchers
D) Automation complacency applies to consumer contexts; automation bias applies to professional contexts


Question 8. A counterfactual explanation of an AI decision BEST describes:

A) The historical data the model was trained on
B) A comparison between this AI system and alternative AI systems
C) What would have needed to be different about the situation to change the outcome
D) An explanation provided by a human reviewer after the fact


Question 9. For AI communications to respect the "dignity" of affected individuals, organizations should:

A) Use technical language to demonstrate the sophistication of the system
B) Treat recipients as subjects of concern whose situations matter, not merely as data points that have been processed
C) Minimize communication to reduce the risk of legal exposure
D) Restrict communication to regulatory minimums to avoid over-promising


Question 10. The literacy challenge in individual AI communication refers to:

A) The difficulty of teaching AI systems to generate plain-language explanations
B) The variable background knowledge and AI familiarity across the populations receiving AI communications, requiring communication adapted to the least prepared recipients
C) The requirement that all AI communications be written at a sixth-grade reading level
D) Legal requirements for communications to be available in multiple languages


Question 11. Which principle BEST reflects appropriate community communication about predictive policing?

A) Operational security justifies withholding information about algorithmic policing from residents
B) Communities should be informed about and able to participate in decisions about AI systems deployed in their neighborhoods
C) Police departments may communicate about predictive policing only to elected officials, not to the public
D) Community transparency is a best practice but not an ethical requirement


Question 12. Which statement about communicating AI uncertainty is MOST accurate?

A) Confidence scores should not be communicated to non-technical recipients because they will misinterpret them
B) Disclaimers at the bottom of AI outputs are sufficient to communicate system limitations
C) Uncertainty must be integrated into the primary communication, not appended as a disclaimer, and should be explained using absolute frequencies and consequence-based framing
D) AI confidence scores are self-explanatory and require no additional interpretation


Question 13. In the Arkansas Medicaid case (Ledgerwood v. Jobe), the federal court found that the state's communications violated due process primarily because:

A) The algorithm was racially biased
B) The vendor charged too much for the algorithm
C) Recipients received letters that did not provide sufficient information to understand why their benefits were cut or to prepare a meaningful appeal
D) The algorithm was not approved by the FDA


Question 14. A "meaningful human review" of an AI-influenced decision requires:

A) Simply having a human sign off on the decision before it is communicated
B) A human reviewer with sufficient information, expertise, time, and organizational authority to identify and correct errors in the AI's recommendation
C) Re-running the same AI model with updated inputs
D) Any review that takes place before the decision becomes final


Question 15. The FTC's position on disclosure of AI involvement in consumer interactions holds that:

A) AI disclosure is only required for healthcare and financial services
B) Failure to disclose material facts about AI involvement can constitute a deceptive practice under Section 5 of the FTC Act
C) Companies are not required to disclose AI involvement unless the AI is making fully autonomous decisions
D) AI disclosure requirements fall exclusively under the purview of the FCC, not the FTC


Question 16. Why should AI communication design occur BEFORE model deployment rather than after?

A) Post-deployment explanation is technically impossible
B) Regulatory requirements mandate pre-deployment communication planning
C) Systems built with interpretability as a design goal produce better explanations than systems for which explanation is retrofitted through post-hoc methods
D) It is required by ISO standards for AI systems


Question 17. NYC Local Law 144 (2023) requires employers using automated employment decision tools to:

A) Obtain explicit consent from job applicants before using AI screening
B) Conduct annual bias audits and publish the results, and notify candidates when an AEDT was used
C) Replace AI screening with human resume review for all managerial positions
D) File a report with the NYC Commission on Human Rights before deploying any AI hiring tool


Question 18. The participatory governance model for community AI communication differs from transparency-as-disclosure primarily in that:

A) Transparency-as-disclosure provides more detailed technical information
B) Participatory governance involves community members in decisions about AI deployment before systems go live, rather than merely informing them afterward
C) Participatory governance is more cost-effective than transparency requirements
D) Transparency-as-disclosure is required by law; participatory governance is voluntary


Question 19. Which of the following is MOST characteristic of an inadequate AI appeal process?

A) The appeal process has a 30-day resolution timeline
B) Appeals are reviewed by the same department that made the original decision, using the same algorithmic tool, with the outcome communicated via a form letter citing the same reasons as the original decision
C) The appeal process requires the applicant to submit documentation in writing
D) The appeal process is administered by an independent office


Question 20. The statement "AI communication is an ethical obligation, not just a regulatory requirement" is BEST supported by which of the following arguments?

A) Regulatory requirements are always sufficient to protect the public; any additional communication is therefore a business decision, not an ethical one
B) Without adequate communication, people cannot exercise autonomy over decisions that affect them, cannot seek appropriate recourse for errors, and are treated as data points rather than as persons with interests that matter, regardless of what the law requires
C) The costs of communication exceed the benefits in most deployment contexts
D) Communication obligations should be determined exclusively by market competition, not by ethics or regulation


Answer Key

  1. B
  2. C
  3. B
  4. B
  5. B
  6. B
  7. B
  8. C
  9. B
  10. B
  11. B
  12. C
  13. C
  14. B
  15. B
  16. C
  17. B
  18. B
  19. B
  20. B