Chapter 32 Further Reading: When NOT to Use AI (and Why That Matters)


Safety-Critical AI Failure Cases

1. Sallam, M. (2023). ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare, 11(6), 887. A comprehensive systematic review of AI in healthcare contexts, including documented accuracy failures in clinical information. Essential reading for understanding why safety-critical boundaries are not theoretical concerns. Open access via MDPI.

2. Moy, A. J., et al. (2023). Comparison of Physician and AI Response Quality on Doctor-Patient Q&A for Ambulatory Oncology. JAMA Oncology, 9(8), 1076-1081. A comparative study of physician versus AI responses to oncology questions. Although the AI responses were rated comparable in some respects, the study highlights the specific contexts where AI responses fell short and where professional medical judgment remained necessary.

3. National Comprehensive Cancer Network — AI Policy Statement (nccn.org). The NCCN's policy guidance on AI tools in cancer care offers a framework for how professional organizations are drawing the line between appropriate and inappropriate AI use in clinical contexts. Representative of a broader trend: professional bodies developing AI governance for safety-critical domains.


Authenticity and Relationship in Communication

4. Origgi, G. (2017). Reputation: What It Is and Why It Matters. Princeton University Press. A philosophical and empirical treatment of reputation as a social phenomenon. Relevant to the relationship-critical communication category: why authentic communication builds trust and why perceived inauthentic communication damages it. The theoretical framework clarifies what is at stake in using AI for communications that require genuine personal engagement.

5. Turkle, S. (2015). Reclaiming Conversation: The Power of Talk in a Digital Age. Penguin Press. Turkle's examination of digital communication and its effects on human connection bears directly on the authentic-communication concerns in Chapter 32. While not AI-specific, her findings about what is lost in computer-mediated versus face-to-face communication apply equally to AI-mediated versus personally written communication in high-relationship contexts.

6. Baym, N. K. (2015). Personal Connections in the Digital Age (2nd ed.). Polity Press. Examines how authentic personal connection is maintained (and compromised) in digital communication. The framework for understanding authenticity in digital contexts is directly relevant to the question of when AI mediation changes the meaning of communication.


Skill Maintenance and Cognitive Offloading

7. Barr, N., Pennycook, G., Stolz, J. A., & Fugelsang, J. A. (2015). The Brain in Your Pocket: Evidence That Smartphones Are Used to Supplant Thinking. Computers in Human Behavior, 48, 473-480. A study finding that less analytic, more intuitive thinkers were more likely to use their smartphones to look up answers rather than work through problems mentally. Relevant to the skill-atrophy argument: cognitive tools can replace rather than supplement mental effort, with consequences for capability maintenance.

8. Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776-778. The landmark "Google effects" study showing that people remember where to find information rather than the information itself when they know a digital source is available. Important for the theoretical framework of cognitive offloading and its effects on retained knowledge.

9. Deci, E. L., & Ryan, R. M. (1985). Intrinsic Motivation and Self-Determination in Human Behavior. Springer. The foundational work on intrinsic motivation and the conditions that support autonomous skill development. Relevant to the learning context discussion: why struggle and challenge, while uncomfortable, are motivationally and developmentally important — and why removing the struggle through AI assistance can undermine development.

10. Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The Role of Deliberate Practice in the Acquisition of Expert Performance. Psychological Review, 100(3), 363-406. The foundational deliberate practice paper. Establishes that expert performance requires specific types of practice — not just experience, but intentional, effortful practice at the edge of current capability. Directly relevant to the AI-free practice zone argument: AI assistance that removes the effortful challenge may remove the conditions under which deliberate practice develops skill.


Professional Ethics and Boundary-Setting

11. Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press. Available free at fairmlbook.org. While primarily focused on fairness, the book's discussion of where AI decision-making is and is not appropriate establishes a framework for professional boundary-setting that applies to Chapter 32's categories.

12. Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press. A virtue ethics approach to technology, arguing that how we use tools shapes character and habit. Directly relevant to the Chapter 32 argument: AI use patterns form habits and dispositions, and some patterns (delegation of authenticity, avoidance of challenge, outsourcing of care) cultivate virtues we do not want to cultivate.


Confidentiality and Data Governance

13. HHS Office for Civil Rights — HIPAA for Professionals (hhs.gov/hipaa/for-professionals). The official resource for HIPAA compliance requirements. Essential reading for any professional working in healthcare or with health data. The section on business associate agreements bears directly on AI tool selection for protected health information. Free to access.

14. American Bar Association — Ethics and Professional Responsibility Resources (americanbar.org/groups/professional_responsibility). The ABA's resources on lawyer professional responsibility include guidance on attorney-client privilege, confidentiality obligations, and the use of technology in legal practice. Essential reading for legal professionals, and relevant to any professional handling attorney-client privileged material. Some resources are free; others require membership.

15. IAPP (International Association of Privacy Professionals) — AI and Privacy Resources (iapp.org/resources/article/artificial-intelligence-resources) The IAPP maintains practitioner-focused resources on AI and privacy law, including GDPR, CCPA, and sector-specific requirements. Relevant for any professional working with personal data who needs to understand the confidentiality implications of AI tool selection. Free resources available; membership provides additional access.