In This Chapter
- Introduction: The Test of Sophistication
- Section 1: Safety-Critical Contexts
- Section 2: Relationship-Critical Communication
- Section 3: Learning Contexts
- Section 4: Confidentiality-Constrained Contexts
- Section 5: High-Novelty and High-Context Tasks
- Section 6: Reputation-Critical First Impressions
- Section 7: The "Just Because You Can" Problem
- Section 8: Building Your Personal AI No-Fly List
- Section 9: The Nuanced Middle Ground
- Conclusion: The Sophisticated User's Judgment
- Section 10: The Decision Framework in Practice
- Section 11: Common Objections and Honest Responses
- Section 12: Revisiting the No-Fly List
Chapter 32: When NOT to Use AI (and Why That Matters)
Introduction: The Test of Sophistication
The most popular framing of AI literacy focuses on capability: what AI can do, how to prompt it well, where it adds value. This framing is incomplete.
The complete framing includes a second, harder skill: knowing when not to use AI at all.
This is not a framing for AI skeptics. It is the framing of sophisticated practitioners. Every experienced AI user — people who use these tools extensively and to genuine advantage — has developed a personal map of the contexts where AI does not belong. The more experience they have, the clearer and more specific that map becomes.
The reason this matters is not just philosophical. Inappropriate AI use creates real professional risks: errors in safety-critical contexts, damage to genuine human relationships, skill atrophy that leaves you less capable over time, legal and confidentiality exposure, and reputational damage when AI use becomes visible in contexts where it shouldn't appear.
This chapter builds your map. It covers six major categories where AI use is inappropriate, counterproductive, or harmful. For each category, it explains the reasoning so you can apply the principle to cases the chapter doesn't cover explicitly. And it gives you a framework for building your own personal AI no-fly list — the specific, concrete list of tasks and contexts in your professional and personal life where AI doesn't belong.
Section 1: Safety-Critical Contexts
The Non-Negotiable Boundary
Some decisions have consequences that cannot be reversed, corrected after the fact, or adequately recovered from. In these contexts, the probability of AI error — however low — multiplied by the severity of the consequence — however catastrophic — produces an unacceptable expected harm.
This is the first and most absolute category of AI no-fly zones.
Medical diagnosis and clinical decision-making. AI models contain extensive medical knowledge and can produce impressively plausible clinical assessments. They should not be used as the primary or sole basis for diagnostic or treatment decisions. Medical diagnosis requires clinical examination, patient history, professional judgment developed through years of training and supervised practice, and accountability structures (licensing, malpractice, peer review) that AI does not have. A wrong diagnosis or treatment recommendation can kill or permanently harm a patient. The fluency of AI medical language makes this category especially dangerous — plausible-sounding medical advice is more dangerous than no advice at all.
AI tools may appropriately support licensed clinicians as research aids, documentation assistants, or information lookups — but always with professional judgment as the decision layer and always with verification against clinical standards.
Legal advice for high-stakes decisions. AI can explain legal concepts, describe how laws work in general, and help you understand what questions to ask a lawyer. It cannot replace a licensed attorney for decisions with significant legal consequences — because law is jurisdiction-specific, fact-specific, and context-specific in ways that AI cannot reliably handle, and because the attorney-client relationship creates professional accountability structures that AI cannot replicate.
The documented pattern of AI hallucinating legal citations (Chapter 29) amplifies this concern. In high-stakes legal contexts, wrong information is not just unhelpful — it can result in missed deadlines, waived rights, or decisions made on a false understanding of what the law requires.
Financial decisions with significant consequences. AI tools can help you understand financial concepts, model scenarios, and think through options. They should not be the sole basis for significant investment decisions, tax strategies, or financial planning for retirement and major life events. The combination of potential hallucination in technical regulatory details, the model's inability to know your complete financial picture, and the irreversibility of major financial decisions makes AI a support tool, not a decision-maker in this domain.
Engineering safety and physical infrastructure. Code that will run in safety-critical systems, structural calculations, electrical designs, and anything else where failure modes include injury or death require verification processes that go well beyond AI output review. Professional engineering licensure, peer review, and testing regimes exist precisely because human expertise and accountability are required for these systems.
Crisis response. When someone is in crisis — experiencing a mental health emergency, facing immediate danger, navigating an acute crisis situation — the response requires human presence, professional training, and genuine relationship. AI tools are not substitutes for emergency services, crisis counselors, or the people in someone's life who can provide real support.
⚠️ Common Pitfall: The danger in safety-critical domains is not that AI tools are obviously inadequate — it is that they can seem adequate. Fluent, confident, technically plausible output in medical, legal, or engineering contexts is exactly what hallucination looks like. Whether you need a professional is not determined by whether you have access to an AI that sounds like one.
Section 2: Relationship-Critical Communication
When Authenticity Is the Point
The second category is not about safety in the sense of avoiding physical harm. It is about authenticity in the sense that some human communications carry meaning precisely because they come from a particular human — and that meaning cannot be outsourced.
Sincere condolences and expressions of grief. When someone experiences significant loss, the communication they receive from people in their life has meaning proportionate to its authenticity. A message of condolence carries weight because it reflects the writer's genuine response to someone else's grief — the care they took with the words, the personal reference they included, the emotional labor of writing something true.
An AI-drafted condolence message may be perfectly composed. It may be grammatically better than what you would write. It will not carry the weight of something that cost you something to write. And if the recipient discovers or suspects the message was AI-generated, the harm is significant — the relationship is damaged at exactly the moment when you needed to show up for it.
Sincere apologies. A genuine apology requires the speaker to own what happened, understand its impact, and communicate that understanding in their own words. AI can help you think through what needs to be said, understand the other person's perspective, or organize your thoughts. The apology itself should be yours. An AI-drafted apology that you didn't write is not an apology — it is a simulation of one, and it denies the other person the acknowledgment they need from you specifically.
Deep personal conversations. Conversations with close relationships — about significant life events, difficult decisions, vulnerabilities, deeply personal matters — require presence. Using AI to generate your side of a conversation with someone close to you, or to script responses in real-time, is a form of absence that corrodes the relationship.
Authenticity-required situations. There are contexts where the recipient's need is specifically to hear from you — your judgment, your voice, your character. Job interviews (the part where they're trying to understand who you are), personal statements for educational applications, and similar contexts where the point is that a real person is speaking about themselves. AI-generated content in these contexts is a form of deception about who you are.
💡 Intuition Check: The test for relationship-critical communication is not "could AI do this?" (it can) but "would this communication be diminished if the other person knew AI wrote it?" If the answer is yes, AI should not write it.
Section 3: Learning Contexts
When the Struggle Is the Learning
The third category addresses a subtler harm: using AI in ways that prevent you from learning what you need to learn.
Formative assessments and skill-building work. When the purpose of a task is to develop your capability — not to produce the output itself — using AI to produce the output defeats the purpose. A student asked to write an essay to develop writing skills, a programming student asked to implement an algorithm to develop coding skills, a trainee asked to draft a report to develop professional writing — in all these contexts, the task is the vehicle for learning, not the deliverable.
AI assistance with this category of work is not time-saving — it is skill theft. The student who used AI to write their essay did not save time; they spent it producing output that does not serve their development, time that could have gone into learning.
Process-evaluated work. Some evaluations care about the process of thinking as much as the conclusion. A scientific lab report that is supposed to show experimental reasoning, a case study analysis that is supposed to demonstrate applied judgment, a research proposal that is supposed to show scholarly thinking — these require the practitioner's genuine intellectual engagement. Offloading the intellectual work to AI produces an output that misrepresents the practitioner's actual thinking.
When the skill gap is something you need to close. There is a difference between using AI to handle tasks where you have sufficient capability and using AI to handle tasks where you lack capability you need. If you are a junior professional and your role requires you to write persuasive business cases, using AI to write all your business cases prevents you from developing a skill that your career requires. The short-term output gain is a long-term capability loss.
Where the struggle is the value. Some of the most important thinking happens in the process of working through a difficult problem: the insights that emerge from wrestling with complexity, the judgment that develops from making hard calls without a net, the confidence that comes from finding the answer yourself. These are not substitutable. AI can help you think, but it cannot think for you in a way that develops your thinking.
✅ Best Practice: Identify the skills you are actively working to develop and create explicit "AI-free zones" for the tasks that build those skills. Use AI extensively for tasks where you have the capability; protect the tasks that are building capability you need.
Section 4: Confidentiality-Constrained Contexts
The Data You Cannot Share
The fourth category is legal and ethical: some information cannot go into AI tools because of confidentiality obligations that precede and supersede AI use.
Attorney-client privileged information. Communications protected by attorney-client privilege are confidential by legal design. Inputting such communications into a consumer AI tool — where the data may be logged, retained, and used for training — risks waiving the privilege and breaching professional obligations. Legal privilege is not a fuzzy concept. It is a specific legal protection with specific requirements, and those requirements do not include sending the privileged information to a commercial AI vendor's servers.
HIPAA-protected health information. Healthcare providers and entities handling protected health information (PHI) are subject to strict confidentiality requirements under HIPAA. PHI cannot go into consumer AI tools. It requires HIPAA-compliant tools with appropriate Business Associate Agreements (BAAs). The obligation here is not ambiguous — it is a specific legal requirement with enforcement consequences.
Trade secrets and NDA-covered information. If you have signed a non-disclosure agreement covering certain information, and you paste that information into a consumer AI tool, you may have breached the NDA. Consumer AI tools' terms of service do not create the confidentiality obligations that NDAs require. The specifics depend on the NDA and the tool's terms, but the risk is real and non-trivial.
Certain government and regulated contexts. Government employees, defense contractors, and others working with classified or sensitive government information operate under strict rules about what systems can be used with that information. Consumer AI tools are not approved for classified or controlled unclassified information.
The "enterprise vs. consumer AI" distinction matters here. Enterprise AI deployments with appropriate data processing agreements, privacy commitments, and security certifications may be appropriate for some of these contexts — but the decision requires legal review, not assumption. Consumer AI tools (the free tier of ChatGPT, consumer Gemini, standard Claude) should be treated as public-facing for purposes of confidentiality decisions.
📋 Action Checklist: Before Inputting Information to an AI Tool
- [ ] Does this information carry a confidentiality obligation (attorney-client, HIPAA, NDA, classified)?
- [ ] Have I read the tool's terms of service regarding data retention and use?
- [ ] Is this an enterprise tool with appropriate data processing agreements, or a consumer tool?
- [ ] If consumer: would I be comfortable if this information were publicly disclosed?
The last question is a useful practical standard. Consumer AI tools should be treated with the same confidentiality caution as email sent to an unknown recipient: you should not include information in them that you would not be comfortable having become broadly visible.
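The checklist above can be sketched as a simple pre-flight function. This is a hypothetical illustration of the decision logic only; the `InputCheck` fields and the `ok_to_input` name are invented for this sketch and are not part of any real tool.

```python
from dataclasses import dataclass

@dataclass
class InputCheck:
    """Answers to the pre-input confidentiality checklist (hypothetical sketch)."""
    carries_confidentiality_obligation: bool  # attorney-client, HIPAA, NDA, classified
    read_terms_of_service: bool               # retention and training-use terms reviewed
    is_enterprise_tool: bool                  # data processing agreement in place
    comfortable_if_public: bool               # the practical standard for consumer tools

def ok_to_input(check: InputCheck) -> bool:
    """Return True only if the information may go into the tool."""
    if check.carries_confidentiality_obligation and not check.is_enterprise_tool:
        return False  # obligated data never goes into consumer tools
    if not check.read_terms_of_service:
        return False  # do not input before reading retention terms
    if not check.is_enterprise_tool and not check.comfortable_if_public:
        return False  # treat consumer tools as public-facing
    return True

# Example: NDA-covered data into a consumer tool is blocked.
print(ok_to_input(InputCheck(True, True, False, False)))  # False
```

Note that the checks are ordered so that a hard legal obligation blocks input before the softer "comfortable if public" standard is even consulted.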
Section 5: High-Novelty and High-Context Tasks
Where AI Lacks What the Task Requires
The fifth category covers tasks where AI is genuinely structurally incapable of providing what the task requires — not because the task is complex, but because it requires something AI does not have.
Tasks requiring deep situational knowledge that AI cannot have. AI knows what you've told it in the conversation and what it learned in training. It does not know the unwritten history of a specific relationship, the political dynamics of a particular organization, the specific constraints of a situation it cannot fully see. Strategic decisions about specific organizations, relationship-sensitive communications, and situational judgment calls in specific contexts require contextual knowledge that AI fundamentally lacks.
Tasks requiring lived experience. Some expertise is not transferable through text. The judgment of a crisis counselor who has sat with hundreds of people in distress. The design intuition of an architect who has built real buildings and learned from what worked and what didn't. The clinical wisdom of a doctor who has treated thousands of patients. This embodied, experiential knowledge cannot be replicated by a model that has read about these things. For tasks where this kind of knowledge matters, AI assistance is a supplement to the human, not a substitute for them.
Truly novel problems. AI excels at pattern-matched problems — situations where the solution can be found by applying known patterns, methods, or frameworks learned from training data. Genuinely novel problems — situations for which there is no good precedent in the training corpus — are not what AI does best. The model will produce a plausible-sounding answer, but the answer is pattern-matching to the nearest relevant case, not genuine reasoning from first principles.
Section 6: Reputation-Critical First Impressions
The Cover Letter Question and Beyond
The sixth category is contextual: situations where the output needs to represent you specifically, and where AI generation changes the meaning of the output in ways that matter.
Cover letters and job applications — the ongoing debate. The use of AI for cover letters is one of the most actively debated topics in professional AI ethics. The debate has no clean consensus answer, but the relevant considerations are:
Using AI to improve your writing — fixing grammar, tightening structure, strengthening language — is analogous to asking a friend to review your letter. Using AI to generate a letter that doesn't reflect your actual voice, perspective, or genuine reasons for interest in the role produces something that will sound generic (the hiring manager has read fifty AI-generated letters), may not survive a conversation (you said things in the letter that don't reflect your actual views), and misrepresents who you are. There is a gradient here, not a bright line — but the endpoint of "AI wrote this letter and I signed it" is a different thing than "I wrote this letter and AI helped me improve it."
Defining creative work. For creative professionals, work that is fundamentally defined as expression of individual perspective, style, or voice — the work that establishes your creative identity — should not be AI-generated. Using AI as a tool in a creative process is different from outsourcing the creative expression itself. The distinction is whether the final work authentically represents you or performs authenticity while concealing that it doesn't.
Must-sound-like-you writing. Some professional writing needs to sound like you because the reader's relationship is with you: your personal newsletter, your professional statement of philosophy, your thought leadership that defines your expertise. AI assistance with research, structure, and editing is different from AI generation of the content itself. The reader who follows you is following you, not a model's representation of what you might say.
Section 7: The "Just Because You Can" Problem
Capability Is Not Justification
A consistent pattern across the categories above: AI capability to perform a task is not sufficient justification for using AI to perform it. AI can draft your condolence note. AI can write your job application. AI can summarize a legally privileged document. AI can generate responses to customer complaints at scale. The technical capability is real.
The question is whether AI use in a particular context serves the actual goals of the context — not just the surface goal (produce output) but the deeper goals (develop capability, maintain genuine relationship, fulfill professional obligation, preserve the meaning of the communication).
When AI use achieves the surface goal while undermining the deeper goals, that is not productive AI use. That is AI use that creates the appearance of progress while degrading the underlying value.
Skill Atrophy: The Long-Term Cost
One of the most consequential and underappreciated costs of inappropriate AI use is skill atrophy — the gradual degradation of capabilities that are not exercised because AI has been used in their place.
Skills require practice to maintain and develop. Writing, analytical reasoning, coding, critical argumentation, professional judgment — these are not fixed capabilities that persist independently of use. They require continued exercise to remain sharp.
When AI is used consistently for tasks that would otherwise exercise a skill, that skill does not remain at its previous level — it degrades. The practitioner who uses AI to write all their analytical memos may find, when they need to write one without AI assistance, that the skill has atrophied. The developer who delegates all code implementation to AI assistance may find their implementation skills declining even as their system design skills remain strong.
📊 Research Breakdown: Research on cognitive offloading and skill maintenance provides relevant evidence. Barr et al. (2015) found that heavier reliance on smartphones for information retrieval was associated with weaker analytic thinking — consistent with use-it-or-lose-it models of skill maintenance. More directly, a 2024 study of programmers using AI code generation tools found evidence of reduced code quality in unassisted work among heavy AI users in areas where AI assistance was frequent, compared to lighter AI users. This is preliminary evidence, and the study has limitations — but the theoretical mechanism is well established: skills not practiced atrophy.
The "AI as Crutch" Failure Mode
The "AI as crutch" failure mode is when AI use, originally adopted for efficiency, becomes a dependency that masks and eventually creates a genuine capability gap.
Signs of the crutch failure mode:
- You feel unable to begin a writing task without AI generating a first draft
- You cannot solve coding problems without AI assistance that you could have solved before
- Your analytical work has become dependent on AI synthesis in ways that prevent you from synthesizing independently
- You feel uncomfortable in situations where AI is unavailable for tasks it has been handling
The crutch failure mode is not a character flaw — it is a predictable consequence of consistent outsourcing of specific cognitive tasks. The remedy is not guilt but deliberate practice: identifying the capabilities that have been delegated, and reclaiming them through AI-free practice zones.
Section 8: Building Your Personal AI No-Fly List
Why a Specific List Beats General Principles
General principles ("don't use AI for safety-critical things") are useful but insufficient. They require a moment-by-moment judgment call about whether the current situation falls under the principle — and under time pressure, that judgment often resolves toward the easier option.
A personal AI no-fly list is specific: a list of concrete task types, contexts, and situations where you have decided not to use AI, written down, referenced before you start, and updated as your practice evolves. The specificity removes the moment-by-moment decision and replaces it with a policy you've already made.
The Personas' No-Fly Lists
Alex's AI no-fly list:
- Condolences and personal expressions of grief to people I know
- Content that is explicitly meant to represent my voice and perspective to my audience (certain newsletter sections, my professional positioning statements)
- Fundamental analysis and interpretation of data my clients paid me to analyze — AI can help with the mechanics, but the interpretation and strategic conclusion is mine
- Any client-facing communication that requires genuine relationship accountability
- Statistical research synthesis where I haven't been able to trace back to primary sources — if it's going public, it's verified or it's out
Raj's AI no-fly list:
- Code reviews for code that will run in production safety paths (authentication, access control, data integrity) — I check these myself or with a senior peer, not AI
- Architecture design decisions for systems where I need to be able to defend the design to the team — AI can help me think through options, but the decision is mine and needs to reflect my understanding
- Learning exercises I've set up specifically to develop skills I need — AI-free zones for skill development
- Anything involving client infrastructure credentials or proprietary system details
- Performance evaluation language for direct reports — this is relationship work that requires my genuine observation, not AI synthesis
Elena's AI no-fly list:
- Client data in any form into consumer AI tools
- Legal and regulatory claims in client deliverables — these go to primary sources or to counsel
- The analytical conclusions in my deliverables — AI helps with the research and the structure, but the conclusion is my professional judgment
- Communications where the relationship itself is on the line — difficult client conversations, significant apologies, letters of reference for people I've mentored
- Any content that will be attributed to me where I cannot defend every specific factual claim
Building Your List
The exercise at the end of this chapter will walk you through building your own no-fly list. The relevant questions to work through for each domain of your professional and personal life:
- Are there safety consequences if this task goes wrong?
- Does this task's value depend on it coming authentically from me?
- Is this task something I need to practice to maintain or develop?
- Does this task involve information I'm obligated to keep confidential?
- Does this task require contextual knowledge, lived experience, or situational judgment that AI genuinely lacks?
- Does the person receiving this output need it to come from me specifically?
A "yes" to any of these questions is a signal that the task belongs on your no-fly list.
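The six questions and the any-yes rule amount to a simple screening procedure. The sketch below illustrates that logic; the function and variable names are invented for this example.

```python
# The six screening questions from the chapter, in order.
NO_FLY_QUESTIONS = [
    "Are there safety consequences if this task goes wrong?",
    "Does this task's value depend on it coming authentically from me?",
    "Is this task something I need to practice to maintain or develop?",
    "Does this task involve information I'm obligated to keep confidential?",
    "Does it require contextual knowledge, lived experience, or judgment AI lacks?",
    "Does the person receiving this output need it to come from me specifically?",
]

def belongs_on_no_fly_list(answers: list[bool]) -> bool:
    """A single 'yes' to any question signals the task belongs on the list."""
    assert len(answers) == len(NO_FLY_QUESTIONS)
    return any(answers)

# Example: a condolence note. Authenticity (Q2) and recipient need (Q6) are 'yes'.
condolence_note = [False, True, False, False, False, True]
print(belongs_on_no_fly_list(condolence_note))  # True
```

The point of the any-yes rule is deliberate asymmetry: one strong reason against AI use is enough, even when the other five answers argue for convenience.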
Section 9: The Nuanced Middle Ground
Most Situations Are Not Clear Cases
The categories above describe relatively clear cases. Many professional situations fall in the middle ground — AI assistance that may or may not be appropriate depending on specific context, degree of involvement, transparency, and purpose.
AI assistance with learning vs. AI replacement of learning. Using AI to explain a concept you're learning is different from using AI to produce the assignment that should demonstrate your learning. The question is whether AI is helping you understand or helping you appear to understand without understanding.
AI polish vs. AI generation. Having AI improve your writing is different from having AI write for you. The gradient matters: where you are on the spectrum from "AI helped me communicate my ideas more clearly" to "AI generated ideas I'm presenting as mine" affects the ethics and the outcomes.
Using AI to think through options vs. using AI to make the decision. AI can help you explore decision options, identify considerations you might have missed, and test your reasoning. The judgment call and accountability for the decision should remain with you in contexts where your judgment is what's needed.
Disclosure changes the ethical landscape. Many uses of AI that would be problematic without disclosure become acceptable with it. The client who knows AI drafts were involved in their strategy document has a different relationship to that document than one who doesn't. Disclosure is addressed in detail in Chapter 33.
Conclusion: The Sophisticated User's Judgment
The ability to say "I'm not going to use AI for this" is as important as any prompting skill. It is the judgment that makes everything else in AI practice defensible — the signal that you are using AI as a tool that serves your purposes, rather than as an automation that has replaced your judgment.
Your no-fly list will evolve. Tasks you currently delegate to AI may move onto the list as you recognize their importance to your skill development or relationship capital. Tasks currently on the list may be reconsidered as AI tools improve and as you develop practices that adequately address the concerns that put them there.
The list is not a constraint on AI use. It is the map of where your professional judgment lives.
Section 10: The Decision Framework in Practice
Making the Call in Real Time
The six-question framework from Section 8 is useful for deliberate reflection. Real professional life also requires faster, more intuitive versions of the same judgment. Over time, the questions should become reflexive enough that the no-fly determination happens quickly, not as a bottleneck.
The practical version of the framework becomes a set of alerts — mental flags that trigger the slower deliberate check when they fire:
Alert 1: "This is about someone specific." When a task involves generating content about a named, real individual — a performance evaluation, a reference letter, an apology, an assessment — the authenticity and relationship questions become relevant. Pause and assess.
Alert 2: "This decision has consequences I can't undo." When the output will drive a financial decision, a medical determination, a legal filing, or any other irreversible consequential choice, the safety-critical questions become relevant. Pause and assess.
Alert 3: "I'd be worse off if this got out." When the information you're about to input would concern you if it appeared on the other side of a data breach or a terms-of-service change — names, confidential business data, patient information, client strategies — the confidentiality questions become relevant. Pause and assess.
Alert 4: "This is something I need to get better at." When a task is something you've been delegating to AI but that your professional development requires you to do yourself, the learning context questions become relevant. Pause and assess.
These alerts are not the full framework — they are the triggers that make you apply the full framework when it matters.
The Default Question
When none of the alerts fire and the task is straightforward, the default is: proceed with appropriate verification practices from Chapters 29 and 30. The no-fly framework is not designed to create friction for routine AI use. It is designed to create deliberate evaluation for the specific situations where routine AI use is wrong.
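The two-tier structure described above, fast alerts that trigger the slower six-question check with routine verification as the default, can be sketched as follows. All names here are hypothetical, invented for illustration.

```python
def alerts_fire(about_specific_person: bool,
                irreversible_consequences: bool,
                harmful_if_disclosed: bool,
                skill_i_need_to_develop: bool) -> bool:
    """Fast-path mental flags: any one firing means pause and assess."""
    return any([about_specific_person, irreversible_consequences,
                harmful_if_disclosed, skill_i_need_to_develop])

def decide(alert_flags: dict, full_check_says_no_fly) -> str:
    """Two-tier decision: fast alerts first, full framework only when needed."""
    if not alerts_fire(**alert_flags):
        # No alerts: proceed with routine verification practices.
        return "proceed with verification"
    # An alert fired: run the deliberate six-question framework.
    return "no-fly" if full_check_says_no_fly() else "proceed with care"

# Example: drafting a reference letter for a named colleague.
flags = dict(about_specific_person=True, irreversible_consequences=False,
             harmful_if_disclosed=False, skill_i_need_to_develop=False)
print(decide(flags, lambda: True))  # "no-fly"
```

The design choice worth noticing is that the expensive check runs only behind the cheap one, which is exactly how the framework avoids becoming friction for routine AI use.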
The practitioner who has internalized both the productivity practices of Parts 2-4 and the judgment framework of Part 5 moves fluidly: fast where appropriate, careful where necessary, clear about the difference.
Section 11: Common Objections and Honest Responses
"But the Output Is Good"
The most common objection to no-fly restrictions is that the output is high quality. The AI-drafted condolence note is eloquent. The AI-completed learning assignment is well-written. The AI analysis is impressive.
Output quality is not the relevant criterion in the no-fly categories. In relationship-critical communication, the communication's value is not its quality — it is its authenticity of origin. In learning contexts, the task's value is not the output — it is the skill development the output is supposed to represent. In safety-critical contexts, the danger is not low-quality AI output — it is high-quality AI output that is wrong.
The "good output" objection mistakes the measure. No-fly categories are not defined by output quality. They are defined by whether AI use in the context serves the actual purpose of the context.
"No One Will Know"
This objection collapses the question of whether AI use is appropriate into the question of whether it is detectable. They are different questions.
Whether a condolence note was AI-generated affects its meaning in the relationship regardless of whether the recipient knows. Whether a student's essay was AI-written affects their skill development regardless of whether the professor detects it. Whether confidential data was entered into a consumer AI tool constitutes a potential confidentiality breach regardless of whether anyone finds out.
The "no one will know" framing treats ethics as reputation management. The framework in this chapter treats it as something else: an honest accounting of whether your AI use serves the purposes you claim it serves.
"Everyone Else Is Doing It"
This is the sycophancy of social norms: the implicit argument that if a practice is widespread, it must be acceptable. It is not an argument — it is the absence of an argument. "Everyone uses AI for condolence notes" (if true) does not change whether condolence notes with genuine personal origin have more meaning than AI-generated ones. "Everyone uses AI to complete learning assignments" (if true) does not change whether using AI to complete learning assignments is skill theft.
The no-fly framework is not a claim that everyone is behaving well. It is a personal standard for your own practice.
"I'll Lose Competitive Ground"
This objection is worth taking seriously. In contexts where others are using AI in ways you've decided not to, there may be genuine competitive costs. The student who uses AI for assignments may appear to produce better work than the student who doesn't. The professional who uses AI for efficiency may produce more than the one who maintains AI-free zones.
The honest response: some costs are worth bearing. The cost of not using AI for condolences is a small time investment to maintain genuine relationship capital. The cost of not using AI for core learning is the effort of actually learning — which is what you're there for. The cost of maintaining confidentiality boundaries is leaving some efficiency on the table — which is the price of professional integrity.
Not all competitive costs are costs you should avoid paying. The no-fly list is the explicit decision about which ones are worth it.
Section 12: Revisiting the No-Fly List
The List Is Not Fixed
Your no-fly list should evolve. The landscape changes: AI capabilities improve, professional norms shift, your own relationship with these tools develops. A task that belongs on the list today may not belong there in two years.
Some reasons to remove an item from the list:
- AI capabilities in the domain have improved to the point where the original concern (e.g., accuracy in safety-adjacent domains) is more adequately addressed by verification practices
- Professional norms in the domain have shifted to the point where the authenticity concern has changed (disclosure norms, if widely adopted, change the meaning of AI assistance)
- You have built verification and oversight practices that address the concern adequately
- The original concern reflected risk aversion rather than genuine principle
Some reasons to add items to the list:
- You have recognized skill atrophy in an area you had been delegating
- A near-miss or error revealed that a task carries more consequence than you estimated
- A confidentiality concern you hadn't thought through has been surfaced
- Your professional context has changed in ways that affect the authenticity or accountability dimension
Review the list annually. When something changes — in the tools, in the norms, in your own practice — update it deliberately rather than letting it drift.
Sharing and Discussing the List
There is value in making your no-fly list semi-public within your professional community — not as a statement of superiority, but as a contribution to collective norm-setting.
Professional communities are developing AI use norms right now. Those norms will be shaped by the conversations that happen in them. When experienced practitioners are explicit about what they don't use AI for and why, they contribute to the development of professional standards that benefit the whole community.
This is not an argument for self-righteous announcement. It is an argument for the kind of professional conversation — among colleagues, in professional organizations, in team settings — where the reasoning behind AI use decisions gets discussed and refined.
The no-fly list is not just a personal document. It is a position in an ongoing professional conversation about what responsible AI use looks like.
Next: Chapter 33 — Ethics of AI Use: Disclosure, Attribution, and Fairness, which addresses the ethical dimensions of AI use that persist even in contexts where it is appropriate.