Chapter 32 Quiz: When NOT to Use AI (and Why That Matters)

15 questions covering the six no-fly categories, skill atrophy, and personal no-fly list building.


Question 1

Which of the following is the most accurate description of why safety-critical domains are AI no-fly zones?

A) AI tools are too slow for real-time safety decisions
B) The product of error probability and harm severity produces unacceptable expected harm, and plausible-sounding wrong answers are more dangerous than no answer
C) Regulatory authorities have formally prohibited AI in safety-critical contexts
D) AI tools do not have access to specialized safety knowledge

Answer: **B — The product of error probability and harm severity produces unacceptable expected harm, and plausible-sounding wrong answers are more dangerous than no answer.** The core safety-critical argument: even low-probability errors, multiplied by catastrophic consequences, produce an expected harm that is not justified by the efficiency gains of AI use. The fluency of AI output in medical, legal, and engineering domains makes this especially dangerous — confident, plausible wrong answers in safety-critical contexts are worse than acknowledged uncertainty.
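The expected-harm product in this answer can be sketched with toy numbers. The probabilities and severities below are illustrative placeholders, not figures from the chapter:

```python
def expected_harm(p_error: float, severity: float) -> float:
    """Expected harm = probability of an error x severity of the harm if it occurs."""
    return p_error * severity

# Illustrative comparison (made-up numbers): a routine task with frequent but
# minor errors vs. a safety-critical task with rare but catastrophic errors.
routine = expected_harm(p_error=0.10, severity=1.0)
safety_critical = expected_harm(p_error=0.001, severity=10_000.0)

# The rare-but-catastrophic case dominates the comparison, which is the
# chapter's point: low error probability alone does not make AI use safe.
assert safety_critical > routine
```

The design point is that the two factors multiply: driving error probability down helps, but when severity is effectively unbounded, the product can still swamp any efficiency gain.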

Question 2

The test for whether a communication belongs in the "relationship-critical" no-fly category is:

A) Whether the recipient is a professional contact or a personal contact
B) Whether the communication involves sensitive emotions
C) Whether the communication would be meaningfully diminished if the recipient knew AI wrote it
D) Whether the communication is more than 100 words

Answer: **C — Whether the communication would be meaningfully diminished if the recipient knew AI wrote it.** Relationship-critical communications carry meaning proportionate to their authenticity — they matter because they come from a specific person, reflecting their genuine engagement. If the recipient's response to knowing AI wrote it would be diminishment of the communication's meaning, the communication belongs in the authentic-expression category. This test is more useful than categorical rules because it captures the relevant dimension: is the human origin of this communication part of its value?

Question 3

Using AI to complete learning assignments is problematic primarily because:

A) AI-generated work is typically lower quality than student work
B) The task is the vehicle for skill development, not the deliverable — AI completion achieves the surface output while defeating the learning purpose
C) Educators can always detect AI-generated work
D) AI use in learning contexts is always dishonest

Answer: **B — The task is the vehicle for skill development, not the deliverable — AI completion achieves the surface output while defeating the learning purpose.** The learning context argument is not primarily about honesty (though that's also relevant) — it is about what the task is for. In formative assessments and skill-building work, the purpose is development, not output. AI that produces the output while bypassing the development is not efficient — it is self-undermining. The student who completes all assignments via AI has a credential but not the capability the credential represents.

Question 4

HIPAA-protected health information should not be input into consumer AI tools because:

A) Consumer AI tools are not smart enough to handle medical information accurately
B) Consumer AI tools do not have appropriate Business Associate Agreements or data handling commitments required for protected health information
C) Healthcare providers are not allowed to use AI tools at all under HIPAA
D) AI tools always share user data with third parties

Answer: **B — Consumer AI tools do not have appropriate Business Associate Agreements or data handling commitments required for protected health information.** HIPAA requires specific contractual agreements (BAAs) and technical safeguards for any vendor handling PHI. Consumer AI tools do not have these agreements in place. Healthcare providers using consumer AI with PHI are not just risking a breach — they are committing a compliance violation regardless of whether any breach actually occurs. Enterprise tools with appropriate BAAs may be permissible; consumer tools are not.

Question 5

Skill atrophy from AI use occurs when:

A) AI tools are used for tasks that are too complex for them
B) Cognitive skills not practiced because AI handles them gradually degrade, creating capability gaps the practitioner may not notice until they need the skill without AI available
C) Users spend too much time learning to use AI tools and not enough time on their core work
D) AI tools make workers lazy

Answer: **B — Cognitive skills not practiced because AI handles them gradually degrade, creating capability gaps the practitioner may not notice until they need the skill without AI available.** Skill maintenance requires practice. When AI consistently handles tasks that would otherwise exercise a skill, the use-it-or-lose-it principle applies. The skill degrades gradually and often without the practitioner noticing — until a situation arises where the skill is needed without AI assistance. The "AI as crutch" failure mode describes this progression from helpful assistance to dependency that masks a genuine capability gap.

Question 6

Which of the following is the best description of the "just because you can" problem?

A) AI capability expands faster than practitioners can learn to use it
B) Having the technical ability to use AI for a task is not sufficient justification for doing so — appropriateness depends on whether AI use serves the actual goals of the context
C) AI tools should only be used when no human alternative exists
D) Advanced AI capabilities require advanced prompting skills

Answer: **B — Having the technical ability to use AI for a task is not sufficient justification for doing so — appropriateness depends on whether AI use serves the actual goals of the context.** AI can accomplish many things that, in specific contexts, it shouldn't. The "just because you can" problem is treating AI capability as justification without examining whether that application serves the deeper purposes of the context — relationship authenticity, skill development, confidentiality, safety. Capability answers the question "can I?" The chapter argues for always also asking "should I?" and provides a framework for answering it.

Question 7

A personal AI no-fly list is more useful than general principles because:

A) General principles are incorrect; specific rules are always better
B) Specific, written no-fly items remove moment-by-moment judgment calls under time pressure and replace them with pre-made policies
C) No-fly lists are required by professional organizations in most fields
D) Principles apply to other people's situations, not your own

Answer: **B — Specific, written no-fly items remove moment-by-moment judgment calls under time pressure and replace them with pre-made policies.** General principles ("don't use AI in safety-critical contexts") require a judgment call every time about whether the current situation falls under the principle. Under time pressure, those judgment calls tend to resolve toward the easier option. A specific no-fly list is a policy made in advance: "I don't use AI for X" is a pre-made decision that doesn't require re-deciding under pressure. The specificity and the writing-down are both important.

Question 8

High-novelty tasks are a no-fly category because:

A) AI tools lack the processing power for novel problems
B) Novel tasks require creativity that AI cannot provide
C) AI excels at pattern-matched problems but generates plausible-sounding pattern-matched answers even for genuinely novel situations where no reliable precedent exists in training data
D) Users cannot evaluate AI output on novel topics

Answer: **C — AI excels at pattern-matched problems but generates plausible-sounding pattern-matched answers even for genuinely novel situations where no reliable precedent exists in training data.** This connects directly to the hallucination mechanism: AI generates the most plausible completion based on patterns, whether or not the patterns apply to the current situation. For genuinely novel problems — situations without reliable precedent — AI will produce a confident-sounding answer that is pattern-matched to the nearest available analog. A sophisticated-sounding answer is no evidence that the genuine novelty of the situation has actually been addressed.

Question 9

What is the "AI as crutch" failure mode?

A) Using AI tools that are not advanced enough for the task at hand
B) A pattern where AI use originally adopted for efficiency becomes a dependency that masks and eventually creates a genuine capability gap
C) Relying too heavily on AI for emotional support
D) Using outdated AI tools when newer versions are available

Answer: **B — A pattern where AI use originally adopted for efficiency becomes a dependency that masks and eventually creates a genuine capability gap.** The crutch failure mode has a progression: AI is adopted for a task where human capability exists and is sufficient. The task is delegated to AI consistently. The human skill degrades from lack of exercise. The practitioner becomes uncomfortable performing the task without AI — and may not notice until a situation requires the skill in an AI-unavailable context. The crutch doesn't just support a capability; eventually it replaces it.

Question 10

Which of the following is NOT a reason that AI-drafted sincere apologies fail to achieve their purpose?

A) Recipients may detect AI generation and feel the apology was insincere
B) A genuine apology requires the offending party to own what happened in their own words — AI generation bypasses this accountability
C) AI-generated apologies are typically grammatically incorrect
D) The apology fails to give the other person acknowledgment from the specific person who caused harm

Answer: **C — AI-generated apologies are typically grammatically incorrect.** AI-generated apologies are typically grammatically excellent. The failure of AI-drafted apologies is not a quality failure — it is a meaning failure. A sincere apology requires: the offending party's genuine acknowledgment, their own words, their demonstrated understanding of impact, and the recipient's experience of receiving acknowledgment from the specific person. AI drafting bypasses the accountability and the specificity that make apologies work. Options A, B, and D describe real reasons; option C is simply false.

Question 11

The "enterprise vs. consumer AI" distinction matters for confidentiality because:

A) Enterprise AI tools are always more accurate than consumer tools
B) Enterprise AI deployments may have data processing agreements, retention policies, and security commitments that make them appropriate for confidential information in ways that consumer tools are not
C) Enterprise tools are only available to large organizations
D) Consumer tools require users to share information publicly

Answer: **B — Enterprise AI deployments may have data processing agreements, retention policies, and security commitments that make them appropriate for confidential information in ways that consumer tools are not.** For healthcare, legal, and NDA-covered information, the appropriateness of an AI tool depends not just on its capabilities but on its data handling commitments. Enterprise deployments typically include contractual commitments about data use, retention, and access that consumer tiers do not. HIPAA requires BAAs that consumer AI tools don't have. The tool selection decision for confidential information requires reviewing actual contractual commitments, not assuming all versions of a tool have the same data policies.

Question 12

Research on skill atrophy related to AI use is characterized as:

A) Comprehensive and conclusive — AI use clearly degrades skills
B) Non-existent — this is only a theoretical concern with no evidence
C) Preliminary but consistent with the well-established use-it-or-lose-it principle of skill maintenance; evidence suggests reduced performance in unassisted tasks for heavy AI users in delegated skill areas
D) Showing that AI use improves all skills by modeling expert performance

Answer: **C — Preliminary but consistent with the well-established use-it-or-lose-it principle of skill maintenance; evidence suggests reduced performance in unassisted tasks for heavy AI users in delegated skill areas.** The chapter is honest about the state of the evidence: the AI-specific research is preliminary, but it is consistent with decades of established cognitive science on skill maintenance. Skills require practice. Consistent delegation of specific cognitive tasks to AI tools is likely to reduce those skills over time. The specific AI studies have limitations; the underlying mechanism is well-established.

Question 13

In the cover letter debate, the meaningful distinction between acceptable and unacceptable AI use is:

A) Any AI involvement in a cover letter is inappropriate
B) AI involvement is only inappropriate if the tool being used requires a subscription
C) The gradient between AI helping you communicate your genuine voice more effectively vs. AI generating content that doesn't reflect your actual perspective and relationship to the role
D) Cover letters that are edited after AI generation are always appropriate; unedited AI letters are never appropriate

Answer: **C — The gradient between AI helping you communicate your genuine voice more effectively vs. AI generating content that doesn't reflect your actual perspective and relationship to the role.** The chapter explicitly avoids a bright line here. "AI improved my writing" is different from "AI wrote my letter." The relevant distinction is whether the final product authentically represents the candidate's actual perspective, voice, and genuine interest in the role — or whether it performs authenticity while being generated by a model that has no such perspective. This is a judgment call that depends on the degree and nature of AI involvement, not a binary.

Question 14

Which question from the no-fly list framework applies to using AI to draft performance evaluations for direct reports?

A) Only safety consequences matter — and this isn't safety-critical
B) Multiple questions apply: does the value depend on it coming from me (yes — I'm the one who observed their performance), does it require contextual knowledge AI lacks (yes — my specific observations), and does the recipient need it from me (yes — the relationship and accountability require it)
C) Confidentiality concerns prevent AI use in HR contexts
D) Performance evaluations are always on the no-fly list

Answer: **B — Multiple questions apply: does the value depend on it coming from me (yes — I'm the one who observed their performance), does it require contextual knowledge AI lacks (yes — my specific observations), and does the recipient need it from me (yes — the relationship and accountability require it).** Performance evaluation is a good example of a task where multiple no-fly questions trigger. The evaluation is based on your observations; AI doesn't have them. The evaluation is a form of professional accountability that should reflect your judgment; AI doesn't have your judgment. The evaluation matters to the person being evaluated partly because it comes from their manager, who knows them; AI doesn't know them. AI might help structure or improve language, but the core evaluation must originate from your genuine observations and judgment.

Question 15

The most sophisticated AI users are described in this chapter as:

A) Those who use AI for the widest range of tasks
B) Those who avoid AI use out of concern for quality
C) Those who have developed clear judgment about when AI use serves their actual purposes and when it doesn't — including a well-calibrated no-fly list
D) Those who use only the most advanced AI models available

Answer: **C — Those who have developed clear judgment about when AI use serves their actual purposes and when it doesn't — including a well-calibrated no-fly list.** Sophistication in AI use is not measured by volume of use or by model capability. It is measured by judgment: knowing when AI adds genuine value and when it creates surface output while degrading authentic value. The no-fly list is not a constraint on productive AI use — it is the map of where genuine professional judgment lives. The practitioner who can say "I don't use AI for this, and here's why" demonstrates more AI literacy than one who uses AI for everything without reflecting on what each use serves.