Glossary

304 terms from Working With AI Tools Effectively

# A B C D E F G H I K L M N O P Q R S T U V W Y Z

#

"Building Custom GPTs: A Practical Guide"
OpenAI Community Forum https://community.openai.com → Chapter 14 Further Reading
"Building with Claude Projects"
Anthropic Help Center https://support.anthropic.com → Chapter 15 Further Reading
"Can AI-generated text be reliably detected?"
various papers, 2023-2025 → Chapter 15 Further Reading
"Carried by those who've learned."
Frames the audience as experienced people whose choices are the product of accumulated learning, not aspiration. Not for beginners — for people who know. → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
"ChatGPT Enterprise Privacy FAQ"
OpenAI https://openai.com/enterprise-privacy → Chapter 14 Further Reading
"Comparing Human and AI Writing Evaluations"
several studies available via academic search → Chapter 15 Further Reading
"Data Analysis with ChatGPT Code Interpreter"
various tutorials on Towards Data Science https://towardsdatascience.com → Chapter 14 Further Reading
"Generative AI at Work"
Brynjolfsson, Li, and Raymond (2023) https://www.nber.org/papers/w31161 → Chapter 14 Further Reading
"It made the cut."
Simple, direct, uses hiking/outdoor language. The product is not recommended; it was evaluated and earned its inclusion. → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
"Long Context: the Next Challenge for AI Systems"
Google Research Blog https://research.google/blog/ → Chapter 16 Further Reading
"Measuring Progress in Reducing AI Sycophancy"
various academic papers available via Google Scholar → Chapter 15 Further Reading
"Navigating AI Privacy in the Workplace"
International Association of Privacy Professionals (IAPP) https://iapp.org → Chapter 14 Further Reading
"Not everything gets a second trip."
Evokes the experience of trying gear and leaving it behind. This product comes back on every trip. Slightly longer but story-rich. → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
"System Prompt Design Patterns"
Lilian Weng, OpenAI (various posts) https://lilianweng.github.io/lil-log/ → Chapter 14 Further Reading
"The Future of Work with Google Workspace"
Google Workspace thought leadership blog https://workspace.google.com/blog/ → Chapter 16 Further Reading
"The Impact of Generative AI on Knowledge Workers"
McKinsey Global Institute, 2024 https://www.mckinsey.com/mgi → Chapter 14 Further Reading
"The pack doesn't lie."
The pack is the most honest expression of what a hiker trusts. What's in it has been tested, not just marketed. This line uses the judgment of the pack itself as the arbiter of quality. → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
"Water weighs. Doubt doesn't."
Plays on the functional language of serious hiking (ounce-counting, weight optimization) while using doubt as the actual thing to eliminate. The subtext: with this product, you don't carry doubt about your water source. → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
"Work Trend Index"
Microsoft https://www.microsoft.com/en-us/worklab/work-trend-index → Chapter 16 Further Reading
1. Meeting summary
Weak: "Summarize the meeting" - Strong: "Summarize this meeting transcript in three sections: Key decisions made (numbered list), Action items with owner and due date (table), and Open questions that still need resolution (bullets). Max 250 words total." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
2. Performance review feedback
Weak: "Help me write feedback for my employee" - Strong: "Draft constructive feedback for a junior analyst who consistently meets deadlines and produces accurate work but struggles to communicate proactively when blockers arise. The feedback should acknowledge the strength, describe the specific pattern, and suggest one concrete change." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
3. Code explanation
Weak: "Explain this code" - Strong: "Explain this Python function to a junior developer who understands basic syntax but has not worked with decorators before. Use a real-world analogy for what the decorator does, then walk through the code line by line. Max 200 words." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
4. Cover letter
Weak: "Write a cover letter for this job" - Strong: "Write a cover letter for a senior product manager applying to [Company] for [Role]. The applicant has 7 years of B2B SaaS experience, led the 0-to-1 launch of a data analytics feature, and has a background in user research. Tone: confident and specific." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
5. Sales email
Weak: "Write an email to a potential customer" - Strong: "Write a cold outreach email to a VP of Operations at a mid-sized logistics company. We sell fleet management software. Pain point: their current system requires manual mileage reporting. Our differentiator: automated real-time GPS logging with no manual entry." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
6. Slide content
Weak: "Give me content for my presentation slide" - Strong: "Write the content for one PowerPoint slide on the business case for investing in employee mental health programs. Format: a header, three bullet points of no more than 12 words each, and one supporting statistic. Audience: executive leadership." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
7. Policy document
Weak: "Write a remote work policy" - Strong: "Draft a remote work policy for a 50-person technology company. Sections: eligibility criteria, required availability hours (core hours 10am-3pm local time), equipment stipend ($500/year, receipts required), in-person requirements (one team week per quarter)." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
8. Social media post
Weak: "Write a post for LinkedIn" - Strong: "Write a LinkedIn post from the CEO's perspective announcing that we just reached 100 enterprise customers. Tone: genuinely grateful, not performatively humble. Mention the team, not just the milestone. 150 words, no hashtags, end with one forward-looking sentence." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
`--ar [width:height]`
Aspect ratio. `--ar 16:9` for landscape video/presentation format, `--ar 2:3` for portrait/phone, `--ar 1:1` for square social media. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--chaos [0-100]`
Controls variation between generated images. Low chaos (0-10) produces four similar variations. High chaos (50-100) produces four very different interpretations. Use high chaos early in exploration, low chaos when you have found a direction you want to refine. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--cref [URL]`
Character reference. Attempts to maintain the appearance of a character from the reference image in the new generation. Useful for creating consistent characters across multiple images. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--no [concept]`
Negative prompt. `--no hands` significantly reduces the frequency of deformed hands in images that include people. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--q`
Controls generation quality and time. `--q .5` is faster and cheaper; `--q 1` uses full quality. For exploration, `.5` is fine. For final outputs, use `1`. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--seed [number]`
Reproducibility. Using the same seed with the same prompt produces the same image. Useful for consistency across variations. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--sref [URL]`
Style reference. Applies the style of the referenced image to your prompt. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--style [value]`
Applies aesthetic variation. `--style raw` turns off Midjourney's automatic aesthetic enhancement, giving you more direct control but requiring more complete prompting. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--v [version]`
Model version. `--v 6.1` (current) for highest quality photorealism. `--v 5` for users who prefer its more stylized artistic output. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`--weird [0-3000]`
Introduces unusual, experimental aesthetic qualities. Useful for finding unexpected creative directions. Most useful at moderate values (250-500). → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
`brand-voice-principles.pdf`
The ten pages of the brand guide specifically about voice (extracted and reformatted). Section headers include: "The Direct Principle," "How We Handle Technical Complexity," "Tone Variation by Channel," and "Common Voice Violations." → Case Study 1: Alex's Brand Voice Assistant — From Custom GPT to Team Resource
`off-brand-examples.md`
Ten annotated examples with before/after comparisons. The "before" shows the off-brand version; the "after" shows the corrected version; the annotation explains the specific principle being applied. → Case Study 1: Alex's Brand Voice Assistant — From Custom GPT to Team Resource
`on-brand-examples.md`
Fifteen annotated examples of on-brand content — blog introductions, email subject lines, social posts, and product descriptions. Each example has a brief note on what makes it work. → Case Study 1: Alex's Brand Voice Assistant — From Custom GPT to Team Resource
`vocabulary-guide.md`
A Markdown table organized by category: "Opening sentence patterns we use," "Phrases we never use and why," "Words that signal off-brand hedging," and "Words that signal on-brand confidence." → Case Study 1: Alex's Brand Voice Assistant — From Custom GPT to Team Resource

A

A defined identity
the configured system is not a general-purpose AI assistant; it is a specific tool with a specific purpose. Users know what it is for, what it does well, and where to go when it cannot help. → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
A feedback channel
a shared Slack thread where anyone can post examples of GPT assessments they thought were wrong, with a note on why. → Case Study 1: Alex's Brand Voice Assistant — From Custom GPT to Team Resource
A five-minute team walkthrough
not training, just a live demo using a real piece of content so everyone could see what the output looked like. → Case Study 1: Alex's Brand Voice Assistant — From Custom GPT to Team Resource
A personal role assignment library
saved, ready-to-use prompts for your most common review and evaluation needs — is among the highest-leverage prompt infrastructure investments you can make. Build it once; use it for every high-stakes piece of work. → Chapter 9 Key Takeaways: Instructional Prompting and Role Assignment
Academic Literature
Google Scholar (scholar.google.com) — broad academic coverage, free - PubMed (pubmed.ncbi.nlm.nih.gov) — biomedical and life sciences, free - arXiv (arxiv.org) — physics, math, CS, economics pre-prints, free - SSRN (ssrn.com) — social science and economics, largely free - ACL Anthology (aclanthology.org) — computational linguistics and NLP, free → Chapter 30: Verifying AI Output — Fact-Checking Workflows
Action Checklist: Becoming a ChatGPT Power User
[ ] Set up Custom Instructions with detailed context about your role and preferences - [ ] Enable or review Memory settings - [ ] Try Advanced Data Analysis with a real dataset from your work - [ ] Build one GPT for a task you do repeatedly - [ ] Identify one regular task currently taking 2+ hours that AI could help shorten → Chapter 14: Mastering ChatGPT and GPT-4
Action Checklist: Getting the Most from Gemini
[ ] Verify your organization uses Google Workspace at a tier that includes Gemini features - [ ] Enable Gemini in Gmail and try "Help me write" for your next email draft - [ ] Explore the Gemini side panel in Google Docs on a current document - [ ] Use formula assistance in Sheets for your next comp → Chapter 16: Google Gemini and the Workspace Integration
Action Checklist: Starting Your First Chain
[ ] Identify a recurring complex task with three or more distinct phases - [ ] Map the expert process: what steps would a skilled human take? - [ ] Write a chain specification document with input/output for each step - [ ] Identify which steps need human review gates - [ ] Build a two-step version and test it before expanding → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
Add a gate if:
The next step uses the current output as a foundation for significant work (if step 3's output shapes steps 4 through 8, a gate after step 3 is worth the slowdown) - The current step makes interpretive or judgment-based decisions (not just mechanical processing) - An error in the current step would propagate through every subsequent step → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
Advanced Practitioner Course (12 weeks):
Full Parts 1–3 (Weeks 1–4) - Selected chapters from Part 4 based on cohort (Weeks 5–7) - All of Part 5 (Weeks 8–9) - All of Part 6 (Weeks 10–11) - Part 7 + Capstone project (Week 12) → Prerequisites and Assumed Knowledge
After running all five:
Where do the frameworks agree in their conclusions? - Where do they conflict? How do you resolve the conflict? - Which framework was most useful for this particular decision? Why? - Is there a decision type where this framework would be the first one to reach for? → Chapter 25 Exercises: Decision Support, Analysis, and Strategic Thinking
After the context packet:
Average editing time per content piece: 5-12 minutes - Approximate edit rate: 10-20% of content needed significant revision - Brand voice accuracy (Alex's own rating): 4.5/5 on average → Case Study 01: Alex's Brand Voice Problem — When AI Sounds Like Nobody
After translation workflow:
CPO responds to status reports most weeks with a question or acknowledgment - COO has specifically requested earlier notification on maintenance windows - Raj has been included in two product roadmap discussions that previously wouldn't have included engineering → Case Study 27-2: Raj's Stakeholder Translation — Making Technical Reports Readable
AI does not help in specific ways:
**It doesn't know your full context.** The most important factors in most real decisions — your organization's specific strategy, your team's capabilities, your relationships, your personal values and risk tolerance — are not in the model. AI decisions made without this context are decisions made absent that context. → Chapter 25: Decision Support, Analysis, and Strategic Thinking
AI helps with complex decisions because:
**It doesn't share your biases.** When you're already leaning toward an option, AI hasn't invested emotionally in any outcome. It can generate equally strong arguments for alternatives without the reluctance you'd feel doing it yourself. - **It has broad exposure.** AI has been trained on enormous amounts of material about how similar decisions have played out elsewhere. → Chapter 25: Decision Support, Analysis, and Strategic Thinking
AI stack
a collection of tools configured and integrated to support their specific workflow. → Chapter 5: Setting Up Your Personal AI Environment
AI Suitability Key:
High: Repetitive, text-heavy, has clear inputs and outputs, or involves synthesis of information - Medium: Has some variable elements but follows a recognizable pattern; AI assists, human finalizes - Low: Highly contextual, requires deep personal relationships, involves real-time physical judgment, or carries consequences too high to delegate → Appendix C: Workflow Templates & Worksheets
Alex
a marketing manager who uses AI daily but finds herself wrestling with quality and trust - **Raj** — a software developer who integrates AI tools into his engineering workflow - **Elena** — a freelance consultant who depends on AI for speed, but guards her professional reputation carefully → Working With AI Tools Effectively
Alex (marketing manager, non-technical, creative)
Advantage: Reasonable expectations and a practical orientation — she is not looking to be impressed, just helped. She is open to experimenting. - Disadvantage: She initially treats the tool like a search engine — expecting it to surface correct, relevant, specific information without being given that context. → Chapter 1 Quiz: What AI Tools Actually Are (and Aren't)
Alex's AI no-fly list:
Condolences and personal expressions of grief to people I know - Content that is explicitly meant to represent my voice and perspective to my audience (certain newsletter sections, my professional positioning statements) - Fundamental analysis and interpretation of data my clients paid me to analyze → Chapter 32: When NOT to Use AI (and Why That Matters)
Alex's evaluation (section by section):
Campaign objective: Good. - Target audience: Demographic solid, psychographic uses the insight language well. - Market context: Reasonable, though the specific competitor details it generated need verification before going into a final client document. - Key insight: Excellent — this came directly from her own input. → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
Analysis questions for each:
Which components are present? - Which are missing? - What assumption does the AI likely make to fill each gap? - How would the output quality differ if the missing components were added? → Chapter 7 Exercises: Prompting Fundamentals
Answer framework:
Ask for: SDK version, language, code snippet showing how they initialize the client - Common causes: expired key, wrong key format, missing ANTHROPIC_API_KEY env variable - First response: ask for the three items above before attempting to diagnose - Known issue (as of Q3 2024): certain Python SDK versions are affected, so check the version first → Case Study 2: Raj's Email Assistant — A Custom Triage and Draft Bot
Anthropic Python SDK
pypi.org/project/anthropic The official SDK for the Anthropic API, used in the Python code examples. Requires an API key (available at console.anthropic.com). → Chapter 22 Further Reading: Data Analysis and Visualization
Applicant tracking systems with AI features
Greenhouse, Lever, iCIMS — have integrated AI for resume screening, candidate ranking, and interview scheduling. These features require careful evaluation: the research record on algorithmic hiring tools is mixed, with documented cases of bias against protected groups. → Chapter 19: Specialized and Domain-Specific AI Tools
Architecture and design tasks:
Designing system components - Evaluating technology choices - Planning database schema or API structure → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Attribution and Disclosure
[ ] Define your default disclosure position for client-facing work - [ ] Address industry-specific disclosure requirements - [ ] Establish internal documentation expectations for AI-assisted work → Chapter 38: Deploying AI in Teams and Organizations
Automatic L3 (no AI draft, immediate human):
Customer used words indicating strong emotional distress (anger, despair, "I've had enough") - Ticket mentions legal action, regulatory filing, or formal complaint - Customer has contacted about the same issue 3+ times in 30 days - Enterprise customer (top 20% by ARR) with any complaint - Upcoming renewal for the account → Case Study 28-2: The Support Team's AI Upgrade — From Backlog to Same-Day Response
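The routing rules above amount to a simple decision function. A minimal sketch, with illustrative field names and keyword lists (the `Ticket` fields, `DISTRESS_WORDS`, and `LEGAL_WORDS` are assumptions for illustration, not the case study's actual implementation):

```python
from dataclasses import dataclass

# Illustrative keyword lists; a real system would tune these carefully.
DISTRESS_WORDS = {"angry", "furious", "despair", "i've had enough"}
LEGAL_WORDS = {"legal action", "lawyer", "regulatory", "formal complaint"}

@dataclass
class Ticket:
    text: str
    contacts_last_30_days: int = 0
    is_top_20pct_arr: bool = False

def needs_immediate_human(t: Ticket) -> bool:
    """Return True if the ticket should skip the AI draft (automatic L3)."""
    lower = t.text.lower()
    if any(w in lower for w in DISTRESS_WORDS | LEGAL_WORDS):
        return True
    if t.contacts_last_30_days >= 3:  # repeat contact on the same issue
        return True
    if t.is_top_20pct_arr:  # enterprise customer with any complaint
        return True
    return False
```

The point of encoding the rules this way is that the escalation logic stays auditable: every route to a human is an explicit, testable condition rather than a model judgment.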

B

B) Context
and often **C) Format** as well. → Chapter 7 Quiz: Prompting Fundamentals
Before the context packet:
Average editing time per content piece: 35-45 minutes - Approximate edit rate: 60-70% of content needed significant revision - Brand voice accuracy (Alex's own rating): 3/5 on average → Case Study 01: Alex's Brand Voice Problem — When AI Sounds Like Nobody
Before translation workflow:
CPO and COO engagement with status reports: effectively zero (by their own admission) - Questions or feedback from CPO: rarely - Raj's sense of being connected to product strategy: low → Case Study 27-2: Raj's Stakeholder Translation — Making Technical Reports Readable
Beginner characteristics:
Long, vague prompts (do you still do this?) - Accepting first outputs without adequate review (do you still do this?) - Giving up when first attempts fail (do you still do this?) - Using AI for tasks where it doesn't help (do you still do this?) → Chapter 41 Exercises: Building Your Long-Term AI Practice
Best practices:
Upload PDFs for most use cases — the convenience and format preservation outweigh the minor accuracy risk for typical documents - Paste text for scanned documents (PDF OCR can be unreliable for complex layouts), for sensitive content where you want to control exactly what is sent, or when you only need a small excerpt → Chapter 12: Multimodal Prompting: Working with Images, Documents, and Data
Brand Copy Writer
The Generator adapted for product descriptions, using her five-example few-shot reference library (see Chapter 10 Case Study) 2. **Campaign Analyzer** — The Analyzer adapted for evaluating campaign performance data against brand KPIs 3. **Competitor Watcher** — The Extractor adapted for pulling key points from competitor content → Chapter 11: Prompt Engineering Patterns for Recurring Tasks

C

Chain-of-Thought (CoT) Prompting
getting the model to reason through problems step by step before answering, which dramatically improves accuracy on anything involving multiple logical steps 2. **Few-Shot Prompting** — providing worked examples inside the prompt to teach the model your specific standards, format, and style 3. **Self-Critique Prompting** — asking the model to review and improve its own output → Chapter 10: Advanced Prompting Techniques
Change Readiness Assessor
a structured framework for evaluating organizational readiness for transformation, which she currently rebuilds for each change management engagement 2. **Stakeholder Map Builder** — extracting and organizing stakeholder information from interview data and org charts into a structured influence/interest map → Case Study: Elena's Consulting Toolkit — Patterns as Competitive Advantage
Chapter 5
optional environment setup with `pip install` commands - **Chapter 36** — programmatic AI access via APIs (clearly marked as technical) - **Chapter 22** — optional data analysis examples - **Appendix B** — Python code reference → Prerequisites and Assumed Knowledge
ChatGPT (GPT-4o):
Occasionally over-eager to help — may fabricate specifics when it doesn't know - Can be verbose without explicit length constraints - Very strong instruction following but sometimes over-literal interpretation → Chapter 13: Diagnosing and Fixing Bad Outputs
ChatGPT Advanced Data Analysis
chat.openai.com Requires a ChatGPT Plus subscription. Accepts file uploads (CSV, Excel), runs Python code in a sandboxed environment, generates charts, and provides written interpretation. The primary Tier 1 tool discussed in this chapter. → Chapter 22 Further Reading: Data Analysis and Visualization
Citation Verification
CrossRef (crossref.org) — DOI registration authority - WorldCat (worldcat.org) — books and library holdings - Semantic Scholar (semanticscholar.org) — AI-assisted academic search → Chapter 30: Verifying AI Output — Fact-Checking Workflows
Claude
claude.ai Anthropic's AI assistant. Well-suited for multi-step writing workflows, long-context document work, and nuanced editing instructions. The extended context window (up to 200,000 tokens in some versions) makes it particularly useful for long-form content collaboration. → Chapter 20 Further Reading: Writing and Editing with AI
Claude:
More conservative about factual claims — may refuse or hedge more than necessary - Strong at nuanced instruction following and maintaining specified constraints - May occasionally be overly cautious about requests it interprets as potentially harmful → Chapter 13: Diagnosing and Fixing Bad Outputs
Client context
who the client is, what the engagement is trying to accomplish, and the key stakeholders' names and roles. This is the context that never changes and is always relevant. → Case Study 2: Elena's Client Research System — Claude Projects in Practice
Code Reviewer
The Critic adapted for code review: "Review this code as a security-focused senior engineer at a fintech company. Identify issues in: security vulnerabilities, error handling, test coverage gaps, and performance risks." 2. **Function Documenter** — The Transformer adapted for converting code into do → Chapter 11: Prompt Engineering Patterns for Recurring Tasks
Code writing tasks:
Boilerplate generation (REST endpoint scaffolding, data model definitions, test setup) - Algorithm implementation (sorting logic, data transformation, business logic) - Standard library usage (file I/O, date manipulation, string processing) - Security-sensitive implementations (authentication, autho → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Competent characteristics:
Clear prompt structure (do you have this?) - Reliable use cases where you consistently get good results (do you have these?) - Difficulty adapting when encountering new task types (is this still true?) → Chapter 41 Exercises: Building Your Long-Term AI Practice
confidence is not accuracy
is perhaps the single most important thing to internalize about AI output. The more authoritative an AI response sounds, the more carefully you should verify it, not the less. → Chapter 29: Hallucinations, Errors, and How to Catch Them
Consensus
consensus.app Scientific consensus evaluation for empirical questions. Best for yes/no questions about research evidence. → Chapter 21 Further Reading: Research, Synthesis, and Information Gathering
Content territories:
Earned through use: Documentary-style content showing the product in real trail conditions, not studio settings. - The pack edit: Content about what serious hikers carry and why — with this product as one answer to the question. - Real hikers, real packs: UGC-forward strategy showing actual customer → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
Context
what the image is and why you're analyzing it 2. **Specific questions or analysis tasks** — exactly what you want to know or assess 3. **Output format** — how you want the analysis structured → Chapter 12: Multimodal Prompting: Working with Images, Documents, and Data
Continue iterating when:
The output is in the right direction but needs refinement - You are getting progressively closer with each iteration - You are adding depth or specificity to a good structure - A self-critique iteration could catch remaining issues → Chapter 6: The Iteration Mindset — Working in Loops, Not Lines
Cost calculation:
AI subscriptions: $200/month for the team - Alex's management time on policy and training: approximately 40 hours over 6 months, valued at her hourly rate → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
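The cost side of this calculation can be made concrete. A quick sketch, assuming a hypothetical $75/hour rate for Alex's time (the case leaves the rate unspecified):

```python
# Cost side of the ROI calculation over the 6-month period described above.
SUBSCRIPTION_PER_MONTH = 200   # team AI subscriptions, $/month
MONTHS = 6
MANAGEMENT_HOURS = 40          # Alex's time on policy and training
HOURLY_RATE = 75               # hypothetical stand-in; substitute the real rate

subscription_cost = SUBSCRIPTION_PER_MONTH * MONTHS   # 1200
management_cost = MANAGEMENT_HOURS * HOURLY_RATE      # 3000
total_cost = subscription_cost + management_cost
print(total_cost)  # 4200 under these assumptions
```

Completing the ROI picture means comparing `total_cost` against hours saved, valued at the same rates.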
Costs of over-trust:
**Reputational damage.** Presenting fabricated statistics to a client or publishing inaccurate information harms professional credibility, sometimes severely. Recovery from a public factual error is costly. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Costs of under-trust:
**Lost productivity.** The primary value of AI tools is time savings. If you re-verify everything independently, you get no time savings. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Cultural calibration
describing the cultural world your audience inhabits (what they read, what they buy, what they value) — gives the AI reference points that subtly shape vocabulary and emotional register in ways that demographic data alone cannot. → Chapter 8 Key Takeaways: Context Is Everything
Current phase
she updates this section as the engagement progresses ("We are in data collection phase" or "We are drafting the strategic options section"). → Case Study 2: Elena's Client Research System — Claude Projects in Practice
Cursor
cursor.sh An AI-native code editor built on VS Code. Includes chat-based code editing, codebase-aware prompting (referencing specific files and functions in context), and multi-file editing. Well-suited for the architecture discussion and iterative implementation workflow. → Chapter 23 Further Reading: Software Development and Debugging
CWE (Common Weakness Enumeration)
cwe.mitre.org The authoritative catalog of software weakness types. When AI identifies a security issue, it will often use CWE classification (CWE-89 for SQL injection, CWE-79 for XSS). Knowing how to look up CWE entries gives you the full vulnerability description, examples, and mitigation guidance. → Chapter 23 Further Reading: Software Development and Debugging

D

Data and Confidentiality
[ ] Identify which AI tools are approved for team use - [ ] Define categories of information that can/cannot be shared with AI tools - [ ] Address training data and data residency questions for your industry/context → Chapter 38: Deploying AI in Teams and Organizations
Data.gov
for any statistic that traces back to government data (labor, demographics, internet usage) → Case Study 2: Alex's Verification Stack
Debugging tasks:
Diagnosing error messages - Tracing unexpected behavior - Understanding obscure compiler or runtime errors → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Deepen existing practice when:
Your current use cases have clear room for improvement (high iteration counts, low batting averages on important tasks) - You have identified specific bottlenecks that deeper practice would address - A new capability is generating a lot of attention but early reports suggest it's not yet reliable for real work → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
Diagnostic signals:
Output is generic — could apply to any company or situation - Output uses reasonable approaches but not your specific approach - Output sounds plausible but doesn't fit your actual context - The model made assumptions about your audience, goals, or constraints that were wrong → Chapter 13: Diagnosing and Fixing Bad Outputs
Different "challenges"
one piece that had been praised for handling an eco claim without preachiness, one that used humor effectively, one that described a mundane product (a sponge) in a way that felt genuinely appealing → Case Study: Alex's Style Cloner — Few-Shot Prompting for Brand Voice
Do not upload to external AI tools:
Personally identifiable information (PII) — names, email addresses, social security numbers - Protected health information (PHI) covered by HIPAA - Non-public financial data with regulatory sensitivity - Data covered by NDAs or confidentiality agreements with specific handling requirements → Chapter 22: Data Analysis and Visualization
Do not use CoT when:
The task is factual retrieval (no reasoning required) - You need a short, direct output and reasoning would be noise - The task is highly creative and open-ended (CoT can over-constrain) - Latency or response length is a significant concern → Chapter 10: Advanced Prompting Techniques
Do not:
Use: "elevate your space," "perfect for any occasion," "warm your home," "create ambiance," "luxurious," "cozy," "indulge" - Use exclamation points anywhere in body copy - Begin product descriptions with "Introducing..." or "Meet..." - Write about the product as if it is the subject ("This candle does...") → Case Study 01: Alex's Brand Voice Problem — When AI Sounds Like Nobody
Do:
Use short, declarative sentences - Reference the experience or moment, not just the product feature - Be specific about place, time, or mood when describing scent or atmosphere - Use "the" before product names when possible ("the Bordeaux candle") - Write product descriptions as if describing something you have actually experienced → Case Study 01: Alex's Brand Voice Problem — When AI Sounds Like Nobody
Document the full process:
Number of rounds - Iteration types used - What each round changed - Time spent on AI interaction vs. human editing - Final assessment: did iteration produce better output than a single-shot attempt would have? → Chapter 6 Exercises: Working in Loops, Not Lines
Documentation tasks:
Writing docstrings - Creating README files - Writing technical specifications → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Documents included:
[Filename]: [What it contains, last updated date] - [Filename]: [What it contains, last updated date] → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems

E

Elena (freelance consultant, efficiency-focused)
Advantage: Strong domain expertise across many areas means she can evaluate AI output for substantive correctness. Her professional judgment is a reliable quality filter. - Disadvantage: Her efficiency-first orientation creates pressure to trust and move on rather than verify. If AI output looks pro → Chapter 1 Quiz: What AI Tools Actually Are (and Aren't)
Elena's AI no-fly list:
Client data in any form into consumer AI tools - Legal and regulatory claims in client deliverables — these go to primary sources or to counsel - The analytical conclusions in my deliverables — AI helps with the research and the structure, but the conclusion is my professional judgment - Communicati → Chapter 32: When NOT to Use AI (and Why That Matters)
Elena's role
She is the primary strategist and deliverable author; she uses Claude to organize research, test synthesis, and draft deliverable components. → Case Study 2: Elena's Client Research System — Claude Projects in Practice
Elicit
elicit.org Research question-based literature search across Semantic Scholar. Best for academic research questions with clear empirical focus. → Chapter 21 Further Reading: Research, Synthesis, and Information Gathering
Email fetcher
polls the shared inboxes via IMAP and retrieves unprocessed emails 2. **Triage engine** — classifies and prioritizes each email using the Anthropic API 3. **Response drafter** — generates draft responses for emails requiring one 4. **Output handler** — writes triage results and drafts to a shared do → Case Study 2: Raj's Email Assistant — A Custom Triage and Draft Bot
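The four-stage pipeline above can be skeletonized as follows. Every stage is stubbed with a placeholder (a real version would use `imaplib` for fetching and the Anthropic API for triage and drafting), and all names and sample data are illustrative, not from the case study:

```python
# Hypothetical skeleton of the four-stage triage pipeline.
# Each function stands in for a real component; sample data is invented.

def fetch_unprocessed():            # 1. Email fetcher (IMAP poll)
    return [{"id": 1, "subject": "Refund request", "body": "..."}]

def triage(email):                  # 2. Triage engine (model classification)
    return {"priority": "high", "needs_response": True}

def draft_response(email):          # 3. Response drafter
    return f"Draft reply to: {email['subject']}"

def write_output(email, result, draft):  # 4. Output handler (shared doc)
    return {"email_id": email["id"], **result, "draft": draft}

processed = []
for email in fetch_unprocessed():
    result = triage(email)
    draft = draft_response(email) if result["needs_response"] else None
    processed.append(write_output(email, result, draft))

print(processed[0]["draft"])  # Draft reply to: Refund request
```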
Escalation indicators:
Expressed strong emotion (anger, distress, frustration that goes beyond irritation) - Mentions of legal action, regulatory complaints, or formal disputes - References to a specific senior person or relationship ("I've been a customer for 10 years") - Second or third contact on the same unresolved is → Chapter 28: Customer-Facing Work: Sales, Support, and Outreach
Every month:
Review your most-used prompts. Are they still optimal? Have you found better approaches that haven't been incorporated? - Look at prompts you used once and haven't returned to. Were they failures (good to remove) or opportunities (good to revisit)? - Check prompts against current AI capability. Some → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
Example — Competitive Analysis Chain:
Sub-chain A: Research Company 1 → summarize strengths/weaknesses → format - Sub-chain B: Research Company 2 → summarize strengths/weaknesses → format - Sub-chain C: Research Company 3 → summarize strengths/weaknesses → format - Merge step: Take all three formatted summaries → synthesize comparative → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
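The fan-out-and-merge structure above can be sketched in a few lines. `run_step` is a placeholder for a real model call (here it only tags its input so the chain structure stays visible), and the company names are illustrative:

```python
# Minimal sketch of the parallel sub-chains plus merge step described above.

def run_step(instruction: str, text: str) -> str:
    # Placeholder for an actual AI call; tags the text with the instruction
    # so the order of operations is visible in the final string.
    return f"[{instruction}] {text}"

def sub_chain(company: str) -> str:
    research = run_step("research", company)
    summary = run_step("summarize strengths/weaknesses", research)
    return run_step("format", summary)

# Sub-chains A, B, C run independently, then one merge step synthesizes.
formatted = [sub_chain(c) for c in ["Company 1", "Company 2", "Company 3"]]
comparison = run_step("synthesize comparative analysis", "\n".join(formatted))
print(comparison)
```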
Executive Summary of API Changes
for the product leadership team and two enterprise clients' CTOs who need to understand what changed, why, and what the migration path looks like, without needing to read technical specifications. → Case Study 02: Raj's Technical Documentation Prompt — Precision Under Pressure
Executive summary problems:
The output was written as if the API was being launched for the first time, not upgraded - It contained technical terms (REST, JSON, webhook) without defining them - It did not address the migration from old to new — which is the only thing the CTOs actually care about - It ended with a vague statem → Case Study 02: Raj's Technical Documentation Prompt — Precision Under Pressure
Exercises
hands-on practice prompts and tasks - **Quiz** — self-assessment questions with hidden answers - **Case Studies** — two detailed worked examples per chapter - **Key Takeaways** — a scannable summary - **Further Reading** — curated resources for deeper exploration → Working With AI Tools Effectively
Expert characteristics:
Flexible, judgment-based AI use (do you have this?) - Efficient verification — checking what matters most, not everything (do you have this?) - Clear sense of when not to use AI (do you have this?) - Reflective habit — learning from each interaction (do you have this?) → Chapter 41 Exercises: Building Your Long-Term AI Practice

F

Fact-Checking
Snopes (snopes.com) — general fact-checking - PolitiFact (politifact.com) — political claims - FactCheck.org — political and policy claims - Full Fact (fullfact.org) — UK-focused general fact-checking → Chapter 30: Verifying AI Output — Fact-Checking Workflows
FastAPI
fastapi.tiangolo.com The Python web framework used in the case studies. FastAPI's dependency injection system, automatic documentation generation, and Pydantic integration are relevant context for the implementation examples. → Chapter 23 Further Reading: Software Development and Debugging
First-output scoring:
Used directly with minor edits: 1.0 - Good foundation, moderate editing needed: 0.7 - Useful but significant revision required: 0.4 - Didn't save time or was misleading: 0.0 → Chapter 39 Exercises: Measuring Effectiveness
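Applied across a set of tasks, the scale above yields the per-category "batting average" tracked elsewhere in the glossary's Chapter 39 entries. A minimal sketch, with invented sample data and illustrative category labels:

```python
# Hypothetical sketch: the four-point first-output scale as a lookup table,
# averaged over a week of (invented) task outcomes.

SCORES = {
    "used directly": 1.0,       # used directly with minor edits
    "good foundation": 0.7,     # moderate editing needed
    "significant revision": 0.4,
    "not useful": 0.0,          # didn't save time or was misleading
}

def batting_average(outcomes):
    return sum(SCORES[o] for o in outcomes) / len(outcomes)

week = ["used directly", "good foundation", "good foundation", "significant revision"]
print(round(batting_average(week), 2))  # 0.7
```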
For each role:
Write the full role assignment prompt (using the templates from Section 9.19 as a guide) - Run the prompt (if you have access to an AI tool) or predict what feedback each role would produce - Summarize the key feedback from each role → Chapter 9 Exercises: Instructional Prompting and Role Assignment
FTC.gov
for marketing disclosure requirements, endorsement guidelines, advertising standards - **HHS.gov** — for HIPAA if it comes up in health marketing contexts - **CAN-SPAM FAQ on FTC.gov** — for email compliance - **GDPR official text** on EUR-Lex for anything GDPR-related → Case Study 2: Alex's Verification Stack

G

Gemini in Google Sheets
workspace.google.com/products/gemini Available with Google Workspace Gemini add-on. Integrated directly into Google Sheets for natural language analysis, formula assistance, and chart generation. → Chapter 22 Further Reading: Data Analysis and Visualization
Gemini:
Strong at current information with web access - Can struggle with very long structured output consistency - Generally reliable for Google Workspace integration tasks → Chapter 13: Diagnosing and Fixing Bad Outputs
General patterns across all platforms:
All models hallucinate more on obscure topics, recent events, and specific numerical data - All models produce better output with explicit format specification than without - All models improve significantly with context — the "blank slate" problem affects all platforms equally → Chapter 13: Diagnosing and Fixing Bad Outputs
GitHub Copilot
github.com/features/copilot Integrated code completion for VS Code, JetBrains, and other editors. The primary IDE-integrated AI development tool. The GitHub Copilot study is the source of the 55% productivity finding. → Chapter 23 Further Reading: Software Development and Debugging
Goals for next quarter:
One use case to develop further - One use case to stop or significantly reduce - One new capability to explore → Chapter 41 Exercises: Building Your Long-Term AI Practice
Google Scholar
scholar.google.com Broad academic search with citation tracking. Essential for citation verification. → Chapter 21 Further Reading: Research, Synthesis, and Information Gathering
Governance
[ ] Name the policy owner - [ ] Establish the escalation path - [ ] Define the review cadence for policy updates - [ ] Create an incident process for policy violations or quality failures → Chapter 38: Deploying AI in Teams and Organizations
Government and Statistical Data
data.gov — US federal datasets - Bureau of Labor Statistics (bls.gov) — labor, employment, wages - Census Bureau (census.gov) — demographics, business data - FRED Economic Data (fred.stlouisfed.org) — economic time series - WHO Global Health Observatory (who.int/data) — health statistics - Eurostat → Chapter 30: Verifying AI Output — Fact-Checking Workflows
Grammarly
grammarly.com AI-powered proofreading and style checking. The free tier catches grammar and spelling. The premium tier adds style, tone, and clarity suggestions. Useful for the proofreading stage; less appropriate for line editing where AI voice suggestions can conflict with your own stylistic choic → Chapter 20 Further Reading: Writing and Editing with AI

H

Hemingway Editor
hemingwayapp.com Readability analysis that flags passive voice, adverb overuse, and complex sentences. Useful as a diagnostic tool for evaluating AI-generated prose and for identifying where line editing is needed. Does not generate text — it only evaluates. → Chapter 20 Further Reading: Writing and Editing with AI
Using personal data (GDPR, CCPA, HIPAA, other privacy law implications) - Using AI in hiring, lending, housing, or other contexts regulated for discrimination - Generating content for commercial publication where copyright and IP integrity matters - Using AI in contexts involving professional liabil → Chapter 34: Legal and Intellectual Property Considerations
High reliability domains:
Grammar, writing mechanics, style - Common programming languages and standard patterns - General business frameworks and concepts - Historical events well before the training cutoff - Widely documented scientific concepts - Common legal concepts and frameworks (not specific advice) → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
High reliability situations:
Standard CRUD operations in frameworks he knew well - Boilerplate configuration for common tools (Docker, linters, CI configuration) - Straightforward algorithmic implementations (sorting, filtering, transformation) - Test scaffolding for well-defined behaviors - Documentation generation from code c → Case Study 3.2: The Right Tool for the Right Job
High trust (verify only edge cases):
Standard implementations of well-established patterns - Boilerplate code generation - Simple utility functions - Code formatting and style → Case Study: Raj's Practice — Staying Human in an AI-Augmented Development World
High-reliability code tasks (Raj's Zone 1):
Boilerplate file structure for standard patterns (REST endpoints, database models) - Unit test scaffolding for simple functions - Code reformatting and style adjustments - Docstring and comment generation - Standard utility functions with well-defined, common behavior (string formatting, date calcul → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
High-signal indicators:
New capability demonstrated on diverse, practical tasks (not just one impressive example) - Capability improvement confirmed by third-party testing (not just the company's own benchmarks) - Practical adoption by real practitioners you trust, who describe their workflow with specificity - Research pa → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
human review points
moments where a practitioner reviews the intermediate output, edits it if necessary, and approves it before the chain continues. The placement of these review points is one of the most important design decisions in chain construction. → Chapter 35: Chaining AI Interactions and Multi-Step Workflows

I

Immediate (this week)
[ ] Set a recurring quarterly calendar block: "AI Practice Review" - [ ] Create or update your effectiveness journal - [ ] Identify one skill you want to maintain through independent practice regardless of AI capability - [ ] Write down your current position on the "tool, partner, or threat" questio → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
Immediately involve counsel if:
You believe you may have already violated a legal obligation through AI use (trade secret disclosure, HIPAA breach, GDPR violation) - A client or third party is raising a legal concern related to AI use - You are deploying AI in a high-risk domain for the first time at scale - Your AI use has genera → Chapter 34: Legal and Intellectual Property Considerations
inconsistent calibration
the person does not have a systematic model of when to trust AI and when not to. They may be over-trusting in some situations (perhaps when under time pressure, or when the output looks impressive) and under-trusting in others (perhaps out of general anxiety about AI, or when they are perfectionisti → Chapter 4 Quiz: Trust Calibration
Initial Setup
[ ] Create your effectiveness journal (spreadsheet with date, task type, time estimates, iteration count, quality rating, notes) - [ ] Define the quality dimensions most relevant to your work - [ ] Establish your baseline: how long do common AI-assisted tasks take without AI? → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Introductory AI Literacy Course (8 weeks):
Week 1: Chapters 1–2 (What AI is, how it thinks) - Week 2: Chapters 3–4 (Mental models, trust) - Week 3: Chapters 7–8 (Prompting basics) - Week 4: Chapters 9–10 (Advanced prompting) - Week 5: Chapters 20–21 (Writing and research workflows) - Week 6: Chapters 29–30 (Hallucinations and verification) - → Prerequisites and Assumed Knowledge
Invest in new capability when:
A capability has clear potential to address a specific, real limitation in your current practice - Multiple trusted sources (not just enthusiasts) have found it valuable in contexts similar to yours - The exploration cost is proportionate to the potential value (a 2-hour exploration for a capability → Chapter 40: How AI is Evolving: Staying Ahead of the Curve

K

Key structural differences from Anthropic:
OpenAI uses `client.chat.completions.create()` vs. Anthropic's `client.messages.create()` - OpenAI includes the system message in the messages list (as `{"role": "system", ...}`) - OpenAI returns `response.choices[0].message.content` vs. Anthropic's `response.content[0].text` - Token counts use `res → Chapter 36: Programmatic AI — APIs, Python, and Automations
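The differences above can be sketched side by side. The two call patterns are shown as comments because they require API keys, and the model names in them are illustrative; the runnable part is a small normalization helper exercised with stand-in objects that mimic each SDK's response shape:

```python
# Sketch of the structural differences between the two SDKs.
from types import SimpleNamespace

# OpenAI: system message inside the messages list.
# response = client.chat.completions.create(
#     model="gpt-4o",  # illustrative model name
#     messages=[{"role": "system", "content": "You are terse."},
#               {"role": "user", "content": "Hello"}],
# )
# text = response.choices[0].message.content

# Anthropic: system prompt as a separate parameter, max_tokens required.
# response = client.messages.create(
#     model="claude-3-5-sonnet-latest",  # illustrative model name
#     max_tokens=256,
#     system="You are terse.",
#     messages=[{"role": "user", "content": "Hello"}],
# )
# text = response.content[0].text

def extract_text(provider: str, response) -> str:
    """Normalize the two response shapes to plain text."""
    if provider == "openai":
        return response.choices[0].message.content
    if provider == "anthropic":
        return response.content[0].text
    raise ValueError(f"unknown provider: {provider}")

# Stand-in objects mimicking each SDK's response structure:
openai_resp = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="Hi"))])
anthropic_resp = SimpleNamespace(content=[SimpleNamespace(text="Hi")])

print(extract_text("openai", openai_resp))       # Hi
print(extract_text("anthropic", anthropic_resp)) # Hi
```

A helper like this is one common way to keep chain code provider-agnostic, so swapping models does not ripple through the rest of a workflow.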
Knowledge file limitations:
Total storage per GPT: 20 files, 512 MB total - Retrieval is not guaranteed: the GPT retrieves relevant sections but cannot access all knowledge simultaneously in one response - File content is not kept confidential from users who probe for it — if you upload sensitive documents, assume users can re → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems

L

Law360 AI Digest
legal news aggregation covering AI-related litigation and regulatory developments - **IAPP AI Privacy Resources (iapp.org)** — regularly updated resources on AI and privacy law from the International Association of Privacy Professionals - **AI Now Institute (ainowinstitute.org)** — research and poli → Chapter 34 Further Reading: Legal and Intellectual Property Considerations
Federal Register and CFR (ecfr.gov) — US federal regulations - Congress.gov — US legislation - EUR-Lex (eur-lex.europa.eu) — EU law - Westlaw/LexisNexis — commercial, comprehensive, subscription-required → Chapter 30: Verifying AI Output — Fact-Checking Workflows
Limitations of no-code tools:
Less flexible than custom code for complex data transformations - Dependent on the platform's available integrations - Can become expensive at scale (per-task pricing on Zapier and Make) - Debugging is harder than in code — when a chain fails, finding which step failed and why takes more investigati → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
Low reliability domains:
Statistics and numerical data - Recent events (past 1-2 years relative to training cutoff) - Niche technical topics - Specific details about less-prominent people or organizations - Emerging regulatory areas - Specialized academic literature → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Low reliability situations:
Any feature involving a library updated within the past year - Domain-specific logic that requires understanding of business rules - Architecture-level suggestions requiring system context - Security-sensitive code (authentication, authorization, input validation, cryptography) - Code that targets v → Case Study 3.2: The Right Tool for the Right Job
Low-signal indicators (often noise):
"Impressive demo" without explanation of what's generalizable - Benchmarks from a tool's own marketing materials - "This changes everything" claims without specific explanation of what changes and how - Viral examples that rely on cherry-picked best-case outputs - Predictions about timelines that do → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
Using AI for internal analysis and ideation not involving regulated data - AI-assisted drafting and editing for professional services (with adequate oversight) - AI tools for productivity improvement on tasks not involving regulated content - Research and information gathering on public-domain topic → Chapter 34: Legal and Intellectual Property Considerations

M

Making the choice:
If you need maximum ecosystem breadth and third-party integrations: **ChatGPT** - If you do a lot of long-form writing, document analysis, or nuanced professional work: **Claude** - If you are embedded in Google Workspace: **Gemini** - If you are a developer: **All three, with API access to at least → Chapter 5: Setting Up Your Personal AI Environment
Measured productivity changes:
**Boilerplate generation:** Approximately 60% faster. REST endpoint scaffolding, model definitions, test setup — tasks that previously took 30-60 minutes now took 10-20 minutes. - **Code review preparation:** Architecture and design discussions with Claude before starting implementation consistently → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Medium reliability situations:
Integration code for well-documented third-party services - Error handling patterns (correct structure, but often missing specific cases) - Refactoring existing code into cleaner patterns (good patterns, but sometimes misunderstands intent) - Optimization suggestions (often correct in direction, nee → Case Study 3.2: The Right Tool for the Right Job
Mermaid
mermaid.js.org A text-based diagramming language supported by GitHub, Notion, GitLab, and other platforms. Referenced in the architecture section for AI-generated diagram creation. → Chapter 23 Further Reading: Software Development and Debugging
Microsoft Copilot in Excel
microsoft.com/en-us/microsoft-365/copilot Available with Microsoft 365 Copilot license. Integrated into Excel for natural language queries, chart generation, and data summarization. → Chapter 22 Further Reading: Data Analysis and Visualization
Using AI for commercial content generation where copyright ownership matters - Generating AI-assisted code for commercial products (IP chain of title) - Using AI for marketing content (FTC endorsement and disclosure) - Automated decision-making affecting customers (various consumer protection implic → Chapter 34: Legal and Intellectual Property Considerations
Moderate reliability domains:
Industry-specific knowledge in major industries - Applied technical topics with rich documentation - Medical concepts at a general level - Social science research findings in mainstream areas → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Moderate-reliability code tasks (Raj's Zone 2):
Algorithm implementations for common problems - Database query optimization - API integration code for well-documented APIs - Standard error handling patterns - Refactoring for readability → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Month 1: First GPT
[ ] Identify your highest-value GPT use case - [ ] Write and test a system prompt - [ ] Build and deploy your first GPT to a real workflow → Chapter 14: Mastering ChatGPT and GPT-4
Month 2: Refinement
[ ] Update your Custom Instructions based on month 1 usage patterns - [ ] Refine your GPT based on real use - [ ] Add the role-plus-audience frame to your standard prompting repertoire → Chapter 14: Mastering ChatGPT and GPT-4
Month 3: Team and Scale
[ ] Document your three highest-value prompt patterns for sharing - [ ] Review your data privacy setup for professional use - [ ] Do a quarterly ChatGPT feature review (what is new that matters to my work?) → Chapter 14: Mastering ChatGPT and GPT-4
Monthly (approximately 90 minutes):
A 30-minute exploration session: she tries one new AI capability on a real current project - A 60-minute deeper read: one longer piece on a topic directly relevant to her current work (she finds this through her trusted sources' recommendations) → Case Study: How Alex Stays Current Without Getting Overwhelmed
Monthly Practices
[ ] Analyze time savings by task category - [ ] Calculate your AI batting average by task category - [ ] Review iteration efficiency trends - [ ] Identify your top 3 highest-leverage use cases and your bottom 3 - [ ] Run the "stop doing" analysis on low-value use cases → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles

N

News Verification
Reuters, AP, AFP — wire services with strong editorial standards - Newsguard (newsguardtech.com) — news source reliability ratings - MediaBias/FactCheck (mediabiasfactcheck.com) — source reliability assessments → Chapter 30: Verifying AI Output — Fact-Checking Workflows
Non-user
Hasn't engaged with AI tools 2. **Explorer** — Has tried AI tools occasionally, inconsistently 3. **Practitioner** — Uses AI regularly for specific tasks, getting real value 4. **Expert** — Uses AI across many tasks, gets consistently good results, has developed judgment about when to use and when n → Chapter 38 Exercises: Deploying AI in Teams and Organizations
NotebookLM
notebooklm.google.com Document-grounded AI assistant. Best for analysis within a defined document set. → Chapter 21 Further Reading: Research, Synthesis, and Information Gathering

O

One-year metrics:
Prompt library: 30 entries, all reviewed and updated in the past 90 days - Monthly measurement analysis: consistent for 12 months - Team aggregate time savings: 15+ hours/week - Her own batting average: above 0.70 on her primary use cases → Case Study: Alex's Capstone Plan — The Marketing AI Practitioner
Output standards
write for a CEO audience unless specified otherwise, use formal but accessible language, always distinguish interpretation from data. → Case Study 2: Elena's Client Research System — Claude Projects in Practice
OWASP Top 10
owasp.org/Top10 The Open Web Application Security Project's list of the ten most critical web application security risks. This is the security checklist that should inform every AI code security review. Familiarity with the Top 10 makes AI security review prompts more targeted and the resulting revi → Chapter 23 Further Reading: Software Development and Debugging

P

Pandas
pandas.pydata.org The Python data analysis library used in all code examples. Version 2.0+ changes some DataFrame behaviors from earlier versions; ensure AI-generated code is using compatible syntax for your installed version. → Chapter 22 Further Reading: Data Analysis and Visualization
Part 1 — Task Analysis:
What is the task? - Who is the intended audience for the output? - What does "excellent" look like for this task? - What are the common failure modes or output problems you want to prevent? - Are there any non-obvious constraints (things that are obvious to you but not to an outside reader)? → Chapter 7 Exercises: Prompting Fundamentals
Part 7: Staying Current and Looking Forward.
The Chapters Ahead → Part 7: Staying Current and Looking Forward
Perplexity
perplexity.ai Real-time web search with citations. Best for current events and time-sensitive research. → Chapter 21 Further Reading: Research, Synthesis, and Information Gathering
Phase 1 (Weeks 1-2): Foundation documents
Engagement scope and objectives document - Client background — 800-word summary of the firm's history, market position, and the strategic challenges prompting the engagement - Stakeholder map — names, roles, influence level, known positions on key questions - Interview guide — the questions Elena pl → Case Study 2: Elena's Client Research System — Claude Projects in Practice
Practical Notes:
For most knowledge workers with no existing toolchain: start with Claude or ChatGPT; both offer strong free tiers. - If your team is heavily invested in Google Workspace, Gemini's native integrations provide practical advantages. - For API integration, all three have mature, well-documented APIs. Se → Appendix D: Tool Comparison Quick-Reference Cards
Practical risk management for AI-generated code:
Use tools with established IP indemnification commitments where material IP exposure exists (GitHub Copilot has offered such commitments to enterprise customers) - Review AI-generated code for potential similarity to known open source projects in your domain - Maintain records of which portions of y → Chapter 34: Legal and Intellectual Property Considerations
Proactively engage counsel when:
Developing organizational AI use policies for the first time - Negotiating client contracts with AI use implications - Making significant AI deployment decisions in regulated industries - Before using AI tools with any PHI or highly sensitive personal data → Chapter 34: Legal and Intellectual Property Considerations
PubMed
pubmed.ncbi.nlm.nih.gov Authoritative database for medical and life sciences literature. Essential for citation verification in health-related research. → Chapter 21 Further Reading: Research, Synthesis, and Information Gathering
Python Security Advisories
advisories.python.org / PyPI Security Advisories The official source for Python package security advisories. Referenced in the dependency verification section. Check this resource for any AI-introduced Python dependency. → Chapter 23 Further Reading: Software Development and Debugging

Q

Quality assessment:
Error rate on client deliverables: down 23% from pre-AI baseline (she tracked this through her revision request log) - Client satisfaction scores: stable (no degradation, slight positive trend) - Internal review cycles: reduced by one round average for standard deliverables → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Quality Standards
[ ] Define what "done" means for AI-assisted work in your context - [ ] Establish verification requirements for factual claims - [ ] Define the review process for Tier 2 use cases → Chapter 38: Deploying AI in Teams and Organizations
Quarterly Practices
[ ] Calculate ROI on your AI subscriptions - [ ] Review your learning curve trend — are you still improving? - [ ] Identify new use cases to try based on measurement gaps - [ ] Update your prompt library based on what's working best → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Quarterly:
A 30-minute team AI update session built into the existing team meeting - A 90-minute personal reflection and practice update (what's changed, what should she start/stop/continue?) → Case Study: How Alex Stays Current Without Getting Overwhelmed

R

Raj (software developer, technical, precise)
Advantage: His technical background means he reads outputs critically, understands what "plausible but wrong" looks like in code, and is comfortable iterating. He has natural verification habits. - Disadvantage: His "just autocomplete" mental model leads him to use AI tools passively rather than act → Chapter 1 Quiz: What AI Tools Actually Are (and Aren't)
Raj's AI no-fly list:
Code reviews for code that will run in production safety paths (authentication, access control, data integrity) — I check these myself or with a senior peer, not AI - Architecture design decisions for systems where I need to be able to defend the design to the team — AI can help me think through opt → Chapter 32: When NOT to Use AI (and Why That Matters)
Recommended entry points by role:
**Writers/Content creators:** Start with Ch. 7 (Prompting Fundamentals), then Ch. 20 (Writing with AI) - **Developers:** Start with Ch. 17 (Copilot), then Ch. 23 (Software Development), then Ch. 36 (APIs) - **Managers/Leaders:** Start with Ch. 3 (Mental Models), then Ch. 38 (Deploying AI in Teams) - → How to Use This Book
Brand voice GPT: style guide, example posts, vocabulary lists, tone descriptions - Code assistant GPT: internal coding standards, architecture documentation, common patterns - Research assistant GPT: topic overviews, key sources, methodology guidelines - Client communication GPT: client profiles, pr → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Reflection exercises
questions to answer in writing; help consolidate learning 2. **Hands-on tasks** — things to actually do with an AI tool, with specific prompts to try 3. **Applied challenges** — more open-ended tasks connecting the chapter to your own work → How to Use This Book
Remaining problems:
The two documents were not sufficiently differentiated in depth or tone. The technical document was accurate but lacked the structure a developer needs for implementation (specifically: no authentication section, no error code table, no webhook payload examples). - The executive summary still used t → Case Study 02: Raj's Technical Documentation Prompt — Precision Under Pressure
Repair.
Repair Prompt 1: The Context Reload → Case Study: Alex's Campaign Copy Crisis — A Three-Round Repair
Required columns:
Date - Tool used - Task type - Domain - What zone you treated it as - What happened (was there an error? was verification needed?) - Calibration update (what does this tell you about your model?) → Chapter 4 Exercises: Practicing Trust Calibration
Requirements:
Target list: 50 minimum - Personalization: every email must have a specific hook (verified, not just AI-generated) - Sequence: minimum 3-touch for non-responders - Quality review: every email reviewed before sending against your authenticity checklist - Tracking: open rate, reply rate, conversation → Chapter 28 Exercises: Customer-Facing Work: Sales, Support, and Outreach
Research literature tools
covered in more depth below under Scientific Research — are used by medical researchers and clinically oriented practitioners to navigate the vast medical literature. PubMed AI features, Semantic Scholar's medical coverage, and specialized tools like AskThePaperAI are all relevant here. → Chapter 19: Specialized and Domain-Specific AI Tools
Research Synthesizer
The Summarizer adapted for multi-source synthesis: multiple documents → structured insight summary with source attribution 2. **Framework Applicator** — The Analyzer adapted for applying standard consulting frameworks (SWOT, Porter's Five Forces, McKinsey 7-S) to client situations 3. **Deliverable S → Chapter 11: Prompt Engineering Patterns for Recurring Tasks
Results from initial testing:
13 of 15 on-brand samples: correctly identified as strong or acceptable - 11 of 15 off-brand samples: correctly identified as needing revision - 7 of 10 competitor samples: correctly identified as not matching the company's voice (expected, since it should only apply company voice standards) → Case Study 1: Alex's Brand Voice Assistant — From Custom GPT to Team Resource
Review and understanding tasks:
Reviewing pull requests from colleagues - Reading unfamiliar codebases - Understanding legacy code without good documentation → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Rewrote the workforce section
expanded from 2 paragraphs to a full section including a detailed hiring analysis, a vacancy rate comparison, and a recommendation to begin workforce partnerships with nursing schools before any capital deployment. → Case Study 02: Elena's Devil's Advocate — How Role Assignment Saved a Flawed Report
ROI calculation:
8.5 hours/week × average team member value ($45/hour) × 26 weeks = $9,945 in time value recovered - Total investment: $1,200 subscriptions + estimated $2,400 management time = $3,600 - ROI: $9,945 / $3,600 = 2.76x → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
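The arithmetic above can be verified in a few lines, using exactly the figures given (no assumptions beyond them):

```python
# Sanity check of the ROI calculation above.
time_value = 8.5 * 45 * 26   # hours/week * $/hour * weeks recovered
investment = 1200 + 2400     # subscriptions + estimated management time
roi = time_value / investment
print(f"${time_value:,.0f} recovered, ROI {roi:.2f}x")  # $9,945 recovered, ROI 2.76x
```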
Role stacking
combining multiple roles simultaneously or sequentially — is effective for tasks that genuinely benefit from multiple perspectives. Practical limit: more than two or three roles produces superficial coverage of each perspective. → Chapter 9 Key Takeaways: Instructional Prompting and Role Assignment
Rules:
You may use AI for all generation tasks - You must review and edit every output — nothing goes into the final plan unreviewed - Track your time: how much on prompting vs. reviewing vs. editing? - At the end, rate the quality of the plan on a scale of 1-10 compared to plans you'd produce without AI a → Chapter 24 Exercises: Project Planning and Task Management

S

Scoring:
If your score on items 1-5 averages above 3: You are likely over-trusting AI. Focus on building a verification habit for Zone 3 tasks. - If your score on items 6-10 averages above 3: You are likely under-trusting AI. Focus on identifying Zone 1 tasks you can use more confidently. - If both scores ar → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Search Engine Journal
industry publications that follow platform changes closely and whose editorial standards she trusts - **Platform official announcements** — she follows official product blogs for the major platforms she covers → Case Study 2: Alex's Verification Stack
Section 1: Where I Am Now
Six-dimension scores with brief evidence - Strengths and gaps summary - The one most important improvement → Chapter 42 Capstone Exercises: Your Personal AI Mastery Plan
Section 2: The 30-Day Sprint
Goal - One habit - Quick win → Chapter 42 Capstone Exercises: Your Personal AI Mastery Plan
Section 3: The 90-Day Plan
Primary skill investment and goal - New capability to explore - Learning investment → Chapter 42 Capstone Exercises: Your Personal AI Mastery Plan
Section 4: The One-Year Vision
Written vision (400 words) - Three metrics - Growth path and what progress looks like → Chapter 42 Capstone Exercises: Your Personal AI Mastery Plan
Section 5: The Three Commitments
Start - Stop - Improve → Chapter 42 Capstone Exercises: Your Personal AI Mastery Plan
Section 6: Review Schedule
When you'll review this plan (monthly? quarterly?) - What you'll track - How you'll know if you're on track → Chapter 42 Capstone Exercises: Your Personal AI Mastery Plan
Semantic Scholar
semanticscholar.org Academic search engine covering 200M+ papers. Best for citation discovery and literature mapping. → Chapter 21 Further Reading: Research, Synthesis, and Information Gathering
Signs you're outsourcing rather than supporting:
You're framing the prompt as "What should I do?" rather than "Help me think through X" - You're accepting the AI's recommendation without being able to articulate why you agree - You're feeling relieved rather than clearer after reading the AI output - You're not engaging with the AI's analysis — ju → Chapter 25: Decision Support, Analysis, and Strategic Thinking
Skip a gate if:
The step is purely mechanical (reformatting, counting, sorting) - An error in this step would be immediately visible in the next step's output - The chain loops iteratively and will self-correct - Speed is more important than quality for this specific workflow → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
Social Content AI
AI-assisted social media content generation 2. **Email Personalization Platform** — AI email subject line and copy optimization 3. **Brand Voice Content Generator** — Long-form content generation with brand voice training 4. **Ad Copy Generator** — Short-form ad copy at scale 5. **Market Intelligenc → Case Study: Alex's MarTech Stack Audit — Evaluating 6 Marketing AI Tools in One Week
Source Selection
[ ] Identify one domain-specific AI newsletter or community to follow consistently - [ ] Identify one first-party source from an AI lab whose tools you use - [ ] Identify one critical voice — someone who thinks clearly about AI's limitations → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
SQLAlchemy 2.0
sqlalchemy.org The Python ORM used in the case studies. Understanding SQLAlchemy's session management, eager vs. lazy loading, and `expire_on_commit` behavior is relevant to the memory leak case study. → Chapter 23 Further Reading: Software Development and Debugging
Start over when:
**You have given contradictory instructions across multiple turns** and the conversation context has become confused. The AI is now working with an accumulated set of constraints that may be internally inconsistent. A clean start with a better first prompt is more efficient than trying to reconcile → Chapter 6: The Iteration Mindset — Working in Loops, Not Lines
Step 1: Individual item classification
Classify each of the 497 items using a structured prompt with claude-haiku (fast and economical for this mechanical task). → Case Study 1: Raj's Document Processor — Batch Analysis with the API
Step 2: Human validation sample
Present the customer success director with a random sample of 50 classified items and ask her to mark any misclassifications. This step was manual by design. → Case Study 1: Raj's Document Processor — Batch Analysis with the API
Step 3: Recalibration (if needed)
If the validation sample showed systematic errors in specific categories, revise the classification prompt and rerun those items. → Case Study 1: Raj's Document Processor — Batch Analysis with the API
Step 4: Synthesis
With validated classifications, run the frequency analysis and generate the synthesis narrative using claude-opus-4-6. → Case Study 1: Raj's Document Processor — Batch Analysis with the API
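The four steps above can be sketched as a small pipeline skeleton. Note that `classify` here is a hypothetical stand-in for the model call (the case study sends a structured prompt to a fast, economical model); only the shape of the pipeline — classify, sample for human validation, then synthesize — is the point:

```python
import random

CATEGORIES = ["billing", "onboarding", "performance", "feature request", "other"]

def classify(item: str) -> str:
    """Step 1 stand-in: in the case study this is a structured prompt
    to a fast model. Here, a trivial keyword heuristic for illustration."""
    for category in CATEGORIES:
        if category.split()[0] in item.lower():
            return category
    return "other"

def validation_sample(classified: dict[str, str], k: int = 50) -> list[tuple[str, str]]:
    """Step 2: draw a random sample of classified items for human review."""
    items = list(classified.items())
    return random.sample(items, min(k, len(items)))

# Steps 3-4 (recalibration and synthesis) would rerun classify with a
# revised prompt, then aggregate frequencies for the synthesis narrative.
items = ["Billing page is confusing", "Great onboarding flow", "App crashed today"]
classified = {item: classify(item) for item in items}
sample = validation_sample(classified, k=2)
print(classified["Billing page is confusing"])  # → billing
```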

T

Technical API Reference
for developers integrating the API. Needs endpoint specifications, request/response schemas, error code definitions, authentication details, and code examples. → Case Study 02: Raj's Technical Documentation Prompt — Precision Under Pressure
Technical output problems:
The AI generated generic REST API documentation with placeholder endpoints like `/api/payments` — not the actual endpoint paths - Code examples were in Python, but Raj's team uses Node.js - The authentication section described bearer tokens generically without reference to the company's specific API → Case Study 02: Raj's Technical Documentation Prompt — Precision Under Pressure
The "false positive" problem
AI flagging intentional decisions as mistakes — is one of the most trust-destroying failures in AI-assisted technical and expert work. Context packets should explicitly document intentional decisions that look like mistakes from the outside. → Chapter 8 Key Takeaways: Context Is Everything
The "one general plus one specialized" strategy
maintaining a general-purpose AI for broad tasks and one carefully chosen specialized tool for your highest-volume distinctive professional need — manages proliferation while capturing most of the specialization advantage. → Key Takeaways: Chapter 19 — Specialized and Domain-Specific AI Tools
The Markup (themarkup.org)
investigative journalism organization that publishes technically rigorous investigations of AI systems and data practices, including verification-relevant research. → Chapter 30 Further Reading: Verifying AI Output — Fact-Checking Workflows
The Noise Filter
[ ] Write your personal filter criteria: what types of AI coverage you will and won't spend time on - [ ] Review your current information sources and eliminate those that fail the filter → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
The one habit I'm establishing:
What it is: - When I'll do it: - How long it takes: - How I'll track it: → Chapter 42 Capstone Exercises: Your Personal AI Mastery Plan
The original research firm's press release page
when a report is cited, research firms often publish press releases summarizing key findings. These are free and usually contain the headline numbers. - **Search "[firm name] [report topic] [year] press release"** — this gets her to the authoritative source in many cases → Case Study 2: Alex's Verification Stack
The perspective shift technique
cycling through multiple stakeholder perspectives and synthesizing the common ground and tensions — is particularly valuable for decisions that affect multiple groups with different interests. → Chapter 9 Key Takeaways: Instructional Prompting and Role Assignment
The Testing Protocol
[ ] Define your standard evaluation battery for new tools in your domain (like Raj's six-task battery for coding) - [ ] Commit to running your battery on any new tool that seems relevant before forming a strong opinion about it → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
This Month
[ ] Run your first monthly prompt retrospective - [ ] Identify one capability from the "developing" list to actively experiment with - [ ] Have a conversation with at least one colleague about your AI practice — what you're learning, what you're questioning → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
This Quarter
[ ] Complete your first quarterly AI practice review using the framework above - [ ] Set three concrete goals for your AI practice next quarter - [ ] Update your prompt library based on what you've learned → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
This Year
[ ] Conduct all four quarterly reviews - [ ] Assess your position on the beginner-to-integrated arc at year start and year end - [ ] Write a brief reflection: what has AI changed about how you work, and is that change in the direction you want? → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
Three-month results:
Reduction in "pattern violation" comments in human code review: approximately 40% (engineers caught their own pattern violations in AI pre-review) - Reduction in TypeScript annotation issues caught in human review: approximately 60% - False positive rate from AI review: approximately 5% (compared to → Case Study 02: Raj's Codebase Context — Making AI a Useful Code Reviewer
Tier 1 (Weekly, ~45 minutes):
Source 1: [Name, why you chose it, how long it typically takes] - Source 2: [Name, why you chose it] - Any other weekly touchpoints → Chapter 40 Exercises: Staying Ahead of the Curve
Tier 2 (Monthly, ~90 minutes):
What kind of deeper reading will you do? - How will you find it? → Chapter 40 Exercises: Staying Ahead of the Curve
Tier 3 (Quarterly, ~2-3 hours):
What does your quarterly capability exploration session look like? - When specifically will you schedule it? → Chapter 40 Exercises: Staying Ahead of the Curve
Time Allocation
[ ] Set a weekly standing time for catching up on AI developments (30-45 minutes maximum) - [ ] Schedule a monthly "capability exploration" session (60-90 minutes) - [ ] Schedule a quarterly "testing and reflection" session (2-3 hours) → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
Time allocation suggestion:
0:00-0:30 — Narrative development (SCQA, audience-centered outline) - 0:30-1:30 — Slide content generation (titles, bullets/visuals, speaker notes) - 1:30-2:00 — Visual strategy (chart recommendations, image prompts, visual mockups) - 2:00-3:00 — Editing and refinement (so what audit, title assertio → Chapter 26 Exercises: Presentations, Slides, and Visual Communication
Time savings calculation:
Content creation: 4 hours/week saved (team aggregate) - Research and analysis: 2 hours/week saved - Email and communication drafting: 1.5 hours/week saved - Template and format work: 1 hour/week saved - Total: 8.5 hours/week, team aggregate → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Tool-specific configuration:
**ChatGPT:** Custom instructions available in Settings > Personalization. Also consider creating Custom GPTs for specific recurring task types. - **Claude:** Custom instructions available in Settings, and Projects allow separate custom contexts for different ongoing work areas. - **Gemini:** Workspa → Chapter 5: Setting Up Your Personal AI Environment
Total time investment comparison:
Prompt 1: 30 seconds to write, 52 minutes to edit - Prompt 2: 10 minutes to write, 28 minutes to edit - Prompt 3: 12 minutes to write, 4 minutes to edit → Case Study 01: From Generic to Remarkable — Alex's Blog Post Transformation
Training and Support
[ ] Plan foundational orientation - [ ] Establish shared resource infrastructure (prompt library, discussion channel) - [ ] Define ongoing learning cadence → Chapter 38: Deploying AI in Teams and Organizations

U

Unexpected value adds:
**Onboarding unfamiliar codebases:** When he joined a project on a codebase he had not worked in before, using Claude to explain the structure and logic of key modules significantly accelerated his ramp-up time. - **Technical writing:** He started using Claude for technical specification writing and → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Use Case Inventory
[ ] List the 10 most common tasks your team performs - [ ] For each, assess: AI value potential (high/medium/low/none), information sensitivity (low/medium/high), quality stakes (low/medium/high) - [ ] Draft your three-tier taxonomy based on this assessment → Chapter 38: Deploying AI in Teams and Organizations
Use Claude Projects when:
The configured context is for your own ongoing use, not for sharing with others - You are working on a body of work that evolves over weeks or months - You want to maintain conversation history across sessions - Your context involves documents you want to update and maintain → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Use CoT when:
The task requires multiple sequential reasoning steps - Accuracy is more important than response speed - You want to verify the reasoning, not just the conclusion - The problem involves math, logic, planning, or diagnosis - The model frequently makes errors on this task type without CoT → Chapter 10: Advanced Prompting Techniques
Use Custom GPTs when:
You want to share the configured tool with others - The tool should present as a standalone, clearly named product - You need external API integrations (Actions) - The use case is recurring and benefits from a polished user experience → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Use GPT Builder (no-code) when:
Non-technical team members will create or maintain the configured system - You want a fast setup with the visual interface - The assistant does not require complex integration with external systems - You need to share the assistant publicly or via the GPT store → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Use the Assistants API (code) when:
The assistant will be embedded in your own application - You need programmatic control over thread creation and management - You need to integrate the assistant with your own backend systems - You are building a product or service that uses AI assistants as infrastructure → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems

V

Version C
replace one good example with a deliberately atypical or edge-case example. → Chapter 10 Exercises: Advanced Prompting Techniques
Very low reliability domains:
Cutting-edge research not yet widely published - Local or regional information - Non-English sources and non-Western topics (for English-centric models) - Highly specialized technical topics with limited training data → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong

W

Week 1: Foundation
[ ] Set up Custom Instructions with your full professional context - [ ] Review and configure Memory settings - [ ] Practice the "before you answer" technique on your next complex task → Chapter 14: Mastering ChatGPT and GPT-4
Week 2-3: Feature Exploration
[ ] Try Advanced Data Analysis on a real dataset from your work - [ ] Try DALL·E for one visual creation task - [ ] Explore the GPTs marketplace for your domain → Chapter 14: Mastering ChatGPT and GPT-4
Weekly (approximately 45 minutes total):
One marketing-specific AI newsletter (20 minutes) - One productivity/measurement newsletter (15 minutes) - A scan of her team's AI channel (10 minutes) → Case Study: How Alex Stays Current Without Getting Overwhelmed
Weekly Practices
[ ] Complete a journal entry for each significant AI interaction - [ ] Calculate weekly time savings aggregate - [ ] Note any interactions with unusually high iteration counts → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
What AI contributed:
The 9-slide structure: organizing by decision rather than analysis is an obvious idea in retrospect, but she wouldn't have arrived at it quickly alone - The defensive executive framing: "structural pattern, not leadership failure" is a reframe she knew intuitively but couldn't have articulated witho → Case Study 26-2: Elena's Executive Translation — Making Complex Analysis Simple
What AI does not do well:
**Estimating effort for novel work.** AI can suggest task durations based on averages from its training data, but it has no idea how fast your team works, how complex your specific technical environment is, or what organizational frictions will slow things down. Duration estimates from AI are plausi → Chapter 24: Project Planning and Task Management
What AI does well in project planning:
**Generating structure from chaos.** When you have a fuzzy project concept, AI is excellent at suggesting organizational frameworks, decomposing vague goals into specific tasks, and proposing logical sequences. - **Surfacing what you haven't thought of.** AI has been trained on enormous amounts of p → Chapter 24: Project Planning and Task Management
What Alex did not have:
Campaign concept or creative territory - Key message or brand positioning for the product - Tagline or campaign idea - Channel strategy - KPIs for the launch phase → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
What Alex had:
Product: Premium hydration system (bottle + filter) - Target audience: Active day hikers, 28-45, values gear quality over price, experienced outdoors, approximately $80-120 gear purchase range - Price: $89 - Timeline: Campaign needs to run for 8 weeks around product launch → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
What changed?
Have any AI capabilities changed significantly? - Have your verification habits changed? - Has your trust calibration shifted? → Chapter 41 Exercises: Building Your Long-Term AI Practice
What did not work:
**Over-relying on Claude for debugging:** Raj initially tried pasting error tracebacks into Claude expecting resolution. For complex, context-dependent debugging, Claude could suggest hypotheses but rarely resolved the issue without much more back-and-forth context sharing than it was worth. He stil → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
What didn't work?
List 2-3 AI use cases or approaches that didn't deliver expected value - Why didn't they work? - What will you do differently? → Chapter 41 Exercises: Building Your Long-Term AI Practice
What Elena adds:
Opening paragraph: adds a specific reference to "the conversation on Wednesday" and thanks them for their candor — AI had no way to write this - Section 1: adds the phrase "architecture firm growing through acquisition" to describe their situation more precisely — she'd noted this detail but AI had → Case Study 27-1: Elena's Proposal Machine — From Discovery Call to Proposal in 3 Hours
What Elena brought:
The $2.4M productivity calculation: this required her own research and anchored the entire business case - The "investment targets" language (vs. AI's "focus areas"): a subtle but important shift - The decision to make departments explicit rather than abstract: judgment about what the executives in → Case Study 26-2: Elena's Executive Translation — Making Complex Analysis Simple
What goes on each axis
**Any grouping or color coding** - **The title you want** → Chapter 22: Data Analysis and Visualization
What improved:
The collection name and concept are present - Individual candle names appear - There is an attempt at storytelling ("tell a story") - The structure is better — there are headers for each candle - The output is specifically about Lumier Home, not a generic brand → Case Study 01: From Generic to Remarkable — Alex's Blog Post Transformation
What she added in month 3:
A **Jasper.ai trial** for a high-volume content project (product description writing for 150 product pages). The templates were helpful but the quality ceiling was lower than Claude for premium content. She used Jasper for volume, Claude for quality. - A **Canva AI** workflow for quick social graphi → Case Study 5.1: Alex's AI Stack — Building a Marketing Powerhouse
What she cut completely:
General AI news from non-specialist sources - "Will AI replace marketing?" and similar thinkpiece content - Viral AI demos without practical context - AI company valuation and funding news - Podcast content she was consuming out of FOMO rather than genuine interest → Case Study: How Alex Stays Current Without Getting Overwhelmed
What success looks like at day 30:
### Exercise 5: The 90-Day Development Plan → Chapter 42 Capstone Exercises: Your Personal AI Mastery Plan
What the repair prompts did right:
Repair 1 was a context reload with the full few-shot reference. It didn't try to describe the voice — it demonstrated it. - Repair 2 was surgical: "Keep everything else exactly as is" prevented the model from changing the elements that were now correct. - The specificity of "The bamboo cutting board → Case Study: Alex's Campaign Copy Crisis — A Three-Round Repair
What these tools do well:
Trigger chains based on external events (a new email arrives, a form is submitted, a file appears in a folder) - Pass data between steps automatically - Integrate AI steps with non-AI steps (store output to a spreadsheet, send an email, create a record in a database) - Run chains on a schedule - Han → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
What this model suggests:
Provide explicit context before making requests - Specify the audience, purpose, constraints, and history relevant to the task - Review output with the recognition that the intern may have made reasonable errors based on missing context - Give specific, direct feedback when output needs to change → Chapter 3: The Right Mental Models for AI Collaboration
What to save:
**Effective prompts.** When a prompt works well — produces the quality of output you needed with minimal iteration — save it. You will want that prompt again. - **Outputs you refer back to.** Not every AI output is worth keeping, but those that inform ongoing work (research summaries, analysis frame → Chapter 5: Setting Up Your Personal AI Environment
What worked?
List 3-5 AI use cases or approaches that generated real value - What made them work? - Have you captured them in your prompt library? → Chapter 41 Exercises: Building Your Long-Term AI Practice
What you can do at Tier 1:
Upload a CSV, Excel file, or Google Sheet and ask for a basic exploration - Request specific statistics: average, median, correlation, trend over time - Generate visualizations by describing what you want to see - Ask interpretive questions: "What are the most important patterns in this data?" - Get → Chapter 22: Data Analysis and Visualization
What you can do at Tier 2:
Ask AI to write data loading and cleaning code - Request specific analyses with code - Iterate on visualizations through code - Debug analysis errors with AI assistance - Use AI to write code for analyses you know conceptually but could not implement quickly → Chapter 22: Data Analysis and Visualization
Where human judgment mattered most:
Round 1: Selecting the territories worth exploring (Alex's call) - Round 3: Recognizing that the tagline needed to be more specific (Alex's evaluation) - Round 4: Choosing "The pack doesn't lie" from the alternatives (Alex's decision) - Round 5: Identifying channel strategy and KPIs as the weak sect → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
Why would a non-developer want API access?
**Automation tools.** Platforms like Zapier, Make (formerly Integromat), and n8n can use AI APIs to build automated workflows without writing code. You might set up a workflow that automatically summarizes incoming emails, generates a first-draft response, or routes content through an AI filter. - * → Chapter 5: Setting Up Your Personal AI Environment
With CRAFT:
**Context:** We are a sustainable packaging startup. Our new product is a compostable shipping envelope that replaces plastic poly mailers. Our audience is e-commerce founders and sustainability-minded operations managers. - **Role:** You are a LinkedIn content strategist specializing in purpose-dri → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
With the role review:
90-minute investment in role assignment sessions - 6-hour revision day - Report delivers clean, all findings addressed, client confidence high - Engagement closes on schedule, strong reference client secured → Case Study 02: Elena's Devil's Advocate — How Role Assignment Saved a Flawed Report
Without the role review:
Report delivered as originally drafted - CFO catches the WACC error in the presentation room - COO raises the Epic conflict — Elena has no response prepared - Board chair asks the competitive advantage question — Elena has no compelling answer - Presentation deferred, engagement extended by 4-6 weeks → Case Study 02: Elena's Devil's Advocate — How Role Assignment Saved a Flawed Report
Workshop Format (1 day):
Morning: Chapters 3, 7, 10 (Mental models + prompting) - Afternoon: Chapters 29, 33 + Appendix D (Verification, ethics, tool cards) → Prerequisites and Assumed Knowledge

Y

Your noise filter:
List five types of AI coverage you will consciously skip → Chapter 40 Exercises: Staying Ahead of the Curve
Your tasks:
Name the specific failure - Identify the root cause(s) - Write a complete repair prompt using the appropriate template - Write the prevention: what would the original prompt have needed to produce a useful response? → Chapter 13 Exercises: Diagnosing and Fixing Bad Outputs

Z

Zone 1
Reformatting is a structural transformation task with no factual claims at risk. Errors are immediately visible. Use directly with light review. → Chapter 4 Quiz: Trust Calibration
Zone 1 tasks include:
**Formatting and restructuring.** Asking AI to convert bullet points to prose, reorganize a document's structure, apply consistent formatting, or reformat data is highly reliable. There is no factual claim to be wrong about — just structural transformation. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Zone 2 tasks include:
**Factual claims in your professional domain.** If you are a marketing professional asking about marketing concepts, or a developer asking about common programming patterns, you have the background to spot most errors. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Zone 3
Specific statistics and recent research citations. High risk of hallucinated or outdated statistics. Requires independent verification from primary sources before use. → Chapter 4 Quiz: Trust Calibration
Zone 3 tasks include:
**Recent events and current information.** AI tools have training data cutoff dates. Events after the cutoff simply do not exist in their knowledge. But more dangerously, events slightly before the cutoff may be represented incompletely or inaccurately. If you need current information, verify with c → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Zone 4
Medical guidance for a specific clinical situation. Errors can cause serious harm and AI tools are unreliable for specific medical guidance. Requires expert (physician) oversight before acting on it. AI can be used to learn about the topic and prepare better questions for the doctor, but not to act → Chapter 4 Quiz: Trust Calibration
Zone 4 tasks include:
**Medical guidance for specific clinical situations.** AI can explain how diseases work, describe typical treatment protocols in general terms, and help someone understand medical information — but applying that to a specific patient's situation is Zone 4. The risk of an error is high and the conseq → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Zone 5 tasks include:
**Legal documents you will sign.** Contracts, wills, non-disclosure agreements, filings — these need to be reviewed and authored by qualified legal professionals. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong