Weak: "Summarize the meeting" - Strong: "Summarize this meeting transcript in three sections: Key decisions made (numbered list), Action items with owner and due date (table), and Open questions that still need resolution (bullets). Max 250 words total." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
2. Performance review feedback
Weak: "Help me write feedback for my employee" - Strong: "Draft constructive feedback for a junior analyst who consistently meets deadlines and produces accurate work but struggles to communicate proactively when blockers arise. The feedback should acknowledge the strength, describe the specific pat → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
3. Code explanation
Weak: "Explain this code" - Strong: "Explain this Python function to a junior developer who understands basic syntax but has not worked with decorators before. Use a real-world analogy for what the decorator does, then walk through the code line by line. Max 200 words." → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
4. Cover letter
Weak: "Write a cover letter for this job" - Strong: "Write a cover letter for a senior product manager applying to [Company] for [Role]. The applicant has 7 years of B2B SaaS experience, led the 0-to-1 launch of a data analytics feature, and has a background in user research. Tone: confident and spe → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
5. Sales email
Weak: "Write an email to a potential customer" - Strong: "Write a cold outreach email to a VP of Operations at a mid-sized logistics company. We sell fleet management software. Pain point: their current system requires manual mileage reporting. Our differentiator: automated real-time GPS logging wit → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
6. Slide content
Weak: "Give me content for my presentation slide" - Strong: "Write the content for one PowerPoint slide on the business case for investing in employee mental health programs. Format: a header, three bullet points of no more than 12 words each, and one supporting statistic. Audience: executive leader → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
7. Policy document
Weak: "Write a remote work policy" - Strong: "Draft a remote work policy for a 50-person technology company. Sections: eligibility criteria, required availability hours (core hours 10am-3pm local time), equipment stipend ($500/year, receipts required), in-person requirements (one team week per quart → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
8. Social media post
Weak: "Write a post for LinkedIn" - Strong: "Write a LinkedIn post from the CEO's perspective announcing that we just reached 100 enterprise customers. Tone: genuinely grateful, not performatively humble. Mention the team, not just the milestone. 150 words, no hashtags, end with one forward-looking → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
Controls variation between generated images. Low chaos (0-10) produces four similar variations. High chaos (50-100) produces four very different interpretations. Use high chaos early in exploration, low chaos when you have found a direction you want to refine. → Chapter 18: Image Generation — Midjourney, DALL·E, and Stable Diffusion
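As a concrete sketch, the chaos setting is passed with Midjourney's `--chaos` flag (valid values 0–100); the lighthouse subject here is an invented example, not from the chapter:

```
/imagine prompt: a lighthouse at dawn, watercolor style --chaos 80   (exploration: four very different takes)
/imagine prompt: a lighthouse at dawn, watercolor style --chaos 5    (refinement: four close variations)
```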
saved, ready-to-use prompts for your most common review and evaluation needs — is among the highest-leverage prompt infrastructure investments you can make. Build it once; use it for every high-stakes piece of work. → Chapter 9 Key Takeaways: Instructional Prompting and Role Assignment
Academic Literature
Google Scholar (scholar.google.com) — broad academic coverage, free - PubMed (pubmed.ncbi.nlm.nih.gov) — biomedical and life sciences, free - arXiv (arxiv.org) — physics, math, CS, economics pre-prints, free - SSRN (ssrn.com) — social science and economics, largely free - ACL Anthology (aclanthology → Chapter 30: Verifying AI Output — Fact-Checking Workflows
Action Checklist: Becoming a ChatGPT Power User
[ ] Set up Custom Instructions with detailed context about your role and preferences - [ ] Enable or review Memory settings - [ ] Try Advanced Data Analysis with a real dataset from your work - [ ] Build one GPT for a task you do repeatedly - [ ] Identify one regular task currently taking 2+ hours t → Chapter 14: Mastering ChatGPT and GPT-4
Action Checklist: Getting the Most from Gemini
[ ] Verify your organization uses Google Workspace at a tier that includes Gemini features - [ ] Enable Gemini in Gmail and try "Help me write" for your next email draft - [ ] Explore the Gemini side panel in Google Docs on a current document - [ ] Use formula assistance in Sheets for your next comp → Chapter 16: Google Gemini and the Workspace Integration
Action Checklist: Starting Your First Chain
[ ] Identify a recurring complex task with three or more distinct phases - [ ] Map the expert process: what steps would a skilled human take? - [ ] Write a chain specification document with input/output for each step - [ ] Identify which steps need human review gates - [ ] Build a two-step version a → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
Add a gate if:
The next step uses the current output as a foundation for significant work (if step 3's output shapes steps 4 through 8, a gate after step 3 is worth the slowdown) - The current step makes interpretive or judgment-based decisions (not just mechanical processing) - An error in the current step would → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
Advanced Practitioner Course (12 weeks):
Full Parts 1–3 (Weeks 1–4) - Selected chapters from Part 4 based on cohort (Weeks 5–7) - All of Part 5 (Weeks 8–9) - All of Part 6 (Weeks 10–11) - Part 7 + Capstone project (Week 12) → Prerequisites and Assumed Knowledge
After running all five:
Where do the frameworks agree in their conclusions? - Where do they conflict? How do you resolve the conflict? - Which framework was most useful for this particular decision? Why? - Is there a decision type where this framework would be the first one to reach for? → Chapter 25 Exercises: Decision Support, Analysis, and Strategic Thinking
CPO responds to status reports most weeks with a question or acknowledgment - COO has specifically requested earlier notification on maintenance windows - Raj has been included in two product roadmap discussions that previously wouldn't have included engineering → Case Study 27-2: Raj's Stakeholder Translation — Making Technical Reports Readable
AI does not help in these specific ways:
**It doesn't know your full context.** The most important factors in most real decisions — your organization's specific strategy, your team's capabilities, your relationships, your personal values and risk tolerance — are not in the model. AI decisions made without this context are decisions made ab → Chapter 25: Decision Support, Analysis, and Strategic Thinking
AI helps with complex decisions because:
**It doesn't share your biases.** When you're already leaning toward an option, AI hasn't invested emotionally in any outcome. It can generate equally strong arguments for alternatives without the reluctance you'd feel doing it yourself. - **It has broad exposure.** AI has been trained on enormous a → Chapter 25: Decision Support, Analysis, and Strategic Thinking
High: Repetitive, text-heavy, has clear inputs and outputs, or involves synthesis of information - Medium: Has some variable elements but follows a recognizable pattern; AI assists, human finalizes - Low: Highly contextual, requires deep personal relationships, involves real-time physical judgment, → Appendix C: Workflow Templates & Worksheets
Alex
a marketing manager who uses AI daily but finds herself wrestling with quality and trust - **Raj** — a software developer who integrates AI tools into his engineering workflow - **Elena** — a freelance consultant who depends on AI for speed, but guards her professional reputation carefully → Working With AI Tools Effectively
Alex (marketing manager, non-technical, creative)
Advantage: Reasonable expectations and a practical orientation — she is not looking to be impressed, just helped. She is open to experimenting. - Disadvantage: She initially treats the tool like a search engine — expecting it to surface correct, relevant, specific information without being given tha → Chapter 1 Quiz: What AI Tools Actually Are (and Aren't)
Alex's AI no-fly list:
Condolences and personal expressions of grief to people I know - Content that is explicitly meant to represent my voice and perspective to my audience (certain newsletter sections, my professional positioning statements) - Fundamental analysis and interpretation of data my clients paid me to analyze → Chapter 32: When NOT to Use AI (and Why That Matters)
Alex's evaluation (section by section):
Campaign objective: Good. - Target audience: Demographic solid, psychographic uses the insight language well. - Market context: Reasonable, though the specific competitor details it generated need verification before going into a final client document. - Key insight: Excellent — this came directly f → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
Analysis questions for each:
Which components are present? - Which are missing? - What assumption does the AI likely make to fill each gap? - How would the output quality differ if the missing components were added? → Chapter 7 Exercises: Prompting Fundamentals
Answer framework:
Ask for: SDK version, language, code snippet showing how they initialize the client - Common causes: expired key, wrong key format, missing ANTHROPIC_API_KEY env variable - First response: ask for the three items above before attempting to diagnose - Known issue (as of Q3 2024): Python SDK versions → Case Study 2: Raj's Email Assistant — A Custom Triage and Draft Bot
Greenhouse, Lever, and iCIMS have integrated AI for resume screening, candidate ranking, and interview scheduling. These features require careful evaluation: the research record on algorithmic hiring tools is mixed, with documented cases of bias against protected groups. → Chapter 19: Specialized and Domain-Specific AI Tools
[ ] Define your default disclosure position for client-facing work - [ ] Address industry-specific disclosure requirements - [ ] Establish internal documentation expectations for AI-assisted work → Chapter 38: Deploying AI in Teams and Organizations
Automatic L3 (no AI draft, immediate human):
Customer used words indicating strong emotional distress (anger, despair, "I've had enough") - Ticket mentions legal action, regulatory filing, or formal complaint - Customer has contacted about the same issue 3+ times in 30 days - Enterprise customer (top 20% by ARR) with any complaint - Upcoming r → Case Study 28-2: The Support Team's AI Upgrade — From Backlog to Same-Day Response
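The automatic-L3 rules above can be sketched as a simple routing function. Everything here is illustrative: the ticket field names, keyword lists, and thresholds are assumptions standing in for the support team's actual schema, not code from the case study.

```python
# Sketch of the automatic-L3 routing rules (hypothetical field names).
DISTRESS_WORDS = {"anger", "despair", "i've had enough"}
LEGAL_WORDS = {"legal action", "regulatory", "formal complaint"}

def needs_immediate_human(ticket: dict) -> bool:
    """Return True if the ticket should skip AI drafting entirely (L3)."""
    text = ticket["text"].lower()
    if any(word in text for word in DISTRESS_WORDS | LEGAL_WORDS):
        return True                                  # emotional distress or legal language
    if ticket.get("contacts_last_30_days", 0) >= 3:
        return True                                  # 3+ contacts on the same issue
    if ticket.get("is_top_20pct_arr") and ticket.get("is_complaint"):
        return True                                  # enterprise customer with any complaint
    return False
```

A routing function like this runs before any AI draft is generated, so the riskiest tickets never receive a machine-written first response.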
Long, vague prompts (do you still do this?) - Accepting first outputs without adequate review (do you still do this?) - Giving up when first attempts fail (do you still do this?) - Using AI for tasks where it doesn't help (do you still do this?) → Chapter 41 Exercises: Building Your Long-Term AI Practice
Best practices:
Upload PDFs for most use cases — the convenience and format preservation outweigh the minor accuracy risk for typical documents - Paste text for scanned documents (PDF OCR can be unreliable for complex layouts), for sensitive content where you want to control exactly what is sent, or when you only n → Chapter 12: Multimodal Prompting: Working with Images, Documents, and Data
Brand Copy Writer
The Generator adapted for product descriptions, using her five-example few-shot reference library (see Chapter 10 Case Study) 2. **Campaign Analyzer** — The Analyzer adapted for evaluating campaign performance data against brand KPIs 3. **Competitor Watcher** — The Extractor adapted for pulling key → Chapter 11: Prompt Engineering Patterns for Recurring Tasks
C
Chain-of-Thought (CoT) Prompting
getting the model to reason through problems step by step before answering, which dramatically improves accuracy on anything involving multiple logical steps 2. **Few-Shot Prompting** — providing worked examples inside the prompt to teach the model your specific standards, format, and style 3. **Sel → Chapter 10: Advanced Prompting Techniques
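A minimal illustration of the Chain-of-Thought addition, in the book's own weak/strong style (the vendor task wording is invented for the example):

```
Direct: "Which vendor proposal is cheaper over three years?"
CoT:    "Which vendor proposal is cheaper over three years? Reason step by
         step: list each proposal's annual costs, sum each total, compare
         the totals, and only then state your answer."
```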
Change Readiness Assessor
a structured framework for evaluating organizational readiness for transformation, which she currently rebuilds for each change management engagement 2. **Stakeholder Map Builder** — extracting and organizing stakeholder information from interview data and org charts into a structured influence/inte → Case Study: Elena's Consulting Toolkit — Patterns as Competitive Advantage
Chapter 5
optional environment setup with `pip install` commands - **Chapter 36** — programmatic AI access via APIs (clearly marked as technical) - **Chapter 22** — optional data analysis examples - **Appendix B** — Python code reference → Prerequisites and Assumed Knowledge
ChatGPT (GPT-4o):
Occasionally over-eager to help — may fabricate specifics when it doesn't know - Can be verbose without explicit length constraints - Very strong instruction following but sometimes over-literal interpretation → Chapter 13: Diagnosing and Fixing Bad Outputs
ChatGPT Advanced Data Analysis
chat.openai.com Requires a ChatGPT Plus subscription. Accepts file uploads (CSV, Excel), runs Python code in a sandboxed environment, generates charts, and provides written interpretation. The primary Tier 1 tool discussed in this chapter. → Chapter 22 Further Reading: Data Analysis and Visualization
claude.ai Anthropic's AI assistant. Well-suited for multi-step writing workflows, long-context document work, and nuanced editing instructions. The extended context window (up to 200,000 tokens in some versions) makes it particularly useful for long-form content collaboration. → Chapter 20 Further Reading: Writing and Editing with AI
Claude:
More conservative about factual claims — may refuse or hedge more than necessary - Strong at nuanced instruction following and maintaining specified constraints - May occasionally be overly cautious about requests it interprets as potentially harmful → Chapter 13: Diagnosing and Fixing Bad Outputs
The Critic adapted for code review: "Review this code as a security-focused senior engineer at a fintech company. Identify issues in: security vulnerabilities, error handling, test coverage gaps, and performance risks." 2. **Function Documenter** — The Transformer adapted for converting code into do → Chapter 11: Prompt Engineering Patterns for Recurring Tasks
Code writing tasks:
Boilerplate generation (REST endpoint scaffolding, data model definitions, test setup) - Algorithm implementation (sorting logic, data transformation, business logic) - Standard library usage (file I/O, date manipulation, string processing) - Security-sensitive implementations (authentication, autho → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Competent characteristics:
Clear prompt structure (do you have this?) - Reliable use cases where you consistently get good results (do you have these?) - Difficulty adapting when encountering new task types (is this still true?) → Chapter 41 Exercises: Building Your Long-Term AI Practice
Earned through use: Documentary-style content showing the product in real trail conditions, not studio settings. - The pack edit: Content about what serious hikers carry and why — with this product as one answer to the question. - Real hikers, real packs: UGC-forward strategy showing actual customer → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
The output is in the right direction but needs refinement - You are getting progressively closer with each iteration - You are adding depth or specificity to a good structure - A self-critique iteration could catch remaining issues → Chapter 6: The Iteration Mindset — Working in Loops, Not Lines
describing the cultural world your audience inhabits (what they read, what they buy, what they value) — gives the AI reference points that subtly shape vocabulary and emotional register in ways that demographic data alone cannot. → Chapter 8 Key Takeaways: Context Is Everything
cursor.sh An AI-native code editor built on VS Code. Includes chat-based code editing, codebase-aware prompting (referencing specific files and functions in context), and multi-file editing. Well-suited for the architecture discussion and iterative implementation workflow. → Chapter 23 Further Reading: Software Development and Debugging
CWE (Common Weakness Enumeration)
cwe.mitre.org The authoritative catalog of software weakness types. When AI identifies a security issue, it will often use CWE classification (CWE-89 for SQL injection, CWE-79 for XSS). Knowing how to look up CWE entries gives you the full vulnerability description, examples, and mitigation guidance → Chapter 23 Further Reading: Software Development and Debugging
D
Data and Confidentiality
[ ] Identify which AI tools are approved for team use - [ ] Define categories of information that can/cannot be shared with AI tools - [ ] Address training data and data residency questions for your industry/context → Chapter 38: Deploying AI in Teams and Organizations
Your current use cases have clear room for improvement (high iteration counts, low batting averages on important tasks) - You have identified specific bottlenecks that deeper practice would address - A new capability is generating a lot of attention but early reports suggest it's not yet reliable fo → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
Diagnostic signals:
Output is generic — could apply to any company or situation - Output uses reasonable approaches but not your specific approach - Output sounds plausible but doesn't fit your actual context - The model made assumptions about your audience, goals, or constraints that were wrong → Chapter 13: Diagnosing and Fixing Bad Outputs
Personally identifiable information (PII) — names, email addresses, social security numbers - Protected health information (PHI) covered by HIPAA - Non-public financial data with regulatory sensitivity - Data covered by NDAs or confidentiality agreements with specific handling requirements → Chapter 22: Data Analysis and Visualization
Do not use CoT when:
The task is factual retrieval (no reasoning required) - You need a short, direct output and reasoning would be noise - The task is highly creative and open-ended (CoT can over-constrain) - Latency or response length is a significant concern → Chapter 10: Advanced Prompting Techniques
Do not:
Use: "elevate your space," "perfect for any occasion," "warm your home," "create ambiance," "luxurious," "cozy," "indulge" - Use exclamation points anywhere in body copy - Begin product descriptions with "Introducing..." or "Meet..." - Write about the product as if it is the subject ("This candle do → Case Study 01: Alex's Brand Voice Problem — When AI Sounds Like Nobody
Do:
Use short, declarative sentences - Reference the experience or moment, not just the product feature - Be specific about place, time, or mood when describing scent or atmosphere - Use "the" before product names when possible ("the Bordeaux candle") - Write product descriptions as if describing someth → Case Study 01: Alex's Brand Voice Problem — When AI Sounds Like Nobody
Document the full process:
Number of rounds - Iteration types used - What each round changed - Time spent on AI interaction vs. human editing - Final assessment: did iteration produce better output than a single-shot attempt would have? → Chapter 6 Exercises: Working in Loops, Not Lines
Advantage: Strong domain expertise across many areas means she can evaluate AI output for substantive correctness. Her professional judgment is a reliable quality filter. - Disadvantage: Her efficiency-first orientation creates pressure to trust and move on rather than verify. If AI output looks pro → Chapter 1 Quiz: What AI Tools Actually Are (and Aren't)
Elena's AI no-fly list:
Client data in any form into consumer AI tools - Legal and regulatory claims in client deliverables — these go to primary sources or to counsel - The analytical conclusions in my deliverables — AI helps with the research and the structure, but the conclusion is my professional judgment - Communicati → Chapter 32: When NOT to Use AI (and Why That Matters)
polls the shared inboxes via IMAP and retrieves unprocessed emails 2. **Triage engine** — classifies and prioritizes each email using the Anthropic API 3. **Response drafter** — generates draft responses for emails requiring one 4. **Output handler** — writes triage results and drafts to a shared do → Case Study 2: Raj's Email Assistant — A Custom Triage and Draft Bot
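The four components above can be sketched as a single polling pass. The fetch, triage, and drafting internals are stubbed out here; the real bot uses IMAP and the Anthropic API, and these stand-ins (including the sample email and draft text) are illustrative only.

```python
# Sketch of the four-component pipeline; all internals are stubs.
def fetch_unprocessed():                 # 1. Email fetcher (IMAP in the real bot)
    return [{"id": 1, "body": "Where is my invoice?"}]

def triage(email):                       # 2. Triage engine (an LLM call in the real bot)
    email["priority"] = "normal"
    email["needs_reply"] = True
    return email

def draft_response(email):               # 3. Response drafter (an LLM call in the real bot)
    return "Hi, thanks for reaching out about your invoice."

def handle_output(email, draft):         # 4. Output handler (writes to a shared doc)
    return {"id": email["id"], "priority": email["priority"], "draft": draft}

def run_once():
    """One polling pass over the unprocessed emails."""
    results = []
    for email in fetch_unprocessed():
        email = triage(email)
        draft = draft_response(email) if email["needs_reply"] else None
        results.append(handle_output(email, draft))
    return results
```

Keeping each stage a separate function mirrors the design decision in the case study: any stage can be swapped (a different classifier, a different output destination) without touching the others.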
Escalation indicators:
Expressed strong emotion (anger, distress, frustration that goes beyond irritation) - Mentions of legal action, regulatory complaints, or formal disputes - References to a specific senior person or relationship ("I've been a customer for 10 years") - Second or third contact on the same unresolved is → Chapter 28: Customer-Facing Work: Sales, Support, and Outreach
Every month:
Review your most-used prompts. Are they still optimal? Have you found better approaches that haven't been incorporated? - Look at prompts you used once and haven't returned to. Were they failures (good to remove) or opportunities (good to revisit)? - Check prompts against current AI capability. Some → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
Example — Competitive Analysis Chain:
Sub-chain A: Research Company 1 → summarize strengths/weaknesses → format - Sub-chain B: Research Company 2 → summarize strengths/weaknesses → format - Sub-chain C: Research Company 3 → summarize strengths/weaknesses → format - Merge step: Take all three formatted summaries → synthesize comparative → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
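The fan-out-and-merge structure above can be sketched in a few lines. `run_step` stands in for a single AI call (in a real chain it would hit a model API); the company names are placeholders.

```python
# Sketch of parallel sub-chains feeding one merge step.
def run_step(instruction: str, payload: str) -> str:
    return f"[{instruction}] {payload}"          # stub for one AI call

def sub_chain(company: str) -> str:
    """Research -> summarize -> format, for one company."""
    research = run_step("research", company)
    summary = run_step("summarize strengths/weaknesses", research)
    return run_step("format", summary)

def competitive_analysis(companies):
    formatted = [sub_chain(c) for c in companies]        # sub-chains A, B, C
    return run_step("synthesize comparative analysis", "\n".join(formatted))

report = competitive_analysis(["Company 1", "Company 2", "Company 3"])
```

Because the sub-chains are independent, they can also be run concurrently before the merge step, which is where this pattern saves wall-clock time.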
The output was written as if the API was being launched for the first time, not upgraded - It contained technical terms (REST, JSON, webhook) without defining them - It did not address the migration from old to new — which is the only thing the CTOs actually care about - It ended with a vague statem → Case Study 02: Raj's Technical Documentation Prompt — Precision Under Pressure
Exercises
hands-on practice prompts and tasks - **Quiz** — self-assessment questions with hidden answers - **Case Studies** — two detailed worked examples per chapter - **Key Takeaways** — a scannable summary - **Further Reading** — curated resources for deeper exploration → Working With AI Tools Effectively
Expert characteristics:
Flexible, judgment-based AI use (do you have this?) - Efficient verification — checking what matters most, not everything (do you have this?) - Clear sense of when not to use AI (do you have this?) - Reflective habit — learning from each interaction (do you have this?) → Chapter 41 Exercises: Building Your Long-Term AI Practice
F
Fact-Checking
Snopes (snopes.com) — general fact-checking - PolitiFact (politifact.com) — political claims - FactCheck.org — political and policy claims - Full Fact (fullfact.org) — UK-focused general fact-checking → Chapter 30: Verifying AI Output — Fact-Checking Workflows
FastAPI
fastapi.tiangolo.com The Python web framework used in the case studies. FastAPI's dependency injection system, automatic documentation generation, and Pydantic integration are relevant context for the implementation examples. → Chapter 23 Further Reading: Software Development and Debugging
First-output scoring:
Used directly with minor edits: 1.0 - Good foundation, moderate editing needed: 0.7 - Useful but significant revision required: 0.4 - Didn't save time or was misleading: 0.0 → Chapter 39 Exercises: Measuring Effectiveness
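The rubric above turns directly into a batting average for a task type. The score values mirror the rubric; the outcome labels and sample data are invented for the sketch.

```python
# First-output scores from the rubric above (labels are illustrative).
SCORES = {
    "direct": 1.0,          # used directly with minor edits
    "moderate_edit": 0.7,   # good foundation, moderate editing
    "major_revision": 0.4,  # useful but significant revision
    "misleading": 0.0,      # didn't save time or was misleading
}

def batting_average(outcomes):
    """Mean first-output score across a list of outcome labels."""
    return sum(SCORES[o] for o in outcomes) / len(outcomes)

avg = batting_average(["direct", "moderate_edit", "moderate_edit", "major_revision"])
# (1.0 + 0.7 + 0.7 + 0.4) / 4 = 0.7
```

Tracking this per task type, rather than as one global number, shows where AI assistance is actually paying off and where it is not.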
For each role:
Write the full role assignment prompt (using the templates from Section 9.19 as a guide) - Run the prompt (if you have access to an AI tool) or predict what feedback each role would produce - Summarize the key feedback from each role → Chapter 9 Exercises: Instructional Prompting and Role Assignment
FTC.gov
for marketing disclosure requirements, endorsement guidelines, advertising standards - **HHS.gov** — for HIPAA if it comes up in health marketing contexts - **CAN-SPAM FAQ on FTC.gov** — for email compliance - **GDPR official text** on EUR-Lex for anything GDPR-related → Case Study 2: Alex's Verification Stack
G
Gemini in Google Sheets
workspace.google.com/products/gemini Available with Google Workspace Gemini add-on. Integrated directly into Google Sheets for natural language analysis, formula assistance, and chart generation. → Chapter 22 Further Reading: Data Analysis and Visualization
Gemini:
Strong at current information with web access - Can struggle with very long structured output consistency - Generally reliable for Google Workspace integration tasks → Chapter 13: Diagnosing and Fixing Bad Outputs
General patterns across all platforms:
All models hallucinate more on obscure topics, recent events, and specific numerical data - All models produce better output with explicit format specification than without - All models improve significantly with context — the "blank slate" problem affects all platforms equally → Chapter 13: Diagnosing and Fixing Bad Outputs
GitHub Copilot
github.com/features/copilot Integrated code completion for VS Code, JetBrains, and other editors. The primary IDE-integrated AI development tool. The GitHub Copilot study is the source of the 55% productivity finding. → Chapter 23 Further Reading: Software Development and Debugging
[ ] Name the policy owner - [ ] Establish the escalation path - [ ] Define the review cadence for policy updates - [ ] Create an incident process for policy violations or quality failures → Chapter 38: Deploying AI in Teams and Organizations
Government and Statistical Data
data.gov — US federal datasets - Bureau of Labor Statistics (bls.gov) — labor, employment, wages - Census Bureau (census.gov) — demographics, business data - FRED Economic Data (fred.stlouisfed.org) — economic time series - WHO Global Health Observatory (who.int/data) — health statistics - Eurostat → Chapter 30: Verifying AI Output — Fact-Checking Workflows
Grammarly
grammarly.com AI-powered proofreading and style checking. The free tier catches grammar and spelling. The premium tier adds style, tone, and clarity suggestions. Useful for the proofreading stage; less appropriate for line editing where AI voice suggestions can conflict with your own stylistic choic → Chapter 20 Further Reading: Writing and Editing with AI
H
Hemingway Editor
hemingwayapp.com Readability analysis that flags passive voice, adverb overuse, and complex sentences. Useful as a diagnostic tool for evaluating AI-generated prose and for identifying where line editing is needed. Does not generate text — it only evaluates. → Chapter 20 Further Reading: Writing and Editing with AI
High legal risk:
Using personal data (GDPR, CCPA, HIPAA, other privacy law implications) - Using AI in hiring, lending, housing, or other contexts regulated for discrimination - Generating content for commercial publication where copyright and IP integrity matters - Using AI in contexts involving professional liabil → Chapter 34: Legal and Intellectual Property Considerations
High reliability domains:
Grammar, writing mechanics, style - Common programming languages and standard patterns - General business frameworks and concepts - Historical events well before the training cutoff - Widely documented scientific concepts - Common legal concepts and frameworks (not specific advice) → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
High reliability situations:
Standard CRUD operations in frameworks he knew well - Boilerplate configuration for common tools (Docker, linters, CI configuration) - Straightforward algorithmic implementations (sorting, filtering, transformation) - Test scaffolding for well-defined behaviors - Documentation generation from code c → Case Study 3.2: The Right Tool for the Right Job
Boilerplate file structure for standard patterns (REST endpoints, database models) - Unit test scaffolding for simple functions - Code reformatting and style adjustments - Docstring and comment generation - Standard utility functions with well-defined, common behavior (string formatting, date calcul → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
High-signal indicators:
New capability demonstrated on diverse, practical tasks (not just one impressive example) - Capability improvement confirmed by third-party testing (not just the company's own benchmarks) - Practical adoption by real practitioners you trust, who describe their workflow with specificity - Research pa → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
human review points
moments where a practitioner reviews the intermediate output, edits it if necessary, and approves it before the chain continues. The placement of these review points is one of the most important design decisions in chain construction. → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
I
Immediate (this week)
[ ] Set a recurring quarterly calendar block: "AI Practice Review" - [ ] Create or update your effectiveness journal - [ ] Identify one skill you want to maintain through independent practice regardless of AI capability - [ ] Write down your current position on the "tool, partner, or threat" questio → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
Immediately involve counsel if:
You believe you may have already violated a legal obligation through AI use (trade secret disclosure, HIPAA breach, GDPR violation) - A client or third party is raising a legal concern related to AI use - You are deploying AI in a high-risk domain for the first time at scale - Your AI use has genera → Chapter 34: Legal and Intellectual Property Considerations
inconsistent calibration
the person does not have a systematic model of when to trust AI and when not to. They may be over-trusting in some situations (perhaps when under time pressure, or when the output looks impressive) and under-trusting in others (perhaps out of general anxiety about AI, or when they are perfectionisti → Chapter 4 Quiz: Trust Calibration
Initial Setup
[ ] Create your effectiveness journal (spreadsheet with date, task type, time estimates, iteration count, quality rating, notes) - [ ] Define the quality dimensions most relevant to your work - [ ] Establish your baseline: how long do common AI-assisted tasks take without AI? → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Introductory AI Literacy Course (8 weeks):
Week 1: Chapters 1–2 (What AI is, how it thinks) - Week 2: Chapters 3–4 (Mental models, trust) - Week 3: Chapters 7–8 (Prompting basics) - Week 4: Chapters 9–10 (Advanced prompting) - Week 5: Chapters 20–21 (Writing and research workflows) - Week 6: Chapters 29–30 (Hallucinations and verification) - → Prerequisites and Assumed Knowledge
Invest in new capability when:
A capability has clear potential to address a specific, real limitation in your current practice - Multiple trusted sources (not just enthusiasts) have found it valuable in contexts similar to yours - The exploration cost is proportionate to the potential value (a 2-hour exploration for a capability → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
K
Key structural differences from Anthropic:
OpenAI uses `client.chat.completions.create()` vs. Anthropic's `client.messages.create()` - OpenAI includes the system message in the messages list (as `{"role": "system", ...}`) - OpenAI returns `response.choices[0].message.content` vs. Anthropic's `response.content[0].text` - Token counts use `res → Chapter 36: Programmatic AI — APIs, Python, and Automations
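The structural differences above are easiest to see as plain request payloads rather than live SDK calls (no API keys or network needed). The shapes follow the two providers' documented chat formats; the model names are examples and the prompt strings are placeholders.

```python
# OpenAI vs. Anthropic request shapes, as plain dicts.
def openai_style(system: str, user: str) -> dict:
    return {
        "model": "gpt-4o",
        "messages": [                          # system prompt lives inside the messages list
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

def anthropic_style(system: str, user: str) -> dict:
    return {
        "model": "claude-3-5-sonnet-20240620",
        "system": system,                      # system prompt is a top-level field
        "max_tokens": 1024,                    # required by the Anthropic API
        "messages": [{"role": "user", "content": user}],
    }
```

With the real SDKs, these dicts are passed as keyword arguments to `client.chat.completions.create(**...)` and `client.messages.create(**...)` respectively, and the reply text is then read from `response.choices[0].message.content` versus `response.content[0].text`.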
Knowledge file limitations:
Total storage per GPT: 20 files, 512 MB total - Retrieval is not guaranteed: the GPT retrieves relevant sections but cannot access all knowledge simultaneously in one response - File content is not kept confidential from users who probe for it — if you upload sensitive documents, assume users can re → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
L
Law360 AI Digest
legal news aggregation covering AI-related litigation and regulatory developments - **IAPP AI Privacy Resources (iapp.org)** — regularly updated resources on AI and privacy law from the International Association of Privacy Professionals - **AI Now Institute (ainowinstitute.org)** — research and poli → Chapter 34 Further Reading: Legal and Intellectual Property Considerations
Legal and Regulatory
Federal Register and CFR (ecfr.gov) — US federal regulations - Congress.gov — US legislation - EUR-Lex (eur-lex.europa.eu) — EU law - Westlaw/LexisNexis — commercial, comprehensive, subscription-required → Chapter 30: Verifying AI Output — Fact-Checking Workflows
Limitations of no-code tools:
Less flexible than custom code for complex data transformations - Dependent on the platform's available integrations - Can become expensive at scale (per-task pricing on Zapier and Make) - Debugging is harder than in code — when a chain fails, finding which step failed and why takes more investigati → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
Low reliability domains:
Statistics and numerical data - Recent events (past 1-2 years relative to training cutoff) - Niche technical topics - Specific details about less-prominent people or organizations - Emerging regulatory areas - Specialized academic literature → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Low reliability situations:
Any feature involving a library updated within the past year - Domain-specific logic that requires understanding of business rules - Architecture-level suggestions requiring system context - Security-sensitive code (authentication, authorization, input validation, cryptography) - Code that targets v → Case Study 3.2: The Right Tool for the Right Job
Low-signal indicators (often noise):
"Impressive demo" without explanation of what's generalizable - Benchmarks from a tool's own marketing materials - "This changes everything" claims without specific explanation of what changes and how - Viral examples that rely on cherry-picked best-case outputs - Predictions about timelines that do → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
Lower legal risk:
Using AI for internal analysis and ideation not involving regulated data - AI-assisted drafting and editing for professional services (with adequate oversight) - AI tools for productivity improvement on tasks not involving regulated content - Research and information gathering on public-domain topic → Chapter 34: Legal and Intellectual Property Considerations
M
Making the choice:
If you need maximum ecosystem breadth and third-party integrations: **ChatGPT** - If you do a lot of long-form writing, document analysis, or nuanced professional work: **Claude** - If you are embedded in Google Workspace: **Gemini** - If you are a developer: **All three, with API access to at least → Chapter 5: Setting Up Your Personal AI Environment
Measured productivity changes:
**Boilerplate generation:** Approximately 60% faster. REST endpoint scaffolding, model definitions, test setup — tasks that previously took 30-60 minutes now took 10-20 minutes. - **Code review preparation:** Architecture and design discussions with Claude before starting implementation consistently → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Medium reliability situations:
Integration code for well-documented third-party services - Error handling patterns (correct structure, but often missing specific cases) - Refactoring existing code into cleaner patterns (good patterns, but sometimes misunderstands intent) - Optimization suggestions (often correct in direction, nee → Case Study 3.2: The Right Tool for the Right Job
Using AI for commercial content generation where copyright ownership matters - Generating AI-assisted code for commercial products (IP chain of title) - Using AI for marketing content (FTC endorsement and disclosure) - Automated decision-making affecting customers (various consumer protection implic → Chapter 34: Legal and Intellectual Property Considerations
[ ] Identify your highest-value GPT use case - [ ] Write and test a system prompt - [ ] Build and deploy your first GPT to a real workflow → Chapter 14: Mastering ChatGPT and GPT-4
Month 2: Refinement
[ ] Update your Custom Instructions based on month 1 usage patterns - [ ] Refine your GPT based on real use - [ ] Add the role-plus-audience frame to your standard prompting repertoire → Chapter 14: Mastering ChatGPT and GPT-4
Month 3: Team and Scale
[ ] Document your three highest-value prompt patterns for sharing - [ ] Review your data privacy setup for professional use - [ ] Do a quarterly ChatGPT feature review (what is new that matters to my work?) → Chapter 14: Mastering ChatGPT and GPT-4
Monthly (approximately 90 minutes):
A 30-minute exploration session: she tries one new AI capability on a real current project - A 60-minute deeper read: one longer piece on a topic directly relevant to her current work (she finds this through her trusted sources' recommendations) → Case Study: How Alex Stays Current Without Getting Overwhelmed
Monthly Practices
[ ] Analyze time savings by task category - [ ] Calculate your AI batting average by task category - [ ] Review iteration efficiency trends - [ ] Identify your top 3 highest-leverage use cases and your bottom 3 - [ ] Run the "stop doing" analysis on low-value use cases → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Hasn't engaged with AI tools 2. **Explorer** — Has tried AI tools occasionally, inconsistently 3. **Practitioner** — Uses AI regularly for specific tasks, getting real value 4. **Expert** — Uses AI across many tasks, gets consistently good results, has developed judgment about when to use and when n → Chapter 38 Exercises: Deploying AI in Teams and Organizations
Prompt library: 30 entries, all reviewed and updated in the past 90 days - Monthly measurement analysis: consistent for 12 months - Team aggregate time savings: 15+ hours/week - Her own batting average: above 0.70 on her primary use cases → Case Study: Alex's Capstone Plan — The Marketing AI Practitioner
owasp.org/Top10 The Open Web Application Security Project's list of the ten most critical web application security risks. This is the security checklist that should inform every AI code security review. Familiarity with the Top 10 makes AI security review prompts more targeted and the resulting revi → Chapter 23 Further Reading: Software Development and Debugging
P
Pandas
pandas.pydata.org The Python data analysis library used in all code examples. Version 2.0+ changes some DataFrame behaviors from earlier versions; ensure AI-generated code is using compatible syntax for your installed version. → Chapter 22 Further Reading: Data Analysis and Visualization
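One concrete instance of the version sensitivity noted above (a sketch of a single change, not an exhaustive list of 2.0 differences): `DataFrame.append` was removed in pandas 2.0, so AI-generated code written against older tutorials fails on current installs; `pd.concat` is the replacement.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
extra = pd.DataFrame({"a": [3]})

# Pre-2.0 code often did: df = df.append(extra)  — removed in 2.0.
# The 2.x-compatible form:
combined = pd.concat([df, extra], ignore_index=True)
print(combined["a"].tolist())  # → [1, 2, 3]
```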
Part 1 — Task Analysis:
What is the task? - Who is the intended audience for the output? - What does "excellent" look like for this task? - What are the common failure modes or output problems you want to prevent? - Are there any non-obvious constraints (things that are obvious to you but not to an outside reader)? → Chapter 7 Exercises: Prompting Fundamentals
Engagement scope and objectives document - Client background — 800-word summary of the firm's history, market position, and the strategic challenges prompting the engagement - Stakeholder map — names, roles, influence level, known positions on key questions - Interview guide — the questions Elena pl → Case Study 2: Elena's Client Research System — Claude Projects in Practice
Practical Notes:
For most knowledge workers with no existing toolchain: start with Claude or ChatGPT; both offer strong free tiers. - If your team is heavily invested in Google Workspace, Gemini's native integrations provide practical advantages. - For API integration, all three have mature, well-documented APIs. Se → Appendix D: Tool Comparison Quick-Reference Cards
Practical risk management for AI-generated code:
Use tools with established IP indemnification commitments where material IP exposure exists (GitHub Copilot has offered such commitments to enterprise customers) - Review AI-generated code for potential similarity to known open source projects in your domain - Maintain records of which portions of y → Chapter 34: Legal and Intellectual Property Considerations
Proactively engage counsel when:
Developing organizational AI use policies for the first time - Negotiating client contracts with AI use implications - Making significant AI deployment decisions in regulated industries - Before using AI tools with any PHI or highly sensitive personal data → Chapter 34: Legal and Intellectual Property Considerations
advisories.python.org / PyPI Security Advisories The official source for Python package security advisories. Referenced in the dependency verification section. Check this resource for any AI-introduced Python dependency. → Chapter 23 Further Reading: Software Development and Debugging
Q
Quality assessment:
Error rate on client deliverables: down 23% from pre-AI baseline (she tracked this through her revision request log) - Client satisfaction scores: stable (no degradation, slight positive trend) - Internal review cycles: reduced by one round average for standard deliverables → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Quality Standards
[ ] Define what "done" means for AI-assisted work in your context - [ ] Establish verification requirements for factual claims - [ ] Define the review process for Tier 2 use cases → Chapter 38: Deploying AI in Teams and Organizations
Quarterly Practices
[ ] Calculate ROI on your AI subscriptions - [ ] Review your learning curve trend — are you still improving? - [ ] Identify new use cases to try based on measurement gaps - [ ] Update your prompt library based on what's working best → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Advantage: His technical background means he reads outputs critically, understands what "plausible but wrong" looks like in code, and is comfortable iterating. He has natural verification habits. - Disadvantage: His "just autocomplete" mental model leads him to use AI tools passively rather than act → Chapter 1 Quiz: What AI Tools Actually Are (and Aren't)
Raj's AI no-fly list:
Code reviews for code that will run in production safety paths (authentication, access control, data integrity) — I check these myself or with a senior peer, not AI - Architecture design decisions for systems where I need to be able to defend the design to the team — AI can help me think through opt → Chapter 32: When NOT to Use AI (and Why That Matters)
Recommended entry points by role:
**Writers/Content creators:** Start with Ch. 7 (Prompting Fundamentals), then Ch. 20 (Writing with AI) - **Developers:** Start with Ch. 17 (Copilot), then Ch. 23 (Software Development), then Ch. 36 (APIs) - **Managers/Leaders:** Start with Ch. 3 (Mental Models), then Ch. 38 (Deploying AI in Teams) - → How to Use This Book
Recommended knowledge file types for common GPTs:
Brand voice GPT: style guide, example posts, vocabulary lists, tone descriptions - Code assistant GPT: internal coding standards, architecture documentation, common patterns - Research assistant GPT: topic overviews, key sources, methodology guidelines - Client communication GPT: client profiles, pr → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Reflection exercises
questions to answer in writing; help consolidate learning 2. **Hands-on tasks** — things to actually do with an AI tool, with specific prompts to try 3. **Applied challenges** — more open-ended tasks connecting the chapter to your own work → How to Use This Book
Remaining problems:
The two documents were not sufficiently differentiated in depth or tone. The technical document was accurate but lacked the structure a developer needs for implementation (specifically: no authentication section, no error code table, no webhook payload examples). - The executive summary still used t → Case Study 02: Raj's Technical Documentation Prompt — Precision Under Pressure
Date - Tool used - Task type - Domain - What zone you treated it as - What happened (was there an error? was verification needed?) - Calibration update (what does this tell you about your model?) → Chapter 4 Exercises: Practicing Trust Calibration
Requirements:
Target list: 50 minimum - Personalization: every email must have a specific hook (verified, not just AI-generated) - Sequence: minimum 3-touch for non-responders - Quality review: every email reviewed before sending against your authenticity checklist - Tracking: open rate, reply rate, conversation → Chapter 28 Exercises: Customer-Facing Work: Sales, Support, and Outreach
Research literature tools
covered in more depth below under Scientific Research — are used by medical researchers and clinically oriented practitioners to navigate the vast medical literature. PubMed AI features, Semantic Scholar's medical coverage, and specialized tools like AskThePaperAI are all relevant here. → Chapter 19: Specialized and Domain-Specific AI Tools
Research Synthesizer
The Summarizer adapted for multi-source synthesis: multiple documents → structured insight summary with source attribution 2. **Framework Applicator** — The Analyzer adapted for applying standard consulting frameworks (SWOT, Porter's Five Forces, McKinsey 7-S) to client situations 3. **Deliverable S → Chapter 11: Prompt Engineering Patterns for Recurring Tasks
Results from initial testing:
13 of 15 on-brand samples: correctly identified as strong or acceptable - 11 of 15 off-brand samples: correctly identified as needing revision - 7 of 10 competitor samples: correctly identified as not matching the company's voice (expected, since it should only apply company voice standards) → Case Study 1: Alex's Brand Voice Assistant — From Custom GPT to Team Resource
combining multiple roles simultaneously or sequentially — is effective for tasks that genuinely benefit from multiple perspectives. Practical limit: more than two or three roles produces superficial coverage of each perspective. → Chapter 9 Key Takeaways: Instructional Prompting and Role Assignment
Rules:
You may use AI for all generation tasks - You must review and edit every output — nothing goes into the final plan unreviewed - Track your time: how much on prompting vs. reviewing vs. editing? - At the end, rate the quality of the plan on a scale of 1-10 compared to plans you'd produce without AI a → Chapter 24 Exercises: Project Planning and Task Management
S
Scoring:
If your score on items 1-5 averages above 3: You are likely over-trusting AI. Focus on building a verification habit for Zone 3 tasks. - If your score on items 6-10 averages above 3: You are likely under-trusting AI. Focus on identifying Zone 1 tasks you can use more confidently. - If both scores ar → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Search Engine Journal
industry publications that follow platform changes closely and whose editorial standards she trusts - **Platform official announcements** — she follows official product blogs for the major platforms she covers → Case Study 2: Alex's Verification Stack
You're framing the prompt as "What should I do?" rather than "Help me think through X" - You're accepting the AI's recommendation without being able to articulate why you agree - You're feeling relieved rather than clearer after reading the AI output - You're not engaging with the AI's analysis — ju → Chapter 25: Decision Support, Analysis, and Strategic Thinking
Skip a gate if:
The step is purely mechanical (reformatting, counting, sorting) - An error in this step would be immediately visible in the next step's output - The chain loops iteratively and will self-correct - Speed is more important than quality for this specific workflow → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
Social Content AI
AI-assisted social media content generation 2. **Email Personalization Platform** — AI email subject line and copy optimization 3. **Brand Voice Content Generator** — Long-form content generation with brand voice training 4. **Ad Copy Generator** — Short-form ad copy at scale 5. **Market Intelligenc → Case Study: Alex's MarTech Stack Audit — Evaluating 6 Marketing AI Tools in One Week
Source Selection
[ ] Identify one domain-specific AI newsletter or community to follow consistently - [ ] Identify one first-party source from an AI lab whose tools you use - [ ] Identify one critical voice — someone who thinks clearly about AI's limitations → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
SQLAlchemy 2.0
sqlalchemy.org The Python ORM used in the case studies. Understanding SQLAlchemy's session management, eager vs. lazy loading, and `expire_on_commit` behavior is relevant to the memory leak case study. → Chapter 23 Further Reading: Software Development and Debugging
Start over when:
**You have given contradictory instructions across multiple turns** and the conversation context has become confused. The AI is now working with an accumulated set of constraints that may be internally inconsistent. A clean start with a better first prompt is more efficient than trying to reconcile → Chapter 6: The Iteration Mindset — Working in Loops, Not Lines
The AI generated generic REST API documentation with placeholder endpoints like `/api/payments` — not the actual endpoint paths - Code examples were in Python, but Raj's team uses Node.js - The authentication section described bearer tokens generically without reference to the company's specific API → Case Study 02: Raj's Technical Documentation Prompt — Precision Under Pressure
The "false positive" problem
AI flagging intentional decisions as mistakes — is one of the most trust-destroying failures in AI-assisted technical and expert work. Context packets should explicitly document intentional decisions that look like mistakes from the outside. → Chapter 8 Key Takeaways: Context Is Everything
The "one general plus one specialized" strategy
maintaining a general-purpose AI for broad tasks and one carefully chosen specialized tool for your highest-volume distinctive professional need — manages proliferation while capturing most of the specialization advantage. → Key Takeaways: Chapter 19 — Specialized and Domain-Specific AI Tools
[ ] Write your personal filter criteria: what types of AI coverage you will and won't spend time on - [ ] Review your current information sources and eliminate those that fail the filter → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
when a report is cited, research firms often publish press releases summarizing key findings. These are free and usually contain the headline numbers. - **Search "[firm name] [report topic] [year] press release"** — this gets her to the authoritative source in many cases → Case Study 2: Alex's Verification Stack
[ ] Define your standard evaluation battery for new tools in your domain (like Raj's six-task battery for coding) - [ ] Commit to running your battery on any new tool that seems relevant before forming a strong opinion about it → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
This Month
[ ] Run your first monthly prompt retrospective - [ ] Identify one capability from the "developing" list to actively experiment with - [ ] Have a conversation with at least one colleague about your AI practice — what you're learning, what you're questioning → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
[ ] Conduct all four quarterly reviews - [ ] Assess your position on the beginner-to-integrated arc at year start and year end - [ ] Write a brief reflection: what has AI changed about how you work, and is that change in the direction you want? → Chapter 41: The Long-Term Partnership: Building an AI-Augmented Practice
Three-month results:
Reduction in "pattern violation" comments in human code review: approximately 40% (engineers caught their own pattern violations in AI pre-review) - Reduction in TypeScript annotation issues caught in human review: approximately 60% - False positive rate from AI review: approximately 5% (compared to → Case Study 02: Raj's Codebase Context — Making AI a Useful Code Reviewer
[ ] Set a weekly standing time for catching up on AI developments (30-45 minutes maximum) - [ ] Schedule a monthly "capability exploration" session (60-90 minutes) - [ ] Schedule a quarterly "testing and reflection" session (2-3 hours) → Chapter 40: How AI is Evolving: Staying Ahead of the Curve
Time allocation suggestion:
0:00-0:30 — Narrative development (SCQA, audience-centered outline) - 0:30-1:30 — Slide content generation (titles, bullets/visuals, speaker notes) - 1:30-2:00 — Visual strategy (chart recommendations, image prompts, visual mockups) - 2:00-3:00 — Editing and refinement (so what audit, title assertio → Chapter 26 Exercises: Presentations, Slides, and Visual Communication
Time savings calculation:
Content creation: 4 hours/week saved (team aggregate) - Research and analysis: 2 hours/week saved - Email and communication drafting: 1.5 hours/week saved - Template and format work: 1 hour/week saved - Total: 8.5 hours/week, team aggregate → Chapter 39: Measuring Effectiveness: ROI, Quality, and Iteration Cycles
Tool-specific configuration:
**ChatGPT:** Custom instructions available in Settings > Personalization. Also consider creating Custom GPTs for specific recurring task types. - **Claude:** Custom instructions available in Settings, and Projects allow separate custom contexts for different ongoing work areas. - **Gemini:** Workspa → Chapter 5: Setting Up Your Personal AI Environment
**Onboarding unfamiliar codebases:** When he joined a project on a codebase he had not worked in before, using Claude to explain the structure and logic of key modules significantly accelerated his ramp-up time. - **Technical writing:** He started using Claude for technical specification writing and → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Use Case Inventory
[ ] List the 10 most common tasks your team performs - [ ] For each, assess: AI value potential (high/medium/low/none), information sensitivity (low/medium/high), quality stakes (low/medium/high) - [ ] Draft your three-tier taxonomy based on this assessment → Chapter 38: Deploying AI in Teams and Organizations
Use Claude Projects when:
The configured context is for your own ongoing use, not for sharing with others - You are working on a body of work that evolves over weeks or months - You want to maintain conversation history across sessions - Your context involves documents you want to update and maintain → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Use CoT when:
The task requires multiple sequential reasoning steps - Accuracy is more important than response speed - You want to verify the reasoning, not just the conclusion - The problem involves math, logic, planning, or diagnosis - The model frequently makes errors on this task type without CoT → Chapter 10: Advanced Prompting Techniques
Use Custom GPTs when:
You want to share the configured tool with others - The tool should present as a standalone, clearly named product - You need external API integrations (Actions) - The use case is recurring and benefits from a polished user experience → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Use GPT Builder (no-code) when:
Non-technical team members will create or maintain the configured system - You want a fast setup with the visual interface - The assistant does not require complex integration with external systems - You need to share the assistant publicly or via the GPT store → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Use the Assistants API (code) when:
The assistant will be embedded in your own application - You need programmatic control over thread creation and management - You need to integrate the assistant with your own backend systems - You are building a product or service that uses AI assistants as infrastructure → Chapter 37: Custom GPTs, Assistants, and Configured AI Systems
Cutting-edge research not yet widely published - Local or regional information - Non-English sources and non-Western topics (for English-centric models) - Highly specialized technical topics with limited training data → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
W
Week 1: Foundation
[ ] Set up Custom Instructions with your full professional context - [ ] Review and configure Memory settings - [ ] Practice the "before you answer" technique on your next complex task → Chapter 14: Mastering ChatGPT and GPT-4
Week 2-3: Feature Exploration
[ ] Try Advanced Data Analysis on a real dataset from your work - [ ] Try DALL·E for one visual creation task - [ ] Explore the GPTs marketplace for your domain → Chapter 14: Mastering ChatGPT and GPT-4
The 9-slide structure: organizing by decision rather than analysis is an obvious idea in retrospect, but she wouldn't have arrived at it quickly alone - The defensive executive framing: "structural pattern, not leadership failure" is a reframe she knew intuitively but couldn't have articulated witho → Case Study 26-2: Elena's Executive Translation — Making Complex Analysis Simple
What AI does not do well:
**Estimating effort for novel work.** AI can suggest task durations based on averages from its training data, but it has no idea how fast your team works, how complex your specific technical environment is, or what organizational frictions will slow things down. Duration estimates from AI are plausi → Chapter 24: Project Planning and Task Management
What AI does well in project planning:
**Generating structure from chaos.** When you have a fuzzy project concept, AI is excellent at suggesting organizational frameworks, decomposing vague goals into specific tasks, and proposing logical sequences. - **Surfacing what you haven't thought of.** AI has been trained on enormous amounts of p → Chapter 24: Project Planning and Task Management
Product: Premium hydration system (bottle + filter) - Target audience: Active day hikers, 28-45, values gear quality over price, experienced outdoors, approximately $80-120 gear purchase range - Price: $89 - Timeline: Campaign needs to run for 8 weeks around product launch → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
**Over-relying on Claude for debugging:** Raj initially tried pasting error tracebacks into Claude expecting resolution. For complex, context-dependent debugging, Claude could suggest hypotheses but rarely resolved the issue without much more back-and-forth context sharing than it was worth. He stil → Case Study 5.2: Raj's Developer Environment — From Copilot Tab to Full AI-Assisted Workflow
Opening paragraph: adds a specific reference to "the conversation on Wednesday" and thanks them for their candor — AI had no way to write this - Section 1: adds the phrase "architecture firm growing through acquisition" to describe their situation more precisely — she'd noted this detail but AI had → Case Study 27-1: Elena's Proposal Machine — From Discovery Call to Proposal in 3 Hours
What Elena brought:
The $2.4M productivity calculation: this required her own research and anchored the entire business case - The "investment targets" language (vs. AI's "focus areas"): a subtle but important shift - The decision to make departments explicit rather than abstract: judgment about what the executives in → Case Study 26-2: Elena's Executive Translation — Making Complex Analysis Simple
The collection name and concept are present - Individual candle names appear - There is an attempt at storytelling ("tell a story") - The structure is better — there are headers for each candle - The output is specifically about Lumier Home, not a generic brand → Case Study 01: From Generic to Remarkable — Alex's Blog Post Transformation
What she added in month 3:
A **Jasper.ai trial** for a high-volume content project (product description writing for 150 product pages). The templates were helpful but the quality ceiling was lower than Claude for premium content. She used Jasper for volume, Claude for quality. - A **Canva AI** workflow for quick social graphi → Case Study 5.1: Alex's AI Stack — Building a Marketing Powerhouse
What she cut completely:
General AI news from non-specialist sources - "Will AI replace marketing?" and similar thinkpiece content - Viral AI demos without practical context - AI company valuation and funding news - Podcast content she was consuming out of FOMO rather than genuine interest → Case Study: How Alex Stays Current Without Getting Overwhelmed
Repair 1 was a context reload with the full few-shot reference. It didn't try to describe the voice — it demonstrated it. - Repair 2 was surgical: "Keep everything else exactly as is" prevented the model from changing the elements that were now correct. - The specificity of "The bamboo cutting board → Case Study: Alex's Campaign Copy Crisis — A Three-Round Repair
What these tools do well:
Trigger chains based on external events (a new email arrives, a form is submitted, a file appears in a folder) - Pass data between steps automatically - Integrate AI steps with non-AI steps (store output to a spreadsheet, send an email, create a record in a database) - Run chains on a schedule - Han → Chapter 35: Chaining AI Interactions and Multi-Step Workflows
What this model suggests:
Provide explicit context before making requests - Specify the audience, purpose, constraints, and history relevant to the task - Review output with the recognition that the intern may have made reasonable errors based on missing context - Give specific, direct feedback when output needs to change → Chapter 3: The Right Mental Models for AI Collaboration
What to save:
**Effective prompts.** When a prompt works well — produces the quality of output you needed with minimal iteration — save it. You will want that prompt again. - **Outputs you refer back to.** Not every AI output is worth keeping, but those that inform ongoing work (research summaries, analysis frame → Chapter 5: Setting Up Your Personal AI Environment
Upload a CSV, Excel file, or Google Sheet and ask for a basic exploration - Request specific statistics: average, median, correlation, trend over time - Generate visualizations by describing what you want to see - Ask interpretive questions: "What are the most important patterns in this data?" - Get → Chapter 22: Data Analysis and Visualization
What you can do at Tier 2:
Ask AI to write data loading and cleaning code - Request specific analyses with code - Iterate on visualizations through code - Debug analysis errors with AI assistance - Use AI to write code for analyses you know conceptually but could not implement quickly → Chapter 22: Data Analysis and Visualization
Where human judgment mattered most:
Round 1: Selecting the territories worth exploring (Alex's call) - Round 3: Recognizing that the tagline needed to be more specific (Alex's evaluation) - Round 4: Choosing "The pack doesn't lie" from the alternatives (Alex's decision) - Round 5: Identifying channel strategy and KPIs as the weak sect → Case Study 6.1: Alex's Campaign in Seven Rounds — From Blank Page to Brief
Why would a non-developer want API access?
**Automation tools.** Platforms like Zapier, Make (formerly Integromat), and n8n can use AI APIs to build automated workflows without writing code. You might set up a workflow that automatically summarizes incoming emails, generates a first-draft response, or routes content through an AI filter. - * → Chapter 5: Setting Up Your Personal AI Environment
With CRAFT:
**Context:** We are a sustainable packaging startup. Our new product is a compostable shipping envelope that replaces plastic poly mailers. Our audience is e-commerce founders and sustainability-minded operations managers. - **Role:** You are a LinkedIn content strategist specializing in purpose-driven brands. → Chapter 7: Prompting Fundamentals: Structure, Clarity, and Specificity
Report delivered as originally drafted - CFO catches the WACC error in the presentation room - COO raises the Epic conflict — Elena has no response prepared - Board chair asks the competitive advantage question — Elena has no compelling answer - Presentation deferred, engagement extended by 4-6 weeks → Case Study 02: Elena's Devil's Advocate — How Role Assignment Saved a Flawed Report
Name the specific failure - Identify the root cause(s) - Write a complete repair prompt using the appropriate template - Write the prevention: what would the original prompt have needed to produce a useful response? → Chapter 13 Exercises: Diagnosing and Fixing Bad Outputs
Z
Zone 1
Reformatting is a structural transformation task with no factual claims at risk. Errors are immediately visible. Use directly with light review. → Chapter 4 Quiz: Trust Calibration
Zone 1 tasks include:
**Formatting and restructuring.** Asking AI to convert bullet points to prose, reorganize a document's structure, apply consistent formatting, or reformat data is highly reliable. There is no factual claim to be wrong about — just structural transformation. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Zone 2 tasks include:
**Factual claims in your professional domain.** If you are a marketing professional asking about marketing concepts, or a developer asking about common programming patterns, you have the background to spot most errors. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Zone 3
Specific statistics and recent research citations. High risk of hallucinated or outdated statistics. Requires independent verification from primary sources before use. → Chapter 4 Quiz: Trust Calibration
Zone 3 tasks include:
**Recent events and current information.** AI tools have training data cutoff dates. Events after the cutoff simply do not exist in their knowledge. But more dangerously, events slightly before the cutoff may be represented incompletely or inaccurately. If you need current information, verify with current sources. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong
Zone 4
Medical guidance for a specific clinical situation. Errors can cause serious harm and AI tools are unreliable for specific medical guidance. Requires expert (physician) oversight before acting on it. AI can be used to learn about the topic and prepare better questions for the doctor, but not to act on its advice. → Chapter 4 Quiz: Trust Calibration
Zone 4 tasks include:
**Medical guidance for specific clinical situations.** AI can explain how diseases work, describe typical treatment protocols in general terms, and help someone understand medical information — but applying that to a specific patient's situation is Zone 4. The risk of an error is high and the consequences are severe. → Chapter 4: Trust Calibration — What AI Gets Right, What It Gets Wrong