Chapter 1 Exercises: What AI Tools Actually Are (and Aren't)

These exercises are organized into three levels. Work through them in order if you are new to AI tools, or jump to the level that matches your current experience. The goal is not completion for its own sake — it is building accurate intuitions through direct experience.


Level 1: Reflection Exercises

These exercises ask you to examine your existing assumptions and experiences. Write your answers in a journal, document, or just think them through carefully. Honesty with yourself is more valuable here than polished answers.


Exercise 1.1 — Your Existing Mental Model

Before reading further in this book, write a paragraph describing what you thought AI tools were before reading this chapter. Be honest — not what you think the "right" answer is, but what your actual operating assumption was.

Then write a second paragraph describing how (if at all) that mental model shifted after reading the chapter. What was most surprising? What confirmed what you already believed?

What to look for in your answer: Most people come to AI tools with a mental model borrowed from another technology — search engines, calculators, databases, or a human expert. Identifying which borrowed model you were using is the first step to replacing it with a more accurate one.


Exercise 1.2 — The Tool-Person Spectrum

On a scale from 1 to 10, where 1 is "pure tool with no apparent intelligence" (like a hammer) and 10 is "appears to have genuine understanding and judgment" (like a trusted colleague), where did you place AI tools before reading this chapter? Where do you place them now?

Write a few sentences explaining the reasoning behind both numbers. What specific behaviors or outputs pushed you toward the higher end? What should push you back toward the lower end?

What to look for: This exercise is not about finding the "right" number. It is about surfacing the tension between AI tools' remarkably human-seeming outputs and their fundamentally non-human mechanism. Most thoughtful people land somewhere in the middle with significant uncertainty — that is appropriate.


Exercise 1.3 — Mapping Your Use Cases

List five tasks you currently do at work or in your personal life that you think AI tools might help with. For each task, write:

  1. What you hope the AI tool would produce
  2. What could go wrong if the AI tool got it wrong
  3. How easy or hard it would be to verify the AI's output

After completing the list, rank the five tasks from lowest to highest risk. The lowest-risk tasks are where you should experiment first. The highest-risk tasks are where you need the most robust verification practices.


Exercise 1.4 — Thinking About Confidence

Think about a time you accepted information from a confident-sounding source and later discovered it was wrong. This could be from a person, a book, a website, or any other source.

  • How did the source's confidence affect your assessment of its accuracy?
  • What signals, if any, should have made you more skeptical?
  • How does this experience apply to your use of AI tools?

What to look for: Confidence calibration is one of the most important skills for AI tool users. This exercise is about surfacing your existing relationship with source confidence before applying it to the AI context.


Exercise 1.5 — The Expertise Gap

Identify one area of your professional work where you have genuine expertise — where you would know immediately if someone told you something incorrect.

Now identify one area where you rely heavily on others' expertise — where you would have difficulty evaluating whether information you received was accurate or not.

When using AI tools, which of these areas is higher risk? Write a paragraph explaining your reasoning. (Hint: the answer might not be as obvious as it initially seems.)


Level 2: Hands-On Exercises

These exercises require you to actually use an AI tool. If you do not have a preferred tool, use the free tier of ChatGPT (chat.openai.com) or Claude (claude.ai). Complete each exercise as written — use the provided prompt or adapt it only as noted.


Exercise 1.6 — The Confidence Test

What to do: Use this exact prompt with an AI tool:

"What are three recent scientific studies published in the last six months about the effects of intermittent fasting on metabolic health? Please include author names, journal names, and publication dates."

Then do this: Take each citation the AI provides and try to verify it. Search for the exact paper using Google Scholar, PubMed, or a similar academic search tool.

Questions to answer:

  • How many of the citations were real and accurately described?
  • How many were partially real (real journal, wrong author or title)?
  • How many were entirely fabricated?
  • Did the AI's tone change at all when providing potentially fabricated information versus accurate information?

What you're learning: This exercise demonstrates the hallucination problem in a low-stakes context. The goal is not to catch the AI being wrong — it is to make the experience of confident-but-fabricated output concrete and personal before you encounter it in a high-stakes context.


Exercise 1.7 — The Context Experiment

What to do: Ask the AI tool two versions of the same request. First, use this minimal prompt:

"Write an email to a client about a project delay."

Note the output. Then use this detailed version:

"Write an email to a client about a project delay. Context: I'm a management consultant, the client is a manufacturing company's operations director, the project is a supply chain audit that was supposed to take 6 weeks but is now at 8 weeks with 2 more weeks needed, the delay was caused by difficulty getting data access from their internal teams (not our fault, but we don't want to be accusatory), and the client has been patient but is starting to show signs of frustration in our weekly calls. Tone should be professional, direct, and reassuring without being defensive."

Questions to answer:

  • How different were the two outputs in length? In specificity? In usefulness?
  • What did the AI invent in the first version that may or may not fit your actual situation?
  • What information from the detailed prompt most dramatically changed the output?

What you're learning: The quality of AI output is a direct function of the quality of context you provide. This exercise builds intuition for the "brief well, get well" principle that underlies effective prompting.


Exercise 1.8 — Finding the Knowledge Cutoff

What to do: Use this prompt:

"What is today's date, and what is the most recent major AI model release you have information about? Please also tell me your training cutoff date if you know it."

Then follow up with:

"Tell me about [choose a significant event from the past 6 months in your industry]. What happened and what was the outcome?"

Questions to answer:

  • How did the AI describe its own temporal limitations?
  • Was it confident or appropriately uncertain about recent events?
  • What happened when you asked about a recent event? Did it acknowledge uncertainty, attempt to answer, or refuse?

What you're learning: Understanding how an AI tool handles the boundary of its knowledge is important for knowing when to verify and when to provide your own context. Some tools handle this gracefully; others project false confidence.


Exercise 1.9 — The Arithmetic Problem

What to do: Give the AI tool a series of arithmetic problems, increasing in complexity:

"Please solve these calculations step by step:

  1. 847 × 23
  2. 15% of 3,847
  3. If I have $12,500 and spend 23% on rent, 18% on food, and $340 on utilities, how much do I have left?
  4. A company's revenue grew from $2.3M to $3.1M over three years. What was the compound annual growth rate?"

Verify each answer with a calculator.

Questions to answer:

  • How many calculations were correct?
  • Were errors consistent or random?
  • Did the AI show its work? Was the work itself correct even when the answer was wrong?

What you're learning: AI tools are genuinely unreliable at precise arithmetic. This is not a quirk of bad prompting — it is structural. Experiencing this directly builds the habit of independent calculation verification.
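A calculator is sufficient for verification, but if you prefer code, a few lines of Python reproduce the reference answers for all four problems (the figures come directly from the exercise prompt):

```python
# Reference answers for Exercise 1.9, computed exactly rather than by an AI tool.

product = 847 * 23                       # 1. simple multiplication
pct = 0.15 * 3847                        # 2. 15% of 3,847
budget = 12500                           # 3. budget problem
remaining = budget - 0.23 * budget - 0.18 * budget - 340
cagr = (3.1 / 2.3) ** (1 / 3) - 1        # 4. compound annual growth rate over 3 years

print(product)               # 19481
print(pct)                   # 577.05
print(round(remaining, 2))   # 7035.0
print(f"{cagr:.2%}")         # 10.46%
```

Comparing these exact values against the AI's step-by-step work makes it easy to see not just whether the final answers were wrong, but where in the reasoning the errors crept in.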


Exercise 1.10 — The Same Prompt, Three Times

What to do: Use this exact prompt three separate times (start a new conversation each time):

"Give me a three-sentence summary of what a large language model is, written for someone who has never heard the term before."

Questions to answer:

  • How similar or different were the three responses?
  • Were there consistent elements that appeared across all three?
  • Were there significant differences in framing, vocabulary, or emphasis?
  • Did any version contain a claim you would want to verify before using it with someone else?

What you're learning: AI tools are probabilistic generators, not deterministic ones. The same prompt produces different outputs. Understanding this variability is important for knowing when to generate multiple options and when to seek consistency through other means.
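The variability has a simple mechanical source: at each step, the model draws the next token from a probability distribution rather than always taking the single most likely option. This toy sketch illustrates the idea; the vocabulary and probabilities are invented for demonstration, while a real model performs the same kind of weighted draw over roughly a hundred thousand tokens at every step:

```python
import random

# Invented toy distribution over possible next words -- not from any real model.
next_word = ["system", "program", "model", "tool"]
weights = [0.40, 0.30, 0.20, 0.10]

for run in range(3):
    # A weighted random draw: the most likely word usually wins, but not always.
    choice = random.choices(next_word, weights=weights, k=1)[0]
    print(f"Run {run + 1}: 'A large language model is a kind of {choice}...'")
```

Run it a few times and the outputs differ, just as your three conversations did. That is the sense in which the same prompt legitimately produces different responses.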


Exercise 1.11 — Explaining Yourself

What to do: Choose a topic from your own area of expertise — something you know well. Ask the AI tool to explain it:

"Explain [your topic] to me as if I'm a smart non-specialist."

Read the explanation carefully. Then respond:

"I'm an expert in this area. I noticed [specific thing you found inaccurate, oversimplified, or missing]. Can you revise with that in mind?"

Questions to answer:

  • How accurate was the initial explanation?
  • What did it get subtly wrong, oversimplified, or miss?
  • How did it respond when you provided expert correction?
  • What does this tell you about the role of domain expertise in working with AI tools?

What you're learning: AI outputs look authoritative to non-experts. To experts, they often contain errors that are invisible to people who lack the background to catch them. Your domain expertise is one of your most important tools when working with AI.


Exercise 1.12 — Asking About Uncertainty

What to do: Use this prompt:

"I'm going to ask you a question. Before answering, please tell me: how confident are you in your answer, and what are the main ways your answer might be wrong or incomplete? Then answer the question: What is the current best practice for treating moderate lower back pain?"

Questions to answer:

  • Did the AI accurately represent its own uncertainty?
  • Did its stated confidence level match what you know or can verify about the answer?
  • Was the answer itself well-calibrated — acknowledging genuine medical uncertainty rather than projecting false authority?

What you're learning: Explicitly asking AI tools to surface their uncertainty is a useful prompting technique, but the results need to be interpreted carefully. AI tools may not have accurate self-knowledge about their limitations.


Level 3: Applied Challenges

These open-ended challenges ask you to connect what you have learned to your own work. There are no single right answers. The goal is to develop judgment that applies to your specific context.


Exercise 1.13 — Build Your Mental Model Document

Create a one-page reference document (for yourself, not for publication) that describes your current working mental model of AI tools. Include:

  • What you now understand AI tools to be
  • Three things you will always verify
  • Three things you feel comfortable using AI output for with minimal verification
  • Two situations where you will not use AI tools at all, based on your understanding of their limitations

Revisit this document in 30 days after working with AI tools more extensively. What changed? What held up?

What makes this challenging: Most people find that their second and third bullet points are genuinely hard to fill in. If everything feels like it requires verification and nothing feels safe to use unverified, you may be overcorrecting. If nothing feels like it requires verification, you may be undercorrecting. The goal is calibrated judgment, not maximum skepticism.


Exercise 1.14 — A Workflow Audit

Identify one specific recurring task in your work that currently takes more than two hours per week. Write a detailed analysis of:

  1. Whether AI tools could help with any part of this task, and which part
  2. What information you would need to provide the AI to get useful output (applying what you learned in Exercise 1.7)
  3. What elements of the AI's output you would need to verify before using it
  4. A realistic estimate of how much time AI tools could save, accounting for verification time
  5. What could go wrong if you used AI output without adequate verification

Do not implement this yet — just analyze it. This is an exercise in developing realistic expectations before you experiment.


Exercise 1.15 — The Misconception Audit

Think about the people around you — colleagues, managers, family members — who use or plan to use AI tools. Identify the three most common misconceptions about AI tools that you observe or hear in these conversations.

For each misconception:

  1. Describe the misconception clearly
  2. Explain what is inaccurate about it, using what you learned in this chapter
  3. Draft a brief, non-condescending explanation you could give to help someone correct it — something you could actually say in a normal conversation without sounding like you are lecturing

What makes this challenging: Correcting AI misconceptions without being dismissive of the person holding them is a genuine communication skill. The goal is not to demonstrate superior knowledge — it is to help people use these tools more effectively.


Exercise 1.16 — Risk Assessment Matrix

Create a simple 2x2 matrix for your own work:

                              | High Expertise        | Low Expertise
                              | (I'd catch AI errors) | (I might not catch AI errors)
  ----------------------------+-----------------------+------------------------------
  Low Stakes                  |                       |
  (consequences of error      |                       |
  are minor)                  |                       |
  ----------------------------+-----------------------+------------------------------
  High Stakes                 |                       |
  (consequences of error      |                       |
  are significant)            |                       |

Fill in the quadrants with specific tasks from your own work. Then write a short paragraph describing what your verification strategy should be for each quadrant.

This matrix is not meant to be filled in permanently — it is meant to help you develop the habit of explicitly thinking about expertise and stakes before using AI tools for important work.


Exercise 1.17 — The Briefing Document Challenge

Building on the context experiment from Exercise 1.7, develop a personal briefing template for AI tools — a structured format you will use to provide context when you ask AI tools for help with important tasks.

Your template should include prompts for:

  • Your role and expertise level in this area
  • The specific situation or context
  • The audience for the output
  • The constraints (tone, length, format, things to avoid)
  • What success looks like
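One way to keep such a template reusable is to store it as a fill-in-the-blanks string. The sketch below is a hypothetical format of my own devising, not a prescribed standard; the field names are illustrative choices, and the sample values reuse the consulting scenario from Exercise 1.7:

```python
# A hypothetical briefing template. Every field name is an illustrative choice;
# adapt the sections to whatever context your own work requires.
BRIEFING_TEMPLATE = """\
Role: {role}
Situation: {situation}
Audience: {audience}
Constraints: {constraints}
Success looks like: {success}

Task: {task}"""

prompt = BRIEFING_TEMPLATE.format(
    role="Management consultant working on a supply chain audit",
    situation="Project is 2 weeks over a 6-week estimate, with 2 more weeks needed",
    audience="Operations director at a manufacturing client",
    constraints="Professional, direct, reassuring without being defensive",
    success="Client feels informed rather than blamed",
    task="Draft an email explaining the delay and the revised timeline.",
)
print(prompt)
```

The value of the structure is not the code itself but the habit it enforces: you cannot fill in the template without first thinking through role, audience, constraints, and success criteria.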

Test your template on three different real tasks from your work. After each test, refine the template based on what you found missing or unnecessary.

What you're building: The discipline of briefing AI tools well is one of the highest-leverage habits you can develop. This exercise begins the practice of thinking explicitly about what context an AI tool needs before you ask it for help.


These exercises connect to Chapter 4 (trust calibration — particularly exercises 1.3, 1.5, and 1.16) and Chapter 29 (hallucinations — particularly exercises 1.6 and 1.8). Return to your notes from these exercises as you progress through the book.