Chapter 3: Quiz — The Right Mental Models for AI Collaboration
Test your understanding of the mental models framework from Chapter 3. Answer each question before revealing the answer.
Question 1
A user types "best practices for remote team communication" into an AI tool and accepts the first response without providing any context about their team size, industry, communication challenges, or current tools. Which broken mental model does this behavior most reflect, and why?
Answer
This most reflects the **AI as search engine** model. The query is short, keyword-driven, and treats the AI as a retrieval system rather than a generative collaborator. The user expects the AI to surface generally relevant information (as a search engine retrieves relevant pages) rather than to generate a response calibrated to their specific situation. Providing no context reflects the implicit assumption that the AI retrieves objective best practices rather than generating a response shaped by whatever context is supplied. The fix is to add substantial context: team size, industry, the specific challenges being experienced, what has already been tried, and what "good" would look like for this particular team.
Question 2
What is the core practical implication of the "brilliant intern" mental model for how you construct prompts?
Answer
The core implication is that you must explicitly provide all the context that a brilliant but brand-new person would need to do the task well, because they have no history with you, no knowledge of your organization, and no awareness of unstated constraints. Before making a request, you should ask yourself: "What would an excellent, well-intentioned person who has never worked with me need to know to do this well?" Whatever the answer is — purpose, audience, constraints, preferences, history, vocabulary — belongs in the prompt. The model correctly frames context provision as your responsibility, not something the AI should infer or remember from previous sessions.
Question 3
How does the "first draft machine" model change how you should evaluate AI output?
Answer
The first draft model shifts evaluation from a binary accept/reject decision to an engagement process. Rather than asking "Is this good enough to use?", you ask "What does this give me to work with, and what do I need to do to it?" This means marking up output with what to keep, what to revise, and what to remove — and then using that analysis to generate a targeted revision prompt. The model also correctly frames the editing and refinement process as the location of your distinctive contribution: an AI-generated draft that you have critically engaged with, verified, and shaped is yours in a meaningful way. Accepting AI output verbatim without critical engagement is a failure of the process the first draft model implies.
Question 4
What distinguishes a "thinking partner" use of AI from a "content generation" use of AI? Give an example of each for the same underlying task.
Answer
In content generation, you ask the AI to produce something — a document, a plan, a piece of code. Your output is the thing generated. In thinking partner use, you ask the AI to engage with your thinking — surface alternatives, identify weaknesses, ask questions, steelman opposing views. Your output is improved thinking about the problem, not necessarily anything the AI produced directly.
Example — deciding whether to pursue a new market:
- Content generation: "Write a go-to-market analysis for entering the healthcare sector."
- Thinking partner: "Here is my current reasoning for why we should enter healthcare: [reasoning]. What assumptions am I making that I should examine? What are the strongest counterarguments? What would I need to believe for this to be the wrong decision?"
The thinking partner approach uses AI to sharpen the decision-making process rather than to produce the deliverable.
Question 5
According to the "pattern matcher" model, why does AI produce generic content when asked to "write creatively" or "capture our brand voice"?
Answer
The pattern matcher model explains this directly: AI excels at tasks with strong patterns in its training data, and "creativity" and "brand voice" are precisely the dimensions that require departing from conventional patterns. The model generates responses that match the statistical center of "creative" or "branded" writing in its training data — which is, by definition, the most common and therefore least distinctive output. Distinctiveness comes from departing from conventional patterns, and the model is trained to match patterns, not to depart from them. The solution is for the user to bring the distinctiveness themselves — providing specific examples of the desired voice, making deliberate choices about which AI-generated options to combine or adapt, and using the model for structural and vocabulary generation while reserving creative differentiation for human judgment.
Question 6
The amplifier model claims that AI tools benefit experienced, knowledgeable users more than inexperienced ones. Explain the reasoning behind this claim.
Answer
The amplifier model describes AI as a system that amplifies the quality of your input. Experienced, knowledgeable users bring higher-quality input: they provide more specific and accurate context, they can evaluate whether the output is good more reliably (they have domain expertise to catch errors), they understand the task well enough to provide useful examples and constraints, and they can iterate intelligently because they know why something is wrong. Inexperienced users often lack the domain knowledge to evaluate output quality, provide poor context because they do not know what matters, and cannot identify errors precisely because they do not know the domain well enough to recognize them. The AI amplifies both good and bad input — so more knowledge produces better results, and less knowledge produces less reliable results with less ability to detect the unreliability.
Question 7
What is the key flaw in the "AI as oracle" mental model, and what does it lead users to do (or not do) in practice?
Answer
The key flaw is conflating fluency with accuracy. Oracles give correct answers by definition. AI tools give confident, fluent answers — but confidence and fluency are stylistic features produced by training on authoritative text, not indicators of accuracy. The model has no internal truth-checking mechanism. In practice, the oracle model causes users not to verify. They accept confident-sounding output without checking factual claims, following up on sources, or applying critical judgment about whether the answer makes sense. This is most dangerous in high-stakes domains (medical, legal, financial) and for time-sensitive information (anything after the training cutoff), but the verification failure mode applies across all domains.
Question 8
What makes the "AI as person" model particularly seductive compared to the other broken models?
Answer
The person model is especially seductive because modern language models are trained on human conversation and produce responses that genuinely feel human — they use names, they pick up conversational cues, they maintain apparent coherence across an exchange. Unlike the search engine or robot models, which describe behaviors that are easy to recognize as non-human, the person model describes surface features that really are present in the interaction. The sociality is not imagined; it is a consequence of training. What is missing is everything below the surface: persistent memory, genuine preferences, stakes in the outcome, awareness of unstated context, and any meaningful sense of relationship accumulation. These absences are easy to forget precisely because the surface experience feels so human.
Question 9
You are reviewing a colleague's workflow. They describe their AI use as follows: "I write a very detailed, precise prompt, and if the output is wrong, I rewrite the prompt to be more precise. I keep tightening the instructions until the output is right." Which broken model is driving this approach, and what would you suggest instead?
Answer
This is the **AI as robot** model — the belief that precise instructions determine output, and that output failures are instruction failures. The robot model undervalues context and examples relative to instruction precision. Rather than continuing to tighten abstract instructions, your colleague should try:
(a) providing concrete examples of the desired output ("here is an example of the format and level of detail I want");
(b) adding context about the purpose and audience rather than just refining instruction wording; and
(c) considering whether the task structure itself needs to change — for example, breaking a complex task into clearer sub-tasks rather than adding more qualifications to a single complex prompt.
The robot model's failure mode is spending enormous time on instruction precision when examples and context are much more efficient paths to better output.
Question 10
What are the four steps of the Model Diagnostic, and when should you run it?
Answer
The four steps are:
(1) Describe the surprise specifically — what was the precise expectation and what was the precise outcome.
(2) Identify the mental model that generated the expectation — which frame was operating.
(3) Look for the mechanism — using Chapter 2 knowledge, identify the most likely mechanical explanation for the gap (missing context, training cutoff, pattern mismatch, context window, etc.).
(4) Update the model — write a specific one- or two-sentence update to your mental model or practice that would prevent this gap in the future.
Run the Model Diagnostic whenever something surprising happens — either the AI performs much better or much worse than expected. Run it proactively when you notice recurring patterns (output is consistently off in a particular direction), when you have adopted a new tool or a significantly updated version, and periodically (quarterly) as a check-in on model accuracy.
Question 11
How does the mental model you hold affect what you do when AI output is wrong or inadequate?
Answer
Mental models directly shape the diagnostic questions you ask when output fails. If you hold the oracle model, wrong output is surprising and hard to explain — you may simply try again without changing anything, or conclude the AI "doesn't know" this topic. If you hold the robot model, wrong output means the instructions were imprecise — you focus on rewording. If you hold the replacement model, wrong output means the AI is not ready to do the task. If you hold the brilliant intern model, wrong output means context was missing — you ask what you failed to provide. If you hold the pattern matcher model, wrong output means the task was outside the model's strong patterns — you consider providing more examples or being more specific. The model you hold determines whether your response to failure leads to learning and improvement, or to frustration and repetition.
Question 12
Alex's marketing content was described as "generic" despite being technically correct. The brilliant intern model would diagnose this differently than the replacement model. What diagnosis would each model produce?
Answer
**Replacement model diagnosis:** The AI is not capable of producing branded content. It is a tool with limits, and this is one of them. There is nothing Alex can do within the AI workflow to fix this — she either accepts generic content or goes back to human copywriting.
**Brilliant intern model diagnosis:** The intern has never worked with this brand and does not know what makes it distinctive. Alex has not provided that information. The intern can only produce what is statistically conventional given the brief — because that is all they know about "good marketing content." The fix is to provide the brand-specific information: examples of content that has felt distinctively on-brand, explicit characterizations of what the brand's voice avoids, specific vocabulary choices, tone references. The intern can then calibrate to the brand rather than to generic patterns.
The replacement model leads to giving up. The brilliant intern model leads to investing in better context — which is the correct response.
Question 13
What does "holding models lightly" mean, and why is it an important practice for AI users specifically?
Answer
Holding models lightly means treating mental models as working hypotheses rather than fixed truths — applying them, observing outcomes, and updating them when observations diverge from predictions. It means being willing to revise your beliefs about what AI can and cannot do when evidence contradicts your current model. For AI users specifically, this is important for two reasons. First, AI capabilities are genuinely changing — rapidly — which means models that were accurate in 2023 may be partially outdated by 2025. A user who holds their models too firmly will miss significant capability improvements and may apply skepticism (or trust) that is no longer calibrated to the current generation of tools. Second, the range of tasks and contexts in which any individual uses AI is diverse enough that no fixed model will be accurate across all of them — the appropriate calibration for coding assistance is different from the appropriate calibration for strategic advice, which is different from creative writing. Holding models lightly enables context-sensitive calibration rather than blanket application of fixed expectations.Question 14
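The apply/observe/update loop can be made concrete. As an illustration only (not from the chapter), here is a minimal Python sketch of a mental model held as a working hypothesis: you record whether each prediction held, and a simple rule flags the model for revision when recent misses pile up. The class name, window size, and threshold are all invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class WorkingModel:
    """A mental model held as a hypothesis: apply it, observe outcomes, update it."""
    name: str
    outcomes: list = field(default_factory=list)  # True = the prediction held

    def observe(self, prediction_held: bool) -> None:
        """Record whether the model's prediction matched what actually happened."""
        self.outcomes.append(prediction_held)

    def needs_update(self, window: int = 10, threshold: float = 0.3) -> bool:
        """Flag the model for revision when recent misses exceed the threshold."""
        recent = self.outcomes[-window:]
        if not recent:
            return False
        miss_rate = recent.count(False) / len(recent)
        return miss_rate > threshold

# Example: an "oracle" expectation that fails often enough to warrant revision.
oracle = WorkingModel("AI as oracle")
for held in [True, False, False, True, False]:
    oracle.observe(held)
print(oracle.needs_update())  # 3 misses in 5 observations -> True
```

The threshold stands in for judgment: the point is not the number, but that divergence between prediction and outcome is noticed and triggers revision rather than repetition.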
Question 14
Which productive mental model is most directly connected to the "chain-of-thought" prompting technique from Chapter 2? Explain the connection.
Answer
The **AI as thinking partner** model is most directly connected to chain-of-thought prompting. Chain-of-thought prompting asks the model to show its reasoning before giving an answer — essentially asking it to think out loud. This is the behavior of a thinking partner: making the reasoning process explicit so that it can be examined, evaluated, and built upon. The thinking partner model suggests that the dialogue and reasoning process are themselves valuable, not just the final output. Chain-of-thought prompting operationalizes this by making intermediate reasoning visible. The connection also runs through the amplifier model: by asking the model to reason step by step, you are providing a richer context (the intermediate steps) that shapes subsequent generation — amplifying the quality of reasoning by structuring how it proceeds.
Question 15
A new team member tells you: "I've been using AI tools for a few months, and I feel like I've hit a ceiling. The quality of what I get hasn't improved in weeks even though I'm putting in more effort." Diagnose this situation using the mental model framework and suggest an approach.