Chapter 6 Quiz: The Iteration Mindset


Question 1

What is the "linear fallacy" in AI use, and why is it a fallacy?

Answer: The linear fallacy is the belief that getting from idea to AI-assisted output should follow a straight line: first prompt → desired output. It is a fallacy because it ignores the fundamental information asymmetry in any first interaction. When you start an AI conversation, you know many things the AI does not: the specific context the output will be used in, the particular audience, the implicit standards that make an output "good" for this specific purpose, your own preferences (which you may not even be fully conscious of until you see an output that violates them), and the constraints you are working within that you did not think to articulate. The first prompt cannot communicate all of this. Some of it cannot be articulated until you see a draft that is wrong in a specific way. The linear fallacy expects the AI to read your mind; the iteration mindset recognizes that feedback communicates what the first prompt could not.

Question 2

Describe the four steps of the core iteration loop and explain why the evaluation step is the most important.

Answer: The four steps are:

1. **Prompt:** Provide your initial request with as much relevant context as you can articulate. A thoughtful first prompt produces a better starting point, but do not spend excessive time on it.
2. **Evaluate:** Read the response critically and analytically. Identify specifically what is right, what is wrong, what assumptions were unintended, and what the gap is between the output and what you need. Evaluation should be specific, not vague.
3. **Refine:** Provide specific feedback and direction. The most effective refinements identify what to change, what to keep, and why — not just what was wrong.
4. **Repeat:** Continue until the output meets your needs or until signals indicate you should start over.

The evaluation step is the most important because the quality of your refinement is directly proportional to the quality of your evaluation. A vague evaluation ("this is not quite right") produces a vague refinement that yields minimal improvement. A specific evaluation ("the tone is too formal, the second paragraph assumes technical knowledge my audience does not have, and I need three options, not one") produces a targeted refinement that yields significant improvement. Good evaluation is the mechanism by which the loop converges.
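
The four steps can be sketched as a control loop. This is a hypothetical illustration, not anything prescribed by the chapter: `ask_model` is a stub standing in for whatever AI interface you use, and the stop condition is your own evaluation, not anything automatic.

```python
def ask_model(conversation):
    # Hypothetical stand-in for a real AI call: returns a draft that
    # reflects how many turns of context it has seen.
    return f"draft after {len(conversation)} turn(s)"

def iterate(first_prompt, evaluate, max_rounds=5):
    """Run prompt -> evaluate -> refine -> repeat until `evaluate`
    returns None (the draft meets your needs) or the budget runs out."""
    conversation = [first_prompt]              # 1. Prompt
    draft = ""
    for _ in range(max_rounds):                # 4. Repeat
        draft = ask_model(conversation)
        feedback = evaluate(draft)             # 2. Evaluate: be specific
        if feedback is None:
            return draft                       # converged
        conversation.append(feedback)          # 3. Refine with specific feedback
    return draft                               # budget exhausted: reassess framing

# Specific evaluation drives convergence; here the evaluator accepts the
# third draft and rejects earlier ones with concrete feedback.
result = iterate(
    "Summarize the report for executives.",
    lambda draft: None if "3 turn" in draft else "Too formal; cut to 100 words.",
)
```

The point of the sketch is that `evaluate` is the human step: the more specific its feedback, the fewer rounds the loop needs.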

Question 3

Describe the five types of iteration and give a brief example of when each is appropriate.

Answer: The five types are:

1. **Clarification iteration:** The AI misunderstood your request — the output went in an unexpected direction. Use when the first response reveals a fundamental misread of the prompt. Example: You asked for a "brief" and got a legal-style formal document when you wanted a marketing brief.
2. **Constraint iteration:** The output is in the right direction but violates a requirement you did not clearly specify. Use when the substance is right but parameters (length, format, tone, technical level) are wrong. Example: The summary is well-written but needs to fit on one slide — maximum 100 words.
3. **Direction change iteration:** The output is technically correct, but you have realized the approach itself is wrong for your purpose. Use when the framing, angle, or purpose needs to fundamentally change. Example: After seeing a product requirements document, you realize you need a business case for executives instead.
4. **Layering iteration:** The output is a good starting point and you want to add depth, specificity, or additional elements. Use when structure and approach are right and you want to build on the foundation. Example: "Good outline. Now flesh out Section 3 and add a risk mitigation subsection."
5. **Self-critique iteration:** You ask the AI to evaluate and improve its own previous output. Use when the output is close to what you want and you want a systematic review. Example: "Review this draft critically. What are the three weakest sections? Rewrite them."

Question 4

What is the diagnostic function of the first response in an AI conversation?

Answer: The diagnostic function of the first response is to reveal what you did not communicate. A first response that misses your intent is not just a failure — it is information. It tells you specifically what the AI assumed (what it inferred about your intent), what context was missing from your prompt, and therefore what you need to add in the next turn. Experienced AI users learn to read first responses diagnostically rather than evaluatively. Instead of "that was wrong, this tool is bad," the mindset is: "that was wrong, and here is specifically what that tells me about what I need to add or clarify." This reframe makes the first response useful even when it is not good — it is the most efficient mechanism for surfacing what was missing from your initial framing.

Question 5

What are the four signals that indicate you should start over with a new prompt rather than continuing to iterate?

Answer: The four start-over signals are:

1. **Contradictory instructions have accumulated across multiple turns.** The conversation context has become confused by conflicting requirements, and the AI is working with an internally inconsistent set of constraints. A clean start with a better prompt is more efficient.
2. **The fundamental premise of the first prompt was wrong.** If you asked for a formal report and now need a casual blog post — a fundamentally different document for a different purpose — starting fresh with the correct framing from the beginning will produce better results than redirecting.
3. **You have iterated more than five or six times without meaningful progress.** If the output is not improving, the problem is usually in the framing or context of the request, not in the phrasing of individual iterations. Start fresh, incorporating what you have learned.
4. **The conversation context is very long and early instructions are being ignored.** In very long conversations, earlier context can effectively fade as the AI weights recent context more heavily. A fresh start with all current requirements clearly stated is better.

Question 6

What is the zoom technique and what problem does it solve?

Answer: The zoom technique is a structural iteration approach that starts with a broad scope and progressively narrows to specifics: begin by establishing the high-level structure or outline before generating detailed content. The problem it solves is the waste of investing heavily in detailed content that is built on the wrong structure. If you jump straight to generating a full document and the underlying structure is wrong — sections are in the wrong order, key components are missing, superfluous sections are included — all the work on the detailed content within the wrong structure is wasted. The zoom technique catches structural problems before they become expensive to fix. The pattern is: Ask for an outline → Evaluate and validate the structure → Only after the structure is right, request detailed content one section at a time → Zoom in further to specific subsections as needed. This is especially valuable for long-form content (reports, proposals), complex code (systems with multiple components), and any output where the structure itself is a significant deliverable.
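
The outline-first pattern can be sketched as follows. This is a hypothetical illustration: `ask_model` is a stub with canned responses, and `structure_ok` stands in for your human validation of the outline.

```python
def ask_model(prompt):
    # Hypothetical stand-in for a real AI call, with canned responses
    # so the zoom flow is visible.
    if prompt.startswith("Outline:"):
        return ["Problem", "Options", "Recommendation"]
    return f"detailed content for {prompt!r}"

def zoom_draft(topic, structure_ok):
    """Ask for an outline first; only generate section detail after a
    human validator (`structure_ok`) approves the structure."""
    outline = ask_model(f"Outline: {topic}")
    if not structure_ok(outline):
        raise ValueError("fix the structure before writing any content")
    # Structure validated: zoom in one section at a time.
    return {section: ask_model(f"Section '{section}' of {topic}")
            for section in outline}

doc = zoom_draft("migration plan", lambda outline: len(outline) >= 3)
```

The design choice the sketch encodes is that detail generation is unreachable until the structure check passes, which is exactly what makes structural problems cheap to fix.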

Question 7

Describe Elena's three-pass method and explain why she does not write any content during Pass 1.

Answer: Elena's three-pass method for consulting deliverables:

**Pass 1 (Structure and Argument):** Elena works with Claude to design the document structure — what sections will be included, in what order, and with what purpose. She does not write any content during this pass because she wants to validate the logical architecture of the document before investing in detailed content. The output of Pass 1 is a fully validated outline with purpose statements for each section.

**Pass 2 (Content Drafting):** Elena provides her verified research to Claude section by section. For each section, she asks Claude to draft content based on her provided research and to flag anything requiring data she has not supplied.

**Pass 3 (Quality and Polish):** Elena reads the full draft herself, marks sections needing adjustment, uses Claude for targeted rewrites, then runs a consistency check. She does a final human read-through before sending to the client.

She does not write content during Pass 1 because structure is a prerequisite for good content. Detailed content written into the wrong sections, in the wrong order, or serving the wrong argumentative purpose must either be discarded or substantially restructured. Validating structure before content generation prevents this waste and ensures every piece of content serves a clear, pre-validated purpose in the overall document.

Question 8

What is the "Refinement That Changes Everything" anti-pattern, and how do you prevent it?

Answer: The "Refinement That Changes Everything" is a variant of the Infinite Loop anti-pattern. It occurs when you make a seemingly small refinement that requires the AI to effectively regenerate most of the content. The new output is sometimes better, sometimes worse, and often just different. You then iterate to fix what changed, then again, cycling endlessly. It happens because your refinement did not specify what to preserve — only what to change. When the AI regenerates, it has no constraint on the parts you wanted to keep, so it changes them too. Prevention is explicit preservation. Before iterating, identify what is working and explicitly tell the AI to preserve it: "Keep the structure, the examples in Section 2, and the overall tone. Only rewrite the opening paragraph because [reason] — it needs [guidance]." Writing both "keep" and "change" instructions in every iteration that touches a complex output prevents the AI from regenerating parts you did not want changed.

Question 9

In Alex's seven-round campaign development, what was the specific function of Round 7 (the self-critique pass)?

Answer: Round 7 served as a senior-perspective quality review — asking Claude to evaluate the brief from a critical, expert standpoint rather than just producing or refining content. The specific prompt asked Claude to identify, "from a senior creative director's perspective," the two or three weakest parts of the brief — where strategic thinking was thinnest or where the easiest rather than the best choice was made. This produced two specific improvements: (1) the audience definition was identified as still demographic rather than psychographic, and (2) the campaign idea paragraph was identified as descriptive when it should be inspiring. Claude then rewrote both sections. The function of the self-critique pass as a penultimate round is to apply a different evaluative lens than you have been using throughout the iteration — a critical, outside perspective rather than an "is this better than the last version?" perspective. This often catches issues that step-by-step iteration misses because each step has been locally improving on the previous step without a global quality check.

Question 10

What is the 3-pass rule, and what does each pass check for?

Answer: The 3-pass rule requires that any AI-assisted output that goes into a real deliverable receive a three-part human review before being finalized.

**Pass 1 — Substance:** Are all the facts, claims, and arguments correct? This is the verification pass — checking that specific factual claims are accurate, that the logic is sound, and that nothing has slipped through from Zone 3 without verification.

**Pass 2 — Fit:** Does this match the specific context, audience, and purpose? AI-generated text that is technically correct may still miss the specific needs of your situation — the particular relationship with this audience, the constraints of this channel, the history of this project. This pass evaluates fit, not just correctness.

**Pass 3 — Voice:** Does this sound like you or your organization? Does the language feel natural? Even after extensive iteration, AI-generated text may retain subtle patterns that differ from how you actually write or how your organization communicates. This pass — done as if reading aloud — catches language that is technically acceptable but not authentically yours.

The 3-pass rule is non-negotiable for significant outputs because each pass catches a different type of issue that even extensive iteration may not have addressed.

Question 11

What is the "single-shot ego" anti-pattern, why does it happen, and what is the fix?

Answer: The single-shot ego anti-pattern involves spending a long time (often 15-20+ minutes) crafting an elaborate first prompt, then accepting whatever comes back with minimal iteration — either because the emotional investment in the first prompt makes iteration feel like failure, or because you expect that a great first prompt should produce a perfect first output. It happens because of a misconception about what determines output quality: the belief that first-prompt quality is the primary driver of final output quality. In reality, for complex tasks, iteration quality (the specificity and accuracy of your evaluations and refinements) has more impact than first-prompt quality. The fix is to write a good-enough first prompt (2-3 minutes maximum) rather than spending excessive time on prompt craft, then use the first response diagnostically to identify what is missing, and iterate from there. The time saved by not over-crafting the first prompt is reinvested in higher-quality iteration, which produces better final outputs with less total time spent.

Question 12

What is the iteration budget for "full document draft" and what does exceeding the budget signal?

Answer: The typical iteration budget for a full document draft is 3-5 rounds. At the end of this range, the draft should be usable, with remaining issues being judgment calls that the human author should make directly. Exceeding the budget (iterating more than 5 rounds without reaching a usable draft) signals one of two things:

1. **The framing or approach needs to change.** If the document is not converging, the problem is usually not that each iteration is being done poorly — it is that the underlying framing, purpose, audience definition, or structural approach is wrong. More iterations of the same approach will not converge.
2. **You are in the Over-Iteration Spiral.** If the document is already good enough but you are continuing to iterate, either from perfectionism or from avoidance of making the final judgment calls yourself, the solution is to stop iterating and make those judgment calls directly.

The iteration budget is a calibration signal, not a rule. Its practical function is to create a checkpoint: "I have exceeded my expected budget — should I reassess the framing, or accept the current output and finalize it myself?"

Question 13

How does the prompt-response-reflection journal differ from a simple prompt log, and what specific benefit does the "reflection" column provide?

Answer: A simple prompt log records what you asked and what you got. The prompt-response-reflection journal adds a third column — reflection — that makes the log a learning tool rather than just a record. The reflection column captures: what this interaction tells you about how to prompt better next time, what iteration type would have improved the output, and what the successful or failed iteration taught you about the initial framing. The specific benefit of the reflection column is that it externalizes and makes explicit what would otherwise remain implicit intuition. Without reflection, you accumulate experience but may not extract the generalizable lessons. With reflection, you are deliberately building a structured model of what works and why in your specific AI use context. After 30 entries, the reflection column reveals patterns: recurring framing mistakes, prompt types that consistently work, domains where the AI needs more context than you typically provide. This pattern recognition is the foundation of continuously improving AI use skill — it is how you build calibrated judgment rather than just accumulating anecdotal experience.
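
One lightweight way to keep such a journal is as structured records rather than free text, so the reflection column can be queried for patterns later. This is a sketch; the field names and the example entry are illustrative, not prescribed by the chapter.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class JournalEntry:
    prompt: str            # what you asked
    response_summary: str  # what you got, summarized
    reflection: str        # what this teaches you about prompting next time
    iteration_type: str    # which of the five iteration types would have helped
    day: date = field(default_factory=date.today)

journal = [
    JournalEntry(
        prompt="Write a brief for the product launch.",
        response_summary="Formal, legal-style document.",
        reflection="'Brief' is ambiguous; name the document genre explicitly.",
        iteration_type="clarification",
    ),
]

def recurring_gaps(entries):
    """Count entries per iteration type; after ~30 entries, the counts
    surface recurring framing mistakes."""
    counts = {}
    for entry in entries:
        counts[entry.iteration_type] = counts.get(entry.iteration_type, 0) + 1
    return counts
```

Keeping the reflection as its own field is what turns the log into a learning tool: it is the column you aggregate, not just reread.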

Question 14

Raj's code architecture discussion in the chapter involved five rounds. What specifically did the iteration add that a single-round request for "design a background job processing system" would not have produced?

Answer: A single-round request for "design a background job processing system" would have produced a generic design recommendation, likely covering the same components but without the specific decision rationale tailored to Raj's context. The five rounds of iteration added:

1. **Problem scoping (Round 1):** Identifying the specific architectural decisions that mattered — not a generic design, but the key questions for this specific situation.
2. **Context-specific trade-off analysis (Round 2):** The Celery vs. RabbitMQ trade-off was resolved with Raj's specific constraints (500-2,000 jobs/day, operational simplicity preference) in mind, not in the abstract.
3. **Specific failure mode investigation (Round 3):** The partial-failure scenario (API calls made but database not updated) is a specific problem Raj identified after seeing the first structural design. This level of specificity was not in the first prompt and could not have been.
4. **Working starter code (Round 4):** Requesting the skeleton implementation only made sense after the architecture decisions were made — the code could be grounded in the specific choices that emerged from the architectural discussion.
5. **Edge case identification (Round 5):** The self-critique pass asked specifically about failure scenarios at scale — applying pressure to the design that a one-shot request would not have generated.

The cumulative effect is that the output (architecture decision record + validated starter code) was both more appropriate for Raj's specific situation and more complete in addressing failure modes.
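
The chapter does not show Raj's actual starter code, but the Round 3 partial-failure concern can be sketched with a progress-marker pattern: durably record that the external call completed before attempting the database update, so a retried job resumes instead of repeating the call. Everything here (`run_job`, the in-memory `jobs` table, the simulated failure) is a hypothetical illustration.

```python
jobs = {}  # stand-in for a durable job-state table in the database

def run_job(job_id, call_api, update_db):
    """Resume a job from its recorded state so a retry after a partial
    failure does not repeat the external API call."""
    state = jobs.setdefault(job_id, "pending")
    if state == "pending":
        call_api(job_id)              # external side effect
        jobs[job_id] = "api_done"     # durable marker: API already called
    if jobs[job_id] == "api_done":
        update_db(job_id)             # may fail; a retry resumes here
        jobs[job_id] = "complete"
    return jobs[job_id]

# Simulate the partial failure: the DB update fails once, then the retry
# completes the job without calling the API a second time.
calls = []
attempts = {"n": 0}

def api(job_id):
    calls.append(("api", job_id))

def flaky_db(job_id):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("db down")

try:
    run_job(1, api, flaky_db)
except RuntimeError:
    pass
result = run_job(1, api, flaky_db)  # retry
```

This is the kind of edge case a one-shot design request tends to omit and a dedicated failure-mode round surfaces.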

Question 15

Why does AI interaction in multi-session projects require explicit continuity management, and what is the recommended strategy for providing it?

Answer: AI chat sessions do not carry context across separate conversations. Each new conversation starts fresh — with no memory of previous sessions, the decisions made, the constraints established, or the work produced. Without explicit continuity management, each new session must rebuild context through back-and-forth, wasting time and potentially losing important context from previous sessions. The recommended strategy has three components:

1. **Session summaries:** At the end of each session, ask the AI to produce a summary capturing the project purpose, audience, requirements established, decisions made, and current state of the output. Save this summary to your file system.
2. **Versioned drafts:** Save each significant iteration to a named file (project-brief-v1.md, project-brief-v2.md). Chat history is not a reliable record; versioned files are.
3. **Context priming:** Start each new session with a context-setting prompt that pastes the session summary and clearly states what is being worked on now. This front-loaded context setup is more efficient than rebuilding context through conversational back-and-forth.

For complex projects, maintain a "living brief" document that is updated after each session with all the project context — purpose, audience, requirements, decisions, current state. Paste this document at the start of each new session as a reliable context primer.
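
A minimal sketch of the mechanics, assuming plain local files; the function names and file layout (`session-summary.md`, `draft-vN.md`) are illustrative conventions, not from the chapter.

```python
import tempfile
from pathlib import Path

def save_session(workdir, summary, draft):
    """Persist the session summary and a new versioned draft file."""
    workdir = Path(workdir)
    workdir.mkdir(parents=True, exist_ok=True)
    (workdir / "session-summary.md").write_text(summary)
    version = len(list(workdir.glob("draft-v*.md"))) + 1  # next version number
    (workdir / f"draft-v{version}.md").write_text(draft)
    return version

def priming_prompt(workdir, task):
    """Build the context-setting prompt that opens the next session."""
    summary = (Path(workdir) / "session-summary.md").read_text()
    return f"Context from previous sessions:\n{summary}\n\nToday's task: {task}"

# Two sessions of work, then a primer for the third.
workdir = tempfile.mkdtemp()
save_session(workdir, "Purpose: launch brief. Audience: executives.", "Draft one.")
save_session(workdir, "Purpose: launch brief. Audience: executives.", "Draft two.")
prompt = priming_prompt(workdir, "Polish the recommendation section.")
```

The versioned files, not the chat history, are the durable record; the priming prompt is simply the saved summary pasted back in front of the new task.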