Chapter 6 Exercises: Working in Loops, Not Lines

These exercises build practical iteration skill. The most important exercises require you to actually run iterations with an AI tool — not just read about them. Block time for the multi-round exercises; some will take 30-60 minutes to complete properly.


Part A: Foundational Iteration Skills (Exercises 1-5)

Exercise 1: The Diagnostic First Response

This exercise develops the skill of reading first responses diagnostically — as information about what you did not communicate — rather than as judgments about quality.

Step 1: Choose a writing task from your current work: an email you need to write, a document section to draft, a summary to create.

Step 2: Write a first prompt that you think is reasonably good but not exhaustively detailed: two to three sentences at most.

Step 3: Read the response fully before typing anything.

Step 4: Before iterating, write down your diagnostic evaluation:

  - What did the AI assume that I did not intend?
  - What context was clearly missing from my prompt?
  - What was actually right about the response?
  - What specifically needs to change?

Step 5: Write your refinement prompt based specifically on the diagnostic. Note which iteration type you are using (Clarification, Constraint, Direction Change, Layering, or Self-Critique).

Step 6: Compare the second response to the first. How specifically did the diagnostic evaluation improve your refinement?

Reflection: Was there anything in your diagnostic evaluation that would have been hard to include in the original prompt because you did not know to say it until you saw the first response?


Exercise 2: The Five Types of Iteration Drill

Practice each of the five iteration types deliberately.

Setup: Start with this single baseline prompt on a task from your work or on the provided example:

Baseline prompt: "Write a paragraph explaining why good communication matters in the workplace."

Now practice all five iteration types starting from whatever response you receive:

  1. Clarification iteration: Respond to the baseline output as if the AI fundamentally misunderstood your purpose. Clarify what you actually wanted.

  2. Constraint iteration: Add a specific constraint the baseline output violates (length, audience level, format).

  3. Direction change iteration: Ask for the same content but reframed entirely for a different purpose (e.g., "Actually, write this as the opening to an employee training course, not as a general paragraph").

  4. Layering iteration: Ask to add depth to something specific in the baseline output (e.g., "Add three concrete examples that illustrate the main point").

  5. Self-critique iteration: Ask the AI to evaluate the quality of the baseline output against specific criteria and improve the weakest element.

Write up: For each iteration type, describe what changed in the output and what the iteration taught you about when each type is appropriate.


Exercise 3: The Evaluation Quality Exercise

Most poor iterations result from poor evaluations. This exercise builds evaluation precision.

Step 1: Get an AI response to any writing or analysis prompt.

Step 2: Write two evaluations of the response:

Vague evaluation: "This is okay but not quite right. It's a bit long and the tone isn't what I wanted."

Specific evaluation: List at least five specific, actionable observations about the response. Each should identify exactly what is wrong and what "right" would look like. Example format:

  - "The opening sentence assumes the reader knows what [X] is. Open with a brief definition instead."
  - "Bullet point 3 is the strongest — build on this rather than moving away from it."
  - "The paragraph length is inconsistent. Normalize all body paragraphs to 3-4 sentences."

Step 3: Write a refinement prompt based on your specific evaluation. Count how many of your five specific observations are addressed in the refinement.

Step 4: Run the refinement. Compare the result to what you would have gotten from the vague evaluation.

Conclusion: What is the relationship between the specificity of your evaluation and the quality of the next iteration?


Exercise 4: The Structural First Principle

Practice the zoom technique — validating structure before generating content.

Step 1: Choose a longer-form task: a report section, a proposal, a detailed plan.

Step 2: Instead of asking for the full document, ask for the structure only: "Please provide an outline for [document] with: section titles, a one-sentence description of what each section will accomplish, and an indication of approximate length."

Step 3: Evaluate the outline. Ask these questions:

  - Are all the necessary components present?
  - Is anything included that should be cut?
  - Is the sequencing logical?
  - Does each section's purpose statement clearly describe its role in the overall argument or document?

Step 4: Refine the outline until you are satisfied. This may take 1-2 iterations.

Step 5: Only after the outline is validated, request the full draft one section at a time.

Reflection: Compare the final output to what you would have gotten by asking for the full document in one shot. Was the extra step of validating the structure worth it?


Exercise 5: The Comparison Experiment

This exercise generates empirical evidence from your own experience that iteration improves output quality.

Step 1: Choose a task where you can produce two versions.

Version A — One-shot: Write the best single prompt you can and accept the first response without iteration.

Version B — Iterated: Write a baseline prompt (no more refined than Version A) and iterate 3-4 times using the full loop.

Step 2: Evaluate both outputs blindly if possible — wait a day and then read them without knowing which is which. Rate each on: accuracy/correctness, completeness, fit for purpose, quality of writing.

Step 3: Compare your scores. Note the total evaluation time for each approach.

Step 4: Calculate the "quality per minute" for each approach. Did the iteration investment pay off relative to the time cost?

Most users find that iteration significantly improves quality with only moderate time investment, because iteration time replaces editing and rewriting time rather than simply adding to it.


Part B: Task-Type Iteration Practice (Exercises 6-10)

Exercise 6: Writing Iteration — The Full Five-Round Exercise

This is the central writing iteration exercise. Complete all five rounds.

The task: Write a 300-400 word opinion piece for your professional context — the kind of thing you might post to LinkedIn or a company blog.

Round 1: Prompt AI with your topic, your rough angle, and your target audience. Request a first draft.

Round 2: Evaluate the structure and overall argument only (not the language yet). Iterate to improve the structure and argument.

Round 3: Evaluate the language, tone, and voice. Is this how you actually write? Iterate to make it sound like you.

Round 4: Identify the weakest paragraph — the one you would cut or substantially rewrite in an edit. Ask AI to rewrite it, giving specific guidance on what would make it stronger.

Round 5: Self-critique pass. Ask: "From the perspective of a critical reader, what is the most questionable claim or the weakest link in the argument? Rewrite to address it."

Final: Apply the 3-pass rule yourself (substance, fit, language). Note what you changed in the human editing step.

Document the full five rounds in your prompt-response-reflection journal.


Exercise 7: Code Iteration — Building a Function with Progressive Refinement

This exercise is for developers. Non-developers should skip to Exercise 8.

The task: Build a function that validates a configuration dictionary against a schema.
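As a reference point, here is one minimal shape the task could take, sketched in Python. The function name, schema format, and error-reporting style are all illustrative assumptions, not requirements of the exercise:

```python
def validate_config(config: dict, schema: dict) -> list[str]:
    """Validate a configuration dict against a simple schema.

    `schema` maps each key to {"type": <type>, "required": <bool>}.
    Returns a list of error messages; an empty list means the config is valid.
    """
    errors = []
    for key, rules in schema.items():
        if key not in config:
            if rules.get("required", False):
                errors.append(f"missing required key: {key}")
            continue
        expected = rules["type"]
        # Note: isinstance(True, int) is True -- exactly the kind of
        # edge case Round 2 asks you to hunt for.
        if not isinstance(config[key], expected):
            errors.append(
                f"{key}: expected {expected.__name__}, "
                f"got {type(config[key]).__name__}"
            )
    return errors
```

A deliberately under-specified baseline like this (no nested schemas, no unknown-key detection) leaves room for the later rounds to do real work.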

Round 1: Write a basic prompt specifying the function signature and core purpose. Do not over-specify.

Round 2: Run the generated code mentally (or actually). Identify at least two edge cases not handled.

Round 3: Ask the AI to handle the edge cases, providing test cases: "The current implementation doesn't handle [case 1] (should return [expected behavior]) or [case 2] (should return [expected behavior])."

Round 4: Ask for a self-critique: "What additional edge cases or error conditions should this function handle that are not currently tested?"

Round 5: Request a final version with complete docstring, type annotations, and at least four unit test cases.

Evaluate the final version: Would you use this in production code? What would you change? Make those changes yourself.


Exercise 8: Research Iteration — Structured Information Gathering

The task: Use AI to help you explore a topic you want to learn about professionally. The topic should be one where you can verify claims.

Round 1: Ask for an overview of the topic and the three to five most important sub-questions to understand it.

Round 2: For the most important sub-question, ask for a more detailed explanation. Ask the AI to explicitly flag any claims it is less certain about.

Round 3: Take the flagged claims and spend 15 minutes finding primary sources. Bring back what you found: "I checked your claim about [X]. The actual figure is [Y] per [source]. Please update your summary to reflect the correct information."

Round 4: Ask for a final synthesis: "Given everything in this conversation, what is the most important thing to understand about [topic] for someone in my professional context?"

Reflection: How did the structure of this iteration change how you engaged with the research topic versus just reading an AI summary uncritically?


Exercise 9: Analysis Iteration — The Steel-Manning Technique

The task: Use AI to analyze a real decision you are facing (professional or otherwise) and pressure-test the analysis.

Round 1: Describe the decision context and ask for an analysis with a recommended course of action.

Round 2: Ask for the steel-man: "Now argue the strongest possible case for the opposite recommendation. What assumptions underlie the recommendation, and which ones are most fragile?"

Round 3: Ask for synthesis: "Given the analysis and the counter-argument, what is the most defensible recommendation, and what conditions would change it?"

Round 4: Ask for a blind spot check: "What is the most important thing I have NOT mentioned that could affect this decision? What would you ask me to find out before finalizing a recommendation?"

Evaluate: Is the final analysis more robust than what you would have produced without the steel-manning steps? What did the process reveal that you had not considered?


Exercise 10: Multi-Session Project Iteration

This exercise builds multi-session continuity skills. Choose a project that will genuinely take multiple sessions (minimum three sessions over multiple days).

Session 1: Start the project. Develop the structure, establish context, and produce a first-pass output for the most important component.

End of Session 1: Ask AI to produce a project summary capturing: project purpose, audience, requirements established, decisions made, current state of output.

Save: Store the session summary and the current output version in your file system.

Session 2: Start with a context-priming prompt: paste the session summary and clearly state: "Continuing from the previous session. [Current task description]."

During Session 2: Evaluate the quality of continuity. Does the AI pick up where you left off? What context was lost? What needed to be re-established?

Session 3 and beyond: Refine your context priming based on what you learned in Session 2.

Deliverable: The completed project output, plus a one-page reflection on what the multi-session continuity challenges were and how you addressed them.


Part C: Anti-Pattern Recognition and Override (Exercises 11-15)

Exercise 11: Identifying Your Primary Anti-Pattern

Based on the four iteration anti-patterns described in the chapter (Infinite Loop, Single-Shot Ego, Copy-Paste Trap, Over-Iteration Spiral), identify which one you are most prone to.

Self-assessment:

  1. Think of the last five significant AI interactions you had.
  2. Did any of them involve iterating many times without meaningful progress? (Infinite Loop)
  3. Did any involve spending a long time on the first prompt and then accepting output without meaningful iteration? (Single-Shot Ego)
  4. Did any involve directly using AI output in a deliverable without review? (Copy-Paste Trap)
  5. Did any involve continuing to iterate an already-good output past the point of useful improvement? (Over-Iteration Spiral)

Design a counter-habit:

For your primary anti-pattern, design a specific behavioral counter-habit. For example:

  - Infinite Loop counter: "After three iterations with no improvement, I stop and rewrite the framing from scratch."
  - Copy-Paste Trap counter: "I never paste AI text into a final document without reading it first with the 3-pass rule."

Write your counter-habit as a specific, observable behavior change.


Exercise 12: The When to Start Over Test

Practice the decision between continuing to iterate and starting over.

Step 1: Get a first response on any task. Iterate once.

Step 2: Apply the "start over" test: - Have you given contradictory instructions in this conversation? - Is the fundamental framing/premise of the original prompt wrong? - Have you iterated more than five times without meaningful progress? - Is the conversation context very long and early instructions being ignored?

Step 3: Based on the test, either continue or start over. If you start over, write a new first prompt that captures everything you learned from the failed iteration.

Compare: How does the "start over" output quality compare to the last iteration from the original conversation?

When you start over, what specifically changed in your new first prompt based on what you learned from the failed iterations?


Exercise 13: Iteration Budget Setting

Develop your personal iteration budget for your five most common AI-assisted task types.

Step 1: For each of your five most common AI task types, review the last three to five times you completed that task with AI assistance.

  - How many rounds did it typically take?
  - How many rounds were productive (meaningful improvement)?
  - At what round did you hit diminishing returns?

Step 2: Set a budget for each task type: "For this task type, X rounds is my expected budget. If I exceed X rounds without completion, I reassess the framing."

Step 3: For one week, track your actual iterations against your budget. Note cases where you exceeded budget and whether the extra rounds were productive.

Step 4: Adjust budgets based on empirical observation.


Exercise 14: The Preservation Prompt Technique

Practice explicitly managing what to preserve versus what to change during iteration.

This exercise addresses the "Refinement That Changes Everything" anti-pattern.

Step 1: Get a first response that is partially good — some elements are right, some need work.

Step 2: Before iterating, explicitly identify and write down what to preserve and what to change:

  - "Keep: [specific element 1]"
  - "Keep: [specific element 2]"
  - "Change: [specific element 1] — replace with [guidance]"
  - "Change: [specific element 2] — because [reason] and instead [guidance]"

Step 3: Write your refinement prompt that explicitly states both what to preserve and what to change.

Compare: How does explicit preservation instruction affect iteration efficiency versus a refinement prompt that only says what to change?


Exercise 15: The Full Iteration Project

This is the capstone exercise for Chapter 6. It requires a real work task and demonstrates everything from the chapter.

Choose a real work task that produces a significant output: a proposal, a report section, a campaign brief, a technical specification, a training document — something you actually need to produce.

Complete the task using full iteration discipline:

  1. Do the AI Workflow Audit step: confirm this is an appropriate task for AI assistance
  2. Write a reasonable first prompt (spend no more than 3 minutes crafting it)
  3. After the first response, write a full diagnostic evaluation before typing anything
  4. Identify the iteration type for each subsequent round
  5. Explicitly state what to preserve in each iteration
  6. Apply the zoom technique if the output is long-form
  7. Use a self-critique iteration as your penultimate round
  8. Apply the 3-pass rule to the final output before using it

Document the full process:

  - Number of rounds
  - Iteration types used
  - What each round changed
  - Time spent on AI interaction vs. human editing
  - Final assessment: did iteration produce better output than a single-shot attempt would have?

This documentation becomes the foundational entry in your prompt-response-reflection journal for this chapter.