Chapter 11: Exercises

Tier 1 --- Recall and Basic Understanding (Exercises 1--6)

Exercise 1: Feedback Loop Identification

Read the following abbreviated conversation between a developer and an AI assistant. Identify each stage of the feedback loop (Prompt, Response, Evaluation) for each turn.

Conversation:

Developer: Write a function that calculates compound interest.
AI: [generates function with basic calculation]
Developer: This is correct for annual compounding, but I need it to support
           monthly, quarterly, and daily compounding as well.
AI: [generates updated function with compounding frequency parameter]
Developer: Good. Now add input validation --- principal and rate should be
           positive, and years should be a positive integer.
AI: [generates final version with validation]

Task: Label each developer message as "Initial Prompt" or "Follow-up Prompt (informed by evaluation of...)". For each AI response, state what aspect of the requirements it addresses.
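For reference, the final AI response in this exchange might plausibly look like the sketch below. The function name, parameter names, and error messages are illustrative assumptions, not a canonical answer:

```python
def compound_interest(principal, annual_rate, years, periods_per_year=1):
    """Return the final amount after compound interest.

    periods_per_year: 1 = annual, 4 = quarterly, 12 = monthly, 365 = daily.
    """
    # Validation requested in the developer's third message.
    if principal <= 0:
        raise ValueError("principal must be positive")
    if annual_rate <= 0:
        raise ValueError("annual rate must be positive")
    if not isinstance(years, int) or years <= 0:
        raise ValueError("years must be a positive integer")
    # Compounding-frequency support requested in the second message.
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
```

Note how each requirement maps to a specific turn of the conversation, which is exactly what the labeling task asks you to trace.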


Exercise 2: CMI Phase Labeling

Classify each of the following developer follow-up messages as Critique, Modify, or Improve:

  1. "The function returns a float, but for currency we need Decimal to avoid floating-point errors."
  2. "Add a parameter that lets the user specify the output format: raw number, formatted string, or dictionary with breakdown."
  3. "The variable name r is unclear. Rename it to annual_rate."
  4. "This implementation is O(n^2). Can you use a set lookup to bring it to O(n)?"
  5. "The error messages are too technical. Make them user-friendly."
  6. "Add logging for each major step so we can debug in production."

Exercise 3: Quality Dimension Matching

For each of the following scenarios, identify the top three quality dimensions (from Section 11.6) that should take priority:

  1. A one-off data migration script that will run once and be deleted.
  2. A public REST API that external developers will integrate with.
  3. A prototype to demo a feature idea to stakeholders.
  4. A financial calculation library used in a banking application.
  5. An internal CLI tool used by your team of three developers.

Exercise 4: Terminology Matching

Match each term to its correct definition:

Terms:

  A. Progressive disclosure
  B. Incremental building
  C. Conversation branching
  D. Infinite refinement trap
  E. Scaffold-and-fill

Definitions:

  1. Exploring an alternative approach without abandoning the current one
  2. The tendency to keep refining past the point of useful improvement
  3. Revealing requirements to the AI in stages
  4. Starting simple and layering complexity on a working foundation
  5. Generating the overall structure first, then implementing each part

Exercise 5: Follow-Up Pattern Recognition

Identify which follow-up prompt pattern (from Section 11.5) each example represents: Targeted Fix, Feature Addition, Refactor Request, "What If" Exploration, or Quality Gate.

  1. "Split the process_payment function into validate_payment, execute_payment, and record_payment without changing external behavior."
  2. "The date parsing fails for ISO 8601 dates with timezone offsets. Fix the regex to handle +05:30 style offsets."
  3. "Add CSV export capability to the report generator. Keep the existing PDF export unchanged."
  4. "Review this code for SQL injection vulnerabilities and fix any you find."
  5. "What would this look like if we used GraphQL instead of REST? Just sketch the schema, don't implement it."

Exercise 6: Divergence Signals

For each scenario, identify the type of divergence (Wrong technology, Over-engineering, Under-engineering, Misunderstood requirements, or Style mismatch):

  1. You asked for a simple config parser and received a 200-line class with plugin architecture and abstract factories.
  2. You asked for a Flask application and the AI generated a Django project.
  3. You asked for production-ready code and received a function with no error handling, no type hints, and single-letter variable names.
  4. You asked for a function that sorts users by age and received a function that filters users by age.
  5. Your project uses functional programming style and the AI generated deeply nested classes with inheritance.

Tier 2 --- Application (Exercises 7--12)

Exercise 7: Craft the Follow-Up

You prompted an AI to write a Python function that reads a CSV file and returns summary statistics. The AI produced this:

import csv

def summarize_csv(filename):
    with open(filename) as f:
        reader = csv.DictReader(f)
        data = list(reader)
    return {"rows": len(data), "columns": len(data[0]) if data else 0}

Write three follow-up prompts that progressively improve this code:

  1. A follow-up that adds actual statistical calculations (mean, median, min, max for numeric columns).
  2. A follow-up that adds error handling and type hints.
  3. A follow-up that adds performance optimization for large files (streaming instead of loading all into memory).
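As a calibration point for the third follow-up, a streaming rewrite might steer the AI toward something like this sketch. The function name and the choice to count rows lazily are assumptions for illustration:

```python
import csv

def summarize_csv_streaming(filename):
    """Count rows and columns without loading the whole file into memory."""
    rows = 0
    with open(filename, newline="") as f:
        reader = csv.DictReader(f)
        # fieldnames is read from the header line alone, before any rows.
        columns = len(reader.fieldnames or [])
        for _ in reader:  # each row is processed and discarded in turn
            rows += 1
    return {"rows": rows, "columns": columns}
```

The peak memory use is now one row at a time rather than the entire file, which is the property an effective follow-up prompt would state explicitly.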


Exercise 8: Incremental Building Plan

You need to build a URL shortener service. Create a four-stage incremental building plan (one prompt per stage) following the layered approach from Section 11.3. Each prompt should:

  - Build on the previous stage.
  - Be specific enough that an AI could implement it.
  - Include validation criteria for that stage.


Exercise 9: Steering Practice

An AI has generated the following code in response to your request for "a simple cache with TTL support":

from abc import ABC, abstractmethod
from typing import Generic, TypeVar, Optional
import threading
import time
import heapq
from dataclasses import dataclass, field

T = TypeVar('T')
K = TypeVar('K')

class CacheEvictionPolicy(ABC, Generic[K]):
    @abstractmethod
    def on_access(self, key: K) -> None: ...
    @abstractmethod
    def on_insert(self, key: K) -> None: ...
    @abstractmethod
    def evict(self) -> K: ...

class LRUPolicy(CacheEvictionPolicy[K]):
    # ... 50 more lines

Write a steering prompt that redirects the AI to produce a simpler solution. Use the "Constraint Tightening" technique from Section 11.4.
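For calibration, a successful constraint-tightening prompt (for example: standard library only, no abstract base classes, a single class, under 30 lines) might yield something this small. The class name and the lazy-eviction-on-read choice are illustrative assumptions:

```python
import time

class TTLCache:
    """Minimal TTL cache: a dict of key -> (value, expiry timestamp)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # evict lazily when a stale entry is read
            return default
        return value
```

The gap between this and the generated eviction-policy hierarchy is the gap your steering prompt needs to close.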


Exercise 10: Progressive Disclosure

You need to build a blog platform. Write four progressive disclosure prompts that reveal requirements in stages:

  1. Stage 1: Basic blog post CRUD.
  2. Stage 2: User authentication and post ownership.
  3. Stage 3: Comments and moderation.
  4. Stage 4: Full-text search and tag-based filtering.

Include a "foreshadowing" note in Stage 1 that hints at future requirements without providing full details.


Exercise 11: Rubber Duck Conversation

You are struggling with a design decision: whether to store user preferences as a JSON column in the users table or as a separate key-value table. Write a conversation with an AI (at least 4 turns) where you use the AI as a thinking partner to work through this decision. Include:

  - Your initial framing of the problem.
  - At least one question that challenges your assumptions.
  - A comparison of tradeoffs.
  - A final decision with rationale.


Exercise 12: Conversation Recovery

Your AI conversation has gone off the rails. After 6 turns, the AI is generating code that mixes three different approaches, the variable naming is inconsistent, and there is duplicated logic. Write a "fresh start" prompt that:

  1. Summarizes the useful decisions from the conversation.
  2. Clearly states the approach to take going forward.
  3. Provides concrete constraints to prevent the same problems.

Context: You are building a file synchronization utility.


Tier 3 --- Analysis and Evaluation (Exercises 13--18)

Exercise 13: Conversation Post-Mortem

Analyze the following conversation transcript and identify three things the developer did well and three things they could have done better:

Turn 1 Developer: Write a Python web scraper.
Turn 1 AI: [generates basic requests + BeautifulSoup scraper]
Turn 2 Developer: Make it better.
Turn 2 AI: [adds headers, timeout, basic retry]
Turn 3 Developer: It needs to handle JavaScript-rendered pages.
Turn 3 AI: [rewrites everything using Selenium]
Turn 4 Developer: No, I don't want Selenium, it's too heavy. Use something lighter.
Turn 4 AI: [rewrites using Playwright]
Turn 5 Developer: Actually, most pages I need don't use JS. Can you make it
                   use requests by default and only fall back to a browser for
                   JS-heavy pages?
Turn 5 AI: [generates dual-mode scraper]
Turn 6 Developer: OK but now the Playwright part doesn't have the same retry
                   logic as the requests part. Unify them.

Exercise 14: Quality Assessment

You receive the following AI-generated code after three iterations. Evaluate it against the six-point "good enough" checklist from Section 11.6. For each point, state whether it passes or fails and why.

def merge_sorted_lists(list1, list2):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(list1) and j < len(list2):
        if list1[i] <= list2[j]:
            result.append(list1[i])
            i += 1
        else:
            result.append(list2[j])
            j += 1
    result.extend(list1[i:])
    result.extend(list2[j:])
    return result
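Judging the "works correctly" point of the checklist is easier with a quick empirical check. Here is a sketch of a property test you could run before scoring; the helper name check_merge is our own, not from the chapter:

```python
import random

def check_merge(merge_fn, trials=200):
    """Compare merge_fn against sorted() on random pairs of sorted lists."""
    for _ in range(trials):
        a = sorted(random.choices(range(50), k=random.randrange(8)))
        b = sorted(random.choices(range(50), k=random.randrange(8)))
        if merge_fn(a, b) != sorted(a + b):
            return False  # found a counterexample
    return True
```

Random lengths from 0 to 7 deliberately include empty lists, which exercises the two extend() calls at the end of the merge.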

Exercise 15: Iteration Count Analysis

For each of the following tasks, estimate how many iterations a skilled vibe coder would typically need and justify your estimate. Use the context categories from Section 11.6 (Prototype, Internal Tool, Production Code, Library/Public API).

  1. A regex function to validate phone numbers (internal tool).
  2. A complete REST API for a todo application (production code).
  3. A data visualization prototype for a stakeholder demo.
  4. A Python package for parsing a custom file format (public API).
  5. A one-off script to clean and merge two CSV files.

Exercise 16: Branching Decision

For each scenario, determine whether you should branch (explore an alternative), backtrack (return to a previous version), or push forward (continue refining the current approach). Explain your reasoning.

  1. After 4 iterations, your REST API works but uses SQLite. You wonder if PostgreSQL would be better.
  2. The AI refactored your function into three smaller functions, but you are not sure the decomposition is right because one function still seems to do two things.
  3. You have spent 5 turns trying to add caching and each attempt introduces bugs. The uncached version works fine and performance is acceptable.
  4. The AI suggested using WebSockets and you agreed, but now you realize most of your clients will be mobile apps that struggle with WebSocket connections.
  5. Your current approach works but uses a third-party library you would prefer to avoid.

Exercise 17: Follow-Up Critique

Read each follow-up prompt below and explain why it is ineffective. Then rewrite it to be effective.

  1. "This code is bad. Fix it."
  2. "Add all the things we discussed earlier."
  3. "Make the code more Pythonic and also add error handling, logging, type hints, unit tests, documentation, configuration management, and a CLI interface."
  4. "Can you maybe improve the performance if you think it's worth it?"
  5. "Rewrite everything using a better approach."

Exercise 18: Progressive Disclosure vs. Upfront Specification

For each scenario, determine whether progressive disclosure or upfront specification is the better approach. Justify your answer based on the criteria in Section 11.8.

  1. A CRUD API where you know all entities and their relationships upfront.
  2. A machine learning pipeline where you are experimenting with different preprocessing steps.
  3. A payment processing system where security requirements affect every component.
  4. A data dashboard where the final visualization types are TBD pending stakeholder feedback.
  5. A microservices system where service boundaries affect the design of every individual service.

Tier 4 --- Synthesis and Creation (Exercises 19--24)

Exercise 19: Design an Iteration Strategy

You have been asked to build a real-time chat application with the following features: user registration, one-on-one messaging, group chats, message search, read receipts, typing indicators, and file sharing.

Design a complete iteration strategy using the techniques from this chapter:

  1. Break the features into incremental phases (Section 11.3).
  2. For each phase, write the initial prompt and anticipate two likely follow-up prompts.
  3. Identify which phases need foreshadowing of later requirements (Section 11.8).
  4. Identify potential branching points where you might explore alternatives (Section 11.7).


Exercise 20: Create a Refinement Rubric

Design a scoring rubric that you could use to evaluate the quality of your own iterative refinement process. The rubric should:

  - Cover at least 6 dimensions of refinement quality.
  - Have 4 levels (Novice, Competent, Proficient, Expert) for each dimension.
  - Include specific, observable criteria for each level.
  - Be usable as a self-assessment tool after a vibe coding session.


Exercise 21: Multi-Conversation Architecture

Design a multi-conversation strategy (Section 11.10) for building an e-commerce platform with:

  - Product catalog service
  - Shopping cart service
  - Payment processing
  - Order fulfillment
  - Admin dashboard

For each conversation:

  1. Define its scope and boundaries.
  2. Identify what shared context it needs from other conversations.
  3. Specify integration checkpoints.
  4. Define the "done" criteria.


Exercise 22: Write a Steering Guide

Create a one-page reference guide for steering AI when it goes off course. The guide should:

  - Include a decision tree: "Given the AI has diverged in X way, use technique Y."
  - Cover at least 5 types of divergence.
  - Include example prompts for each steering technique.
  - Include a "when to start fresh" checklist.


Exercise 23: The Socratic AI Prompt

Write a meta-prompt that instructs the AI to act as a Socratic questioner for a software design session. The prompt should:

  - Tell the AI to ask one question at a time.
  - Instruct it to challenge assumptions.
  - Have it periodically summarize what has been decided.
  - Guide it to probe for edge cases and non-functional requirements.
  - Keep the tone collaborative, not adversarial.

Test it with a concrete project idea of your choice.


Exercise 24: Feedback Pattern Library

Create a library of at least 8 reusable feedback prompt templates. For each template:

  - Give it a name.
  - Describe when to use it.
  - Provide the template with placeholders.
  - Show one concrete example with the placeholders filled in.

Categories should include: bug fixes, performance, readability, architecture, security, feature additions, and style.


Tier 5 --- Transfer and Real-World Application (Exercises 25--30)

Exercise 25: Real Project Iteration

Choose a small project (e.g., a CLI tool, a web scraper, a data transformer) and build it using pure iterative refinement with an AI assistant. Document:

  - Every prompt you send (at least 5 turns).
  - Your evaluation of each response.
  - What you would do differently in hindsight.
  - Your total iteration count and which category it falls in (from Section 11.6).

Submit the conversation transcript and a reflection.


Exercise 26: Peer Conversation Review

Exchange a vibe coding conversation transcript with a classmate or colleague. Review their transcript and provide feedback on:

  - Prompt quality (were follow-ups specific enough?).
  - Iteration efficiency (were any turns wasted?).
  - Steering effectiveness (did they redirect well when the AI went off course?).
  - "Good enough" judgment (did they stop at the right time, or too early/late?).

Write your review as a structured document with specific, actionable recommendations.


Exercise 27: The Comparison Experiment

Build the same feature twice --- once using a single detailed prompt (big-bang approach) and once using incremental building (at least 4 iterations). Compare the results on:

  - Code correctness.
  - Code quality (readability, structure).
  - Total time spent.
  - Your confidence in the result.
  - Ease of debugging any issues.

Write a report documenting the experiment and your findings.


Exercise 28: Rubber Duck Session

Use the rubber duck technique (Section 11.9) for a real technical problem you are facing. This could be a design decision, a bug you are struggling with, or an architecture question. Conduct at least a 6-turn conversation with an AI. Write a reflection on:

  - Did the conversation change your thinking? How?
  - At what point in the conversation did the key insight emerge?
  - Was the AI's response or the act of articulating your thoughts more valuable?
  - How does this compare to discussing with a human colleague?


Exercise 29: Iteration Metrics Tracking

Over the course of one week of vibe coding (at least 5 separate coding sessions), track the iteration metrics described in Section 11.10:

  - Iterations per feature.
  - Backtrack rate.
  - Time to "good enough."
  - Common correction types.

At the end of the week, analyze your data. Identify your two most common correction types and create prompt templates designed to prevent those corrections from being necessary.


Exercise 30: Complex System Challenge

Using all the techniques from this chapter, build a library management system through iterative refinement. The system should support:

  - Book catalog with search and filtering.
  - Member registration and management.
  - Book checkout and return with due dates.
  - Late fee calculation.
  - Reservation system for checked-out books.
  - Basic reporting (most borrowed books, overdue items, member activity).

Document your iteration strategy before you start, then execute it. Write a post-mortem comparing your planned strategy to what actually happened, noting where you had to branch, backtrack, or adjust your approach.


Solutions

Detailed solutions for Exercises 1--12 are available in code/exercise-solutions.py. Tier 3--5 exercises are open-ended; refer to the rubric in the instructor guide or self-assess using the quality criteria described in this chapter.