Chapter 9 Exercises: Context Management and Conversation Design

These exercises are organized into five tiers of increasing difficulty, following Bloom's taxonomy. Work through each tier to build mastery of context management concepts.


Tier 1: Recall (Exercises 1-6)

These exercises test your ability to remember and identify key concepts from the chapter.

Exercise 1: Token Estimation

Estimate the token count for each of the following. Use the rule of thumb from Section 9.1 (1 word is approximately 1.3 tokens; 1 line of code is approximately 10-15 tokens).

a) A 300-word project description
b) A Python file with 150 lines of code
c) A 20-turn conversation where each turn averages 400 tokens (both user and AI messages combined)
d) A system prompt of 500 words
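To check your answers, the rule of thumb can be wrapped in a small helper. This is a sketch, not a tokenizer: the function name and the 12.5 midpoint for code lines are assumptions layered on the Section 9.1 estimates.

```python
def estimate_tokens(words=0, code_lines=0,
                    tokens_per_word=1.3, tokens_per_code_line=12.5):
    """Rough token estimate using the Section 9.1 rules of thumb.

    Assumes ~1.3 tokens per word and 10-15 tokens per line of code
    (midpoint 12.5). Real tokenizers will vary with the content.
    """
    return round(words * tokens_per_word + code_lines * tokens_per_code_line)

# Part (a): a 300-word project description
print(estimate_tokens(words=300))       # 390
# Part (b): a Python file with 150 lines of code
print(estimate_tokens(code_lines=150))  # 1875
```

The same helper covers parts (c) and (d) once you express them in words or turns.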

Exercise 2: Context Window Components

List the five components that consume space in an AI's context window during a coding conversation. For each component, explain whether its size is fixed or variable during the conversation.

Exercise 3: Attention Zones

Draw a diagram (or describe in text) showing the three attention zones in a long context window. Label the zones and explain what type of information performs best in each zone.

Exercise 4: Degradation Signals

List five signs that a conversation is experiencing context degradation. For each sign, describe a concrete example of what it looks like in practice during a coding session.

Exercise 5: Conversation Patterns Matching

Match each conversation pattern to its best use case:

Pattern Use Case
A. Progressive Disclosure 1. You want to see multiple approaches before committing
B. Scaffold-Then-Fill 2. You have clear specs that can be expressed as tests
C. Test-First 3. Building complex features layer by layer
D. Review-and-Refine 4. You want to review architecture before implementation
E. Parallel Exploration 5. You are not sure exactly what you want yet

Exercise 6: File Context Strategies

Name the five file context strategies described in Section 9.9 and write one sentence describing when each is most appropriate.


Tier 2: Apply (Exercises 7-12)

These exercises ask you to apply context management techniques to specific scenarios.

Exercise 7: First Message Design

You are starting a new vibe coding session to build a command-line tool that converts Markdown files to HTML. The tool should:
- Accept a file path or directory as input
- Support GitHub-flavored Markdown
- Generate standalone HTML files with a CSS stylesheet
- Support syntax highlighting for code blocks
- Be packaged as a pip-installable CLI tool

Write the first message you would send to the AI, applying the front-loading and context priming techniques from Sections 9.3 and 9.5.

Exercise 8: Context Priming Template

Create a context priming template for a Django web development session. Include:
- Role and expertise priming
- Project conventions and constraints
- Anti-pattern priming (at least 5 "do not" rules)
- Output format instructions

Exercise 9: Sandwich Pattern Application

You are 12 turns into a conversation building a REST API. You need to ask the AI to add a pagination feature to an existing endpoint. The AI has previously forgotten your constraint about using cursor-based pagination instead of offset-based. Write your message using the sandwich pattern to maximize the chance the AI follows your constraint.

Exercise 10: File Context Selection

You are working on a Flask application with the following structure:

myapp/
  __init__.py
  config.py
  models/
    __init__.py
    user.py
    product.py
    order.py
  routes/
    __init__.py
    auth.py
    products.py
    orders.py
  services/
    __init__.py
    email.py
    payment.py
    inventory.py
  templates/
    base.html
    products/
      list.html
      detail.html
  tests/
    test_auth.py
    test_products.py
    test_orders.py

Your task is to add a "wishlist" feature where users can save products to a wishlist. Identify which files you would include as context and which strategy (full, interface-only, snippet, or tree-and-summary) you would use for each. Justify your choices.

Exercise 11: Conversation Turn Planning

Plan a 10-turn conversation to build a data validation module. For each turn, write:
- A one-sentence description of what you will ask
- Which conversation pattern you are using for that turn
- How many tokens you estimate the turn will consume

Exercise 12: Summary for Fresh Start

Given this (abbreviated) conversation history, write the summary message you would use to start a fresh conversation:

Turn 1: User asked AI to create a SQLAlchemy model for a blog post
Turn 2: AI created Post model with id, title, body, created_at, author_id
Turn 3: User asked to add tags as a many-to-many relationship
Turn 4: AI added Tag model and post_tags association table
Turn 5: User asked for a PostRepository with CRUD operations
Turn 6: AI implemented async repository with get, create, update, delete
Turn 7: User asked to add full-text search on title and body
Turn 8: AI added search using PostgreSQL tsvector
Turn 9: User reported a bug: search doesn't handle special characters
Turn 10: AI fixed the search to escape special characters

Tier 3: Analyze (Exercises 13-18)

These exercises require you to break down complex scenarios and evaluate context management decisions.

Exercise 13: Conversation Diagnosis

Analyze the following conversation excerpt and identify all context management problems:

Turn 1:  User: "Help me build a web scraper."
Turn 2:  AI: "Sure! What website? What data?"
Turn 3:  User: "It should scrape news articles from multiple sites."
Turn 4:  AI: "Here's a basic scraper using requests and BeautifulSoup..."
Turn 5:  User: "Actually, make it async."
Turn 6:  AI: "Here's the async version using aiohttp..."
Turn 7:  User: "I forgot to mention, it needs to handle rate limiting."
Turn 8:  AI: "Here's the updated version with rate limiting..."
Turn 9:  User: "Also add proxy rotation."
Turn 10: AI: "Here's the version with proxy rotation..."
Turn 11: User: "It should also store results in a database."
Turn 12: AI: "Which database?"
Turn 13: User: "PostgreSQL. Use SQLAlchemy."
Turn 14: AI: "Here's the updated version with database storage..."
Turn 15: User: "The code from turn 6 was better organized. Can you
          restructure this to be more like that?"

For each problem you identify, suggest a specific fix.

Exercise 14: Token Budget Analysis

You have a 128K token context window. Your project requires the following files as context:

File                           Lines    Est. Tokens
models/user.py                   120         1,500
models/order.py                  200         2,800
models/product.py                180         2,400
services/order_service.py        350         4,500
services/payment_service.py      280         3,600
api/orders.py                    250         3,200
api/products.py                  200         2,600
schemas/order.py                 100         1,200
schemas/product.py                80         1,000
tests/test_orders.py             400         5,200

The system prompt uses 6,000 tokens. You anticipate a 25-turn conversation with an average of 2,000 tokens per turn (including the AI's response).

a) Calculate the total context needed if you include all files in full.
b) Is this within budget? If not, by how much does it exceed the budget?
c) Propose a specific plan to bring the context within budget. For each file, state whether you would include it in full, as an interface, as a snippet, or exclude it entirely.
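Part (a) is straightforward arithmetic, and a short script can total it so you can focus on parts (b) and (c). The token figures are copied from the table above; the function name and structure are an assumed sketch.

```python
# Estimated tokens per file, from the table above
FILE_TOKENS = {
    "models/user.py": 1_500,
    "models/order.py": 2_800,
    "models/product.py": 2_400,
    "services/order_service.py": 4_500,
    "services/payment_service.py": 3_600,
    "api/orders.py": 3_200,
    "api/products.py": 2_600,
    "schemas/order.py": 1_200,
    "schemas/product.py": 1_000,
    "tests/test_orders.py": 5_200,
}

def total_context(file_tokens, system_prompt, turns, tokens_per_turn):
    """Sum the fixed file/system costs plus the per-turn conversation cost."""
    return system_prompt + sum(file_tokens.values()) + turns * tokens_per_turn

total = total_context(FILE_TOKENS, system_prompt=6_000,
                      turns=25, tokens_per_turn=2_000)
print(total)             # total estimated tokens
print(total <= 128_000)  # does it fit the window?
```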

Exercise 15: Pattern Selection Analysis

For each of the following coding tasks, identify which multi-turn conversation pattern (or combination of patterns) would be most effective and explain why:

a) Adding authentication to an existing REST API b) Debugging a race condition in an async task queue c) Migrating a codebase from Python 2 to Python 3 d) Building a new data visualization dashboard from scratch e) Optimizing slow database queries identified by profiling

Exercise 16: Context Degradation Timeline

Consider a conversation where you are building a full CRUD API for a "Project Management" application. Map out a realistic timeline of context degradation:

  • At what turn would you expect to start seeing degradation?
  • What specific degradation symptoms would appear first?
  • At what point would you recommend a fresh start?
  • How would you structure the summary for the fresh start?

Exercise 17: Priming Effectiveness Comparison

Compare and contrast these two priming approaches for a data processing pipeline project. Analyze which is more effective and why:

Approach A:

You are a helpful coding assistant. Please help me build a data
processing pipeline in Python.

Approach B:

You are a senior data engineer with expertise in building ETL
pipelines using Python. You follow these practices:
- Use generators and iterators for memory-efficient processing
- Implement proper error handling with retry logic
- Use structured logging with correlation IDs
- Write idempotent transformations
- Use type hints and dataclasses for data schemas

Anti-patterns to avoid:
- Loading entire datasets into memory
- Silent data corruption (always validate and log)
- Hardcoded file paths or connection strings

Exercise 18: Cross-Conversation Information Flow

Design a three-conversation workflow for building, testing, and deploying a new microservice. For each conversation, specify:
- The priming context needed
- What information must be carried forward from the previous conversation
- How you would structure the handoff summary
- The estimated token budget


Tier 4: Create (Exercises 19-24)

These exercises ask you to create original artifacts using context management principles.

Exercise 19: Context Management Toolkit

Create a personal context management toolkit that includes:
a) Three priming templates for different types of coding sessions (choose your own)
b) A "fresh start" protocol checklist
c) A context budget estimation worksheet
d) A conversation degradation checklist with remediation actions

Exercise 20: Conversation Replay and Redesign

Take a recent vibe coding conversation you have had (or invent a realistic 15-turn conversation for a project of your choosing). Analyze it for context management issues, then redesign the conversation from scratch applying the techniques from this chapter. Document:
- The original conversation structure (turn by turn)
- Issues identified
- The redesigned conversation structure
- Expected token savings

Exercise 21: CLAUDE.md File Design

Design a comprehensive CLAUDE.md (or .cursorrules) file for one of the following projects:
a) A Django e-commerce platform
b) A FastAPI machine learning model serving API
c) A Flask-based internal tools dashboard
d) A CLI tool suite for DevOps automation

Your file should include: project description, tech stack, coding conventions, architecture overview, common patterns with examples, anti-patterns, and testing requirements. Aim for maximum value per token.

Exercise 22: Multi-Session Development Plan

You are building a complete REST API with the following features: user authentication, CRUD for three resource types, search and filtering, file uploads, email notifications, and admin dashboard endpoints.

Create a multi-session development plan that:
- Breaks the work into 5-8 focused sessions
- For each session, specifies the context strategy, priming template, and estimated turns
- Defines what information must be carried between sessions
- Identifies which sessions can potentially be done in parallel

Exercise 23: Context Optimizer Script

Write a Python script that takes a conversation history (as a list of messages) and produces:
a) A token estimate for the full conversation
b) A summary of the key decisions and artifacts produced
c) Recommendations for whether to continue or start fresh
d) A priming message for a fresh start if recommended

Use the tiktoken library (or a simplified approximation) for token counting.
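As a starting point for part (a), the counter might use tiktoken when it is installed and fall back to the chapter's approximation otherwise. The function names, the message shape, and the "cl100k_base" encoding choice are assumptions:

```python
def count_tokens(text: str) -> int:
    """Count tokens with tiktoken if available, else approximate."""
    try:
        import tiktoken
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(text))
    except ImportError:
        # Fallback: ~1.3 tokens per word (the chapter's rule of thumb)
        return round(len(text.split()) * 1.3)

def conversation_tokens(messages: list[dict]) -> int:
    """Total estimated tokens across {'role': ..., 'content': ...} messages."""
    return sum(count_tokens(m["content"]) for m in messages)
```

The continue-or-fresh-start recommendation in part (c) can then compare `conversation_tokens(...)` against the model's window size.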

Exercise 24: Interactive Context Planner

Design (and optionally implement) an interactive command-line tool that helps a developer plan their vibe coding session. The tool should:
- Ask about the task type and complexity
- Estimate the required context budget
- Suggest a conversation pattern
- Generate a priming template
- Set up a checkpoint schedule


Tier 5: Challenge (Exercises 25-30)

These exercises integrate concepts from multiple chapters and tackle advanced scenarios.

Exercise 25: Context-Aware Prompt Chaining

Design a system of three connected prompts that build on each other to implement a complex feature (real-time notifications using WebSockets). Each prompt must:
- Reference the output of the previous prompt
- Include only the necessary context from the previous step
- Stay within a per-prompt budget of 3,000 tokens
- Apply techniques from both Chapter 8 (prompt engineering) and Chapter 9 (context management)

Exercise 26: Adversarial Context Scenarios

For each of the following "adversarial" scenarios, describe what goes wrong and how you would handle it:

a) You are 30 turns into a conversation and realize the AI has been using the wrong database schema since turn 5.
b) You pasted a 2,000-line file as context and the AI is now generating code that references functions from that file that do not actually exist (it is hallucinating the file's contents).
c) You are working on two related features in the same conversation and the AI starts mixing up the requirements for each.
d) The AI's code quality was excellent for the first 10 turns but has been noticeably declining, and it keeps making the same type of error (forgetting to await async calls).

Exercise 27: Context Management for Team Collaboration

Design a context management strategy for a team of three developers who are all using AI coding assistants to work on the same project simultaneously. Address:
- How to keep AI conversations consistent across team members
- Shared priming templates and conventions
- Handling merge conflicts that arise from AI-generated code
- Knowledge sharing between team members' AI conversations
- Project-level context files that evolve over time

Exercise 28: Model Comparison Analysis

Design an experiment to compare context management strategies across two different AI coding tools (for example, Claude Code and Cursor). Define:
- A standardized coding task that takes 15-20 turns
- Three different context management strategies to test
- Metrics for measuring output quality, consistency, and efficiency
- A rubric for evaluating the results
- Expected hypotheses and how you would analyze the results

Exercise 29: Context Window Evolution

Current context windows range from 32K to over 1M tokens. Write an analysis (500-1,000 words) exploring:
- How would context management strategies change if windows were 10M tokens?
- Would context management still matter? Why or why not?
- What new strategies would become possible?
- What new problems might emerge?
- How would the "conversation as data structure" model need to evolve?

Exercise 30: Build a Context Management Dashboard

Design and implement (or design in detail) a web-based dashboard that visualizes a vibe coding session's context usage in real time. The dashboard should show:
- Total tokens used vs. available
- Token breakdown by category (system prompt, file context, conversation history, current message)
- A timeline of context usage over the conversation
- Alerts when context usage crosses thresholds (50%, 75%, 90%)
- Recommendations for context optimization
- Integration points with at least one AI coding tool

Provide the architecture, key components, and implementation plan. If implementing, build the frontend visualization with any charting library of your choice.
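The threshold alerts can be prototyped in a few lines. The 50/75/90 percent thresholds come from the exercise; the function name and return shape are an assumed sketch:

```python
def usage_alerts(used: int, available: int,
                 thresholds=(0.50, 0.75, 0.90)) -> list[float]:
    """Return every alert threshold the current context usage has crossed."""
    ratio = used / available
    return [t for t in thresholds if ratio >= t]

# 100K tokens used of a 128K window crosses the 50% and 75% marks
print(usage_alerts(100_000, 128_000))  # [0.5, 0.75]
```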