Chapter 6: Exercises

Complete these exercises to reinforce the concepts from your first vibe coding session. The exercises are organized into five tiers of increasing difficulty following Bloom's taxonomy.


Tier 1: Recall (Exercises 1--6)

These exercises test your memory and comprehension of the chapter material.

Exercise 1: The Opening Move

What type of prompt should you start a vibe coding session with, and why is it important? Describe the two benefits of beginning with this type of prompt rather than immediately asking for code.

Exercise 2: Vocabulary Match

Match each term with its correct definition:

Term                  Definition
a. dataclass          1. Converts a dataclass instance to a dictionary
b. argparse           2. Python decorator that auto-generates __init__ and other methods
c. asdict()           3. Standard library module for command-line argument parsing
d. default_factory    4. A subcommand parser attached to a main parser
e. subparser          5. A callable that generates a default value for a dataclass field

Exercise 3: Prompt Classification

Classify each of the following prompts as one of: planning, specification, iterative refinement, example-driven, constraint-based, or error correction.

  1. "I want to build a note-taking app. Help me plan the features and structure."
  2. "Create a function called add_note that takes a title: str and content: str and returns a Note dataclass."
  3. "The output is hard to read. Can you format it like this: [2025-01-15] My Note Title?"
  4. "Good, but also add a tags field that defaults to an empty list."
  5. "The program crashes when the file is missing. Can you add a check for that?"
  6. "It must handle at least 10,000 notes without noticeable slowdown."

Exercise 4: Persistence Pattern

Explain in your own words why save_tasks() writes to a temporary file first and then renames it, rather than writing directly to tasks.json. What problem does this solve?
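For reference while you answer, the pattern in question looks roughly like this. This is a minimal sketch of write-then-rename; the function and variable names are illustrative, not the chapter's exact code:

```python
import json
import os
import tempfile

def save_data_atomically(data: list[dict], path: str) -> None:
    """Write JSON to a temporary file, then rename it over the target."""
    # Create the temp file in the same directory as the target so the
    # rename stays on one filesystem, where os.replace is atomic.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f, indent=2)
        # Atomically swap the fully written file into place.
        os.replace(tmp_path, path)
    except BaseException:
        # Clean up the partial temp file; the original target is untouched.
        os.remove(tmp_path)
        raise
```

Think about what the target file contains if the process is killed at each point in this function.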

Exercise 5: Field Ordering

Why must fields with default values come after fields without default values in a Python dataclass? What error would you see if you placed completed: bool = False before description: str?

Exercise 6: The Testing Fixture

What is the purpose of the tmp_path fixture in pytest, and why is it important for testing the task manager's persistence layer?
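As a reminder of the mechanics, a test that uses tmp_path looks like this. The file contents here are illustrative; pytest injects tmp_path as a fresh, test-unique temporary directory:

```python
import json
from pathlib import Path

def test_save_and_load_roundtrip(tmp_path: Path) -> None:
    """Sketch of a pytest test using the tmp_path fixture.

    tmp_path is just a pathlib.Path pointing at a directory that
    exists only for this test, so nothing touches the real tasks.json.
    """
    target = tmp_path / "tasks.json"
    target.write_text(json.dumps([{"id": 1, "description": "demo"}]))
    loaded = json.loads(target.read_text())
    assert loaded[0]["description"] == "demo"
```

Note that the test never hard-codes a filesystem path; that observation is a good starting point for your answer.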


Tier 2: Apply (Exercises 7--14)

These exercises ask you to use the techniques from the chapter in new but straightforward scenarios.

Exercise 7: Add an Edit Command

Using vibe coding, add an edit command to the task manager that lets the user change a task's description. Write the prompt you would use, then implement the feature. The command should work like:

python task_manager.py edit 3 "Updated description"

Exercise 8: Task Statistics

Write a function count_tasks_by_status(tasks: list[Task]) -> dict[str, int] that returns a dictionary with keys "completed", "pending", and "total". First write the prompt you would send to an AI, then write the function yourself.

Exercise 9: Export to Text

Add a feature that exports all tasks to a plain text file. Write the prompt, then implement a function export_tasks_to_text(tasks: list[Task], output_path: str) -> int that writes tasks to the file and returns the count of tasks exported.

Exercise 10: Clear Completed

Implement a clear command that removes all completed tasks at once. Write the prompt and the implementation. The function signature should be: clear_completed_tasks(tasks: list[Task]) -> tuple[list[Task], int], returning the remaining tasks and the count removed.

Exercise 11: Batch Complete

Implement a function that completes multiple tasks at once given a list of IDs. Write the prompt and the function: complete_multiple_tasks(tasks: list[Task], task_ids: list[int]) -> list[int], returning the list of IDs that were successfully completed.

Exercise 12: Description Validation

Write a validate_description(description: str) -> tuple[bool, str] function that checks: not empty, between 3 and 200 characters, and contains at least one alphanumeric character. Return (True, "") if valid or (False, "error message") if not. Write the prompt first.

Exercise 13: Prompt Improvement

Take the following weak prompt and rewrite it to be more effective. Then explain what you changed and why:

Write some code for a to-do list.

Exercise 14: Data Model Extension

Design a Project dataclass that groups related tasks. It should have an id, name, description, and a list of task IDs. Write the AI prompt you would use, then write the dataclass with full type hints and a docstring.


Tier 3: Analyze (Exercises 15--20)

These exercises ask you to break down problems, compare approaches, and evaluate code quality.

Exercise 15: Prompt Effectiveness Analysis

Consider these two prompts for adding a search feature:

Prompt A: "Add search to the app."

Prompt B: "Add a search subcommand that takes a keyword argument and returns all tasks whose descriptions contain that keyword. The search should be case-insensitive. Display results using the existing display_tasks() function."

Analyze the differences between these prompts. For each, predict what the AI would produce. Identify at least three specific ways Prompt B is superior and explain why each difference matters.

Exercise 16: Code Review

Review the following AI-generated code and identify at least four issues (bugs, style violations, missing features, or potential problems):

def delete_task(id):
    tasks = load_tasks()
    for i in range(len(tasks)):
        if tasks[i].id == id:
            del tasks[i]
            save_tasks(tasks)
            return True
    return False

Exercise 17: Architecture Comparison

The chapter uses a flat architecture (all code in one file). An alternative would be three separate files: models.py, storage.py, and cli.py. Analyze the trade-offs of each approach for a project of this size. At what point would the multi-file approach become clearly better?

Exercise 18: Serialization Analysis

The task manager uses dataclasses.asdict() for serialization and Task(**data) for deserialization. Analyze what happens in these scenarios:

  1. A field is added to the Task class but the JSON file has old data without that field
  2. A field is removed from the Task class but the JSON file still has that field
  3. A field's type changes from str to int

For each scenario, predict the behavior and suggest how to handle it gracefully.
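As part of your answer, you may want to experiment with a defensive deserialization pattern like the one sketched below. The Task fields here are illustrative, not the chapter's exact model; consider which of the three scenarios this pattern does and does not cover:

```python
import dataclasses
from dataclasses import dataclass

@dataclass
class Task:
    # Illustrative model for experimentation.
    id: int
    description: str
    completed: bool = False

def task_from_dict(data: dict) -> Task:
    """Deserialize tolerantly: drop keys the class no longer declares,
    and rely on dataclass defaults to fill fields missing from old data.

    Note: a missing field *without* a default will still raise TypeError.
    """
    known = {f.name for f in dataclasses.fields(Task)}
    return Task(**{k: v for k, v in data.items() if k in known})
```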

Exercise 19: Conversation Reconstruction

Below is a piece of code. Working backward, write the prompt that most likely produced it. Then write a second, better prompt that would have produced cleaner code:

def sort_tasks(tasks, key):
    if key == "date":
        return sorted(tasks, key=lambda x: x.created_at)
    elif key == "priority":
        p = {"high": 0, "medium": 1, "low": 2}
        return sorted(tasks, key=lambda x: p.get(x.priority, 1))
    elif key == "status":
        return sorted(tasks, key=lambda x: x.completed)
    else:
        return tasks

Exercise 20: Test Coverage Analysis

The chapter's test suite has eight tests. List five additional test cases that are not covered and explain what bug each test would catch. Prioritize them by likelihood of the bug actually occurring.


Tier 4: Create (Exercises 21--26)

These exercises ask you to build something new using the techniques from the chapter.

Exercise 21: Build a Contact Manager

Using the vibe coding workflow from this chapter, build a CLI contact manager that can:

  - Add contacts with name, email, and phone number
  - List all contacts
  - Search contacts by name or email
  - Delete contacts
  - Export contacts to CSV

Document each prompt you use and the AI's response. Write at least six prompts across the session.

Exercise 22: Build a Habit Tracker

Build a CLI habit tracker that lets you:

  - Define habits with a name and target frequency (e.g., "Exercise" 5x/week)
  - Log daily completions
  - View current streaks
  - See weekly summaries

Use a dataclass for the habit data model and JSON persistence. Document your vibe coding session.

Exercise 23: Build a Bookmark Manager

Build a CLI bookmark manager with:

  - Add bookmarks with URL, title, and tags
  - Search by title or tag
  - List all tags with counts
  - Delete bookmarks

Focus on making the search functionality robust and the display formatting clean.

Exercise 24: Rebuild with Different Prompts

Rebuild the task manager from scratch, but use a completely different prompting strategy. Instead of starting with a planning prompt, try starting with a test specification: describe the tests you want to pass, and ask the AI to write code that passes them. Document how this approach compares to the chapter's approach.

Exercise 25: Build a Prompt Catalog

Create a Python module that stores a library of reusable vibe coding prompts. Each prompt should be a dataclass with fields for: template text, category, variables to fill in, and example usage. Include at least 10 prompts covering different aspects of building a CLI application.

Exercise 26: Extend the Enhanced Version

Take the enhanced task manager (example-03-enhanced-version.py) and add recurring tasks. A recurring task should have a frequency (daily, weekly, monthly) and automatically create a new pending instance when completed. Document your prompting session.


Tier 5: Challenge (Exercises 27--30)

These exercises integrate concepts from multiple chapters and require deeper thinking.

Exercise 27: Undo System

Design and implement an undo/redo system for the task manager. It should support undoing the last N operations (add, complete, delete). Consider: what data structures do you need? How do you handle undo of an undo? Write a design document first (using vibe coding to help), then implement it. See code/exercise-solutions.py for a reference implementation.

Exercise 28: Multi-Format Persistence

Refactor the task manager's persistence layer so it supports multiple storage backends: JSON, CSV, and SQLite. Create an abstract base class StorageBackend with load() and save() methods, then implement three concrete classes. The user should be able to choose the backend via a command-line flag or configuration file.
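To get started, the skeleton of the abstract base class plus one concrete backend might look like this. This is a sketch; the method signatures are one reasonable choice, not a requirement of the exercise:

```python
import json
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Interface every storage backend must implement."""

    @abstractmethod
    def load(self) -> list[dict]:
        """Return all stored tasks as plain dictionaries."""

    @abstractmethod
    def save(self, tasks: list[dict]) -> None:
        """Persist the full task list, replacing previous contents."""

class JSONBackend(StorageBackend):
    """Concrete backend backed by a single JSON file."""

    def __init__(self, path: str) -> None:
        self.path = path

    def load(self) -> list[dict]:
        try:
            with open(self.path) as f:
                return json.load(f)
        except FileNotFoundError:
            # No file yet means no tasks yet.
            return []

    def save(self, tasks: list[dict]) -> None:
        with open(self.path, "w") as f:
            json.dump(tasks, f, indent=2)
```

Your CSV and SQLite backends then implement the same two methods, and the CLI only ever talks to the StorageBackend interface.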

Exercise 29: Prompt Engineering Experiment

Conduct a systematic experiment: take five different features of the task manager and implement each one using three different prompting styles (direct request, specification, and example-driven). For each feature, compare the AI's output across the three styles. Write a report analyzing which style produces the best code for which types of features.

Exercise 30: Full Application from Scratch

Without referring to the chapter, build a complete CLI application of your choice (not a task manager) using the vibe coding workflow. Your application must include: a data model, persistence, at least four commands, error handling, and at least six tests. After completing it, compare your process to the chapter's process and write a reflection identifying what you did differently and why.


Solutions to selected exercises are available in code/exercise-solutions.py.