Chapter 32: Exercises
Tier 1 -- Remember and Recall (Exercises 1-6)
Exercise 1: Core Challenges Identification
List the five core challenges teams face when adopting AI coding assistants, as described in Section 32.1. For each challenge, write one sentence explaining why it matters.
Exercise 2: AI Usage Policy Components
Name the five areas that a team AI usage policy should cover. For each area, provide one example rule that might appear in a real team's policy.
Exercise 3: Prompt Library Metadata
A shared prompt library entry should include several metadata fields. List at least seven metadata fields that a well-structured prompt entry would carry, and explain the purpose of each.
Exercise 4: Onboarding Phases
Describe the four phases of the AI onboarding checklist from Section 32.5 (Day 1, Day 2-3, Week 1, Week 2-4). For each phase, name the primary learning goal.
Exercise 5: Metrics Dimensions
The chapter describes three dimensions of AI effectiveness measurement. Name all three dimensions and list at least two specific metrics for each.
Exercise 6: Scaling Adoption Curve
List the five stages of the organizational AI adoption curve in order. For each stage, provide the typical timeline and one key characteristic.
Tier 2 -- Understand and Explain (Exercises 7-12)
Exercise 7: Style Drift Analysis
Explain in your own words why style drift is described as "insidious" in the chapter. How does it differ from the more obvious inconsistency problem? Provide a concrete example of how style drift might manifest in a Python web application over six months.
Exercise 8: Responsibility Principle Implications
The chapter states: "The developer who prompts the AI, reviews the output, and commits the code is responsible for that code." Explain three practical implications of this principle for day-to-day development work. Why is this principle important even when the AI produces correct code?
Exercise 9: Layered Standards Explanation
Explain the layered standards model for organizational AI practices. Why is it more effective than either a purely top-down or purely bottom-up approach? Use an analogy from another domain (such as government, sports, or education) to illustrate the concept.
Exercise 10: Knowledge Silo Economics
Explain why knowledge silos are described as "perhaps the most damaging team challenge." Calculate the hypothetical cost: if a team of eight developers each spends two hours per week solving problems that another team member has already solved, how many person-hours are wasted per month? Per year? What is the opportunity cost?
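As a sanity check on the arithmetic this exercise asks for, a few lines of Python compute the waste (the four-weeks-per-month and fifty-weeks-per-year assumptions are ours; adjust them for your organization):

```python
# Figures from the exercise: eight developers, two hours per week each
# re-solving problems a teammate has already solved.
developers = 8
hours_per_dev_per_week = 2

weekly_waste = developers * hours_per_dev_per_week  # 16 person-hours/week
monthly_waste = weekly_waste * 4                    # assuming 4 working weeks/month
yearly_waste = weekly_waste * 50                    # assuming 50 working weeks/year

print(f"Weekly:  {weekly_waste} person-hours")      # 16
print(f"Monthly: {monthly_waste} person-hours")     # 64
print(f"Yearly:  {yearly_waste} person-hours")      # 800
```

Translating those 800 yearly person-hours into salary cost, or into the features that could have shipped instead, is one way to frame the opportunity-cost question.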
Exercise 11: Show-and-Tell Value Proposition
Explain why show-and-tell sessions are described as being "as much about culture as they are about knowledge transfer." What cultural norms do these sessions establish? How do they differ from simply publishing prompts in a shared library?
Exercise 12: Center of Excellence Design
Explain why the chapter recommends staffing the Center of Excellence with "respected engineers from different teams, not managers or external consultants." What problems might arise from each alternative staffing approach?
Tier 3 -- Apply and Implement (Exercises 13-18)
Exercise 13: Draft an AI Usage Policy
Write a complete AI usage policy for a team of six developers building a Python/Django e-commerce application. The policy should cover all five areas from Section 32.2 (approved tools, code review standards, prompting standards, attribution, and security). Keep it to one page but make it specific enough to be actionable.
Exercise 14: Design a Prompt Template
Create a detailed prompt template for generating Django model classes. The template should include:

- At least four template variables with descriptions
- A complete prompt that uses those variables
- Metadata (category, tags, version, effectiveness notes)
- An example of the template filled in with concrete values
Exercise 15: Build an Onboarding Plan
Design a complete two-week onboarding plan for a mid-level developer joining a team that uses Claude Code as its primary AI tool, GitHub Copilot for inline completion, and pytest for testing. The plan should include daily activities, learning objectives, and success criteria for each phase.
Exercise 16: Create a Metrics Dashboard
Design a team AI effectiveness dashboard for a sprint-based team. Include:

- At least three velocity metrics with measurement methods
- At least three quality metrics with measurement methods
- At least two satisfaction metrics with measurement methods
- A format for presenting the data (describe the layout)
- Threshold values that would trigger a process review
Exercise 17: Implement a Prompt Library Structure
Using your file system or a version control repository, create the directory structure and initial files for a shared prompt library. Include:

- A top-level README explaining the library's purpose and structure
- At least three category directories
- At least two prompt templates in each category
- A contribution guide explaining how to add new prompts
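A short `pathlib` script can scaffold one possible layout; the category names and file names here are examples, not prescriptions:

```python
from pathlib import Path

# Illustrative skeleton for a shared prompt library.
ROOT = Path("prompt-library")
CATEGORIES = ["api", "testing", "refactoring"]  # example categories

ROOT.mkdir(exist_ok=True)
(ROOT / "README.md").write_text(
    "# Team Prompt Library\n\nPurpose, structure, and usage notes go here.\n"
)
(ROOT / "CONTRIBUTING.md").write_text(
    "# Adding a Prompt\n\nSteps for proposing and reviewing new prompts go here.\n"
)

for category in CATEGORIES:
    cat_dir = ROOT / category
    cat_dir.mkdir(exist_ok=True)
    for name in ("example-1.md", "example-2.md"):
        # Each entry would carry the prompt text plus its metadata fields.
        (cat_dir / name).write_text(
            f"# {category}/{name}\n\nPrompt text and metadata go here.\n"
        )

print(sorted(p.relative_to(ROOT).as_posix() for p in ROOT.rglob("*")))
```

Running the script once gives you a skeleton to commit; the real work of the exercise is filling the README, contribution guide, and prompt entries with content.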
Exercise 18: Communication Channel Design
Design the AI-related communication channels for a team of twelve developers. Specify:

- Which channels to create (names and purposes)
- What types of content go in each channel
- Guidelines for posting frequency and format
- How to archive and make content searchable
- Integration with existing team communication tools
Tier 4 -- Analyze and Evaluate (Exercises 19-24)
Exercise 19: Convention Audit
Take a real or hypothetical codebase that was developed by multiple people using AI tools. Analyze it for inconsistencies that suggest a lack of shared conventions. Identify at least five specific inconsistencies and propose a convention for each that would prevent the issue. For each convention, explain how you would introduce it without disrupting the team's current workflow.
Exercise 20: Prompt Library Effectiveness Analysis
Given the following usage data for a team's prompt library, analyze which prompts are most and least effective and recommend actions:
| Prompt ID | Category | Uses (30 days) | Avg Rating | Last Updated |
|---|---|---|---|---|
| api-get-v3 | API | 45 | 4.5 | 2 weeks ago |
| api-post-v2 | API | 38 | 4.2 | 1 month ago |
| test-unit-v4 | Testing | 62 | 3.8 | 1 week ago |
| test-integ-v1 | Testing | 5 | 2.1 | 3 months ago |
| refactor-extract-v2 | Refactoring | 22 | 4.7 | 2 weeks ago |
| docs-api-v1 | Documentation | 8 | 3.0 | 4 months ago |
| db-migrate-v3 | Database | 31 | 4.4 | 3 weeks ago |
| debug-trace-v1 | Debugging | 3 | 4.0 | 5 months ago |
For each prompt, recommend one of: keep as-is, update, retire, or investigate. Justify each recommendation.
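One way to make the analysis systematic is a triage heuristic over usage and rating. The thresholds below are arbitrary starting points, not rules from the chapter; part of the exercise is deciding whether they are sensible:

```python
# Usage data from the table above: (prompt_id, uses in 30 days, avg rating).
PROMPTS = [
    ("api-get-v3", 45, 4.5),
    ("api-post-v2", 38, 4.2),
    ("test-unit-v4", 62, 3.8),
    ("test-integ-v1", 5, 2.1),
    ("refactor-extract-v2", 22, 4.7),
    ("docs-api-v1", 8, 3.0),
    ("db-migrate-v3", 31, 4.4),
    ("debug-trace-v1", 3, 4.0),
]

def triage(uses: int, rating: float) -> str:
    """Rough triage heuristic; the thresholds are illustrative."""
    if uses >= 20 and rating >= 4.0:
        return "keep as-is"
    if rating < 3.0:
        return "retire"
    if uses >= 20:
        return "update"        # heavily used but under-rated
    return "investigate"       # low usage: unmet need or discoverability problem?

for prompt_id, uses, rating in PROMPTS:
    print(f"{prompt_id}: {triage(uses, rating)}")
```

Note what the heuristic misses: it ignores the "Last Updated" column entirely, and it cannot distinguish a low-usage prompt nobody needs from one nobody can find. Your written justification should cover those gaps.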
Exercise 21: Anti-Pattern Diagnosis
A 30-person engineering organization has been using AI coding tools for six months. They report the following symptoms:

- Teams that adopted AI early resent being told to follow new standards
- The AI policy document is 15 pages long and nobody reads it
- Metrics show increased velocity but also increased bug reports
- New hires take longer to onboard than before AI adoption
- The shared prompt library has 200 prompts but is rarely used
Diagnose the root causes of each symptom. Propose a remediation plan that addresses all five issues in priority order.
Exercise 22: Tool Standardization Trade-offs
A team is debating between two AI tool standardization approaches:

- Option A: Mandate a single AI tool (Claude Code) for all coding tasks.
- Option B: Allow any AI tool but require all tools to use the team's shared system prompt.
Analyze the trade-offs of each approach across five dimensions: consistency, developer satisfaction, learning curve, flexibility, and maintainability. Which would you recommend for (a) a three-person startup team, (b) a fifteen-person product team, and (c) a hundred-person engineering organization? Justify each recommendation.
Exercise 23: Attribution Policy Evaluation
Evaluate the following three attribution policies and rank them from most to least effective for a team of ten developers. Justify your ranking.
Policy A: Every function generated by AI must have a comment indicating it was AI-generated, including the tool name and date.
Policy B: AI usage is noted in commit messages only when more than 50% of the code in the commit was AI-generated. Pull request descriptions include an "AI Usage" section.
Policy C: No attribution is required. All code, regardless of source, is treated identically. The team trusts that developers understand and test all code they commit.
Exercise 24: Organizational Scaling Assessment
You are advising a 200-person engineering organization that wants to scale AI coding practices. They currently have three teams with mature AI practices and twelve teams with no formal AI practices. Design an assessment framework that:

- Evaluates each team's current AI maturity level
- Identifies the highest-impact interventions for each maturity level
- Proposes a realistic 12-month timeline for organization-wide adoption
- Defines success criteria for each quarter
Tier 5 -- Create and Synthesize (Exercises 25-30)
Exercise 25: Complete AI Practices Playbook
Create a comprehensive "AI Practices Playbook" for a team of your choosing. The playbook should be a single document (2000+ words) that a team could actually use. Include:

- Team AI philosophy and principles
- Tool stack and configuration
- Prompt library structure and initial prompts
- Code review guidelines for AI-generated code
- Onboarding checklist
- Communication practices
- Metrics and review cadence
Exercise 26: Prompt Library Management Tool
Write a Python command-line tool that manages a team prompt library stored as YAML files. The tool should support:

- Adding new prompts with metadata
- Searching prompts by category, tags, or keywords
- Displaying a prompt with its full metadata and version history
- Rating a prompt after use
- Generating a usage report
Implement the tool with full error handling, type hints, and docstrings.
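A data model is a reasonable place to start. The sketch below shows one possible in-memory representation with rating and search support; the field names are illustrative, and YAML persistence (for example via PyYAML's `yaml.safe_load`) plus the argparse-based CLI are left for you to build:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PromptEntry:
    """One library prompt with its metadata; field names are illustrative."""
    prompt_id: str
    category: str
    tags: List[str]
    version: str
    body: str
    ratings: List[int] = field(default_factory=list)

    def rate(self, score: int) -> None:
        """Record a 1-5 rating after use."""
        if not 1 <= score <= 5:
            raise ValueError("rating must be between 1 and 5")
        self.ratings.append(score)

    @property
    def average_rating(self) -> float:
        """Mean of recorded ratings, or 0.0 if the prompt is unrated."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

def search(
    entries: List[PromptEntry],
    *,
    category: Optional[str] = None,
    tag: Optional[str] = None,
) -> List[PromptEntry]:
    """Filter entries by category and/or tag; None means no filter."""
    results = entries
    if category is not None:
        results = [e for e in results if e.category == category]
    if tag is not None:
        results = [e for e in results if tag in e.tags]
    return results
```

A usage report and version history would layer on top of this; one design question to answer in your solution is whether version history lives inside each YAML file or in the version control log.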
Exercise 27: Team AI Metrics Dashboard
Build a Python script that generates a team AI effectiveness dashboard from data stored in JSON files. The dashboard should:

- Read sprint data from a JSON file
- Calculate velocity, quality, and satisfaction metrics
- Compare current sprint to previous sprints and targets
- Generate a formatted text report suitable for sharing in a team channel
- Highlight metrics that are trending up, down, or stagnant
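The trend-highlighting requirement can be sketched as a small comparison function. The sprint records, metric names, and 5% tolerance below are all illustrative assumptions; real data would come from `json.load` on your team's file:

```python
import json

# Illustrative sprint records; in the real tool these would come from
# json.load(open(path)) on the team's data file.
SPRINT_DATA = json.loads("""
[
  {"sprint": 11, "story_points": 34, "bugs_reported": 6, "satisfaction": 3.9},
  {"sprint": 12, "story_points": 38, "bugs_reported": 5, "satisfaction": 4.1}
]
""")

def trend(previous: float, current: float, tolerance: float = 0.05) -> str:
    """Classify a metric as up, down, or flat; the 5% tolerance is arbitrary."""
    if previous == 0:
        return "flat"
    change = (current - previous) / previous
    if change > tolerance:
        return "up"
    if change < -tolerance:
        return "down"
    return "flat"

prev, curr = SPRINT_DATA[-2], SPRINT_DATA[-1]
for metric in ("story_points", "bugs_reported", "satisfaction"):
    print(f"{metric}: {curr[metric]} ({trend(prev[metric], curr[metric])})")
```

Note that "up" is not always good (rising `bugs_reported` is bad news); your report format should encode each metric's desired direction rather than treating all upward trends alike.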
Exercise 28: Onboarding Simulation
Design and document a complete AI onboarding simulation exercise. A new team member should be able to complete the simulation in four hours and emerge with hands-on experience using the team's AI tools and conventions. The simulation should include:

- A realistic project scenario (describe the codebase and task)
- Step-by-step instructions referencing the team's prompt library
- Checkpoints where the new member's work is evaluated
- A self-assessment rubric
- Solutions and explanations for each step
Exercise 29: Cross-Team Knowledge Transfer Program
Design a quarterly cross-team AI knowledge transfer program for an organization with five engineering teams. The program should include:

- A rotation schedule that pairs teams for knowledge exchange
- A structured format for each exchange (what to share, how to share it)
- Artifacts produced by each exchange (documented in a shared repository)
- A measurement framework to evaluate the program's effectiveness
- A feedback mechanism for continuous improvement
Exercise 30: Organizational AI Maturity Model
Create a five-level AI maturity model for software development teams. For each level:

- Define the characteristics that place a team at that level
- Specify the key practices, tools, and processes present at that level
- Describe the transition requirements to advance to the next level
- Provide assessment criteria (how do you determine a team's level?)
- Suggest specific interventions to help teams advance
The model should be practical enough that an engineering manager could use it to assess their team and plan improvements. Present it as a complete document with explanatory text, tables, and examples.