Chapter 32: Quiz
Test your understanding of team collaboration and shared AI practices. Each question is followed by its answer, hidden in a collapsible section.
Question 1
What are the five core challenges teams face when adopting AI coding assistants without deliberate coordination?
Show Answer
The five core challenges are:
1. **The Inconsistency Problem** -- Different developers using different tools, prompts, and styles produce code that looks like it was written by multiple unrelated teams.
2. **The Style Drift Problem** -- Gradual, subtle divergence in coding styles over time as different AI tools suggest slightly different patterns.
3. **The Knowledge Silo Problem** -- Developers independently discovering effective techniques but not sharing them with the team.
4. **The Quality Variance Problem** -- Varying levels of AI proficiency producing uneven code quality across the team.
5. **The Accountability Gap** -- Unclear ownership of AI-generated code leading to unresolved bugs and duplicated effort.
Question 2
A team AI usage policy should cover five areas. Which of the following is NOT one of them?
A) Approved tools and their roles
B) Code review standards for AI-generated code
C) Individual developer performance benchmarks
D) Security and privacy
E) Attribution and documentation
Show Answer
**C) Individual developer performance benchmarks.** The five areas are: (1) Approved Tools and Their Roles, (2) Code Review Standards for AI-Generated Code, (3) Prompting Standards, (4) Attribution and Documentation, and (5) Security and Privacy. Individual performance benchmarks are explicitly discouraged -- the chapter recommends measuring team-level metrics rather than individual developer metrics.
Question 3
Why does the chapter describe style drift as "insidious"?
Show Answer
Style drift is described as "insidious" because each individual change is small and reasonable. A reviewer might approve a slightly different naming convention because "it works fine." But these small approvals accumulate across hundreds of pull requests, and the codebase slowly loses coherence without anyone being able to point to the moment it happened. Unlike overt inconsistency, style drift is gradual and difficult to detect in the moment.
Question 4
What is the five-step process the chapter recommends for establishing team conventions?
Show Answer
The five-step convention adoption process is:
1. **Identify the pain point** -- Start with a real problem the team is experiencing.
2. **Propose a convention** -- A team member drafts a proposed convention to address the problem.
3. **Discuss and refine** -- The team discusses and modifies the proposal based on feedback.
4. **Trial period** -- Adopt the convention for two to four weeks and track whether it helps.
5. **Formalize or discard** -- If it improves things, add it to the policy. If not, discard and try something else.
Question 5
What is the key difference between a prompt template and a hardcoded prompt in a shared library?
Show Answer
A prompt template includes clearly marked variables (such as `{{resource_name}}`, `{{framework}}`, `{{auth_required}}`) that developers fill in for their specific use case. A hardcoded prompt contains fixed text with no customization points. Templates are preferred because they are reusable across many situations while maintaining the team's conventions, whereas hardcoded prompts are only useful for the exact scenario they were written for.
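To make the distinction concrete, here is a minimal sketch (not from the chapter) of how `{{variable}}` placeholders might be filled programmatically; the template text and variable names are illustrative:

```python
import re

def fill_template(template: str, values: dict) -> str:
    """Replace {{name}}-style placeholders with supplied values.

    Raises KeyError for any placeholder that is not provided, so a
    half-filled prompt fails loudly instead of being sent to the model.
    """
    def substitute(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"template variable not provided: {name}")
        return str(values[name])

    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

# Hypothetical template using the variables named in the answer above.
template = (
    "Generate a {{framework}} handler for the {{resource_name}} resource. "
    "Authentication required: {{auth_required}}."
)

print(fill_template(template, {
    "resource_name": "orders",
    "framework": "FastAPI",
    "auth_required": "yes",
}))
```

A hardcoded prompt would simply bake "orders" and "FastAPI" into the text, which is exactly what makes it single-use.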
Question 6
How often does the chapter recommend reviewing and updating the team AI usage policy?
A) Weekly
B) Monthly
C) Quarterly
D) Annually
Show Answer
**C) Quarterly.** The chapter states: "Review and update the policy quarterly." This cadence balances the need to keep the policy current with the stability teams need to internalize and follow conventions.
Question 7
What is an "AI buddy" in the context of onboarding, and why is it recommended?
Show Answer
An "AI buddy" is an experienced team member assigned to a new hire specifically to answer questions about the team's AI workflow, share tips, and provide feedback during the first few weeks. It is recommended because this personal connection accelerates learning far more than documentation alone. The AI buddy transfers tacit knowledge about when and how to use AI tools, something that is difficult to capture in written guides.
Question 8
According to the chapter, which of the following should ALWAYS be shared with the team?
A) Inline autocomplete suggestions
B) Routine generation of boilerplate code
C) AI interactions that revealed a security issue
D) Simple formatting corrections
Show Answer
**C) AI interactions that revealed a security issue.** The chapter lists four categories that should always be shared: (1) prompts that produced unusually good results, (2) AI interactions that revealed a bug, security issue, or architectural insight, (3) failures where prompts produced incorrect output, and (4) novel techniques discovered during AI interaction. The other options (autocomplete suggestions, boilerplate generation, formatting corrections) fall under "no need to share."
Question 9
What is the "Responsibility Principle" for AI-generated code, and what are its implications?
Show Answer
The Responsibility Principle states: "The developer who prompts the AI, reviews the output, and commits the code is responsible for that code." Its implications are:
1. Developers must understand AI-generated code before committing it.
2. Developers must test AI-generated code -- "the AI generated it" is not a defense against bugs.
3. Developers must maintain AI-generated code when it needs to change.
This means committing code you do not understand is never acceptable, regardless of its source.
Question 10
The chapter recommends storing shared AI configuration in a specific directory. What is it, and why?
Show Answer
The chapter recommends storing shared AI configuration in a `.ai/` directory at the root of the repository. This directory can contain the team's system prompt, coding conventions, example prompts, and tool configurations. It becomes a single source of truth for how AI is used on the project. By committing it to version control, every developer who clones the project gets the same AI configuration, and new team members do not need to figure out the "right" settings.
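As an illustration only (the chapter does not prescribe exact filenames), such a directory might look like:

```
.ai/
├── system-prompt.md      # shared system prompt for the team's AI tools
├── conventions.md        # coding conventions the AI should follow
├── prompts/              # example prompts and prompt templates
│   ├── api-endpoint.md
│   └── refactoring.md
└── tool-config/          # per-tool configuration files
```

Because the directory is committed alongside the code, it versions with the project: convention changes show up in diffs and code review like any other change.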
Question 11
What three dimensions does the chapter recommend measuring for team AI effectiveness?
Show Answer
The three dimensions are:
1. **Velocity** -- Cycle time, throughput, time to first commit, code generation ratio.
2. **Quality** -- Defect rate, code review iteration count, test coverage, security findings.
3. **Satisfaction** -- Developer satisfaction surveys, prompt library usage, knowledge sharing participation.
The chapter emphasizes that no single metric tells the whole story, and teams should use a balanced scorecard across all three dimensions.
Question 12
Why does the chapter warn against using AI effectiveness as an individual performance evaluation criterion?
Show Answer
Using AI effectiveness as an individual performance metric creates perverse incentives and suppresses honest reporting. Developers might generate excessive AI-assisted code to hit targets rather than using AI judiciously. They might also avoid reporting failures or sharing techniques that could make colleagues look more productive. The chapter recommends measuring team-level metrics instead, which encourages collaboration and honest sharing of both successes and failures.
Question 13
What is a Center of Excellence (CoE) in the context of organizational AI scaling, and who should staff it?
Show Answer
A Center of Excellence (CoE) is a small, dedicated group (often two to five people) that maintains organization-wide AI coding standards, curates shared prompt libraries, evaluates new tools, provides training, collects metrics, and facilitates cross-team knowledge sharing. The CoE advises and supports rather than dictates. It should be staffed with respected engineers from different teams, not managers or external consultants, because engineers listen to engineers they trust. Membership should rotate periodically to prevent the CoE from becoming disconnected from day-to-day development.
Question 14
What is the "layered standards" model for organizational AI practices?
Show Answer
The layered standards model has three layers:
1. **Organization Level (required):** Approved AI tools, data security requirements, minimum code review standards, legal/compliance guidelines, basic attribution requirements.
2. **Team Level (recommended):** Team-specific prompt libraries, tool configuration for the team's stack, team-specific metrics, tailored onboarding processes.
3. **Individual Level (optional):** Personal prompt collections, tool customizations within team bounds, individual learning goals.
This approach provides alignment where it matters (security, quality, compliance) while preserving team and individual autonomy.
Question 15
Which of the following is an organizational scaling anti-pattern?
A) Starting with a minimal AI usage policy and expanding based on real issues
B) Staffing the Center of Excellence with people who do not write production code
C) Measuring team-level metrics instead of individual metrics
D) Using a trial period before formalizing conventions
Show Answer
**B) Staffing the Center of Excellence with people who do not write production code.** This is the "Ivory Tower CoE" anti-pattern, which produces guidelines that are theoretically sound but practically useless, resulting in ignored guidelines and a discredited CoE. The other options (minimal policy, team-level metrics, trial periods) are all recommended best practices.
Question 16
In the prompt library structure, what is the purpose of tracking "effectiveness_rating" and "usage_count"?
Show Answer
These metrics serve several purposes:
- **Effectiveness rating** helps developers choose between multiple prompts for the same task -- higher-rated prompts produce better results.
- **Usage count** identifies which prompts are most valuable to the team and which may need attention (low usage could mean the prompt is hard to find, poorly documented, or not useful).
- Together, they inform decisions about which prompts to invest in improving, which to promote, and which to retire. A prompt with high usage but low rating is a priority for improvement. A prompt with low usage and low rating is a candidate for retirement.
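That triage logic can be sketched in a few lines; the thresholds and return labels below are illustrative assumptions, not values from the chapter:

```python
def triage_prompt(effectiveness_rating: float, usage_count: int,
                  rating_threshold: float = 3.0,
                  usage_threshold: int = 10) -> str:
    """Suggest an action for a library prompt based on its metrics.

    Thresholds are hypothetical; a real team would calibrate them
    against its own prompt library.
    """
    popular = usage_count >= usage_threshold
    effective = effectiveness_rating >= rating_threshold

    if popular and not effective:
        return "improve"   # heavily used but underperforming: fix first
    if not popular and not effective:
        return "retire"    # neither used nor effective: candidate to drop
    if popular and effective:
        return "promote"   # working well: surface it to the whole team
    return "review"        # effective but rarely used: maybe hard to find

print(triage_prompt(effectiveness_rating=2.1, usage_count=40))  # improve
print(triage_prompt(effectiveness_rating=1.5, usage_count=2))   # retire
```

The point is not the specific cutoffs but that the two metrics answer different questions, so decisions should consider them jointly.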
Question 17
What four questions should pair programming with AI sessions address?
Show Answer
The chapter describes a pair programming with AI session that addresses:
1. **How to formulate prompts** -- The experienced developer thinks aloud while writing prompts, explaining what they include and why.
2. **How to evaluate AI output** -- Both developers review the output together, discussing what is good, what needs modification, and what should be regenerated.
3. **How to iterate** -- Demonstration of refining prompts when initial output is not quite right.
4. **How to verify conventions** -- Walking through how to check that AI-generated code meets team standards.
Additionally, the session covers task selection and the complete commit-and-review workflow.
Question 18
What is the recommended approach to handling AI-related disagreements within a team?
Show Answer
Handle AI-related disagreements the same way you handle other technical disagreements: with data, examples, and mutual respect. Specifically:
- If AI-generated code does not meet quality standards, point to specific issues rather than making general claims about AI.
- If a developer's AI usage diverges from team conventions, treat it as an opportunity to refine the conventions rather than a personal failing.
- Use concrete evidence rather than opinions to resolve disputes.
Question 19
According to the chapter, what is the most important factor in whether a shared prompt library gets used?
A) The storage technology (Git vs. wiki vs. custom tool)
B) The number of prompts in the library
C) The ease of access
D) The complexity of the metadata schema
Show Answer
**C) The ease of access.** The chapter states: "The most important factor is not the storage technology but the habit of using it. Make the library easy to access, and it will be used. Make it difficult, and developers will fall back to their personal prompts."Question 20
Question 20
What is the recommended attribution approach for commit messages involving AI-generated code?
Show Answer
The recommended approach is balanced attribution: note significant AI involvement in commit messages without flagging every minor AI interaction. The chapter provides an example:

    feat: Add order processing pipeline

    Implemented the order validation, payment processing, and fulfillment
    stages of the order pipeline. Test suite covers happy path and all
    error scenarios.

    AI-assisted: Core pipeline structure and state machine generated via
    Claude using the pipeline-generator-v2 prompt template.
This approach notes major AI-generated code while keeping the commit message focused on the change itself. Inline code comments for AI attribution are discouraged as they create noise.
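One way a team could operationalize this convention (the chapter does not prescribe tooling) is a small helper that recognizes the `AI-assisted:` trailer, usable from a `commit-msg` Git hook. The function below is a hypothetical sketch; a real hook would read the message file Git passes as its first argument:

```python
def has_ai_attribution(message: str) -> bool:
    """Return True if any line of a commit message starts with the
    'AI-assisted:' trailer. Informational only -- the chapter recommends
    noting significant AI involvement, not policing every commit."""
    return any(line.lstrip().startswith("AI-assisted:")
               for line in message.splitlines())

example = """feat: Add order processing pipeline

AI-assisted: Core pipeline structure generated via Claude.
"""
print(has_ai_attribution(example))                            # True
print(has_ai_attribution("fix: correct pagination bug"))      # False
```

Keeping such a check advisory rather than blocking matches the chapter's guidance against flagging every minor AI interaction.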
Question 21
What are the five stages of the organizational AI adoption curve?
Show Answer
The five stages are:
1. **Pioneers (1-3 months):** A few enthusiastic developers start using AI tools on their own. Results are promising but inconsistent.
2. **Early Teams (3-6 months):** One or two teams formalize their AI practices with conventions, prompt libraries, and metrics.
3. **Expansion (6-12 months):** Success stories attract attention. More teams adopt AI practices, adapting the early teams' approaches.
4. **Standardization (12-18 months):** The organization establishes shared standards, a CoE, and common infrastructure. Practices converge on best practices.
5. **Optimization (18+ months):** AI practices are deeply embedded in the development culture. Focus shifts to continuous improvement.
Question 22
The chapter lists four common onboarding pitfalls. What are they?
Show Answer
The four common onboarding pitfalls are:
1. **Over-reliance:** New members accept AI output uncritically without evaluating it.
2. **Under-reliance:** Experienced developers resist using AI tools because they feel faster without them.
3. **Tool overwhelm:** Introducing too many tools at once creates confusion.
4. **Prompt perfectionism:** Spending too long crafting the "perfect" prompt instead of iterating.
The remedies are: teach skeptical evaluation, demonstrate clear AI value in specific scenarios, start with the primary tool and add others gradually, and teach the iterative prompting approach.
Question 23
Why does the chapter recommend against cluttering code with inline AI attribution comments?
Show Answer
The chapter recommends against inline AI attribution comments because they create noise. The code should speak for itself. Comments should be reserved for cases where AI-generated code uses an unusual approach that might confuse future readers -- in which case the comment explains the approach, not the fact that it was AI-generated. The rationale is that what matters is whether the code is correct, readable, and maintainable, not whether a human or an AI wrote it.
Question 24
What is the difference between the "Top-Down Mandate" and the convention adoption process recommended in the chapter?
Show Answer
The **Top-Down Mandate** anti-pattern is a directive from leadership to "use AI for everything" without investment in training, tools, or support. It results in superficial compliance and cynicism because developers feel imposed upon rather than consulted. The **recommended convention adoption process** is bottom-up and collaborative: (1) identify a real pain point, (2) propose a convention, (3) discuss and refine with the team, (4) run a trial period, and (5) formalize or discard based on results. This process respects developer autonomy, builds genuine consensus, and produces conventions the team actually believes in because they co-created them.
Question 25
The chapter states that scaling AI practices is "a cultural change, not just a technical one." What does this mean in practical terms?