Chapter 33 Quiz: Project Planning and Estimation
Test your understanding of AI-augmented project planning, estimation techniques, risk management, and methodology adaptations. 25 questions total.
Question 1
What is the Asymmetric Acceleration Principle?
Show Answer
The Asymmetric Acceleration Principle states that AI does not speed up all parts of software development equally. Some phases (particularly implementation) accelerate by 3-10x, while others (requirements gathering, design decisions, stakeholder communication, deployment) see minimal or no acceleration. This uneven impact is the central challenge of project planning in the AI era.
Question 2
Which phase of the feature development lifecycle benefits MOST from AI acceleration, and what is its typical acceleration factor range?
Show Answer
**Implementation** benefits most from AI acceleration, with a typical acceleration factor of **3x-10x**. This phase includes code generation, boilerplate creation, algorithm implementation, and integration plumbing -- all tasks where AI coding assistants excel.
Question 3
What is the Planning Paradox described in Section 33.1?
Show Answer
The Planning Paradox is the observation that because AI accelerates implementation so dramatically, the *relative* proportion of time spent on planning, design, and stakeholder communication *increases*. A project where implementation once consumed 60% of total effort might now see implementation consume only 30%, making the other phases proportionally more important. This means planning is more valuable in the AI era, not less.
Question 4
Name the three AI acceleration tiers and provide the acceleration factor range for each.
Show Answer
1. **Tier 1: High Acceleration** -- 3x-10x faster. Examples: CRUD endpoints, data models, test generation, boilerplate, documentation.
2. **Tier 2: Moderate Acceleration** -- 1.5x-3x faster. Examples: complex algorithms, integration code, debugging, refactoring.
3. **Tier 3: Minimal Acceleration** -- 1.0x-1.5x faster. Examples: requirements gathering, architecture decisions, user research, stakeholder communication, production deployment.
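The tier table above can be sketched as a simple estimate adjustment: divide a pre-AI estimate by the tier's acceleration factor. The midpoint factors and function name below are illustrative assumptions, not values prescribed by the chapter.

```python
# Hypothetical sketch: adjust a pre-AI task estimate by its acceleration tier.
# Midpoints of the tier ranges are an illustrative choice.
TIER_FACTORS = {1: 6.5, 2: 2.25, 3: 1.25}

def ai_adjusted_hours(pre_ai_hours: float, tier: int) -> float:
    """Divide the pre-AI estimate by the tier's acceleration factor."""
    return pre_ai_hours / TIER_FACTORS[tier]

# A 13-hour CRUD task (Tier 1) vs. a 13-hour deployment task (Tier 3):
print(ai_adjusted_hours(13, 1))  # 2.0
print(ai_adjusted_hours(13, 3))  # 10.4
```

Note how two tasks with identical pre-AI estimates diverge sharply once tier is accounted for -- this is the asymmetry the chapter warns about.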
Question 5
What is an "AI-atomic task" and what are its five characteristics?
Show Answer
An AI-atomic task is a unit of work that can be fully described in a single prompt and fully implemented in a single AI session. Its five characteristics are:
1. **Self-contained**: Can be understood without extensive cross-referencing
2. **Well-specified**: Inputs, outputs, and behavior are clearly defined
3. **Context-bounded**: All relevant context fits within the AI's context window
4. **Independently testable**: The result can be verified in isolation
5. **Integration-aware**: Specifies how its output connects to the larger system
Question 6
What is the "Two-Paragraph Rule" for task decomposition?
Show Answer
The Two-Paragraph Rule states that if you cannot describe a task completely in two paragraphs (about 200 words), it is too large for an AI-atomic task and should be broken down further. Conversely, if a task description fits in a single sentence, it might be too small and should be combined with related tasks to give the AI more context about the broader goal.
Question 7
In the Three-Point AI Estimation Method, what are the three scenarios and what formula is used to calculate the weighted estimate?
Show Answer
The three scenarios are:
1. **AI-Optimistic**: AI generates correct, clean code on the first prompt with minimal review needed
2. **AI-Realistic**: AI generates a solid starting point requiring some debugging or iterative refinement
3. **AI-Pessimistic**: AI struggles, producing code with subtle bugs or architectural issues requiring significant human intervention
The PERT formula is: **Estimate = (Optimistic + 4 * Realistic + Pessimistic) / 6**
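The PERT formula translates directly to code. The function name and sample hours below are our own illustration, not values from the chapter.

```python
def pert_estimate(optimistic: float, realistic: float, pessimistic: float) -> float:
    """PERT weighted average: (O + 4R + P) / 6."""
    return (optimistic + 4 * realistic + pessimistic) / 6

# A task that might take 1 hour (AI nails it), 3 hours (some iteration),
# or 8 hours (AI struggles, heavy human intervention):
print(pert_estimate(1, 3, 8))  # 3.5
```

The 4x weight on the realistic scenario keeps a single pessimistic outlier from dominating the estimate.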
Question 8
What is the Portfolio Effect in AI estimation, and why is it important?
Show Answer
The Portfolio Effect is the observation that while individual AI task estimates have higher variance (more uncertainty), the errors tend to cancel out across many tasks in a project. Some tasks will hit the optimistic case, others the pessimistic case, and the overall project estimate will be reasonably close to the sum of the realistic estimates. This is important because it means AI estimation works better at the project level than at the individual task level, giving teams confidence in aggregate project timelines even when individual task estimates are uncertain.
Question 9
Why can code review become a bottleneck in AI-augmented projects?
Show Answer
When developers produce code 5x faster with AI assistance, the volume of code entering the review queue increases proportionally. However, code review capacity does not scale at the same rate because it requires senior developers' focused time and attention. The result is that code review becomes the bottleneck -- items move through development quickly but stack up waiting for review. Teams must plan for this by allocating more time to code review in AI-augmented project plans.
Question 10
What is the Velocity Trap, and how should teams avoid it?
Show Answer
The Velocity Trap occurs when teams experience a dramatic initial surge in code output after adopting AI tools and then promise more features, shorten timelines, or reduce team sizes based on this peak velocity. The danger is that increased code velocity often outpaces the team's ability to review, test, integrate, and deploy code. This creates a growing backlog of unreviewed code, increasing technical debt, and eventually a slowdown. Teams should plan for sustainable velocity rather than peak velocity, and wait for several sprints of data before adjusting commitments.
Question 11
List the five AI-specific project risks discussed in Section 33.6.
Show Answer
1. **Quality Consistency Risk** -- AI-generated code varies in quality across the codebase
2. **Architectural Drift Risk** -- Quick AI solutions lead to architecturally inconsistent codebases
3. **Vendor Lock-in Risk** -- Dependency on specific AI tools that may change pricing or availability
4. **Overconfidence Risk** -- Teams overestimate AI capabilities based on early successes
5. **Security Risk** -- AI-generated code may contain security vulnerabilities that are not immediately apparent
Question 12
What is the recommended approach for using AI speed -- building more features or building better features?
Show Answer
The chapter recommends using AI speed to build **fewer features better** rather than more features at lower quality. The time saved on implementation should be invested in better testing, more thorough code review, improved documentation, accessibility, performance optimization, and user experience polish. More features mean more maintenance, more complexity, and potential product dilution. A product with 10 well-implemented features will outperform one with 30 hastily built features.
Question 13
How does the MoSCoW method need to be adjusted for AI-augmented projects?
Show Answer
AI acceleration may shift features between MoSCoW categories. Features previously in the "Could have" category because of high implementation cost might move to "Should have" if AI reduces that cost significantly. Conversely, features that were "Should have" might move to "Could have" if they require extensive integration testing or human-intensive work that AI cannot accelerate. Teams should review their MoSCoW categorization through the lens of AI-adjusted effort and reassign features based on the recalculated cost-benefit ratio.
Question 14
What is the "hockey stick" burndown pattern, and why does it occur in AI-augmented sprints?
Show Answer
The "hockey stick" burndown pattern shows a steep initial decline in remaining story points (as AI-accelerated tasks are completed quickly) followed by a flattening of the curve (as remaining tasks are ones AI cannot accelerate). It occurs because AI-friendly tasks are typically completed first due to their faster turnaround, leaving non-acceleratable tasks to dominate the later portion of the sprint. This can alarm stakeholders who expect a linear decline. The chapter recommends addressing it by separating burndowns for accelerated and non-accelerated tasks, educating stakeholders beforehand, and mixing task types throughout the sprint.
Question 15
What is Quality-Adjusted Velocity (QAV) and how is it calculated?
Show Answer
Quality-Adjusted Velocity is a metric that penalizes raw velocity for defects. The formula is:
**QAV = Raw Velocity - (Defect Count * Average Defect Resolution Cost in Story Points)**
This metric prevents teams from gaming velocity by shipping AI-generated code that has not been properly reviewed. If defects increase alongside velocity, the QAV will remain flat or decline, signaling a quality problem despite apparently high throughput.
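The QAV formula is simple enough to encode directly. The function name is ours; the sample sprint uses the post-AI numbers from the worked scenario in Question 25.

```python
def quality_adjusted_velocity(raw_velocity: float, defects: int,
                              resolution_cost: float) -> float:
    """QAV = raw velocity minus story points consumed by defect rework."""
    return raw_velocity - defects * resolution_cost

# A sprint with 80 raw points but 15 defects at 2 points each:
print(quality_adjusted_velocity(80, 15, 2))  # 50
```

Tracking this alongside raw velocity makes quality regressions visible as a widening gap between the two numbers.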
Question 16
What are the "Three Timelines" recommended for stakeholder communication?
Show Answer
When presenting AI-augmented project timelines to stakeholders, present three scenarios:
1. **Aggressive** (maximum AI acceleration): "If everything goes perfectly with our AI tools, we could finish by X."
2. **Expected** (realistic AI acceleration): "Based on our historical data, we expect to finish by Y."
3. **Conservative** (minimal AI acceleration): "If AI tools underperform or we encounter significant technical challenges, we would finish by Z."
This transparently communicates uncertainty and gives stakeholders a range to plan around.
Question 17
How should the Definition of Done be expanded for AI-augmented Scrum teams?
Show Answer
The Definition of Done should include AI-specific criteria:
- Code has been reviewed by a human developer (regardless of origin)
- Automated security scan has passed
- Code conforms to team architectural standards
- AI-generated tests have been reviewed for meaningful coverage (not just line coverage)
- Documentation is accurate and not merely AI boilerplate
Question 18
Why might shorter sprints (one week instead of two) be beneficial for AI-augmented teams?
Show Answer
AI acceleration means more work fits into each sprint, and shorter sprints provide more frequent opportunities to recalibrate estimates based on actual AI performance. Since AI introduces new sources of variability in task completion times, more frequent feedback loops help teams adjust their planning and estimation more quickly. Shorter sprints also prevent the accumulation of large volumes of unreviewed AI-generated code.
Question 19
What is the AI Sprint Coefficient and what are typical values?
Show Answer
The AI Sprint Coefficient is the ratio of AI-augmented velocity to pre-AI velocity, calculated after three to four sprints of data. This single number captures a team's effective AI productivity gain and can be used for high-level planning. Typical values range from **1.5x to 3.0x** for experienced teams, with a long tail toward 1.0x for teams that primarily do Tier 3 (non-acceleratable) work.
Question 20
How does AI acceleration affect Kanban WIP limits?
Show Answer
AI-augmented developers can often work on more items simultaneously because AI handles much of the implementation while the developer can context-switch to planning or reviewing other tasks. WIP limits may be increased modestly (by 1-2 items per developer), but increases should be gradual and quality metrics must be monitored to ensure the additional throughput does not compromise standards. If items flow quickly through "Development" but stack up in "Review," there is a flow imbalance to address.
Question 21
In a SAFe environment, how does uneven AI acceleration across teams affect Program Increment planning?
Show Answer
Different teams may experience very different acceleration rates. A team building CRUD-heavy microservices might accelerate by 4x, while a team handling complex algorithmic processing might only see 1.5x. PI Planning must account for these differences because cross-team dependencies may become misaligned -- a fast team might complete their work well before a dependent team finishes theirs. The Release Train Engineer must coordinate based on each team's specific acceleration profile rather than assuming uniform velocity changes.
Question 22
What five AI-specific productivity metrics does the chapter recommend tracking?
Show Answer
1. **AI Utilization Rate** -- Percentage of coding tasks using AI assistance (target: 60-80%)
2. **First-Prompt Success Rate** -- Percentage of AI outputs usable without major revision (target: 40-60%)
3. **AI Code Retention Rate** -- Percentage of AI-generated code surviving review (target: 70-90%)
4. **Prompt-to-Production Ratio** -- Number of prompts needed per production-ready feature (target: 3-8)
5. **AI Rework Rate** -- Time spent fixing AI-generated code versus total AI time (target: <30%)
Question 23
What is the Compound Risk Effect described in Section 33.6?
Show Answer
The Compound Risk Effect states that AI-specific risks compound with traditional project risks rather than simply adding to them. A project with both unclear requirements (traditional risk) and inconsistent AI code quality (AI-specific risk) is multiplicatively more risky than either risk alone because bad AI output built on bad requirements produces defects that are harder to trace and fix. Teams must address both traditional and AI-specific risks to avoid this compounding effect.
Question 24
How does AI change the phase distribution in a Waterfall methodology?
Show Answer
In traditional Waterfall: Requirements 15%, Design 20%, Implementation 40%, Testing 15%, Deployment 10%. In AI-augmented Waterfall: Requirements 20%, Design 25%, Implementation 20%, Testing 20%, Deployment 15%. While the total project duration may decrease, the relative proportions shift. Requirements and design become proportionally larger (because their absolute duration is unchanged while implementation shrinks), and testing expands because there is more code to test. This shift actually brings Waterfall closer to what experts have always recommended: spending more time on requirements and design.
Question 25
A team's raw velocity has increased from 40 to 80 story points per sprint after adopting AI tools, but their defect count has increased from 5 to 15 per sprint. Using the QAV formula (with 2 story points per defect resolution cost), compare the pre-AI and post-AI Quality-Adjusted Velocity. What does this tell you?
Show Answer
**Pre-AI QAV**: 40 - (5 * 2) = 40 - 10 = **30 quality-adjusted points**
**Post-AI QAV**: 80 - (15 * 2) = 80 - 30 = **50 quality-adjusted points**
The QAV has improved from 30 to 50 (a 67% increase), which is significantly less than the raw velocity improvement of 100% (40 to 80). This tells us that while the team is genuinely more productive with AI, a significant portion of the apparent velocity gain is being consumed by increased defects. The team should focus on improving AI code quality (better prompts, more thorough review) to bring the QAV closer to the raw velocity. The gap between raw velocity and QAV (30 points) represents productivity lost to defect rework.
Review any questions you found challenging by revisiting the relevant sections of the chapter.