Chapter 34 Quiz: Measuring AI ROI


Multiple Choice

Question 1. Which of the following is NOT one of the four pillars of AI value described in Section 34.2?

a) Direct revenue impact
b) Cost reduction
c) Market share growth
d) Strategic optionality


Question 2. Athena's churn prediction model identifies at-risk customers with 78 percent precision. The $4.2 million annual value depends on the retention team contacting those customers and offering appropriate incentives. This illustrates which challenge of AI ROI measurement?

a) Outcomes are probabilistic, not deterministic: the value depends on human behavior downstream of the model
b) Costs are distributed and hard to isolate
c) The counterfactual is unclear
d) Time horizons are uncertain


Question 3. A company's AI team claims their model saves $5 million annually by automating 30 percent of the work done by 200 salaried employees. No employees have been laid off, and there is no structured plan for how the saved time will be used. What is the most accurate assessment?

a) The $5 million savings is real because the time has been freed up
b) The $5 million savings is theoretical: real value depends on how the freed time is productively deployed
c) The savings should be calculated as 30 percent of total salary costs for 200 employees
d) The model's ROI cannot be calculated without laying off employees


Question 4. Which attribution method provides the strongest causal evidence for AI's revenue impact?

a) Before/after comparison
b) Modeling-based attribution
c) A/B testing with random assignment
d) Executive estimation


Question 5. The TCO multiplier for AI systems over a five-year horizon is typically in the range of:

a) 1.0-1.5x development cost
b) 1.5-2.0x development cost
c) 3-5x development cost
d) 8-12x development cost


Question 6. In the AI portfolio matrix, which project category should receive the largest share of the AI budget?

a) Quick Wins (high confidence, moderate impact)
b) Strategic Bets (medium confidence, high impact)
c) Moonshots (low confidence, high impact)
d) Experiments (low confidence, low-medium impact)


Question 7. Athena's visual merchandising assistant was killed after $1.2 million in investment. What was the primary reason?

a) The technology did not work
b) There was no clear business owner to drive adoption
c) The project exceeded its budget by more than 150 percent
d) A competitor launched a superior product


Question 8. An AI project has spent $2 million over 15 months. The model performance is improving slowly. The original business champion has been promoted and is no longer involved. The data science team is enthusiastic and requests six more months. Which cognitive bias most threatens sound decision-making here?

a) Confirmation bias
b) Anchoring bias
c) Sunk cost fallacy
d) Availability bias


Question 9. What does the J-curve pattern describe in AI investments?

a) The rapid increase in model accuracy during training
b) The pattern of negative returns during the investment phase followed by accelerating positive returns as models are deployed
c) The decline in AI project success rates as organizations attempt more complex projects
d) The shape of the learning curve for data science teams


Question 10. In the AI ROI Dashboard design, which layer is appropriate for a board meeting presentation?

a) Layer 1: Portfolio Summary (one-page executive view)
b) Layer 2: Project Scorecards (one page per project)
c) Layer 3: Detailed Analysis (methodology, sensitivity, Monte Carlo)
d) All three layers should be presented to the board


Question 11. A company reports that its AI program has an ROI of "$3.50 per $1 invested." According to Section 34.12, this figure places the company:

a) Well below industry median
b) At approximately the industry median
c) In the top quartile
d) Among the top 10 percent of AI adopters


Question 12. The AIROICalculator's Monte Carlo simulation serves which purpose?

a) It determines the exact ROI the project will achieve
b) It identifies the single most important variable affecting ROI
c) It produces a probability distribution of NPV outcomes, accounting for uncertainty in both costs and value estimates
d) It optimizes the project's budget allocation
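The simulation in Question 12 can be sketched minimally. Every number and distribution choice below is a hypothetical illustration (the chapter's AIROICalculator internals are not reproduced here); the point is that sampling uncertain costs and values yields a distribution of NPV outcomes rather than a single figure:

```python
import random

random.seed(7)  # reproducible illustration

N = 10_000   # simulation runs
r = 0.10     # hypothetical discount rate
YEARS = 5

npvs = []
for _ in range(N):
    # Hypothetical triangular distributions (low, high, mode), in dollars
    build_cost = random.triangular(1_500_000, 3_500_000, 2_000_000)
    annual_value = random.triangular(300_000, 1_200_000, 700_000)
    # NPV: up-front cost, then discounted annual value over the horizon
    npv = -build_cost + sum(annual_value / (1 + r) ** t for t in range(1, YEARS + 1))
    npvs.append(npv)

npvs.sort()
p_positive = sum(v > 0 for v in npvs) / N
p5, p95 = npvs[int(0.05 * N)], npvs[int(0.95 * N)]
print(f"P(NPV > 0) = {p_positive:.0%}, 5th pct = {p5:,.0f}, 95th pct = {p95:,.0f}")
```

The output is exactly the kind of summary described in answer (c): a probability of positive NPV plus tail percentiles, not a single point estimate.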


Question 13. A risk reduction AI system (fraud detection) prevents an estimated $3 million in annual fraud losses. However, this $3 million never appears as a line item on the income statement. This illustrates which measurement challenge?

a) Risk reduction value is invisible because you are measuring events that did not happen
b) Risk reduction value cannot be quantified
c) Fraud detection models cannot have positive ROI
d) Risk reduction should not be included in AI ROI calculations


Question 14. Which of the following is an example of strategic optionality in AI investment?

a) An AI model that reduces customer service costs by 25 percent
b) A clean, labeled dataset that enables three future AI projects not yet started
c) A recommendation engine that increases average order value by $12
d) A fraud detection model that prevents $2 million in annual losses


Question 15. Professor Okonkwo says: "Every AI ROI number you will ever see was produced by someone with an agenda." This statement is intended to:

a) Suggest that AI ROI numbers are always fabricated
b) Encourage students to understand the methodology, assumptions, and incentives behind any ROI claim
c) Argue that AI ROI measurement is not worth attempting
d) Recommend that only the CFO should calculate AI ROI


Short Answer

Question 16. Explain why "full-time equivalent (FTE) savings" from AI automation frequently overstate actual cost savings. Provide one example of when FTE savings are real and one example of when they are misleading.


Question 17. Define kill criteria and explain why they should be established before an AI project begins rather than during the project.


Question 18. The CFO of Athena Retail Group said that the two killed projects were "the most impressive part" of Ravi's portfolio review. Explain why killing projects, when done well, strengthens rather than weakens an AI program's credibility.


Question 19. A project has an NPV of $5 million based on the base case assumptions. The Monte Carlo simulation shows a 72 percent probability of positive NPV, with a 5th percentile NPV of -$1.2 million and a 95th percentile of $11.5 million. Should this project be funded? Explain your reasoning, addressing both the quantitative evidence and any qualitative factors you would consider.
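One way to reason quantitatively about Question 19's figures: if the NPV distribution were normal, the stated 5th and 95th percentiles would imply roughly a 91 percent chance of positive NPV; the stated 72 percent is much lower, which tells you the distribution is right-skewed, with more of its mass below the base case than a symmetric view suggests. A quick check, using only standard normal math (nothing here comes from the chapter):

```python
from math import erf, sqrt

p5, p95 = -1.2, 11.5              # stated percentiles, in $ millions
mean = (p5 + p95) / 2             # midpoint a normal fit would imply
sigma = (p95 - p5) / (2 * 1.645)  # z-score at the 95th percentile is about 1.645

# P(NPV > 0) under that normal fit
p_pos = 0.5 * (1 + erf(mean / (sigma * sqrt(2))))
print(f"Normal fit implies P(NPV > 0) of about {p_pos:.0%}, vs. the stated 72%")
```

The gap between the fitted 91 percent and the reported 72 percent is itself useful evidence when answering the question: the downside tail is fatter than the headline percentiles alone suggest.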


Question 20. What is the difference between project-level ROI, program-level ROI, platform-level ROI, and strategic ROI? Why is conflating them a common error?


True or False

Question 21. True or False: A/B testing is always the best method for measuring AI revenue attribution.


Question 22. True or False: The operations phase (monitoring, retraining, maintenance) typically accounts for 40 to 60 percent of an AI system's total cost of ownership over a five-year horizon.


Question 23. True or False: A project with a strongly positive NPV should never be killed.


Question 24. True or False: If an AI project's NPV is highly sensitive to the annual value estimate but insensitive to the discount rate, the team should focus its measurement effort on validating the annual value assumptions.


Question 25. True or False: The "solution looking for a problem" anti-pattern occurs when a technically impressive AI capability is developed without a clear business process to transform and a committed business owner to drive adoption.


Answer key available in the appendices.