Case Study: Alex's ROI Audit — Proving the Value of AI Investment to Leadership
The Challenge
Six months after rolling out AI tools to her marketing team (described in Chapter 38's case study), Alex received an email from her VP of Marketing. The company was reviewing all software subscriptions as part of a budget rationalization exercise. The email asked team leads to "justify ongoing investment in AI tools with documented evidence of business value."
Alex had known this moment was coming. She'd started tracking her team's AI use two weeks into the rollout precisely because she expected to face this question. But now that it was here, she found herself staring at her tracking spreadsheet wondering whether the data she'd collected was enough.
She had the numbers. The question was whether they told a compelling story.
What She Had Tracked
From the beginning of the rollout, Alex had asked each team member to maintain a simple log of their AI interactions. Not every interaction — that would be unworkable — but a weekly summary that captured:
- Total estimated time saved from AI assistance
- The three most significant AI interactions (task type, time saved, quality assessment)
- Any problems or quality issues encountered
She'd supplemented this with her own records: the team's revision request log (how often clients requested revisions on deliverables), client satisfaction survey results (the team ran quarterly pulse surveys with major clients), and her own notes from weekly team check-ins where AI use was a standing agenda item.
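The weekly log the team kept could be captured with a structure along these lines. This is a sketch only: the field names and the 1–5 quality scale are illustrative assumptions, not details from the case study.

```python
from dataclasses import dataclass, field

@dataclass
class SignificantInteraction:
    # One of the week's three most significant AI interactions
    task_type: str        # e.g. "blog draft" (illustrative)
    minutes_saved: int    # self-reported estimate
    quality_rating: int   # subjective assessment; 1-5 scale assumed here

@dataclass
class WeeklyLog:
    # One team member's weekly summary, mirroring the three bullets above
    member: str
    week: str                    # e.g. "2024-W18" (ISO week, assumed format)
    total_minutes_saved: int     # total estimated time saved from AI assistance
    top_interactions: list[SignificantInteraction] = field(default_factory=list)
    issues: list[str] = field(default_factory=list)  # problems or quality issues
```

Keeping the log this small is the point: three fields per week is light enough that people actually fill it in, which is what made six months of consistent data possible.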
The data wasn't perfect. Time estimates were self-reported and probably subject to some upward bias. Quality ratings were subjective. But it was consistent and directional, and she'd been collecting it for six months.
Building the Analysis
Alex spent a Saturday afternoon building the analysis. She organized it around three questions: What did we save? What did we improve? What did it cost?
What We Saved: Time Analysis
From her team's weekly logs, Alex calculated total time savings over six months.
The data showed significant variability across team members — which itself was informative. Marcus (content) and David (campaign management) were saving substantial time. Priya (social media) had improved considerably after her early quality problems. The three team members who'd been slow adopters were saving less time but had improved over the period.
Aggregated, the team's AI assistance was saving an estimated 34 hours per month across all ten team members — roughly 204 hours over the six-month period.
She calculated the financial value of those hours using each team member's loaded cost rate (a figure, including benefits and overhead, that she got from her HR contact). The aggregate value of 204 hours recovered: approximately $14,280.
She was careful to note the uncertainty in this figure: time estimates were self-reported, and actual savings could be 20% higher or lower. She included a "conservative estimate" column in her analysis that used 80% of the reported savings, yielding $11,424.
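The valuation above reduces to a few lines of arithmetic. This sketch reproduces the figures in the text; note that the $70/hour loaded rate is implied by $14,280 over 204 hours rather than stated directly.

```python
monthly_hours_saved = 34   # aggregate across all ten team members
months = 6
loaded_rate = 70.0         # implied: 14,280 / 204 (dollars per hour)

total_hours = monthly_hours_saved * months      # 204 hours over six months
standard_value = total_hours * loaded_rate      # $14,280
conservative_value = standard_value * 0.80      # $11,424 at 80% of reported savings

print(f"Hours recovered:       {total_hours}")
print(f"Standard estimate:     ${standard_value:,.0f}")
print(f"Conservative estimate: ${conservative_value:,.0f}")
```

Publishing both the standard and the 80% figures, rather than a single point estimate, is what let Alex present self-reported data honestly.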
What We Improved: Quality Analysis
This section of the analysis was harder but ultimately more compelling.
Client revision requests: Alex pulled data from the team's project management system. In the three months before AI adoption, the team had received 23 client revision requests on major deliverables. In the three months after the policy and quality standards were established, the number dropped to 14. That's a 39% reduction in revision requests.
She was careful about attribution: not all of the improvement was due to AI use. The team had also implemented the brand voice checklist and the client-facing review requirement, both of which would improve quality independently. She noted this in her analysis rather than claiming AI credit for the full improvement.
Client satisfaction scores: The quarterly client pulse surveys showed stable scores in the two quarters since AI adoption — no deterioration in satisfaction despite significantly higher content output volume. Alex framed this as a quality maintenance achievement: "We're producing 30% more content without a decline in client satisfaction scores."
Error rate on financial and factual claims: The team tracked this through their internal review log. Pre-adoption, the review log showed 8 factual issues caught in review over the prior quarter. Post-adoption (with verification requirements in place), the review log showed 5 issues caught — a modest improvement, though she acknowledged that the sample size was small.
What It Cost
The cost analysis was straightforward:
- AI tool subscriptions: $2,400 over six months (enterprise license for the team)
- Alex's management time: approximately 80 hours over six months for policy development, training, and ongoing support (valued at her hourly cost rate)
- Total investment: approximately $6,800
The ROI
Conservative case: $11,424 time value recovered / $6,800 invested = 1.68x ROI
Standard case: $14,280 time value recovered / $6,800 invested = 2.1x ROI
Plus the quality narrative: 39% reduction in client revision requests, stable satisfaction scores through 30% volume increase.
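Put together, both ROI cases come from dividing recovered time value by total cost. This sketch reproduces the numbers from the sections above; the $4,400 value of the 80 hours of management time is implied by the $6,800 total, not stated directly.

```python
# Costs over six months (from the case study)
subscriptions = 2_400           # enterprise license for the team
management_time = 4_400         # 80 hours of Alex's time; value implied by the total
total_cost = subscriptions + management_time     # $6,800

# Time value recovered (from the time analysis)
standard_value = 14_280
conservative_value = 11_424     # 80% of reported savings

roi_standard = standard_value / total_cost           # 2.1x
roi_conservative = conservative_value / total_cost   # 1.68x

# The quality headline presented alongside the ROI
revision_reduction = (23 - 14) / 23                  # ~39% fewer revision requests

print(f"Conservative case: {roi_conservative:.2f}x")
print(f"Standard case:     {roi_standard:.2f}x")
print(f"Revision requests: {revision_reduction:.0%} reduction")
```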
Presenting to Leadership
Alex met with her VP and the CFO to present the analysis. She'd prepared a four-slide deck: context, what we did, what we found, what we recommend.
She led with the quality story, not the ROI. Her instinct was right: the CFO's first question wasn't about time savings — it was "Are the quality numbers reliable?" When Alex walked through the methodology — actual client revision requests tracked in the project management system — the CFO relaxed. "That's real data," he said.
The ROI calculation got less scrutiny than she'd expected. A 2x return on a six-month investment, with conservative assumptions, in a context where the alternative is no productivity investment at all, is not a hard case to make.
The questions she got:
"Why aren't all team members saving equal time?" Alex explained the skill development arc and noted that the team members saving less time now had been at the same low level four months ago and had improved significantly. She showed the trend data.
"What happens if we remove the management overhead from your ROI calculation?" She'd anticipated this. "If the management overhead is excluded, the ROI improves substantially. But I'd argue the management overhead is necessary — without the policy, training, and quality standards, we wouldn't have the quality results I'm showing you. The alternative to the management investment isn't free AI use; it's ungoverned AI use with the quality problems that implies."
"What would you recommend for next year?" Her recommendation: expand the license to include two additional teams (customer success and sales), with a dedicated change management budget that covered three months of Alex's time to support the rollout. She had a secondary ROI calculation for this expansion that she walked through.
The outcome: the license was renewed and expanded to customer success, and budget was approved to support the expansion.
What Worked in the Analysis
Leading with real operational data. Client revision requests tracked in actual project management records were more credible than time savings estimates. The CFO engaged differently with data from a system of record than with self-reported logs.
Being transparent about uncertainty. Showing a conservative estimate alongside the standard estimate, and explicitly noting the uncertainty in self-reported time data, built credibility rather than undermining it.
Separating correlation from causation. Noting that quality improvements couldn't be fully attributed to AI (the quality standards played a role) was initially counterintuitive but made the analysis more trustworthy.
Framing cost as investment, not expense. The management time was the hardest cost to justify. Alex's framing — that ungoverned AI use would also have costs, just in different forms — made the management investment more defensible.
What She Wished She'd Done Differently
Alex identified two gaps in her analysis:
She wished she'd collected pre-adoption baseline data more systematically. The comparison between pre- and post-adoption quality was real but imprecise because she hadn't explicitly tracked the same metrics before adoption. If she'd set up her tracking two months before the rollout to establish a clean baseline, the quality comparison would have been more rigorous.
She wished she'd tracked the productivity distribution. Her analysis showed aggregate time savings but couldn't easily show the bimodal distribution — that Marcus and David were saving substantial time while some team members were saving much less. This distribution data would have supported the skill development investment argument more clearly.
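The distribution she wished she'd reported is easy to derive from the same weekly logs. The book gives only the 34-hour aggregate, so the per-member figures in this sketch are invented purely to illustrate the kind of skew she describes, with a few members accounting for most of the savings.

```python
from statistics import mean, median

# Hypothetical monthly hours saved per member; only the 34-hour aggregate
# is reported in the case study, so individual figures are illustrative
hours_saved = {
    "Marcus": 9, "David": 8, "Priya": 5,
    "m4": 3, "m5": 3, "m6": 2, "m7": 2, "m8": 1, "m9": 0.5, "m10": 0.5,
}

total = sum(hours_saved.values())   # 34, matching the reported aggregate
top_two = sorted(hours_saved.values(), reverse=True)[:2]

print(f"Total: {total} h/month; mean {mean(hours_saved.values()):.1f}, "
      f"median {median(hours_saved.values()):.1f}")
print(f"Top two members account for {sum(top_two) / total:.0%} of savings")
```

A mean well above the median is the signature of this skew, and it is exactly the evidence that would have supported the skill-development argument: the aggregate hides that the biggest lever is bringing the lower half of the distribution up.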
Both gaps are fixable for the next reporting cycle. And having them identified is itself a measure of how far her measurement practice has developed.
The Key Lesson
Alex's ROI audit demonstrated something important about measurement: you're not just collecting data to justify a decision already made — you're collecting data to understand what's actually working and make better decisions.
The audit told her that the management overhead investment was defensible (good — she was worried about that). It also told her that team member skill development was the biggest driver of time savings variance (useful — she should invest there next). And it revealed that the quality story was more compelling than the time savings story for her specific audience (important — she'll lead with quality next time).
Measurement isn't just about proving value. It's about understanding what creates value and investing more there.
Alex's expansion to customer success and the measurement framework she established for that rollout are part of the ongoing case thread in the book's supplementary materials.