Case Study: Alex's Team Rollout — From Chaos to Cohesion in 60 Days
Background
Alex had been using AI tools in her own work for about eight months when her director made the announcement: the company was expanding its AI tool licenses to the entire marketing department, and as team lead, Alex was responsible for "driving adoption."
No budget was attached to the mandate. No policy guidelines from leadership. No training resources from IT. Just an email, a list of licensed tools, and the expectation that by the end of the quarter, the team would be "using AI effectively."
Alex's team of ten covered content creation, campaign management, social media, email marketing, and brand. Their work ranged from highly creative (brand copy, campaign concepts) to highly process-driven (email templates, social post scheduling) to analytically demanding (campaign performance reporting, audience segmentation). The AI use implications were different for each.
At the time of the announcement, Alex knew that at least three team members were already using AI tools on their own — she'd seen hints in their work and had one candid conversation with Marcus, the content manager, who told her he'd been using AI for first drafts for months. She suspected others were also experimenting. She had no idea what the other seven were doing.
The Problem She Walked Into
Alex's first instinct was to start with training — get everyone into a room, show them the tools, and let them figure it out. She'd seen this approach work in other contexts.
Before she organized anything, though, she caught a client-facing issue that reframed the situation.
One of her newer team members, Priya, had used AI to draft a social media campaign for a financial services client. The campaign was delivered on time, and Priya was clearly proud of it. Alex's review caught two problems. First, some of the statistics about consumer financial behavior were unverified; one turned out to be a commonly cited but outdated figure that didn't hold up. Second, the copy included a legal caveat that was slightly wrong for the specific product type.
Neither error would have been catastrophic, but both required rewriting and delayed the client delivery. More importantly, they revealed a pattern: Priya had been using AI to produce faster output without equivalent investment in verification.
Alex realized that generic AI training wouldn't fix this. The problem wasn't that Priya didn't know how to use AI — she clearly did. The problem was that she hadn't developed judgment about what AI output requires verification, and she didn't know what the quality standard was for AI-assisted client work.
The rollout couldn't be about access. It had to be about standards.
Phase 1: Understanding the Landscape (Days 1-14)
Alex put off organizing any training and spent the first two weeks doing informal one-on-ones with each team member. Her framing was simple: "I'm figuring out how to support the team on AI tools. Tell me what you're doing now."
What she found:
Marcus (content manager): Using AI for first drafts of almost all written content. Getting excellent results. His process: detailed prompts with brand voice guidelines, aggressive editing of AI output, strong verification instincts from his journalism background. He was essentially already doing it right.
Priya (social media coordinator): Using AI frequently but submitting with insufficient review (as Alex had already discovered). Skill but not yet judgment.
David (campaign manager): Using AI for data analysis summaries and stakeholder reports. Using it carefully — always verifying numbers, treating AI output as a starting point. Natural verification instincts.
Lisa (email marketing): Had tried AI tools but found the output "too generic for our brand." Had mostly stopped using it. This turned out to be a prompting issue — she wasn't providing enough brand context.
James (brand): Deeply skeptical. Felt AI-generated content undermined authenticity. Willing to try for specific limited use cases (briefing documents, competitive research) but resistant to AI in brand copy.
The other five: A mix of occasional explorers and non-starters, mostly citing "haven't had time to figure it out" as their reason for not engaging.
This landscape told Alex what she needed to know: she had two problems, not one. First, she needed to develop the team's AI skills. Second, she needed to establish quality standards that would prevent a repeat of the Priya situation.
Phase 2: Policy First, Training Second (Days 15-28)
Alex drafted a team AI policy using a simple structure: approved tools, what's okay to share with AI, what's not, what kinds of work require extra review, and what "done" looks like.
She shared the draft with Marcus, David, and James — her skeptic — asking each to punch holes in it from their different perspectives. Marcus identified a gap in the data handling section (the policy didn't address using client analytics data for prompts). David suggested clearer examples in the quality standards section. James pushed for a cleaner statement about disclosure to clients.
She revised the draft and shared it with the full team for comment before finalizing. The comment period produced six additional suggestions, two of which meaningfully improved the policy.
The most important decision in the policy: a client-facing deliverables review requirement. Any content going to a client with AI assistance needed a second set of human eyes — either Alex's or another senior team member's. This wasn't about distrust; it was about making the review process explicit and ensuring that AI-assisted work got scrutiny proportional to its stakes.
Phase 3: Shared Resources and Peer Learning (Days 29-42)
Rather than organizing a generic training session, Alex tried something that turned out to be the most effective move of the rollout: she asked Marcus, David, and Priya to each prepare a 20-minute "show and tell" of their actual AI workflows.
The session lasted two and a half hours instead of the planned one hour because the questions wouldn't stop.
Marcus walked through his content drafting process: the prompt construction, the brand voice context he always included, his editing workflow, and the specific things he always verified. Watching him build a prompt in real time — explaining why he was including each element — was more instructive than any training could have been.
David showed his data analysis workflow, including how he asked AI to identify patterns in campaign performance data, how he verified the AI's interpretations against the raw numbers, and how he formatted the output into the reporting templates their stakeholders expected.
Priya, to her credit, also shared her workflow — including the mistake she'd made with the financial services campaign and what she'd learned from it. This took courage and produced one of the most valuable moments of the session: a candid conversation about verification that gave the whole team permission to talk about AI mistakes openly.
Following the session, Alex built a shared prompt library in the team's document management system. She seeded it with the prompts Marcus, David, and Priya had demonstrated. Over the next two weeks, three more team members added prompts of their own.
Phase 4: Training for the Non-Starters (Days 43-56)
With the policy in place and the prompt library started, Alex organized two workshops for the team members who hadn't yet engaged with AI tools.
The workshops were explicitly practical: bring a real task you need to do this week, and we'll do it together with AI assistance.
Lisa brought email subject line optimization. Working through it with AI during the session, she discovered that the "too generic" problem she'd experienced was entirely a prompting problem — she hadn't been providing brand voice guidelines or audience context. With those elements added, the AI output was immediately more useful. She left with a working prompt template.
By the end of the two workshops, all five non-starters had at least tried AI on a real task and had a prompt they could build from.
Phase 5: Quality Standards in Practice (Days 57-65, Slight Overrun)
The quality standard that needed the most attention was the brand voice issue that Lisa had identified and several other team members shared: AI-generated content often sounded like AI-generated content — structured, hedged, and missing the specific personality the brand had built.
Alex developed a "brand voice checklist" with five questions to ask before submitting AI-assisted copy:
- Does this sound like something our brand would actually say, or does it sound like "a marketing team's output"?
- Have we used any specific brand phrases, references, or recurring vocabulary?
- Is the formality level right for this channel and audience?
- Are there hedge phrases ("it's important to note," "in today's landscape," "it's worth mentioning") that should be removed?
- Would a brand-savvy colleague recognize this as ours?
The checklist was simple and practical enough to become a genuine habit rather than a compliance exercise.
Results at Day 60
At the two-month mark, the picture was substantially different from where Alex had started:
Adoption: All ten team members were using AI tools at least occasionally. Six were using them regularly and getting measurable value. Two were using them selectively for specific use cases they'd found valuable. Two (including James) were using them only for preparatory and research tasks, consistent with their preferences and the policy.
Quality: The client-facing review requirement had caught two more potential issues before they reached clients. The brand voice checklist had noticeably improved the consistency of AI-assisted copy.
Equity: The skill gap had narrowed. The non-starters were now practitioners. The wide variance in output quality had tightened. Alex no longer had to guess which team member produced which piece of work based on whether it looked AI-generated.
Culture: The conversation about AI had become open and practical rather than awkward. Team members were adding prompts to the shared library on their own initiative. The monthly AI check-in Alex had built into team meetings was generating more discussion than she'd anticipated.
What Alex Would Do Differently
Looking back, Alex identified three things she'd change:
She'd do the assessment first, always. Going into the rollout without knowing the landscape delayed her ability to tailor her approach. The one-on-ones were the most valuable thing she did, and she'd do them in the first week, not after.
She'd involve James earlier. Her instinct had been to develop the policy with the enthusiasts and then bring it to the skeptic for comment. She should have brought the skeptic into the process from the beginning — his pushback was consistently valuable, and earlier involvement would have produced a better policy and stronger buy-in.
She'd set explicit success metrics before starting. The rollout was successful, but she had no pre-defined criteria for success. Defining what "good looks like" at day 60 before starting would have made the evaluation cleaner and more useful for making the case to leadership.
The Key Lesson
The biggest lesson from Alex's rollout: the technology was not the challenge. The tools were good and relatively easy to learn. The challenges were human: skill development, quality standards, trust, equity, and the difficult conversations about what AI changes about how the team works. Teams that treat AI adoption as a technology deployment problem — install the tools and step back — miss what actually determines whether adoption succeeds.
The teams that get it right treat it as a management problem. That's more work. It's also the work that actually produces results.
Alex's measurement framework — tracking the ROI of this rollout and demonstrating its value to leadership — is the subject of Chapter 39's first case study.