Case Study 3.1: Alex's Mental Model Upgrade
From Oracle to Intern
Background
Alex had been the marketing manager at a mid-sized B2B software company for four years before AI tools became a significant part of her professional context. She was good at her job: organized, creative within the constraints of a defined brand, and skilled at translating technical product features into messaging that resonated with buyers who were not themselves technical. Her team was small — two direct reports and a network of freelance contractors for specialized work — and she was perpetually under-staffed relative to the content demands the company's growth trajectory placed on marketing.
When the company's executive team decided, in the spring of a recent year, that AI tools would be part of everyone's workflow, Alex was given latitude to integrate AI into her team's processes as she saw fit. She spent a week evaluating options, settled on a major general-purpose AI assistant, and got to work.
The early results were encouraging. She used the AI to draft social media posts, generate first passes at email copy, and summarize research reports she did not have time to read fully. Output quality was adequate — sometimes better, sometimes not — and the time savings were real. She estimated she was getting about twenty percent more content output from the same team hours.
The problem started about six weeks in.
The Generic Problem
The company was preparing for a major product launch. The new product represented a significant departure from their existing offering — more technical, aimed at a different buyer persona (engineering leads rather than procurement teams), and requiring a content voice that was more direct, less polished, and more conversational than their established brand voice. The marketing brief was explicit: "don't sound like our usual B2B content — this audience will tune out anything that feels like a brochure."
Alex turned to her AI tool for the campaign content. She submitted the brief, included the positioning document, specified the audience, and asked for campaign copy across three channels: email, social media, and website landing page.
The output was technically competent. It addressed the audience, referenced the positioning correctly, and avoided obvious errors. It was also, as everyone who reviewed it immediately recognized, generic B2B technology marketing content. It sounded like a brochure — specifically, a brochure written for a slightly different audience than the actual brief.
Her marketing director's feedback: "It could be anybody's launch content. There's nothing distinctively us here, and even if there was, it doesn't have the direct voice this audience expects. This isn't our brand, and it's not the right register for the personas."
Alex tried again. She added more specific instructions about tone — "more direct, less formal, conversational, not sales-y." The second output was slightly different in word choice but fundamentally the same in feel. The third output, with yet more detailed instructions, was nearly identical to the second.
She was stuck.
The Implicit Model
Alex had not consciously articulated a mental model for her AI tool, but her behavior revealed one: she treated it as an oracle with specialized capabilities. In her implicit framework, the AI had access to something like professional marketing expertise that it would deploy when given a good brief. Her job was to write a good brief — the right audience, the right objective, the right constraints — and the AI would draw on its capabilities to produce appropriate output.
When the output was wrong, her instinct was to correct the brief — to specify more precisely, to add more constraints, to give better instructions. This is the behavior of someone who believes the problem is a specification problem: the machine has the capability, and the failure is in how it was asked to deploy it.
She spent three sessions adding more and more detail to her instructions. The feedback loop was frustrating: more words in the prompt, marginally different output, the same fundamental problem.
The Turning Point
The reframe came from an unexpected source. Alex's company had a small design team, and she had developed a good working relationship with the lead designer. During a conversation about the campaign, she mentioned her struggle with the AI tool and the generic output problem.
The designer asked a question that changed the conversation: "When you brief a freelance copywriter you've never worked with, what do you give them that you wouldn't give a contractor who's been with us for two years?"
Alex thought about it. "A lot more," she said. "I'd explain what makes us different, what our past content has gotten wrong, what words and phrases we avoid, what tone our best content has. With someone who knows us, I can shorthand a lot of that."
"Right," the designer said. "And the AI has never worked with you."
That was the reframe. Not the AI as an oracle drawing on professional expertise, but the AI as a brilliant person who had never encountered this brand, this voice, this audience, or this set of implicit constraints. Every brief Alex had written had been designed for a moderately sophisticated marketing professional who already had background knowledge of the brand. The AI had none of that background. She had been giving it the shorthand briefs meant for a seasoned colleague and expecting senior-level output from a first-day collaborator.
What Alex Changed
The model shift — from oracle to brilliant intern — produced immediate and specific changes to her workflow.
She built a brand context document. Alex spent two hours writing a document she called the "Brand Voice Brief" — not for the AI specifically, but for any new collaborator who needed to understand the brand before writing for it. It covered:
- What the brand sounds like and what it does not sound like, with specific examples of each
- Vocabulary preferences: specific phrases the brand used, specific phrases it avoided, and why
- Examples of past content that had hit the right register, annotated with what made them work
- The most common mistakes in first drafts from new contributors, and what those mistakes revealed about incorrect assumptions
- The key differences between writing for the engineering audience versus the procurement audience
This document was 800 words. Alex started appending it to every AI session brief.
She stopped expecting tone from instructions and started showing it. Instead of instructing the AI to "be direct and conversational," she pulled three examples of the voice she wanted — from sources she respected in the engineering content space — and included them in the prompt. "Write in the tone and register of these examples: [examples]. Apply that voice to our positioning."
She changed what she evaluated. Rather than evaluating AI output as a complete deliverable (does this work?), she began evaluating it as raw material (what in this is useful?). She started extracting the strongest phrases, the best structural moves, the clearest articulations of the value proposition, and assembling them with her own judgment rather than looking for a complete draft she could accept.
She generated options. Instead of asking for one email, she asked for five variations with explicit instruction to vary the opening hook and the call to action. The variation gave her more material to evaluate and more chances of hitting the register she wanted.
The Results
The difference in output quality was substantial. The fifth campaign draft — produced with the new approach — passed review with minimal revision. Her marketing director's note: "This feels right. The email hook actually sounds like we know who we're talking to."
Alex examined what had changed. The AI tool was the same. The basic task was the same. What had changed was the quality and relevance of the context she provided — and the expectation she brought to the interaction. She had stopped expecting the AI to supply brand expertise from its general training and started treating herself as the source of brand expertise, with the AI as the capable collaborator who needed to be briefed.
The internal model shift also changed how she responded to inadequate output. Instead of cycling through instruction variations (the robot-model response) or concluding the tool was not capable (the replacement-model response), she asked: "What context did I fail to provide?" This question almost always had a productive answer.
Broader Applications
The insight from Alex's experience generalizes beyond brand voice to any AI task that requires local, organizational, or domain-specific knowledge. Generic briefs produce generic output. The AI's training data represents broad patterns — the statistical average of professional-quality content in many domains. If what you need is specifically tailored to your organization, your client, your voice, or your constraints, that specificity has to come from you.
The brilliant intern model reframes the investment of effort. The effort that matters is not writing more precise abstract instructions. The effort that matters is articulating the specific, local knowledge the intern needs — the knowledge that any experienced colleague would already have, and that a new collaborator needs to be given explicitly.
Alex has continued to develop her brand context documents. She now maintains one per client or product line, updates them quarterly, and uses them not just for AI sessions but as onboarding material for any new contractor. The practice of making implicit brand knowledge explicit — forced by the AI intern model — has had benefits well beyond AI interactions.
Discussion Questions
- Alex's initial model was "oracle." What behavioral evidence in her story reveals this model most clearly, and at what point did her behavior diverge from what the oracle model would predict?
- The design colleague's question — "what do you give a new freelancer that you wouldn't give a two-year veteran?" — was the moment of reframe. Why is this question so effective at revealing the oracle model's flaw?
- Alex's new approach required her to spend two hours writing the Brand Voice Brief. How would you assess this time investment? When does the investment in context preparation pay off, and when is it not worth the effort?
- Can the brilliant intern model become a limitation in any circumstances? Are there AI tasks where treating the model as an intern leads to the wrong practices?