Case Study 25-1: Elena's Build-Buy-Partner Analysis — AI as Strategic Thinking Partner
The Decision: Should Elena build proprietary assessment tools for her consulting practice, license existing tools from an established provider, or partner with another consultancy that already has them?
Context
Elena has been running her organizational development consulting practice for five years. Her work centers on leadership development and employee engagement. She has built a strong reputation and a steady client base, but she's increasingly running into a competitive limitation: assessment tools.
Established consulting firms that compete for her target clients typically have proprietary assessment platforms — validated instruments for measuring leadership capability, engagement, and organizational health. These tools create data continuity (clients can track progress over time), produce branded deliverables that reinforce the consulting relationship, and command higher fees because they're differentiated.
Elena currently licenses generic assessment tools from a third-party vendor. They're adequate but not distinctive. She's been thinking about this for two years without reaching a decision. A conversation with a prospective client who went with a larger firm partly because of their proprietary assessment platform crystallizes the need to decide.
She has a three-day window before her next client engagement. She sets aside half a day to think through the decision properly — for the first time, with AI as a structured thinking partner.
Setting Up the Analysis
Elena's first move is to define the decision clearly. She's learned from experience that "build, buy, or partner" decisions often get analyzed at the wrong level of specificity — people decide in principle before they've defined what they're actually deciding on.
She runs a clarifying prompt:
I need to make a build/buy/partner decision about assessment tools for my leadership and organizational development consulting practice. Before I run the analysis, help me get specific about what I'm actually deciding.
What I currently know: I need an assessment platform that can measure leadership capability, engagement, and organizational health. Currently using a third-party generic tool. Considering: building custom tools, licensing an established platform (like Hogan, Gallup, or similar), or partnering with a larger consultancy.
What clarifying questions should I answer before running the analysis? What does "build" actually mean for a solo consultant? What does "partner" really entail?
AI response (selected):
"Before running the analysis, you need to define what 'build' means in your context:
Option A (Build): Develop proprietary assessment instruments from scratch — validated psychometric tools. This requires: expertise in psychometrics (do you have this or would you hire it?), statistical validation studies with large enough sample sizes, likely 2+ years and $100K+ investment, and ongoing research maintenance.
Option B (Customize): License validated instruments as the foundation and develop your own branded framework and interpretation methodology on top of them. Lower investment, leverages existing validity research, creates differentiation without requiring psychometric expertise.
The question you're probably actually asking is between B, full licensing, and partnering — not between building validated instruments from scratch and the others. Clarifying which 'build' you mean will significantly change the analysis."
This immediately reframes Elena's thinking. She had been mentally conflating "build" with creating custom instruments from scratch. Option B — building a branded framework on licensed instruments — is a real possibility she hadn't fully defined.
She decides the decision is: (1) Full licensing of established platform, (2) Build branded methodology on licensed foundation, or (3) Strategic partnership with established firm that has an assessment platform.
The Decision Matrix
With the options clarified, Elena runs the decision matrix prompt:
Excerpt of AI-generated evaluation criteria:
| Criterion | Weight | Full License | Build on License | Partnership |
|---|---|---|---|---|
| Differentiation from competitors | 5 | 3 | 7 | 8 |
| Time to implement | 4 | 8 | 6 | 4 |
| Investment required | 4 | 6 | 5 | 7 |
| Revenue impact (pricing power) | 5 | 4 | 7 | 8 |
| Control over methodology | 4 | 6 | 9 | 3 |
| Client data continuity | 4 | 5 | 7 | 7 |
| Business model risk | 3 | 7 | 5 | 3 |
The AI-generated matrix scores "Build on License" highest on weighted total, with "Partnership" close behind. "Full Licensing" is clearly weakest on differentiation and pricing power.
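The weighted totals behind that ranking can be reproduced with a short script. This is an illustrative sketch, not Elena's actual tool: the weights and scores come straight from the table above, and the 1–10 scoring scale is inferred from the values shown.

```python
# Reproduce the weighted totals from the decision matrix.
# Each row: (criterion, weight, Full License, Build on License, Partnership).
criteria = [
    ("Differentiation from competitors", 5, 3, 7, 8),
    ("Time to implement",                4, 8, 6, 4),
    ("Investment required",              4, 6, 5, 7),
    ("Revenue impact (pricing power)",   5, 4, 7, 8),
    ("Control over methodology",         4, 6, 9, 3),
    ("Client data continuity",           4, 5, 7, 7),
    ("Business model risk",              3, 7, 5, 3),
]

options = ["Full License", "Build on License", "Partnership"]

# Weighted total per option: sum of weight * score across all criteria.
totals = [sum(row[1] * row[2 + i] for row in criteria) for i in range(3)]

for name, total in sorted(zip(options, totals), key=lambda pair: -pair[1]):
    print(f"{name}: {total}")
# Build on License: 193
# Partnership: 173
# Full License: 156
```

The totals confirm the narrative: "Build on License" leads at 193, "Partnership" follows at 173, and "Full License" trails at 156, dragged down by its low differentiation and pricing-power scores on the two heaviest-weighted criteria.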
Elena reviews the scores critically. Two of them initially strike her as wrong:
Business model risk: AI scored Partnership at 3 (high risk). She initially wants to push back — until she reads the explanation: "Partnership creates dependency on a third party whose interests may not be aligned with yours. They may change partnership terms, be acquired, or become a direct competitor. For a solo practice, this concentration risk is significant." She accepts this score.
Control over methodology: AI scored Partnership at 3, reflecting loss of control over the assessment instruments. But Elena realizes this is actually one of her underlying concerns — she's not sure she wants to be associated with another firm's methodology if she disagrees with it. She accepts this score too.
The Devil's Advocate: Testing Her Lean
After reviewing the matrix, Elena is leaning toward "Build on License." She runs the devil's advocate prompt:
I'm leaning toward building a branded methodology on a licensed assessment foundation. Make the strongest possible case against this.
AI output (selected):
"The strongest case against Build on License:
1. You're underestimating the ongoing investment. Building a methodology isn't a one-time project — it requires continuous refinement, case study documentation, and thought leadership content to make it credible. Consultants with branded methodologies spend 20-30% of their time maintaining and marketing the methodology itself, not just using it with clients. For a solo practitioner, this is a significant ongoing tax on capacity.
2. You may be solving the wrong problem. The client who went to the larger firm didn't necessarily go because of the assessment tool — they may have gone because the larger firm offered a larger team, broader capability, or lower perceived risk. If that's the case, building a better methodology won't win you that type of client. Have you asked former prospects why they chose competitors?
3. There's a sequencing risk. Building a methodology before you've validated client demand for it means you'll invest significant time before you know if it creates the differentiation you're hoping for. The 'build' path is irreversible in terms of opportunity cost — time spent building can't be spent on other growth levers.
4. The strongest version of the Partnership argument: a good partner brings not just tools but referral flow. If you partner with a larger firm that refers work it can't handle in-house, the partnership isn't a constraint on your independence — it's a growth channel. Are you rejecting this option partly because of control concerns that may not be as practically limiting as you think?"
The second argument stops Elena cold. She hasn't actually asked lost prospects why they chose competitors. She's been assuming the tool gap was the issue. She adds "Lost prospect interviews" to her to-do list immediately.
The fourth argument also lands. She'd been framing partnership as dependency and control loss. She hadn't fully thought through the referral flow dimension. She adds this to her analysis.
The Assumption Surfacing
Elena runs the assumption surfacing prompt:
Key assumptions AI identifies:
- "Your clients value proprietary assessment tools enough to choose you over a competitor because of them." — Elena's confidence: Medium. She has never directly tested this with clients.
- "You have the capacity to build and maintain a methodology alongside client delivery." — Elena's confidence: Low. She's currently at near-full capacity. "Build" requires discretionary time she doesn't have.
- "Clients would pay a meaningful price premium for branded vs. licensed instruments." — Elena's confidence: Low. She has pricing conversations but has never tested the impact of assessment tool type.
- "A partnership structure could be designed that preserves your independence." — Elena's confidence: Medium. She's been assuming partnership means subordination, but hasn't explored what a structured referral partnership would actually look like.
The capacity assumption is the one that most concerns her. She's been considering this decision for two years. Why hasn't she acted? She does a quick calculation: "Build on License" requires roughly 200 hours of upfront methodology development and 40+ hours per year of ongoing maintenance. Looking at her calendar honestly — she doesn't have those hours without something else giving way.
The "Question You're Not Asking" Prompt
At this point, Elena has a solid analysis but still feels uncertain. She runs the Socratic prompt:
Based on my situation and this analysis, what question should I be asking that I'm not asking?
AI response:
"1. 'What type of clients do I actually want to serve in five years?' — Your decision about assessment tools is downstream of your positioning decision. If you want to move up-market to serve enterprise clients, proprietary tools are almost required. If you want to stay in your current market of mid-sized organizations, the tool gap may matter less than you think. You're making a tool decision without a clear strategy decision to anchor it.
2. 'Is this the right time to make this decision?' — You've been deferring this for two years. Is there a reason? Capacity is one answer. Another might be that your practice hasn't yet generated the proof-of-concept to know what type of differentiation will actually matter. If you're growing at a rate that's working, the urgency might be less than the lost-prospect trigger makes it feel.
3. 'What would I do if I had 50% more capacity?' — The capacity constraint may be the hidden variable dominating this decision. If you had more time, which option would you choose? If the answer is 'Build,' then the real decision is whether to create that capacity — by hiring, by raising prices, by offloading lower-value work."
The first question hits hardest. Elena has not clearly defined her five-year positioning. She's been treating this as a tool decision, but it's actually a strategy decision — where does she want to play, and who does she want to serve?
The Decision
Elena's final decision, after the four-hour AI-assisted analysis session: she will not decide on Build vs. License vs. Partner until she has completed two things:
- Lost prospect interviews. She'll contact the last four prospects who chose competitor firms and ask directly why. She needs to test her assumption that assessment tools were the deciding factor.
- Positioning clarity. She'll spend half a day on the question of where she wants the practice to be in five years — what type of client, what type of work, what type of firm.
The AI analysis has not produced a decision. But it has done something arguably more valuable: it has revealed that she was solving the wrong problem. The assessment tool question is downstream of a positioning question she hasn't answered.
She schedules the lost prospect calls for the following week. She blocks a morning for the strategy session the following Saturday.
Three Months Later
The lost prospect interviews surface a pattern Elena didn't expect: three of the four did mention assessment capability, but not as the primary reason. The primary reason was team size — they wanted a firm with multiple consultants because it reduced delivery risk. One prospect explicitly said: "You're great, but if you're sick or busy, what happens to our project?"
This validates the AI's devil's advocate point about solving the wrong problem.
Elena's positioning clarity session — using AI to help structure it — reveals that she wants to stay in her current market segment (mid-sized organizations) but grow to a point where she can take on larger, longer engagements. That requires capacity, which means hiring before tools.
Her ultimate decision: hire a junior consultant first (to address the capacity and team-size concerns), then revisit the assessment platform decision with more capacity and a clearer picture of what the second consultant needs. She files the Build/License/Partner analysis and schedules a review in 12 months.
The decision AI helped her reach was not any of the three options she started with. It was: "Not yet, and here's what I need to do first." That's a better outcome than any of the three options would have been.
Reflections on the AI-Assisted Analysis
Elena's post-analysis notes for her own reference:
"AI was most valuable at two moments:
First, in the setup — helping me get specific about what 'build' actually means. I would have analyzed the wrong decision without that clarification.
Second, in the devil's advocate — the argument about the wrong problem. I came in confident I understood why I was losing prospects. AI challenged that assumption in a way that sent me back to primary research. That reframe changed the entire decision.
What AI couldn't do: weigh my values about independence, understand my specific relationship with my existing clients, or account for the fact that I've been in this market long enough to know things about client behavior that don't show up in general analysis.
The four hours were the most productive strategic thinking time I've spent this year. Not because AI decided anything — but because it forced me to be more specific, more honest about my assumptions, and more rigorous about what I actually know vs. what I'm assuming."