Case Study 24-1: Elena's Engagement Launch — 90-Minute Project Plan
The Setup: Consulting engagement won Friday afternoon. Kickoff Monday morning. One weekend to go from "yes, we'll do this" to a credible project plan.
Context
Elena Vasquez is an independent organizational development consultant. At 4:30 PM on a Friday, she receives confirmation that she has won a six-month employee engagement project with Meridian Professional Services, a 400-person accounting and advisory firm.
The contract is for $180,000. The kickoff meeting is Monday at 9:00 AM. The client expects her to arrive with a draft project plan.
Elena has done similar work before — employee engagement initiatives are in her core practice area. But each engagement is different, and Meridian has specific characteristics that make this one non-standard: a partnership governance structure (not a traditional employee-employer hierarchy), three departments with significantly lower engagement scores than the rest of the firm, and a CEO who is skeptical of "soft" interventions and wants to see hard metrics.
It's Friday at 5:00 PM. Elena has until Sunday evening to produce a project plan she's confident presenting Monday morning. She opens her laptop.
Phase 1: Scoping (5:00–5:25 PM, Friday)
Elena's first move isn't to open a project plan template. It's to interrogate her own understanding of the scope.
She runs the hidden complexity prompt:
I'm planning an employee engagement improvement project for a 400-person professional services firm. It's a partnership structure (accounting and advisory firm). Goal: improve engagement scores by 15% within six months using manager training, communication improvements, and targeted department initiatives. Three departments have significantly lower scores than the rest of the firm. The CEO is skeptical of "soft" interventions and wants to see measurable outcomes.
The AI response identifies six sources of hidden complexity. Elena reads them carefully, annotating as she goes; four in particular shape her plan:
AI output item 1: "In partnership structures, partners are often not 'employees' in the traditional sense and may not be included in engagement surveys. If partners are excluded from your measurement but included in your interventions, the metrics and the program may not align."
Elena notes: "CRITICAL — need to ask Monday about partner participation. Does engagement survey include partners? If not, manager training for partner-level people won't show in the metrics."
AI output item 2: "The three low-scoring departments — do their leaders know they've been identified? Their cooperation or resistance will significantly shape your access and the interventions' effectiveness. Low scores might reflect a people manager problem that the manager themselves is defensive about."
Elena notes: "Ask Sarah [the HR Director] before Monday if possible. This changes how I approach the first conversation in those departments."
AI output item 3: "'Improve engagement scores by 15%' and 'improve engagement' are different objectives. If the firm runs more intensive pre-survey communication this time, scores can increase without any real change. Clarify: are we trying to improve the metric or improve the underlying reality?"
Elena notes: "Include this distinction in my project plan as a scope clarity item. The CEO skepticism about 'soft' interventions may actually be cover for skepticism about metric manipulation."
AI output item 4: "Manager training effectiveness typically shows up in engagement outcomes 3-6 months after behavior change, and behavior change requires 6-8 weeks of consistent practice. If your six months starts from program kickoff, and manager training starts in month one, you may not capture the full impact in the measurement window."
Elena notes: "Timeline implication — manager training needs to start FAST. Maybe month 1. Will need to flag this to client as a timeline risk."
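The timing math the AI is flagging can be sketched in a few lines. This is an illustrative check only; the month values are rough midpoints of the "6-8 weeks" and "3-6 months" ranges quoted above, not figures from Elena's plan:

```python
# Illustrative timing check for the measurement-window risk described above.
# Assumptions (not from the source plan): training lands at program start,
# behavior change takes ~2 months of practice, and engagement outcomes lag
# behavior change by 3-6 months.

TRAINING_START_MONTH = 1      # manager training begins in month 1
BEHAVIOR_CHANGE_MONTHS = 2    # ~6-8 weeks of consistent practice
OUTCOME_LAG_MONTHS = (3, 6)   # lag before survey scores reflect the change
MEASUREMENT_MONTH = 6         # final survey at the end of the engagement

earliest_impact = TRAINING_START_MONTH + BEHAVIOR_CHANGE_MONTHS + OUTCOME_LAG_MONTHS[0]
latest_impact = TRAINING_START_MONTH + BEHAVIOR_CHANGE_MONTHS + OUTCOME_LAG_MONTHS[1]

print(f"Impact visible between month {earliest_impact} and month {latest_impact}")
print(f"Any impact captured by month-{MEASUREMENT_MONTH} survey: {earliest_impact <= MEASUREMENT_MONTH}")
print(f"Full impact captured: {latest_impact <= MEASUREMENT_MONTH}")
```

Under these assumptions the earliest visible impact lands in month 6 and the latest in month 9 — the measurement window catches only the leading edge, which is exactly why the training has to start in month one.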
She sends a quick email to Sarah asking about partner participation and the three department heads' awareness. Then she moves to WBS generation.
Phase 2: WBS Generation (5:30–6:30 PM, Friday)
Elena runs the WBS generation prompt:
Project: Employee Engagement Initiative at Meridian Professional Services
Objective: Improve employee engagement scores by 15% over six months, with specific focus on three low-performing departments
Timeline: 6 months
Team: Myself (solo consultant) + 1-2 internal HR team members from client
Key constraints: CEO skepticism about soft interventions; partnership governance structure; must show measurable outcomes
Methodology: Hybrid (structured phases with regular check-ins and adaptive execution)
The AI generates a 38-task WBS across five phases:
1. Discovery and Baseline
2. Program Design
3. Implementation — Firm-wide
4. Implementation — Targeted Departments
5. Measurement and Reporting
Elena reviews it. She marks up changes:
- Adds "Partner engagement strategy" as a subtask under Phase 1 — the AI missed the partner governance dimension entirely
- Splits the "Manager training" deliverable into design, pilot, and full rollout — the AI had it as a single task, which underestimates complexity
- Adds "CEO reporting cadence" as a recurring task — she knows from experience that the CEO skepticism requires a dedicated communication track
- Removes "Focus groups (firm-wide)" — too many inputs for a solo consultant to process in six months
- Adds "Baseline metric documentation" as the first task — critical for demonstrating the 15% improvement at the end
She runs the "What am I missing?" prompt on her revised WBS.
New items it surfaces that she accepts:
- Change management communication plan (she'd intended to do this but hadn't captured it)
- Stakeholder buy-in tracking (ongoing, not one-time)
- "Quick wins" identification in first 30 days to demonstrate early momentum to the skeptical CEO
She declines the suggestion to add an employee advisory committee — too resource-intensive for a six-month engagement.
By 6:30 PM, she has a 45-task WBS she believes in.
Phase 3: Risk Analysis (7:00–7:45 PM, Friday)
After dinner, Elena runs the risk brainstorming prompt.
The AI generates a 24-item risk register. Elena's top concern, which the AI names explicitly:
"Stakeholder resistance in low-performing departments: If department managers in the three low-scoring areas are defensive about their scores, they may cooperate superficially while undermining interventions. This risk is high likelihood and high impact — access and buy-in from these three managers is the most critical success factor in the program."
She runs the pre-mortem prompt.
The failure story that stops her cold:
"The engagement initiative launched with strong early momentum. Manager training was well-attended. The communication campaign was effective. Six months in, the measurement survey was administered. Scores improved by 7%, not the 15% target. Investigation revealed that two of the three targeted departments had seen manager turnover mid-program — one manager left the firm, another was promoted out. Their replacements hadn't received the training and hadn't been briefed on the engagement goals. The program's impact was real but not fully captured because the metric was a lagging indicator that had been disrupted by personnel changes."
Elena stares at this for a moment. She has never explicitly built contingency plans for manager turnover into an engagement program. She adds a task: "Succession continuity plan — ensure manager transition process includes engagement briefing for incoming managers."
She also adds a risk item: "Manager turnover mid-program" to the risk register with a mitigation plan.
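A register this size is easier to triage with a simple likelihood × impact score. The sketch below assumes a 1-5 scale; the entries and ratings are illustrative, loosely drawn from the risks named in this section, not Elena's actual register:

```python
# Minimal risk-register triage: exposure score = likelihood x impact (1-5 scale).
# Entries are illustrative examples, not the case study's real register.
risks = [
    {"risk": "Department-head resistance in low-scoring areas", "likelihood": 4, "impact": 5},
    {"risk": "Manager turnover mid-program", "likelihood": 3, "impact": 4},
    {"risk": "Scores move without real engagement change", "likelihood": 3, "impact": 3},
]

# Compute an exposure score for each risk.
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest exposure first, so mitigation effort goes where it matters most.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["risk"]}')
```

The point of the score is only ranking: it puts the department-head resistance risk — the one the AI flagged as high likelihood and high impact — at the top of the mitigation list.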
Phase 4: Timeline and Communication Plan (9:00–10:30 PM, Friday)
Elena doesn't need AI to help her estimate timeline — she knows this work. But she uses AI to structure the communication plan.
Communication plan prompt:
I'm a solo consultant running a 6-month engagement initiative. My stakeholder groups:
- CEO: Skeptical; wants hard metrics; gets impatient with process
- HR Director Sarah: My internal champion; needs to be well-briefed before things happen
- Three department heads: Mixed buy-in; politically sensitive
- Firm-wide employees: The "population" of the initiative; need to feel engaged, not studied
Create a communication cadence and template for each group.
The AI generates distinct communication templates for each group. For the CEO, it suggests monthly 15-minute metric briefings with a one-page dashboard format. For Sarah, weekly check-ins. For department heads, individual briefings before any firm-wide communications that affect their teams.
Elena modifies the CEO dashboard template significantly — she's learned from experience that CEOs with her client's profile respond better to narrative context around metrics than pure numbers. She keeps the structure but adds a "what this means for the business" section.
Saturday and Sunday: Refinement
Saturday morning, she gets Sarah's response: partners are excluded from the survey, but two of the three department heads know their departments have been flagged. The third does not.
This changes her Monday approach significantly. She restructures the first two weeks of the project plan to include individual briefings with all three department heads before any firm-wide communication, and she adds a task: "Department head briefing — prepare individually tailored talking points."
Sunday afternoon, she runs one final AI pass:
Prompt: "Review this project plan for a 6-month employee engagement consulting engagement. What is the single most important thing I haven't adequately addressed?"
AI response: "The plan is thorough, but I notice there is no explicit mechanism for capturing and communicating quick wins in the first 30-60 days. Given that the CEO is skeptical of the initiative and the full measurement window is six months, having no interim evidence of progress is a significant risk to continued sponsorship. Consider adding a 30-day and 60-day 'progress marker' — specific, observable indicators that the program is working — with a brief CEO communication at each point."
Elena adds a "30-day progress report to CEO" milestone to the plan. It's a 30-minute meeting with a one-page summary of early observations. She knows from experience that this kind of early touch-point is often what keeps skeptical sponsors engaged through the slow early phases of a program.
Monday Morning
Elena arrives at Meridian's offices at 8:50 AM. Her project plan is a 14-page document with:
- Project scope statement with explicit clarifications
- 47-task WBS across five phases
- 25-item risk register with top 5 fully mitigated
- Six-month milestone timeline
- Stakeholder communication plan with templates
- Success metrics definition (including the distinction between improving the score and improving engagement)
The kickoff meeting runs 90 minutes. The CEO asks hard questions about measurement methodology — and Elena is ready. She has already addressed the score-vs-reality distinction in her scope statement. The CEO's expression shifts when she introduces it: "Most consultants we've talked to just commit to the 15%. You're the first one who asked what we're actually trying to achieve."
She leaves the meeting with an approved project plan and a signed authorization to begin discovery interviews the following week.
What Made the Difference
Elena reflects afterward on what the 90-minute AI planning session contributed:
Scoping questions she wouldn't have asked: The AI's identification of the partner participation issue was one Elena should have thought of herself but didn't — because she was focused on the manager training deliverables, not the measurement framework. That question, asked before the kickoff, changed the scope clarification she brought into the room.
The pre-mortem failure story: The manager turnover scenario was not on her mental risk list. It was obvious in retrospect, but she hadn't thought of it. Adding the succession continuity task was worth more than the entire hour she spent on risk brainstorming.
Communication templates: Having a CEO dashboard template to start from saved her 45 minutes of formatting. The structure was solid; she modified the substance.
What AI didn't do: It didn't understand Meridian's governance structure, the significance of the CEO skepticism, or the political dynamics around the three departments. Every important judgment call — how to handle the department head who didn't know his scores were low, how to manage the CEO's expectations, what a realistic 15% target meant — was hers.
The 90 minutes was an accelerant, not a replacement. The knowledge and judgment she brought to Monday's meeting were entirely her own; the plan simply captured them more completely, and more quickly, than she could have managed alone.