Case Study 2: Elena's Client Research System — Claude Projects in Practice
Background
Elena Rodriguez runs an independent management consulting practice. At any given time she manages two to four active client engagements, each spanning three to six months. Each engagement involves a distinct body of research: client interviews, industry reports, competitive data, internal documents the client provides, and Elena's own accumulated synthesis.
Before building her Claude Projects workflow, Elena managed each engagement's knowledge through a combination of cloud-storage folders, Word documents, and her own memory. The practical result: she spent a significant amount of time at the start of each working session re-establishing context — re-reading notes from the last session, tracking down where she had left a thread of analysis, and reconstructing the state of her synthesis from documents written days or weeks apart.
The opportunity was clear: if the context could be established once and maintained persistently, each working session could start from where the last one ended rather than from re-reading accumulated files.
Elena ran a pilot of the Claude Projects approach on a six-month organizational strategy engagement with a regional professional services firm. This case study documents what she did, what worked, and what she changed as the project evolved.
Setting Up the Initial Project
Elena created the project on the day the engagement contract was signed. Project name: "[Client Name] Strategy Engagement — Q2-Q3 [Year]".
Her initial project instructions were deliberately minimal. She has learned from experience that detailed upfront instructions become outdated quickly in complex engagements. The initial instructions covered four things:
Client context — who the client is, what the engagement is trying to accomplish, and the key stakeholders' names and roles. This is the context that never changes and is always relevant.
Elena's role — that she is the primary strategist and deliverable author, that she uses Claude to organize research, test synthesis, and draft deliverable components.
Output standards — write for a CEO audience unless specified otherwise, use formal but accessible language, always distinguish interpretation from data.
Current phase — she updates this section as the engagement progresses ("We are in data collection phase" or "We are drafting the strategic options section").
The initial instructions were 350 words — long enough to establish context, short enough to remain readable.
Document Management Through the Engagement
The project's document library evolved through six phases:
Phase 1 (Weeks 1-2): Foundation documents
- Engagement scope and objectives document
- Client background — an 800-word summary of the firm's history, market position, and the strategic challenges prompting the engagement
- Stakeholder map — names, roles, influence level, and known positions on key questions
- Interview guide — the questions Elena planned to ask in stakeholder interviews
Elena's practice: upload a document, then immediately test it by asking Claude a question whose answer is in that document. If Claude retrieves and uses the information accurately, the document is working.
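This retrieval test can be partly systematized. A hypothetical sketch (not part of Elena's actual toolkit): derive one smoke-test question per section heading of a document before uploading it, so every section gets checked rather than just the one that comes to mind.

```python
import re

def retrieval_test_questions(markdown_text: str) -> list[str]:
    """Generate one retrieval smoke-test question per markdown
    heading, to verify each section of an uploaded document can
    be found and used accurately."""
    questions = []
    for line in markdown_text.splitlines():
        match = re.match(r"^#{1,6}\s+(.*)", line)
        if match:
            topic = match.group(1).strip()
            questions.append(f"What does the document say under '{topic}'?")
    return questions

sample = "# Interview: COO S-03\n## Key Themes Raised\n## Notable Quotes\n"
for question in retrieval_test_questions(sample):
    print(question)
```

Each generated question is then asked in the project; a wrong or empty answer flags a section that is not being retrieved well.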
Phase 2 (Weeks 3-6): Interview notes
Elena conducted 35 stakeholder interviews over four weeks. After each interview, she uploaded her notes (anonymized by default — stakeholder names replaced with role titles and an ID code) to the project.
She developed a standard format for interview notes that significantly improved retrieval quality:
# Interview: [Role Title] [ID]
Date: [Date]
Duration: [Minutes]
## Key Themes Raised
- [Theme 1]: [2-3 sentence description]
- [Theme 2]: [2-3 sentence description]
- [Theme 3]: [2-3 sentence description]
## Notable Quotes
"[Direct quote]" — [Context]
"[Direct quote]" — [Context]
## Their Assessment of [Key Engagement Question 1]
[3-4 sentences on their view]
## Their Assessment of [Key Engagement Question 2]
[3-4 sentences on their view]
## Elena's Observations
[What struck Elena as significant — distinguishing her interpretation from what was stated]
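A format this regular lends itself to scripting. As a hedged illustration (the helper and its parameter names are hypothetical, not something Elena describes using), a note skeleton could be generated so every interview file starts from an identical structure:

```python
from datetime import date

# Skeleton mirroring Elena's interview note format.
TEMPLATE = """# Interview: {role} {uid}
Date: {when}
Duration: {minutes} minutes

## Key Themes Raised

## Notable Quotes

## Their Assessment of {question_1}

## Their Assessment of {question_2}

## Elena's Observations
"""

def new_interview_note(role, uid, minutes, question_1, question_2, when=None):
    """Return a blank interview note pre-filled with metadata,
    so the section structure is identical across all interviews."""
    when = when or date.today().isoformat()
    return TEMPLATE.format(role=role, uid=uid, when=when, minutes=minutes,
                           question_1=question_1, question_2=question_2)

note = new_interview_note("Chief Operating Officer", "S-03", 45,
                          "partnership model", "growth strategy")
print(note)
```

The point is less the automation than the guarantee: identical section headings across all 35 notes are what make section-level cross-interview retrieval reliable.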
The consistent format made cross-interview synthesis dramatically more effective. When Elena asked "What do different stakeholders say about the partnership model?" Claude could retrieve the relevant section from each interview note and synthesize across them accurately.
Phase 3 (Weeks 5-8): Research documents
Elena uploaded industry reports, benchmarking studies, and competitor analyses as the research phase proceeded. Each document was uploaded with a one-paragraph "context note" at the top explaining what the document is and why it is in the project.
Phase 4 (Weeks 7-10): Synthesis documents
As Elena began synthesizing research into preliminary conclusions, she created and uploaded synthesis documents to the project. These evolved continuously — she would update and re-upload them as new information came in.
The synthesis documents served a dual purpose: they were working files and they were training context for Claude. When Claude could see Elena's own preliminary synthesis, its assistance with further synthesis was better calibrated to her analytical framework.
Phase 5 (Weeks 9-14): Draft deliverables
Section drafts were added to the project as they were written. Having the current state of the deliverable in the project allowed Elena to ask questions like "Does the strategic options section contradict anything in the market analysis section?" — a check she would previously have had to do entirely manually.
Phase 6 (Weeks 13-16): Final deliverables and retrospective
Complete deliverables and a project retrospective document were added at the end.
How Elena Used the Project Day to Day
Elena's typical working session with the project:
Starting the session (5 minutes): Ask for a brief status summary — "What is the current state of the strategic options analysis? What open questions remain?" This replaced re-reading her last session's notes.
Research synthesis: After adding new interview notes, ask for cross-interview pattern analysis. "Looking at all the interview notes, what are the most consistent concerns about [specific topic]? Which stakeholders diverge most from the consensus?"
Hypothesis testing: "I'm considering the hypothesis that [hypothesis]. What evidence in our research supports this? What contradicts it?" This rapidly surfaced evidence for and against analytical positions Elena was developing.
Draft assistance: "Based on the market analysis documents and the interview themes about competitive position, draft the competitive landscape section of the assessment. Use the format from the strategic options section we've already written."
Consistency checking: "Review the executive summary and the detailed findings section. Are there any contradictions or inconsistencies between them?"
Preparation: "I have a check-in call with the CEO tomorrow. Based on our research, what are the three findings most likely to surprise or concern her?"
What Worked Particularly Well
Cross-document synthesis: The project's ability to draw on all uploaded documents simultaneously transformed how Elena worked with interview data. Previously, identifying patterns across 35 interviews required hours of manual review. With the project, she could ask specific cross-interview questions and receive synthesis in minutes — with the important caveat that she always reviewed the synthesis carefully, as Claude occasionally created patterns from superficially similar but substantively different statements.
Version memory: Because the project retained conversation history, Elena could ask questions like "Earlier you helped me draft the stakeholder analysis section — what was the key finding we emphasized?" This longitudinal memory across a 16-week engagement was not available in standard session-based AI use.
Calibrated drafting: After the project had accumulated several weeks of Elena's own synthesis and deliverable drafts, Claude's assistance with new sections was calibrated to her analytical framework and communication style in a way that early-project assistance was not. The project "learned" her style from her uploaded work.
What Required Adjustment
Context precision in project instructions: The initial project instructions did not specify which engagement questions were most analytically important. Early synthesis requests sometimes emphasized less important themes. Elena added a section to the instructions titled "Key Strategic Questions" listing the five core questions the engagement was trying to answer. Synthesis quality improved substantially.
Distinguishing Elena's analysis from source material: Claude's synthesis occasionally blurred the line between what sources said and what Elena had analyzed the sources to mean. Elena added an explicit instruction: "Always distinguish between [Source states: ...] and [Elena's analysis: ...] when synthesizing." She also adjusted her interview note format to more clearly separate direct quotes from her interpretive notes.
Document updating discipline: Some early documents became outdated as the engagement evolved. Elena developed a practice of dating every document and adding a "superseded by" note when uploading an updated version. This prevented Claude from drawing on outdated analysis.
Outcome and Reflection
The engagement produced a strategic assessment and implementation roadmap that the client rated as the highest-quality deliverable they had received from an external advisor. The CEO specifically noted the depth and specificity of the stakeholder analysis section.
Elena's own assessment: the project reduced her research synthesis time by approximately 40% compared to her previous approach. More significantly, it changed the type of work she was doing — less time organizing and re-establishing context, more time on the strategic thinking and client judgment that justified her engagement fees.
She now creates a Claude Project for every client engagement immediately upon contract signature. Her setup template — initial instruction structure, document naming conventions, interview note format — has been refined across five engagements and is stable enough to apply consistently.
Her most important reflection: "The project is only as good as the documents you put in it. I spent time early in the pilot thinking about how to prompt Claude to get better synthesis. The real lever was the quality and structure of my source documents. Better inputs make better synthesis — that's true with AI and it was true before AI."