Case Study 2: Elena's Research-to-Report Chain — From Sources to Client Deliverable
Background
Elena Rodriguez runs a boutique management consulting practice focused on organizational strategy for mid-sized professional services firms. Her work is research-intensive: most engagements involve synthesizing information from interviews, industry reports, financial data, and benchmarking studies into actionable strategic recommendations.
The deliverables Elena produces are her primary value vehicle — a client pays for a consulting engagement and receives a set of documents (strategy memos, market analyses, implementation roadmaps) that serve as the foundation for the client's decision-making. The quality and credibility of these documents are not just professional standards; they are existential for Elena's practice. A document that misstates a finding or misattributes a conclusion is not a minor error — it is a credibility crisis.
Elena's challenge before building her research chain was not output quality per se — her documents were consistently well-regarded — but production efficiency. A typical 30-40 page strategic assessment took her 60-80 hours of work, most of which was organizing and synthesizing research rather than developing the strategic insights that justified her engagement fees. She was spending her highest-cost hours on work that, while critical to accuracy, was largely organizational rather than analytical.
Her goal: cut the organizational and synthesis phase from 30-40 hours per engagement to under 15 hours, while maintaining the research rigor her clients expected.
The Nature of Elena's Research Problem
Elena's research for a typical engagement arrives in heterogeneous forms:
- Interview transcripts (10-20 interviews, 5,000-15,000 words total)
- Industry reports and white papers (5-15 documents, varying length)
- Financial and operational data provided by the client
- Competitive intelligence gathered through desk research
- Benchmark data from industry associations or proprietary databases
The synthesis challenge: find the patterns that run across all these sources, weight them appropriately (a single interview's view vs. an industry survey's finding), connect them to the client's specific strategic context, and build a coherent narrative that leads to actionable recommendations.
This is genuinely complex analytical work. The specific value Elena adds is the quality of her strategic synthesis — the insights that emerge from understanding not just what each source says but what the combination means.
The problem: before the strategic synthesis can happen, there is a substantial organizational and summarization phase that takes 30+ hours but does not require Elena's strategic judgment. This phase is where the chain could help most.
Chain Design
Elena designed a five-step chain that handles the organizational and first-level synthesis work, leaving the strategic interpretation for her direct involvement.
The chain's job: Take a collection of heterogeneous source documents and produce a well-organized, accurately summarized research base that Elena can work from to develop her strategic synthesis.
Elena's job (not in the chain): The interpretive and strategic work — deciding what the patterns mean, how they connect to the client's situation, and what the recommendations should be.
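The overall shape of the chain — templated prompts, one model call per step, and a mandatory human review gate before anything flows forward — can be sketched in code. This is a minimal illustration, not Elena's actual tooling: the names `call_model`, `Step`, and `run_step` are hypothetical, and `call_model` is a stub standing in for whatever LLM API is used in practice.

```python
from dataclasses import dataclass
from typing import Callable


def call_model(prompt: str) -> str:
    # Stub: in a real chain this would wrap an LLM API call.
    return f"[model output for prompt of {len(prompt)} chars]"


@dataclass
class Step:
    name: str
    template: str                 # prompt with {placeholders}
    review: Callable[[str], str]  # human review gate: edit or approve


def run_step(step: Step, **inputs: str) -> str:
    """Fill the template, call the model, pass the draft through review."""
    prompt = step.template.format(**inputs)
    draft = call_model(prompt)
    return step.review(draft)     # nothing proceeds unreviewed


# Example: Step 1 with an approve-as-is review gate.
inventory_step = Step(
    name="source_inventory",
    template="Produce a structured source inventory.\nSOURCE LIST:\n{source_list}",
    review=lambda text: text,     # stand-in for Elena's 15-20 minute review
)
approved_inventory = run_step(inventory_step, source_list="12 interview transcripts, 8 reports")
```

The design choice worth noting is that the review gate is part of the step, not an afterthought: the chain's output at each stage is whatever survives human review, and that reviewed artifact is what the next step consumes.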
Step 1: Source Inventory and Classification
Purpose: Get a structured overview of what research has been gathered before any substantive analysis begins.
Input: A list of all source documents with brief descriptions
Prompt:
You are a research analyst working on a strategic assessment engagement. I am providing you with a list of research sources gathered for this engagement. Produce a structured source inventory.
For each source, classify it as:
- Type: [interview transcript / industry report / financial data / competitive intelligence / benchmark data / other]
- Relevance scope: [which strategic questions does this source address?]
- Credibility level: [high / medium / low, with one sentence explaining your rating]
- Key data or perspectives: [2-3 bullet points on what this source contributes]
After the inventory, provide:
- Coverage assessment: which strategic questions appear well-covered by the research?
- Gap assessment: which strategic questions appear under-covered or not addressed?
SOURCE LIST:
{source_list_with_descriptions}
Output: Structured inventory table + coverage and gap assessment
Human review: Elena reviews in 15-20 minutes. She is checking whether the AI has correctly classified sources and identified gaps she needs to fill before the engagement concludes. This early gap identification is itself a chain benefit — under the old process, Elena often discovered research gaps late in the writing phase.
Step 2: Per-Source Summarization
Purpose: Extract the most relevant content from each source document into a standardized summary format.
Input: Individual source documents, processed one at a time; the outputs are assembled into a collection.
Prompt (applied to each document individually):
You are summarizing a research source for a strategic assessment engagement. The engagement is focused on: {engagement_focus}
The strategic questions we are trying to answer are:
{strategic_questions}
For this source document, produce:
1. DOCUMENT OVERVIEW: Type, provenance, date, and 2-sentence description
2. KEY FINDINGS: The 5-7 most relevant findings, stated as specific claims (not vague generalizations). For each finding, quote or closely paraphrase the specific supporting passage.
3. QUOTES FOR REPORT: 2-3 direct quotations that are strong enough to use verbatim in the final report (if the source permits quotation).
4. LIMITATIONS: Any caveats about this source's findings (sample size, date, geographic scope, potential bias, etc.)
5. CONNECTION TO STRATEGIC QUESTIONS: For each strategic question listed above, rate this document's relevance (high/medium/low/not relevant) and note the most relevant finding.
DOCUMENT:
{document_text}
Output: Standardized summary for each source document
Human review: Elena spot-checks a sample of summaries (not every one) for accuracy. Her primary check: do the "Key Findings" accurately represent what the source says, or has the AI overstated, understated, or subtly misrepresented them? She has found this step reliable on well-structured documents (industry reports, academic papers); it occasionally requires correction on interview transcripts, where nuance matters.
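Step 2's one-document-at-a-time pattern is the simplest part of the chain to mechanize: a loop that fills the same template per document and collects the outputs for Step 3. A minimal sketch, with `call_model` again stubbed and the template abbreviated to a few of the fields shown above:

```python
def call_model(prompt: str) -> str:
    # Stub for the real LLM API call.
    return "1. DOCUMENT OVERVIEW: ...\n2. KEY FINDINGS: ..."


# Abbreviated version of the Step 2 prompt above.
SUMMARY_TEMPLATE = """You are summarizing a research source for a strategic assessment engagement. The engagement is focused on: {engagement_focus}

The strategic questions we are trying to answer are:
{strategic_questions}

DOCUMENT:
{document_text}"""


def summarize_sources(documents: dict, engagement_focus: str, strategic_questions: str) -> dict:
    """One model call per document; outputs keyed by source id for Step 3."""
    summaries = {}
    for doc_id, text in documents.items():
        prompt = SUMMARY_TEMPLATE.format(
            engagement_focus=engagement_focus,
            strategic_questions=strategic_questions,
            document_text=text,
        )
        summaries[doc_id] = call_model(prompt)
    return summaries


summaries = summarize_sources(
    {"interview_03": "Transcript text...", "industry_report_A": "Report text..."},
    engagement_focus="Growth strategy for a mid-sized professional services firm",
    strategic_questions="- Where is client demand shifting?\n- How do competitors price?",
)
```

Keeping the engagement focus and strategic questions in every per-document prompt is what makes the summaries comparable: each source is summarized against the same questions, which is what Step 3 depends on.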
Step 3: Cross-Source Synthesis
Purpose: Identify patterns, agreements, tensions, and surprises across the full body of research.
Input: All per-source summaries assembled from Step 2 + strategic questions
Prompt:
You have the following summaries of research sources gathered for a strategic assessment. Your task is to identify patterns across all sources.
STRATEGIC QUESTIONS:
{strategic_questions}
SOURCE SUMMARIES:
{all_summaries}
Produce a cross-source synthesis document with the following sections:
1. POINTS OF CONVERGENCE: Where do multiple independent sources agree? List 5-7 findings that appear consistently across sources. For each, note which sources support it.
2. POINTS OF TENSION: Where do sources appear to contradict or significantly qualify each other? List 2-4 tensions and briefly explain the nature of the disagreement.
3. STRONG EVIDENCE FINDINGS: 3-5 findings supported by the most robust and credible evidence. These are candidates for high-confidence recommendations.
4. WEAK EVIDENCE FINDINGS: 3-5 findings that appear interesting but are supported by limited, dated, or low-credibility evidence. These require caveating or additional research.
5. SURPRISES: 2-3 findings that were unexpected based on the engagement's initial hypotheses or that challenge conventional wisdom in the client's industry.
6. STRATEGIC QUESTION COVERAGE: For each strategic question, summarize the state of the evidence in 3-4 sentences.
Output: Cross-source synthesis document
Human review: Full review by Elena — 45-60 minutes. This is the most critical review gate in the chain. Elena is checking: Are these convergences real or are they an artifact of the AI treating adjacent-but-distinct findings as the same? Are the tensions correctly identified? Are the high-confidence findings actually supported by what the sources say? She frequently makes additions and revisions here, drawing on her own analytical judgment. The chain's output is the starting point, not the finished synthesis.
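Step 3 differs from Step 2 in that it is a single call over all the summaries at once, so the main mechanical concern is assembling them into one prompt without silently exceeding the model's context window. A sketch under stated assumptions — the character budget is illustrative, not a real model limit, and the function name is hypothetical:

```python
MAX_PROMPT_CHARS = 400_000  # illustrative budget; real limits depend on the model


def build_synthesis_prompt(strategic_questions: str, summaries: dict) -> str:
    """Assemble all Step 2 summaries into the single Step 3 prompt."""
    blocks = [f"--- SOURCE: {sid} ---\n{text}" for sid, text in sorted(summaries.items())]
    prompt = (
        "You have the following summaries of research sources gathered for a "
        "strategic assessment. Your task is to identify patterns across all sources.\n\n"
        f"STRATEGIC QUESTIONS:\n{strategic_questions}\n\n"
        "SOURCE SUMMARIES:\n" + "\n\n".join(blocks)
    )
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("Summaries exceed the prompt budget; batch or trim them.")
    return prompt


prompt = build_synthesis_prompt(
    "- Where is client demand shifting?",
    {"interview_03": "Key findings: ...", "report_A": "Key findings: ..."},
)
```

Labeling each summary with its source id in the assembled prompt matters downstream: it is what lets the model (and Elena, at review) attribute each claimed convergence or tension to specific sources.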
Step 4: Deliverable Structure
Purpose: Design the architecture of the final document, calibrated to the specific audience and communication context.
Input: Approved cross-source synthesis + strategic questions + audience profile + deliverable format requirements
Prompt:
You are a management consultant structuring a strategic assessment deliverable. Based on the following research synthesis and requirements, design the document structure.
RESEARCH SYNTHESIS:
{approved_synthesis}
AUDIENCE: {audience_description}
DELIVERABLE FORMAT: {format_requirements}
ENGAGEMENT CONTEXT: {engagement_context}
Design a complete document structure including:
1. Document title and executive summary approach (what 3 key messages should the executive summary convey?)
2. Section structure: list each major section with H2 heading, a one-paragraph description of what it covers, and the evidence from the synthesis it draws on
3. Recommendation architecture: how many recommendations? At what level of specificity? In what order of priority?
4. Supporting appendices: what supporting material belongs in appendices rather than the main document?
5. Visual opportunities: where in the document would a chart, table, or diagram strengthen the argument? Describe what each visual should show.
Output: Detailed document structure
Human review: Elena reviews in 20-30 minutes. Structure is where her strategic judgment shapes the document most. She typically restructures 20-30% of what the AI suggests — reordering sections, splitting or combining sections, and adjusting the recommendation architecture based on her knowledge of the specific client's decision-making context.
Step 5: Draft Sections
Purpose: Produce draft prose for each section of the deliverable.
Input: Approved structure + approved synthesis + relevant per-source summaries for each section + Elena's interpretive notes
Prompt (applied section by section):
You are drafting a section of a management consulting deliverable. Write in a clear, authoritative, evidence-based style. Avoid jargon. Every claim should be supported by evidence from the research provided.
SECTION TO DRAFT: {section_name}
SECTION PURPOSE: {section_description}
TARGET AUDIENCE: {audience}
EVIDENCE TO DRAW ON:
{relevant_synthesis_excerpts}
{relevant_source_summaries}
ELENA'S NOTES FOR THIS SECTION:
{elenas_interpretive_notes}
Write the section draft. Requirements:
- Length: approximately {target_length} words
- Lead with the key finding, not with methodology or context
- Use active voice
- Each paragraph should advance the argument
- Recommend specific actions, not vague directions
- Where evidence is limited, acknowledge it explicitly
Output: Draft prose for each section
Human review: Elena reviews each section draft as it is produced, editing directly. Her average editing time per section is 15-20 minutes for a well-drafted section, 30-45 minutes for sections requiring more substantial revision. The chain significantly reduces the number of sections requiring major revision — drafts that have the synthesis and her interpretive notes as input are consistently more accurate than drafts produced from memory or general prompts.
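Step 5 is again a loop, but unlike Step 2 each iteration carries different inputs: only the synthesis excerpts, source summaries, and interpretive notes relevant to that section. A hypothetical sketch of the per-section call, with the template abbreviated and `call_model` stubbed:

```python
def call_model(prompt: str) -> str:
    # Stub for the real LLM API call.
    return "[draft section prose]"


# Abbreviated version of the Step 5 prompt above.
SECTION_TEMPLATE = (
    "You are drafting a section of a management consulting deliverable.\n"
    "SECTION TO DRAFT: {name}\n"
    "EVIDENCE TO DRAW ON:\n{evidence}\n"
    "ELENA'S NOTES FOR THIS SECTION:\n{notes}\n"
    "Length: approximately {target_length} words"
)


def draft_sections(sections: list) -> dict:
    """One draft call per section; each section dict supplies the template's inputs."""
    drafts = {}
    for section in sections:
        drafts[section["name"]] = call_model(SECTION_TEMPLATE.format(**section))
    return drafts


drafts = draft_sections([
    {"name": "Market Position", "evidence": "Convergence: pricing pressure...",
     "notes": "Lead with the pricing finding.", "target_length": 600},
])
```

Scoping each call to section-relevant evidence, rather than dumping the whole synthesis into every prompt, is what keeps the drafts grounded in the material each section is actually supposed to argue from.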
Results
Elena ran the chain across three full engagements in its first four months. The results were striking enough that she restructured her practice around it.
Time in research synthesis phase: Reduced from an average of 34 hours per engagement to 14 hours — a 59% reduction. The hours saved were not evenly distributed: the largest savings came from the per-source summarization step (Step 2), which Elena had previously done entirely herself and which is now done primarily by the AI under her review.
Deliverable quality: Assessed through client feedback scores, which Elena tracked on a consistent rubric. Average quality score improved from 4.2 to 4.7 on a 5-point scale. The improvement was concentrated in "evidence and support for recommendations" and "comprehensiveness of research coverage" — both directly attributable to the more systematic research processing the chain enabled.
Research gap identification: The chain identified research gaps at Step 1 that Elena would previously have discovered only during writing. In two of three engagements, this led to additional targeted research that strengthened the final deliverable. In the old process, these gaps would have resulted in hedged recommendations or unacknowledged weaknesses in the analysis.
Critical Observations
Elena is emphatic about what the chain does and does not do:
"The chain organizes and summarizes what my research says. It does not tell me what the research means for my client's specific situation. That interpretation is still entirely mine, and it's what I'm paid for. If I took the chain's synthesis straight into a deliverable without applying my own strategic judgment, the deliverable would be competent but not valuable. It would tell the client what the research says without telling them what to do about it."
The distinction matters for understanding what chains can and cannot do in knowledge-intensive professional work. The chain handles the cognitive work of organization and first-level synthesis — work that is necessary but not differentiating. Elena's differentiating contribution — the strategic interpretation that turns organized research into actionable guidance — remains entirely in her domain.
The chain also created an unexpected benefit: better documentation. Because every source summary is preserved in a structured format, Elena can quickly answer client questions ("what exactly did the industry report say about this?") by reviewing the chain's intermediate outputs rather than re-reading the original documents. Several clients have commented positively on Elena's ability to trace recommendations back to specific evidence — a capability the chain's documentation practice directly enables.
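The traceability benefit described above follows from a simple practice: persisting each intermediate summary in a structured format rather than discarding it. A minimal sketch of what that might look like — the file layout, field names, and function names here are assumptions for illustration, not Elena's actual system:

```python
import json
import tempfile
from pathlib import Path


def save_summaries(summaries: dict, out_dir: str) -> None:
    """Persist each per-source summary as a small JSON record."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for source_id, summary in summaries.items():
        (out / f"{source_id}.json").write_text(
            json.dumps({"source": source_id, "summary": summary}, indent=2)
        )


def find_evidence(term: str, out_dir: str) -> list:
    """Answer 'what exactly did the sources say about X?' from saved summaries."""
    hits = []
    for path in Path(out_dir).glob("*.json"):
        record = json.loads(path.read_text())
        if term.lower() in record["summary"].lower():
            hits.append(record["source"])
    return sorted(hits)


with tempfile.TemporaryDirectory() as tmp:
    save_summaries({"report_A": "Pricing pressure is rising.",
                    "interview_03": "Staffing is the top concern."}, tmp)
    matches = find_evidence("pricing", tmp)
```

Even this crude keyword lookup is enough to trace a recommendation back to its supporting source in seconds, which is the capability the clients noticed.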