Case Study 1: Cross-Domain Transfer in Practice -- Three Success Stories, Three Failure Stories
"Analogy is the fuel and fire of thinking." -- Douglas Hofstadter
Six Transfers, Six Lessons
This case study examines six attempts at cross-domain transfer -- three that succeeded brilliantly and three that failed instructively. The goal is not to catalog successes and failures but to understand why some transfers work and others do not. The six-step method and the analogy quality test, introduced in the chapter, provide the analytical framework. Each story illustrates a different lesson about the practice of moving ideas between fields.
Part I: Three Transfers That Worked
Transfer 1: From Aviation Checklists to Surgical Safety
The Problem: In the early 2000s, surgical errors were a leading cause of preventable death in hospitals. The World Health Organization estimated that at least seven million patients worldwide suffered complications from surgery each year, and at least one million died. Many of these deaths were caused not by the inherent difficulty of the surgery but by preventable errors: operating on the wrong site, failing to administer prophylactic antibiotics, leaving surgical instruments inside patients.
The Abstraction: Complex, high-stakes procedures performed by expert teams under time pressure fail not because of insufficient skill but because of insufficient standardization. Critical steps are skipped not because operators do not know them but because, in the pressure and complexity of the moment, they forget.
The Source Domain: Aviation had faced the same problem decades earlier. Pilots are highly trained experts performing complex procedures under time pressure. In the 1930s, after a series of crashes caused not by mechanical failure but by pilot error -- specifically, by pilots forgetting critical steps in complex procedures -- the aviation industry developed the preflight checklist: a simple, standardized list of steps that must be completed and confirmed before takeoff. The checklist did not teach pilots anything new. It ensured they did what they already knew, every time, without exception.
The Transfer: Dr. Atul Gawande, a surgeon at Brigham and Women's Hospital and a writer for The New Yorker, recognized the structural parallel. He worked with the WHO to develop the Surgical Safety Checklist: a nineteen-item checklist covering three phases of surgery (before anesthesia, before incision, before the patient leaves the operating room). The checklist included items that every surgeon already knew -- confirm the patient's identity, confirm the surgical site, administer antibiotics -- but that were sometimes skipped in the rush of the operating room.
The Result: In a study across eight hospitals in eight countries, the checklist reduced surgical complications by 36 percent and deaths by 47 percent. These are staggering numbers. The intervention did not require new technology, new drugs, or new skills. It required a piece of paper with nineteen items on it, imported from a completely different industry.
Why It Worked (Analogy Quality Test):
- Elements: Pilot corresponds to surgeon. Copilot corresponds to surgical assistant. Aircraft corresponds to patient. Preflight corresponds to pre-incision. The mapping is clear.
- Relationships: The relationship "expert under pressure skips known steps" operates identically in both domains.
- Causal mechanism: The mechanism is the same -- standardization prevents omission errors by externalizing memory. The checklist works for the same reason in both domains.
- Where it breaks: Aviation is more standardized than surgery -- one Boeing 737 is much like another, but every patient is different. The surgical checklist had to be more flexible than the aviation checklist, allowing for patient-specific modifications.
- Testable prediction: The analogy predicted that a checklist would reduce errors. The prediction was tested and confirmed, with dramatic results.
Lesson: When the causal mechanism transfers cleanly, the results can be transformative. The key was that Gawande imported the principle (externalize critical steps to prevent omission) rather than the specific checklist (which had to be designed for surgery).
Transfer 2: From Ecological Diversity to Financial Portfolio Theory
The Problem: In the 1950s, investors faced a fundamental question: how should you allocate your money across different investments? The conventional wisdom was to find the single best investment and put all your money there. But this strategy was devastatingly vulnerable to the failure of that single investment.
The Abstraction: A system that depends on a single source of returns is fragile. How can a system be designed to produce reliable returns while protecting against the catastrophic failure of any single component?
The Source Domain: Ecologists had long understood that ecosystem stability depends on biodiversity. A forest with thirty species of trees is more resilient than a forest with one species, because a disease or pest that devastates one species leaves the others intact. The ecosystem's "returns" (biomass production, nutrient cycling) are more stable when they come from diverse sources. The mechanism is simple: different species respond differently to the same environmental stress, so the failures of individual species are uncorrelated.
The Transfer: Harry Markowitz, in his 1952 paper "Portfolio Selection," formalized this insight in the language of finance. His Modern Portfolio Theory demonstrated mathematically that a diversified portfolio -- one containing investments whose returns are not perfectly correlated -- will produce higher returns for a given level of risk (or lower risk for a given level of returns) than any individual investment. The key insight was the same one ecologists had understood: uncorrelated sources of returns provide stability through diversity.
Why It Worked: The causal mechanism is mathematically identical. In ecology, uncorrelated species responses to environmental stress stabilize ecosystem productivity. In finance, uncorrelated asset returns stabilize portfolio returns. The mathematics is the same: the variance of the average of n uncorrelated random variables is only 1/n of their average individual variance, so pooling many uncorrelated sources shrinks overall volatility. Markowitz may or may not have been consciously inspired by ecology, but the structural homology is complete.
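The 1/n effect can be seen in a short simulation. The sketch below is illustrative only (the asset count, mean, and volatility figures are invented for the example, not drawn from the case study): an equal-weighted portfolio of uncorrelated assets has a standard deviation of roughly sigma divided by the square root of n.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate returns for 30 uncorrelated assets, each with the same
# mean (5%) and the same standard deviation (20%). These parameters
# are hypothetical, chosen only to illustrate the mathematics.
n_assets, n_samples = 30, 100_000
returns = rng.normal(loc=0.05, scale=0.20, size=(n_samples, n_assets))

# Portfolio return = equal-weighted average of the individual returns.
# For n uncorrelated assets of equal variance s^2, the portfolio
# variance is s^2 / n, so its standard deviation is s / sqrt(n).
for n in (1, 5, 30):
    portfolio = returns[:, :n].mean(axis=1)
    print(f"{n:2d} assets: std = {portfolio.std():.3f} "
          f"(theory: {0.20 / np.sqrt(n):.3f})")
```

No single asset becomes less risky; the risk reduction comes entirely from averaging across uncorrelated sources -- the same arithmetic that stabilizes a thirty-species forest.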
Lesson: The strongest cross-domain transfers occur when the underlying mathematics is literally the same. Diversification as a strategy for resilience is a pattern that applies to any system managing risk across multiple sources -- ecosystems, financial portfolios, supply chains, career skills, social networks.
Transfer 3: From Military After-Action Reviews to Corporate Learning
The Problem: In the 1990s, many corporations recognized that they were repeating the same mistakes. Project teams would complete a project, disband, and scatter -- carrying their hard-won lessons to new assignments where the lessons were never systematically shared. The organization learned slowly, if at all, because its learning mechanism was individual memory rather than institutional knowledge.
The Abstraction: An organization that relies on individual memory for institutional learning loses knowledge when individuals leave or move. How can an organization create a systematic process for capturing and distributing lessons from experience?
The Source Domain: The U.S. Army had faced this problem and developed the After-Action Review (AAR) -- a structured debriefing process conducted after every significant event (training exercise, operation, mission). The AAR asks four questions: What was supposed to happen? What actually happened? Why was there a difference? What can we learn from this? The process is structured but non-hierarchical: rank is suspended during the AAR, and junior soldiers are encouraged to speak as freely as senior officers. The output is a written document that enters the institutional knowledge system.
The Transfer: Companies including Shell, General Electric, and the World Bank adopted the AAR format, adapting it for corporate contexts. Shell's "learning histories" and the World Bank's "knowledge management" programs were directly modeled on military AAR processes. The adaptation required some translation -- corporate AAR participants did not face the life-or-death stakes that motivated military candor, so facilitators had to work harder to create psychological safety. But the core mechanism -- structured reflection immediately after action, with frank discussion of what went wrong and explicit documentation of lessons -- transferred cleanly.
Why It Worked: The causal mechanism transfers: structured reflection produces explicit knowledge from tacit experience (connecting to Chapter 23's discussion of tacit knowledge). The AAR works because it converts the informal, personal knowledge that each participant holds into formal, shared knowledge that the institution can preserve. This mechanism operates regardless of whether the institution is military, corporate, or governmental.
Lesson: When transferring a process (not just an idea), you must translate not only the steps but also the conditions that make the process effective. The military AAR works partly because of the seriousness of military culture. Corporate AARs required creating that seriousness through facilitation and leadership commitment.
Part II: Three Transfers That Failed
Failure 1: The "Business as Ecosystem" Analogy
The Attempt: In the 1990s and 2000s, business strategists enthusiastically adopted ecological metaphors. Companies were "organisms." Industries were "ecosystems." Startups were "species" competing for "ecological niches." Business books described "predator-prey dynamics," "symbiotic partnerships," and "evolutionary fitness."
Why It Failed: The analogy maps surface features (competition, adaptation, survival) but fails at the level of causal mechanisms. Biological evolution operates through random mutation and natural selection -- organisms cannot choose to evolve, and adaptation requires many generations. Businesses make deliberate strategic choices and can transform themselves within a single planning cycle. Biological organisms cannot merge with each other; companies do it routinely. Species cannot decide to enter a new ecological niche; companies can diversify at will.
The ecological metaphor produced specific, testable predictions that turned out to be wrong. It predicted that dominant companies, like dominant species, would be gradually displaced by better-adapted competitors through a slow evolutionary process. In reality, dominant companies are often displaced by sudden disruptions -- new technologies, regulatory changes, or strategic blunders -- that have no ecological counterpart. The analogy predicted gradualism where reality produced discontinuity.
What Went Wrong (Six-Step Analysis): The analogy failed at Step 5 (Stress-Test). The causal mechanisms are too different. Biological evolution is non-teleological (no organism chooses its mutations); business strategy is teleological (companies set goals and pursue them). This is not a minor difference. It is a difference in the fundamental causal structure that makes most ecological predictions inapplicable to business.
Lesson: An analogy that matches on competition, adaptation, and survival -- the most prominent surface features -- can fail completely at the level of mechanism. Always check whether the causal arrow works the same way.
Failure 2: Taylorism Applied to Education
The Attempt: Frederick Winslow Taylor's Scientific Management, developed for factory manufacturing in the early 1900s, was enormously successful at increasing industrial productivity. Its core principles -- standardize procedures, measure outputs, optimize each step, hold workers accountable for quantitative targets -- seemed applicable to education. If factories could increase output by standardizing and measuring, why not schools?
The result was the industrial model of education: standardized curricula, standardized tests, students processed in batches (grades), learning measured by quantitative outputs (test scores), and teachers evaluated by their students' measurable performance.
Why It Failed: The transfer ignored a fundamental contextual difference. Factory workers perform standardized tasks on standardized materials to produce standardized outputs. Students are not standardized materials. They arrive with different backgrounds, different abilities, different motivations, and different needs. The "output" of education -- a well-educated, curious, capable human being -- is not standardizable in the way that a manufactured product is.
The Taylorist approach to education is a textbook case of Goodhart's Law (Chapter 15): when test scores become the target, teaching to the test replaces genuine education. It is also a case of conservation of complexity (Chapter 41): the complexity of educating diverse students was not eliminated by standardization. It was pushed underground -- into the gap between what tests measure and what education actually requires.
What Went Wrong (Six-Step Analysis): The analogy failed at Step 5 (Stress-Test) and committed Mistake 3 (Ignoring Context). The factory context assumes standardized inputs; the educational context has irreducibly diverse inputs. The factory output can be fully specified in advance; the educational output cannot. These are not minor contextual differences. They are structural differences that undermine the core mechanism of the transfer.
Lesson: When the inputs to a process are standardized, standardizing the process increases quality. When the inputs are diverse, standardizing the process destroys quality. The transfer from manufacturing to education failed because it imported a solution designed for homogeneous inputs into a domain defined by heterogeneous inputs.
Failure 3: The Lean Startup Applied to Government
The Attempt: Eric Ries's "lean startup" methodology -- build a minimum viable product, measure how customers respond, learn, and pivot quickly -- was enormously successful in the startup world. Government reformers, frustrated with the slow, bureaucratic pace of government programs, proposed applying lean startup principles to government: launch pilot programs quickly, measure outcomes, iterate rapidly, and scale what works.
Why It Failed (In Many Cases): The lean startup method works in a context of voluntary customers, rapid feedback, low cost of failure, and the ability to pivot. Government operates in a context of mandatory service provision, slow feedback, high cost of failure, and severe political consequences for pivoting (which opponents characterize as "flip-flopping" or "wasting taxpayer money").
A startup that launches a bad product loses customers. A government agency that launches a bad program can harm citizens who have no choice but to use it. A startup can quietly shut down a failed experiment. A government program that is shut down becomes a political scandal. A startup can pivot overnight. A government program requires legislative authorization, regulatory approval, procurement processes, and stakeholder consultation -- none of which can move at startup speed.
The lean startup transfers that did work in government (such as the U.S. Digital Service's redesign of government websites) succeeded precisely in the narrow areas where the government context most closely resembled the startup context: digital products with measurable outcomes, voluntary user engagement, and tolerance for iterative improvement.
What Went Wrong (Six-Step Analysis): Mistake 3 (Ignoring Context) and Mistake 4 (Assuming Universality). The lean startup method was not a universal recipe for organizational effectiveness. It was a method optimized for a specific context -- and the government context violated several of the conditions that made the method work.
Lesson: Ask: What features of the source context are necessary conditions for the solution to work? Then check whether those conditions exist in the target context. If they do not, the solution will fail -- not because it is wrong, but because it has been planted in the wrong soil.
Synthesis: What Separates Success from Failure
Examining these six transfers together reveals a clear pattern. The three successes share three features:
- The causal mechanism transferred cleanly. Checklists prevent omission errors for the same reason in surgery as in aviation. Diversification reduces risk for the same reason in portfolios as in ecosystems. Structured reflection captures tacit knowledge for the same reason in corporations as in the military.
- The translation was adapted, not copied. Gawande did not use aviation checklists in surgery. He designed new checklists for the surgical context. Portfolio theory uses finance-specific mathematics, not ecological field methods. Corporate AARs adapted the military format for civilian psychology.
- The practitioners understood where the analogy broke. Gawande knew that patients are not aircraft. Markowitz knew that markets are not forests. Corporate AAR facilitators knew that executives are not soldiers. Understanding the limits of the analogy allowed them to adapt where adaptation was needed.
The three failures share three features:
- The causal mechanism did not transfer. Business evolution is not biological evolution. Education is not manufacturing. Government is not a startup.
- The translation was too literal or too abstract. Business strategists imported ecological vocabulary without ecological mechanisms. Educational reformers imported factory processes without checking whether the inputs were comparable. Government reformers imported a methodology without checking whether the context supported it.
- The practitioners did not stress-test the analogy. In each case, the analogy was adopted enthusiastically without systematic examination of where it broke and whether the breaks mattered.
The difference between success and failure in cross-domain transfer is not luck, genius, or the inherent quality of the source analogy. It is rigor -- the discipline of applying the six-step method and the analogy quality test, especially the stress-testing steps that most people skip because they are less exciting than the initial flash of insight.
Connection to Chapter 14 (Overfitting): Failed transfers are a form of overfitting -- fitting the analogy too closely to the most prominent features of the source domain while ignoring the features that distinguish the target domain. Just as a statistical model that fits the training data perfectly will fail on new data, an analogy that maps perfectly onto the source domain will fail when applied to a domain with different structural features. The stress-test is the equivalent of testing your model on out-of-sample data.
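The statistical analogue is easy to demonstrate. The toy below (illustrative only; the data, degrees, and seed are invented for the example) fits both a straight line and a high-degree polynomial to ten noisy points drawn from a simple linear trend. The flexible model fits the training data almost perfectly but performs worse on fresh data -- exactly the failure mode of an analogy tuned to the source domain's surface features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training samples from a simple underlying trend: y = x + noise.
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.2, size=10)

# Out-of-sample data: the true, noise-free relationship.
x_test = np.linspace(0, 1, 100)
y_test = x_test

def errors(degree):
    """Fit a polynomial of the given degree to the training data and
    return (training MSE, out-of-sample MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

# Degree 9 interpolates all ten points (near-zero training error)
# but generalizes worse than the simple line.
for degree in (1, 9):
    train_err, test_err = errors(degree)
    print(f"degree {degree}: train MSE = {train_err:.4f}, "
          f"test MSE = {test_err:.4f}")
```

The degree-9 fit is the over-literal analogy: it reproduces every quirk of the source (the training data), including the noise, and those memorized quirks are precisely what fails in the new domain.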
Discussion Questions
- For each of the three successful transfers, identify one way in which the transfer could have failed if the practitioners had been less careful about adaptation. What mistake would they have been making?
- For each of the three failed transfers, propose a better analogy from a different source domain. Apply the analogy quality test to your proposed alternative.
- The case study argues that the difference between success and failure is "rigor." Is this sufficient, or are there also elements of creativity, intuition, or luck that the six-step method cannot capture? What is the relationship between method and insight?
- Can you identify a cross-domain transfer in your own field or experience? Was it a success or a failure? Apply the analytical framework from this case study to understand why.