Learning Objectives
- Define the narrative fallacy and explain why it is a structural vulnerability in knowledge production, not just individual credulity
- Distinguish between explanation and storytelling — identifying when a plausible narrative is masquerading as an empirical finding
- Identify the plausible story problem operating across evolutionary psychology, historical analysis, criminal profiling, medical diagnosis, and business strategy
- Apply the "alternative narrative" test: for any explanatory story, construct an equally plausible story that reaches the opposite conclusion
- Add the narrative coherence lens to your Epistemic Audit
In This Chapter
- Chapter Overview
- 6.1 The Narrative Machine
- 6.2 Post-Hoc Explanation vs. Prediction
- 6.3 The Just-So Story Machine
- 6.4 The Plausible Story Problem in Professional Practice
- 6.5 The Alternative Narrative Test
- 6.6 What It Looked Like From Inside
- 6.7 Active Right Now: Where the Plausible Story Problem May Be Operating
- 6.8 The Interaction With Other Failure Modes
- 6.9 Practical Considerations: Living With Stories
- 6.10 Chapter Summary
- Spaced Review
- What's Next
- Chapter 6 Exercises → exercises.md
- Chapter 6 Quiz → quiz.md
- Case Study: The Criminal Profile That Caught the Wrong Man → case-study-01.md
- Case Study: Evolutionary Just-So Stories and the Limits of Adaptation Narratives → case-study-02.md
Chapter 6: The Plausible Story Problem
"Humans are not ideally set up to understand logic; they are ideally set up to understand stories." — Roger C. Schank (attributed)
Chapter Overview
On the evening of October 2, 2002, a 55-year-old man was shot and killed by a sniper while crossing a parking lot in Wheaton, Maryland. Over the next three weeks, nine more people were killed and three wounded in the Washington, D.C. metropolitan area. The Beltway sniper attacks terrorized the region and consumed the national media.
From the beginning, criminal profilers and media commentators constructed a narrative about the likely perpetrator. The profile was detailed and confident: the sniper was almost certainly a lone white male, probably in his twenties or thirties, with military training, living alone, likely employed in a job requiring mechanical skill. It was built on pattern-matching to prior serial killer cases. It was coherent. It was plausible. It drew on established frameworks of criminal behavior.
It was also almost entirely wrong.
The snipers were two people, not one. They were John Allen Muhammad, a 41-year-old Black man, and Lee Boyd Malvo, a 17-year-old Jamaican. Muhammad was a Gulf War veteran (the military training was correct) but virtually nothing else in the widely circulated profile matched. The profile's confident narrative — built on what "typically" happens in serial shooting cases — had constructed a story so compelling that it shaped the investigation, directing law enforcement attention toward white males and away from the actual perpetrators. At least one checkpoint stopped Muhammad and Malvo but let them go because they didn't match the profile.
The criminal profile was a plausible story. It was internally coherent, consistent with prior patterns, and told by credentialed experts. It was also wrong in ways that materially delayed the capture of active killers.
This is the plausible story problem: the human tendency to accept explanations that are narratively compelling — that form a coherent, satisfying story — as though narrative coherence were evidence. It is the fifth major entry mechanism for wrong ideas, and it is perhaps the most difficult to detect, because plausible stories don't just feel true. They feel like the same thing as truth.
In this chapter, you will learn to:
- Recognize when narrative coherence is substituting for evidence
- Distinguish between genuine explanation and post-hoc storytelling
- Apply the "alternative narrative" test to any explanatory claim
- Identify the plausible story problem across multiple fields
- Add the narrative coherence lens to your Epistemic Audit
🏃 Fast Track: If you're familiar with the narrative fallacy (Taleb, Kahneman), start at section 6.3 (The Just-So Story Machine) for the cross-domain analysis and section 6.5 for the diagnostic framework.
🔬 Deep Dive: After this chapter, explore Nassim Taleb's The Black Swan (chapters on the narrative fallacy), Philip Tetlock's Expert Political Judgment (on the predictive failures of expert narratives), and Daniel Kahneman's discussion of the narrative fallacy and the "illusion of understanding" in Thinking, Fast and Slow.
6.1 The Narrative Machine
Human beings are storytelling animals. This is not a metaphor — it is a description of how our cognitive architecture works.
Research in cognitive psychology has consistently demonstrated that humans process information more easily, remember it more accurately, and find it more persuasive when it is organized into narrative form — that is, when it follows a structure of characters, motivations, conflicts, causes, and consequences arranged in temporal sequence. We don't just prefer stories; we think in stories. Our brains are narrative processors that convert the raw data of experience into cause-and-effect sequences, character-driven explanations, and meaningful arcs.
This is enormously powerful. Narrative thinking allows us to plan, to learn from others' experiences, to coordinate social behavior, and to make sense of a complex world. Without narrative, we would drown in disconnected data points.
But narrative thinking has a critical vulnerability: it confuses plausibility with probability, and coherence with truth. A story that "makes sense" — that has a logical flow from cause to effect, that features understandable motivations and clear consequences — feels true regardless of whether it is supported by evidence. The story's internal coherence substitutes for external validation.
Kahneman and Tversky demonstrated this with their famous "Linda problem." Participants were told about Linda, a 31-year-old woman who was single, outspoken, and deeply concerned with issues of social justice. When asked whether it was more likely that Linda was (a) a bank teller, or (b) a bank teller and active in the feminist movement, the majority chose (b) — even though (b) is a subset of (a) and is therefore mathematically less probable. The added detail (active in the feminist movement) made the story more coherent, and the coherence was mistaken for probability. This is the conjunction fallacy: adding detail to a story makes it less probable but more believable.
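The arithmetic behind the Linda problem can be sketched directly. In the simulation below, every number is invented purely for illustration: whatever probabilities you choose, the conjunction "teller and feminist" counts a subset of the people counted by "teller" alone, so it can never come out more probable, no matter how much more coherent it sounds.

```python
import random

random.seed(42)

# Hypothetical population; the trait probabilities are invented for
# illustration and carry no empirical weight.
N = 100_000
people = [
    {
        "teller": random.random() < 0.05,    # 5% are bank tellers
        "feminist": random.random() < 0.30,  # 30% are active feminists
    }
    for _ in range(N)
]

p_teller = sum(p["teller"] for p in people) / N
p_both = sum(p["teller"] and p["feminist"] for p in people) / N

# The conjunction is a subset of the single event, so it can never
# be more probable -- however much more "coherent" the fuller story sounds.
assert p_both <= p_teller
print(f"P(teller)            = {p_teller:.4f}")
print(f"P(teller & feminist) = {p_both:.4f}")
```

The narrative system rewards each added detail; the probability calculus penalizes it. That divergence is the conjunction fallacy in one inequality.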
The conjunction fallacy reveals something profound about how narrative thinking works. It is not that people are bad at probability (though they are). It is that the narrative processing system and the probabilistic reasoning system use different evaluation criteria. The narrative system evaluates stories by coherence — does it fit together? Is the character consistent? Do the causes lead to the effects? The probabilistic system evaluates claims by likelihood — how many ways could this be true versus false? These systems often reach contradictory conclusions, and in most natural contexts, the narrative system wins.
This is not a marginal effect. In study after study, the conjunction fallacy persists even when participants are warned about it, even among statistically trained researchers, and even when the stakes are high. The narrative processing system is that powerful. It doesn't just supplement probabilistic reasoning; in many contexts, it overrides it.
The implications for knowledge production are enormous. When a research finding is presented as part of a compelling narrative (a discovery story, a breakthrough narrative, a contrarian challenge), it feels more convincing than the same finding presented as a statistical table. The narrative adds no information — the data is the same — but it adds coherence, and coherence is mistaken for evidence.
💡 Intuition: Think of narrative coherence as a kind of cognitive glue. When the elements of a story stick together — when causes lead to effects in a satisfying way — the story feels solid, complete, real. But the glue holds regardless of whether the elements are true. A perfectly coherent false story feels just as solid as a perfectly coherent true story. The glue is indifferent to truth.
The Illusion of Explanatory Depth
Research by Leonid Rozenblit and Frank Keil has documented a related phenomenon: the illusion of explanatory depth. People believe they understand complex phenomena much more deeply than they actually do. When asked "Do you understand how a toilet works?" most people say yes. When asked to actually explain the mechanism in detail, they quickly discover their understanding is superficial — they have a narrative ("you flush, the water goes down, new water fills up") that feels like understanding but isn't.
This illusion is particularly dangerous in professional contexts. An economist who can tell a compelling story about why a recession occurred feels — and appears — as though they understand recessions. A historian who can construct a narrative about why an empire fell feels — and appears — as though they understand the dynamics of imperial decline. But the story is not the same as the understanding. The narrative creates the illusion of explanatory depth without the substance.
🧩 Productive Struggle
Before reading the next section, try this: choose a major historical event you believe you understand (the fall of the Soviet Union, the rise of the internet, the 2008 financial crisis). Now try to explain, in detail, why it happened — not just what happened. As you explain, notice where you start generating narrative (plausible-sounding causal chains) versus where you're citing evidence. How much of your "understanding" is actually a story?
Spend 3–5 minutes. The discomfort you feel is diagnostic.
6.2 Post-Hoc Explanation vs. Prediction
The deepest version of the plausible story problem is the confusion between explanation and prediction.
An explanation tells you why something happened after you already know that it happened. A prediction tells you what will happen before you know the outcome. These are fundamentally different cognitive operations, but they feel similar — and the human brain systematically confuses them.
The Asymmetry
After the 2008 financial crisis, thousands of articles, books, and analyses explained why it happened. The causes were identified with clarity: subprime lending, securitization of bad mortgages, inadequate regulation, misaligned incentives at rating agencies, excessive leverage, and collective overconfidence in risk models. The explanations were coherent, well-documented, and largely correct.
But here is the uncomfortable question: How many of these explanatory factors were identified as risks BEFORE the crisis? Some were — a handful of analysts (Rajan, Roubini, Burry) raised warnings. But the vast majority of post-crisis explanations were generated after the outcome was known. The same analysts who failed to predict the crisis produced confident explanations of why it was, in retrospect, predictable.
Philip Tetlock's research on expert prediction confirms this asymmetry at enormous scale. In his landmark study (documented in Expert Political Judgment, 2005), Tetlock tracked over 28,000 predictions by nearly 300 political and economic experts over two decades. The experts' predictive accuracy was barely better than chance — roughly equivalent to a dart-throwing chimpanzee. Yet these same experts produced confident, articulate, and internally coherent explanations for events after they occurred.
The gap between explanation and prediction was not small. It was a chasm. And the experts who were most confident in their explanations — the ones who constructed the most compelling narratives — were often the worst predictors. The narrative skill and the predictive skill were not just different; they were inversely correlated. The better the story, the worse the forecast.
This asymmetry — confident explanation after the fact, poor prediction before the fact — is a hallmark of the plausible story problem. If your framework truly explains the crisis, it should have been able to predict it. If it couldn't predict it, the "explanation" may be a post-hoc narrative rather than a genuine causal account.
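The explanation/prediction gap only becomes visible when forecasts are scored before outcomes are known. One standard scoring rule is the Brier score — the mean squared error of probabilistic forecasts, related to the calibration measures used in forecasting research. The sketch below uses invented forecasts to show the shape of Tetlock's result: a confident but miscalibrated forecaster can score worse than someone who always says 50-50.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    0.0 is a perfect score; always guessing 50% earns exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Five hypothetical events (1 = the event occurred). All numbers are
# illustrative, not drawn from Tetlock's data.
outcomes = [1, 0, 0, 1, 0]
confident_expert = [0.9, 0.8, 0.9, 0.2, 0.7]   # coherent story, often wrong
coin_flipper = [0.5, 0.5, 0.5, 0.5, 0.5]        # no story at all

print(brier_score(confident_expert, outcomes))  # 0.518 -- worse than chance
print(brier_score(coin_flipper, outcomes))      # 0.25  -- the chance baseline
```

Confidence raises the stakes of every forecast: the expert's 0.9s are rewarded when right but punished heavily when wrong, which is why narrative confidence and forecasting skill can diverge so sharply.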
🔄 Check Your Understanding (try to answer without scrolling up)
- What is the key difference between explanation and prediction?
- Why does the 2008 financial crisis illustrate the confusion between the two?
Verify
1. Explanation tells you why something happened after you know the outcome. Prediction tells you what will happen before the outcome is known. Explanation is retrospective; prediction is prospective.
2. Thousands of confident explanations were produced after the crisis, but very few of the same analysts predicted the crisis beforehand. The ability to explain after the fact was mistaken for the ability to understand.
Hindsight Bias: The Engine of False Explanation
The confusion between explanation and prediction is powered by hindsight bias — the well-documented tendency to believe, after learning an outcome, that you "knew it all along." Hindsight bias doesn't just distort memory; it distorts the feeling of understanding. Once you know that the housing market collapsed, the contributing factors seem obvious. But they didn't seem obvious in 2006, when the same information was available but the narrative hadn't been constructed yet.
Hindsight bias operates at the institutional level, not just the individual level. After a corporate failure, the business press constructs a narrative explaining why it was inevitable. After a military defeat, the strategic analysis identifies the critical errors. After a medical misdiagnosis, the case review traces the chain of mistakes. In each case, the narrative is constructed from the outcome backward — selecting the facts that lead to the known conclusion and ignoring the facts that pointed elsewhere.
This is the institutional version of the plausible story problem: entire fields generate retrospective narratives that feel like understanding but are actually selections from a much larger and more ambiguous set of possible stories.
6.3 The Just-So Story Machine
The term "just-so story" — borrowed from Rudyard Kipling's Just So Stories ("How the Leopard Got His Spots") — has become the standard label for post-hoc narrative explanations in science. The field most famously associated with just-so stories is evolutionary psychology, but the pattern extends far beyond it.
Evolutionary Psychology: The Paradigm Case
Evolutionary psychology proposes that many human behaviors, preferences, and cognitive tendencies are adaptations shaped by natural selection during our evolutionary past. The field has produced important insights — the existence of universal emotional expressions, the logic of kin selection, the mechanisms of mate choice. Some of its claims are well-supported by evidence.
But the field also has a structural vulnerability to the plausible story problem: for virtually any human behavior, a plausible evolutionary story can be constructed after the fact.
- Why do humans fear snakes? Because ancestral humans who feared snakes were more likely to survive. (Plausible.)
- Why do humans enjoy music? Because musical ability was a signal of fitness in mate selection. (Plausible.)
- Why do humans experience jealousy? Because ancestral humans who guarded their mates had more reproductive success. (Plausible.)
- Why do humans procrastinate? Because in ancestral environments, immediate rewards were more reliable than delayed rewards. (Plausible.)
Each of these stories is internally coherent. Each connects a modern behavior to an ancestral selection pressure through a plausible causal chain. And that's exactly the problem: they are all plausible. For any behavior, multiple competing evolutionary stories can be constructed, and the available evidence usually cannot distinguish between them.
The philosopher of biology Elliott Sober has argued that many evolutionary psychological claims are "consistent with the evidence" in the same way that Ptolemaic epicycles were consistent with the evidence — they accommodate observations but don't predict new ones. The stories are not falsified by the evidence, but neither are they supported by it in any strong sense. They are narrative constructions that satisfy the human need for explanation without advancing scientific understanding.
The test is straightforward: Can this evolutionary story make a prediction about a behavior we haven't yet studied? Can the theory of music as fitness signaling predict which kinds of musical ability will be most attractive? Can the theory of jealousy as mate-guarding predict which individuals will be most jealous under which conditions? If the story only explains behaviors we've already observed, it is post-hoc. If it can predict new observations — and those predictions are confirmed — it becomes science.
Some evolutionary psychological claims have passed this test. The predictions about male and female mate preferences, derived from parental investment theory, have been partially confirmed across cultures (though with more variation than the original theory predicted). But many of the field's most popular claims — the adaptive explanations for music, humor, art, religion, and consciousness — remain in the "just-so story" category: plausible, compelling, and untested.
⚠️ Common Pitfall: Criticizing evolutionary psychology's just-so stories does not mean rejecting evolutionary theory. Evolution by natural selection is one of the best-supported theories in all of science. The problem is with specific post-hoc narratives about why particular human behaviors evolved — not with the theory of evolution itself. The difference is between a well-tested general framework (evolution) and untested specific applications (why do we like music?).
Historical Determinism: Why the Fall of Rome Was "Inevitable"
The plausible story problem is equally active in historical analysis.
Since the Western Roman Empire fell, historians have spent centuries explaining why its collapse was inevitable. Edward Gibbon blamed Christianity and moral decline. Others have cited lead poisoning, barbarian invasions, economic collapse, military overextension, administrative corruption, plague, climate change, or some combination of these factors.
Each explanation is plausible. Each constructs a narrative from selected facts leading to the known outcome. But here's the test: Could the same facts have been assembled into a narrative predicting Rome's continued success?
Consider: Rome survived previous barbarian invasions, previous plagues, previous periods of corruption, previous military overextension, and previous economic troubles. At many points in its history, the same "causes of decline" were present and the empire persisted. The factors that "caused" Rome's fall also "caused" its centuries of survival. The difference is not in the factors but in the narrative — in which facts are selected and how they're arranged.
This is underdetermination: when the available evidence is compatible with multiple contradictory explanations. The concept is central to this entire chapter.
The Newspaper Test for Historical Narratives
Here is a practical test for historical determinism. Take any historical narrative of the form "Event X happened because of factors A, B, and C." Now imagine transporting a newspaper columnist back to the period before Event X, giving them complete knowledge of factors A, B, and C, and asking them to write a column predicting what will happen next.
Would factors A, B, and C have led a reasonable, well-informed observer to predict Event X?
In most cases, the honest answer is no. The same factors that are cited as "causes" of Rome's fall also characterized periods when Rome flourished. The same economic conditions that preceded the 2008 crash also characterized years when the economy grew. The same social tensions that preceded the French Revolution also characterized other periods that didn't produce revolutions.
The factors become "causes" only after the outcome is known. Before the outcome, they are just conditions — features of the landscape that could be assembled into multiple different stories depending on what happens next. The narrative selects from the landscape to create the appearance of inevitability. But the inevitability is in the story, not in the history.
🎓 Advanced: The philosopher of history Arthur Danto argued that some historical concepts are "narrative sentences" — sentences whose truth depends on later events. "The Thirty Years' War began in 1618" is a narrative sentence because the people in 1618 didn't know they were beginning a thirty-year war. They were just fighting. The concept "Thirty Years' War" can only be constructed retrospectively. This means that a significant portion of historical knowledge is inherently post-hoc narrative — it can only exist after the outcome is known. This doesn't make it useless, but it means historical "explanation" is always, to some degree, storytelling. In historical analysis, underdetermination is the norm, not the exception. The evidence almost always supports multiple narratives, and the choice between them is driven by the historian's framework, not by the evidence itself.
📜 Historical Context: The philosopher of history R.G. Collingwood argued that all historical explanation is necessarily narrative — that we cannot understand the past except through stories. If this is true, the plausible story problem is not a bug in historical analysis but a fundamental feature. History is, to some degree, always a plausible story. The question is whether we recognize it as such or mistake it for explanation.
6.4 The Plausible Story Problem in Professional Practice
The plausible story problem is not confined to academic fields. It operates in high-stakes professional contexts where lives and livelihoods depend on getting the story right.
Criminal Profiling: When the Story Points to the Wrong Person
The Beltway sniper case from the chapter opening is not an isolated failure. Research on criminal profiling has consistently found that it performs poorly as a predictive tool, despite its narrative power.
A study published in the Journal of Investigative Psychology and Offender Profiling found that professional criminal profilers were no more accurate in predicting offender characteristics than college students or self-described psychics. The profilers were, however, significantly more confident in their predictions. The narrative coherence of their profiles — the way the pieces fit together into a compelling story — created confidence without accuracy.
This finding should be deeply unsettling. Criminal profiling is presented in courtrooms, taught in training academies, and depicted in popular media as a science — a systematic method for identifying offenders based on behavioral evidence. But the research suggests it is, at its core, a narrative exercise: constructing a compelling story about the offender from the evidence at the crime scene. The story feels authoritative because it is told by credentialed experts with years of experience. But narrative expertise (the ability to construct coherent stories) and predictive accuracy (the ability to identify the actual offender) are different skills — and the research suggests that profiling has the first in abundance and the second barely at all.
This is not to say that profilers never contribute to investigations. They sometimes do, by suggesting lines of inquiry that investigators hadn't considered. But the contribution is heuristic (suggesting possibilities) rather than scientific (providing reliable predictions). The danger arises when the heuristic contribution is mistaken for scientific certainty — when a plausible story is treated as evidence.
The mechanism is the representativeness heuristic at work in a professional context. Profilers match the current case to a mental template built from prior cases, construct a narrative that connects the crime scene details to a character profile, and present the result with the confidence that narrative coherence generates. The story feels right. The investigators act on it. And sometimes the story leads away from the actual perpetrator rather than toward them.
Medical Diagnosis: When the Story Matches the Wrong Disease
Physicians are also vulnerable to the plausible story problem. Research on diagnostic error has found that one of the most common causes of misdiagnosis is premature closure — settling on a diagnosis too early because the initial narrative is compelling.
A patient presents with fatigue, weight loss, and night sweats. The physician's pattern-matching system generates a narrative: this fits the template for lymphoma. The physician orders tests for lymphoma. The tests come back ambiguous. But the narrative is already established — the story is coherent — and the physician interprets the ambiguous results through the lens of the existing narrative rather than stepping back to consider alternative explanations (which might include tuberculosis, HIV, thyroid disorders, or depression).
Diagnostic errors are estimated to affect 12 million adults annually in the United States, and a significant fraction of these errors involve premature closure — the physician's narrative settling on a diagnosis before the evidence warranted it.
The mechanism works like this. A patient presents with a set of symptoms. The physician's mind, trained on thousands of prior cases, immediately begins pattern-matching — searching for the narrative that best fits the presenting evidence. Within seconds (research on clinical reasoning suggests the initial hypothesis forms in under 30 seconds), a candidate diagnosis crystallizes. This candidate then shapes everything that follows: which questions are asked, which tests are ordered, how ambiguous results are interpreted.
The speed of this pattern-matching is usually an asset. Most of the time, the initial narrative is correct or close to correct, and the rapid hypothesis formation enables efficient diagnosis. But when the initial narrative is wrong — when the presenting symptoms match a common template but the actual cause is different — the same speed becomes a liability. The narrative creates confirmation bias: evidence that fits the story is noticed; evidence that contradicts it is downweighted or ignored.
Emergency physicians describe this as "anchoring to the first story." A patient arrives with chest pain. The first story is: heart attack. Every subsequent piece of evidence is processed through the heart attack lens. Even when the ECG is normal and the cardiac enzymes are negative, the narrative persists — the physician orders more cardiac tests rather than considering pulmonary embolism, esophageal spasm, or anxiety. The story is too coherent to abandon on the basis of inconvenient evidence.
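What the anchored physician fails to do can be stated precisely: update on the disconfirming result. The sketch below applies Bayes' rule with numbers invented purely for illustration (they are not clinical estimates) to show how sharply a normal test result should deflate the first story.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(hypothesis | evidence) via Bayes' rule for a binary hypothesis."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# Hypothetical initial credence that the chest pain is a heart attack.
prior = 0.60
# Illustrative likelihoods: a normal ECG is assumed uncommon in heart
# attacks but common among the alternatives (embolism, spasm, anxiety).
p_normal_ecg_given_mi = 0.10
p_normal_ecg_given_other = 0.80

updated = posterior(prior, p_normal_ecg_given_mi, p_normal_ecg_given_other)
print(f"{updated:.2f}")  # 0.16 -- far below the 0.60 prior
```

Under these assumptions, one normal ECG should drop the heart-attack story from favorite to long shot. Anchoring means the physician's effective credence never makes this move: the narrative holds the prior in place while the likelihoods go unconsulted.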
🔗 Connection: The plausible story problem in medical diagnosis interacts with the authority cascade (Chapter 2): a senior physician's initial narrative carries disproportionate weight, and junior team members are reluctant to challenge a story that a respected clinician has endorsed. It also interacts with the streetlight effect (Chapter 4): physicians test for what they suspect (what's illuminated by the narrative), not for what they don't suspect (what's in the dark).
Business Strategy: The Story of Success
We encountered startup mythology in Chapter 5 (survivorship bias). Now we can add the plausible story layer.
When a startup succeeds, a narrative is constructed: the founder saw an opportunity that others missed, built a team around a bold vision, navigated obstacles through persistence and brilliance, and created something the world needed. This narrative is applied with minor variations to every successful founder, from Steve Jobs to Elon Musk to Sara Blakely.
The narrative is plausible for every success story. It is equally plausible for many failure stories. The difference between a "visionary founder" narrative and a "delusional founder" narrative is often nothing more than the outcome. The stories are constructed from the same raw materials — ambition, risk-taking, unconventional thinking, persistence — and the outcome determines which narrative gets applied.
The problem is not that these narratives are wrong — the founders really did have vision, persistence, and unconventional thinking. The problem is that the narratives are underdetermined by the evidence. The same evidence supports multiple contradictory stories, and the outcome selects which story gets told.
Consider Elizabeth Holmes and Theranos. Before the fraud was discovered, the narrative was: brilliant young founder, Stanford dropout, driven by a personal mission to democratize healthcare, attracted world-class board members and investors, built a revolutionary technology. After the fraud was discovered, the same facts were reassembled: charismatic con artist, Stanford dropout who lacked the technical knowledge to fulfill her promises, manipulated board members and investors, built an elaborate deception.
The underlying facts were the same. What changed was the outcome — and the outcome determined which narrative was applied. If Theranos had succeeded, Holmes would be in the pantheon of visionary founders alongside Jobs and Gates. Because it failed — spectacularly — she is in prison. The narrative is downstream of the outcome, not upstream of the evidence.
🪞 Learning Check-In
Pause and reflect:
- Think of a success story you find inspiring. Can you construct an equally compelling failure narrative using the same facts?
- Think of an explanation you believe deeply. What would you need to see to abandon it?
- In your professional experience, how often do explanations come before evidence vs. after?
🔍 Why Does This Work?
The plausible story problem exploits the same cognitive machinery that makes narrative thinking so powerful. We are designed to construct causal stories from data. This ability is essential for planning, learning, and social coordination. But the ability doesn't come with a reliability indicator. The brain generates stories from true data and false data with equal fluency. The feeling of understanding that accompanies a good story is the same whether the story is correct or not.
6.5 The Alternative Narrative Test
How do you detect the plausible story problem? The most powerful diagnostic is the alternative narrative test: for any explanatory story, try to construct an equally plausible story that reaches the opposite conclusion using the same evidence.
If you can — if the same facts support multiple contradictory narratives with roughly equal plausibility — then the original narrative is underdetermined by the evidence. It is a plausible story, not a demonstrated explanation.
How to Apply the Test
Step 1: State the narrative clearly. "Company X succeeded because of its strong culture and visionary leadership."
Step 2: Construct an alternative narrative using the same facts. "Company X succeeded despite its strong culture (which created groupthink and resistance to outside ideas) and visionary leadership (which produced tunnel vision and ignored warning signs). It succeeded because of market timing, access to capital, and luck."
Step 3: Evaluate which narrative the evidence better supports. If both are roughly equally plausible — if you can't clearly distinguish between them on evidence alone — then neither narrative is well-supported. The evidence is underdetermined.
Step 4: If the evidence is underdetermined, calibrate your confidence appropriately. Don't reject the original narrative — it may be correct. But don't treat it as established either. It is one plausible story among several.
This test is simple but psychologically demanding. It requires you to temporarily inhabit a narrative that contradicts your preferred explanation — to build the best possible case for the opposite view. Most people find this uncomfortable, which is precisely why the test is valuable. If you cannot construct an alternative narrative, it may be because the original is so well-supported that no alternative is viable. But if you can — if the alternative comes easily — you have learned something important: the evidence doesn't support the original story as strongly as you thought.
The alternative narrative test is, in essence, a formalized version of the devil's advocate method — but applied to stories rather than to arguments. Arguments can be evaluated on logic. Stories must be evaluated on whether the same data admits other plots. If it does, the plot you started with is not uniquely determined by the data.
Worked Example: The Rise of Apple
Original narrative: Apple succeeded because Steve Jobs was a visionary who understood that design and user experience were more important than technical specifications. His perfectionism and willingness to say "no" to features created products that were simple, elegant, and beloved by consumers.
Alternative narrative: Apple succeeded despite Steve Jobs's perfectionism, which created a toxic work environment, delayed product launches, and alienated talented employees. Apple succeeded because of Tim Cook's supply chain genius, Jonathan Ive's design talent, and the iPod-iTunes ecosystem that created lock-in — factors that would have produced success under many different leadership styles.
Evaluation: Both narratives are plausible. Both use real facts. The evidence doesn't clearly support one over the other. The "Steve Jobs as visionary" narrative is widely accepted, but it is a plausible story rather than a demonstrated explanation.
🔄 Check Your Understanding (try to answer without scrolling up)
- What is the alternative narrative test?
- If you can construct an equally plausible alternative narrative, what does that tell you?
Verify
1. For any explanatory story, construct an equally plausible story using the same evidence that reaches the opposite conclusion.
2. It tells you the evidence is underdetermined — it supports multiple contradictory explanations — and the original narrative's apparent strength comes from coherence, not from evidence.
6.6 What It Looked Like From Inside
Consider the perspective of a criminal profiler working the Beltway sniper case in October 2002:
- You have twenty years of experience with serial killers. Your case knowledge is extensive. Your pattern-matching skills are finely honed.
- The behavioral evidence — the randomness of victim selection, the use of a rifle, the geographic pattern — matches templates you've seen before. Serial snipers in the United States have historically been lone white males with military training.
- You construct a profile that is consistent with your experience and with the evidence. The profile is internally coherent. It makes sense.
- Law enforcement relies on your profile. Checkpoints are established. Witnesses are asked to report sightings of vehicles driven by white males.
- The profile feels right because it is a good story — it fits together, it draws on real patterns, it generates clear investigative actions.
From inside this perspective, the profiler is not being careless. They are applying expertise, drawing on experience, and constructing the most plausible narrative from the available evidence. The error is not in the profiler's skill or dedication. It is in the structure of the reasoning: pattern-matching to past cases produces plausible stories, but plausible stories are not reliable predictions. The base rates (most serial shooters are white males) are real, but base rates are not deterministic — and in this case, the actual perpetrators fell outside the base rate.
The Confabulation Problem
Neuroscience research on confabulation adds a disturbing dimension. Patients with certain brain injuries produce confident, detailed narratives explaining their behavior — narratives that are entirely fabricated. A patient with anosognosia (unawareness of paralysis) will explain why they aren't moving their paralyzed arm: "I don't feel like it." "I already moved it." "The doctor asked me not to." These explanations are fluent, confident, and completely disconnected from reality.
The unsettling insight from this research is that all of us confabulate to some degree. Split-brain research by Michael Gazzaniga demonstrated that when the left hemisphere (which controls speech) is presented with behavior it didn't initiate, it immediately generates a plausible narrative explaining the behavior — even though the narrative is fabricated. The explanation feels completely genuine to the person offering it.
If the narrative-generating machinery can produce confident explanations for behavior it didn't cause, in people with functioning brains under laboratory conditions, then the same machinery is surely producing confident explanations for events it doesn't understand in everyday professional contexts. The profiler, the historian, the economist, the physician — all are using the same narrative-generating hardware. And that hardware doesn't distinguish between genuine causal understanding and compelling fabrication.
The deeper lesson: expertise in constructing narratives is not the same as expertise in prediction. The profiler's skill is real — they are genuinely good at building coherent stories from evidence. But the skill of story-building and the skill of prediction are different capacities, and the former can substitute for the latter without anyone noticing.
6.7 Active Right Now: Where the Plausible Story Problem May Be Operating
AI capabilities narratives. The discourse around artificial intelligence is dominated by compelling narratives — "AI will transform every industry," "AI will eliminate millions of jobs," "AI will achieve superintelligence by 2030." These narratives are coherent, internally logical, and told by credentialed experts. They are also, in most cases, untested predictions dressed in the clothing of explanation. The alternative narrative test applies: for every confident prediction about AI, an equally plausible story can be constructed reaching the opposite conclusion. When the same evidence supports both "AI will revolutionize healthcare" and "AI will create dangerous new failure modes in healthcare," neither narrative is well-supported.
Political narratives in democracies. Every election produces a flood of "why they won" narratives. These narratives are generated within hours of the result and are presented with extraordinary confidence. But they are almost entirely post-hoc: the same pundits who failed to predict the outcome produce confident explanations of why it was inevitable. Research by Philip Tetlock demonstrates that political experts' explanatory confidence far exceeds their predictive accuracy — the hallmark of the plausible story problem.
Organizational "lessons learned." After any corporate failure, reorganization, or strategic pivot, a narrative is constructed explaining what happened and why. These narratives serve important institutional functions (processing the experience, communicating to stakeholders). But they are almost always underdetermined by the evidence — the same facts support multiple contradictory narratives — and the narrative selected is typically the one most convenient for the current leadership.
Trauma and therapy narratives. Some therapeutic approaches encourage clients to construct narratives explaining their current difficulties in terms of past experiences. When these narratives are grounded in documented events and supported by evidence, they can be genuinely healing. But the narrative impulse can also lead to the construction of compelling but false accounts — a problem that has been documented in the recovered memory debate, where plausible therapeutic narratives led to accusations of abuse that, in some cases, were demonstrably false.
6.8 The Interaction With Other Failure Modes
The plausible story problem rarely operates alone. It interacts with every other failure mode in this book, amplifying their effects.
Plausible stories + Authority cascade (Ch.2): When a prestigious expert tells a compelling story, the story's authority and its narrative coherence reinforce each other. The audience defers both to the expert's status and to the story's internal logic, creating a double lock on the narrative.
Plausible stories + Unfalsifiability (Ch.3): A plausible story that is also unfalsifiable is nearly impossible to dislodge. If the story explains everything and cannot be disproven, it satisfies both the need for explanation and the appearance of scientific validity.
Plausible stories + Survivorship bias (Ch.5): When you study only survivors and construct a narrative about their success, survivorship bias provides the (biased) evidence and the plausible story problem provides the (false) explanation. The combination is the engine of the entire business success literature. This interaction is so powerful and so common that it deserves a name: narrative survivorship — the construction of compelling causal stories from survivorship-biased evidence. Narrative survivorship is responsible for most of what passes for strategic wisdom in business, most of what passes for "lessons learned" in organizational failure analysis, and a distressing amount of what passes for historical understanding.
The interaction works because each component compensates for the other's weakness. Survivorship bias alone produces evidence that looks lopsided (only winners are studied) — which should trigger skepticism. But the plausible story built on that evidence explains the lopsidedness ("of course we studied the winners — they're the ones who did it right!"), making the bias invisible. Conversely, a plausible story without evidence feels speculative. But survivorship-biased evidence seems to support the story ("look, all these successful companies share these traits!"), giving the narrative an empirical veneer.
Plausible stories + Streetlight effect (Ch.4): When measurable evidence is used to construct narratives while unmeasurable evidence is ignored, the resulting story feels well-supported (it's based on data!) while being systematically incomplete (the data is only from under the streetlight).
📐 Project Checkpoint
Your Epistemic Audit — Chapter 6 Addition
Return to your audit target and ask:
What are the dominant narratives in your field? Identify the 2–3 most commonly told stories about how things work in your domain.
Apply the alternative narrative test. For each dominant narrative, construct an equally plausible alternative that uses the same evidence but reaches a different conclusion. If you can, the narrative is underdetermined.
Explanation vs. prediction. Does your field explain events better than it predicts them? If the explanatory confidence is much higher than the predictive accuracy, the plausible story problem is likely active.
Where is hindsight bias operating? After failures or surprises in your field, how quickly does a confident narrative emerge? Would anyone have offered those post-hoc explanations before the event occurred?
Add 300–500 words to your Epistemic Audit document.
6.9 Practical Considerations: Living With Stories
We cannot eliminate narrative thinking. Nor should we want to — stories are how we communicate, teach, plan, and make sense of experience. The goal is not to abandon narrative but to use it with appropriate awareness of its limitations.
Strategy 1: Always Construct the Alternative
Make it a habit: before accepting any explanatory narrative, spend two minutes constructing the best alternative narrative you can. If the alternative is roughly as plausible, lower your confidence in the original.
Strategy 2: Ask "Could This Have Been Predicted?"
When someone offers a compelling explanation of why something happened, ask: "Before this happened, would this explanation have predicted it?" If the answer is no — if the explanation was generated after the fact — it may be a plausible story rather than a genuine understanding.
Strategy 3: Seek Predictions, Not Explanations
When evaluating experts or frameworks, give more weight to their track record of prediction than to their explanations. An economist who predicted the 2008 crisis (even roughly) deserves more credibility than one who explains it eloquently after the fact.
Strategy 4: Notice When Stories Satisfy
Pay attention to the feeling of understanding that accompanies a good story. That feeling is generated by narrative coherence, not by evidence. When a story makes you feel like you understand something deeply, pause and ask: "Is this feeling of understanding backed by evidence, or just by narrative satisfaction?"
The feeling has a distinctive quality: it is a sense of things "clicking into place," of pieces fitting together, of a puzzle being solved. This feeling is real and valuable when it accompanies genuine insight — when the narrative corresponds to actual causal relationships. But the same feeling occurs when the narrative is wrong but coherent. The feeling itself cannot distinguish between a true story and a false one that fits together well.
Strategy 5: Institutionalize Predictive Track Records
In any professional context — hiring consultants, evaluating analysts, choosing between strategic frameworks — weight predictive track records more heavily than explanatory quality. An analyst who makes calibrated predictions (and is right 70% of the time about what they say they're 70% confident in) is more valuable than one who produces eloquent explanations of last quarter's results.
This is hard to implement because explanations are narratively satisfying while predictions are anxiety-inducing. The consultant who says "here's why last year happened" makes the client feel understood. The consultant who says "here's what I think will happen next quarter, and I might be wrong" makes the client feel uncertain. Human institutions systematically prefer the first over the second — and systematically get worse advice as a result.
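The calibration criterion above can be made concrete. The sketch below, in Python, checks whether an analyst's stated confidence levels match their observed hit rates and computes a Brier score (mean squared error between stated probability and outcome; lower is better). The forecast data and variable names here are hypothetical illustrations, not from the chapter.

```python
# Calibration check: does an analyst's stated confidence match their hit rate?
# Forecasts are (stated_probability, outcome) pairs, with outcome 1 = correct.
from collections import defaultdict

def brier_score(forecasts):
    """Mean squared error between stated probability and binary outcome.
    Lower is better; constant 50% guessing scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def calibration_table(forecasts):
    """Group forecasts by stated probability; report observed hit rate per level."""
    buckets = defaultdict(list)
    for p, o in forecasts:
        buckets[p].append(o)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in sorted(buckets.items())}

# Hypothetical record: an analyst who says "70% confident" and is right
# about 70% of the time at that level is well calibrated.
record = [(0.7, 1)] * 7 + [(0.7, 0)] * 3 + [(0.9, 1)] * 9 + [(0.9, 0)] * 1

print(calibration_table(record))       # {0.7: 0.7, 0.9: 0.9}
print(round(brier_score(record), 3))   # 0.15
```

An eloquent post-hoc explainer leaves no record this check can score; a forecaster does. That asymmetry is the practical reason to demand predictions up front.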
Strategy 6: Cultivate Narrative Humility
Recognize that you, too, are a narrative machine. Your own explanations of your successes, your failures, your relationships, and your field's trajectory are stories you've constructed from selected evidence. They may be true. They may also be plausible stories that would not survive the alternative narrative test. Holding your own narratives with appropriate humility is perhaps the most difficult skill this chapter asks you to develop — and it connects directly to the broader theme of epistemic humility that runs through this entire book.
✅ Best Practice: When presenting analysis in your own field, clearly distinguish between "this is what happened" (description), "this is a plausible explanation" (narrative), and "this is supported by predictive evidence" (established). The first is data. The second is story. The third is knowledge. Keeping the categories separate is a service to everyone who depends on your analysis.
6.10 Chapter Summary
Key Arguments
- The plausible story problem is the tendency to accept narratively compelling explanations as though coherence were evidence
- Narrative coherence feels identical to truth — the brain generates stories from true and false data with equal fluency
- The key diagnostic is the distinction between explanation (post-hoc) and prediction (prospective)
- The alternative narrative test reveals underdetermination: if the same evidence supports multiple contradictory stories, no single story is well-supported
- The problem operates across criminal profiling, medical diagnosis, historical analysis, evolutionary psychology, and business strategy
Key Debates
- Is all explanation ultimately narrative? If so, can we ever escape the plausible story problem?
- Is prediction the only valid test of understanding, or are there forms of understanding that don't make predictions?
- How much confidence should we place in post-hoc explanations when prospective prediction isn't possible?
Analytical Framework
- The alternative narrative test (construct the opposite story using the same evidence)
- The explanation-prediction gap (does the field explain better than it predicts?)
- The interaction map (how the plausible story problem amplifies other failure modes)
Spaced Review
Revisiting earlier material to strengthen retention.
- (From Chapter 4) How does Goodhart's Law interact with the plausible story problem? When a metric decouples from its construct, can a plausible narrative make the decoupling invisible?
- (From Chapter 5) Survivorship bias provides the biased evidence. The plausible story problem provides the false explanation built on that evidence. Trace this interaction through the business success literature.
- (From Chapter 1) The lifecycle of a wrong idea includes "counter-evidence" (Stage 4) and "resistance" (Stage 5). How does the plausible story problem help a field resist counter-evidence?
Answers
1. Yes — when a metric goes up but the construct doesn't improve, a plausible narrative ("our initiative is working!") can make the gap invisible. The story explains the metric improvement in terms of genuine progress, hiding the Goodhart decoupling.
2. Survivorship bias produces a sample of winners. The plausible story problem constructs a narrative from the winners' characteristics that appears to explain their success. Neither step is individually dishonest, but together they produce false "success recipes" that are compelling, well-evidenced (by biased evidence), and wrong.
3. Counter-evidence challenges the dominant narrative. But if the dominant narrative is plausible and coherent, the counter-evidence can be dismissed as "anomalous" or "not fitting the pattern." The story's coherence creates inertia that resists revision. The field says, in effect, "the exception proves the rule" — using the plausible story to absorb evidence that should overturn it.
What's Next
In Chapter 7: The Anchoring of First Explanations, we'll examine the sixth entry mechanism: why the first explanation a field adopts becomes the hardest to dislodge, regardless of the evidence. You'll encounter the "chemical imbalance" model of depression, the "rational actor" model in economics, and the broader phenomenon of conceptual path dependence — how initial framings constrain all subsequent thinking.
Before moving on, complete the exercises and quiz to solidify your understanding.
Chapter 6 Exercises → exercises.md
Chapter 6 Quiz → quiz.md
Case Study: The Criminal Profile That Caught the Wrong Man → case-study-01.md
Case Study: Evolutionary Just-So Stories and the Limits of Adaptation Narratives → case-study-02.md
Related Reading
Explore this topic in other books
- Propaganda → Emotional Appeals
- Propaganda → Simplification and the Big Lie
- Media Literacy → Propaganda Techniques
- Algorithmic Addiction → Outrage as Engagement