
Learning Objectives

  • Identify the eight variables that determine how quickly a wrong consensus is corrected
  • Apply the Correction Speed Model to estimate correction timelines for wrong consensuses in any field
  • Compare correction speeds across the six anchor examples and explain the variation
  • Evaluate which variables are most amenable to intervention — and which are structural constraints
  • Design a correction acceleration strategy for your own field based on the model

Chapter 22: The Speed of Truth

"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident." — Often attributed to Arthur Schopenhauer (the attribution is uncertain; the model is overly clean, as this chapter will demonstrate)

Chapter Overview

Consider the following correction timelines:

  • Peptic ulcers caused by stress/acid (Medicine): ~15 years of error (1982–1997). Correction mechanism: outsider persistence + dramatic evidence + generational replacement.
  • Dietary fat causes heart disease (Nutrition): ~50 years (1960s–2010s). Correction mechanism: slow erosion + new studies + cultural shift; still incomplete.
  • Neural networks are a dead end (Computer science): ~25 years (1969–mid-1990s). Correction mechanism: new hardware + accumulated evidence + competitive pressure.
  • O-ring erosion is acceptable (Aerospace): ~7 years of normalization, then crisis. Correction mechanism: catastrophic failure (Challenger).
  • Financial risk models are reliable (Economics/finance): ~20 years of error (late 1980s–2008). Correction mechanism: catastrophic failure (financial crisis).
  • Forensic science (bite marks, hair) is reliable (Criminal justice): ~40 years and counting. Correction mechanism: DNA evidence + Innocence Project; still incomplete.

Why fifteen years for peptic ulcers but fifty for dietary fat? Why twenty-five years for neural networks but seven for O-ring normalization? Why is forensic science still not fully corrected after four decades of mounting evidence?

The variation is enormous — and it is not random. The previous five chapters have identified the mechanisms that determine correction speed: Planck's principle and its exceptions (Chapter 17), the outsider problem (Chapter 18), crisis-driven correction (Chapter 19), the revision myth (Chapter 20), and the pendulum dynamic (Chapter 21). This chapter synthesizes those mechanisms into a single predictive framework: the Correction Speed Model.

The model does not predict correction timelines with precision — that would be false precision of exactly the kind Chapter 12 warned against. What it does is identify the variables that matter and their interactions, allowing a structured assessment of how quickly a given wrong consensus is likely to correct and — more importantly — what can be done to accelerate it.

In this chapter, you will learn to:

  • Identify the eight variables that collectively determine correction speed
  • Apply the model to any wrong consensus and generate a structured assessment
  • Distinguish between variables that can be changed (acceleration levers) and those that are structural constraints
  • Design a correction acceleration strategy based on the model's diagnostics

🏃 Fast Track: If you've closely followed Chapters 17–21, the model will be largely familiar — it formalizes what those chapters established. Focus on sections 22.3 (the model), 22.4 (the comparative analysis), and 22.6 (acceleration levers).

🔬 Deep Dive: After this chapter, explore the metascience literature on correction timelines — particularly the empirical studies testing whether Planck's principle holds across fields, and the growing body of research on what determines replication and correction speed in different disciplines.


22.1 The Question That Ties Part III Together

Part III has examined how wrong ideas finally die. Each chapter identified a different mechanism:

  • Chapter 17 asked whether wrong ideas die when their champions do (Planck's principle) and found that the answer is "sometimes, but not always" — correction speed depends on identifiable variables.
  • Chapter 18 examined the outsider problem — why the people most likely to bring correct challenges are the ones least likely to be heard.
  • Chapter 19 showed that crisis is the primary driver of institutional change, because evidence alone is absorbed by the paradigm.
  • Chapter 20 revealed that fields rewrite their correction histories to look inevitable, obscuring the structural forces that delayed the correction and ensuring the same forces will delay the next one.
  • Chapter 21 demonstrated that correction itself can overshoot, producing a new error that is the mirror image of the original.

Each of these chapters told part of the story. This chapter tells the whole story by integrating them into a single framework. The central question: Given a specific wrong consensus in a specific field, what determines how long it will persist?


22.2 Building the Model: What We Already Know

In Chapter 17, we introduced a six-variable framework for correction speed. The six original variables were: evidence clarity, switching cost, defender power, external evidence, correction mode (persuasion vs. circumvention), and crisis. Chapters 18–21 have deepened and expanded that framework. Let's formalize the expanded model.

🧩 Productive Struggle

Before reading the model, try to build it yourself. Based on everything you've read in Part III, list the variables you think determine how quickly a wrong consensus will be corrected. Aim for at least six variables. Then compare your list to the model below.

Spend 5 minutes, then read on.


22.3 The Correction Speed Model: Eight Variables

The Correction Speed Model identifies eight variables that collectively determine how long a wrong consensus persists. Each variable can be assessed as LOW, MEDIUM, or HIGH for a given case, and the combination produces a structured prediction of correction timeline.

Variable 1: Evidence Clarity

How clear and unambiguous is the counter-evidence?

  • HIGH clarity (fast correction): The counter-evidence is reproducible, measurable, and difficult to reinterpret. Marshall's H. pylori experiment: drink the bacteria, get gastritis, cure it with antibiotics. The ozone hole: satellite data showing a measurable depletion, with a clear chemical mechanism.
  • LOW clarity (slow correction): The counter-evidence is statistical, requires large samples, involves confounding variables, and can be disputed on methodological grounds. The dietary fat hypothesis: decades of contradictory nutritional studies with small samples, confounding variables, and conflicting endpoints.

Evidence clarity is partly intrinsic to the subject matter (some things are easier to measure than others) and partly a function of available technology (DNA evidence transformed forensic science's capacity to detect error). It is also partly a function of how the field defines "evidence" — a field that privileges randomized controlled trials will find ambiguous evidence in domains where RCTs are impractical, while a field that accepts multiple evidence types may find clearer signals.

Variable 2: Switching Cost

How much has the field invested in the wrong answer?

  • HIGH switching cost (slow correction): Careers built, textbooks written, treatment protocols established, regulatory frameworks encoded, training programs designed around the wrong answer. The dietary fat hypothesis: entire branches of the food industry, government dietary guidelines, medical training curricula, and pharmaceutical marketing built on the low-fat consensus.
  • LOW switching cost (fast correction): The wrong answer is not deeply embedded in institutional infrastructure. The ozone hole: the science was relatively new, no major institutional investments depended on denying ozone depletion, and the industrial sector affected (CFC manufacturers) was relatively small.

Switching cost is the single strongest predictor of slow correction. When careers, industries, and institutional identities are built on the wrong answer, correction requires destroying what has been built — and the institution's self-preservation instincts resist.

Variable 3: Defender Power

How much institutional power do the defenders of the wrong consensus command?

  • HIGH defender power (slow correction): The defenders are senior, prestigious, well-funded, and connected to external power bases. The financial industry's defenders of pre-2008 risk models: connected to central banks, government agencies, and trillion-dollar institutions, with the ability to influence hiring, funding, and regulation.
  • LOW defender power (fast correction): The defenders are powerful within academia but lack external institutional leverage. Psychology's defenders of pre-replication-crisis methods: influential within the field but without the external power base that would allow them to resist reform when political and media pressure mounted.

Defender power is not just about seniority. It includes the ability to influence external institutions — governments, funding agencies, regulatory bodies, and industries — that can sustain the wrong consensus even when internal evidence turns against it.

Variable 4: Outsider Access

How permeable is the field to challenges from outsiders?

  • HIGH access (fast correction): The field has mechanisms that allow outsiders to present evidence, publish findings, and gain professional standing. Computer science during the neural network revival: a relatively meritocratic culture with open publication norms, where results (working systems) spoke louder than credentials.
  • LOW access (slow correction): The field is credentialist, hierarchical, and structured to filter evidence through insiders. Gastroenterology when Marshall and Warren presented: a hierarchical medical specialty where outsiders (an internal medicine trainee and a pathologist) could not gain access to the field's publication and conference venues.

Outsider access is determined by: publication norms (open vs. gated), conference culture (inclusive vs. exclusionary), hiring practices (credential-based vs. results-based), and the field's tolerance for heterodox ideas.

Variable 5: Alternative Availability

Is there a clear, implementable alternative to the wrong consensus?

  • HIGH availability (fast correction): A ready-made replacement framework exists. When Marshall and Warren demonstrated H. pylori as the cause of ulcers, the treatment alternative was clear: antibiotics instead of acid suppression. When pre-registration was proposed for psychology, the implementation was straightforward.
  • LOW availability (slow correction): No clear replacement exists. After the 2008 financial crisis, there was no ready-made alternative to DSGE macroeconomic models. Heterodox alternatives existed but were not developed to the point of serving the same institutional functions.

This variable is often overlooked. A wrong consensus can persist not because the evidence against it is weak, but because there is nothing to replace it with. Fields do not abandon paradigms in a vacuum — they swap one for another. If no replacement is available, even a crisis may produce only cosmetic reform (Chapter 19).

Variable 6: Crisis Probability

How likely is a visible, undeniable, attributable crisis?

  • HIGH probability (faster correction): The wrong consensus has direct, measurable consequences that will eventually produce a visible failure. Engineering errors: bridges collapse, shuttles explode, buildings fall. Medical errors: patients die in identifiable ways.
  • LOW probability (slower correction): The wrong consensus causes harm that is diffuse, delayed, or statistically distributed. Nutritional science errors: the harm (heart disease, obesity) develops over decades, in populations, with confounding factors. Forensic science errors: the harm (wrongful conviction) is experienced by individual defendants whose cases are reviewed only if DNA evidence happens to be available.

Crisis probability is partly a function of the stakes (higher stakes → more visible failures) and partly a function of the causal chain's visibility (direct causation → faster attribution).

Variable 7: Correction Mode

Does correction happen through persuasion of incumbents or circumvention through new entrants?

  • PERSUASION (usually slower): The field changes because existing practitioners are convinced by evidence. This requires overcoming sunk costs, defending against consensus enforcement, and waiting for the evidence to become overwhelming. Rare in its pure form.
  • CIRCUMVENTION (usually faster): The field changes because new practitioners — who are not invested in the old paradigm — replace the old guard. This is Planck's principle in action: "science advances one funeral at a time." In practice, most corrections involve a mix, with circumvention doing the heavy lifting and persuasion contributing at the margins.
  • CRISIS-FORCED (fastest but costliest): The field changes because an external event makes the cost of not changing intolerable. This bypasses both persuasion and circumvention by creating political and institutional pressure that overrides internal resistance.

Variable 8: Revision Resistance

How effectively does the field maintain institutional memory of its errors?

  • HIGH revision resistance (faster future corrections): The field deliberately preserves the messy history of its past corrections, maintaining awareness of how the system can fail. Aviation safety: detailed public documentation of every failure, institutional culture that values error reporting.
  • LOW revision resistance (slower future corrections): The field rewrites its history to make corrections look inevitable (the revision myth), producing complacency about current potential errors. Most academic fields fall here.

This variable determines not the speed of the current correction but the field's vulnerability to the next wrong consensus — because the revision myth (Chapter 20) feeds back into Stage 1 of the lifecycle.
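The chapter defines these variables qualitatively, but their combinatorial logic can be sketched in code. The following is an illustrative sketch only, not a quantification the chapter provides: the numeric scores, signs, and thresholds are hypothetical assumptions, chosen solely to show how LOW/MEDIUM/HIGH assessments on the variables might combine into a rough speed category. (Correction mode is handled qualitatively in the text, so it is omitted here.)

```python
# Illustrative sketch of the Correction Speed Model.
# All numeric values are hypothetical assumptions -- the chapter defines
# the variables qualitatively and warns against false precision.

# Map qualitative assessments to rough numeric scores centered on MEDIUM.
LEVELS = {"VERY LOW": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "VERY HIGH": 4}

# Sign convention follows the chapter: high evidence clarity, outsider access,
# alternative availability, crisis probability, and revision resistance push
# toward FASTER correction; high switching cost and defender power slow it.
DIRECTION = {
    "evidence_clarity": +1,
    "switching_cost": -1,
    "defender_power": -1,
    "outsider_access": +1,
    "alternative_availability": +1,
    "crisis_probability": +1,
    "revision_resistance": +1,
}

def speed_assessment(scores: dict) -> str:
    """Combine variable assessments into a rough speed category.

    Each variable contributes its deviation from MEDIUM, signed by
    whether it accelerates or brakes correction. Thresholds are arbitrary.
    """
    total = sum(DIRECTION[var] * (LEVELS[level] - 2)
                for var, level in scores.items())
    if total >= 3:
        return "fast (years)"
    if total >= -1:
        return "medium (a decade or two)"
    return "slow (decades)"

# The peptic-ulcer case, using the chapter's assessments.
ulcers = {
    "evidence_clarity": "HIGH",
    "switching_cost": "MEDIUM",
    "defender_power": "MEDIUM",
    "outsider_access": "LOW",
    "alternative_availability": "HIGH",
    "crisis_probability": "LOW",
    "revision_resistance": "LOW",
}
print(speed_assessment(ulcers))  # a medium-speed assessment, matching ~15 years
```

The point of the sketch is structural, not numerical: high evidence clarity and a ready alternative pull the ulcer case toward fast, while low outsider access and the absence of a crisis pull it back toward slow, which is exactly the tension the next section's comparative analysis works through.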

🔄 Check Your Understanding (try to answer without scrolling up)

  1. Which variable is described as the "single strongest predictor of slow correction"? Why?
  2. Why is "alternative availability" often overlooked as a variable?
  3. What is the difference between the three correction modes?

Verify

  1. Switching cost — because when careers, industries, and institutional identities are built on the wrong answer, correction requires destroying what has been built, and institutional self-preservation resists.
  2. Because we focus on whether the wrong answer is wrong (evidence clarity) rather than whether a replacement exists. But fields don't abandon paradigms into a vacuum — they swap one for another. Without a replacement, even strong evidence may produce only cosmetic reform.
  3. Persuasion: convincing existing practitioners (slow). Circumvention: replacing them with new practitioners who aren't invested (moderate). Crisis-forced: external shock that makes the cost of not changing intolerable (fastest, but costliest because the crisis inflicts damage).


22.4 Testing the Model: The Six Anchor Examples

Let's apply the Correction Speed Model to all six anchor examples, scoring each on the eight variables and comparing the predictions to the actual correction timelines.

Peptic Ulcers / H. pylori — Correction Time: ~15 years

  • Evidence clarity: HIGH. Reproducible bacterial culture, Koch's postulates satisfied, dramatic self-experiment.
  • Switching cost: MEDIUM. Treatment protocols and pharmaceutical revenue invested, but not entire field identity.
  • Defender power: MEDIUM. Senior gastroenterologists, pharmaceutical industry, but limited political power.
  • Outsider access: LOW. Hierarchical medical specialty; Marshall & Warren were outsiders.
  • Alternative availability: HIGH. Antibiotics — clear, cheap, effective replacement for acid suppression.
  • Crisis probability: LOW. No single visible crisis; harm was diffuse (patients receiving wrong treatment).
  • Correction mode: Mixed. Primarily circumvention (generational) with dramatic evidence accelerating.
  • Revision resistance: LOW. The correction story is now heavily sanitized.

Prediction: Medium-speed correction (10–20 years). Evidence clarity and alternative availability pull toward fast; low outsider access and absence of crisis pull toward slow. Actual: ~15 years. Model fits.

Dietary Fat Hypothesis — Correction Time: ~50 years (still incomplete)

  • Evidence clarity: LOW. Contradictory studies, confounding variables, methodological disputes.
  • Switching cost: VERY HIGH. Government guidelines, food industry, pharmaceutical industry, public health training.
  • Defender power: HIGH. Connected to government agencies, food industry, public health establishment.
  • Outsider access: LOW. Credentialist field; challengers dismissed as fringe.
  • Alternative availability: LOW. No single clean alternative; "it's complicated" is hard to institutionalize.
  • Crisis probability: LOW. Harm is diffuse (population-level chronic disease).
  • Correction mode: Circumvention. Slow generational replacement + cultural shift.
  • Revision resistance: LOW. Already being rewritten as "we always knew it was more complex."

Prediction: Very slow correction (40–60+ years). Every variable pulls toward slow. Actual: ~50 years and counting. Model fits.

Neural Networks — Correction Time: ~25 years

  • Evidence clarity: MEDIUM initially, HIGH later. Theory was ahead of hardware; once compute caught up, results were undeniable.
  • Switching cost: MEDIUM. Careers built on symbolic AI, but the field was relatively young.
  • Defender power: HIGH initially. Minsky and Papert were enormously prestigious.
  • Outsider access: MEDIUM-HIGH. Computer science has a relatively meritocratic publication culture.
  • Alternative availability: HIGH (eventually). Working neural network systems were the alternative — performance spoke.
  • Crisis probability: MEDIUM. No single crisis, but competitive pressure (other countries, industry).
  • Correction mode: Mixed. Circumvention (new generation) + technology-forced (hardware advances).
  • Revision resistance: LOW. History rewritten as "the AI winter was just a pause."

Prediction: Medium-slow correction (15–30 years), with technology as an acceleration lever. Actual: ~25 years. Model fits.

Challenger / Normalization of Deviance — Correction Time: ~7 years normalized, then crisis

  • Evidence clarity: HIGH. Engineering data was clear; the correlation between temperature and O-ring erosion was measurable.
  • Switching cost: MEDIUM. Schedule and contract commitments, but not paradigmatic.
  • Defender power: MEDIUM. NASA management, but subject to external oversight.
  • Outsider access: LOW. Hierarchical organization; engineers' warnings filtered by management.
  • Alternative availability: HIGH. The alternative was "don't launch in cold weather" — simple to implement.
  • Crisis probability: HIGH. Engineering systems fail visibly and catastrophically.
  • Correction mode: Crisis-forced. The shuttle exploded on live television.
  • Revision resistance: LOW. Post-Challenger reforms became the narrative; Columbia repeated the pattern.

Prediction: Fast correction once crisis strikes. Actual: Crisis at 7 years; correction cosmetic; pattern repeated with Columbia 17 years later. The model correctly identifies the high crisis probability, and the LOW revision resistance score explains why the correction was cosmetic (Chapter 19) and the same pattern recurred.

2008 Financial Crisis — Correction Time: ~20 years of error, then crisis

  • Evidence clarity: MEDIUM. Statistical models; counter-evidence was interpretable in multiple ways.
  • Switching cost: VERY HIGH. Trillion-dollar industry, government policy, economic training.
  • Defender power: VERY HIGH. Connected to governments, central banks, and the financial industry.
  • Outsider access: LOW. Economics is credentialist; dissenters were marginalized.
  • Alternative availability: LOW. No ready replacement for DSGE models.
  • Crisis probability: MEDIUM-HIGH. Financial systems fail visibly (though prior crises were absorbed).
  • Correction mode: Crisis-forced. Global financial collapse.
  • Revision resistance: LOW. The correction narrative is already being sanitized.

Prediction: Slow correction unless crisis intervenes; crisis produces regulatory but not theoretical correction (because alternative availability is low). Actual: Exactly this. Model fits well.

Forensic Science — Correction Time: ~40 years and counting

  • Evidence clarity: HIGH. DNA evidence is unambiguous in individual cases.
  • Switching cost: VERY HIGH. Legal precedent, prosecutorial culture, forensic laboratory infrastructure.
  • Defender power: VERY HIGH. Connected to prosecutors, judges, and law enforcement — the criminal justice system.
  • Outsider access: VERY LOW. The legal system is extremely resistant to external challenge; precedent reinforces itself.
  • Alternative availability: MEDIUM. DNA-based methods exist but don't replace all forensic disciplines.
  • Crisis probability: LOW. Harm is distributed across individual cases; no single visible crisis event.
  • Correction mode: Circumvention (slow). A new generation of lawyers, judges, and forensic scientists.
  • Revision resistance: LOW. The legal system presents its history as progressive.

Prediction: Very slow correction (40+ years). Defender power and outsider access are extreme barriers; no crisis mechanism to accelerate. Actual: 40+ years and still incomplete. Model fits.


22.5 Patterns in the Data

Several patterns emerge from the comparative analysis:

Pattern 1: No Single Variable Is Decisive

No single variable predicts correction speed. High evidence clarity is not sufficient if switching costs are very high (forensic science). Crisis can force change even when evidence clarity is moderate (2008 financial crisis). Low defender power can allow correction even without a crisis (psychology's replication crisis reforms). The variables interact — and the interactions are often more important than any individual variable.

Pattern 2: Alternative Availability Is the Hidden Key

Across all six cases, the availability of a clear alternative is strongly correlated with the depth (not just the speed) of correction. H. pylori had antibiotics → deep correction. Psychology had Open Science methods → deep correction. Economics had no alternative theoretical framework → shallow correction. This supports the principle that fields don't abandon paradigms; they swap them. Without a replacement, even a crisis produces only cosmetic reform.

Pattern 3: Switching Cost × Defender Power Is the Main Brake

The combination of high switching cost and high defender power is the strongest predictor of slow correction. When lots of money, careers, and institutional infrastructure are invested in the wrong answer (high switching cost), AND when the defenders are connected to external power bases that can sustain the wrong consensus independent of internal evidence (high defender power), correction is extremely slow. Dietary fat and forensic science both exhibit this pattern.

Pattern 4: Crisis Accelerates But Does Not Guarantee Depth

Crisis dramatically accelerates the timing of correction — but the depth of correction depends on alternative availability and revision resistance. The 2008 crisis forced rapid regulatory response but shallow theoretical correction because no alternative framework was ready. The Challenger crisis forced rapid procedural response but shallow cultural correction because revision resistance was low (the narrative became "we fixed it").

Pattern 5: The Revision Myth Determines Future Vulnerability

Low revision resistance — the tendency to sanitize the correction history — appears in every case. This is concerning because it means that even successful corrections make the next wrong consensus harder to detect and correct. The revision myth is the failure mode with the longest time horizon: its effects are measured not in years but in generations.

Pattern 6: Fast Corrections Are Not Necessarily Good Corrections

There is a temptation to assume that faster correction is always better. The model reveals that this is not always the case. Crisis-forced corrections are fast but often shallow (cosmetic reform). They also carry the cost of the crisis itself — the damage that forced the change. And they are vulnerable to the pendulum problem (Chapter 21): the trauma of the crisis can push the correction past the optimal point.

The ideal is not the fastest possible correction but the deepest possible correction at the lowest possible cost. This points toward a strategy of investing in acceleration levers (outsider access, alternative development, defender power reduction) before crisis arrives — lowering the crisis threshold so that smaller signals can trigger genuine correction, rather than waiting for catastrophic failure.

Pattern 7: The Model Predicts Why Some Fields Self-Correct and Others Don't

Why has aviation achieved a genuine safety culture while medicine still struggles with preventable errors? Why has psychology reformed its methods while economics has not? The model offers a structural answer: aviation has high crisis probability (planes crash visibly), high outsider access (anonymous reporting systems), high alternative availability (specific technical fixes), and high revision resistance (NTSB reports preserve the mess). Most other fields score low on multiple variables. The difference is not one of institutional virtue — it is one of structural features that can be deliberately built.

🔍 Why Does This Work?

The model uses eight variables but claims they collectively predict correction timelines across vastly different fields. Before reading the next section, consider: why should the same variables apply to medicine, computer science, military strategy, and criminal justice? What common structural features of knowledge-producing institutions make a single model applicable?

The answer is that all knowledge-producing institutions share the same fundamental architecture: they produce consensus through social processes (authority, peer review, training), they resist changes to consensus through institutional mechanisms (career investment, publication norms, hiring), and they correct through a limited set of channels (evidence, generational replacement, crisis). The variables in the model are properties of this shared architecture, not properties of any specific field's content.

This is, in a sense, the central argument of this entire book: the failure modes of human knowledge production are structural, not content-specific. The same forces that trapped gastroenterology in a wrong consensus about ulcers also trapped economics in wrong models of risk, trapped the military in wrong doctrines of warfare, and trap every field in its own version of the same patterns. The Correction Speed Model works across fields because the architecture is shared. And the architectural insight is what makes the failure modes diagnosable — and partially fixable.

{Diagram: The Correction Speed Model — Summary Visualization. A horizontal timeline labeled "Correction Speed (Fast ← → Slow)." Eight arrows push the timeline in different directions. Pushing toward FAST: high evidence clarity, low switching cost, low defender power, high outsider access, high alternative availability, high crisis probability, circumvention or crisis correction mode, high revision resistance. Pushing toward SLOW: the opposite of each. Five of the eight arrows are colored blue (acceleration levers); three are colored gray (structural constraints).

Alt-text: A horizontal bar representing correction speed from fast (left) to slow (right). Eight labeled arrows push the bar in both directions. Five arrows are colored blue to indicate they are amenable to intervention: outsider access, alternative availability, defender power, correction mode, and revision resistance. Three arrows are colored gray to indicate structural constraints: evidence clarity, switching cost, and crisis probability. The overall balance of forces determines where the bar settles on the speed spectrum.}


22.6 Acceleration Levers: What Can Be Changed?

The model's greatest practical value is not predicting correction timelines but identifying acceleration levers — variables that can be deliberately changed to speed up correction.

Not all variables are equally amenable to intervention. Some are structural constraints that are very difficult to change; others are institutional features that can be reformed.

Hard to Change (Structural Constraints)

Evidence clarity is partly determined by the subject matter. Some things are harder to measure than others. Nutritional science will always have lower evidence clarity than engineering, because the causal chains are longer, the confounding variables are more numerous, and the outcomes are more delayed. Technology can sometimes improve evidence clarity (DNA evidence for forensic science, fMRI for neuroscience), but the improvement is unpredictable.

Switching cost is determined by how deeply the wrong answer is embedded in institutional infrastructure. Switching costs can be reduced gradually — by encouraging intellectual diversification, by funding heterodox research, by building institutional structures that don't depend on any single paradigm — but this is a long-term project.

Crisis probability is determined by the nature of the field. Engineering fields have inherently higher crisis probability than social sciences because engineering failures are visible and catastrophic. This cannot be changed — and should not be, because manufacturing crises is neither ethical nor effective.

Amenable to Intervention (Acceleration Levers)

Outsider access can be deliberately increased through institutional reform. Open publication norms, inclusive conference policies, hiring practices that value results over credentials, funding mechanisms that support heterodox research — these are all concrete, implementable changes that lower the barriers outsiders face.

Alternative availability can be increased by funding the development of alternative frameworks before the current paradigm enters crisis. This is one of the most cost-effective investments a field can make: having a ready alternative dramatically increases the depth of correction when the crisis arrives. The psychology community's development of Open Science methods before the replication crisis was fully recognized is a model for this.

Defender power can be reduced — not by attacking individual defenders, but by reducing the structural features that amplify their influence. Term limits on journal editorships, rotating membership on funding panels, blinded review processes, and mechanisms for anonymous dissent all reduce the ability of any individual or group to enforce consensus through institutional power.

Correction mode can be shifted toward circumvention by supporting the careers of people who challenge orthodoxy — providing institutional protection, creating alternative career paths, and ensuring that being right too early doesn't destroy the people who will be needed when the correction arrives. Chapter 18's analysis of what separates surviving from destroyed outsiders provides the blueprint.

Revision resistance can be increased by deliberately preserving the messy history of past corrections — through institutional practices modeled on aviation safety culture. Documenting not just what the field got wrong, but how the field resisted correction, what it cost, and what structural features allowed the error to persist.

A Worked Example: Applying Acceleration Levers

Consider a hypothetical researcher who has completed the Correction Speed Model analysis for their field — let's say nutritional science — and found the following profile: low evidence clarity, very high switching cost, high defender power, low outsider access, low alternative availability. The model predicts very slow correction. What can be done?

Step 1: Identify the most tractable lever. Evidence clarity is hard to change (nutritional science is inherently complex). Switching cost is enormous and structural. But outsider access and alternative availability are both amenable to intervention.

Step 2: Design specific interventions. For outsider access: establish interdisciplinary grants that fund researchers from adjacent fields (biostatistics, metabolic science, behavioral economics) to study nutritional questions using their own methods. For alternative availability: fund the development of precision nutrition frameworks (personalized dietary recommendations based on individual metabolic data) as a concrete alternative to population-level dietary guidelines.

Step 3: Build institutional support. Identify allies: younger researchers frustrated with the field's methodological limitations. Identify institutional vehicles: new journals, conferences, or funding streams that are not controlled by the defenders of the current consensus.

Step 4: Reduce defender power through structural reform. Advocate for rotating membership on dietary guideline committees, mandatory conflict-of-interest disclosure for food industry funding, and blinded review of nutritional studies.

No single intervention will transform the field's correction speed. But the cumulative effect of multiple interventions — each targeting a different acceleration lever — can meaningfully reduce the correction timeline. The goal is not to force a specific conclusion but to create the structural conditions under which evidence can be evaluated more fairly and alternatives can develop more freely.
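The four steps above can be sketched as a toy scoring heuristic. The chapter's model is qualitative, so everything numeric here is an invented assumption: the L/M/H point values, the timeline bands, and the filled-in scores for the three variables the worked example leaves unstated (crisis probability, correction mode, revision resistance) are illustrative only, not part of the book's framework.

```python
# Illustrative-only sketch of the Correction Speed Model.
# The chapter gives no quantitative formula; the point values, timeline
# bands, and assumed scores below are invented for illustration.

SCORE = {"L": 0, "M": 1, "H": 2}

# Variables where a HIGH score speeds correction...
ACCELERANTS = ["evidence_clarity", "outsider_access",
               "alternative_availability", "crisis_probability",
               "correction_mode", "revision_resistance"]
# ...and variables where a HIGH score slows it.
BRAKES = ["switching_cost", "defender_power"]

# The five acceleration levers (amenable to intervention).
LEVERS = {"outsider_access", "alternative_availability",
          "defender_power", "correction_mode", "revision_resistance"}

def correction_estimate(profile):
    """Map an L/M/H profile to a crude timeline band and tractable levers."""
    speed = (sum(SCORE[profile[v]] for v in ACCELERANTS)
             - sum(SCORE[profile[v]] for v in BRAKES))
    # Crude bands matching the chapter's 5-/20-/50-year framing.
    if speed >= 7:
        band = "~5-year correction"
    elif speed >= 3:
        band = "~20-year correction"
    else:
        band = "~50-year correction"
    # A lever is "tractable" when it still has room to move in the
    # correction-speeding direction.
    tractable = [v for v in LEVERS
                 if (v in BRAKES and profile[v] != "L")
                 or (v in ACCELERANTS and profile[v] != "H")]
    return band, tractable

# The nutritional-science profile from the worked example; the last three
# scores are assumed, since the example does not state them.
nutrition = {"evidence_clarity": "L", "switching_cost": "H",
             "defender_power": "H", "outsider_access": "L",
             "alternative_availability": "L", "crisis_probability": "M",
             "correction_mode": "M", "revision_resistance": "L"}

band, levers = correction_estimate(nutrition)
print(band)             # the worked example predicts a very slow correction
print(sorted(levers))
```

Under these assumed weights, the nutritional-science profile lands in the slowest band, and all five levers come back as tractable, which matches the worked example's conclusion that only cumulative intervention across multiple levers can shorten the timeline.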

🪞 Learning Check-In

Pause and reflect:

  • Which variable in the Correction Speed Model surprised you most? Why?
  • Can you apply the model to your own field without looking back at the chapter?
  • What is the most important concept from Part III for your Epistemic Audit?

🔄 Check Your Understanding (try to answer without scrolling up)

  1. Which variables are structural constraints (hard to change) and which are acceleration levers (amenable to intervention)?
  2. Why is alternative availability described as "one of the most cost-effective investments a field can make"?

Verify

  1. Hard to change: evidence clarity, switching cost, crisis probability. Amenable to intervention: outsider access, alternative availability, defender power, correction mode, revision resistance.
  2. Because having a ready alternative dramatically increases the depth of correction when crisis arrives. Without an alternative, even a severe crisis produces only cosmetic reform. With an alternative, the same crisis can produce genuine paradigm change. The investment in developing alternatives pays off precisely when it is most needed — during the correction window that crisis opens.


📐 Project Checkpoint

Epistemic Audit — Chapter 22 Addition: Correction Speed Assessment

This is the capstone assessment for Part III. Apply the full Correction Speed Model to your field:

22A. Variable Scoring. Score your field's current consensus on all eight variables:

| Variable | Your Field's Score (L/M/H) | Evidence/Reasoning |
| --- | --- | --- |
| Evidence clarity | | |
| Switching cost | | |
| Defender power | | |
| Outsider access | | |
| Alternative availability | | |
| Crisis probability | | |
| Correction mode | | |
| Revision resistance | | |

22B. Correction Timeline Estimate. Based on your scoring, estimate the likely correction timeline for your field's most significant potential error (identified in earlier chapters of the audit). Is it a 5-year correction, a 20-year correction, or a 50-year correction?

22C. Acceleration Strategy. For each of the five acceleration levers (outsider access, alternative availability, defender power, correction mode, revision resistance), propose one specific, concrete intervention that could accelerate correction in your field. For each intervention, identify the barriers to implementation and the institutional allies who might support it.

22D. Comparative Analysis. Which of the six anchor examples in this chapter most closely resembles your field's situation? What does that comparison predict about the likely course of correction?

This is the most important assessment in the Epistemic Audit so far. The earlier chapters asked you to identify what might be wrong. This chapter asks you to estimate how long it will take to fix — and what you can do about it.


22.7 The Limits of the Model

Every model simplifies. This one is no exception. Three important limitations, plus a reflexive fourth:

Limitation 1: The model assumes the error exists. The model predicts how quickly a wrong consensus will be corrected — but it does not diagnose whether a given consensus is wrong. Applying the model to a consensus that is actually correct would produce a prediction for a correction that should not (and will not) happen. The model is a tool for analyzing the speed of correction, not the correctness of the consensus.

Limitation 2: The variables interact in ways the model doesn't fully capture. The model presents eight variables as if they were independent, but they are not. High defender power can cause low outsider access (defenders use their power to exclude challengers). Crisis can change alternative availability (the pressure of crisis funds the development of alternatives that didn't exist before). The interactions are complex and case-specific.

Limitation 3: The model does not account for luck. Marshall's decision to drink H. pylori was not predictable from any variable in the model. The timing of the Berlin Wall's fall shaped the post-Cold War military pendulum in ways that no model could have anticipated. The model identifies the structural forces that shape correction — but individual decisions, accidents, and contingencies can accelerate or delay correction in ways that are inherently unpredictable.

Limitation 4: The model is itself subject to the failure modes it describes. This book argues that all knowledge frameworks are subject to authority cascades, sunk costs, and consensus enforcement. The Correction Speed Model is no exception. If it gains influence, it will acquire defenders, develop institutional inertia, and resist its own correction. This is Theme 9 — every correction mechanism can become a source of error — applied to the model itself. The appropriate response is to treat the model as a useful tool that should be tested, refined, and eventually replaced, not as a final answer.

These limitations do not invalidate the model. They constrain its use. The Correction Speed Model is a diagnostic tool, not a crystal ball. It answers: "What structural forces are shaping the correction timeline in this field?" not "Exactly when will the correction happen?"


22.8 The View From 30,000 Feet: What Part III Tells Us

Let's step back and capture the arc of Part III as a whole.

Part I asked: How do wrong ideas get in? Answer: Through authority cascades, unfalsifiable structures, measurement fixation, survivorship bias, narrative seduction, conceptual anchoring, and cross-domain importing.

Part II asked: How do wrong ideas stay? Answer: Through sunk costs, replication failure, incentive misalignment, false precision, expert blind spots, consensus enforcement, complexity reduction, and zombie resilience.

Part III asked: How do wrong ideas finally die? Answer: Through a combination of generational replacement (Planck's principle), outsider challenge (the outsider problem), crisis-driven correction (the primary mechanism), and the ongoing dynamics of historical revision (the revision myth) and pendulum overcorrection.

And the synthesis: How fast do they die? Answer: It depends on eight identifiable variables — evidence clarity, switching cost, defender power, outsider access, alternative availability, crisis probability, correction mode, and revision resistance — which interact to produce correction timelines ranging from years to generations. Five of these variables can be deliberately changed to accelerate correction.

The view from 30,000 feet is both sobering and hopeful. Sobering: the default mechanism of institutional change is crisis-driven, and its cost is measured in human suffering. Hopeful: the structural forces are identifiable, which means they are partially fixable. Part IV (Field Autopsies) will first demonstrate the framework in action across eight specific disciplines; Part V (The Toolkit) will then offer specific, implementable strategies.


22.9 Chapter Summary

Key Concepts

  • Correction Speed Model: An eight-variable framework for predicting how quickly a wrong consensus will be corrected: evidence clarity, switching cost, defender power, outsider access, alternative availability, crisis probability, correction mode, and revision resistance
  • Acceleration levers: The five variables amenable to deliberate intervention: outsider access, alternative availability, defender power, correction mode, and revision resistance
  • Structural constraints: The three variables that are difficult to change through intervention: evidence clarity, switching cost, and crisis probability
  • Alternative availability as hidden key: Fields don't abandon paradigms; they swap them. Without a ready alternative, even a crisis produces only cosmetic reform

Key Findings from the Comparative Analysis

  • No single variable is decisive — the variables interact
  • Switching cost × defender power is the main brake on correction
  • Alternative availability determines the depth of correction; crisis determines the timing
  • The revision myth (low revision resistance) is universal and determines future vulnerability
  • The same eight variables apply across vastly different fields because all knowledge-producing institutions share the same fundamental architecture

Key Tensions

  • The model identifies what can be changed, but changing it requires the same institutional courage that the model shows is systematically punished
  • The most effective interventions (increasing outsider access, reducing defender power) are the ones most strongly resisted by the people who benefit from the current structure
  • The model can predict correction timelines but not whether the current consensus is actually wrong — applying it requires independent evidence assessment

Spaced Review

Revisiting earlier material to strengthen retention.

  1. (From Chapter 17 — Planck's Principle) The original correction speed framework in Chapter 17 had six variables. This chapter expanded it to eight. What two variables were added, and why are they important enough to warrant inclusion?

  2. (From Chapter 19 — Crisis and Correction) The comparative analysis found that crisis accelerates the timing of correction but not necessarily its depth. Explain this finding using the concepts of genuine correction, cosmetic correction, and wasted crisis from Chapter 19.

  3. (From Chapter 20 — The Revision Myth) The model includes "revision resistance" as a variable. Why does revision resistance affect future correction speed rather than current correction speed? How does this connect to the feedback loop described in Chapter 20?

  4. (From Chapter 21 — When Correction Overcorrects) The model does not include a variable for "overcorrection risk." Should it? Argue for and against adding a ninth variable that captures the probability that the correction will overshoot.

Answers

  1. The two added variables are "outsider access" (from Chapter 18's analysis of the outsider problem — how permeable the field is to challenges from non-insiders) and "revision resistance" (from Chapter 20's analysis of the revision myth — how effectively the field preserves institutional memory of its errors). Both are important because they capture mechanisms that Chapter 17's original framework underweighted: the systematic exclusion of correct challengers (outsider access) and the feedback loop by which sanitized history makes the next correction slower (revision resistance).
  2. Crisis creates political and institutional pressure that forces a response — this is why it accelerates *timing*. But the *depth* of that response depends on alternative availability: if a replacement paradigm is ready, the crisis can trigger a genuine swap (deep correction). If no replacement exists, the crisis triggers only procedural reforms that leave the paradigm intact (cosmetic correction). If the attribution is contested, the crisis may not trigger even cosmetic reform (wasted crisis). Crisis is a trigger, not a guarantee.
  3. Revision resistance does not determine how quickly the *current* wrong consensus corrects — that depends on the other seven variables. It determines how quickly the *next* wrong consensus will be identified and challenged, because low revision resistance (sanitized history) creates the illusion of self-correction, making the field complacent about current potential errors. This is the feedback loop from Chapter 20: Stage 7 (revision) feeds back into Stage 1 (introduction of the next wrong idea).
  4. For inclusion: overcorrection risk captures a real phenomenon (Chapter 21) that affects the *quality* of correction, not just its speed. High overcorrection risk means the field will correct but land at the wrong point. Against inclusion: the model is designed to predict correction *speed*, not correction *quality*. Overcorrection is a feature of the correction process itself, not a variable that determines when correction happens. Including it would change the model's scope from "how fast?" to "how well?" — a valid question, but a different one. The stronger argument is probably to keep the model focused on speed and use the Chapter 21 framework separately for assessing overcorrection risk.

What's Next

Part III is now complete. You have a comprehensive framework for understanding how wrong ideas die: the mechanisms that drive correction (Planck's principle, outsider challenge, crisis), the mechanisms that distort correction (the revision myth, the pendulum problem), and a predictive model that integrates them all.

In Part IV: Field Autopsies, we will apply everything from Parts I–III to eight specific fields — medicine, economics, psychology, nutrition, criminal justice, military strategy, technology, and education — conducting deep diagnostic examinations of each field's complete history of error, correction, and ongoing vulnerability. These are the chapters that make this book definitive rather than theoretical.

Before moving on, complete the exercises and quiz to solidify your understanding.


Chapter 22 Exercises → exercises.md

Chapter 22 Quiz → quiz.md

Case Study: Why Forensic Science Corrects So Slowly → case-study-01.md

Case Study: The Ozone Hole — A Fast Correction and Why → case-study-02.md