
Learning Objectives

  • State Planck's principle and evaluate the empirical evidence for and against it
  • Identify the conditions under which Planck's principle holds (generational replacement is required) and the conditions under which it fails (correction happens within a generation)
  • Build an initial framework for predicting correction speed based on structural variables
  • Analyze why some corrections (ozone hole, plate tectonics acceptance) were fast while others (ulcer bacteria, dietary fat) were slow
  • Begin Part III of the Epistemic Audit: the correction assessment

Chapter 17: Planck's Principle and Its Exceptions

"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." — Max Planck (often paraphrased as "Science advances one funeral at a time")

Chapter Overview

Max Planck — the physicist who launched quantum mechanics — offered one of the most cited and most pessimistic observations about how knowledge changes. His claim, distilled to its essence: wrong ideas don't die because their defenders are persuaded. They die because their defenders die.

If Planck is right, the implications are bleak. It means that the persistence engine described in Part II is essentially unbreakable from within — that no amount of evidence, no quality of argument, no force of reason can change the minds of practitioners invested in a wrong consensus. It means that correction is not a matter of science but of demographics — a waiting game in which the correctness of the new idea is irrelevant to the timeline; only the age distribution of the old idea's defenders matters.

But is Planck right?

This chapter tests the principle against the evidence. The answer, it turns out, is: sometimes yes, sometimes no — and the conditions that determine which outcome occurs are structural, predictable, and informative.

Understanding when Planck's principle holds and when it fails is the key to understanding how correction actually works — and it is the foundation for everything in Part III.

Part III marks a shift in the book's trajectory. Parts I and II were diagnostic — identifying how wrong ideas enter and why they persist. Part III is about dynamics — how the entry and persistence mechanisms are eventually overcome, and what determines the timeline. If Parts I and II are the disease model, Part III is the recovery model. The disease is structural, so the recovery must also be structural — and understanding Planck's principle is the starting point for understanding what structural recovery looks like.

In this chapter, you will learn to:

  • Evaluate Planck's principle against historical evidence
  • Identify the structural conditions that determine whether correction requires generational replacement
  • Build an initial framework for predicting correction speed
  • Analyze specific cases of fast and slow correction
  • Begin Part III of the Epistemic Audit: assessing your field's correction capacity

🏃 Fast Track: If you're familiar with Planck's principle as a concept, start at section 17.3 (When Planck Is Wrong) for the cases that break the rule, and section 17.4 for the prediction framework.

🔬 Deep Dive: After this chapter, explore the empirical tests of Planck's principle by Azoulay, Fons-Rosen, and Graff Zivin (2019), who used large-scale data on scientist deaths to test whether fields actually change faster after prominent researchers die.


17.1 Testing the Principle: When Planck Is Right

Planck's principle is not merely a pessimistic quip. It is an empirical claim that can be tested. In several well-documented cases, the evidence supports it: correction occurred primarily through generational replacement rather than persuasion.

Continental Drift: 50 Years of Waiting

Alfred Wegener proposed continental drift in 1912. The evidence — matching coastlines, identical fossils on separated continents, geological continuity across oceans — was substantial from the beginning. The opposition — led by prestigious geologists who argued that no known mechanism could move continents — maintained the wrong consensus for roughly fifty years.

The correction did not occur because the opponents were persuaded. It occurred because: (a) the discovery of seafloor spreading (1960s) provided the missing mechanism, and (b) a new generation of geologists — who had not built careers on fixed continents — evaluated the accumulated evidence with fresh eyes. The combination of new evidence and new personnel produced the rapid acceptance of plate tectonics in the late 1960s.

Planck's principle operated: the old guard did not change their minds. A new generation, armed with new evidence, replaced them.

Peptic Ulcers: Two Decades of Resistance

Marshall and Warren published their H. pylori findings in 1983. The gastroenterology establishment resisted for approximately 15-20 years. The 1994 NIH Consensus Conference — which formally endorsed the bacterial cause — marked the institutional turning point, but full clinical adoption took several more years.

Analysis of the transition shows a clear generational pattern: younger gastroenterologists adopted the bacterial model more readily than older ones. The older generation — with careers, textbooks, and identities invested in the acid model — largely maintained their position until retirement. The correction was driven primarily by new entrants to the field, not by conversion of existing practitioners.

Planck's principle operated: the evidence eventually became overwhelming, but the timeline was determined primarily by the rate at which new practitioners replaced old ones.

The peptic ulcer case is particularly instructive because we can identify the moment when the evidence became sufficient to warrant change (roughly 1985-1990, after multiple replication studies confirmed the H. pylori-ulcer link) and track how long it took for that evidence to translate into actual practice change (roughly 1994-2000, when antibiotic treatment became standard). The gap — approximately 5-15 years between sufficient evidence and practice change — represents the time required for generational replacement in the portion of the profession that couldn't be persuaded.

Importantly, the generation that adopted H. pylori didn't do so because they were braver or more open-minded than the generation that resisted. They adopted it because they had nothing invested in the acid model. A resident who started training in 1990 learned about both the acid model and the bacterial model. By 1995, the bacterial model was clearly better supported. The resident adopted it because it was the best available framework — with zero switching cost. The 30-year veteran who had performed vagotomies and prescribed acid-suppression drugs for their entire career faced a switching cost that the evidence alone couldn't overcome.

A More Nuanced Pattern: Partial Planck

In many cases, Planck's principle operates partially: some defenders are persuaded by the evidence (partial correction within the generation), while others maintain their position until retirement (complete correction only through replacement).

The peptic ulcer case shows this partial pattern clearly. The 1994 NIH Consensus Conference represented a tipping point: enough evidence had accumulated, and enough of the field's younger members had adopted the bacterial model, that the institutional position changed. But individual practitioners — especially those late in their careers — continued to prescribe acid-suppression therapy as primary treatment for years after the institutional position shifted. The institutional correction was faster than the individual correction, because the institution could change its formal position while individuals maintained their personal practices.

This partial pattern suggests that Planck's principle describes the last stage of correction — the tail end of the process, where the final holdouts are replaced by retirement — rather than the entire process. The initial stages of correction involve evidence accumulation, institutional tipping points, and partial persuasion. Planck's principle kicks in for the final, most resistant portion of the community.

The Empirical Test: Azoulay et al. (2019)

A landmark 2019 study by Pierre Azoulay, Christian Fons-Rosen, and Joshua Graff Zivin provided the first large-scale empirical test of Planck's principle. They examined what happens to scientific fields after the death of a prominent researcher — specifically, whether new ideas enter the field more easily after the "intellectual gatekeeper" is removed.

Their findings:

  • After the death of a prominent scientist, there is a measurable increase in publications by scientists who were not previously part of the deceased's network — suggesting that the prominent scientist's presence had been suppressing new entrants and new ideas
  • The effect is strongest when the deceased scientist had been particularly influential — suggesting that it is specifically the authority of the gatekeeper that prevents new ideas
  • The new publications that appear after the gatekeeper's death tend to come from outside the field — confirming the outsider advantage documented in Chapter 13 (Einstellung)

This study provides the strongest quantitative evidence for Planck's principle: the death of prominent scientists does open fields to new ideas, suggesting that the presence of those scientists was indeed suppressing correction.

The study also revealed a nuance that complicates the simple "funeral" interpretation: the new publications that appeared after the gatekeeper's death were not primarily from the gatekeeper's former collaborators or students (who might have been freed from deference). They were from outsiders — researchers who had been working in adjacent fields and entered the gatekeeper's domain after the barrier was removed. This suggests that Planck's principle operates not just through deference dynamics (people deferring to the authority while they're alive) but through access dynamics (outsiders being blocked from entering the field entirely).

The access-blocking mechanism is particularly consequential. It means that Planck's principle involves not just the suppression of ideas (existing researchers can't voice contrary findings) but the suppression of people (potential contributors can't enter the field). The total cost of a gatekeeper's authority is not just the ideas that aren't voiced — it is the researchers who never join, the methods that are never imported, and the perspectives that are never represented. The cost is invisible until the gatekeeper's removal reveals the suppressed demand.

📜 Historical Context: Planck himself experienced both sides of the principle. As a young physicist, he proposed quantum theory — which was initially resisted by the classical physics establishment. As an older physicist, he became part of the establishment that initially resisted some implications of quantum mechanics (particularly the probabilistic interpretation). His own career arc — from revolutionary to conservative — illustrates how Planck's principle operates: the same individual can be the correction in one era and the resistance in another, because the persistence engine is structural, not personal. The young Planck challenged the old paradigm; the old Planck defended the paradigm he had created. The same person, the same intelligence, the same integrity — but different structural positions.

🔄 Check Your Understanding (try to answer without scrolling up)

  1. What does Planck's principle claim about how scientific change occurs?
  2. What did the Azoulay et al. (2019) study find about the effect of prominent scientists' deaths on their fields?

Verify

  1. That new scientific truths triumph not by persuading opponents but by outlasting them — the opponents die and a new generation grows up familiar with the new truth.
  2. That after a prominent scientist dies, there is a measurable increase in publications by outsiders to the field — suggesting the prominent scientist's presence was suppressing new ideas and new entrants. The effect is strongest for the most influential scientists.


17.2 The Mechanism: Why Persuasion Fails

Planck's principle holds in many cases because the persistence engine (Part II) makes persuasion structurally impossible for invested practitioners. The mechanisms are now familiar:

  • Sunk cost (Ch.9): The career, identity, and institutional investments in the old paradigm are too large to abandon based on evidence alone
  • Einstellung (Ch.13): Expertise in the old paradigm prevents practitioners from seeing the evidence for the new one — they evaluate the new evidence using the old paradigm's criteria and find it wanting
  • Consensus enforcement (Ch.14): Even practitioners who privately doubt the consensus face career risk if they express their doubts publicly
  • Complexity hiding (Ch.15): The new paradigm is often more complex than the old one, making it harder to communicate, teach, and implement

These mechanisms don't just slow persuasion — they make it structurally impossible for many practitioners. The gastroenterologist who has spent 30 years treating ulcers with acid-suppression drugs cannot be persuaded by evidence for H. pylori, because persuasion would require simultaneously abandoning their career investment, overcoming their trained patterns, accepting social risk, and embracing a more complex model. Each barrier alone might be surmountable. Together, they are insurmountable for most individuals.

Generational replacement bypasses all of these barriers. New practitioners arrive without the sunk cost, without the Einstellung, without the enforcement pressure, and without the commitment to the old simplification. They evaluate the evidence on its merits — and adopt the better framework because they have nothing to lose by doing so.

💡 Intuition: Planck's principle is the demographic consequence of the persistence engine. If the engine makes it impossible for invested practitioners to change their minds, then the only path to correction is replacing the practitioners. The waiting time is determined by the practitioners' career length, not by the evidence's strength.


17.3 When Planck Is Wrong: Cases of Fast Correction

Planck's principle is not universal. In several well-documented cases, correction occurred within a generation — sometimes remarkably quickly — without requiring the old guard to retire or die. Understanding these exceptions is essential for building a prediction framework.

The Ozone Hole: Fast Consensus Formation

In 1985, scientists at the British Antarctic Survey published evidence of severe ozone depletion over Antarctica — the "ozone hole." The finding was unexpected and dramatic: ozone levels had dropped by roughly 40% in the Antarctic spring.

The response was fast. Within two years of the publication, the international community had negotiated and signed the Montreal Protocol (1987) — a binding agreement to phase out ozone-depleting substances. Within a decade, the scientific consensus was fully formed and the policy response was underway.

The speed of the ozone response is remarkable when compared to other environmental issues. Climate change, which has a comparable (or larger) evidence base, has produced policy responses that are decades slower. The comparison reveals that the science is not the bottleneck — the structural conditions are.

Why was this correction so fast? The structural conditions were favorable:

  • The evidence was unambiguous. Satellite data confirmed the ozone hole independently. The measurements were precise, the depletion was dramatic, and the mechanism (chlorofluorocarbons destroying ozone) was well-understood from laboratory chemistry.
  • The cost of being wrong was existential. Ozone depletion threatened skin cancer rates, crop yields, and marine ecosystems worldwide. The stakes were high enough to override the persistence engine.
  • The alternative was available and affordable. CFC alternatives existed and could be deployed without massive economic disruption. The switching cost for industry was manageable.
  • The defenders were weak. The CFC industry had commercial interests in maintaining production, but the evidence was so dramatic and the public concern so intense that the industry's resistance was overwhelmed.

Machine Learning Revival: Reversing a 30-Year Exile

Neural networks were suppressed by the Minsky-Papert authority cascade (Chapter 2) for approximately 30 years. But the revival — when it came — was remarkably rapid. Within roughly a decade (2006-2016), neural networks went from a marginal, underfunded research area to the dominant paradigm in AI.

The structural conditions:

  • Performance evidence was overwhelming. Deep learning achieved dramatically superior results on benchmark tasks (image recognition, speech recognition, language translation) that could be directly compared to previous approaches. The evidence was not theoretical — it was measurable, repeatable, and quantitative.
  • The technology was demonstrable. Unlike many paradigm challenges, deep learning's advantages could be shown rather than argued. A neural network that beat the previous state of the art on a standardized benchmark provided evidence that no amount of theoretical objection could refute.
  • New computing resources enabled what was previously impossible. GPUs provided the computational power that was missing in the 1970s-1990s. The idea hadn't changed; the infrastructure to implement it had become available.
  • The old guard was partially retired. By the 2000s, the strongest opponents of neural networks (including Minsky, who died in 2016) were less active, and a younger generation of AI researchers was more open to the approach.
  • Funding found alternative channels. When traditional academic AI funding agencies wouldn't fund neural network research, industry (Google, Facebook, Baidu) stepped in — providing an alternative funding pathway that bypassed the traditional gatekeepers. This is a specific example of how diversified funding weakens defender power.

The machine learning case is particularly important for this book because it demonstrates the circumvention mode of correction at its most vivid. The neural network proponents didn't win the theoretical debate about whether neural networks were viable. They demonstrated that they worked — producing results so dramatically superior that the theoretical debate became irrelevant. The correction bypassed persuasion entirely.

🧩 Productive Struggle

Consider the major debates in your field. For each, ask: Is the correction process operating in persuasion mode (trying to convince the defenders) or circumvention mode (producing evidence that renders the defenders' objections irrelevant)? If it's operating in persuasion mode and making slow progress, could it be shifted to circumvention mode? What kind of evidence would bypass rather than confront the existing objections?

The shift from persuasion to circumvention is often the key to unlocking a stalled correction.

COVID-19 Vaccine Development: Unprecedented Speed

The development of COVID-19 vaccines represented one of the fastest scientific and industrial corrections in history: from virus identification (January 2020) to approved vaccines (December 2020) in less than a year.

The structural conditions:

  • Crisis created urgency. A global pandemic with millions of deaths created an overwhelming imperative for action that overrode normal institutional processes.
  • Massive funding eliminated resource constraints. Operation Warp Speed and similar programs invested billions, removing the usual funding bottlenecks.
  • Regulatory flexibility. Emergency use authorizations allowed deployment before the completion of normal-length trials.
  • Pre-existing platform technology. mRNA vaccine technology had been in development for years; COVID provided the first large-scale application opportunity.
  • Global collaboration. Researchers worldwide shared data and collaborated at an unprecedented pace.

📝 Note: The COVID vaccine case also demonstrates the limits of fast correction: while the scientific consensus on vaccine efficacy formed quickly, public acceptance of the consensus has been much slower and more contested — demonstrating that the persistence engine operates differently for expert consensus (which can shift fast under crisis conditions) than for public understanding (which is shaped by different persistence mechanisms including political polarization, social media dynamics, and trust deficits).


17.4 The Prediction Framework: What Determines Correction Speed?

Comparing the fast and slow correction cases reveals a set of structural variables that predict whether correction will require generational replacement (Planck) or can occur within a generation (fast correction).

Variable 1: Evidence Clarity

| Evidence Type | Correction Speed | Examples |
| --- | --- | --- |
| Unambiguous, quantitative, demonstrable | Fast | Ozone hole, deep learning benchmarks |
| Strong but interpretable | Moderate | H. pylori (convincing but required paradigm change) |
| Probabilistic, statistical, complex | Slow | Dietary fat hypothesis, climate change public acceptance |

The clearer and more demonstrable the evidence, the faster the correction. Evidence that can be shown (a benchmark result, a satellite image) corrects faster than evidence that must be argued (a statistical trend, a meta-analysis).

This variable explains one of the deepest asymmetries in correction: some fields produce clear evidence easily and some cannot. Physics produces unambiguous experimental evidence routinely — which is why physics paradigm shifts tend to be relatively fast (once the critical experiment is done). Nutrition science, political science, and education produce probabilistic, confounded evidence — which is why their wrong consensuses persist for decades. The subject matter determines the evidence type, and the evidence type determines the correction speed. This is not a failure of the field — it is a structural constraint of the domain.

📊 Real-World Application: Consider why the ozone hole consensus formed in years while the climate change consensus has taken decades (for public acceptance, at least). Both are atmospheric science problems. But the ozone hole produced dramatic, unambiguous evidence (a 40% depletion visible in satellite data over a specific region), while climate change produces gradual, probabilistic evidence (a trend in global average temperatures that requires statistical analysis to detect and that is complicated by natural variability). The evidence type determines the correction speed — even when the underlying science is equally strong.

Variable 2: Switching Cost

| Switching Cost | Correction Speed | Examples |
| --- | --- | --- |
| Low (alternative is easy to adopt) | Fast | Ozone (CFC alternatives available) |
| Moderate (retraining required) | Moderate | H. pylori (antibiotics vs. acid drugs) |
| High (entire infrastructure must change) | Slow | Dietary fat (policy, industry, training) |

The lower the switching cost, the faster the correction. If the correct alternative can be adopted without massive restructuring, the persistence engine has less to protect.

The switching cost variable explains why corrections in practice (clinical guidelines, engineering procedures, standard protocols) are often slower than corrections in theory (academic publications, textbooks, scientific consensus). A theoretical correction requires only that experts update their beliefs. A practical correction requires that institutions change their procedures, retrain their staff, revise their regulations, and often invest in new infrastructure. The theory can change overnight; the practice takes years.

This is why the NIH Consensus Conference endorsed H. pylori in 1994, but routine clinical use of antibiotics for ulcers didn't become universal until the early 2000s. The theoretical consensus shifted in an afternoon (the conference vote). The practical implementation took another 6-8 years — because physicians had to learn the new protocol, hospitals had to update their formularies, insurance companies had to approve the new treatment, and patients had to be educated about the new approach.

⚠️ Common Pitfall: Don't confuse theoretical correction with practical correction. When a scientific review concludes that a consensus is wrong, the theoretical correction has occurred. But the practical correction — changing what practitioners actually do — may take years or decades longer. Tracking only theoretical correction (publications, consensus statements) dramatically overestimates the speed of actual correction in practice.

Variable 3: Power of Incumbent Defenders

| Defender Power | Correction Speed | Examples |
| --- | --- | --- |
| Weak or absent | Fast | Ozone (CFC industry outweighed by public concern) |
| Moderate (professional community) | Moderate | Continental drift (geological establishment) |
| Strong (industry + government + professional) | Slow | Dietary fat, polygraph, tobacco |

The weaker the defenders, the faster the correction. When powerful interests benefit from the wrong consensus, the persistence engine has external reinforcement.

The defender power variable explains why corrections in commercially valuable domains (pharmaceuticals, nutrition, tobacco, finance) are systematically slower than corrections in non-commercial domains (cosmology, paleontology, pure mathematics). When the wrong consensus generates revenue, the defenders include not just intellectually invested practitioners but economically invested industries — and industries have resources (legal teams, lobbying capacity, marketing budgets, funded research programs) that individual practitioners cannot match. The tobacco industry maintained doubt about smoking-cancer links for 40+ years after the epidemiological evidence was compelling — primarily through defender power (industry funding, political influence, manufactured doubt).

Variable 4: Availability of External Evidence

| External Evidence | Correction Speed | Examples |
| --- | --- | --- |
| New technology provides independent evidence | Fast | Seafloor spreading (for drift), GPU computing (for neural networks) |
| No new independent line of evidence | Slow | Dietary fat (same epidemiological methods, same debates) |

Variable 4 may be the single most important predictor of correction speed. Corrections are faster when they are driven by new, independent lines of evidence that bypass the existing debate. Seafloor spreading bypassed the "no mechanism" objection to continental drift. GPU computing bypassed the "neural networks can't scale" objection. When the new evidence circumvents rather than confronts the existing objections, the correction doesn't require persuading the defenders — it renders their objections obsolete.

Variable 5: Correction Mode — Persuasion vs. Circumvention

This variable deserves special emphasis because it describes two fundamentally different modes of correction:

Persuasion mode: The new evidence is presented within the existing debate framework. Defenders evaluate it using their existing criteria. The correction requires changing minds.

Circumvention mode: New evidence from a different direction renders the existing debate irrelevant. The correction doesn't require changing minds — it requires the defenders to accept that their objections no longer apply.

Circumvention is dramatically faster than persuasion. When Hess proposed seafloor spreading, he didn't argue that Wegener's evidence was stronger than the geologists believed. He provided an entirely new mechanism that made the geologists' objection ("no known mechanism") obsolete. The objection evaporated without the objectors being persuaded — because the grounds for the objection no longer existed.

Similarly, when deep learning achieved benchmark-beating performance, the proponents didn't argue that Minsky and Papert's theoretical objections were wrong. They demonstrated that multi-layer networks (which Minsky and Papert had acknowledged might overcome the limitations of single-layer perceptrons) actually worked. The theoretical debate became irrelevant in the face of practical demonstration.

The practical implication: if you're trying to correct a wrong consensus, seek evidence that circumvents the defenders' objections rather than evidence that confronts them. Circumvention bypasses the persistence engine. Confrontation activates it.

Variable 6: Crisis

| Crisis | Correction Speed | Examples |
| --- | --- | --- |
| Existential crisis forces change | Very fast | COVID (pandemic), ozone (environmental threat) |
| Moderate crisis (visible failure) | Moderate | 2008 financial crisis → some economic reform |
| No crisis (slow accumulation of evidence) | Slow | Learning styles (no crisis → zombie persists) |

Crises create urgency that overrides the persistence engine. When the cost of being wrong becomes immediately, dramatically visible, the switching cost calculation changes: the cost of maintaining the wrong consensus suddenly exceeds the cost of changing.
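The six variables above can be collected into a rough scorecard. The sketch below is illustrative only — the chapter gives no numeric scale, so the 1–3 ratings, the thresholds, and the example scores are assumptions, not an instrument from the text:

```python
# Toy scorecard for the six correction-speed variables.
# Each rating: 1 = unfavorable to fast correction, 3 = favorable.
from dataclasses import dataclass

@dataclass
class CorrectionConditions:
    evidence_clarity: int     # 3 = unambiguous/demonstrable, 1 = probabilistic
    low_switching_cost: int   # 3 = alternative easy to adopt, 1 = infrastructure change
    weak_defenders: int       # 3 = weak/absent, 1 = industry + government + professional
    external_evidence: int    # 3 = new independent line exists, 1 = none
    circumvention_mode: int   # 3 = evidence bypasses objections, 1 = pure persuasion
    crisis: int               # 3 = existential crisis, 1 = no crisis

    def predict(self) -> str:
        """Map the total score onto the chapter's three outcome bands.
        Thresholds (15, 10) are arbitrary illustrative cutoffs."""
        score = (self.evidence_clarity + self.low_switching_cost
                 + self.weak_defenders + self.external_evidence
                 + self.circumvention_mode + self.crisis)
        if score >= 15:
            return "fast (within years)"
        if score >= 10:
            return "moderate (partial Planck)"
        return "slow (generational replacement)"

# Ozone hole (1985-1987): favorable on essentially every variable.
ozone = CorrectionConditions(3, 3, 3, 3, 3, 3)        # predicts "fast (within years)"
# Dietary fat: ambiguous evidence, huge switching cost, powerful defenders.
dietary_fat = CorrectionConditions(1, 1, 1, 1, 1, 1)  # predicts "slow (generational replacement)"
```

Readers working through the Project Checkpoint below can use the same six ratings as a quick sanity check on their qualitative assessment.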

📐 Project Checkpoint

Your Epistemic Audit — Chapter 17 Addition (Part III begins)

Return to your audit target and assess the correction conditions:

  1. Evidence clarity: How clear is the evidence against the wrong consensus (if one exists)? Unambiguous or probabilistic?

  2. Switching cost: How costly would it be to adopt the correct alternative? What infrastructure would need to change?

  3. Defender power: How powerful are the defenders of the current consensus? What resources do they have?

  4. External evidence: Is there a new, independent line of evidence that could circumvent the current debate?

  5. Crisis potential: Is a crisis likely that would force correction? What would trigger it?

  6. Planck assessment: Based on these five variables, does correction in your field require generational replacement, or could it occur within the current generation?

Add 300–500 words to your Epistemic Audit document.


17.5 Active Right Now: Where Is Correction in Progress?

The replication crisis correction. Psychology is approximately 10 years into a correction process. Evidence clarity is high (the replication data is unambiguous). Switching cost is moderate (new methods like pre-registration are available but require retraining). Defender power is moderate (senior researchers invested in the old methods resist). External evidence exists (registered reports produce different results, confirming the problem). No existential crisis has forced change. Prediction: partial correction within this generation, complete correction with generational replacement. Timeline: 15-25 years total.

Dietary fat consensus correction. The correction has been ongoing for approximately 15 years. Evidence clarity is moderate (the evidence is epidemiological and probabilistic, not as clear as the H. pylori evidence). Switching cost is very high (government guidelines, food industry, medical training). Defender power is high (institutional investments are enormous). External evidence is limited (no single circumventing line of evidence). No crisis. Prediction: very slow correction requiring generational replacement across multiple institutions. Timeline: 30-50 years total (from ~2010).

AI capabilities assessment correction. The current AI hype cycle involves both overestimation (AGI timeline predictions) and underestimation (dismissing current capabilities). The correction process is unusual because it's happening in real time, with new evidence arriving monthly. Evidence clarity is high for specific capabilities but low for general predictions. Switching cost is low (predictions can be updated). Defender power is moderate (both hype promoters and AI skeptics have institutional positions). Crisis potential: a major AI failure could trigger rapid correction of overestimation; a major AI success could trigger rapid correction of underestimation. Prediction: volatile, with rapid oscillation between overestimation and underestimation — the correction process itself is unstable.


17.6 The Correction Speed Formula (Informal)

We can express the relationship informally:

Correction Speed ∝ (Evidence Clarity × Crisis Urgency × Alternative Availability)
                    ÷ (Switching Cost × Defender Power × Institutional Embedding)

When the numerator is large (clear evidence, urgent crisis, available alternative) and the denominator is small (low switching cost, weak defenders, minimal embedding), correction is fast — possibly within years.

When the numerator is small (ambiguous evidence, no crisis, no clear alternative) and the denominator is large (high switching cost, powerful defenders, deep embedding), correction requires generational replacement — possibly decades.

Most real cases fall somewhere in between, with the correction speed determined by the balance of forces.
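To make the formula concrete, here is a minimal sketch in Python. The 0-1 scores below are hypothetical judgment calls, not values from the book; they are chosen only to illustrate how a fast case (ozone hole) and a slow case (dietary fat) separate under a multiplicative model.

```python
def correction_speed(evidence_clarity, crisis_urgency, alternative_availability,
                     switching_cost, defender_power, institutional_embedding):
    """Informal correction-speed score: higher means faster correction.

    All six inputs are subjective 0-1 ratings of the structural variables.
    """
    numerator = evidence_clarity * crisis_urgency * alternative_availability
    denominator = switching_cost * defender_power * institutional_embedding
    return numerator / denominator

# Hypothetical scores for two of the chapter's cases:
ozone = correction_speed(0.9, 0.9, 0.8,   # clear evidence, urgent crisis, CFC substitutes
                         0.3, 0.4, 0.3)   # low switching cost, weak defenders
fat = correction_speed(0.5, 0.2, 0.4,     # ambiguous evidence, no crisis
                       0.9, 0.8, 0.9)     # guidelines, industry, deep embedding

print(ozone > fat)  # the fast case scores far higher
```

The point of the sketch is not the numbers but the shape: because the variables multiply, one very low denominator term (or one very high one) dominates the outcome, which is why single structural conditions can make or break a correction.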

🔗 Connection: This formula connects to the lifecycle of a wrong idea (Chapter 1). Stages 4-6 (counter-evidence → resistance → crisis) describe the correction process. The speed at which the lifecycle progresses from Stage 4 to Stage 7 is determined by the structural variables in the formula. A field with clear evidence, low switching cost, and a crisis can move from counter-evidence to revision in years. A field with ambiguous evidence, high switching cost, and no crisis can stay stuck in resistance for decades.


17.7 What It Looked Like From Inside

Consider the perspective of a geophysicist in 1965 — the year before the plate tectonics revolution:

  • You have spent your career studying earth structure within the fixed-continent paradigm. Wegener's continental drift hypothesis has been a curiosity — discussed in seminars, occasionally debated — but never taken seriously by the establishment.
  • New data is arriving from ocean floor surveys. Magnetic stripe patterns on the ocean floor are suggestive but not yet conclusive. Harry Hess has proposed seafloor spreading, but it's speculative.
  • Over the next three years (1965-1968), the evidence will accumulate rapidly: magnetic anomalies, seismological data, and paleomagnetic evidence will converge on plate tectonics with devastating clarity.
  • You will face a choice: adopt the new framework (which explains the data beautifully but contradicts everything you've taught for 30 years) or resist (which preserves your paradigm but increasingly isolates you from the field's direction).

Most geophysicists in 1965-1968 chose to adopt. The correction was remarkably fast — within roughly 3-5 years, plate tectonics went from minority position to dominant paradigm. This was NOT a Planck's principle case: the existing generation changed their minds, within their professional lifetimes, because the evidence was so clear and so convergent from multiple independent sources that resistance became untenable.

What made this possible? The evidence was demonstrable (you could see the magnetic stripes in the data), independent (multiple research groups using different methods), and convergent (seismology, paleomagnetism, and ocean floor geology all pointed the same direction). The switching cost was moderate (geophysicists retrained, but the basic skills transferred), and no powerful economic interest defended the old paradigm.

This case shows that Planck's principle is not inevitable. When the structural conditions are right, an entire generation of experts can change their minds within years.

🪞 Learning Check-In

Pause and reflect:

  • Think of a time when you changed your mind about something important. Was it because someone persuaded you (evidence-driven), or because circumstances forced the change (crisis-driven)?
  • In your field, do you expect the current consensus to change within this generation, or to require generational replacement? What structural variables determine this?
  • If you had to accelerate correction in your field, which of the five variables would you try to change?


17.8 Practical Considerations: Accelerating Correction

If the five-variable framework is correct, correction can be accelerated by changing the structural variables — making the numerator larger or the denominator smaller.

The formula is not meant to be precise — it is a conceptual tool for diagnosing why a specific correction is fast or slow, and for identifying which variables to target to accelerate it. The most actionable insight is in the numerator/denominator asymmetry: you can accelerate correction by either increasing the numerator (clearer evidence, more urgent crisis, better alternatives) OR decreasing the denominator (lower switching costs, weaker defenders, less institutional embedding). The cheapest intervention — the one with the best return on investment — varies by case.
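The "cheapest intervention varies by case" point can be sketched as a simple sensitivity check: nudge each variable by a fixed increment and see which nudge speeds correction most. The scores and the `best_lever` helper below are hypothetical illustrations, not part of the book's framework.

```python
def correction_speed(v):
    num = v["evidence"] * v["crisis"] * v["alternative"]
    den = v["switching_cost"] * v["defender_power"] * v["embedding"]
    return num / den

def best_lever(v, delta=0.1):
    """Return the variable whose delta-sized change speeds correction most."""
    base = correction_speed(v)
    gains = {}
    for k in v:
        w = dict(v)
        if k in ("evidence", "crisis", "alternative"):
            w[k] = min(1.0, v[k] + delta)   # raise a numerator variable
        else:
            w[k] = max(0.05, v[k] - delta)  # lower a denominator variable
        gains[k] = correction_speed(w) / base
    return max(gains, key=gains.get)

# Hypothetical profile resembling the dietary-fat case: ambiguous
# evidence, no crisis, very high switching cost and embedding.
case = {"evidence": 0.5, "crisis": 0.2, "alternative": 0.4,
        "switching_cost": 0.9, "defender_power": 0.8, "embedding": 0.9}
print(best_lever(case))  # -> 'crisis': the lowest variable has the most leverage
```

Note the design property: under a multiplicative model, a fixed-size nudge buys the largest proportional gain on whichever variable is currently smallest. That is one way to read the chapter's advice to look for the intervention with the best return rather than pushing hardest on the variable you happen to control.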

🔗 Connection: The correction speed formula synthesizes the entire book so far. The denominator variables (switching cost, defender power, institutional embedding) are the persistence mechanisms from Part II. The numerator variables (evidence clarity, crisis, available alternatives) are the conditions that can overcome persistence. Part III, in essence, asks: when does the numerator become large enough to overcome the denominator? And what can we do to change the balance?


Make Evidence Clearer

Fund research that produces demonstrable evidence — results that can be shown, not just argued. Randomized controlled trials, head-to-head comparisons, and public demonstrations are more persuasive than statistical arguments. The ozone hole was visible in satellite data. Deep learning benchmarks were publicly verifiable. The more the evidence can be demonstrated, the faster the correction.

Lower the Switching Cost

Fund transitions. Provide retraining programs. Develop the alternative before demanding adoption. If the switching cost is high, correction will be slow regardless of the evidence. The most effective correction strategy may be making the alternative easier to adopt rather than making the evidence against the old consensus stronger.

Weaken Defender Positions

Diversify funding sources so that the defenders of the old consensus don't control the resources needed for correction. Support independent research programs. Protect dissenters from career damage. Each action weakens the persistence engine's grip.

Create or Leverage Crises

This is ethically complex — you cannot manufacture a crisis. But you can prepare for one. When a crisis does occur (a financial collapse, a replication failure, a public health emergency), the correction window opens briefly. Having the alternative ready — the evidence prepared, the framework developed, the policy proposal drafted — allows rapid adoption during the window.

Seek Circumventing Evidence (The Most Powerful Strategy)

Rather than arguing within the existing debate, seek evidence from a completely different direction that renders the debate obsolete. Seafloor spreading didn't engage with the "no mechanism" argument against continental drift — it provided an entirely new mechanism that made the argument irrelevant. The most effective evidence is the evidence that bypasses the defenders' objections rather than confronting them directly. This is the single most actionable insight from the prediction framework.

Build the Alternative Infrastructure (Long-Term)

If you cannot accelerate correction within the current generation, invest in building the alternative infrastructure that the next generation will need. Write the alternative textbooks. Develop the alternative training programs. Build the alternative assessment tools. Fund the alternative research programs. When the generational tipping point arrives — when enough new practitioners are in place — the alternative must be ready. Correction fails not only when the evidence is insufficient but also when the alternative is unprepared.

The deep learning revolution succeeded partly because researchers like Hinton, LeCun, and Bengio spent decades building the theoretical and computational infrastructure during the AI winter, when neural networks were unfunded and unfashionable. When the computational resources caught up (GPUs) and the evidence became demonstrable (benchmark results), the alternative was ready. The correction was fast because the infrastructure had been built during the slow period.

🔍 Why Does This Work?

The Planck framework works because it connects the timing of correction to structural variables rather than to the strength of the evidence. This is the book's central insight applied to dynamics: just as the entry and persistence of wrong ideas are structural (not about individual intelligence), the correction of wrong ideas is structural (not about evidence quality). The strongest evidence in the world will produce slow correction if the switching cost is high, the defenders are powerful, and no circumventing evidence exists. Mediocre evidence will produce fast correction if the switching cost is low, the defenders are weak, and a crisis creates urgency. Understanding this allows strategic intervention: instead of producing more evidence (which may have diminishing returns), produce evidence of the right type (demonstrable, circumventing) and change the structural conditions (lower switching costs, weaken defender positions).

✅ Best Practice: When working to correct a wrong consensus, ask: "Am I trying to persuade the defenders (which Planck says may be futile) or am I building an alternative infrastructure that will be adopted by the next generation?" The second strategy — building rather than arguing — is usually more effective.


17.9 Chapter Summary

Key Arguments

  • Planck's principle ("science advances one funeral at a time") is supported by empirical evidence: the death of prominent scientists does open fields to new ideas (Azoulay et al., 2019)
  • But Planck's principle is not universal: in several cases (ozone hole, plate tectonics acceptance, deep learning, COVID vaccines), correction occurred within a generation
  • Five structural variables predict correction speed: evidence clarity, switching cost, defender power, availability of external evidence, and crisis
  • Correction is fastest when clear evidence, low switching cost, weak defenders, new independent evidence, and urgent crisis converge — and slowest when the opposite conditions hold
  • Correction strategies should target these structural variables rather than attempting direct persuasion of invested defenders

Key Debates

  • Is Planck's principle becoming more or less relevant in the digital age (where information spreads faster but polarization may increase)?
  • Is crisis-driven correction ethical to pursue (preparing for and leveraging crises)?
  • Can the correction speed formula be quantified, or is it inherently qualitative?

Analytical Framework

  • The five-variable prediction framework (evidence clarity, switching cost, defender power, external evidence, crisis)
  • The informal correction speed formula
  • The distinction between persuasion-driven correction (Planck required) and circumvention-driven correction (Planck bypassed)

Spaced Review

Revisiting earlier material to strengthen retention.

  1. (From Chapter 9) The sunk cost of consensus creates switching costs that slow correction. How does Variable 2 (switching cost) in this chapter's framework connect to the five components of switching cost from Chapter 9?
  2. (From Chapter 16) Zombie ideas activate all persistence mechanisms simultaneously. Using this chapter's framework, what would it take to correct a zombie? Which variables would need to change?
  3. (From Chapter 2) The authority cascade installs wrong ideas through prestige. Planck's principle says the authority must die for the idea to change. How does the Azoulay et al. finding connect to the authority cascade mechanism?
Answers

  1. Variable 2 (switching cost) is the aggregate of the five Chapter 9 components: career investment, reputational capital, textbook infrastructure, funding commitments, and identity investment. The correction speed formula predicts that correction is slow when switching cost is high — which maps directly to the Chapter 9 analysis. Each component that is high increases the denominator of the correction speed formula.
  2. A zombie activates all mechanisms, so all variables in the denominator are high AND the numerator variables are typically low (ambiguous evidence, no crisis, no clear alternative). To correct a zombie, you would need to make the evidence dramatically clear (a decisive test), lower the switching cost (provide a replacement), weaken the defenders (diversify funding), find circumventing evidence (a new approach that bypasses the old debate), and/or wait for a crisis. This is why zombies are so hard to correct — all five variables would need to change simultaneously.
  3. The Azoulay finding confirms the authority cascade mechanism: prominent scientists' authority suppresses new ideas (cascade), and their death removes the suppression (Planck). The authority cascade creates the lock-in; Planck's principle describes how the lock-in is released. The death of the authority figure simultaneously removes the prestige source (cascade broken) and the primary defender (switching cost reduced).

What's Next

In Chapter 18: The Outsider Problem, we'll examine a painful corollary of what we've learned: why the people who are right about a wrong consensus — the dissenters, the challengers, the correction-bringers — are systematically punished before they're celebrated. You'll meet the full stories of Semmelweis, Wegener, Marshall, McClintock, Boltzmann, and Shechtman — and ask what they had in common, what separated those who survived from those who didn't, and what this tells us about how to design institutions that treat correct dissenters more humanely.

Before moving on, complete the exercises and quiz to solidify your understanding.


Chapter 17 Exercises → exercises.md

Chapter 17 Quiz → quiz.md

Case Study: The Ozone Hole — When Correction Happened Fast → case-study-01.md

Case Study: Plate Tectonics — The Paradigm Shift That Didn't Require Funerals → case-study-02.md