> "For every complex problem there is an answer that is clear, simple, and wrong."
Learning Objectives
- Define overcorrection as a systematic error distinct from the original error and from appropriate correction
- Identify the pendulum dynamic in historical and contemporary fields
- Analyze the structural forces that make overcorrection predictable after crisis-driven correction
- Evaluate whether a given reform represents calibrated correction, overcorrection, or cosmetic correction
- Apply the overcorrection diagnostic to assess whether your own field's current position is a reaction rather than an independent assessment
In This Chapter
- Chapter Overview
- 21.1 The Logic of Overcorrection
- 21.2 Four Case Studies in Overcorrection
- 21.3 The Anatomy of Overcorrection
- 21.4 Why Calibrated Correction Is So Hard
- 21.5 The Pendulum in Everyday Institutional Life
- 21.6 Active Right Now: Pendulums Currently Swinging
- 21.7 What It Looked Like From Inside: The FDA Reviewer's Dilemma
- 21.8 Toward Calibrated Correction: Is It Possible?
- 📐 Project Checkpoint
- 📐 Project Checkpoint
- 21.9 Chapter Summary
- Spaced Review
- What's Next
- Chapter 21 Exercises → exercises.md
- Chapter 21 Quiz → quiz.md
- Case Study: The Thalidomide Pendulum — From No Regulation to Over-Regulation → case-study-01.md
- Case Study: The Military Pendulum — Vietnam to Iraq → case-study-02.md
Chapter 21: When Correction Overcorrects
"For every complex problem there is an answer that is clear, simple, and wrong." — Attributed to H. L. Mencken
Chapter Overview
In 1957, a German pharmaceutical company introduced a new sedative called thalidomide. Marketed as safe and effective for treating morning sickness in pregnant women, thalidomide was sold in nearly fifty countries. It was available over the counter in Germany. It was widely prescribed in the United Kingdom, Australia, Canada, and across Europe.
By 1961, the horror was becoming clear. Thalidomide caused severe birth defects — most notoriously phocomelia, in which babies were born with shortened or absent limbs. Estimates of the total number of affected children range from 10,000 to 20,000 worldwide. Thousands more pregnancies ended in miscarriage or stillbirth. It was one of the worst pharmaceutical disasters in modern history.
The response was swift, dramatic, and — in an important sense — correct. Drug regulation was fundamentally transformed. In the United States, the Kefauver-Harris Amendment of 1962 required pharmaceutical companies to prove not just that drugs were safe but that they were effective before receiving FDA approval. Clinical trials became mandatory. Reporting requirements were expanded. The threshold for evidence required before a drug could reach patients was raised dramatically.
These reforms saved lives. They prevented future thalidomides. They established the modern framework of drug regulation that, for all its imperfections, has protected millions of patients from dangerous medications.
But the reforms also had a cost — and the cost is what this chapter is about.
The time required to bring a new drug to market increased from an average of approximately 2.5 years in the early 1960s to over 12 years by the 2000s. The cost of drug development escalated from millions to billions of dollars. Drugs that could have saved lives were delayed by years of regulatory process. During the AIDS crisis of the 1980s and 1990s, patients died waiting for treatments that were known to be effective but had not yet completed the regulatory approval process. The FDA's own reviewers estimated that delays in approving beta-blockers for heart disease may have cost tens of thousands of lives.
The thalidomide disaster produced a genuine, necessary correction. But the correction overcorrected — and the overcorrection had its own body count.
This is the pendulum problem: the pattern in which the trauma of being catastrophically wrong in one direction produces systematic error in the opposite direction. The field doesn't land on the correct answer. It swings through it, coming to rest at a new wrong position that is the mirror image of the original error.
In this chapter, you will learn to:
- Recognize the pendulum dynamic as a structural feature of correction, not just an accident
- Identify the forces that make overcorrection predictable after crisis-driven change
- Distinguish between calibrated correction and overcorrection
- Assess whether your own field's current position is an independent judgment or a reaction to past trauma
🏃 Fast Track: If you're familiar with the concept of regulatory overcorrection and its trade-offs, skim sections 21.1–21.2 and focus on sections 21.3–21.6, which build the analytical framework and extend it beyond regulation.
🔬 Deep Dive: After this chapter, read Daniel Carpenter's Reputation and Power: Organizational Image and Pharmaceutical Regulation at the FDA (2010) for the deepest analysis of how regulatory institutions balance the costs of approving dangerous drugs against the costs of delaying beneficial ones.
21.1 The Logic of Overcorrection
Why does overcorrection happen? Why don't fields simply correct to the right answer and stop there?
The answer lies in the asymmetry of error visibility. When a field is wrong and the wrong answer causes harm — patients die from thalidomide, a space shuttle explodes, a financial system collapses — the harm is visible. The victims are identifiable. The causal chain is traceable. The political and institutional pressure to prevent a recurrence is enormous.
But the costs of overcorrection are almost always invisible. The patients who die waiting for a drug that was delayed by regulatory caution are not identifiable — we don't know which specific patients would have been saved by earlier approval. The research that isn't conducted because post-crisis standards are too restrictive doesn't produce papers, so its absence is invisible. The economic growth that doesn't happen because post-crisis regulation is too conservative leaves no trace in the data.
This is a specific instance of the asymmetric cost of being wrong (Theme 8), but with a twist: the costs are asymmetric in terms of visibility, not just magnitude. The error of under-regulation produces visible victims (thalidomide babies, dead astronauts, foreclosed homeowners). The error of over-regulation produces invisible victims (patients who die waiting, research never conducted, innovations never attempted). And institutions, predictably, optimize against visible errors.
The Three Structural Forces
Three forces make overcorrection predictable:
Force 1: Trauma-Driven Epistemology. After a catastrophic failure, the institutional memory is dominated by the specific error that caused the crisis. "Never again" becomes the organizing principle of reform. But "never again" addresses only the specific failure mode that produced the crisis — it does not address the full range of possible errors, including errors in the opposite direction. The field's epistemology becomes trauma-driven: shaped by the fear of repeating the last catastrophe rather than by a balanced assessment of all possible risks.
Force 2: Political Asymmetry. After a crisis, there is intense political pressure to act — and the most visible action is to increase restrictions, requirements, and oversight. A regulator who approves a dangerous drug and patients die faces investigation, public outcry, and career destruction. A regulator who delays a beneficial drug and patients die in the interim faces no investigation, no public outcry, and no career consequences — because the patients who would have been saved are never identified. The political incentives are overwhelmingly biased toward caution, regardless of whether caution is the correct calibration.
Force 3: The Absence of a Stopping Mechanism. Correction has no natural endpoint. When evidence showed that thalidomide was dangerous, the correct response was more rigorous testing. But how much more rigorous? How many years of testing? How many patients in clinical trials? How much statistical certainty? There is no natural answer to these questions — no equilibrium that the system converges toward. In the absence of a stopping mechanism, the correction continues until some countervailing force pushes back.
🔄 Check Your Understanding (try to answer without scrolling up)
- Why are the costs of overcorrection typically invisible while the costs of undercorrection are visible?
- Name the three structural forces that make overcorrection predictable.
Verify
1. Undercorrection produces identifiable victims (specific people harmed by the error that wasn't corrected). Overcorrection produces invisible victims — people who would have benefited from something that the overcorrection prevented, but who are never identified because the counterfactual (what would have happened without the overcorrection) is unobservable.
2. Trauma-driven epistemology (reform shaped by fear of the last catastrophe), political asymmetry (visible errors are punished, invisible errors aren't), and the absence of a stopping mechanism (no natural equilibrium for how much correction is enough).
21.2 Four Case Studies in Overcorrection
Case 1: Drug Regulation After Thalidomide
We opened with this story. The correction was genuine and necessary: mandatory clinical trials, proof of efficacy, expanded safety monitoring. The overcorrection: a regulatory process so extensive that it now takes 10–15 years and costs over $1 billion to bring a new drug to market. The consequences:
- Drug lag: Multiple studies have documented that drugs approved in other countries are available to patients years before they receive FDA approval. During this lag, American patients with serious diseases cannot access treatments that are available elsewhere.
- The AIDS crisis response: In the 1980s, AIDS activists (notably the group ACT UP) protested FDA approval timelines, arguing that patients were dying while drugs sat in regulatory pipelines. Their activism led to the creation of expedited approval pathways (accelerated approval, fast track, breakthrough therapy designation) — effectively a correction to the overcorrection.
- Rare disease neglect: The cost and time required for regulatory approval make it economically unviable to develop drugs for rare diseases, which have small patient populations and therefore small potential revenues. The Orphan Drug Act (1983) was created partly to address this overcorrection.
- Risk aversion culture: The FDA's institutional culture became oriented toward avoiding the type 1 error (approving a dangerous drug) at the cost of accepting an enormous type 2 error (failing to approve a beneficial drug). Both errors kill patients, but only type 1 errors produce visible, politically damaging outcomes.
🔗 Connection: The drug regulation pendulum illustrates the precision-without-accuracy problem from Chapter 12 in a new context. Post-thalidomide regulation became very precise in its requirements — specific trial designs, specific statistical thresholds, specific documentation standards. But precision in the regulatory process does not guarantee accuracy in the regulatory outcome. A precisely calibrated 12-year approval process can be precisely wrong about the optimal balance between safety and access.
Case 2: Financial Regulation After 2008
The 2008 financial crisis produced Dodd-Frank and a raft of international regulatory reforms (Basel III, enhanced stress testing, new capital requirements). In Chapter 19, we analyzed this as a case of incomplete theoretical reform. Here we examine a different dimension: the degree to which the regulatory response may have overcorrected in specific areas.
The overcorrection concerns:
- Lending standards. Post-crisis lending standards became dramatically more restrictive. This prevented the reckless lending that had fueled the housing bubble — but it also made it significantly harder for creditworthy borrowers, particularly in minority and low-income communities, to access mortgage credit. The homeownership rate declined for nearly a decade after the crisis.
- Community banking. Dodd-Frank's compliance requirements, designed for large systemically important financial institutions, imposed disproportionate burdens on small community banks that had not contributed to the crisis. Hundreds of community banks closed, reducing access to banking in rural and underserved areas.
- Risk aversion in lending. Bank examiners, traumatized by the crisis, became extremely conservative in their assessments of loan portfolios. Banks responded by tightening lending standards beyond what the regulations required, creating a chilling effect on credit availability.
- New fragilities. Some analysts argue that post-2008 regulations, by pushing risk-taking out of regulated banks and into less-regulated "shadow banking" entities, may have created new systemic risks rather than eliminating them.
The calibration challenge: The correct level of financial regulation is not zero (the pre-2008 position was demonstrably too lax) and not maximum (the costs of excessive restriction are real). But the political dynamics after a crisis push exclusively toward "more" — and the institutional memory of the crisis ensures that "more" remains the default until the memory fades (the generational forgetting mechanism from Chapter 19).
Case 3: Psychology After the Replication Crisis
Psychology's response to the replication crisis — Open Science reforms, pre-registration, larger samples, registered reports — has been widely praised, and justifiably so. It represents one of the most genuine corrections we've examined in this book. But there are emerging concerns that the correction is producing its own distortions:
- The chilling effect on exploratory research. Pre-registration requires researchers to specify their hypotheses and analyses before collecting data. This is excellent for confirmatory research (testing specific predictions) but potentially stifling for exploratory research (discovering unexpected patterns). Some researchers argue that the pendulum has swung too far toward confirmation and away from exploration.
- Sample size escalation. The replication crisis revealed that many psychology studies used samples too small to detect real effects reliably. The response has been to demand much larger samples. But "larger" has no natural ceiling — and as required sample sizes grow, the cost and time of each study grow with them, potentially slowing the pace of discovery (see the sketch after this list).
- Methodological conservatism. In the post-crisis environment, methodological novelty is viewed with suspicion. Researchers who propose new methods or unconventional approaches face heightened skepticism — not because their methods are wrong, but because the institutional trauma of the replication crisis has made "novel" feel like "unreliable."
- Publication of null results. The pre-crisis problem was that null results (studies that found no effect) were unpublishable. The post-crisis correction has increased the publication of null results — which is valuable — but some researchers worry that the pendulum may swing toward a culture that values disconfirmation over discovery, making it harder to publish genuinely new findings.
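The escalation has a simple quantitative driver: at fixed power, the sample needed to detect an effect grows roughly as the inverse square of the effect size. A minimal sketch, assuming the standard two-sample t-test power calculation via the statsmodels library (these are textbook values, not figures from this chapter):

```python
# Sketch: required sample size per group for 80% power at alpha = 0.05,
# as the target effect size (Cohen's d) shrinks. Illustrates why "larger
# samples" has no natural ceiling: n grows roughly as 1/d^2.
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in [0.8, 0.5, 0.3, 0.2, 0.1]:  # large effect -> very small effect
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d:.1f}: ~{n:.0f} participants per group")

# Approximate output: 26, 64, 175, 393, 1571 participants per group.
# Halving the smallest effect a field is willing to chase roughly
# quadruples the cost of every study.
```

Because the relationship is quadratic, "demand bigger samples" is a correction with no built-in stopping point: each tightening of the standard multiplies costs faster than it adds rigor.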
What It Looked Like From Inside:
Consider the position of a junior psychology researcher in 2024. You entered graduate school during or after the replication crisis. Your training emphasized methodological rigor, pre-registration, large samples, and replication. These are good values. But the institutional culture around you has internalized them as the only values — and the result is that proposing an exploratory study with a novel method feels professionally risky. Not because the study is bad science, but because the institutional trauma of the crisis has made anything that resembles "old psychology" feel dangerous.
This is the Einstellung effect (Chapter 13) in a new form: the tools developed to correct one set of errors become the prison that constrains the next generation of thinking.
Case 4: Military Doctrine — The Vietnam Syndrome and Its Children
The United States military's experience in Vietnam (1955–1975) produced a profound institutional overcorrection that shaped military strategy for decades.
The original error: The Vietnam War demonstrated that military superiority (in technology, firepower, and training) does not guarantee victory against a motivated insurgency with popular support. The war's failure was attributable to multiple factors: a fundamental misunderstanding of the conflict's nature, the use of conventional military metrics (body counts) to measure success in an unconventional war, and political constraints that prevented strategic clarity.
The overcorrection — The Vietnam Syndrome: After Vietnam, the American military became profoundly risk-averse about ground combat operations, particularly in ambiguous conflicts. The Weinberger Doctrine (1984) and its successor, the Powell Doctrine (1990), established restrictive criteria for military intervention: clear objectives, overwhelming force, public support, and a defined exit strategy. These doctrines were reasonable responses to Vietnam's lessons — but they were shaped by trauma rather than analysis.
The correction to the overcorrection — The Gulf War: The 1991 Gulf War was conducted according to the Powell Doctrine: clear objectives, overwhelming force, decisive action, rapid withdrawal. Its success was interpreted as vindication of the doctrine — and as evidence that the Vietnam Syndrome had been "kicked" (George H. W. Bush's words).
The overcorrection to the overcorrection — Iraq 2003: The confidence generated by the Gulf War, combined with the political dynamics after September 11, produced a new overcorrection: the belief that American military power could achieve ambitious political objectives (regime change, democratization) in complex environments. The Iraq War demonstrated that the Gulf War model — conventional military superiority producing rapid decisive operations — was exactly the wrong framework for the occupation and nation-building that followed.
The pattern: Vietnam (error: conventional tactics in unconventional war) → Vietnam Syndrome (overcorrection: excessive caution about all ground operations) → Gulf War (correction: successful application of overwhelming force) → Iraq (overcorrection to the overcorrection: misapplication of the Gulf War model to a fundamentally different problem).
The military pendulum didn't swing once. It swung repeatedly, each time producing a new error that was the mirror image of the previous one. This is the pendulum dynamic in its purest form.
What makes the military case particularly instructive is that the military is, of all institutions, the one that has invested the most in learning from failure. After-action reviews, war colleges, doctrine development, historical analysis — the military devotes enormous resources to extracting lessons from experience. And yet the pendulum still swings. The structural forces driving overcorrection — trauma-driven epistemology, political asymmetry, the absence of a stopping mechanism — operate even in institutions that are consciously trying to avoid them. If the military can't escape the pendulum, it should not surprise us that medicine, finance, and academia can't either.
The Deeper Pattern Across All Four Cases
Step back from the individual cases and a meta-pattern emerges:
In every case, the original error was real and harmful. Thalidomide killed children. Pre-2008 financial deregulation crashed the economy. Pre-replication-crisis psychology produced unreliable science. Vietnam-era military doctrine cost lives. The correction was not just justified — it was morally necessary.
In every case, the correction was genuine. The reforms addressed real problems and produced real improvements. Drug regulation is better than it was before thalidomide. Financial regulation is better than it was before 2008. Psychological methodology is better than it was before the replication crisis. Military doctrine is more thoughtful than it was before Vietnam.
And in every case, the correction overshot — not because the reformers were wrong, but because the structural forces of overcorrection (trauma, political asymmetry, absent stopping mechanisms) pushed the correction past the optimal point. The overcorrection produced its own costs, measured in different but no less real currency: patients who died waiting, credit denied to worthy borrowers, research not conducted, military operations conducted with the wrong framework.
The lesson is not that corrections are bad. The lesson is that correction is not a destination. It is a process — one that requires ongoing calibration, ongoing attention to costs in both directions, and the ongoing willingness to adjust even the reforms that were justified and necessary.
🧩 Productive Struggle
Before reading the next section, consider this question: Is there a way to stop the pendulum? What structural feature of institutions would need to change to enable calibrated correction instead of oscillation between opposite errors?
Spend 3–5 minutes thinking about this, then read on.
21.3 The Anatomy of Overcorrection
After examining these four cases and dozens of others, a consistent pattern emerges. Overcorrection follows a predictable sequence:
The Overcorrection Cycle
Step 1: The Original Error. The field holds a wrong position. This position persists through the mechanisms documented in Parts I and II: authority cascade, sunk cost, consensus enforcement, etc.
Step 2: The Crisis. A crisis (Chapter 19) forces the field to confront the error. The crisis is sudden, visible, costly, and attributable to the original position.
Step 3: The Traumatic Correction. The field corrects — but the correction is shaped by the trauma of the crisis rather than by a balanced analysis of the full range of possible errors. The field becomes intensely focused on preventing a repetition of the specific error that caused the crisis.
Step 4: The Equal and Opposite Error. The traumatic correction overshoots. The new position is not the correct answer but the mirror image of the original error. Where the original error was too permissive, the overcorrection is too restrictive. Where the original error was too aggressive, the overcorrection is too cautious. Where the original error was too trusting, the overcorrection is too skeptical.
Step 5: The Invisible Cost. The overcorrection causes harm, but the harm is invisible (because it consists of opportunities not taken, patients not treated, research not conducted, innovations not attempted). The invisible cost accumulates until it becomes visible enough to trigger its own correction — often through its own crisis.
Step 6: The Meta-Correction (sometimes). Eventually, the costs of overcorrection become visible enough to provoke a re-correction. AIDS activists forcing expedited drug approval. Military leaders arguing for more flexible engagement criteria. Researchers arguing for the value of exploratory studies. But this meta-correction is itself at risk of overshooting — producing a return to the original error rather than arriving at calibrated balance.
{Diagram: The Pendulum Cycle — A pendulum swinging between two extremes. Left side: "Original Error (visible harm)." Right side: "Overcorrection (invisible harm)." Center: "Calibrated Correction (the target that's hardest to hit)." Arrows show the swing path, with labels at each point: "Crisis forces correction" (left to center), "Trauma drives past center" (center to right), "Invisible costs accumulate" (at right), "Meta-correction attempt" (right back toward center). A note at the bottom: "Each swing is smaller than the previous one — if the field is learning. If it isn't, the swings continue at full amplitude."
Alt-text: A pendulum diagram with a bob swinging between two positions. The left position is labeled "Original Error (visible harm)" and the right position is labeled "Overcorrection (invisible harm)." The center is labeled "Calibrated Correction (the target)" with a dotted line. Arrows trace the path from left through center to right, with labels showing the progression: crisis-driven correction overshooting to overcorrection. A return arrow shows the meta-correction attempt.}
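The closing note in the diagram (each swing smaller if the field is learning, full amplitude or worse if it isn't) can be made concrete with a toy feedback model. This is a minimal sketch of our own construction, not a model the chapter provides; the gain values are purely illustrative:

```python
# Toy feedback model: the field's position is one number, the calibrated
# answer is 0, and each correction pushes against the last error, scaled
# by a gain g.
#   g < 1: damped swings that converge toward calibration
#   g = 1: one perfect correction
#   g > 1: overshoot -- each correction lands on the opposite side of zero

def swing(initial_error: float, gain: float, n_corrections: int) -> list[float]:
    """Return the field's position after each successive correction."""
    positions = [initial_error]
    for _ in range(n_corrections):
        positions.append(positions[-1] - gain * positions[-1])
    return positions

print(swing(10.0, gain=0.7, n_corrections=5))  # converges: 10, 3.0, ~0.9, ...
print(swing(10.0, gain=1.6, n_corrections=5))  # oscillates, shrinking: 10, -6.0, 3.6, ...
print(swing(10.0, gain=2.2, n_corrections=5))  # oscillates, growing: 10, -12.0, 14.4, ...
```

The detail worth noticing: nothing in the loop knows where zero is. The system reacts only to its most recent error, which is Force 1, trauma-driven epistemology, expressed as arithmetic.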
21.4 Why Calibrated Correction Is So Hard
The pendulum problem reveals something important: calibrated correction — arriving at the right answer rather than the opposite wrong answer — is the hardest intellectual and institutional achievement.
The difficulty is not technical. It is structural. Calibrated correction requires:
1. Acknowledging that the original error was real without treating its prevention as the only goal. Post-thalidomide, the correct statement is: "drugs must be tested rigorously for safety and patients must not be denied access to beneficial treatments while waiting for that testing." Both halves of this statement are true, and they pull in opposite directions. Holding both simultaneously — rather than collapsing to one side or the other — requires cognitive and institutional sophistication that is rare.
2. Quantifying invisible costs. Overcorrection persists because its costs are invisible. Making them visible — estimating how many patients die during drug lag, how much beneficial research is not conducted under overly restrictive protocols, how much economic growth is forgone under excessive financial regulation — is technically difficult and politically unpopular. Nobody wants to argue for "less safety," even when "less safety" means "more access to beneficial treatments."
3. Resisting political pressure. After a crisis, the political pressure is entirely one-directional: more restriction, more caution, more oversight. Arguing for calibration — "yes, the old system was too lax, but the new system may be too restrictive" — is politically toxic. It sounds like defending the error that caused the crisis.
4. Distinguishing reaction from analysis. This is the core question of this chapter. Is the current position an independent assessment of the evidence, or is it a reaction to the last catastrophe? The distinction is subtle but critical. An independent assessment considers all possible errors and their relative costs. A reaction considers only the error that caused the last crisis and optimizes exclusively against repeating it.
🔍 Why Does This Work?
Calibrated correction is described as "the hardest intellectual achievement." Before reading the explanation, formulate your own theory about why. What makes balance harder than extremes? Why does the center of the pendulum's arc feel less stable than either extreme?
The answer connects to a deep feature of human cognition and institutional dynamics: extremes are simpler than balance. "Never approve a drug without 12 years of testing" is a clear, implementable rule. "Approve drugs when the expected benefit of access exceeds the expected cost of residual uncertainty" is a complex judgment that requires weighing incommensurable values. Institutions prefer clear rules to complex judgments, because clear rules distribute accountability and reduce the burden on individual decision-makers. The pendulum swings to extremes because extremes are easier to institutionalize than balance.
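The contrast between the two kinds of rule can be made explicit. A minimal sketch with invented numbers (every input below is hypothetical, chosen only to show the shape of the judgment, not to estimate any real drug):

```python
# Hypothetical throughout: the calibrated expected-value judgment that
# institutions resist, contrasted with the bright-line rule they prefer.

def expected_net_lives(p_effective: float, lives_saved_per_year: float,
                       years_of_earlier_access: float,
                       p_harmful: float, lives_lost_if_harmful: float) -> float:
    """Expected lives saved by approving earlier, minus expected lives lost
    to residual uncertainty. Every input is a contestable judgment call,
    which is exactly why this rule is hard to institutionalize."""
    benefit = p_effective * lives_saved_per_year * years_of_earlier_access
    cost = p_harmful * lives_lost_if_harmful
    return benefit - cost

# The bright-line rule ("never approve before 12 years of testing") needs
# no inputs at all. That is its institutional appeal -- and its calibration
# problem: it is insensitive to the actual trade-off.
print(expected_net_lives(0.9, 2_000, 3.0, 0.05, 10_000))  # 4900.0 -> approve earlier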
There is an additional cognitive factor: certainty feels better than ambiguity. After the trauma of being catastrophically wrong, the field craves the safety of a clear rule — something that guarantees "this will never happen again." The calibrated position, by contrast, acknowledges that some risk is inevitable, that trade-offs are real, and that the optimal position is uncertain. It says: "We don't know exactly where the right answer is, but we know it's somewhere between where we were and where we are now, and we need to keep looking." This is intellectually honest but emotionally unsatisfying. The overcorrected position — "we have fixed the problem" — provides the emotional closure that the trauma demands, even if the fix has created new problems.
This is why the pendulum is so hard to stop: stopping requires the field to sit in ambiguity, to acknowledge that it doesn't know the right answer, and to accept ongoing calibration rather than a definitive resolution. This is, essentially, a call for epistemic humility applied to the field's own correction process — a meta-humility that is even harder to achieve than humility about the original error.
🔄 Check Your Understanding (try to answer without scrolling up)
- Why does overcorrection persist even when its costs are real?
- What are the four requirements for calibrated correction?
Verify
1. Because the costs of overcorrection are invisible (opportunities not taken, patients not treated) while the costs of undercorrection are visible (identifiable victims). Institutions optimize against visible errors.
2. (a) Acknowledging the original error without treating its prevention as the only goal; (b) quantifying invisible costs; (c) resisting one-directional political pressure; (d) distinguishing reaction from independent analysis.
21.5 The Pendulum in Everyday Institutional Life
The pendulum dynamic is not limited to dramatic crises and regulatory overhauls. It operates at every scale — in organizations, in professional practices, and in individual thinking.
Organizational pendulums. A company that experiences a data breach overcorrects with security protocols so restrictive that employees cannot do their work efficiently. A school that experiences a bullying crisis overcorrects with zero-tolerance policies that punish minor infractions as harshly as serious ones. A hospital that experiences a malpractice lawsuit overcorrects with defensive medicine — ordering unnecessary tests and procedures to create a paper trail rather than to improve patient care.
Professional practice pendulums. A therapist who learns that a particular approach harmed a client swings to the opposite approach, which may harm a different client for opposite reasons. A manager who was criticized for being too hands-off becomes micromanaging. A teacher who was told their standards were too low sets impossible standards.
Intellectual pendulums. A field that overemphasized quantitative methods (because they were new and exciting) overcorrects toward exclusively qualitative methods (because the quantitative results failed to replicate). A field that overemphasized genetic explanations of behavior overcorrects toward exclusively environmental explanations. A field that was too credulous about experts overcorrects toward excessive skepticism about all expertise.
In each case, the structure is the same: a genuine error is identified, the correction overshoots, and the new position is shaped by the fear of repeating the original error rather than by an independent assessment of the full range of possibilities.
The Rebound Orthodoxy
One of the most subtle manifestations of the pendulum dynamic is what we might call rebound orthodoxy: the phenomenon in which the overcorrected position itself becomes a new consensus that is defended with the same mechanisms that defended the original error.
Post-replication-crisis methodological standards in psychology are an emerging example. The reforms (pre-registration, large samples, open data) were developed to correct genuine problems. But as they become institutional requirements — embedded in journal policies, grant criteria, hiring standards, and training programs — they acquire the same self-reinforcing properties as any other consensus: people build careers on them, question them at professional risk, and enforce them through the same mechanisms (peer review, hiring, funding) that enforced the original flawed practices.
This is not a reason to reject the reforms — they corrected real problems and have improved the field. It is a reason to maintain vigilance about any consensus, including the reformed one. The lesson of this book is not that the old consensus was wrong and the new one is right. The lesson is that the structural forces that create and defend consensus operate on all consensuses — including the ones we believe are correct.
🔗 Connection: Rebound orthodoxy is Theme 9 of this book in its purest form: "Every correction mechanism can itself become a source of error." The reforms designed to fix one set of problems become, over time, the institutional infrastructure that may perpetuate a new set of problems. The question is not whether this will happen — it will — but whether the field has built in mechanisms for detecting and correcting the corrections.
21.6 Active Right Now: Pendulums Currently Swinging
AI regulation. As of this writing, the regulatory response to AI is in its early stages. The original error — essentially no regulation of AI development — is being corrected. But the correction is at risk of the same overcorrection dynamic: regulations designed in response to current fears (deepfakes, job displacement, autonomous weapons) may constrain beneficial applications that we cannot yet imagine. The political asymmetry is already visible: the risks of under-regulating AI are vivid and concrete (deepfake pornography, algorithmic discrimination), while the risks of over-regulating AI are abstract and future-oriented (cures not discovered, efficiencies not achieved, tools not built).
Public health trust. The COVID-19 pandemic produced a crisis in public health communication. Early guidance on masks, social distancing, and vaccine safety was sometimes contradictory, partly because scientific understanding was evolving and partly because institutional communication was poorly calibrated. The response in some segments of the public has been an overcorrection: blanket distrust of public health institutions, rejection of all expert guidance, and susceptibility to conspiracy theories. This overcorrection — from excessive deference to expertise to excessive rejection of expertise — is the pendulum dynamic applied to the public's relationship with scientific authority.
Corporate culture and remote work. Many organizations that resisted remote work before the pandemic overcorrected toward fully remote work during it, then overcorrected again toward mandatory return-to-office. The pendulum continues to swing, with each position driven more by the perceived failure of the previous position than by an independent assessment of what work arrangement best serves the organization's and employees' needs.
Criminal justice sentencing. The "tough on crime" movement of the 1980s–2000s was partly an overcorrection to the perceived leniency of the 1960s–1970s. Mass incarceration, mandatory minimums, and three-strikes laws addressed genuine public safety concerns — but the overcorrection produced the world's highest incarceration rate, devastating impacts on minority communities, and prison populations that stretched far beyond what public safety required. The current reform movement (sentencing reform, restorative justice, decarceration) is itself at risk of the pendulum — early signs of rising crime rates in some cities have already triggered calls to reverse the reforms.
21.7 What It Looked Like From Inside: The FDA Reviewer's Dilemma
Let's reconstruct the institutional position that makes overcorrection so persistent, using the FDA as an example.
You are an FDA drug reviewer in 2015. Your job is to evaluate whether a new drug should be approved for patient use. You have extensive training, good judgment, and genuine concern for patients. Two types of errors are possible:
Type 1 error: You approve a dangerous drug. If this happens, patients are harmed. The harm is visible: specific patients, specific injuries, specific causal chains. Your name is on the approval. Congressional hearings will follow. Your career is likely over. The pharmaceutical company will be sued. The FDA's reputation will be damaged. The media will run the story for months.
Type 2 error: You delay or reject a beneficial drug. If this happens, patients who would have been helped are not helped. But you will never know which patients — the counterfactual is unobservable. No one will sue. No congressional hearing will be held. No journalist will write a story about the drug you didn't approve. Your career will be fine. In fact, caution is valued: a reviewer who is "careful" and "thorough" is respected. A reviewer who is "reckless" and "hasty" is condemned.
Given these incentives, what would a rational, well-intentioned reviewer do? Exactly what the FDA does: err overwhelmingly on the side of caution. Not because the reviewer is cowardly or bureaucratic, but because the system is structured so that one type of error is career-ending and the other is invisible. The reviewer who takes five years to approve a drug that could have been approved in three — costing thousands of patients the benefit during the two-year delay — faces zero professional consequences. The reviewer who approves a drug six months early and one patient experiences a serious adverse event faces maximum consequences.
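The reviewer's situation can be written as an expected-payoff calculation. The numbers below are invented; the asymmetry, not the magnitudes, is the point:

```python
# Hypothetical payoffs: the reviewer's *private* expected career consequence,
# as distinct from the social outcome for patients.

CAREER_COST_VISIBLE_HARM = -100.0  # approved drug harms identifiable patients
CAREER_COST_INVISIBLE_HARM = 0.0   # delay harms patients nobody can identify

def private_payoff(approve_early: bool, p_drug_harmful: float) -> float:
    """Expected career consequence of the reviewer's choice."""
    if approve_early:
        return p_drug_harmful * CAREER_COST_VISIBLE_HARM
    return CAREER_COST_INVISIBLE_HARM  # delaying is always career-safe

# Even at a 1% risk of harm, delay weakly dominates approval for the reviewer:
print(private_payoff(approve_early=True, p_drug_harmful=0.01))   # -1.0
print(private_payoff(approve_early=False, p_drug_harmful=0.01))  #  0.0
```

Delay dominates for any nonzero risk of harm: no cowardice, and no individual irrationality, is required to produce systematic overcaution.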
This is the same "locally rational, systemically wrong" dynamic we identified in Chapter 1 with the gastroenterologists who resisted Marshall and Warren. Each individual decision is reasonable given the incentives. The system produces an outcome — systematic overcorrection — that no individual chose.
21.8 Toward Calibrated Correction: Is It Possible?
If the pendulum dynamic is structural, is escape possible? Or is oscillation between opposite errors the best we can do?
There is cautious reason for hope — because some institutions have achieved something close to calibrated correction. The examples share common features:
Aviation safety (again) maintains a relatively calibrated position between over-caution (which would ground all flights) and under-caution (which would accept preventable accidents). The calibration is achieved through: quantitative risk assessment that makes invisible costs visible, a non-punitive reporting culture that allows honest assessment, and a professional norm that values balance rather than maximum caution.
Clinical trials for terminal diseases have achieved a partial calibration through compassionate use programs and expanded access pathways. These programs acknowledge that for patients who are dying, the cost of delaying access to experimental treatments outweighs the cost of uncertain safety data. They create a graduated system rather than a binary approve/deny framework.
The Bank of England's approach to macroprudential regulation (post-2008) attempted calibration by explicitly modeling the costs of both under-regulation and over-regulation, publishing its risk tolerance framework, and building in periodic review of whether the balance was correct. Whether this approach succeeds in avoiding the pendulum dynamic remains to be seen — but the structure is designed for calibration rather than maximum caution.
The common features: making invisible costs visible, quantifying trade-offs rather than optimizing for one direction, building in regular reassessment, and creating institutional cultures that value balance over safety-maximization. Whether these features are sufficient to defeat the pendulum dynamic in the long run is an open question — and one that Part V of this book (the Toolkit) will address.
📐 Project Checkpoint
The most important practical question this chapter raises is: Is your field's current consensus an independent assessment of the evidence, or is it a reaction to a past error?
Here is a diagnostic framework for answering that question:
The Overcorrection Diagnostic: Five Tests
Test 1: The Origin Test. When was the current position established, and what happened immediately before? If the current position was established in the wake of a crisis or a dramatic correction, it may be shaped by the trauma of that crisis rather than by an independent analysis. Example: If your field adopted its current methodology standards immediately after a public scandal, those standards may be trauma-calibrated rather than evidence-calibrated.
Test 2: The Mirror Test. Is the current position the approximate opposite of the previous position? If a field that was too permissive became too restrictive, or a field that was too credulous became too skeptical, the pendulum dynamic may be operating. Example: A field that went from "publish anything novel" to "only publish replications" has swung to a mirror position rather than finding the calibrated middle (publish both, with appropriate standards for each).
Test 3: The Invisible Cost Test. Are there invisible costs to the current position that the field is reluctant to acknowledge? If raising these costs provokes an emotional response ("you want to go back to the way things were?"), the current position may be trauma-driven rather than evidence-driven. Example: If pointing out that post-crisis regulations have increased costs or reduced access produces accusations of wanting to return to the pre-crisis state, the conversation is operating in trauma mode rather than analysis mode.
Test 4: The Independent Evidence Test. If you were assessing the question from scratch — without knowledge of the field's history — would you arrive at the current position? Or is the current position only defensible in the context of "we tried the opposite and it was a disaster"? Example: Would a 12-year drug approval timeline seem optimal to someone who had never heard of thalidomide? Or would it seem optimal only to someone whose institutional memory is dominated by that catastrophe?
Test 5: The Accommodation Test. Does the current position accommodate legitimate concerns from both directions? Calibrated correction acknowledges that the original error was real and that overcorrection carries its own costs. If the field's position addresses only the original error — and treats any mention of overcorrection as heresy — the pendulum dynamic is likely operating. Example: Does your field's reformed methodology allow for both rigorous confirmatory research and genuinely exploratory investigation? Or has it optimized entirely for one at the expense of the other?
How to Use the Diagnostic
If your field scores "yes" on three or more of these tests, the current position is likely an overcorrection rather than a calibrated correction. This does not mean the current position is wrong — overcorrections are often closer to correct than the original error, and the reforms they embody are often genuinely valuable. It means the current position should be treated as provisional and subject to ongoing assessment, not as the final, correct answer.
The most dangerous overcorrections are the ones that feel permanent — that have been absorbed into institutional identity and defended with the same fervor that defended the original error. The question is never "was the correction justified?" (it almost always was) but "has the correction arrived at the right calibration, or has it overshot?"
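For readers who want the diagnostic as a literal checklist, here is a minimal encoding. The five tests and the three-or-more threshold come from this section; the representation itself is ours:

```python
# The five-test overcorrection diagnostic as a checklist. The tests and the
# "three or more" threshold are from the text; the encoding is illustrative.

TESTS = (
    "Origin: position established immediately after a crisis or scandal?",
    "Mirror: position roughly the opposite of the previous one?",
    "Invisible cost: real costs the field is reluctant to acknowledge?",
    "Independent evidence: defensible only as 'we tried the opposite'?",
    "Accommodation: addresses only the original error, not overcorrection?",
)

def diagnose(answers: list[bool]) -> str:
    """answers[i] is True if test i comes out 'yes' for the field's position."""
    score = sum(answers)
    if score >= 3:
        return f"{score}/5: likely overcorrection; treat the position as provisional"
    return f"{score}/5: no strong signal of overcorrection from these tests"

# Example: standards set right after a scandal, mirroring the old ones,
# with costs that are taboo to discuss:
print(diagnose([True, True, True, False, False]))
```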
📐 Project Checkpoint
Epistemic Audit — Chapter 21 Addition: Overcorrection Assessment
Add the following to your Epistemic Audit:
21A. Historical Pendulums. Has your field experienced the pendulum dynamic? Identify any cases where:
- A previous error was corrected, but the correction overshot
- The current position appears to be the opposite of a previous position rather than an independent assessment
- Costs of the current position are invisible or unacknowledged
21B. Apply the Five Tests. For your field's current consensus on a significant issue:
1. Origin Test: When was the current position established? What event preceded it?
2. Mirror Test: Is the current position approximately the opposite of the previous one?
3. Invisible Cost Test: What are the costs of the current position? Are they acknowledged?
4. Independent Evidence Test: Would you arrive at this position from scratch?
5. Accommodation Test: Does the position account for costs in both directions?
21C. Rebound Orthodoxy Check. Have any of your field's recent reforms themselves become orthodoxies — enforced through the same mechanisms (peer review, funding, hiring) that enforced the original positions they corrected?
21D. Calibration Proposal. If your field has overcorrected, propose what calibrated correction would look like: a position that acknowledges both the original error and the costs of overcorrection, with evidence for the optimal balance between them.
This assessment connects to your Chapter 12 analysis (is the current position precisely wrong — very specific about its requirements without evidence that those requirements are the right calibration?), your Chapter 19 crisis analysis (was the current position established in response to a crisis?), and your Chapter 20 revision myth assessment (does the field tell the story of the correction as a clean triumph, erasing any acknowledgment that the correction may have overshot?).
21.9 Chapter Summary
Key Concepts
- Overcorrection: The systematic error produced when the trauma of being wrong in one direction causes a field to swing too far in the opposite direction
- Pendulum dynamic: The predictable pattern: original error → crisis → traumatic correction → overcorrection → invisible cost accumulation → meta-correction (which may itself overshoot)
- Trauma-driven epistemology: When a field's current position is shaped by the fear of repeating the last catastrophe rather than by a balanced assessment of all possible errors
- Calibrated correction: Arriving at the right answer rather than the opposite wrong answer — the hardest intellectual and institutional achievement, requiring acknowledgment of costs in both directions
- Rebound orthodoxy: The phenomenon in which the overcorrected position becomes a new consensus defended with the same mechanisms that defended the original error
Key Arguments
- Overcorrection is structurally predictable because the costs of the original error are visible while the costs of overcorrection are invisible
- Three forces drive overcorrection: trauma-driven epistemology, political asymmetry (visible errors punished, invisible errors not), and the absence of a stopping mechanism
- The pendulum dynamic operates at every scale: regulatory, organizational, professional, and intellectual
- Calibrated correction is hard because extremes are simpler to institutionalize than balance
- Every correction mechanism can become a source of error when it becomes institutional orthodoxy (rebound orthodoxy)
Key Tensions
- Genuine reforms that corrected real problems can also produce overcorrection — the quality of the original reform does not protect against pendulum dynamics
- Arguing for calibration after a crisis is politically toxic because it sounds like defending the original error
- The costs of overcorrection are real but invisible — making them visible requires the same institutional courage that challenging any consensus requires
- Even the most justified reforms should be subject to ongoing assessment — a lesson that applies recursively to the reforms this book recommends
Spaced Review
Revisiting earlier material to strengthen retention.
- (From Chapter 12 — Precision Without Accuracy) How does the overcorrection dynamic relate to the precision-without-accuracy problem? Can a field's response to a crisis be very precise (specific rules, specific thresholds, specific procedures) without being accurate (actually achieving the optimal balance between competing risks)?
- (From Chapter 19 — Crisis and Correction) The institutional grief cycle describes five stages of how fields process crisis. Where in the grief cycle does overcorrection occur? Is it a feature of the bargaining stage, the acceptance stage, or something that happens after acceptance?
- (From Chapter 20 — The Revision Myth) How does the revision myth interact with the pendulum dynamic? If a field rewrites its history to present the overcorrection as a clean triumph, how does this affect the field's ability to recognize that it has overshot?
Answers
1. Post-crisis reforms can be precision-without-accuracy in the same way that risk models before 2008 were: very precisely calibrated to specific requirements (12 years of testing, samples of 500+, pre-registration of all hypotheses) without evidence that those specific requirements produce the optimal outcome. The precision of the regulatory framework gives it the *appearance* of accuracy, but the optimal calibration point is unknown — and the framework may be precisely located at the wrong point on the spectrum.
2. Overcorrection typically occurs during Stage 3 (bargaining) or the transition from Stage 3 to Stage 5 (acceptance). During bargaining, the institution implements reforms designed to prevent recurrence — and the political pressure and trauma-driven epistemology push those reforms past the calibration point. It can also occur during genuine acceptance, when the field reconstructs its framework but the new framework is shaped by "never again" rather than by balanced analysis. The key insight: overcorrection is not a failure of the correction process. It is a *feature* of the correction process when that process is driven by crisis and trauma rather than by calibrated analysis.
3. The revision myth makes overcorrection invisible by presenting the post-crisis position as the correct endpoint — the natural, inevitable result of the field learning from its mistakes. If the story is "we were wrong, we corrected, now we're right," there is no space for the possibility that the correction overshot. The revision myth erases the *costs* of the current position (Chapter 20, Cost 2), which are precisely the invisible costs of overcorrection that this chapter describes. The two failure modes are mutually reinforcing: overcorrection creates invisible costs, and the revision myth keeps those costs invisible.
What's Next
In Chapter 22: The Speed of Truth, we will synthesize everything we've learned in Part III to build a predictive model: given a wrong consensus in a specific field, how long will it take to correct? What variables determine whether the correction takes 10 years or 100? What can be done to accelerate it? This synthesis chapter brings together the mechanisms of correction (Chapter 17), the outsider dynamic (Chapter 18), the role of crisis (Chapter 19), the revision myth (Chapter 20), and the pendulum problem (this chapter) into a unified framework.
Before moving on, complete the exercises and quiz to solidify your understanding.