
Learning Objectives

  • Define iatrogenesis and explain why it appears across every domain where complex systems are subject to intervention
  • Identify iatrogenic harm in at least six domains: medicine, economics, foreign policy, software engineering, ecology, and agriculture
  • Analyze the intervention bias -- the systematic human preference for action over inaction -- and explain why it persists even when inaction would produce better outcomes
  • Evaluate the concept of via negativa and explain when removing a harmful intervention is superior to adding a new one
  • Distinguish between first-order effects (the intended consequence of an intervention) and second-order effects (the unintended consequences) and explain why second-order effects are systematically underestimated
  • Apply the threshold concept -- the Intervention Calculus -- to assess whether the burden of proof has been met before intervening in a complex system

Chapter 19: Iatrogenesis -- When the Cure Is the Disease

Medicine, Economics, Foreign Policy, Software Patches, Wildfire Suppression, and Antibiotic Resistance

"The physician's first duty is not to do harm." -- attributed to Hippocrates, circa 400 BCE


19.1 The Doctor Who Killed Presidents

On December 14, 1799, George Washington woke with a severe sore throat. He was sixty-seven years old, recently retired from the presidency, and otherwise in robust health. His condition, likely acute epiglottitis or a peritonsillar abscess, was serious but not immediately fatal. Left alone, he might have recovered. He might have died. We will never know, because his physicians did not leave him alone.

Over the next twelve hours, three doctors treated the former president with the best medical knowledge of their era. They bled him four times, removing approximately 2.4 liters of blood -- roughly 40 percent of his total blood volume. They administered emetics to induce vomiting. They applied blistering agents to his throat. They gave him calomel, a mercury-based purgative. Each treatment was considered sound medical practice. Each was intended to help. Each made him weaker.

By ten o'clock that evening, George Washington was dead.

His doctors did not kill him through negligence or malice. They killed him through treatment. The interventions they applied -- bleeding, purging, blistering -- were the standard of care, endorsed by the leading medical authorities of the age. The physicians followed protocol. The protocol was the problem.

The word for this is iatrogenesis: harm caused by the healer. From the Greek iatros (healer) and genesis (origin). It is one of the oldest patterns in medicine, and one of the most important patterns in this book, because it does not stop at the borders of medicine. Iatrogenesis -- harm caused by the very interventions meant to help -- appears wherever humans intervene in complex systems they do not fully understand. It appears in economics, in foreign policy, in software engineering, in ecology, in education, in criminal justice. The pattern is always the same: an intervener identifies a problem, applies a fix, and the fix creates a new problem as bad as, or worse than, the original.

This chapter is about that pattern. It is about why well-intentioned interventions so often make things worse, why humans systematically prefer harmful action to beneficial inaction, and why the ancient principle primum non nocere -- first, do no harm -- may be the most important and most violated principle in every domain of human activity.

Fast Track: This chapter argues that iatrogenesis -- harm caused by the healer -- is a universal pattern that appears whenever complex systems are subject to intervention, and that the systematic human preference for action over inaction (the intervention bias) ensures that iatrogenic harm is chronically underestimated and overproduced. If you already grasp the core idea, skip ahead to Section 19.8 (Via Negativa) for the deepest insight about when subtraction beats addition, then read Section 19.10 (The Intervention Bias) for the psychological analysis.

Deep Dive: The full chapter traces iatrogenesis across six domains (medicine, economics, foreign policy, software, ecology, agriculture), analyzes the psychological and institutional forces that produce it, and develops the threshold concept -- the Intervention Calculus -- which holds that the burden of proof should always be on the intervener. The two case studies extend the analysis to medicine and foreign policy (Case Study 1) and to software patches and wildfire suppression (Case Study 2). For the richest understanding, read everything.


19.2 Medical Iatrogenesis: The Hospital as Hazard

Washington's death by bleeding was not an aberration. For over two thousand years, Western medicine was dominated by the theory of the four humors -- blood, phlegm, yellow bile, and black bile -- and the belief that disease resulted from their imbalance. Treatment meant restoring balance, usually by removing the excess humor. Bloodletting, purging, vomiting, and sweating were the standard interventions. They were applied to virtually every condition: fevers, infections, injuries, mental illness. They were almost always harmful.

The scale of this iatrogenic disaster is difficult to overstate. For most of recorded medical history, seeing a doctor made you sicker. The phrase "the cure is worse than the disease" was not a metaphor. It was a statistical fact. Patients who avoided physicians and relied on rest, nutrition, and their own immune systems fared better, on average, than patients who submitted to the standard treatments.

Modern medicine has made extraordinary progress. The development of antisepsis, antibiotics, vaccination, surgical technique, diagnostic imaging, and evidence-based practice has transformed medicine from a net-negative intervention into a net-positive one. Going to the hospital today is, on the whole, far better than not going. But iatrogenesis has not disappeared. It has changed form.

Consider the modern landscape of medical iatrogenesis:

Hospital-acquired infections. Approximately one in thirty-one hospital patients in the United States acquires at least one healthcare-associated infection during their stay. These infections -- which patients did not have when they entered the hospital -- kill an estimated 72,000 Americans annually. The hospital, the place you go to get well, is itself a source of illness. The very act of bringing sick people together in a facility, inserting catheters, performing surgeries, and administering intravenous lines creates vectors for infection that would not exist if the patient had stayed home.

Medication side effects. Adverse drug reactions are among the leading causes of hospitalization and death in developed countries. In the United States, adverse drug events cause an estimated 1.3 million emergency department visits annually. Some of these are the result of medication errors -- the wrong drug, the wrong dose, the wrong patient. But many are the predictable side effects of correctly prescribed, correctly administered medications. The drug does what it is designed to do, and in doing so, it causes harm that was known and accepted as the cost of the intended benefit.

Unnecessary procedures. A substantial fraction of medical procedures performed in the United States provide no benefit to the patient. Estimates vary, but analyses suggest that 10 to 30 percent of medical spending goes toward services that provide no value. Unnecessary surgeries carry all the risks of surgery -- infection, anesthesia complications, recovery time, surgical error -- without the corresponding benefit. The patient would have been better off with no intervention at all.

Overdiagnosis. Modern screening technologies can detect abnormalities that would never have caused symptoms or death. Thyroid cancer screening, for example, has dramatically increased the diagnosis of thyroid cancer in recent decades, but the death rate from thyroid cancer has remained essentially unchanged. The screening finds real cancers, but many of them are indolent -- they would never have grown, spread, or harmed the patient. The patients who are diagnosed, however, receive surgery, radiation, or lifelong medication, with all the attendant risks and side effects. They are harmed by the detection of a "disease" that was never going to hurt them.

Ivan Illich, the radical social critic, argued in his 1976 book Medical Nemesis that modern medicine had become a major threat to public health. Illich coined the term "clinical iatrogenesis" for the direct harm caused by medical treatment, "social iatrogenesis" for the medicalization of ordinary life (turning normal human experiences like aging, sadness, and childbirth into medical conditions requiring treatment), and "cultural iatrogenesis" for the destruction of people's ability to cope with pain, sickness, and death without professional intervention. Illich's critique was polemical and overstated, but his core observation was sound: the medical system, in its eagerness to treat, systematically produces harm alongside benefit, and the harm is chronically underestimated because the system is structured to count its successes and ignore its failures.

Connection to Chapter 15 (Goodhart's Law): The medical system measures its activity by procedures performed, tests ordered, and conditions treated. These metrics function as Goodhart targets: they incentivize more intervention, regardless of whether the intervention helps the patient. A hospital that performs more surgeries appears more productive than one that counsels watchful waiting. A doctor who orders more tests appears more thorough than one who relies on clinical judgment. The metric rewards action. The patient may be better served by inaction. This is Goodhart's Law applied to healthcare: the measure (procedures performed) becomes the target, and the pursuit of the target (more procedures) diverges from the underlying goal (patient health).


19.3 Antibiotic Resistance: The Cure That Creates a Worse Disease

If medical iatrogenesis in its traditional forms -- infections, side effects, unnecessary procedures -- illustrates how interventions can harm the individual patient, antibiotic resistance illustrates something far more alarming: how a successful intervention can create a problem worse than the one it solved.

Antibiotics are among the greatest achievements in the history of medicine. Before penicillin became widely available in the 1940s, a scratch that became infected could kill. Pneumonia was a death sentence for the elderly. Surgical infections were common and frequently fatal. Tuberculosis filled sanitariums. Childhood ear infections could lead to deafness or death. Antibiotics changed all of that. They saved hundreds of millions of lives. They made modern surgery possible. They transformed childbirth from one of the most dangerous experiences in a woman's life into a routine medical event.

And in doing so, they created the conditions for their own obsolescence.

The mechanism is straightforward. When an antibiotic kills bacteria, it kills the susceptible ones. Any bacterium that happens to have a genetic mutation conferring resistance survives. In a population of millions of bacteria, even a tiny fraction of resistant individuals is enough. The resistant survivors reproduce, passing their resistance genes to their offspring. Within days or weeks, the population has been replaced by resistant bacteria. The antibiotic that worked last week no longer works this week.

This is evolution by natural selection, operating on a timescale fast enough to watch. Each round of antibiotic treatment applies a selection pressure that favors resistance. The more antibiotics are used, the stronger the selection pressure, the faster resistance spreads. The cure -- antibiotics -- is literally creating a worse disease: infections caused by bacteria that no available antibiotic can kill.
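To see how fast this replacement happens, consider a minimal simulation sketch. The kill rate, regrowth factor, and starting resistant fraction below are illustrative assumptions, not measured values:

```python
# Illustrative toy model: selection for antibiotic resistance.
# All parameters are assumptions chosen for demonstration, not measured values.

def treat(susceptible, resistant, kill_rate=0.99, growth=10.0):
    """One treatment-and-regrowth cycle.

    The antibiotic kills kill_rate of the susceptible bacteria and none of
    the resistant ones; both subpopulations then regrow by the same factor.
    """
    susceptible = susceptible * (1 - kill_rate) * growth
    resistant = resistant * growth
    return susceptible, resistant

# Start with one resistant cell per million.
s, r = 999_999.0, 1.0
for cycle in range(1, 7):
    s, r = treat(s, r)
    print(f"cycle {cycle}: resistant fraction = {r / (s + r):.4f}")
```

With these toy numbers, a one-in-a-million resistant minority dominates the population within a handful of treatment cycles -- the same qualitative dynamic that plays out in a patient, a hospital, or a feedlot.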

The numbers are staggering. The World Health Organization has declared antimicrobial resistance one of the top ten global public health threats. Drug-resistant infections already cause an estimated 1.27 million deaths worldwide per year. By some projections, if current trends continue, drug-resistant infections could kill 10 million people annually by 2050 -- more than cancer kills today.

The iatrogenic pattern is clear: the intervention (antibiotics) solved the original problem (bacterial infection) while creating a new, worse problem (antibiotic-resistant infection). The new problem is a direct consequence of the intervention. Without antibiotics, there would be no antibiotic resistance.

This does not mean antibiotics should never have been developed. The lives saved by antibiotics vastly outnumber the lives lost to resistance -- so far. But it does mean that every prescription of antibiotics carries a cost that extends far beyond the individual patient. The doctor who prescribes a course of amoxicillin for a child's ear infection is making a decision that marginally increases the probability of untreatable infections for every future patient. The benefit is concentrated (one child recovers from an ear infection) and the cost is diffused (the entire population faces slightly higher resistance levels). This asymmetry -- concentrated benefits, diffused costs -- is a signature of iatrogenic harm across every domain.
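The asymmetry can be made concrete with a back-of-the-envelope sketch; every number below is invented for illustration:

```python
# Toy illustration of "concentrated benefits, diffused costs".
# Every number here is an assumption for demonstration purposes only.

benefit_to_patient = 1.0          # one child recovers: large, visible, concentrated
cost_per_future_patient = 1e-7    # marginal resistance risk: tiny, invisible, diffuse
future_patients = 100_000_000     # the population that bears the diffuse cost

total_cost = cost_per_future_patient * future_patients
print(f"visible benefit to one patient: {benefit_to_patient:.2f}")
print(f"invisible cost to everyone:     {total_cost:.2f}")
```

With these invented numbers the aggregate cost is ten times the concentrated benefit, yet no single decision-maker ever sees it, because each individual share of the cost is vanishingly small.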

Spaced Review -- Redundancy (Ch. 17): Antibiotic resistance is the destruction of a form of medical redundancy. When multiple antibiotics can treat an infection, the medical system has redundancy: if one drug fails, another can succeed. Resistance strips away this redundancy one drug at a time. The system becomes progressively more fragile as each antibiotic loses its effectiveness. We are, in effect, running down our antibiotic reserves -- using up our medical redundancy -- without replacing it. The genetic diversity of bacterial populations (their redundancy) allows them to survive our attacks. Our lack of new antibiotic development (our diminishing redundancy) means we are losing the arms race.


🔄 Check Your Understanding

  1. Explain in your own words why George Washington's death illustrates the concept of iatrogenesis. What was the relationship between the physicians' intentions and the outcome of their treatment?
  2. Why is antibiotic resistance a particularly important example of iatrogenesis? How does it differ from traditional medical iatrogenesis (hospital infections, side effects) in its scope and time horizon?
  3. The chapter describes a pattern of "concentrated benefits, diffused costs." Using the antibiotic example, explain what this means and why it makes iatrogenic harm difficult to prevent.

19.4 Economic Iatrogenesis: When the Medicine Moves the Markets

Economics offers perhaps the most instructive laboratory for iatrogenesis, because economic interventions are large, their effects are measurable, and the gap between intention and outcome is often spectacular.

Stimulus That Creates Bubbles

In 2001, the United States entered a mild recession following the collapse of the dot-com bubble. The Federal Reserve, under Chairman Alan Greenspan, responded by cutting interest rates aggressively -- from 6.5 percent in January 2001 to 1 percent by June 2003, the lowest rate in decades. The intention was clear: lower interest rates would stimulate borrowing, investment, and spending, pulling the economy out of recession.

The intervention worked. The economy recovered. But the cheap money did not flow only to productive investment. It poured into housing. With interest rates at historic lows, mortgages became extraordinarily cheap. Banks, awash in low-cost capital, competed to lend to less and less creditworthy borrowers. The subprime mortgage market exploded. Housing prices soared. A bubble inflated.

When the bubble burst in 2007-2008, the resulting financial crisis was the worst since the Great Depression. Banks failed. Credit markets froze. Unemployment doubled. Millions lost their homes. The global economy contracted. The damage was orders of magnitude worse than the mild 2001 recession that the interest rate cuts were designed to address.

The Fed's intervention -- low interest rates to cure a recession -- was a proximate cause of the crisis that followed. The cure for a small illness created the conditions for a catastrophic one.

Austerity That Deepens Recession

The iatrogenic pattern operates in the opposite direction as well. Following the 2008 financial crisis, many European governments -- particularly Greece, Spain, Portugal, and Ireland -- adopted severe austerity measures: cutting government spending, raising taxes, and reducing public services. The intention was to reduce government debt and restore fiscal credibility. The theory was that reduced deficits would reassure financial markets, which would lower borrowing costs, which would stimulate private investment.

In practice, austerity deepened the recession. Government spending cuts reduced demand in economies that were already contracting. Reduced demand meant lower tax revenues, which meant larger deficits despite the spending cuts -- the opposite of the intended effect. Unemployment soared. Social services were cut at precisely the moment they were most needed. Greece's economy contracted by 25 percent between 2008 and 2013. Youth unemployment exceeded 50 percent.

The cure -- austerity to restore fiscal health -- made the patient sicker. Government debt ratios in several countries actually increased during the austerity period, because the economy shrank faster than the debt was repaid. The intervention achieved the opposite of its stated goal.
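The arithmetic behind this perverse outcome is worth working through once. A toy example (the figures are invented for illustration, not actual Greek data):

```python
# Debt-to-GDP arithmetic: why austerity can raise the debt ratio.
# Numbers are illustrative assumptions, not actual Greek figures.

debt, gdp = 120.0, 100.0                 # debt ratio starts at 120%
print(f"before: debt/GDP = {debt / gdp:.1%}")

# Austerity repays a little debt but shrinks the economy much faster.
debt -= 5.0                              # spending cuts retire 5 units of debt
gdp *= 0.80                              # the economy contracts by 20%

print(f"after:  debt/GDP = {debt / gdp:.1%}")  # the ratio rises to ~143.8%
```

Because the ratio's denominator (GDP) falls faster than its numerator (debt), the measure of fiscal health worsens even as debt is repaid.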

Moral Hazard: When the Safety Net Creates the Risk

A subtler form of economic iatrogenesis operates through moral hazard -- the phenomenon in which the existence of a safety net encourages the risky behavior it was designed to protect against.

When the Federal Reserve bailed out the financial system in 2008, it prevented an immediate catastrophe. But it also sent a message: if your bank is large enough and interconnected enough, the government will not let it fail. This implicit guarantee -- "too big to fail" -- reduces the incentive for banks to manage risk prudently. Why maintain thick capital reserves (the redundancy discussed in Chapter 17) if you know the government will bail you out when things go wrong?

The safety net -- designed to prevent financial collapse -- creates the incentive for the very behavior that leads to financial collapse. The intervention generates the problem it was designed to solve. This is iatrogenesis in its purest form.

💡 Intuition: Imagine a parent who rushes to catch a toddler every time the child stumbles. The intention is protection. The effect is that the child never learns to balance, never develops the reflexes that prevent falls, and becomes more likely to fall when the parent is not there. The safety net prevents the small harm (a scraped knee) while creating the conditions for the large harm (a child who cannot walk confidently). The economic parallel is precise: the Fed's willingness to prevent small recessions creates the conditions for large crises, because market participants never develop the caution that comes from experiencing consequences.


19.5 Foreign Policy Blowback: The Intervention Spiral

The Central Intelligence Agency coined the term blowback in a classified 1954 report on its covert operations in Iran, referring to the unintended consequences of covert operations that "blow back" on the country that conducted them. The term has since entered general usage to describe any situation in which an intervention produces consequences that harm the intervener.

Foreign policy provides some of the most dramatic and consequential examples of iatrogenesis.

The Intervention Spiral

In 1953, the CIA and British intelligence orchestrated the overthrow of Iran's democratically elected Prime Minister, Mohammad Mossadegh, who had nationalized Iran's oil industry. The coup installed Shah Mohammad Reza Pahlavi, who ruled as an authoritarian monarch supported by a brutal secret police, the SAVAK. The intervention achieved its immediate objective: Western oil interests were protected.

Twenty-six years later, in 1979, the Iranian Revolution overthrew the Shah. The revolution's fury was directed in significant part against the United States, which had installed and supported the Shah. The revolutionary government held fifty-two American diplomats hostage for 444 days. The relationship between the United States and Iran became one of the most hostile in world politics, a hostility that persists decades later.

The 1953 coup -- an intervention designed to protect Western interests -- created the conditions for the 1979 revolution, which damaged those interests far more than Mossadegh's nationalization ever would have. The cure was worse than the disease.

This pattern -- intervention creating instability, instability justifying further intervention, further intervention creating deeper instability -- constitutes what we might call the intervention spiral. Each round of intervention produces consequences that appear to demand another round of intervention, and each round makes the situation worse.

The 2003 invasion of Iraq provides a textbook case. The stated goal was to eliminate weapons of mass destruction and bring democracy to Iraq. The weapons did not exist. The invasion toppled Saddam Hussein but dissolved the Iraqi state, disbanded the Iraqi army (sending hundreds of thousands of trained soldiers into unemployment), and created a power vacuum. Into that vacuum flowed sectarian militias, al-Qaeda in Iraq, and eventually the Islamic State (ISIS), which at its peak controlled territory the size of Great Britain. The intervention to stabilize the Middle East produced the most destabilizing force the region had seen in decades.

The cost was staggering: thousands of American soldiers killed, hundreds of thousands of Iraqi civilians dead, trillions of dollars spent, a region further destabilized, and a terrorist organization more dangerous than the one the intervention was supposed to prevent. By any measure, the intervention caused more harm than the problem it aimed to solve.

Connection to Chapter 2 (Feedback Loops): The intervention spiral is a positive (reinforcing) feedback loop. Intervention produces instability. Instability appears to demand more intervention. More intervention produces more instability. Each cycle amplifies the problem. The loop continues until the intervener exhausts its resources, changes strategy, or the system collapses into a new equilibrium. Breaking the loop requires recognizing that the intervention itself is the source of the instability -- a recognition that is psychologically and politically difficult because it means admitting that the previous interventions were mistakes.
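The loop's dynamics can be sketched in a few lines; the coefficients below are illustrative assumptions, not estimates of any real conflict:

```python
# Toy reinforcing-loop model of the intervention spiral.
# The coefficients are illustrative assumptions, not estimates.

instability = 1.0
for round_ in range(1, 6):
    intervention = 0.8 * instability        # response scales with perceived instability
    instability += 0.5 * intervention       # each intervention adds new instability
    print(f"round {round_}: instability = {instability:.2f}")
# Each pass amplifies the problem: 1.40, 1.96, 2.74, 3.84, 5.38
```

As long as the product of the two coefficients is positive, the loop compounds: every round of "solving" the problem enlarges it.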


🔄 Check Your Understanding

  1. Explain how the Federal Reserve's response to the 2001 recession illustrates iatrogenesis. What was the intended effect? What was the actual effect?
  2. Define moral hazard in your own words. How does the "too big to fail" doctrine illustrate the iatrogenic pattern?
  3. What is the intervention spiral? Use the Iran example to trace the cycle from initial intervention through blowback to further intervention.

19.6 Software Patches: The Cascade of Fixes

Software engineering has its own word for iatrogenesis: regression. A regression is a bug introduced by a fix -- a change intended to solve one problem that creates a new problem. The term is telling: it implies backward movement, a return to a broken state caused by the attempt to improve.

Every software developer knows the experience. A bug is reported. A developer identifies the cause, writes a fix, tests it, and deploys it. The fix resolves the reported bug. It also breaks something else. The something else may not be discovered for days or weeks, because the breakage occurs in a part of the system that the developer did not think to test -- a feature that seemed unrelated to the fix but was connected through shared code, shared data, or shared assumptions.
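The mechanism is easiest to see in miniature. A contrived sketch -- the functions and the bug are hypothetical, invented for illustration:

```python
# A miniature regression: fixing one caller of a shared helper breaks another.
# Hypothetical example; names and behavior are invented for illustration.

def normalize(text):
    # Original shared helper: trims whitespace only.
    return text.strip()

# Bug report A: usernames compare as different when their case differs.
# The "fix": make the shared helper lowercase its input as well.
def normalize_fixed(text):
    return text.strip().lower()

def usernames_match(a, b):
    return normalize_fixed(a) == normalize_fixed(b)   # bug A is fixed...

def file_token(name):
    # ...but this caller relied on case being preserved. Regression introduced:
    return normalize_fixed(name)                      # "Report.PDF" -> "report.pdf"

print(usernames_match("Alice ", "alice"))   # True  (the intended fix)
print(file_token("Report.PDF"))             # "report.pdf" (case silently lost)
```

The fix is locally correct and globally wrong: nothing in the changed function hints that a distant caller depended on the old behavior.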

The patch cascade is the software-specific version of the intervention spiral. A patch fixes bug A but introduces bug B. A second patch fixes bug B but introduces bug C. A third patch fixes bug C but reintroduces bug A, because the fix for C conflicts with the fix for A. The system oscillates between broken states, each fix creating the conditions for the next failure.

The patch cascade is particularly insidious in security. A security patch -- an update designed to close a vulnerability -- may introduce a new vulnerability, break compatibility with other software, or cause system instability that makes the system more vulnerable than the unpatched version. System administrators face an impossible choice: apply the patch (risking the new bugs it introduces) or skip the patch (remaining vulnerable to the known exploit). Either way, the system is at risk. The cure may be no better than the disease.

Microsoft's history provides vivid examples. In 2010, a Windows update intended to fix a minor display issue caused systems with certain third-party antivirus software to crash repeatedly, rendering the computer unusable. The fix for the fix then caused different problems for users running different configurations. The patch cascade consumed weeks of engineering time, disrupted millions of users, and damaged trust in the update process -- leading some users to disable automatic updates entirely, which left them vulnerable to the security threats the updates were designed to prevent.

The structural problem is complexity. In a simple system with few interacting components, a change in one part has predictable effects on other parts. In a complex system with millions of interacting components -- a modern operating system, a large web application, a distributed microservices architecture -- the effects of a change ripple through the system in ways that no human can fully predict. The developer who writes a fix is intervening in a system more complex than any single person can hold in their mind. The intervention is necessarily based on an incomplete model of the system. And interventions based on incomplete models are precisely the ones that produce iatrogenic harm.

Spaced Review -- Goodhart's Law (Ch. 15): Software teams often measure their effectiveness by the number of bugs fixed or patches deployed. This metric incentivizes fixing as many bugs as possible, as quickly as possible. It does not incentivize restraint, careful testing, or the difficult decision to leave a minor bug unfixed because the risk of the fix exceeds the cost of the bug. Like the medical system that measures procedures performed, the software team that measures bugs fixed is optimizing a Goodhart target that diverges from the actual goal: a system that works reliably for its users.


19.7 The Fire Paradox: A Century of Ecological Iatrogenesis

In 1910, a series of wildfires burned three million acres across Idaho, Montana, and Washington in just two days. The "Big Blowup" killed eighty-five people, including seventy-eight firefighters. It traumatized the young United States Forest Service and set the direction of American fire policy for the next century.

The lesson the Forest Service drew from 1910 was simple: fire is the enemy. Suppress it. Put it out. Every fire, every time, as fast as possible. The agency adopted the "10 AM policy" -- every fire should be suppressed by 10 AM the morning following its detection. This policy was enforced with increasing effectiveness as firefighting technology improved: roads penetrated deeper into wilderness, aircraft enabled aerial detection and retardant drops, communication systems coordinated suppression efforts across vast areas.

The policy worked. Wildfires burned fewer acres. Forest fires that would have consumed thousands of acres were caught early and extinguished. The forests seemed safer.

They were not safer. They were accumulating fuel.

In a natural fire regime -- the pattern that existed for millennia before European settlement -- frequent, low-intensity fires burned through forests every five to thirty years, depending on the ecosystem. These fires consumed dead wood, dry brush, fallen needles, and small trees. They cleared the understory. They maintained open, park-like stands of large, fire-resistant trees. The fires were part of the ecosystem, not a threat to it. Many plant species, including the giant sequoia and the longleaf pine, depend on fire for reproduction. Some pine cones open only in the heat of a fire. Many grasslands and savannas exist only because fire prevents tree encroachment.

A century of fire suppression removed fire from these ecosystems. The dead wood, brush, and small trees that fire would have consumed accumulated year after year. Forests that had been open and park-like became dense thickets of vegetation. The fuel load -- the quantity of burnable material per acre -- increased by orders of magnitude.

When fire eventually came to these fuel-loaded forests, it was not the low-intensity surface fire that the ecosystem had evolved with. It was a high-intensity crown fire that climbed from the understory into the canopy, leaped from treetop to treetop, and burned with a ferocity that no firefighting technology could contain. These megafires kill the very large, fire-resistant trees that survived centuries of natural fires. They sterilize the soil. They burn so hot that the affected areas cannot regenerate naturally for decades.

This is the fire paradox: suppressing fire makes fire worse. The intervention (fire suppression) solved the short-term problem (individual fires) while creating a long-term problem (catastrophic fuel accumulation) that makes the eventual fire far more destructive than any of the fires that were suppressed.
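The dynamic is simple enough to simulate. A toy sketch with invented parameters (ignition probability, suppression threshold), not a calibrated fire model:

```python
# Toy model of the fire paradox: suppression trades many small fires
# for one large one. All parameters are illustrative assumptions.

import random

def run(years, suppress, ignition_prob=0.2, seed=1):
    random.seed(seed)
    fuel, fires = 1.0, []
    for _ in range(years):
        fuel += 1.0                       # one year of litter and deadwood
        if random.random() < ignition_prob:
            if suppress and fuel < 20:    # suppression succeeds on small fires
                continue                  # fire is put out; fuel keeps accumulating
            fires.append(fuel)            # fire intensity scales with fuel load
            fuel = 1.0
    return fires

print("natural regime:", [round(f, 1) for f in run(60, suppress=False)])
print("suppression:   ", [round(f, 1) for f in run(60, suppress=True)])
```

The natural regime produces many small fires; the suppression regime produces a few enormous ones, each burning the decades of fuel that the suppressed fires would have consumed.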

The numbers tell the story. In the first decade of the twenty-first century, the area burned annually by wildfire in the western United States was roughly double what it had been in the 1970s, and the average acreage burned per fire had increased dramatically. The fires were not just more frequent; they were more intense, more destructive, and harder to control. The 2020 fire season in California burned over four million acres, more than four percent of the state. The 2023 fire season in Canada burned over 45 million acres, shattering all previous records.

A century of fire suppression did not prevent these catastrophes. It caused them. The cure was the disease.

Connection to Chapter 13 (Annealing): The fire paradox is a perfect illustration of the annealing principle from Chapter 13. A natural fire regime provides the ecological equivalent of "temperature" -- controlled, periodic disturbance that prevents the system from settling into a dangerously rigid state. Frequent small fires are the noise that keeps the ecosystem flexible. Fire suppression is the equivalent of cooling the system too aggressively: it eliminates the disturbance, allowing the system to settle into a state that appears stable but is actually a powder keg. The annealing insight tells us that some disturbance is necessary for long-term health. The fire paradox demonstrates what happens when that disturbance is suppressed.


🔄 Check Your Understanding

  1. Explain the patch cascade in software engineering. How is it structurally similar to the intervention spiral in foreign policy?
  2. What is the fire paradox? Why does suppressing fire make fire worse? Trace the causal chain from fire suppression to megafire.
  3. The chapter draws a connection between fire suppression and annealing (Chapter 13). In your own words, explain how the absence of small disturbances creates the conditions for catastrophic disturbance.

19.8 Via Negativa: The Power of Subtraction

Nassim Nicholas Taleb introduced the concept of via negativa -- the way of subtraction -- as a counterpoint to the intervention bias. The idea is ancient (it appears in theology, philosophy, and traditional medicine), but Taleb gave it new force by connecting it to his broader framework of antifragility and the limits of human knowledge.

The principle of via negativa is this: when you do not fully understand a system, removing things is safer than adding things. Subtraction carries less risk than addition, because removing a known harm has more predictable effects than adding a new intervention whose full consequences are unknown.

Consider a patient taking twelve medications. Some of these medications were prescribed to treat the side effects of other medications. Some interact with each other in ways that are poorly understood. Some were prescribed years ago for conditions that may no longer exist. The patient feels terrible. A naive physician might add a thirteenth medication. A wise physician would start by subtracting: which of these twelve medications can be eliminated? Which were prescribed to treat problems caused by other medications? Which are no longer necessary?

The subtraction approach is powerful because of an asymmetry in knowledge. We typically know more about what is harmful than about what is helpful. We know that smoking causes cancer. We know that excessive sugar causes metabolic disease. We know that sleep deprivation impairs cognition. This knowledge is reliable because it is based on observable harm -- removing the harmful thing produces a clear, positive result. But we know far less about the effects of adding new interventions: a new drug, a new supplement, a new policy. The new intervention interacts with a complex system in ways we cannot fully predict.
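One way to caricature this asymmetry is a quick Monte Carlo comparison: removing a known harm draws from a narrow, well-understood distribution of outcomes, while adding a novel intervention draws from a much wider one. The distributions below are illustrative assumptions, not clinical data:

```python
# Toy Monte Carlo comparison: subtraction vs. addition under uncertainty.
# Distributions are illustrative assumptions, not clinical data.

import random

random.seed(0)
N = 100_000

# Removing a known harm: modest benefit, tightly understood.
subtract = [random.gauss(1.0, 0.3) for _ in range(N)]

# Adding a novel intervention: same average benefit, far wider uncertainty.
add = [random.gauss(1.0, 3.0) for _ in range(N)]

worst_subtract = sorted(subtract)[N // 100]   # 1st-percentile outcome
worst_add = sorted(add)[N // 100]

print(f"1% worst case, subtraction: {worst_subtract:+.2f}")  # still near +0.3
print(f"1% worst case, addition:    {worst_add:+.2f}")       # deeply negative
```

Even when the average benefit is identical, the downside tail of the unknown addition is far worse -- which is precisely why subtraction is the safer default.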

Via negativa applies far beyond medicine:

In economics: Rather than designing complex new stimulus programs, consider removing harmful regulations, perverse subsidies, or counterproductive tax provisions. The obstacle you remove may matter more than the program you add.

In software: Rather than adding new features to fix usability problems, consider removing confusing features, unnecessary complexity, and redundant options. The simplest fix is often deletion.

In foreign policy: Rather than intervening to fix the consequences of the previous intervention, consider withdrawing the intervention that caused the problem. The best response to blowback may be to stop doing the thing that generates blowback.

In ecology: Rather than engineering complex solutions to wildfire risk, consider reintroducing the natural fire regime that was suppressed. The solution is the removal of the intervention, not the addition of a new one.

In personal life: Rather than adding productivity tools, morning routines, supplements, and optimization hacks, consider eliminating the sources of distraction, poor sleep, and chronic stress that are degrading your performance. Subtraction often outperforms addition.

The power of via negativa lies in its epistemological humility. It acknowledges that we understand complex systems poorly, that our interventions carry hidden costs, and that the safest path is often to stop doing harmful things rather than to start doing new things whose consequences we cannot predict.


19.9 Overtreatment and the McNamara Fallacy

One of the deepest roots of iatrogenesis is overtreatment -- intervening when the best course of action is to do nothing. Overtreatment is not the same as malpractice. It is not the result of incompetence or negligence. It is the result of a systematic bias toward action -- a bias so deep that it operates even when the evidence clearly favors inaction.

In medicine, the technical term for doing nothing is watchful waiting or active surveillance. For many conditions -- low-grade prostate cancer, small thyroid nodules, mild disc herniations, many childhood ear infections -- the evidence shows that observation produces outcomes as good as or better than treatment. The condition may resolve on its own. The treatment carries risks that outweigh the benefits. The best medicine, in these cases, is no medicine at all.

Yet watchful waiting is extraordinarily difficult for both physicians and patients. The physician faces pressure to "do something." The patient expects treatment -- that is why they came to the doctor. The medical system reimburses procedures, not restraint. The legal system punishes physicians who "failed to act" but rarely punishes those who treated unnecessarily. Every institutional incentive pushes toward action. Restraint, no matter how medically appropriate, feels like negligence.

This pattern is connected to what has been called the McNamara Fallacy, named for U.S. Secretary of Defense Robert McNamara, who managed the Vietnam War by measuring what could be measured (enemy body counts, villages pacified, tons of bombs dropped) and ignoring what could not be measured (Vietnamese popular support, the will to resist, the strategic picture). The fallacy works in two steps:

  1. Measure what is easily measurable.
  2. Disregard what cannot be easily measured, or give it an arbitrary quantitative value.

In medicine, the McNamara Fallacy manifests as a focus on the measurable outcomes of treatment (tumor removed, lab values normalized, procedure completed) while ignoring the unmeasurable costs (quality of life, anxiety from overdiagnosis, side effects of unnecessary treatment, the psychological burden of being labeled a "cancer patient" for a condition that would never have harmed you).

In foreign policy, it manifests as counting territory controlled, enemies killed, and operations conducted, while ignoring the unmeasurable consequences: radicalization caused by civilian casualties, goodwill destroyed, future enemies created.

In software, it manifests as counting bugs fixed and patches deployed while ignoring system stability, user experience, and the maintenance burden created by accumulating patches.

The McNamara Fallacy does not cause iatrogenesis directly. It causes iatrogenesis by making the benefits of intervention visible and measurable while making the costs invisible and unmeasurable. The intervention looks like pure benefit. The harm is hidden in the things no one is counting.

💡 Intuition: Imagine a gardener who measures success by the number of weeds pulled. Every day, the gardener pulls weeds vigorously. The garden looks "productive" by the metric. But the pulling disturbs the soil, damages root systems of desirable plants, and brings buried weed seeds to the surface where they germinate. The measured output (weeds pulled) goes up. The actual outcome (garden health) goes down. The gardener would have been better off pulling fewer weeds, or using mulch to prevent them, or accepting some weeds as part of a healthy garden. But the metric rewards action, and action is what the gardener provides.


19.10 The Intervention Bias: Why Humans Prefer Doing Something to Doing Nothing

The examples in this chapter -- from George Washington's physicians to the Federal Reserve to the Forest Service -- share a common psychological substrate: the intervention bias, the systematic human preference for action over inaction, even when inaction would produce better outcomes.

The intervention bias has deep roots. It is partially explained by several well-documented psychological phenomena:

Action bias. When facing uncertainty, humans prefer to act. In a famous study of professional soccer goalkeepers facing penalty kicks, researchers found that goalkeepers dove left or right on 94 percent of kicks, even though staying in the center of the goal would have produced the best outcome for roughly one-third of kicks. The goalkeepers knew this. They dove anyway. Staying in the center felt like not trying. Action, even ineffective action, feels better than inaction.
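The expected-value arithmetic behind that finding can be sketched with rough numbers (the kick distribution and save rates below are illustrative assumptions, not the study's published estimates):

```python
# Expected save probability: diving vs. staying. All numbers are rough
# illustrative assumptions, not the study's published estimates.

kick_dist = {"left": 1/3, "center": 1/3, "right": 1/3}

p_dive_correct = kick_dist["left"]           # keeper guesses one side; kick lands there 1/3 of the time
p_save_dive = p_dive_correct * 0.50          # saves about half of correctly guessed kicks
p_save_stay = kick_dist["center"] * 0.60     # staying put saves most central kicks

print(f"dive to a side: E[save] = {p_save_dive:.2f}")
print(f"stay in center: E[save] = {p_save_stay:.2f}")   # higher, yet rarely chosen
```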

Omission bias (reversed). In many contexts, people feel more responsible for harm caused by action than by inaction. A doctor who prescribes a drug that kills a patient feels worse than a doctor who fails to prescribe a drug that would have saved a patient. This should make physicians cautious about intervening. But in practice, the institutional and legal context reverses the effect: a physician who "does nothing" and the patient dies faces far greater legal and professional liability than a physician who intervenes aggressively and the patient dies from a side effect. The system punishes inaction more than it punishes harmful action.

Illusion of control. Humans overestimate their ability to control complex systems. A policymaker who believes she understands the economy is more likely to intervene than one who recognizes the limits of her understanding. The intervention bias is powered by overconfidence in the intervener's model of the system.

Narrative bias. Interveners tell stories in which they are the protagonist. "We identified the problem, we designed a solution, we implemented the fix" is a narrative with a hero. "We studied the problem, concluded that intervention would make it worse, and chose to do nothing" is a narrative without a hero. In organizations, careers are built on action, not on restraint. No one gets promoted for the crisis they prevented by choosing not to intervene.

The interaction of these biases creates a systematic overproduction of intervention. More policies are enacted than are justified by the evidence. More medications are prescribed than are supported by clinical data. More military operations are launched than serve strategic objectives. More code is written than improves the software. The system is biased toward doing, and the cost of excessive doing -- iatrogenic harm -- is chronically underestimated because the harm is harder to measure than the action.


🔄 Check Your Understanding

  1. Explain via negativa in your own words. Why is removing a harmful thing generally safer than adding a new thing?
  2. What is the McNamara Fallacy, and how does it contribute to iatrogenesis? Give an example from a domain other than the Vietnam War.
  3. List three psychological biases that contribute to the intervention bias. For each, explain how it pushes decision-makers toward action even when inaction would be better.

19.11 The Intervention Calculus: A Threshold Concept

The preceding sections have traced iatrogenesis across medicine, economics, foreign policy, software engineering, and ecology. The pattern is consistent: well-intentioned interventions in complex systems produce unintended consequences that may be worse than the original problem.

This convergence points toward a threshold concept -- a shift in thinking that, once made, changes how you evaluate every proposed intervention in every domain. We call it the Intervention Calculus.

The Intervention Calculus rests on three propositions:

Proposition 1: Every intervention has costs as well as benefits. This sounds obvious, but it is systematically violated in practice. Interveners routinely consider only the intended benefits of their actions and ignore or underestimate the unintended costs. The physician considers the disease treated but not the infection acquired. The policymaker considers the jobs created but not the bubble inflated. The firefighter considers the fire suppressed but not the fuel accumulated. The software developer considers the bug fixed but not the regression introduced. Full accounting requires measuring both sides of the ledger -- and acknowledging that the cost side is typically harder to measure, longer to materialize, and easier to ignore.

Proposition 2: The costs of intervention are often invisible or delayed. The benefits of an intervention are usually immediate and visible: the fire is out, the patient is treated, the policy is enacted, the patch is deployed. The costs are typically delayed (the fuel accumulation from fire suppression, the antibiotic resistance from overprescription), diffuse (spread across many people or a long time period), and invisible (they take the form of things that did not happen or that happened to someone else). This asymmetry in visibility creates a systematic bias in favor of intervention, because the benefits are salient and the costs are hidden.

Proposition 3: The burden of proof should be on the intervener, not on those who counsel caution. In current practice, the default is to intervene. The person who proposes doing nothing bears the burden of explaining why action is unnecessary. The Intervention Calculus reverses this default: the person who proposes action must demonstrate that the expected benefits of the intervention, including its second-order and third-order effects, exceed the expected costs, including delayed and invisible costs. When in doubt, do not intervene.

This reversal of the burden of proof is the core of the Intervention Calculus, and it is the hardest part to implement. It requires resisting the action bias, the narrative bias, the illusion of control, and the institutional incentives that reward doing over not-doing. It requires the intellectual humility to say: "I do not understand this system well enough to predict the consequences of my intervention, and therefore I should not intervene."

This is not a counsel of paralysis. Some situations genuinely demand intervention. A patient with a ruptured appendix needs surgery. An economy in freefall may need fiscal stimulus. A fire threatening a town must be fought. The Intervention Calculus does not say "never intervene." It says "prove that intervening is justified before you intervene, and be honest about the costs you cannot see."
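The burden-of-proof reversal can be made concrete as a decision rule that inflates hidden and delayed costs before comparing them to benefits. A toy formalization -- the uncertainty multiplier is our illustrative assumption, not a standard model:

```python
# Toy decision rule for the Intervention Calculus.
# The hidden-cost multiplier is an illustrative assumption.

def should_intervene(expected_benefit, visible_cost, hidden_cost_estimate,
                     uncertainty=2.0):
    """Burden of proof on the intervener: hidden, delayed costs are
    inflated by an uncertainty multiplier; ties go to restraint."""
    total_cost = visible_cost + uncertainty * hidden_cost_estimate
    return expected_benefit > total_cost

# A ruptured appendix: large certain benefit, well-understood costs.
print(should_intervene(expected_benefit=100, visible_cost=5,
                       hidden_cost_estimate=2))          # True

# A marginal policy fix in a poorly understood system.
print(should_intervene(expected_benefit=10, visible_cost=4,
                       hidden_cost_estimate=5))          # False
```

The multiplier encodes the calculus's core asymmetry: costs you cannot see deserve more weight, not less, and ties go to restraint.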

Connection to Chapter 17 (Redundancy): The Intervention Calculus connects directly to the redundancy principle from Chapter 17. One of the most common forms of iatrogenic harm is the removal of "inefficient" redundancy from a system -- cutting safety margins, eliminating buffer stock, consolidating suppliers. The intervener sees the redundancy as waste and removes it. The removal is the intervention. The consequence -- increased fragility -- is the iatrogenic harm. The Intervention Calculus would require the intervener to demonstrate that the system's performance under abnormal conditions will not be degraded before removing its redundancy -- a burden of proof that most efficiency consultants cannot meet.


19.12 First, Do No Harm: The Hardest Principle

The Hippocratic principle primum non nocere -- first, do no harm -- is perhaps the most famous ethical principle in medicine. It is also one of the most systematically violated, and not only in medicine.

The principle is hard to follow for all the reasons this chapter has explored: the intervention bias pushes toward action, the McNamara Fallacy hides the costs of intervention, institutional incentives reward doing over not-doing, and the illusion of control makes interveners confident that their actions will have only the intended effects.

But there is a deeper reason the principle is hard to follow: it requires a specific kind of wisdom that is difficult to teach and rarely rewarded. The wisdom is this: knowing when to act and when to refrain from acting requires understanding the limits of your own understanding. The physician who chooses watchful waiting must be confident enough in her judgment to resist the pressure to treat, and humble enough in her knowledge to recognize that treatment might make things worse. The policymaker who chooses not to intervene must be strong enough to withstand the accusation of doing nothing, and wise enough to know that doing nothing may be the best thing she can do.

This wisdom has a name in several traditions. In the Taoist tradition, it is wu wei -- action through non-action, the art of accomplishing by not forcing. In medicine, it is primum non nocere. In Taleb's framework, it is via negativa. In systems thinking, it is the recognition that complex systems have their own dynamics, their own equilibria, their own ways of healing -- and that the intervener's role is sometimes to support those natural processes rather than to override them.

The practical question is: when should you intervene, and when should you refrain? The chapter's analysis suggests several decision heuristics:

Intervene when the system has no self-correcting mechanism. A ruptured appendix will not heal itself. A building fire will not put itself out. When the system cannot fix the problem on its own, intervention is necessary.

Refrain when the system has a self-correcting mechanism that the intervention would disable. Natural fire regimes maintain forest health. The immune system handles most infections. Markets correct most pricing errors. When the system has a built-in mechanism for addressing the problem, intervention may disable that mechanism and create dependence on continued intervention.

Intervene when the costs of inaction are clearly catastrophic and imminent. A pandemic requires a public health response. A financial system on the verge of collapse may require a bailout. When the cost of doing nothing is certain and extreme, intervention is justified even if its own costs are significant.

Refrain when the costs of intervention are uncertain and potentially large. If you cannot predict the second-order effects of your intervention, you probably should not intervene. The Intervention Calculus places the burden of proof on the intervener: if you cannot demonstrate that the benefits outweigh the costs -- including the costs you cannot see -- the default should be restraint.

When in doubt, prefer via negativa. If you must act, consider first whether there is an existing intervention that can be removed rather than a new intervention that must be added. Removing a known harm is safer than adding an unknown benefit.

Forward Connection to Chapter 21 (Path Dependence): Once an iatrogenic intervention is established, it becomes embedded in the system and creates its own constituency. The fire suppression policy created a firefighting industry with budgets, jobs, and political support. The antibiotics pipeline created pharmaceutical companies that profit from antibiotic use. The monetary policy framework created financial institutions that depend on cheap money. Reversing an iatrogenic intervention is not just a technical challenge; it is a political and social challenge, because the intervention has created path-dependent structures that resist change. Chapter 21 will explore this dynamic in depth.


19.13 Pattern Library Checkpoint

Pattern: Iatrogenesis (Intervention-Caused Harm)

One-sentence statement: Interventions in complex systems reliably produce unintended consequences, and those consequences are often worse than the problem the intervention was designed to solve, because the costs of intervention are typically invisible or delayed while the benefits are immediate and salient.

Structural signature: An intervener applies a fix to a visible problem in a complex system. The fix addresses the visible problem but creates a new, less visible problem through second-order effects. The new problem may be worse than the original because it was not anticipated, not measured, and not accounted for in the decision to intervene.

Necessary conditions:

  1. A complex system with many interacting components and non-obvious causal pathways
  2. An intervener whose model of the system is incomplete (which is always the case for complex systems)
  3. An asymmetry in visibility between the benefits of intervention (immediate, measurable) and the costs (delayed, diffuse, invisible)
  4. Institutional or psychological bias favoring action over inaction

Domain examples:

| Domain | Intervention | Intended Effect | Iatrogenic Effect |
|--------|--------------|-----------------|-------------------|
| Medicine | Bloodletting | Restore humoral balance | Killed the patient through blood loss |
| Medicine | Antibiotics | Kill bacterial infection | Created antibiotic-resistant superbugs |
| Economics | Low interest rates | Stimulate economic recovery | Inflated housing bubble, leading to financial crisis |
| Economics | Austerity | Reduce government debt | Deepened recession, increased debt ratio |
| Foreign policy | Regime change | Install friendly government | Created power vacuum, empowered extremists |
| Software | Security patch | Fix vulnerability | Introduced new bugs, broke compatibility |
| Ecology | Fire suppression | Prevent wildfire | Created conditions for catastrophic megafire |
| Agriculture | Pesticides | Eliminate crop pests | Killed pollinators, created resistant pest strains |

Connection to other patterns:

  • Feedback loops (Ch. 2): the intervention spiral is a positive feedback loop
  • Goodhart's Law (Ch. 15): intervention metrics become targets that incentivize overtreatment
  • Redundancy (Ch. 17): removing "inefficient" redundancy is a common form of iatrogenic harm
  • Cascading failures (Ch. 18): iatrogenic interventions can trigger cascades in tightly coupled systems


19.14 Spaced Review: Concepts from Earlier Chapters

This chapter has made extensive use of concepts from Chapters 15 and 17. Before moving on, verify that your understanding of those concepts remains solid.

From Chapter 15 (Goodhart's Law):

The core insight of Goodhart's Law is that when a measure becomes a target, it ceases to be a good measure. In the context of iatrogenesis, intervention metrics -- procedures performed, bugs fixed, fires suppressed, policies enacted -- become targets that incentivize more intervention regardless of whether the intervention helps. Ask yourself:

  • Can you identify the Goodhart target in each iatrogenesis example discussed in this chapter? (The measure that incentivizes more intervention.)
  • Can you explain why measuring "procedures performed" in medicine produces different outcomes than measuring "patient health"?
  • Can you describe how the fire suppression policy's metric (acres burned, which the Forest Service wanted to minimize) functioned as a Goodhart target?

If any of these questions are difficult, revisit Chapter 15 before proceeding.

From Chapter 17 (Redundancy vs. Efficiency):

The core insight of Chapter 17 is that redundancy is not waste -- it is insurance against an uncertain future. In the context of iatrogenesis, the removal of "unnecessary" redundancy is itself an intervention that produces iatrogenic harm: the system becomes more efficient in normal times and more fragile in abnormal times. Ask yourself:

  • Can you explain how antibiotic resistance is the destruction of medical redundancy?
  • Can you identify how the Federal Reserve's low-interest-rate policy stripped redundancy (capital buffers, risk aversion) from the financial system?
  • Can you explain how fire suppression removed the ecological redundancy (natural fire regime) that maintained forest health?

If any of these questions are difficult, revisit Chapter 17 before proceeding.


19.15 Chapter Summary

Iatrogenesis -- harm caused by the healer -- is not confined to medicine. It is a universal pattern that appears wherever complex systems are subject to well-intentioned interventions by agents who do not fully understand the system's dynamics.

The pattern recurs across domains because complex systems share common properties that make them vulnerable to iatrogenic harm: they have many interacting components, non-obvious causal pathways, delayed feedback, and emergent behaviors that cannot be predicted from the properties of individual components. An intervention in such a system is necessarily based on an incomplete model, and an intervention based on an incomplete model will produce unintended consequences as reliably as the sun rises.

The intervention bias -- the systematic human preference for action over inaction -- ensures that iatrogenic harm is chronically overproduced. Psychological biases (action bias, illusion of control, narrative bias), institutional incentives (rewarding action, punishing inaction), and epistemological failures (the McNamara Fallacy, visibility asymmetry between benefits and costs) all push toward more intervention, faster intervention, and less careful accounting of intervention's costs.

The antidote is not the elimination of intervention -- some situations genuinely demand it -- but a reversal of the burden of proof. The Intervention Calculus holds that the burden of proof should be on the intervener: demonstrate that the expected benefits of your intervention, including its second-order and third-order effects, exceed the expected costs, including delayed and invisible costs. When you cannot make this demonstration, the default should be restraint. When you must act, prefer via negativa -- removing a known harm rather than adding an unknown intervention. And always, in every domain, remember the oldest and most violated principle of them all: first, do no harm.


Looking Ahead: Chapter 20 will explore the observer effect -- how the act of measuring a system changes the system being measured, from quantum mechanics to social science surveys to Hawthorne studies. Where this chapter showed how interventions cause harm, Chapter 20 will show how even passive observation distorts the thing being observed -- a subtler but equally pervasive form of interference.

Looking Back: This chapter built on feedback loops (Ch. 2), annealing (Ch. 13), Goodhart's Law (Ch. 15), and redundancy (Ch. 17). The intervention bias amplifies the efficiency trap from Ch. 17: the same pressure that strips redundancy from systems also generates unnecessary interventions that make systems worse. Goodhart's Law from Ch. 15 explains why intervention metrics incentivize overtreatment. The annealing framework from Ch. 13 illuminates why suppressing natural disturbances creates catastrophic fragility. Together, these chapters form a coherent account of how complex systems are damaged by the very institutions designed to protect them.