Learning Objectives
- Define the map-territory relation and explain why Korzybski's insight applies to every domain where humans create representations of reality
- Identify map-territory confusions in at least five domains: cartography, finance, medicine, language, and science
- Analyze how map-territory confusion escalates through three levels: using the map knowingly, forgetting it is a map, and defending the map against the territory
- Evaluate the connection between map-territory confusion and overfitting (Ch. 14), Goodhart's Law (Ch. 15), and legibility projects (Ch. 16)
- Distinguish between the error of having maps (no error -- maps are essential) and the error of forgetting they are maps (the source of most epistemic catastrophes)
- Apply the threshold concept -- All Knowledge Is Cartography -- to recognize that every theory, model, image, word, and measurement is a selective representation, never the thing itself
In This Chapter
- Maps, Models, Financial Instruments, Medical Imaging, Language
- 22.1 The Cartographer's Confession
- 22.2 The Man Who Named the Problem
- 22.3 The Model That Ate Wall Street
- 22.4 When the Scan Becomes the Patient
- 22.5 Language as a Map
- 22.6 All Models Are Wrong, Some Are Useful
- 22.7 The Three Levels of Map-Territory Confusion
- 22.8 The Usefulness of Maps -- A Necessary Defense
- 22.9 The Radical Conclusion: All Knowledge Is Cartography
- 22.10 Korzybski's Three Cautions
- 22.11 Part IV Opens: The Epistemological Turn
- Chapter Summary
Chapter 22: The Map Is Not the Territory -- How Every Field Learns (and Forgets) This Lesson
Maps, Models, Financial Instruments, Medical Imaging, Language
"The map is not the territory." -- Alfred Korzybski, Science and Sanity (1933)
22.1 The Cartographer's Confession
In 1569, a Flemish cartographer named Gerardus Mercator published a map of the world that would reshape how humanity understood the planet it lived on. The Mercator projection was designed to solve a specific, practical problem: navigation at sea. Sailors needed a chart on which a straight line corresponded to a constant compass bearing -- what navigators call a rhumb line. If you drew a line from Lisbon to Havana on Mercator's map and measured its angle from north, you could hold that compass bearing and arrive at your destination. For sixteenth-century mariners crossing the Atlantic, this was not an intellectual nicety. It was the difference between reaching port and dying at sea.
The projection worked brilliantly for its intended purpose. It was, and remains, a superb navigational tool. But it achieved this navigational accuracy through a specific geometric tradeoff: it preserved angles at the expense of areas. To keep compass bearings consistent on a flat surface, Mercator stretched the map progressively as it moved away from the equator. The result was a systematic distortion of relative size. Greenland appeared roughly the same size as Africa, when in reality Africa is fourteen times larger. Europe appeared to dominate the Northern Hemisphere, dwarfing the massive landmasses of South America and Southeast Asia. The polar regions ballooned to infinity.
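The geometry of the tradeoff can be made concrete. To keep compass bearings true, Mercator stretches north-south distances by the same factor the cylinder already stretches east-west ones, so areas inflate by the square of the secant of the latitude. A minimal numerical sketch (the function names are ours, for illustration):

```python
import math

def mercator_y(lat_deg):
    """Northward Mercator coordinate for a latitude, on a unit sphere."""
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))

def area_scale(lat_deg):
    """Factor by which the projection inflates areas at a latitude: sec^2(phi)."""
    phi = math.radians(lat_deg)
    return 1.0 / math.cos(phi) ** 2

print(area_scale(0))    # equator: no inflation
print(area_scale(72))   # Greenland's midlatitudes: roughly tenfold inflation
```

At around 72 degrees north the inflation factor is about 10.5, which is why Greenland, one-fourteenth the area of Africa, can look like its equal.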
For sailors, this was irrelevant. They did not need accurate relative sizes. They needed accurate compass bearings. The distortion was a feature, not a bug -- a deliberate sacrifice of one kind of accuracy to achieve another.
But then something happened that Mercator could not have anticipated. His navigational tool became the standard classroom map of the world. For four centuries, schoolchildren in Europe and North America grew up staring at a representation designed for sailors and internalized it as reality. Greenland was the size of Africa. Europe was the center of the world. The Northern Hemisphere was dominant and vast, the tropical regions small and peripheral.
The map had become the territory.
In 1973, a German historian named Arno Peters published what he called a "new" projection -- the Peters projection -- that preserved relative areas at the expense of the shapes Mercator had faithfully rendered. On Peters's map, Africa was enormous, Europe shrank to its true proportional size, and the developing world suddenly looked as large as it actually was. The Peters map generated a ferocious political controversy that continues to this day. Some cartographers dismissed it as a crude equal-area projection that distorted shapes beyond usefulness. Others championed it as a corrective to centuries of Eurocentric geographic imagination. The controversy was not really about cartography. It was about the political consequences of a map that had been mistaken for reality.
Here is the lesson that matters for this book: both maps were wrong. And both maps were useful. The Mercator projection distorted areas but preserved angles -- useful for navigation. The Peters projection distorted shapes but preserved areas -- useful for understanding relative size. Neither map was the territory. Both were selective representations, designed for specific purposes, that emphasized certain features at the expense of others.
Every map lies. Every map must lie. A map that did not lie -- that reproduced the territory in perfect, complete detail -- would be the territory itself, and therefore useless as a map. Jorge Luis Borges imagined exactly this in his one-paragraph story "On Exactitude in Science," in which a cartographic empire creates a map at 1:1 scale, coextensive with the territory itself. The map, of course, is perfectly accurate and perfectly useless. It is abandoned, left to decay at the edges of the empire, "inhabited by Animals and Beggars."
The usefulness of a map lies precisely in what it leaves out.
Fast Track: The map-territory relation is Korzybski's principle that every representation -- map, model, theory, image, word -- simplifies and distorts the reality it represents. This chapter traces the pattern across cartography, finance, medicine, language, and science, and develops the threshold concept: All Knowledge Is Cartography. If you already grasp the core insight, skip to Section 22.5 (Language as a Map) for the most philosophically provocative application, then read Section 22.7 (The Three Levels of Confusion) for the diagnostic framework, and finish with Section 22.9 for the synthesis.
Deep Dive: The full chapter traces the map-territory relation from literal cartography through financial modeling, medical imaging, linguistic relativity, and scientific theory, developing the insight that all human knowledge is a form of map-making. The two case studies extend the analysis to financial models (Case Study 1) and medical imaging and language (Case Study 2). For the richest understanding, read everything. This is the opening chapter of Part IV, and its central insight -- All Knowledge Is Cartography -- provides the epistemological foundation for every chapter that follows.
22.2 The Man Who Named the Problem
Alfred Korzybski was a Polish-American engineer, mathematician, and philosopher who spent years developing a system he called "general semantics" -- a framework for understanding how language and thought relate to reality. Much of his system has been forgotten, some of it fairly. But one phrase, from a paper he presented at the 1931 meeting of the American Association for the Advancement of Science, has outlived everything else he wrote: "The map is not the territory."
Korzybski's point was not about literal maps. He was making a claim about the fundamental nature of human knowledge. Every time we create a representation of something -- a word, a number, a theory, a model, an image, a category -- we are drawing a map. The representation captures certain features of the territory and omits others. It simplifies. It structures. It highlights some aspects and hides others. This is not a failure. This is what representations do. The failure comes when we forget that the representation is a representation -- when we begin treating the map as though it were the territory itself.
Korzybski called this confusion the process of identification -- identifying the map with the territory, the word with the thing, the model with reality. He argued that this confusion was the root cause of a vast range of human errors, from personal misunderstandings to national catastrophes. And he was more right than even he knew, because the map-territory confusion turns out to be one of the most universal patterns in this book.
Consider what Korzybski is actually claiming. He is not saying maps are bad. He is not saying models are useless. He is not saying we should abandon representations and somehow access reality directly. We cannot access reality directly -- our senses, our concepts, our language are all mapping mechanisms. What Korzybski is saying is that we must maintain a conscious awareness of the gap between representation and reality. We must remember that our models are models. We must notice when we start confusing the menu for the meal.
That phrase -- "confusing the menu for the meal" -- comes from the British-American philosopher Alan Watts, who used it as a vivid analogy for the same insight. You walk into a restaurant, pick up the menu, and read "grilled salmon with dill sauce." The words conjure an image, perhaps a memory, perhaps an anticipation. But you do not eat the menu. The words "grilled salmon" are not the fish. The description is not the described. Watts's point was that much of human suffering arises from exactly this confusion: we eat the menu, we chew on the description, we get angry at the words -- and we forget that the meal itself is something entirely different.
Connection to Chapter 16 (Legibility and Control): Korzybski's map-territory distinction is the theoretical foundation for the legibility problems we explored in Chapter 16. James C. Scott's legibility projects -- the state's attempt to make complex realities visible, countable, and manageable -- are map-making projects. Scientific forestry was a map of the forest that counted only the trees that produced revenue. The cadastral survey was a map of land ownership that erased the messy, overlapping, customary claims that actually governed peasant life. The modern surname was a map of identity that compressed the rich, contextual naming practices of traditional communities into a single, state-legible label. In every case, the map was created for administrative purposes, the map was simpler than the territory, and eventually the state tried to reshape the territory to match the map. Legibility is institutionalized map-territory confusion.
🔄 Check Your Understanding
- Explain why the Mercator projection is simultaneously an excellent map and a misleading one. What does this tell you about the nature of all maps?
- What did Korzybski mean by "identification," and why did he consider it the source of so many human errors?
- In your own words, explain Alan Watts's menu-meal analogy. Can you think of a situation in your own life where you have "eaten the menu"?
22.3 The Model That Ate Wall Street
In the year 2000, a Canadian mathematician and actuary named David X. Li published a paper in The Journal of Fixed Income titled "On Default Correlation: A Copula Function Approach." The paper introduced a mathematical formula -- the Gaussian copula -- that appeared to solve one of the most intractable problems in finance: how to quantify the correlation between different debt instruments defaulting at the same time.
The problem was crucial. By the late 1990s, Wall Street had developed an enormous market in collateralized debt obligations -- CDOs -- which were bundles of mortgages, credit card debts, auto loans, and other debt instruments packaged together and sold to investors as securities. The value of a CDO depended critically on default correlation: if one mortgage in the bundle defaulted, how likely was it that others would default too? If defaults were independent (one mortgage defaulting said nothing about others), then a CDO was extremely safe -- the chance of many defaults happening simultaneously was vanishingly small. If defaults were highly correlated (when one went, they all went), then CDOs were very risky.
Before Li's formula, there was no elegant way to estimate this correlation. Different analysts used different methods, got different answers, and could not easily compare their results. The market for CDOs was growing, but pricing them required extensive historical data, complex simulations, and a great deal of guesswork.
Li's Gaussian copula changed everything. It provided a single, mathematically tractable function that could estimate default correlation using a simple input: credit default swap prices, which were readily available in liquid markets. Instead of laboriously analyzing historical default data, a trader could plug CDS spreads into Li's formula and get a correlation number. It was elegant. It was efficient. It was, in the word that Wall Street loves, scalable.
Within a few years, the Gaussian copula was everywhere. Rating agencies used it to evaluate CDOs. Banks used it to price them. Regulators accepted it as a standard methodology. The formula did not merely describe the CDO market -- it enabled the market's explosive growth. Before the copula, CDOs were a niche product. After the copula, they became a multi-trillion-dollar industry.
The map had become the territory.
Here is what the formula assumed. The Gaussian copula modeled default correlations as though they followed a normal (Gaussian) distribution. This meant it assumed that extreme events -- situations where many debts defaulted simultaneously -- were extraordinarily rare. The formula worked well for moderate scenarios: a few defaults here, a few there, in roughly predictable patterns. But it systematically underestimated the probability of catastrophic scenarios where correlations spiked suddenly -- scenarios where the entire housing market collapsed at once.
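The tail problem can be made concrete with the standard one-factor form of the Gaussian copula, in which each loan defaults when a weighted mix of a shared market factor and an idiosyncratic shock falls below a threshold set by the loan's default probability. This is a deliberately simplified sketch, not Li's pricing machinery; the portfolio size, correlation values, and trial count are illustrative assumptions:

```python
import random
import statistics

def mass_default_prob(rho, n_loans=100, p_default=0.05,
                      threshold=30, trials=10_000, seed=1):
    """Monte Carlo estimate of P(at least `threshold` of `n_loans` default)
    under a one-factor Gaussian copula with asset correlation `rho`."""
    random.seed(seed)
    cutoff = statistics.NormalDist().inv_cdf(p_default)
    hits = 0
    for _ in range(trials):
        m = random.gauss(0, 1)  # shared market factor (e.g., the housing market)
        defaults = sum(
            rho ** 0.5 * m + (1 - rho) ** 0.5 * random.gauss(0, 1) < cutoff
            for _ in range(n_loans)
        )
        if defaults >= threshold:
            hits += 1
    return hits / trials

calm = mass_default_prob(rho=0.1)   # correlation calibrated to calm markets
panic = mass_default_prob(rho=0.6)  # correlation of a synchronized collapse
```

With the low correlation typical of calibration data, the model assigns a mass-default scenario a probability near zero; raise the correlation to crisis levels and the same portfolio produces that scenario a few percent of the time. The formula was not wrong about the numbers it was fed. It was fed a map of calm markets.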
Li himself understood the limitations. In a 2005 interview with the Wall Street Journal, he said, "The most dangerous part is when people believe everything coming out of it." He knew his formula was a map, not the territory. He knew it simplified. He knew it omitted the possibility of extreme correlation events. But the financial system did not share his caution. The formula was too useful, too profitable, too convenient to question. Questioning the formula meant questioning the entire CDO market, the billions in profits it generated, the bonuses it funded, the careers it built.
By 2007, when the U.S. housing market began to decline, the map and the territory violently diverged. Mortgage defaults were not independent. They were not even modestly correlated. They were catastrophically correlated -- when housing prices fell in one region, they fell everywhere, because the same economic forces (loose lending standards, stagnant wages, speculative bubbles) drove housing prices across the entire country. The Gaussian copula, which had assigned near-zero probability to this scenario, was suddenly confronted with a territory it had never mapped.
The result was the financial crisis of 2007-2008. CDOs that had been rated AAA -- the safest possible rating -- turned out to be worthless. Banks that had relied on the copula to price their risk discovered that they were catastrophically exposed. Lehman Brothers collapsed. Bear Stearns was absorbed. AIG required a government bailout that ultimately totaled roughly $182 billion. The global economy entered its worst recession since the 1930s.
The Gaussian copula did not cause the financial crisis by itself. Lax regulation, perverse incentives (see Chapter 21), excessive leverage, and systemic overconfidence all played roles. But the copula was the map that everyone used, and when the map said the territory was safe, people believed the map. The copula is perhaps the most expensive map-territory confusion in history -- a mathematical model that was mistaken for reality, with consequences measured in trillions of dollars and millions of lost jobs.
Connection to Chapter 14 (Overfitting): The Gaussian copula was, in a precise sense, the opposite of overfitting -- and yet it failed for a related reason. An overfitted model fits the training data too closely, capturing noise as though it were signal (Ch. 14). The copula was underfitted to the tails of the distribution -- it systematically ignored extreme events that did not appear in the historical data it was calibrated on. But both errors share the same root cause: confusing the model with reality. The overfitter says, "My model captures every detail of this dataset, therefore it captures reality." The copula user says, "My model produces a clean correlation number, therefore correlation is clean." Both forget that the model is a map. Both are punished when the territory diverges from the map.
Connection to Chapter 15 (Goodhart's Law): The Gaussian copula became a Goodhart target. When credit rating agencies began using the copula's output as the measure of CDO risk, the copula's correlation number became the target that banks optimized against. Banks structured CDOs to produce favorable copula numbers, not to produce genuinely safe investments. The measure (copula-derived correlation) became the target, and it ceased to be a good measure. Li's formula was supposed to describe risk. It ended up defining what "risk" meant, and the definition was wrong.
🔄 Check Your Understanding
- David X. Li knew his formula was a simplification. Why did the financial system treat it as though it were a complete description of reality? What incentive structures contributed to this map-territory confusion?
- Explain how the Gaussian copula's treatment of extreme events (tail risk) exemplifies the general problem with all maps: they must leave something out, and what they leave out determines when they fail.
- How does the copula story illustrate the difference between a map being wrong and a map being dangerous? Is a map dangerous because it is wrong, or because people forget it is a map?
22.4 When the Scan Becomes the Patient
In 1982, a radiologist at the Mayo Clinic examined a CT scan of a patient's abdomen and noticed something unexpected: a small, round mass on the patient's left adrenal gland. The mass had nothing to do with the reason for the scan. The patient had come in for abdominal pain, which turned out to be a kidney stone. The adrenal mass was an accidental finding -- a discovery made incidentally while looking for something else.
The radiologist reported the finding. A follow-up scan was ordered. Then a biopsy. Then a consultation with an endocrinologist. Then blood tests. Then another scan six months later. The mass never grew. It never produced symptoms. It never became malignant. It was, in all probability, a benign adenoma -- a harmless clump of cells that millions of people carry without ever knowing, because before modern imaging, there was no way to see it.
This harmless finding had a name. It was an incidentaloma -- a mass discovered incidentally on medical imaging that would never have been found if the imaging had not been performed for an unrelated reason. And the incidentaloma is one of the most consequential map-territory confusions in modern medicine.
The problem is not that the imaging technology is inaccurate. The scans are exquisitely accurate. They can detect masses measured in millimeters, abnormalities invisible to the naked eye, variations in tissue density that no physical examination could ever reveal. The problem is that the accuracy of the map has outstripped our ability to interpret what it shows. We can see things we cannot yet understand.
Before CT scans and MRIs, the human body was largely opaque to medicine. Doctors could feel lumps, hear heart murmurs, observe symptoms, and examine tissue removed during surgery or autopsy. Their map of the body was coarse -- it missed many things -- but what it found was almost always clinically significant. If you could feel a lump, the lump was big enough to matter.
Modern imaging created a new, extraordinarily detailed map of the body's interior. And on this new map, there were features that had never appeared on the old one -- not because they were new, but because the old map had been too coarse to detect them. Thyroid nodules. Adrenal masses. Tiny brain lesions. Cysts on kidneys, ovaries, livers. Structures that had existed all along, that were part of the normal variation of human anatomy, that caused no symptoms and posed no threat -- but that, on a scan, looked like something that might need treatment.
The medical term for the resulting problem is overdiagnosis: the detection and treatment of conditions that would never have caused symptoms or harm during the patient's lifetime. Overdiagnosis is not misdiagnosis -- the finding is real. The thyroid nodule exists. The prostate cancer cells are there. The lung nodule is not an artifact. The diagnosis is technically correct. The map is accurate. But the map is showing you features of the territory that do not matter for the journey you are on, and treating those features as though they were the destination.
The consequences of overdiagnosis are not trivial. A thyroid nodule discovered incidentally may lead to a biopsy, which may lead to a thyroidectomy, which leaves the patient on lifelong hormone replacement -- for a condition that would never have caused a single symptom. A slow-growing prostate cancer detected by screening may lead to surgery that causes incontinence and impotence -- for a cancer that would never have progressed to the point of harm. The patient was healthy. The scan found something. The something was treated. The treatment caused harm. The patient is now worse off than if the scan had never been performed.
This is iatrogenesis (Chapter 19) caused by map-territory confusion. The scan -- the map -- was too detailed, too sensitive, too good at finding things. And the medical system, trained to treat everything it finds, treated the map as though it were the patient. The scan showed an abnormality, therefore the patient was abnormal, therefore the patient needed treatment. The logic seems unassailable, but it rests on a hidden assumption: that everything the scan reveals is clinically meaningful. That assumption is the map-territory confusion at the heart of modern overdiagnosis.
The physician and writer H. Gilbert Welch, who has studied overdiagnosis extensively, puts it this way: "The problem is not that we find things. The problem is that we feel compelled to do something about everything we find." The compulsion to act on every finding is itself a map-territory confusion -- the assumption that if the map shows a feature, the feature must be significant in the territory of the patient's actual health and actual life.
Spaced Review -- Cascading Failures (Ch. 18): Overdiagnosis creates its own cascading sequence. An incidental finding triggers a follow-up scan. The follow-up scan reveals another incidental finding. A biopsy is ordered. The biopsy causes a complication. The complication requires treatment. Each step is individually rational -- each step follows logically from what the map shows. But the cascade as a whole was triggered by a finding that was never clinically significant. This is the cascade structure from Chapter 18: a small initial event (the incidental finding) triggers a sequence of increasingly consequential interventions, each one justified by the previous one, none of them justified by the patient's actual health needs.
🔄 Check Your Understanding
- Explain the paradox of overdiagnosis: how can a more accurate diagnostic tool lead to worse patient outcomes?
- Why is an incidentaloma a map-territory problem rather than a technology problem? What would have to change in the medical system's interpretation of imaging to reduce overdiagnosis?
- Connect the overdiagnosis problem to the Mercator projection: both involve maps that are technically accurate but misleading when used for a purpose they were not designed for. What is the equivalent of "using a navigational map as a political map" in the medical imaging context?
22.5 Language as a Map
Every word you have read in this book is a map.
That statement is not metaphorical. Language is the most pervasive, most invisible, and most consequential mapping system humans have ever created. Every word is a category, and every category is a simplification. The word "tree" maps an astonishing diversity of organisms -- from a two-meter birch sapling to a hundred-meter coast redwood, from a deciduous maple to an evergreen pine, from a rainforest strangler fig to a desert Joshua tree -- onto a single label. The word captures something real (these organisms share structural features: roots, trunk, branches, leaves or needles) and erases something real (the staggering differences between them). The word is useful. The word is a map. The word is not the territory.
The linguists Edward Sapir and Benjamin Lee Whorf proposed, in various forms during the 1930s and 1940s, what became known as the Sapir-Whorf hypothesis: the idea that the language you speak shapes the way you perceive and think about reality. In its strong form -- sometimes called linguistic determinism -- the hypothesis claims that language determines thought, that you literally cannot think things your language has no words for. In its weak form -- linguistic relativity -- it claims that language influences thought, making certain ideas easier or harder to think depending on the linguistic tools available.
The strong form has been largely discredited. People can think thoughts they have no words for -- otherwise, no new word could ever be coined, no concept could ever be invented that was not already named. But the weak form has accumulated substantial empirical support. The map of language does not determine what territory you can perceive, but it does influence what territory you are likely to notice, how you categorize it, and how easily you can reason about it.
Consider color. The Russian language has two basic words for blue: "goluboy" (light blue) and "siniy" (dark blue). These are not shades of one color in Russian -- they are different colors, as different as green and blue are in English. Research by Lera Boroditsky and colleagues has shown that Russian speakers are faster at discriminating light blue from dark blue than English speakers -- but only when the two blues cross the goluboy/siniy boundary. The linguistic distinction creates a perceptual advantage. The map sharpens the perception of the territory.
Or consider time. In Mandarin Chinese, time is often described using vertical metaphors: earlier events are "up" and later events are "down." In English, time is horizontal: earlier events are "behind" and later events are "ahead." Boroditsky's research suggests that these linguistic metaphors influence how speakers actually think about temporal relationships -- Mandarin speakers are slightly faster at processing temporal sequences when primed with vertical spatial cues, English speakers with horizontal ones.
Or consider the Kuuk Thaayorre people of Cape York Peninsula in Australia, whose language has no words for "left" and "right." Instead, all spatial relationships are described in cardinal directions -- north, south, east, west. A Kuuk Thaayorre speaker might say "there is an ant on your southwest leg" or "move the cup to the north-northwest." This linguistic mapping system requires its speakers to maintain a constant, precise awareness of cardinal orientation -- an ability that strikes most English speakers as almost superhuman. The map has shaped the navigator.
These are not just curiosities. They illustrate a profound point about the map-territory relation: the map you use shapes what you can see in the territory. Language is a map of reality that every speaker internalizes so deeply that they forget it is a map. The categories your language provides feel like the categories of reality itself. Blue is one color (if you speak English). Time does flow horizontally (if you speak English). Spatial relationships are relative to your body (if you speak English). These feel like facts about the world, not features of your mapping system.
The same principle applies to specialized languages -- the jargons and terminologies of professional fields. When economists speak of "the market," they map a bewildering variety of human interactions -- the village bazaar, the New York Stock Exchange, the implicit negotiations within a household, the dark web marketplace -- onto a single abstraction. The map is useful: it allows economists to identify structural similarities across very different contexts. But the map also erases differences that matter: the social relationships, the power dynamics, the cultural meanings, the emotional content of exchange. An economist who says "the market will adjust" is using a map that omits the human suffering that "adjustment" entails -- the lost jobs, the broken families, the communities hollowed out by capital flows. The map is not wrong. The map is a map.
The untranslatable words of other languages are windows into alternative maps. The Japanese "wabi-sabi" maps a territory that English has no single word for: the beauty found in imperfection, impermanence, and incompleteness. The Portuguese "saudade" maps a longing for something absent that is more than nostalgia and more than melancholy. The Danish "hygge" maps an experience of cozy, warm intimacy that the English word "coziness" only approximately captures. These are not just vocabulary gaps. They represent territories that one language has mapped and another has not -- features of human experience that one culture has deemed important enough to name and another has left unlabeled, and therefore harder to notice, discuss, and cultivate.
Spaced Review -- Legibility Traps (Ch. 20): Language is the ultimate legibility project. It takes the illegible, continuous, infinitely varied territory of experience and renders it legible through categories, labels, and grammar. This is precisely the process Chapter 20 analyzed: the imposition of a legible framework onto a complex reality, with inevitable loss of information. The legibility trap occurs when the framework starts shaping the reality instead of describing it -- when people begin experiencing the world through their linguistic categories rather than using language as a tool to approximate experience. This is linguistic relativity in a nutshell: the legibility project of language shapes the territory of perception.
22.6 All Models Are Wrong, Some Are Useful
In 1976, the British statistician George E. P. Box wrote a sentence that became one of the most quoted aphorisms in science: "All models are wrong, but some are useful."
Box's dictum, as it is sometimes called, is Korzybski's insight translated into the language of statistics and science. Every scientific model is a map. Every map simplifies. Every simplification is, in a strict sense, wrong -- it omits features of the territory that exist. But some simplifications are extraordinarily useful because they capture the features that matter for a particular purpose and ignore the ones that do not.
The most famous illustration is Newtonian mechanics. Newton's laws of motion and gravitation, published in 1687, provided a map of physical reality that was so accurate, so comprehensive, and so useful that it dominated physics for over two centuries. You can use Newton's laws to predict the trajectory of a cannonball, the orbit of a planet, the period of a pendulum, the tides of the ocean. The map works. Engineers still use it daily to design bridges, calculate rocket trajectories, and build skyscrapers.
And it is wrong.
Einstein's special and general relativity, published in 1905 and 1915, showed that Newton's map was a simplification of a deeper reality. Space and time are not the fixed, absolute stage that Newton assumed. They curve in the presence of mass. Time passes at different rates depending on velocity and gravitational field. Mass and energy are interchangeable. At velocities approaching the speed of light, or in the vicinity of extremely massive objects, Newton's predictions diverge significantly from observation. His map breaks down.
But here is the crucial point: Newton's map does not break down for the purposes it was designed to serve. At the speeds and scales of everyday experience -- cannonballs, bridges, pendulums, even planetary orbits -- Newton's predictions are so close to Einstein's that the difference is negligible. Newton's map is wrong, but it is useful. It captures the features that matter for engineering and everyday physics and omits the features (relativistic effects) that do not matter at ordinary scales. The wrongness is real but irrelevant for most purposes.
This is not a failure of Newtonian mechanics. It is the nature of all maps. Every map captures some features and omits others. A map that captured everything would be the territory itself -- Borges's 1:1 map, perfectly accurate and perfectly useless. Newton's map is useful precisely because it omits relativistic effects. Including them would add enormous complexity without improving predictions for the contexts where Newtonian mechanics is applied. The simplification is not a defect. It is the source of the map's power.
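How negligible is the wrongness at everyday scales? A quick calculation of the Lorentz factor -- the correction Einstein's map adds to Newton's -- makes the point concrete. (A minimal sketch; the speeds chosen are illustrative.)

```python
# Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2): the size of the
# relativistic correction to Newtonian predictions at a given speed.
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Relativistic time-dilation factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for label, v in [("airliner (~250 m/s)", 250.0),
                 ("Earth's orbital speed (~30 km/s)", 3.0e4),
                 ("half the speed of light", C / 2)]:
    print(f"{label}: gamma - 1 = {gamma(v) - 1:.3e}")
```

For an airliner the correction is on the order of one part in a trillion -- far below any engineering tolerance -- while at half the speed of light it exceeds fifteen percent. The map's boundary is real, but it lies far outside the region where bridges and cannonballs live.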
The same pattern appears across science. The ideal gas law -- PV = nRT -- is a map of gas behavior that treats gas molecules as dimensionless points with no intermolecular forces. This is wrong. Gas molecules have volume and they do interact. But for many purposes, the ideal gas law is extraordinarily useful. It becomes inaccurate at high pressures and low temperatures, where intermolecular forces matter -- that is, when the features the map omits become significant in the territory.
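The edges of the ideal gas map can be located numerically by comparing it against the van der Waals equation, which adds the two omitted features back in (molecular volume and intermolecular attraction). A minimal sketch, using the standard van der Waals constants for CO2 purely as an illustration:

```python
# Ideal gas law vs. van der Waals for 1 mol of CO2 at 300 K.
# The ideal map omits molecular volume (b) and attraction (a),
# so its error grows as the gas is compressed.
R = 8.314          # gas constant, J / (mol K)
A_CO2 = 0.3640     # Pa m^6 / mol^2, attraction term
B_CO2 = 4.267e-5   # m^3 / mol, excluded volume

def p_ideal(n, T, V):
    """Pressure from PV = nRT."""
    return n * R * T / V

def p_vdw(n, T, V, a=A_CO2, b=B_CO2):
    """Pressure from the van der Waals equation."""
    return n * R * T / (V - n * b) - a * n * n / V**2

for V in (0.0248, 0.001):  # roughly 1 atm vs. roughly 25 atm
    pi, pv = p_ideal(1, 300, V), p_vdw(1, 300, V)
    print(f"V = {V} m^3: ideal {pi:.3e} Pa, vdW {pv:.3e} Pa, "
          f"relative gap {abs(pi - pv) / pv:.1%}")
```

Near atmospheric pressure the two maps agree to within about half a percent; compress the same gas into a liter and the gap grows past ten percent. The omitted features have become significant in the territory.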
The Bohr model of the atom, with electrons orbiting a nucleus like planets around a sun, is wrong. Electrons do not orbit in defined paths. They exist in probability clouds described by quantum mechanics. But the Bohr model is useful for understanding basic atomic structure, spectral lines, and chemical bonding at an introductory level. It breaks down when you need to understand molecular orbital theory or quantum tunneling -- when the features the map omits become relevant.
The point is not that these models are failures. The point is that they are maps, and maps have boundaries. Within those boundaries, they are extraordinarily useful. Outside those boundaries, they are misleading or useless. The error is not in having a map with boundaries. The error is in forgetting that the boundaries exist.
Box's dictum -- "all models are wrong, but some are useful" -- can be extended: all models are wrong, some are useful, and the most dangerous models are the useful ones, because their usefulness makes us forget they are models. A model that fails immediately is abandoned quickly. A model that works well for decades -- like Newtonian mechanics, like the Gaussian copula, like the Mercator projection -- accumulates a kind of epistemic authority that makes it feel less like a model and more like reality itself. The map's success is the source of its danger.
🔄 Check Your Understanding
- Explain Box's dictum in your own words. Why is the word "but" in "all models are wrong, but some are useful" the most important word in the sentence?
- Why does a model's success make it more dangerous, not less? Connect this to the financial crisis example from Section 22.3.
- Pick a model, theory, or framework from your own field of study or work. In what domain is it useful? Where does it break down? What features of reality does it omit, and when do those omissions matter?
22.7 The Three Levels of Map-Territory Confusion
Not all map-territory confusions are equal. There is a crucial difference between a navigator who uses the Mercator projection knowing it distorts areas, a schoolchild who believes Greenland is the size of Africa because the classroom map says so, and a political ideologue who insists that any map showing Africa as larger than Europe is propaganda. These represent three escalating levels of confusion, each more dangerous than the last.
Level 1: Knowing the map is not the territory, but using it anyway. This is the ideal relationship between map and territory. The sixteenth-century navigator who used Mercator's projection knew that Greenland was not the size of Africa. The navigator needed accurate compass bearings, and the Mercator map provided them. The distortion was understood, accepted, and irrelevant to the navigator's purpose. This is how scientists use models: they know Newtonian mechanics is "wrong" in the Einsteinian sense, but they use it anyway because it is useful for their purposes and its limitations are understood.
Level 1 is the relationship Box's dictum advocates. Use the model. Appreciate its usefulness. But never forget that it is a model. Maintain what Korzybski called "consciousness of abstracting" -- the ongoing awareness that you are working with an abstraction, not with reality itself.
Level 2: Forgetting the map is not the territory. This is where most map-territory disasters begin. The schoolchild who grows up with the Mercator projection on the classroom wall and internalizes it as geographic reality. The bank trader who uses the Gaussian copula every day and gradually stops thinking about its assumptions. The doctor who orders treatment for every incidentaloma because the scan showed something and the scan is "objective." The economist who talks about "the market" as though it were a natural force rather than a human abstraction.
Level 2 is the default state for most people in most domains. We forget that our maps are maps because they work. The more a map works, the more transparent it becomes -- we see through it to the territory, or rather, we see the territory as the map depicts it, and we stop noticing the map at all. This is not stupidity. It is cognitive efficiency. If you had to constantly remind yourself that every word, every concept, every theory you use is a simplification of reality, you would be paralyzed. Some level of map-territory identification is necessary for functional thought and action. The danger arises when the identification becomes total -- when the gap between map and territory closes entirely in your mind, and you lose the ability to notice it even when someone points it out.
Level 3: Defending the map against the territory. This is the most dangerous level, and it is more common than you might think. Level 3 occurs when someone has invested so deeply in a particular map -- intellectually, emotionally, professionally, financially -- that evidence contradicting the map is perceived not as information about the territory but as an attack on the map. The map has become part of the person's identity, and defending the map feels like defending the self.
The financial industry's response to early warnings about the Gaussian copula was Level 3. By 2005, some analysts were pointing out that the copula underestimated tail risk. The response was not to examine the map more carefully. It was to dismiss the critics. The copula was too profitable, too embedded in the industry's operations, too central to the careers of the people who used it. Questioning the copula meant questioning the entire business model. So the map was defended against the territory.
The same dynamic appears in science. When the territory of empirical evidence contradicts a well-established theory, the first response is often not to question the theory but to question the evidence. This is not always irrational -- anomalous evidence should be checked carefully before overturning established knowledge. But when the defense of the theory becomes an end in itself -- when the theory is treated not as a useful map but as a truth to be protected -- the process has crossed from Level 1 (using the map knowing it is a map) to Level 3 (defending the map against the territory). Thomas Kuhn's analysis of scientific revolutions, which we will examine in Chapter 24, is fundamentally about the transition from Level 1 to Level 3 and the crisis that eventually forces the adoption of a new map.
The three levels form a diagnostic framework:
| Level | Relationship | Example | Danger |
|---|---|---|---|
| 1 | Map used consciously as a tool | Navigator using Mercator for compass bearings | Low -- user understands distortions |
| 2 | Map mistaken for territory | Student believing Greenland is the size of Africa | Medium -- user cannot see distortions |
| 3 | Map defended against territory | Trader dismissing critics of the Gaussian copula | High -- user actively resists correction |
The progression from Level 1 to Level 3 is driven by several factors: time (the longer you use a map, the more natural it feels), success (the more the map works, the more you trust it), investment (the more you have built on the map, the more costly it is to question), and community (when everyone around you uses the same map, questioning it feels like madness).
Connection to Chapter 14 (Overfitting): There is a deep structural parallel between map-territory confusion and overfitting. An overfitted model (Ch. 14) has confused the noise in the training data with signal -- it has mistaken artifacts of the particular dataset for features of the underlying reality. This is a map-territory confusion at the level of data: the model treats the map (the dataset) as the territory (the underlying process). When the model encounters new data (new territory), it fails because it was fitted to the old map, not to the territory. The cure for overfitting -- regularization, cross-validation, out-of-sample testing -- is essentially a discipline for maintaining Level 1 awareness: use the model, but remember it is a model.
22.8 The Usefulness of Maps -- A Necessary Defense
At this point in the chapter, it would be easy to walk away with the impression that maps are bad. They distort. They mislead. They cause financial crises and medical overdiagnosis and political misunderstanding. If the map is not the territory, why use maps at all?
This would be precisely the wrong conclusion, and it is worth pausing to make the defense of maps explicit.
Maps are not bad. Maps are essential. Maps are, in fact, the only tools we have.
We never access the territory directly. Our senses are maps -- they sample certain frequencies of light and sound and touch and translate them into neural signals that our brains interpret. Our perception is a map -- it constructs a coherent model of the world from fragmentary, noisy sensory data. Our concepts are maps -- they group the infinite particulars of experience into categories we can reason about. Our language is a map -- it translates the continuous, multidimensional flow of experience into discrete symbols that can be communicated. Our scientific theories are maps -- they extract regularities from observation and express them in forms that allow prediction and manipulation.
Without maps, we are blind. A world without models is not a world of clear, unmediated perception. It is a world of overwhelming, undifferentiated, incomprehensible noise. The territory without a map is not more real -- it is less usable. The infant perceives the territory more directly than the adult, and the infant is helpless precisely because it lacks the maps that would allow it to navigate, predict, and act.
The error is not in having maps. The error is in forgetting they are maps.
This distinction matters because the anti-map position -- the claim that all models are distortions and should therefore be abandoned -- is itself a map-territory confusion, just in the opposite direction. The person who says "all maps are lies" is confusing the map's purpose (to be useful) with the map's nature (to be incomplete). A lie is a deliberate misrepresentation designed to deceive. A map is a deliberate simplification designed to be useful. These are not the same thing. Calling every map a lie is like calling every diet starvation -- it confuses reduction with elimination, selection with destruction.
The right relationship with maps is Level 1: use them consciously, appreciate their power, understand their limits, and maintain the awareness that they are maps. Know which features they preserve and which they distort. Know when they work and when they break down. And when the territory sends back data that contradicts the map, update the map.
This last point is critical. The willingness to update the map in response to the territory is the hallmark of effective thinking -- in science, in medicine, in finance, in everyday life. The Bayesian reasoner (a concept we will encounter more fully later) treats every model as provisional, every belief as a map that should be revised when new evidence arrives. The scientist who abandons a theory when the evidence demands it is not weak -- they are maintaining the correct relationship between map and territory. The person who holds onto a map in the face of contradicting territory is not strong -- they are confused.
Pattern Library Checkpoint (Phase 2): You have now encountered the map-territory relation across five domains: cartography, finance, medicine, language, and science. Add this pattern to your Pattern Library and note its connections to at least three earlier patterns: overfitting (Ch. 14) is fitting a map too closely to noisy terrain; Goodhart's Law (Ch. 15) occurs when a map-derived metric becomes a target; legibility projects (Ch. 16) are institutionalized map-making that reshapes the territory. As you continue through Part IV, notice how the map-territory relation underlies tacit knowledge (Ch. 23 -- knowledge that cannot be fully mapped), paradigm shifts (Ch. 24 -- replacing one map with another), and boundary objects (Ch. 27 -- maps shared between different communities).
🔄 Check Your Understanding
- Why would abandoning all maps and models be itself a map-territory error? What does it confuse?
- What is the difference between a map that is wrong and a map that is useless? Can a wrong map be useful? Can a correct map be useless?
- Describe a situation in which having no map would be worse than having a flawed map. What does this tell you about the value of incomplete knowledge?
22.9 The Radical Conclusion: All Knowledge Is Cartography
We are now ready for the threshold concept of this chapter, and it is more radical than it might first appear.
All knowledge is cartography.
This is not a metaphor. It is a structural claim about the nature of human understanding. Every piece of knowledge you possess -- every fact, every theory, every concept, every perception -- is a map. It is a selective representation of some aspect of reality, created by a mapping process (sensing, measuring, theorizing, conceptualizing, naming), that captures certain features and omits others. You never hold the territory in your mind. You hold maps of the territory. Your entire cognitive life is cartographic.
Consider what this means:
Scientific theories are maps. General relativity maps the large-scale structure of spacetime. Quantum mechanics maps the behavior of subatomic particles. Evolution maps the development of life over time. Each captures aspects of the territory with extraordinary precision. Each omits aspects that the other captures. And the fact that general relativity and quantum mechanics are mutually inconsistent -- that they produce contradictory predictions in certain extreme conditions -- does not mean one is true and the other false. It means they are different maps of the same territory, each accurate within its domain and each distorting features that the other faithfully represents. The search for a "theory of everything" is a search for a single map that captures everything both existing maps capture without the contradictions. It may be that such a map is impossible -- that the territory is too complex for any single map to capture completely. If so, our best maps will always be plural, partial, and perspectival.
Medical diagnoses are maps. A diagnosis maps the patient's condition onto a category: "diabetes," "depression," "breast cancer stage II." The category is useful -- it triggers treatment protocols, enables communication between professionals, and connects the patient to research and support. But the category is not the patient. Every patient with "depression" has a unique constellation of symptoms, causes, circumstances, and responses to treatment. The diagnosis maps this unique territory onto a shared label that is useful but inevitably imprecise. The best clinicians maintain Level 1 awareness: they use the diagnosis as a map while remaining attentive to the ways the individual patient diverges from the diagnostic category.
Economic theories are maps. Supply and demand curves map the behavior of markets. GDP maps the economic output of a nation. Inflation rates map the changing purchasing power of money. Each of these is a useful simplification that captures certain features of economic reality and omits others. GDP, for example, counts market transactions but not unpaid domestic labor, not ecosystem services, not leisure time, not wellbeing. A nation could increase its GDP by cutting down all its forests and selling the timber -- the GDP map would show growth while the territory experienced destruction.
Personal beliefs are maps. Your understanding of a friend, a coworker, a family member is a map of that person -- a simplified model that captures the features you have noticed and omits the features you have not. Your political beliefs are maps of how society works. Your religious or philosophical commitments are maps of how the universe is structured and what matters in it. None of these are the territory. All of them are useful to the extent that they help you navigate, predict, and act. All of them are dangerous to the extent that you forget they are maps.
The threshold concept -- All Knowledge Is Cartography -- transforms the question you ask about any piece of knowledge. The old question is: "Is this true?" The new question is: "How useful is this map, and what does it distort?"
The old question treats knowledge as a binary: true or false, correct or incorrect, right or wrong. The new question treats knowledge as a tool: useful or useless, accurate for this purpose or inaccurate for this purpose, reliable within these boundaries or unreliable beyond them. The new question does not abandon truth -- a map can be more or less accurate, and accuracy matters enormously. But it embeds truth within a richer framework that includes purpose, context, limitation, and perspective.
This is not relativism. Relativism says, "All maps are equally valid." The map-territory relation says, "All maps are incomplete, but some are more accurate, more useful, and more honest about their limitations than others." A map that shows Greenland as the size of Africa is worse than a map that shows their true relative sizes, if your purpose is understanding geography. But it is better if your purpose is navigating by compass. Quality depends on purpose. Purpose depends on context. And context is something the map user must supply -- the map itself cannot tell you what it is for.
22.10 Korzybski's Three Cautions
Before we leave this chapter, it is worth noting that Korzybski's full formulation of the map-territory relation included three principles, not just one:
- The map is not the territory. (The representation is not the thing represented.)
- The map does not cover all of the territory. (No representation is complete -- there are always features of reality that the map omits.)
- The map is self-reflexive. (The map can be included in the territory -- we can make maps of maps, models of models, and the process of mapping is itself part of the territory being mapped.)
The first principle we have explored throughout this chapter. The second is equally important: every map has edges, and beyond those edges lie features of reality that the map does not represent. The ideal gas law has edges (high pressure, low temperature). The Gaussian copula had edges (extreme correlation events). The Mercator projection has edges (the poles). Medical imaging has edges (the significance of what it finds). Language has edges (the untranslatable concepts that other languages map and yours does not).
The third principle is the most subtle and the most relevant to the rest of Part IV. We make maps of maps. Science is a map of reality, and the philosophy of science is a map of science. Language describes the world, and metalanguage describes language. This self-reflexive quality means that our maps are always, in part, maps of our own map-making process. We cannot step entirely outside our maps to compare them to the territory, because the stepping-outside itself requires a map. We are cartographers who live on our own maps.
This is not a cause for despair. It is a cause for humility. We can make better maps. We can compare maps to each other. We can look for inconsistencies, test predictions, and update our maps when the territory sends back data we did not expect. We can maintain Level 1 awareness -- using our maps consciously, gratefully, and provisionally. We cannot achieve a view from nowhere, a God's-eye perspective that sees the territory as it truly is. But we can achieve something almost as good: a view from many somewheres, a collection of maps drawn from different perspectives, each illuminating aspects of the territory that the others miss.
That is, in fact, the project of this entire book. Cross-domain pattern recognition is the art of collecting maps from many domains and comparing them to each other. When the same pattern appears on maps drawn by cartographers who have never met -- when biologists, economists, engineers, and psychologists independently map the same structural feature -- we gain confidence that the feature is not an artifact of any single map. It is something in the territory.
The map is not the territory. But a thousand maps, drawn from a thousand perspectives, triangulated against each other -- that is as close to the territory as cartographers will ever get.
🔄 Check Your Understanding
- Explain the difference between "Is this true?" and "How useful is this map, and what does it distort?" Why is the second question more productive than the first in most practical contexts?
- Why is the claim "All Knowledge Is Cartography" not the same as relativism? What is the difference between saying "all maps are incomplete" and "all maps are equally valid"?
- Korzybski's third principle says the map is self-reflexive -- we can make maps of maps. Give an example from your own field where a "meta-map" (a theory about theories, a model of models) is used. What does the meta-map capture, and what does it omit?
22.11 Part IV Opens: The Epistemological Turn
This chapter opens Part IV -- How Knowledge Works -- with a deceptively simple observation: every representation is incomplete. The chapters that follow build on this foundation in ways that may surprise you.
Chapter 23 (Tacit Knowledge) will explore knowledge that resists mapping entirely -- the skills, intuitions, and understandings that experts possess but cannot articulate, that transfer through apprenticeship but not through textbooks. If all knowledge is cartography, tacit knowledge is the territory that refuses to be mapped.
Chapter 24 (Paradigm Shifts) will examine what happens when one map of reality is replaced by another -- the painful, often violent process by which scientific communities abandon familiar maps and adopt new ones. Kuhn's paradigm shifts are, in the language of this chapter, the replacement of one community-wide map with another, and the resistance to paradigm change is Level 3 map-territory confusion at institutional scale.
Chapter 25 (The Adjacent Possible) will investigate why certain maps become possible at certain times -- why the territory of innovation seems to have a structure that determines which maps can be drawn next. Chapter 26 (Multiple Discovery) will show that when the territory is ready to be mapped, multiple cartographers will draw the same map independently. And Chapter 27 (Boundary Objects) will examine how different communities share maps across their borders -- how objects, concepts, and frameworks serve as maps that mean slightly different things in different territories but enable communication nevertheless.
The map is not the territory. But the art of making, using, evaluating, and replacing maps is the central activity of human intelligence. Part IV is the study of that art.
Chapter Summary
The map-territory relation is Korzybski's principle that every representation -- map, model, theory, image, word -- simplifies and distorts the reality it represents. This chapter traced the principle across five domains:
- Cartography: Mercator's projection preserved compass bearings at the cost of distorting area, and then became the standard classroom map, distorting geographic understanding for centuries.
- Finance: The Gaussian copula provided a mathematically elegant model of default correlation that systematically underestimated extreme events, contributing to the 2007-2008 financial crisis.
- Medicine: Modern imaging technology detects incidentalomas and produces overdiagnosis -- technically accurate findings that lead to unnecessary treatment and harm.
- Language: The Sapir-Whorf hypothesis reveals that language itself is a mapping system that shapes perception and thought, making certain aspects of reality more or less visible depending on linguistic structure.
- Science: Box's dictum -- "all models are wrong, but some are useful" -- captures the essential insight: scientific models are maps whose value lies in their selective accuracy, not in their completeness.
The chapter identified three levels of map-territory confusion: (1) knowing the map is not the territory but using it anyway (the ideal), (2) forgetting the map is not the territory (the common error), and (3) defending the map against the territory (the dangerous error). The threshold concept -- All Knowledge Is Cartography -- reframes the fundamental question of knowledge from "Is this true?" to "How useful is this map, and what does it distort?"
Maps are not bad. Maps are essential. The error is never in having a map. The error is in forgetting you have one.