Learning Objectives

  • Trace the full history of the neural network suppression (1969–2012) as the book's deepest anchor example, identifying every failure mode that operated
  • Analyze tech's unique failure mode — capital sustaining wrong ideas — and explain why venture funding changes the error dynamics compared to other fields
  • Evaluate the 'connecting the world' narrative as a case study in unfalsifiable tech utopianism
  • Assess the myth of disruption and its function as tech's version of the revision myth
  • Apply the Correction Speed Model to the technology sector and compare its correction dynamics to other fields

Chapter 29: Field Autopsy: Technology

"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." — Roy Amara (Amara's Law)

Chapter Overview

In 1969, Marvin Minsky and Seymour Papert — two of the most brilliant and respected figures in artificial intelligence — published Perceptrons, a mathematical analysis of a class of neural network models. The book demonstrated, rigorously, that single-layer perceptrons could not solve certain important problems, including the XOR function. The analysis was correct.

But the conclusion the field drew from the analysis was catastrophic. Perceptrons didn't just critique single-layer networks — it was widely interpreted as a proof that neural networks in general were a dead end. Funding dried up. Research programs were shuttered. Graduate students were told not to study neural networks. An entire approach to artificial intelligence — the approach that would eventually produce the most transformative technology of the twenty-first century — was effectively killed for nearly three decades by a single prestigious critique.

The neural network story is the purest case study in this entire book. It contains every major failure mode operating simultaneously: authority cascade, sunk cost, the outsider problem, Einstellung, consensus enforcement, and the revision myth. It also contains something unique to the technology industry — the role of capital in sustaining and suppressing ideas — that distinguishes tech's failure modes from those of every other field examined in Part IV.

This chapter examines how the technology sector — the industry that celebrates disruption, rewards iconoclasts, and treats the status quo as something to be overthrown — is subject to the same structural forces that trap every other field. And it examines tech's unique failure mode: so much capital that wrong ideas can survive through sheer funding long after the evidence has turned against them.

In this chapter, you will learn to:

  • Trace the neural network suppression as a complete case study in every major failure mode
  • Identify capital-sustained error as tech's distinctive failure mode
  • Evaluate tech narratives (social media, autonomous vehicles, crypto) against the failure mode framework
  • Assess the disruption myth as tech's version of the revision myth

🏃 Fast Track: If you're familiar with the AI winter history, skim section 29.1 and focus on 29.2–29.4, which identify the structural failure modes unique to the technology sector.

🔬 Deep Dive: After this chapter, read Nils Nilsson's The Quest for Artificial Intelligence (2010) for the definitive history of AI research, and Carlota Perez's Technological Revolutions and Financial Capital (2002) for the theory of how capital creates and sustains technology bubbles.


29.1 The Neural Network Suppression: A Complete Failure Mode Autopsy

We have encountered the neural network story before — in Chapter 1 (as a preview), Chapter 2 (authority cascade), Chapter 13 (Einstellung), and Chapter 17 (Planck's principle). Now it receives its full treatment, because no other case in this book illustrates as many failure modes operating simultaneously.

The Promise (1940s–1960s)

The idea that artificial neural networks could be used for computation dates to the 1940s, when Warren McCulloch and Walter Pitts published a mathematical model of neural activity. In 1958, Frank Rosenblatt at Cornell built the Perceptron — a physical machine that could learn to classify patterns through a simple learning algorithm. The Perceptron was, by modern standards, primitive. But it demonstrated something extraordinary: a machine that learned from data rather than being explicitly programmed.

The early results generated enormous excitement. Rosenblatt made ambitious predictions about what neural networks would eventually accomplish — pattern recognition, language understanding, even consciousness. The New York Times covered the Perceptron in 1958, reporting that the Navy expected it to be able to "walk, talk, see, write, reproduce itself and be conscious of its existence." The hype was premature, but the fundamental insight — that machines could learn from examples rather than being hand-coded — was correct and would eventually transform the world.

The excitement attracted funding, researchers, and attention. By the mid-1960s, neural network research was a significant component of the broader AI enterprise.

The Critique (1969)

Marvin Minsky, co-founder of the MIT Artificial Intelligence Laboratory, was the most influential figure in AI. His approach to artificial intelligence was symbolic — he believed that intelligence required the manipulation of symbols and rules, not the statistical pattern-matching of neural networks. This was not an unreasonable position; symbolic AI had produced impressive results in game-playing, theorem-proving, and natural language understanding.

Minsky and his MIT colleague Seymour Papert published Perceptrons in 1969. The book was a rigorous mathematical analysis demonstrating the limitations of single-layer perceptrons. The key result: single-layer perceptrons could not compute certain functions, including the XOR (exclusive or) function — a fundamental logical operation. This was mathematically correct and important.
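
The XOR limitation is easy to verify directly. Below is a minimal sketch (plain Python written for this chapter, not code from Minsky and Papert; the learning rate and epoch count are arbitrary illustrative choices) that trains a single-layer perceptron with Rosenblatt's learning rule on two functions: AND, which is linearly separable, and XOR, which is not.

```python
# A single-layer perceptron: one weight per input, a bias, a hard threshold.
def train_perceptron(samples, epochs=100, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if (w0 * x0 + w1 * x1 + b) > 0 else 0
            err = target - pred          # Rosenblatt's update rule
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

def accuracy(weights, samples):
    w0, w1, b = weights
    correct = sum(
        (1 if (w0 * x0 + w1 * x1 + b) > 0 else 0) == target
        for (x0, x1), target in samples
    )
    return correct / len(samples)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND:", accuracy(train_perceptron(AND), AND))  # 1.0 -- linearly separable
print("XOR:", accuracy(train_perceptron(XOR), XOR))  # never 1.0 -- no linear separator exists
```

No amount of extra training changes the XOR result: the failure is geometric, not a matter of effort, which is precisely what Minsky and Papert proved.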

But the book's impact went far beyond its technical content. Minsky and Papert suggested — in their text and even more in their public statements — that multi-layer networks were unlikely to overcome these limitations. This suggestion was not a proven theorem. It was an extrapolation, an opinion, a bet about the future of a research program. But because it came from Marvin Minsky, it was treated as authoritative.

The Authority Cascade

Here is where the failure mode framework illuminates what happened.

The prestige heuristic (Chapter 2). Minsky was not just any AI researcher. He was the AI researcher — co-founder of the MIT AI Lab, recipient of the Turing Award (1969), the person who more than anyone else defined the field. When Minsky said neural networks were a dead end, the field did not evaluate the claim on its merits. It evaluated the claim based on who was making it. The prestige of the source substituted for independent evaluation of the argument.

Citation amplification (Chapter 2). Perceptrons was cited as proof that neural networks were fundamentally flawed — even by people who hadn't read the book carefully enough to notice that its results applied only to single-layer networks. The citation cascade propagated the conclusion without propagating the caveats. Within a few years, the conventional wisdom in AI was that "Minsky and Papert proved that neural networks don't work" — a significant distortion of what the book actually showed.

Funding as enforcement mechanism. Research funding agencies — DARPA, the NSF, corporate labs — relied on expert opinion to allocate funding. When the leading figure in AI declared that neural networks were a dead end, funding committees followed. Neural network research proposals were rejected not because they were evaluated and found wanting, but because the field's most prestigious authority had declared the approach bankrupt. This is the consensus enforcement machine (Chapter 14) operating through funding rather than peer review.

🔗 Connection: Compare the Perceptrons critique to the gastroenterology establishment's rejection of H. pylori (Chapter 1). In both cases, a correct approach was suppressed not because the evidence was evaluated and rejected, but because the most prestigious authorities in the field dismissed it. The mechanism is identical — authority cascade — but the medium differs. In medicine, the cascade operated through peer review and professional ridicule. In AI, it operated through funding decisions and graduate advising. The effect was the same: a correct idea was exiled for decades.

What It Looked Like From Inside: The AI Researcher in 1975

Consider the position of a graduate student in artificial intelligence in 1975:

  • You want to study neural networks. You find the idea of machines learning from data compelling.
  • Your advisor tells you that Minsky and Papert have proven that neural networks are a dead end. Your advisor has read Perceptrons — or at least knows the conclusions.
  • The funding agencies are not funding neural network research. If you pursue this topic, you will not get grants.
  • The conferences are not accepting neural network papers. If you work on this topic, you will not be published.
  • The hiring committees at top universities are not hiring neural network researchers. If you build a career on this topic, you will not get a tenure-track position.
  • The few researchers who are still working on neural networks are doing so in relative obscurity, outside the mainstream AI community.

What do you do? The rational decision — the career-preserving, funding-maximizing, reputation-protecting decision — is to work on symbolic AI. And that is what almost everyone did. The neural network community shrank to a handful of researchers working at the margins of the field, sustained by intellectual conviction rather than institutional support.

This is the outsider problem (Chapter 18) in its purest form. The researchers who continued working on neural networks — Geoffrey Hinton, Yann LeCun, Yoshua Bengio, and a small community of others — did so at significant professional cost. They were not expelled from the field, but they were marginalized within it. Their papers were published in obscure venues. Their students had difficulty finding academic positions. Their work was treated as a curiosity rather than a serious research program.

🧩 Productive Struggle

Before reading the next section, consider: The neural network researchers who persisted through the AI winter were eventually vindicated — Hinton, LeCun, and Bengio received the Turing Award in 2018. But their vindication was not driven by the AI community evaluating the evidence and changing its mind. What actually caused the reversal? What changed between 1985 and 2012 that made neural networks work?

Spend 3–5 minutes, then read on.

The Vindication (1986–2012)

The reversal did not happen through persuasion. It happened through circumvention — exactly as Planck's principle (Chapter 17) predicts, but with a twist.

1986: Backpropagation. David Rumelhart, Geoffrey Hinton, and Ronald Williams published a paper demonstrating that multi-layer neural networks could be trained using the backpropagation algorithm. This directly addressed the Perceptrons critique — multi-layer networks could solve the problems that single-layer networks could not, including XOR. The paper was important. It was published in Nature. But it did not, by itself, change the field's consensus. Symbolic AI remained dominant.
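
To see why backpropagation answered the critique, here is a companion sketch: a hand-rolled two-layer network in NumPy, illustrative rather than the 1986 paper's actual code (the layer sizes, learning rate, and seed are all assumed choices), that learns the XOR function the perceptron above could not.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the function Perceptrons showed single-layer networks cannot compute.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden layer of 4 units
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                   # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1.0 - out)      # backprop: squared-error gradient
    d_h = (d_out @ W2.T) * h * (1.0 - h)       # ...pushed back through the hidden layer
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))  # typically converges toward [0, 1, 1, 0]
```

The only structural change from the perceptron is the hidden layer, plus a rule for assigning credit through it; that rule is what Rumelhart, Hinton, and Williams supplied.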

Why? Because the backpropagation results, while theoretically important, were not practically impressive enough to overcome the institutional momentum behind symbolic AI. The networks were small (a few hundred neurons), the training was slow, and the results on practical problems were modest. The technology was ahead of its time — the hardware needed to train large networks did not yet exist.

1990s–2000s: The quiet years. A small community continued working on neural networks — Hinton at the University of Toronto, LeCun at Bell Labs and later NYU, Bengio at the University of Montreal. They made steady progress: convolutional neural networks (LeCun), restricted Boltzmann machines (Hinton), recurrent networks, and theoretical advances in training deep networks. But the work remained marginal within the broader AI community. Mainstream AI conferences continued to favor symbolic and statistical approaches.

2006–2012: The hardware catches up. Three things changed simultaneously:

  1. Computation. GPUs (graphics processing units), originally designed for video game rendering, turned out to be extraordinarily efficient for neural network training. A computation that might have taken months on a 1990s CPU could be completed in hours on a modern GPU.

  2. Data. The internet had generated enormous datasets — millions of labeled images, billions of words of text — that could be used to train large neural networks. The networks needed data at scale, and the internet provided it.

  3. The demonstration. In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton entered a deep convolutional neural network (AlexNet) in the ImageNet Large Scale Visual Recognition Challenge — a competition to classify images. AlexNet won by a staggering margin, cutting the top-5 error rate from the runner-up's 26.2% to 15.3%. The result was so dramatic that it could not be explained away, reinterpreted, or absorbed into the existing paradigm.

This was the crisis point (Chapter 19). The evidence was unambiguous, the performance gap was enormous, and the result was achieved in a public competition where the comparison was direct and undeniable. Within two years, virtually every leading AI research group had pivoted to deep learning. Within five years, deep learning had transformed computer vision, natural language processing, speech recognition, and dozens of other fields.

The Failure Mode Stack

The neural network suppression illustrates more failure modes operating simultaneously than any other case in this book:

| Failure Mode | How It Operated |
| --- | --- |
| Authority cascade (Ch.2) | Minsky's prestige caused the field to treat his extrapolation as proven fact |
| Consensus enforcement (Ch.14) | Funding agencies, conferences, and hiring committees excluded neural network research |
| Sunk cost (Ch.9) | Decades of symbolic AI research, careers, and institutions built on the alternative |
| Einstellung (Ch.13) | The symbolic AI framework made neural approaches seem alien, not just wrong |
| Outsider problem (Ch.18) | Hinton, LeCun, Bengio were marginalized; their work was treated as a curiosity |
| Planck's principle (Ch.17) | Correction came via circumvention (hardware + data enabling demonstrations), not persuasion |
| Revision myth (Ch.20) | The field now tells a clean story of "progress" rather than acknowledging three decades of suppression |

🔄 Check Your Understanding (try to answer without scrolling up)

  1. What did Perceptrons actually prove, and how was this result distorted by the authority cascade?
  2. What three factors enabled the neural network vindication in 2012, and why couldn't they have operated earlier?

Verify

  1. Perceptrons proved that single-layer perceptrons could not compute certain functions (including XOR). This was mathematically correct. The authority cascade distorted this into "neural networks in general are a dead end" — a far broader claim that was an extrapolation, not a theorem. The distortion propagated through citations, funding decisions, and graduate advising.

  2. Computation (GPUs providing massive parallel processing), data (internet-scale labeled datasets), and demonstration (AlexNet's dramatic ImageNet victory providing undeniable evidence). These factors were technology-dependent — the hardware and data infrastructure didn't exist until the 2000s–2010s. The theory was available decades earlier, but couldn't be validated at scale.


29.2 Capital-Sustained Error: Tech's Unique Failure Mode

Every field examined in this book has mechanisms for sustaining wrong ideas — authority cascades, sunk cost, consensus enforcement, incentive structures. But the technology sector has a mechanism that is uniquely its own: capital.

In medicine, a wrong idea is sustained by institutional prestige and career investment. In criminal justice, by legal precedent. In the military, by doctrinal lock-in and budget structures. In technology, a wrong idea can be sustained — sometimes for years, sometimes for an entire market cycle — by the sheer volume of money invested in it.

This changes the error dynamics in fundamental ways.

How Capital Changes the Rules

In most fields, a wrong idea must compete in the marketplace of evidence. If the evidence accumulates against it, the idea eventually faces crisis (Chapter 19). Capital introduces a different dynamic: a wrong idea can survive against the evidence as long as the money holds out.

Capital sustains the ecosystem. When venture capital and public markets pour billions into a technology thesis, the money creates an entire ecosystem: companies, jobs, conferences, media coverage, supporting infrastructure, academic research programs, and — crucially — careers. Everyone in the ecosystem has a financial interest in the thesis being correct. This is the incentive structure (Chapter 11) amplified by financial capital to a degree that no other field can match.

Capital creates narrative momentum. The technology industry runs on narratives — stories about what technology will enable, how it will transform industries, why this time is different. These narratives attract capital, and the capital creates the infrastructure that makes the narrative seem plausible. This creates a feedback loop: narrative attracts capital → capital creates companies → companies generate activity that looks like progress → activity reinforces the narrative → narrative attracts more capital.

This dynamic, by analogy with the startup concept of "product-market fit," might be called narrative-market fit: the alignment between a technology story and what investors want to believe. When narrative-market fit is strong, the technology doesn't need to work — it just needs to seem like it will work.
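
The loop can be caricatured in a few lines. Everything below (the coefficients, the update rule, the starting belief) is an assumption invented for illustration; the point is only the structure: belief updates on funded activity, while the fixed evidence term barely registers.

```python
# Toy model of the feedback loop: narrative attracts capital, capital funds
# activity, activity reinforces the narrative. All numbers are illustrative
# assumptions, not empirical estimates.

def narrative_loop(steps, evidence=0.1, reinforcement=0.3, reality_pull=0.05):
    belief = 0.5                      # initial investor belief in the thesis
    history = []
    for _ in range(steps):
        capital = belief              # funding tracks belief
        activity = capital            # funded companies generate visible activity
        # Activity feeds belief far more strongly than evidence does.
        belief += reinforcement * activity + reality_pull * (evidence - belief)
        history.append(round(belief, 2))
    return history

print(narrative_loop(10))
# Belief climbs every step even though 'evidence' never moves:
# capital-sustained error in miniature.
```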

The Dot-Com Bubble (1995–2001)

The dot-com bubble is the cleanest example of capital-sustained error in technology history.

By 1999, the conventional wisdom in the technology sector was that the internet would revolutionize every industry, that traditional business metrics (revenue, profit) were obsolete, and that any company with ".com" in its name represented the future. Companies with no revenue, no viable business model, and no path to profitability were valued at billions of dollars. Pets.com raised $82.5 million in an IPO, spent $17.8 million on marketing (including a Super Bowl ad), and had revenue of $619,000 in its last year of operation. It was not unusual.

The fundamentals were wrong and the market knew the fundamentals were wrong — or rather, individual participants knew, but the system operated as if they didn't. Analysts who raised concerns about valuations were sidelined. Investment banks that issued sell recommendations lost underwriting business. Fund managers who avoided dot-com stocks underperformed their peers and lost clients. The incentive structure rewarded participation in the bubble and punished skepticism.

This is the same dynamic as the body count in Vietnam (Chapter 28): the metric (stock price, fund returns) became the target, and everyone in the system had incentives to produce good metrics. The metric no longer tracked the underlying reality. It tracked the system's own self-reinforcing narrative.

When the bubble burst in 2000-2001, it destroyed approximately $5 trillion in market value. But the revision myth (Chapter 20) quickly smoothed the narrative: "We knew there was a bubble, but the underlying thesis — that the internet would transform business — was correct. We were just early." This narrative conveniently erases the period when the thesis was used to justify investments that were not "early" but simply wrong — companies that had no business model, no technology advantage, and no prospect of profitability regardless of timing.

Crypto Utopianism (2009–Present)

The crypto and blockchain enthusiasm followed a structurally identical pattern, with variations that reveal the failure mode architecture even more clearly.

The narrative. Blockchain technology would decentralize finance, eliminate intermediaries, create trustless transactions, democratize access to financial services, and transform everything from supply chains to voting to identity management. The narrative was compelling, idealistic, and — crucially — unfalsifiable (Chapter 3). Any failure of a specific crypto project could be attributed to implementation rather than to the underlying thesis. Any delay in adoption could be attributed to regulatory resistance or insufficient development rather than to fundamental limitations.

The capital. Between 2017 and 2022, hundreds of billions of dollars flowed into cryptocurrency and blockchain projects. The capital created an ecosystem: exchanges, wallets, DeFi protocols, NFT marketplaces, DAOs, stablecoins, layer-2 solutions, and an enormous supporting infrastructure of conferences, media, influencers, and academic programs. The ecosystem looked like progress because it was generating activity — transactions, users, market caps. But activity is not adoption, and market capitalization is not value creation.

The unfalsifiable defense. When specific crypto projects failed (the Terra/Luna collapse, the FTX fraud, numerous rug pulls), the crypto community's response was structurally identical to the strategic bombing defense (Chapter 28): the failure was attributed to specific bad actors or insufficient application, never to limitations of the underlying technology. "That wasn't real crypto" functioned exactly like "that wasn't real bombing" — an unfalsifiable defense that protected the core thesis from any specific counter-evidence.

🔗 Connection: Compare the crypto narrative to the dietary fat hypothesis (a recurring anchor example throughout this book). Both were sustained not primarily by evidence but by a combination of institutional investment, career commitment, and an unfalsifiable core thesis. The dietary fat hypothesis had the food industry's money and the nutrition establishment's careers behind it. Crypto had venture capital and the financial returns of early participants behind it. In both cases, the capital (financial for crypto, institutional for nutrition) created an ecosystem that made the thesis seem self-evidently true from inside and obviously problematic from outside.

Autonomous Vehicles: When the Timeline Is the Thesis

The autonomous vehicle (AV) predictions of the mid-2010s offer a particularly clear example of how capital creates false confidence in timelines.

In 2015-2017, leading technologists and companies made specific, confident predictions:

  • Elon Musk predicted "full self-driving capability" by 2018, then 2020, then 2021
  • Waymo (Google) projected commercial robotaxi service by 2018
  • Multiple companies predicted fully autonomous vehicles would be widespread by 2020-2025

By 2025, fully autonomous vehicles remained limited to small geofenced areas, required extensive mapping and infrastructure, and had not achieved the anywhere-anytime capability that was predicted. The problem was not engineering capacity or funding — billions of dollars had been invested. The problem was that the predictions reflected a misunderstanding of the nature of the problem. Driving in the real world involves an almost infinite variety of edge cases — construction zones, unusual weather, ambiguous traffic situations, pedestrian behavior — that resist the kind of systematization that AI systems require.

The AV timeline failure illustrates a distinctive tech failure mode: confusing rate of initial progress with rate of completion. Early progress on autonomous driving was rapid — going from nothing to impressive highway capability in a few years. The industry extrapolated this rate of progress to predict completion. But the difficulty curve of autonomous driving is not linear. The last 10% of driving situations (edge cases, unusual scenarios, adverse conditions) contains 90% of the difficulty. The industry's timelines extrapolated from the easy part of the curve.
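
A toy calculation makes the trap concrete. The saturating curve, its constant, and the coverage targets below are all invented for illustration (no AV program publishes such a function); the point is the shape: progress that saturates looks linear early on, and a linear read of the first few years predicts completion absurdly early.

```python
import math

def coverage(effort_years, k=0.6):
    """Assumed saturating progress curve: fraction of driving situations handled."""
    return 1.0 - math.exp(-k * effort_years)

# Observed early progress looks fast and roughly linear...
print([(t, round(coverage(t), 2)) for t in (1, 2, 3)])
# [(1, 0.45), (2, 0.7), (3, 0.83)] -- a naive linear extrapolation
# (~0.19/year) predicts full coverage around year 4.

# ...but the curve itself says the tail dominates:
for target in (0.90, 0.99, 0.999):
    years = -math.log(1.0 - target) / 0.6
    print(f"{target:.1%} coverage needs ~{years:.0f} years of effort")
# 90.0% -> ~4 years; 99.0% -> ~8; 99.9% -> ~12 -- and safe deployment
# may require more nines than that.
```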

🔄 Check Your Understanding (try to answer without scrolling up)

  1. What is "capital-sustained error" and how does it differ from the error-sustaining mechanisms in other fields?
  2. How did the crypto ecosystem's defense against specific failures mirror the strategic bombing defense from Chapter 28?

Verify

  1. Capital-sustained error is the dynamic in which a wrong idea survives against the evidence because the volume of money invested in it creates an entire ecosystem (companies, jobs, careers, media coverage) with financial incentives to maintain the thesis. It differs from other fields because the capital creates activity that looks like progress even when it isn't — the ecosystem generates transactions, users, and market caps that substitute for evidence of the thesis being correct.

  2. Both used the unfalsifiable defense of attributing failure to specific instances rather than to the underlying thesis. Crypto failures were attributed to "bad actors" or "not real crypto" just as bombing failures were attributed to "political constraints" or "insufficient application." In both cases, no specific failure could disprove the core thesis.


29.3 The "Connecting the World" Narrative

In 2012, Mark Zuckerberg wrote in Facebook's IPO filing: "Facebook was not originally created to be a company. It was built to accomplish a social mission — to make the world more open and connected."

This narrative — that social media was primarily a force for connection, openness, and democratization — persisted for nearly a decade despite accumulating evidence that the platforms were also producing significant harms: political polarization, mental health effects (particularly among adolescents), misinformation propagation, erosion of privacy, and the weaponization of social media by authoritarian governments.

The Narrative's Structure

The "connecting the world" narrative operated as a plausible story (Chapter 6) with an unfalsifiable core:

  • When social media enabled positive outcomes (Arab Spring coordination, disaster relief communication, community building), these were presented as evidence of the thesis — social media connects the world and that connection is inherently good.
  • When social media produced negative outcomes (Myanmar genocide amplification, QAnon radicalization, teenage mental health crisis), these were framed as unfortunate side effects, implementation problems, or the actions of bad actors — not as evidence against the thesis.
  • The core claim — that connecting the world is inherently good — was unfalsifiable because any negative outcome could be attributed to how the tool was used rather than to what the tool was. This is the theory immunization pattern from Chapter 3: a thesis structured so that confirmations count as evidence and disconfirmations count as exceptions.

What Changed

The narrative began to crack not because of internal evidence evaluation but because of external crisis — exactly as the crisis-driven correction model (Chapter 19) predicts.

The 2016 U.S. presidential election, Russian disinformation campaigns, the Cambridge Analytica scandal, and the Myanmar military's use of Facebook to incite violence against the Rohingya population created a series of crises that made the negative consequences of social media impossible to reframe as mere side effects. In 2021, Facebook whistleblower Frances Haugen's disclosures — based on internal company research showing that Instagram's algorithm amplified body image issues among teenage girls — demonstrated that the company's own data contradicted the "connecting the world" narrative.

The correction has been partial. The narrative has shifted from "connecting the world is inherently good" to "connecting the world has trade-offs." But the structural incentives of the social media business model — engagement maximization, advertising revenue, attention economics — remain unchanged. The correction has been cosmetic (Chapter 19): the language has changed, but the architecture that produces the harms has not.

🔍 Why Does This Work?

We've established that tech companies successfully maintained the "connecting the world" narrative despite accumulating evidence of harm. But WHY was this narrative so resistant to challenge? Before reading the next section, consider what structural features of the technology industry made this narrative harder to challenge than, say, a wrong medical consensus.


29.4 The Myth of Disruption

The technology industry has its own version of the revision myth (Chapter 20), and it may be the most powerful one of any field examined in this book: the myth of disruption.

The myth works like this: Technology progresses through disruption. Incumbents are replaced by innovators. The old is replaced by the new. This process is inherently good because it produces better products, greater efficiency, and more value. Anyone who resists disruption is defending their own obsolescence.

This narrative is not entirely wrong. Technology has produced genuine improvements in human welfare — medicine, communication, transportation, agriculture. But the disruption myth serves a specific structural function that goes beyond its truth value: it pre-delegitimizes criticism.

How the Disruption Myth Works

It frames all resistance as self-interested. If disruption is inherently good, then opposition to any specific technology must be motivated by the desire to protect incumbents — not by legitimate concerns about the technology itself. Taxi drivers opposing Uber are protecting their monopoly, not raising valid safety concerns. Traditional media opposing social platforms are protecting their business model, not warning about misinformation. Regulators are stifling innovation, not protecting the public.

It applies survivorship bias (Chapter 5) to technology history. The disruption narrative celebrates the technologies that succeeded and ignores the ones that failed — or worse, the ones that succeeded in being adopted but failed in delivering their promised benefits. For every iPhone, there are dozens of technologies whose "disruption" narratives were backed by billions in capital and that simply didn't work. But the survivors dominate the narrative, creating the illusion that disruption is always, eventually, vindicated.

It creates a founder mythology that mirrors the hero narrative (Chapter 20). The revision myth in science converts systemic change into stories about exceptional individuals. The tech version does the same: Apple's success was Steve Jobs's genius, not the product of semiconductor physics, supply chain globalization, and decades of government-funded research. Tesla's success is Elon Musk's vision, not the result of battery technology development, government subsidies, and shifting consumer preferences. The founder myth implies that disruption comes from exceptional individuals, not from structural conditions — which makes it seem like a matter of vision rather than of circumstance.

The Disruption Myth as Error Shield

The disruption myth functions as an error shield because it makes it structurally difficult to challenge any technology thesis within the industry:

  • If you argue that a specific technology doesn't work → "You don't understand disruption"
  • If you argue that a specific company is overvalued → "You said the same thing about Amazon/Apple/Google"
  • If you argue that a specific prediction is wrong → "Amara's Law — you're underestimating the long run"
  • If you argue that a technology is causing harm → "You're a Luddite defending the status quo"

Each of these defenses is unfalsifiable (Chapter 3). They don't engage with the specific argument; they delegitimize the act of arguing. This is consensus enforcement (Chapter 14) disguised as openness — a field that celebrates disruption while systematically silencing disruption of its own narratives.


29.5 Applying the Correction Speed Model

| Variable | Score | Assessment |
| --- | --- | --- |
| Evidence clarity | HIGH (for technical claims) | Technical performance is measurable; adoption metrics are clear |
| Switching cost | LOW–MEDIUM | Tech pivots faster than most fields, but capital investment creates switching cost |
| Defender power | MEDIUM–HIGH | Incumbents (Google, Meta, OpenAI) have enormous resources, but tech celebrates disruption of incumbents |
| Outsider access | HIGH | Tech is relatively open to outsiders; barriers are capital and talent, not credentials |
| Alternative availability | HIGH | New approaches emerge constantly; tech's ecosystem generates alternatives rapidly |
| Crisis probability | HIGH | Product failures, market crashes, and public backlash occur regularly |
| Correction mode | Mixed — market-driven + circumvention | Markets correct pricing errors; technology circumvents established approaches |
| Revision resistance | VERY HIGH | The disruption myth smooths history and makes past errors invisible |

Prediction: Fast correction for technical claims (does the technology work?), slow correction for narrative claims (is the technology good? will it transform the world?).

This is the technology sector's distinctive correction profile: it is excellent at determining whether something works (the market provides ruthless feedback) and terrible at determining whether something matters (the narrative apparatus obscures this question).

Neural networks illustrate both sides. The technical question (do neural networks work?) was eventually answered by undeniable demonstration — AlexNet in 2012. The narrative question (is AI a transformative technology?) is still subject to the same hype-bust-hype cycle that has characterized AI since the 1960s. The field answers the technical question through evidence and answers the narrative question through capital.

Comparing Tech to Other Fields

| Dimension | Medicine | Military | Criminal Justice | Technology |
| --- | --- | --- | --- | --- |
| Correction trigger | Evidence (slow) + crisis | Crisis (wars) | Crisis (exonerations, rare) | Market (fast for products) + crisis (slow for narratives) |
| Unique error-sustaining force | Therapeutic inertia | Doctrinal lock-in | Legal precedent | Capital |
| Speed of technical correction | Slow (17-year bench-to-bedside) | Fast during wars | Very slow | Fast (market feedback) |
| Speed of narrative correction | Moderate | Moderate | Very slow | Very slow (disruption myth) |
| Revision myth strength | Moderate | Moderate | High | Very high |

Technology corrects faster than any other field on technical questions — and slower than most on narrative ones. This mismatch explains why the tech industry can simultaneously produce genuine breakthroughs (smartphones, search engines, machine learning) and sustain massive narrative errors (social media utopianism, autonomous vehicle timelines, crypto as the future of finance) without experiencing cognitive dissonance. The technical correction happens in the lab and the market. The narrative correction — if it happens at all — takes decades.


📐 Project Checkpoint

Epistemic Audit — Chapter 29 Addition: The Capital and Narrative Assessment

29A. Capital Assessment. Does your field have a capital-sustained error dynamic — ideas or practices sustained by financial investment rather than evidence? (Examples: pharmaceutical companies funding research that supports their drugs, EdTech companies selling products without evidence of efficacy, consulting firms promoting frameworks that generate more consulting.)

29B. Narrative Assessment. Does your field have a dominant narrative that functions as an error shield — a story so compelling that challenging it is treated as a sign of the challenger's ignorance rather than as legitimate criticism? (Examples: "disruption is always good" in tech, "the market is always efficient" in finance, "evidence-based practice" in fields that don't actually practice it.)

29C. Technical vs. Narrative Correction Assessment. Does your field correct faster on technical questions (does X work?) than on narrative questions (is X good/important/transformative)? If so, what structural features explain the difference?


29.6 Chapter Summary

Key Concepts

  • The neural network suppression (1969–2012): The purest case study in the book — authority cascade, consensus enforcement, sunk cost, Einstellung, outsider problem, and Planck's principle operating simultaneously to suppress a correct approach for three decades
  • Capital-sustained error: Tech's unique failure mode — so much capital that wrong ideas survive through sheer funding, creating ecosystems that generate activity that looks like progress
  • Narrative-market fit: The alignment between a technology story and what investors want to believe, which can sustain wrong ideas independently of evidence
  • The disruption myth: Tech's version of the revision myth — the narrative that disruption is inherently good, which pre-delegitimizes criticism and creates an unfalsifiable defense against challenge
  • Technical vs. narrative correction: Tech corrects fast on technical questions (the market provides feedback) and slow on narrative questions (the disruption myth obscures them)

Key Arguments

  • The neural network story illustrates more failure modes operating simultaneously than any other case in this book — it is the "complete autopsy" of this textbook
  • Tech's unique contribution to the failure mode taxonomy is capital-sustained error — the dynamic in which money replaces evidence as the sustaining force for wrong ideas
  • The technology industry's self-image as inherently disruptive and anti-establishment is itself a form of the revision myth that makes the industry's own failure modes harder to see
  • The industry corrects faster than any other field on technical questions and slower than most on narrative questions — a mismatch that enables simultaneous genuine innovation and massive narrative error

Spaced Review

Revisiting earlier material to strengthen retention.

  1. (From Chapter 2 — The Authority Cascade) The Perceptrons episode is arguably the most consequential authority cascade in this entire book. Compare Minsky's influence on AI to Ancel Keys's influence on nutrition science. What structural similarities made both authority cascades so durable? What differences explain why the neural network correction took longer?

  2. (From Chapter 9 — The Sunk Cost of Consensus) By 1990, the AI community had invested three decades in symbolic AI — careers, departments, funding programs, textbooks. Apply the sunk cost framework: what specific switching costs would an AI researcher have faced in 1990 if they wanted to pivot from symbolic to neural approaches?

  3. (From Chapter 18 — The Outsider Problem) Hinton, LeCun, and Bengio persisted through the AI winter and were eventually vindicated with the Turing Award. Apply the outsider framework from Chapter 18: what structural buffers allowed them to survive professionally? How does their experience compare to Marshall and Warren's in medicine?

  4. (From Chapter 17 — Planck's Principle) The neural network vindication happened through circumvention (hardware enabling demonstrations) rather than persuasion (the old guard changing their minds). Does this support or challenge Planck's principle? Did the old guard have to "die off," or did the demonstrations overcome even entrenched resistance?

Answers

  1. Similarities: both were single prestigious figures whose pronouncements were treated as settled science; both operated through funding (grants in nutrition, research funding in AI) rather than just prestige; both created institutional lock-in (nutrition guidelines, AI department curricula). Difference: nutrition correction was driven by accumulated epidemiological evidence (slow, ambiguous), while neural network correction was driven by a dramatic technological demonstration (fast, undeniable once hardware caught up). The nature of the evidence — ambiguous observational studies vs. unambiguous performance benchmarks — explains the speed difference.

  2. Switching costs included: career expertise (decades of symbolic AI work would become less relevant), publication record (papers in symbolic AI venues), professional network (colleagues in symbolic AI), teaching materials (courses designed around symbolic approaches), graduate students (whose dissertations were in symbolic AI), and identity (self-concept as a symbolic AI researcher). These are the same categories of sunk cost that sustain any wrong consensus, but amplified by the depth of specialization in academic AI research.

  3. Structural buffers: (a) Hinton had a tenured position at the University of Toronto — tenure is the strongest structural buffer for academic dissent; (b) LeCun worked at Bell Labs, a corporate research lab with relative freedom from academic publishing pressure; (c) Bengio was at the University of Montreal, somewhat outside the Anglo-American AI mainstream, which reduced the pressure to conform; (d) all three had genuine results (backpropagation, convolutional networks) that demonstrated the approach wasn't dead — just underpowered. Compared to Marshall and Warren: the neural network outsiders had weaker structural buffers (no dramatic self-experimentation equivalent) but benefited from a longer timeline that eventually produced undeniable evidence.

  4. Partially supports, partially challenges Planck's principle. The demonstrations were powerful enough that many established researchers *did* pivot to deep learning — the old guard didn't have to die off. But the demonstrations were only possible because of external changes (hardware, data) that the AI community itself did not produce. This suggests a refinement: Planck's principle holds when the evidence is ambiguous and the debate is about interpretation; it breaks down when the evidence is overwhelming and undeniable. The AlexNet result was the AI equivalent of Marshall drinking H. pylori — evidence too dramatic to explain away.

What's Next

In Chapter 30: Field Autopsy: Education, we will examine the field where everyone has an opinion and almost nothing is tested — where learning styles have been debunked repeatedly and are still taught, where billions are spent on classroom technology with ambiguous results, and where the structural difficulty of educational research makes the field uniquely vulnerable to plausible-story problems and authority cascades.

Before moving on, complete the exercises and quiz to solidify your understanding.


Chapter 29 Exercises → exercises.md

Chapter 29 Quiz → quiz.md

Case Study: The Three Decades in the Wilderness — Hinton, LeCun, and Bengio → case-study-01.md

Case Study: The Dot-Com Bubble — When Capital Replaces Evidence → case-study-02.md