Case Study: The Expert Who Changed Their Mind — What It Feels Like From Inside

The Question

This entire book has documented cases of experts who were wrong. But it has rarely asked: what does it feel like to change your mind about something important? What is the subjective experience of an expert who realizes they were wrong — not about a trivial fact but about a core professional belief?

This case study explores that experience through composite accounts (based on documented cases and published reflections) of experts who changed their minds.

Composite Account 1: The Physician Who Prescribed for Decades

Based on common patterns in the medical literature and published physician reflections on practice changes.

"For twenty years, I prescribed [a treatment that was later shown to be ineffective or harmful]. I wasn't careless. I followed the guidelines. I attended the conferences. I read the journals. I did what every competent physician in my specialty was doing.

When the new evidence came out — when the trial results showed that the treatment wasn't doing what we thought — my first reaction wasn't curiosity. It was defensiveness. I thought: the study must be flawed. Then: my patients are different. Then: I've seen it work with my own eyes.

It took me months to realize that I was going through exactly the process this book describes — I was defending a position because abandoning it meant acknowledging that I had been doing something suboptimal for thousands of patients. The sunk cost wasn't financial. It was moral. Admitting the evidence meant admitting harm.

The hardest moment wasn't accepting the new evidence. It was accepting that my prior confidence — the certainty I had felt while prescribing — was indistinguishable from the certainty I now felt about the new approach. If I was wrong then and certain, I could be wrong now and certain. That realization doesn't go away."

Composite Account 2: The Researcher Who Built a Career on a Finding

Based on patterns documented in the replication crisis literature and published researcher reflections.

"My most-cited paper was one of the early results in [a research area that later failed to replicate]. I had built a research program on it. I had trained graduate students on it. I had given keynotes about it.

When the replication attempts started failing, I went through every stage of the response this book describes. I questioned the replicators' methods. I argued that they had missed subtle features of the original protocol. I pointed to the original statistics — which were significant, which were published in a top journal, which had been peer-reviewed.

It took me three years to accept that the effect was probably not real — or at least not as large as my original study suggested. Three years of defending a position that I was increasingly unsure of, because the alternative — admitting that my most important work was wrong — was too threatening to my identity.

What I wish I had known: the original finding wasn't my fault. The incentive structure that rewarded small studies with surprising results, the publication system that selected for positive results, the field's culture that treated replication as boring — these were structural features, not personal failures. My study was a product of a system that was designed to produce exactly this kind of result. I was locally rational within a systemically flawed structure.

That framing — structural, not personal — was what finally allowed me to accept the evidence without feeling destroyed by it."

Composite Account 3: The Policy Analyst Who Advocated for an Intervention

Based on common patterns in the policy evaluation literature.

"I spent five years advocating for [a policy intervention] based on what I genuinely believed was strong evidence. When the rigorous evaluation came back showing no effect, I experienced something I can only describe as epistemic grief.

The policy had made sense. The mechanism was plausible. The pilot data had been encouraging. The populations I worked with seemed to benefit. My personal observation — watching the program in action — confirmed what I believed.

The evaluation said otherwise. The randomized trial, with a proper control group, showed that the people who received the intervention did no better than those who didn't. My observation had been biased by the same factors this book documents: I was seeing what I expected to see, I was talking to the successes and not tracking the failures, and I was interpreting ambiguous outcomes as confirmation.

The experience taught me something that no amount of reading could have taught: the difference between seeing the evidence and feeling the evidence. I could have told you, intellectually, that personal observation is subject to confirmation bias. But it took the experience of being personally wrong to understand what that means. The confidence I had felt was not knowledge. It was a feeling. And the feeling had been wrong."

The Common Thread

Across all three accounts, the same pattern emerges:

  1. Initial certainty: The expert believed they were right, with the full force of professional confidence
  2. Defensive reaction: When challenged, the first response was to protect the existing belief — not out of dishonesty but because of the structural dynamics this book has documented
  3. Gradual acceptance: Acceptance came not through a single dramatic moment but through the accumulated weight of evidence that could no longer be absorbed
  4. The identity crisis: The hardest part was not accepting the new evidence but accepting that their prior certainty — which had felt like knowledge — was indistinguishable from their current certainty
  5. The structural reframe: What helped was understanding the failure as structural rather than personal — the system produced the error, not the individual

The structural reframe is the gift this book offers. If failure modes are structural, then being wrong is not a personal failing — it is the predictable outcome of operating within systems that are designed (inadvertently) to produce and protect error.

Analysis Questions

1. The physician describes the "moral sunk cost" of admitting that a treatment was suboptimal — acknowledging harm to thousands of patients. How does this differ from the financial sunk cost discussed in Chapter 9? Is the moral sunk cost harder or easier to overcome?

2. The researcher describes spending three years defending a finding they were increasingly unsure of. Apply the institutional grief cycle (Chapter 19): which stages can you identify in their account? What structural conditions would have accelerated the process?

3. The policy analyst describes the difference between "seeing the evidence" (intellectually) and "feeling the evidence" (experientially). What does this distinction tell you about the limits of this book? Can reading about epistemic humility produce the same calibration as experiencing being wrong?

4. All three accounts mention the "structural reframe" — understanding their error as a product of the system rather than a personal failure. Is this reframe accurate, or is it a defense mechanism? Can the structural explanation go too far — absolving individuals of responsibility?