Case Study 01 — Jordan: The Campaign and the Case He Built

Chapter 4 Application: Cognitive Biases


Extended Analysis

The campaign decision from the chapter opening deserves closer examination, because it illustrates how cognitive biases interact and reinforce each other — not as isolated errors, but as a system.

The Biases at Work

Confirmation bias (primary): Jordan wanted Option B. He had been leaning that direction intuitively — something about its boldness appealed to him. Once he had that prior, his information-gathering became confirmatory rather than investigative. He read the market analysis, but moved quickly through the section on stable return profiles for Option A. He selected colleagues known to be enthusiastic about bold moves. He populated the industry comparison database with successful comparables rather than comprehensive ones.

He did not experience himself as cherry-picking. He experienced himself as doing analysis.

Overconfidence: Jordan's presentation to Helen was confident. He had charts. He had three colleagues' perspectives. He had market comparisons. He believed the analysis was solid. He was unaware that his confidence was largely being fueled by the coherence of a case he had constructed — not by the actual quality of the evidence.

Research on confidence shows that we tend to be more confident when we have more information, even when the additional information is irrelevant or selected to confirm. Jordan had a lot of information. This felt like rigor. It was not rigor; it was well-organized confirmation.

Availability heuristic: The most memorable campaign successes in Jordan's professional memory were bold, breakout campaigns. The failures — the bold campaigns that burned budget and underdelivered — were less salient, more quickly forgotten, more often attributed to execution problems rather than strategy. The ease with which he could recall bold successes made boldness seem like a good bet, independently of the actual statistical distribution.

Anchoring: Jordan had been excited about Option B from the first encounter. This first-impression anchor shaped his subsequent processing: he evaluated new information in relation to his anchor (Option B) rather than evaluating both options from an equal baseline.

The Deeper Pattern

What makes Jordan's case illustrative is that his biased process was not a moral failure — it was motivated by real professional enthusiasm and genuine belief that Option B was better.

The problem is that motivated processes and rigorous processes look identical from the inside. Both feel like thinking hard about the problem. Both produce conclusions with confident internal coherence. The difference is only visible from the outside — in the structure of the process, in what was and was not examined.

This is why Jordan's problem is not fixable by "trying harder to be objective." Trying harder within a motivated process produces more of the same. What would have helped:

  1. Explicitly appointing someone to argue for Option A — a formal red team role with explicit mandate to find Option B's weaknesses
  2. Pre-commitment to examining failure rates — before gathering any evidence, committing to looking at the base rate of similar bold campaign results, not just selected successes
  3. Anonymizing the process at key stages — removing his name from the preliminary evaluation to reduce identity stake
  4. A pre-mortem — imagining Option B has failed dramatically, and working backward to identify what would have caused it

None of these would have guaranteed that Option A won. Option B might genuinely be better. But the process would have produced information he could trust, rather than a case he had built.


Jordan's Reflection (Six Months Later)

Six months later, Option B is underperforming its projections. Not catastrophically — there's time to course-correct — but not at the level Jordan's charts had suggested.

In a private moment, Jordan is honest with himself: he chose the colleagues he knew would be enthusiastic. He skipped the section about comparable campaigns' failure rates. He built the case.

The next thought is: Did I do that intentionally?

He genuinely does not know. Which is the correct answer. Motivated reasoning is not deliberate dishonesty. It is the cognitive system serving the preferences and identity of its operator — seamlessly, invisibly, convincingly.

What Jordan is learning to do is not to blame himself for the bias, which serves nothing — but to design his processes against it, knowing that his own enthusiasm and desire for a particular outcome is a liability in the analysis phase.

This is the functional definition of wisdom in decision-making: not freedom from bias, but working knowledge of your own biases and structures designed to compensate for them.


Discussion Questions

  1. Jordan's biases were not random — they were patterned in a specific direction (toward boldness and risk-taking). What does this suggest about the relationship between personality or character and the specific biases we are most prone to?

  2. The case suggests that motivated reasoning and genuine reasoning look identical from the inside. Is there any internal experience that might distinguish them? What would it feel like to be doing motivated reasoning well enough to fool yourself?

  3. The practical suggestions (red team, pre-commitment, pre-mortem) are described as debiasing strategies. But they require someone to put them in place — someone who already recognizes the bias risk. Jordan didn't recognize it during the campaign decision. When and how should we build debiasing structures into our processes, given that we may not recognize the bias in the moment it is operating?

  4. Six months later, Option B is underperforming. Does this evidence confirm that Jordan made a biased error? Or could his conclusion (Option B is better) have been correct and the underperformance be due to other factors? What does this illustrate about the difficulty of learning from outcomes?