In This Chapter
- Section 1: The Nature of Uncertainty and Why Decisions Are Hard
- Section 2: The Cognitive Biases That Distort Decision-Making
- Section 3: Heuristics That Actually Work
- Section 4: Decision-Making Frameworks
- Section 5: Decision-Making in Groups
- Section 6: Values Clarification and the Role of Identity in Decisions
- Section 7: Improving as a Decision-Maker
- From the Field: Dr. Reyes on Values Conflicts
- Research Spotlight: Kahneman and Tversky's Prospect Theory
- Key Terms
- Common Misconceptions
Chapter 24: Decision-Making Under Uncertainty
"A good decision is one that's a logical consequence of explicit values, explicit beliefs, and a willingness to act on those values and beliefs. The quality of a decision is not determined by the quality of the outcome."
— Annie Duke, Thinking in Bets (2018)
In the fourth week of the Strategic Director role, Jordan was required to make a decision he had not anticipated.
The CX team had inherited a customer research database — a multi-year project from the previous team — that required either a significant investment to update and maintain or a decision to retire it in favor of a new approach. The database was genuinely valuable in some respects and genuinely outdated in others. Two vendors had submitted proposals. His team was divided. Senior leadership had given him no guidance beyond "make a call by end of quarter."
He had asked for more information. He had received more information. He now had more information than any single person could fully synthesize, and the answer was not clearer than it had been before.
He had a conversation with Sandra about it. She said: "You're not going to get certainty on this. You're going to get more information until the deadline arrives, and then you'll have to decide. The decision will look obvious in retrospect — they always do, either way — and you won't know which obvious it is until after."
He found this more clarifying than he expected.
The chapter he had been reading on decision-making had made a similar point, using different language. Good decisions are not the same as good outcomes. The quality of the reasoning is not visible in the outcome; it is visible in the process. Two people can make the same decision from the same evidence and have different outcomes, because outcomes have randomness in them that the decision cannot control. What the decision-maker controls is the reasoning.
He made the decision: retire the database, build new infrastructure, accept the short-term cost for the long-term capability. He documented his reasoning.
He turned out to be right — the new infrastructure served the team well over the following year. But he knew, and the chapter had helped him know more clearly, that being right about the outcome was not the same thing as having made a good decision. He had made a good decision partly because he had been careful about the reasoning. He had also been lucky that the outcome matched.
This chapter is about the reasoning.
Section 1: The Nature of Uncertainty and Why Decisions Are Hard
Most important decisions are made under conditions of genuine uncertainty — not merely complexity (where more information would eventually yield the right answer) but irreducible randomness in the relationship between decisions and outcomes.
The Decision/Outcome Distinction
The most important conceptual distinction in decision-making psychology: the quality of the process and the quality of the outcome are not the same thing, and conflating them is the source of most of the distorted feedback loops that make people bad at learning from experience.
A surgeon who follows every best practice protocol and loses a patient has made a good decision with a bad outcome. A teenager who runs a red light and arrives safely has made a bad decision with a good outcome. The quality of the process is what the decision-maker can control. The outcome has additional variance — the world's randomness — layered on top.
This matters for learning: if you evaluate decisions by their outcomes, you will systematically overlearn from lucky successes and underlearn from unlucky failures. You will also systematically punish good reasoning that produced bad outcomes and reward bad reasoning that produced good outcomes.
Resulting (Annie Duke's term): the cognitive error of evaluating the quality of a decision based on its outcome rather than the reasoning that produced it. Resulting is ubiquitous and produces terrible learning from experience.
The Two Types of Uncertainty
Aleatory uncertainty: randomness that is fundamental to the system — the uncertainty in a fair coin flip, the weather three weeks from now, the response of a market to a product launch. This uncertainty cannot be reduced with more information; it can only be accounted for in probabilistic terms.
Epistemic uncertainty: uncertainty that arises from lack of knowledge — about competitors' plans, about the full dataset, about what a key stakeholder actually thinks. This uncertainty can potentially be reduced with more information, more analysis, more consultation. But even epistemic uncertainty has practical limits: at some point, gathering more information costs more than the additional certainty is worth.
Most decisions involve both types. The decision-maker's task is to identify how much of the remaining uncertainty is epistemic (and reducible) versus aleatory (and irreducible), and to stop gathering information when the marginal value of additional information falls below the cost of gathering it.
Why Our Minds Make Poor Decision Machines
Human cognition evolved for a different decision environment: immediate threats and opportunities, small groups, mostly reversible choices, feedback that was immediate and concrete. The decisions that matter most in modern life — career changes, relationship commitments, financial choices, strategic business decisions — are characterized by delayed feedback, complex causality, large irreversible stakes, and multiple competing values. Evolution did not design us for this.
The result: a set of cognitive shortcuts (heuristics) that work well in the ancestral environment and poorly in many modern contexts, producing systematic, predictable errors (biases). The decades of work by Kahneman, Tversky, Thaler, Ariely, and others have catalogued these errors with sufficient precision to be practically useful — if we know what to look for.
Section 2: The Cognitive Biases That Distort Decision-Making
The biases most consequential for real-world decision-making fall into several clusters.
Availability and Representativeness
Availability heuristic (Tversky & Kahneman, 1973): judging probability by how easily examples come to mind. Events that are recent, vivid, emotionally resonant, or widely covered are judged as more likely than statistics support. A person who has just read about a plane crash judges air travel as more dangerous; a manager who recently lost a key employee to a startup overweights the probability of losing others.
Representativeness heuristic: judging probability by how well something matches a prototype or stereotype, ignoring base rates. "He went to a good school and has a strong handshake, so he'll probably be a good hire" ignores the base rate of hiring success. The business plan that "sounds like a winner" gets funded despite base rates showing that most such plans fail.
Both heuristics are fast and often adequate — they work well when the sample of available examples is representative and when prototypes match base rates. They fail systematically when the available examples are non-representative (memorable precisely because unusual) or when the prototype doesn't match the base rate.
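The base-rate neglect described above has a precise arithmetic shape: Bayes' rule combines the base rate with the diagnostic evidence, and skipping the base rate changes the answer substantially. A minimal sketch, using hypothetical hiring numbers chosen only for illustration:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H | E) from a base rate (prior) and two likelihoods."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Assumed figures for illustration: only 20% of hires in this role succeed.
# A "strong interview impression" occurs for 60% of eventual successes,
# but also for 30% of eventual failures.
p = posterior(prior=0.20, p_evidence_given_h=0.60, p_evidence_given_not_h=0.30)
print(f"P(success | strong impression) = {p:.2f}")  # ≈ 0.33, not 0.60
```

The impression alone suggests 60%; folding in the 20% base rate cuts the estimate in half. That gap is what representativeness-driven judgment ignores.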
Anchoring
Anchoring (Tversky & Kahneman, 1974): the tendency for an initial piece of information (the anchor) to disproportionately influence subsequent estimates, even when the anchor is arbitrary or known to be uninformative.
Classic demonstration: have a group watch a wheel of fortune, rigged to stop at either 10 or 65, then estimate the percentage of African countries in the UN. Groups whose wheel stopped at 10 gave a median estimate of 25%; groups whose wheel stopped at 65 gave a median estimate of 45%. The wheel spin was known to be random — and yet it anchored the estimate.
In practice: the first price in a negotiation, the first budget estimate, the initial performance appraisal — all anchor subsequent judgments in ways that are hard to correct even with effortful deliberation. The practical implication: pay attention to what is providing the anchor in any quantitative estimate, and generate your own independent estimate before encountering any anchor.
Confirmation Bias
Confirmation bias: the tendency to search for, interpret, favor, and remember information that confirms preexisting beliefs. One of the most robust and consequential biases, because it operates at every stage of information processing.
In decision-making: confirmation bias causes people to gather information that supports the preferred option, interpret ambiguous evidence as supporting the preferred option, and discount or forget information that challenges it. The result: more information gathering often does not produce better decisions — it produces better-rationalized decisions for the predetermined conclusion.
Disconfirmation: the active search for evidence that would change your mind. "What would have to be true for my preferred option to be wrong?" is a disconfirmation question. It is uncomfortable and rarely natural. It is also one of the most effective interventions for confirmation bias.
Sunk Cost Fallacy
The sunk cost fallacy: allowing irretrievable past investments (time, money, effort, emotional investment) to influence decisions about future action. The rational principle is simple: past costs are sunk — they cannot be recovered. The only relevant considerations for a forward-looking decision are future costs and future benefits.
The psychology is not simple: because we are accountable for our past commitments, abandoning them feels like acknowledging failure. The project that "has already cost three million dollars" should be evaluated by its future prospects, not its past costs. But "we've already invested too much to stop" is a near-universal rationalization for continuing bad decisions.
The practical question for any decision that involves prior investment: "If I were starting fresh today, without the sunk costs, would I choose this?" If the honest answer is no, the sunk costs are distorting the decision.
Overconfidence and Calibration
Overconfidence: the robust finding that people's confidence in their judgments consistently exceeds their accuracy. When people say they are 90% confident in a factual estimate, they are correct approximately 70–80% of the time. Experts are not immune — and in some domains, expertise increases overconfidence without proportionally increasing accuracy.
Calibration: the correspondence between confidence and accuracy. A well-calibrated person who says they are 80% confident is correct 80% of the time. Calibration can be improved with practice and feedback — specifically, by making specific probability estimates, recording them, and tracking accuracy over time.
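The tracking practice described above can be sketched as a few lines of Python: record each (stated confidence, was it correct) pair, bucket by confidence level, and compare stated confidence to observed hit rate. The records below are hypothetical, chosen to mirror the ~70–80% accuracy the research reports for "90% confident" claims:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Bucket (stated_confidence, was_correct) records by confidence level
    and report the observed accuracy in each bucket."""
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[round(confidence, 1)].append(correct)
    return {conf: sum(hits) / len(hits)
            for conf, hits in sorted(buckets.items())}

# Hypothetical journal: ten 90%-confidence claims, seven of which were right.
records = [(0.9, True)] * 7 + [(0.9, False)] * 3
print(calibration_table(records))  # {0.9: 0.7} — overconfident by 20 points
```

A well-calibrated forecaster's table has each bucket's observed accuracy close to the bucket's confidence level; the gap is the overconfidence to correct.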
The inside view vs. outside view (Kahneman): the inside view focuses on the specific features of this particular situation — our plan, our team, our intentions. The outside view asks: what is the base rate for projects like this one? The inside view produces overconfident predictions; the outside view, applied systematically, produces better calibration.
Loss Aversion and Risk
Loss aversion (Kahneman & Tversky, 1979): losses loom larger than equivalent gains. The pain of losing $100 is approximately twice the pleasure of gaining $100. This asymmetry produces risk aversion for gains (prefer the certain $50 to a 50/50 chance of $100) and risk seeking for losses (prefer a 50/50 chance of losing $100 to the certain loss of $50).
In practice: loss aversion produces irrational decisions in both directions. People hold losing investments too long ("I can't sell at a loss"), take insufficient risk in positive domains, and make reckless gambles to avoid certain losses. The framing of a decision — as potential gain or potential loss — systematically affects choices even when the objective outcomes are identical.
Reframing: deliberately shifting from loss-frame to gain-frame (or vice versa) to see a decision more completely. "What do I risk by not acting?" is a gain-frame complement to the loss-frame "what do I risk by acting?"
Section 3: Heuristics That Actually Work
Not all cognitive shortcuts are errors. Gerd Gigerenzer's extensive research on "fast and frugal" heuristics demonstrates that certain simple decision rules are accurate, efficient, and adaptive — particularly in domains with regularities the heuristic can exploit.
Recognition Heuristic
When one option is recognized and another is not, infer that the recognized option has higher value on the relevant criterion. This sounds naive — surely knowing more is better. But Gigerenzer demonstrates that the recognition heuristic often outperforms complex algorithms when information is incomplete and recognition is correlated with the criterion.
Classic demonstration: Americans asked to rank German cities by population perform similarly to Germans, because American recognition tracks population reasonably well. German experts, who recognize all the cities and use more complex criteria, don't outperform the Americans' simple heuristic.
The lesson: in domains where environmental regularities are exploitable by simple rules, fast and frugal can outperform deliberate analysis. The skill is knowing which domains have this property.
Take the Best
Among competing predictors, use only the best predictor — ignore the rest. A simple lexicographic rule: rank all available cues by their validity, and take the option favored by the most valid cue. If the top cue differentiates the options, stop; if not, go to the second cue.
Gigerenzer shows this outperforms multiple regression in many domains with sparse data and correlated predictors. The practical takeaway: in complex decisions with many factors, identifying the single most important factor and letting it drive the decision is often more accurate than attempting to weight all factors simultaneously.
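The lexicographic rule above is short enough to write out directly: walk the cues in validity order and stop at the first one that discriminates. The city cues and values below are hypothetical placeholders, not Gigerenzer's actual dataset:

```python
def take_the_best(option_a, option_b, cues):
    """Take-the-best: cues are ordered best-validity-first; each cue maps an
    option to a value or None (unknown). Return the option favored by the
    first discriminating cue, or None if no cue discriminates (guess)."""
    for cue in cues:
        a, b = cue(option_a), cue(option_b)
        if a is not None and b is not None and a != b:
            return option_a if a > b else option_b
    return None

# Hypothetical city-size cues, ranked by assumed validity (best first).
cities = {
    "Hamm":    {"capital": 0, "has_team": 0, "on_intercity_line": 1},
    "Cologne": {"capital": 0, "has_team": 1, "on_intercity_line": 1},
}
cues = [
    lambda c: cities[c]["capital"],
    lambda c: cities[c]["has_team"],
    lambda c: cities[c]["on_intercity_line"],
]
print(take_the_best("Hamm", "Cologne", cues))  # "Cologne" (second cue decides)
```

Note what the rule deliberately ignores: the third cue is never consulted, because the second one already discriminated. That one-reason stopping rule is the "frugal" part.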
The Role of Intuition
Intuition is not the enemy of good decision-making. It is pattern recognition operating below the level of conscious deliberation — the experienced chess master who "sees" the right move without being able to articulate why, the seasoned clinician who notices that something is off before identifying what.
Intuition works well when: the domain has regular structure (patterns exist), the decision-maker has sufficient experience in the domain to have encoded those patterns, and feedback has been reliable and prompt enough for learning to occur. Expert intuition in regularized domains is often trustworthy.
Intuition works poorly when: the domain is irregular (random variation is high), experience is limited, feedback is delayed or ambiguous (as in most strategic business decisions), or when cognitive biases and emotional reactions are mislabeled as intuition.
Gary Klein's Recognition-Primed Decision (RPD) model: experts in high-stakes environments (firefighters, military commanders, ICU nurses) often make rapid decisions not by comparing options but by recognizing the situation as belonging to a familiar category and simulating whether the standard response to that category will work. The deliberative analysis occurs not in option comparison but in mental simulation of the selected option.
The practical implication: intuition's reliability depends on the quality of the learning environment. Experienced practitioners in regularized domains: trust it and verify. First-time decision-makers in novel domains: be more skeptical.
Section 4: Decision-Making Frameworks
The research suggests several frameworks that improve decision quality.
Thinking in Bets
Annie Duke's framing: almost every decision is a bet — a commitment of resources (time, money, attention, opportunity cost) under conditions of uncertainty, with a probability distribution over possible outcomes. Explicitly thinking of decisions as bets shifts attention from "what's the right answer?" (which implies certainty is available) to "what's the best bet given what I know?" (which accepts uncertainty as the condition).
Two implications:
1. Calibration matters more than confidence. The question is not "am I confident?" but "what probability would I assign to this outcome, and is that probability well-calibrated?"
2. Outcome ≠ process quality. A good bet that loses is still a good bet. Learning from it requires separating the quality of the reasoning from the randomness of the outcome.
Pre-Mortem Analysis
Gary Klein's pre-mortem: before committing to a decision, imagine that it is one year later and the decision has produced the worst possible outcome. What went wrong?
This is a disconfirmation exercise applied to decisions: it makes it psychologically easier to identify potential failure modes (the group isn't criticizing a decision it hasn't made yet — it's exploring why a hypothetical failure occurred), overcomes groupthink by legitimizing pessimistic scenarios, and surfaces implementation risks that enthusiasm for the decision might otherwise suppress.
The pre-mortem does not prevent decisions; it improves them by forcing explicit attention to the failure modes before commitment.
Steelmanning the Opposition
Confirmation bias produces the habit of engaging with the weakest version of opposing views (the "straw man"). The antidote is steelmanning: constructing the strongest possible version of the opposing case, the argument that would be most difficult to refute.
In practice: before committing to a decision, articulate the best argument against it. Not the obvious objections (which you've already addressed) but the argument that would be most troubling if it were true. What would a thoughtful, well-informed person who disagreed with your decision say?
The goal is not to be convinced by the steelman — it is to ensure you have genuinely engaged with the strongest opposing argument rather than only with arguments you can easily defeat.
The 10-10-10 Rule
Suzy Welch's 10-10-10 rule: when facing a decision, ask three questions:
- How will I feel about this decision in 10 minutes?
- How will I feel about it in 10 months?
- How will I feel about it in 10 years?
The temporal shifts force attention to different considerations. The 10-minute perspective is emotionally immediate and captures the present-moment experience. The 10-month perspective captures the medium-term consequences — the phase when the decision's direct effects are most salient. The 10-year perspective introduces long-term values and identity questions.
Decisions that feel good at 10 minutes but terrible at 10 years are worth reconsidering. Decisions that feel uncomfortable at 10 minutes but fine at 10 months and 10 years may be worth making despite the short-term discomfort.
Expected Value and Its Limits
Expected value: the probability-weighted average of all possible outcomes. If a decision has a 60% chance of producing $100 and a 40% chance of producing $0, its expected value is $60. Rational decision theory recommends maximizing expected value.
The limits: expected value is the right framework for decisions made repeatedly in a stable environment, where the law of large numbers applies. It is a less adequate framework for:
- One-shot decisions: where the single outcome matters more than the average across many similar decisions
- Skewed downside risks: where the bad outcome is catastrophically bad (bankruptcy, death, irreversible harm)
- Decisions where the quantity is not the only value: expected value is a framework for quantities; many important decisions involve values that aren't reducible to a single quantity
Satisficing under uncertainty: Herbert Simon's insight that real decision-makers don't optimize — they set an aspiration level and choose the first option that meets it. Satisficing is often rational under uncertainty, where searching for the optimal option is costly, information is incomplete, and the aspiration level is set intelligently.
Section 5: Decision-Making in Groups
Many of the most important decisions are made in groups. Group decision-making has both advantages and characteristic failure modes.
Groupthink
Irving Janis's groupthink: the tendency for highly cohesive groups with strong leadership to suppress dissent and converge prematurely on a decision without adequate critical analysis. Classic examples: the Bay of Pigs invasion, the Challenger disaster, various corporate failures where warning signals were available and ignored.
Groupthink conditions: high cohesion, insulation from outside perspectives, directive leadership, high stress with time pressure. Symptoms: illusion of invulnerability, collective rationalization, belief in the group's inherent morality, stereotyping of outgroups, self-censorship of dissent, illusion of unanimity, self-appointed mindguards.
Interventions: designated devil's advocate; structured role for dissent; leader announces position after others (not before); seek outside consultation; pre-mortem analysis before commitment.
Diversity of Perspective
Scott Page's research: cognitively diverse groups (different mental models, frameworks, heuristics) outperform homogeneous groups of individually smarter people on complex, novel problems. The mechanism: different cognitive tools encode and perceive problems differently, generating more solution approaches.
The practical implication: for complex decisions under uncertainty, the optimal group is not the group with the most individual expertise but the group with the most diverse cognitive approaches. This is a different criterion from most team composition logic, which emphasizes individual competence.
Polarization and Deliberation
Group polarization: groups tend to shift toward more extreme positions than the average of individual members would predict. If group members lean toward a risky choice before discussion, group discussion makes the choice more risky; if they lean toward caution, discussion makes it more cautious.
The mechanism: during discussion, members hear more arguments for the direction they already lean toward (because the majority generates more arguments), and status considerations push members to appear appropriately bold or appropriately cautious relative to the group norm.
The implication: group deliberation is not a reliable corrective for biased individual judgment — it may amplify it. Structured dissent, secret ballots before public discussion, and devil's advocate roles are more effective interventions.
Section 6: Values Clarification and the Role of Identity in Decisions
Many difficult decisions are not primarily cognitive problems — they are values conflicts. The question is not "what is true?" but "what do I actually care about?"
Identifying the Decision's Real Difficulty
Before applying analytical frameworks, it is worth asking: why is this decision hard?
Hard because of insufficient information: The decision is hard because you don't know what will happen if you choose A or B. This is an epistemic problem — more information, better analysis, or better prediction tools are the relevant interventions.
Hard because of value conflict: The decision is hard because A serves value X and B serves value Y, and you hold both values but cannot fully satisfy both simultaneously. This is not an information problem; it is a values clarification problem. More data will not resolve it.
Hard because of fear: The decision is hard because the potential downside of getting it wrong is aversive, and the decision-maker is avoiding the commitment that could be wrong. This is the procrastination pattern from Chapter 23 applied to a decision context.
Distinguishing these three sources of difficulty directs the appropriate intervention.
Values Clarification
When the difficulty is values conflict, the intervention is explicit articulation of values and their priority order — not to resolve all conflict definitively but to give the decision a basis that is not just anxiety management.
The values clarification process:
1. List the values that are in tension in this decision
2. For each value, identify what you would be giving up by prioritizing a different value
3. Ask: if you knew the outcome would be the same either way, which decision process would you be most proud of? (Identity question)
4. Ask: in ten years, which set of values do you want to have acted from in your decisions?
The goal is not a values hierarchy that applies universally — different decisions appropriately weight values differently. The goal is explicit awareness of what is actually in tension, so the decision is made from those values rather than from anxiety or inertia.
Regret Minimization
Jeff Bezos's regret minimization framework: project yourself to age 80, looking back at this decision. Which choice would you regret more — taking the risk or not taking it? Most people find that regret of action fades, while regret of inaction persists and compounds.
This is psychologically consistent with the research on regret. Gilovich and Medvec's work finds that in the short term, regrets of action predominate (I shouldn't have said that); in the long term, regrets of inaction predominate (I should have taken that risk, said that thing, made that change).
Section 7: Improving as a Decision-Maker
Decision quality is a learnable skill, but it requires the right kind of practice — specifically, practice with calibrated feedback.
The Forecasting Practice
Philip Tetlock's research on superforecasters identifies the habits that separate people who make unusually accurate predictions from experts who don't:
- Think in probabilities, not certainties: "I'm 70% confident" rather than "I think so" or "I'm sure"
- Actively seek disconfirming evidence
- Update beliefs with new information — neither anchoring stubbornly to prior beliefs nor overreacting to each new data point
- Distinguish inside view from outside view and use both
- Decompose complex problems into component questions that can be analyzed separately
- Maintain calibration by tracking actual accuracy against predicted confidence
Superforecasting is not about being a genius or having special access to information. It is a disciplined epistemic practice available to anyone willing to make specific, trackable predictions and evaluate their accuracy honestly.
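The "maintain calibration by tracking accuracy" habit has a standard scoring rule behind it: the Brier score, the mean squared error between stated probabilities and what actually happened. A minimal sketch, with a hypothetical four-prediction track record:

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and outcome
    (1 if the event happened, 0 if not). Lower is better;
    always saying 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical track record: (stated probability, outcome) pairs.
record = [(0.7, 1), (0.7, 1), (0.9, 0), (0.2, 0)]
print(round(brier_score(record), 4))  # 0.2575 — the 0.9 miss dominates
```

The squared penalty is what makes the practice disciplining: a confident miss (0.9 on an event that didn't happen) costs far more than a hedged one, so the score rewards honest probabilities rather than bold ones.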
Decision Journals
A decision journal records: the decision made, the reasoning behind it, the alternatives considered and rejected, the key uncertainties, the expected outcome, the probability estimate, and the value trade-offs acknowledged. When outcomes eventually arrive, the journal allows comparison of the reasoning to the reality — the feedback mechanism needed for calibrated learning.
Without a decision journal, outcome feedback is processed through resulting: "it worked out, so I was right" or "it didn't work out, so I was wrong." With a journal, the question becomes: "given the reasoning I recorded at the time, was the reasoning good? And was the outcome within the range I expected?"
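The journal's fields map naturally onto a small record type. A sketch, with field names invented for illustration rather than taken from any standard template:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DecisionEntry:
    """One decision-journal record, following the fields the text lists.
    Field names are illustrative, not a standard schema."""
    decision: str
    reasoning: str
    alternatives_rejected: list
    key_uncertainties: list
    expected_outcome: str
    probability_estimate: float          # P(expected outcome), 0 to 1
    value_tradeoffs: str
    logged_on: date = field(default_factory=date.today)
    actual_outcome: Optional[str] = None  # filled in when the outcome arrives

    def review(self, actual: str) -> str:
        """Compare recorded expectation with the eventual outcome."""
        self.actual_outcome = actual
        match = "within" if actual == self.expected_outcome else "outside"
        return (f"Expected '{self.expected_outcome}' at "
                f"p={self.probability_estimate:.0%}; actual '{actual}' "
                f"({match} the expected range).")
```

The point of the structure is the `review` step: because the reasoning and probability were recorded before the outcome, the later comparison evaluates the process, not just the result.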
From the Field: Dr. Reyes on Values Conflicts
The decision-making questions I see most often in clinical work are not cognitive ones. They're values questions that people are treating as if they were cognitive questions.
Someone comes in unable to decide whether to leave a marriage, take a different job, or cut off contact with a family member. They've gathered all the information. They've made pro/con lists. They've consulted friends. They still can't decide. And when I ask what the decision is actually about — what's in conflict — they often arrive at something like: "I don't know who I'm supposed to be in this situation."
That's not a cognitive problem. That's a values and identity problem. The frameworks in this chapter — the analytical tools — are genuinely useful for cognitive problems. For the values questions, the work is different: it's helping people articulate what they actually care about, rather than what they think they should care about.
My experience: people know, more often than they admit, what they want to do. What they don't know is whether they have permission to want it. The decision frameworks can help clarify the reasoning. They rarely change the fundamental values orientation — but they can make it visible.
Research Spotlight: Kahneman and Tversky's Prospect Theory
Amos Tversky and Daniel Kahneman's prospect theory (1979) is the most influential psychological model of decision-making under risk. Key elements:
Reference dependence: people evaluate outcomes relative to a reference point (usually the status quo), not in absolute terms. What counts as a gain or a loss depends on where you started.
Loss aversion: losses are approximately 2–2.5 times more impactful than equivalent gains. The pain of losing $100 exceeds the pleasure of gaining $100 by a substantial margin.
Diminishing sensitivity: the psychological impact of gains and losses diminishes as their magnitude increases. The difference between $100 and $200 feels larger than the difference between $1,100 and $1,200, even though both are $100 differences.
Probability weighting: people overweight small probabilities and underweight large ones. A 1% chance of winning $10,000 feels more than 1/100th as good as a certainty of $100; a 99% chance of winning $10,000 feels less than 99/100ths as good as a certainty of $9,900.
Prospect theory explains a wide range of decision anomalies that expected utility theory cannot — including why people buy lottery tickets and insurance, why they hold losing investments too long, and why they respond differently to identical choices framed as gains versus losses.
Kahneman received the Nobel Prize in Economics in 2002 for this work (Tversky had died in 1996, and the Nobel is not awarded posthumously).
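The four elements above have standard functional forms. A sketch using the parameter estimates from Tversky and Kahneman's later cumulative version of the theory (1992): curvature α ≈ 0.88, loss-aversion coefficient λ ≈ 2.25, weighting parameter γ ≈ 0.61. These particular values are from that follow-up paper, not the 1979 original:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect theory value function: concave for gains, convex and
    steeper for losses (loss aversion via lam), relative to a reference
    point at zero."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities,
    underweights large ones."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

# Loss aversion: losing $100 hurts roughly 2.25x as much as gaining $100.
print(value(100), value(-100))   # magnitudes differ by the factor lam
# Probability weighting: a 1% chance is felt as considerably more than 1%,
# while a 99% chance is felt as less than 99%.
print(weight(0.01), weight(0.99))
```

All four anomalies in the list above fall out of these two curves: reference dependence and diminishing sensitivity from the shape of `value`, loss aversion from `lam`, and the lottery/insurance pattern from `weight` inflating small probabilities.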
Key Terms
| Term | Definition |
|---|---|
| Resulting | Evaluating decision quality by outcome quality rather than reasoning quality |
| Aleatory uncertainty | Fundamental, irreducible randomness in outcomes |
| Epistemic uncertainty | Uncertainty arising from lack of knowledge, potentially reducible with more information |
| Availability heuristic | Judging probability by how easily examples come to mind |
| Representativeness heuristic | Judging probability by similarity to a prototype, ignoring base rates |
| Anchoring | The disproportionate influence of an initial piece of information on subsequent estimates |
| Confirmation bias | The tendency to seek, interpret, and remember information that confirms existing beliefs |
| Sunk cost fallacy | Allowing past irretrievable investments to influence future decisions |
| Overconfidence | Systematic excess of confidence relative to actual accuracy |
| Calibration | The correspondence between stated confidence and actual accuracy |
| Loss aversion | The tendency for losses to loom larger than equivalent gains |
| Pre-mortem | Imagining a decision has failed and working backward to identify what went wrong |
| Groupthink | Premature convergence in cohesive groups that suppresses dissent and critical analysis |
| Regret minimization | A decision framework based on projecting long-term regret of action vs. inaction |
| Prospect theory | A model of decision-making under risk that incorporates reference dependence, loss aversion, and probability weighting |
Common Misconceptions
"More information always improves decisions." More information helps with epistemic uncertainty. It does not help with aleatory uncertainty (irreducible randomness), values conflicts, or decisions distorted by biases — in which case more information may just mean more material to selectively process in confirmation of the prior conclusion.
"Trust your gut." Expert intuition in regularized domains with reliable feedback is trustworthy. Intuition in novel domains, in domains with delayed feedback, or in domains with strong emotional valence (where emotion is mislabeled as intuition) is not reliable. The question is whether the domain has the conditions that allow intuition to be a genuine pattern-recognition system rather than a rationalization of preference or fear.
"A good outcome means it was a good decision." Resulting is the most common decision-learning error. Good outcomes can follow bad reasoning (luck); bad outcomes can follow good reasoning (bad luck). Only the reasoning is within the decision-maker's control.
"Group decisions are better than individual decisions." Group decisions are better when the group has diverse cognitive perspectives, dissent is encouraged, and the leader doesn't announce their position first. Group decisions are worse than thoughtful individual decisions when groupthink conditions obtain.
"I can't be biased — I know about the biases." Knowledge of cognitive biases provides modest protection against them. Awareness is necessary but insufficient. Active debiasing practices — pre-mortems, steelmanning, decision journals, reference class forecasting — are required for meaningful improvement.