Case Study 10-1: Pascal's Mugging and the Limits of Expected Value
Chapter 10 — Expected Value: How Rational People Think About Risk
Theme: When mathematically correct EV calculations produce absurd action recommendations
The Setup
Imagine a stranger approaches you on the street and says: "Give me five dollars, and I will use it to prevent a catastrophe that will kill 10 trillion people in a simulated universe I have access to. I can't prove it, but I genuinely believe this. If there's even a one-in-a-quintillion chance I'm telling the truth, the expected value of giving me the five dollars is enormous."
Do you hand over the five dollars?
Expected value math says: maybe yes.
Let's run the numbers. Suppose you assign just a one-in-a-quintillion probability (10^-18) that this person is telling the truth. Ten trillion deaths prevented would have enormous positive value — let's conservatively value each life at $10 million (a common regulatory figure for the "value of a statistical life"). That's $10 million × 10^13 = $10^20 in avoided harm.
EV = probability × value = 10^-18 × $10^20 = **$100**
So handing over $5 to avoid $100 of expected harm is... a positive EV decision? By a factor of twenty?
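The arithmetic above can be run directly. A minimal sketch, using the chapter's illustrative figures (none of these inputs are calibrated estimates):

```python
def expected_value(probability: float, payoff: float) -> float:
    """Expected value of a single binary outcome."""
    return probability * payoff

p_truthful = 1e-18                           # assigned, not measured
lives_saved = 1e13                           # "10 trillion people"
value_per_life = 1e7                         # $10 million per statistical life
avoided_harm = lives_saved * value_per_life  # $10^20

ev = expected_value(p_truthful, avoided_harm)
print(f"${ev:,.2f}")  # roughly $100, twenty times the $5 asked for
```

The output is driven entirely by `p_truthful`, a number no one can verify.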
This thought experiment — called Pascal's Mugging, presented by philosopher Nick Bostrom in a 2009 paper — is one of the most important and unsettling critiques of naive expected value reasoning ever devised. It reveals the point at which the machinery of EV breaks down spectacularly.
What Pascal's Mugging Actually Argues
Pascal's Mugging takes its name from Blaise Pascal's famous "wager" — an argument that you should believe in God because the infinite reward of heaven makes the bet infinitely positive EV even at very low probability of God's existence.
Bostrom's mugging extends this logic to mundane street interactions: if expected value can justify Pascal's Wager, why can't a sufficiently large claimed consequence force rational EV compliance from any passerby who believes there's any nonzero probability of their story being true?
The problem is not with any single calculation. The problem is with what the framework implies we should do in general. If we accept that tiny probabilities of enormous outcomes always dominate our decision-making:
- **We become trivially manipulable.** Anyone can paralyze or extract value from an EV reasoner by claiming sufficiently large stakes. "Give me $10 or I'll mentally wish for a meteor to hit your city — and I might have telekinetic powers, so..." Even a 10^-30 probability of effective telekinesis, multiplied by a city's worth of lives, generates a massive EV.
- **We can never act on ordinary decisions.** If every conversation you have might trigger a butterfly effect that eventually matters to trillions of beings in some future or simulated universe, every decision has a probability-weighted catastrophic tail. How do you ever prioritize the local, concrete, and actual over the speculative and enormous?
- **The calculation is driven by unverifiable numbers.** In ordinary EV analysis, you can get evidence about probabilities — you can study base rates, run experiments, gather data. But for extreme existential scenarios, there is often no empirical handle on the probabilities at all. You're just making up numbers that range from 10^-10 to 10^-100, and the difference between those numbers matters astronomically.
The Astronomical Stakes Problem
Pascal's Mugging is a specific case of a broader issue in EV reasoning: what to do when outcomes have astronomical magnitude and very low (but nonzero) probability.
This problem shows up in several genuinely important real-world contexts:
Existential Risk and AI Safety
Some researchers argue that the probability of artificial superintelligence causing human extinction within the next century might be somewhere between 1% and 50%. Even at 1%, the "expected value" of preventing this would be enormous — potentially trillions of lives over millions of years of human civilization.
This leads some EV reasoners to conclude that existential risk reduction should dominate all other philanthropic priorities. If you could spend $1 billion reducing AI extinction risk by even 0.001%, the EV calculation might suggest this is worth more than preventing millions of deaths from malaria, which is more certain and calculable.
This is not obviously wrong. But it is also not obviously right. The problem is that we have much better calibration on "how many people die from malaria this year" than on "what is the probability of AI extinction in 100 years." When one side of the comparison is solid evidence and the other is speculation, multiplying speculation by astronomical stakes produces a number that feels significant but is epistemically hollow.
Nuclear Risk and Policy
The same logic applies to nuclear war risk. If the probability of nuclear exchange in any given decade is 2%, and a full nuclear exchange might kill 2 billion people and cause civilizational collapse, the annual expected death toll from nuclear risk is immense. Should this dominate all other policy priorities?
Many argue yes, in principle. The practical difficulty is that "2% per decade" is itself a guess — other analysts say 0.5%, others say 10%. The spread from 0.5% to 10% is a factor of twenty, more than an order of magnitude, and that uncertainty multiplies straight through to a matching spread in the EV calculation.
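That propagation can be made concrete. A quick sketch using the figures above (2 billion deaths; per-decade probabilities converted to per-year by simple division, which is a rough approximation):

```python
DEATHS_IF_EXCHANGE = 2e9  # illustrative full-exchange death toll

def annual_expected_deaths(p_per_decade: float) -> float:
    """Expected deaths per year, approximating annual risk as decade risk / 10."""
    return (p_per_decade / 10) * DEATHS_IF_EXCHANGE

for p in (0.005, 0.02, 0.10):  # the 0.5%, 2%, and 10% analyst guesses
    print(f"{p:.1%}/decade -> {annual_expected_deaths(p):,.0f} expected deaths/year")
```

A factor-of-twenty spread in the input becomes a factor-of-twenty spread in the output; nothing in the formula dampens the uncertainty.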
The Simulation Argument
Philosopher Nick Bostrom (the same person who coined "Pascal's Mugging") also proposed the simulation argument: the probability that we live in a computer simulation may be quite high. If we do, there may be trillions of simulated people in nested simulations, and our actions in this simulation might affect all of them.
This leads to truly vertiginous EV calculations. If every action you take affects not just the few thousand people in your physical vicinity but potentially quintillions of simulated beings, then every action has an astronomical EV tail. How do you make decisions in such a framework?
You mostly can't — which is precisely why Pascal's Mugging is a problem.
Why Mathematicians Reject Pascal's Mugging
The philosophical response to Pascal's Mugging is not unanimous, but several powerful objections have been developed:
1. The Garbage In, Garbage Out Problem
The EV formula is mathematically sound. The problem is the inputs. When you assign a probability of 10^-18 to the mugger's claim, you are not making a calibrated estimate — you're making a number up. And the EV output is almost entirely driven by that made-up number.
Statisticians call this garbage in, garbage out. If your probability estimate has an uncertainty range of ten orders of magnitude (could be 10^-18, could be 10^-28), then your EV estimate has a corresponding range of ten orders of magnitude. That range is so large that the EV calculation carries almost no information.
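The point fits in a few lines: with the payoff fixed at the mugger's claimed stakes, the made-up probability is the whole calculation.

```python
import math

payoff = 1e20                 # the mugger's claimed stakes in dollars

p_high, p_low = 1e-18, 1e-28  # both pure guesses, neither calibrated
ev_high = p_high * payoff     # about $100: "obviously pay the $5"
ev_low = p_low * payoff       # about $0.00000001: "obviously refuse"

spread = math.log10(ev_high / ev_low)
print(spread)  # 10 orders of magnitude between two equally defensible answers
```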
2. The Complexity Discount
One principled response, proposed by various philosophers and decision theorists, is the "complexity discount" — the idea that more elaborate claims should receive exponentially lower prior probability. A mugger claiming they can prevent 10 trillion deaths has a vastly more complex and specific claim than a mugger claiming they need $5 for bus fare. Bayesian reasoning applied to the complexity of the claim might reduce the probability so rapidly that no stated magnitude ever overcomes it.
This is a viable solution, but it requires specifying how rapidly complexity reduces probability — and that specification is not settled.
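One hypothetical way to cash this out is a discount that scales with the claimed magnitude itself rather than with description length. The function names and the base prior below are illustrative devices, not a settled rule:

```python
def discounted_prior(base_prior: float, claimed_lives: float) -> float:
    """Shrink the prior in proportion to the claimed scale of impact."""
    return base_prior / claimed_lives

def mugging_ev(base_prior: float, claimed_lives: float,
               value_per_life: float = 1e7) -> float:
    p = discounted_prior(base_prior, claimed_lives)
    return p * claimed_lives * value_per_life

# The claimed magnitude cancels: no stated number of lives overcomes the discount.
print(mugging_ev(1e-6, 1e13))  # trillions of lives claimed...
print(mugging_ev(1e-6, 1e30))  # ...or vastly more: same EV either way
```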
3. The Bounded Utility Approach
Another response is to adopt a bounded utility function — one that simply does not grow linearly with the number of lives or dollars beyond a certain scale. Bounded utility says that "preventing 10 trillion deaths" is not meaningfully better than "preventing 1 trillion deaths" beyond some psychological ceiling.
This resolves the mathematical problem but requires conceding that we don't actually value lives linearly — which has uncomfortable implications for how we think about mass casualty events.
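A minimal sketch of what a bounded utility function looks like, with an arbitrary ceiling and scale chosen purely for illustration:

```python
import math

U_MAX = 1e9   # utility ceiling, arbitrary units
SCALE = 1e6   # scale (in lives) at which utility is well into its rise

def bounded_utility(lives: float) -> float:
    """Roughly linear for small stakes, saturating at U_MAX for large ones."""
    return U_MAX * (1 - math.exp(-lives / SCALE))

# Near the ceiling, ten trillion lives and one trillion lives are
# indistinguishable, which is exactly the uncomfortable concession.
print(bounded_utility(1e12))
print(bounded_utility(1e13))
```

Multiplied by the mugger's 10^-18, even the bounded utility of the largest claim is negligible, so the $5 demand never wins.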
4. Practical Rationality vs. Theoretical Rationality
Perhaps most compellingly, many philosophers argue that what Pascal's Mugging reveals is not a flaw in EV theory but a gap between theoretical and practical rationality. Theoretical EV says: maximize expected value. Practical rationality says: do so in a world where probabilities must be genuinely estimable, where manipulation is possible, and where the overhead cost of evaluating every speculative claim is itself a real cost.
In a world of limited cognitive resources, refusing to engage with unverifiable astronomical claims is practically rational even if theoretically suboptimal. The mugger's demand should be rejected not because the EV math is wrong, but because opening the door to that kind of reasoning has negative expected value in terms of its systematic effects on how you allocate attention and resources.
The Limits of Expected Value: What This Tells Us About Luck
Pascal's Mugging matters for the science of luck in several ways:
First, it demonstrates that EV is a framework, not an oracle. Expected value calculations are only as good as their probability inputs, and for extreme low-probability scenarios, those inputs are often noise. The more extreme and unverifiable the claimed outcome, the less reliable the EV calculation that results from it.
Second, it highlights the difference between calibrated and uncalibrated probability estimates. Good EV reasoning requires calibrated probabilities — estimates grounded in evidence, base rates, and verifiable information. Uncalibrated probabilities (pure guesses) should receive less weight, not more, in decision-making, even when multiplied by astronomical stakes.
Third, it yields a practical rule of thumb: when a decision requires assigning probability to scenarios that are both low-probability and unverifiable, limit yourself to low-cost actions. Don't bet large amounts of your resource base on speculative astronomical returns. This is a practical modification of Kelly: when probability estimates are uncertain, bet less.
Fourth, it has genuine applications to real decisions about existential and long-term risks. The reasonable response to Pascal's Mugging is not to dismiss all long-term existential thinking — genuine tail risks like pandemics, climate change, and nuclear war have real evidence bases and matter. The response is to be appropriately skeptical of astronomically-scaled claims with no empirical grounding, while taking seriously those tail risks that do have evidence behind their probability estimates.
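The "bet less when estimates are uncertain" rule from the third point can be sketched as a modification of standard Kelly sizing. The confidence multiplier below is an illustrative device, not a standard formula:

```python
def kelly_fraction(p_win: float, odds: float) -> float:
    """Classic Kelly: fraction of bankroll to bet at the given net odds."""
    return max(0.0, p_win - (1 - p_win) / odds)

def cautious_kelly(p_win: float, odds: float, confidence: float) -> float:
    """Scale the Kelly bet by how calibrated the probability estimate is
    (confidence in [0, 1]); pure guesses get almost nothing."""
    return confidence * kelly_fraction(p_win, odds)

full = kelly_fraction(0.6, 2.0)         # calibrated 60% at 2:1 -> bet 40%
guess = cautious_kelly(0.6, 2.0, 0.05)  # same numbers, uncalibrated -> bet 2%
print(full, guess)
```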
Research Spotlight: The Actual Psychology of Low-Probability High-Magnitude Events
Research by Kahneman and Tversky (1979) and subsequent prospect theory work showed that people do NOT respond to low-probability events in a way consistent with EV reasoning. Instead:
- People tend to overweight small probabilities when thinking about lottery-style wins or catastrophes they can vividly imagine.
- People tend to underweight or dismiss very small probabilities (like 10^-18) that are too small to be psychologically meaningful.
This means human intuition fails in both directions relative to EV: we overreact to small but vivid risks (terrorism, plane crashes) and dismiss tiny but abstractly described risks (certain existential scenarios). The lesson is not that we should always follow EV math — it's that we need calibrated probability estimates AND appropriate skepticism about extreme-scale claims before EV reasoning produces reliable guidance.
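The overweighting half of this pattern has a standard functional form: the probability-weighting function from Tversky and Kahneman's cumulative prospect theory (1992), with their fitted gamma of about 0.61 for gains.

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight a person actually applies to a stated probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

print(weight(0.01))  # about 0.055: a 1% chance gets the weight of roughly 5%
```

The smooth curve captures only the first failure mode; truly negligible probabilities like 10^-18 are typically edited to exactly zero rather than weighted, which no continuous formula represents.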
Discussion Questions
- **The Line Problem.** Where do you draw the line between "low-probability tail risk worth considering" and "speculative astronomical claim to be dismissed"? Propose a heuristic that distinguishes between them.
- **Existential Risk.** Climate change advocates, AI safety researchers, and pandemic preparedness experts all argue that low-probability catastrophic risks deserve disproportionate attention. How much of their argument is justified EV reasoning, and how much risks the Pascal's Mugging error?
- **Everyday Applications.** Can you think of a small-scale version of Pascal's Mugging in everyday life — a situation where technically correct EV reasoning would produce an obviously bad practical decision? (Hint: think about elaborate justifications for procrastinating, or for spending excessive time on unlikely scenarios.)
- **The Mugger Meets Marcus.** Imagine Marcus is pitched by a startup investor who claims: "This technology has a one-in-a-million chance of curing all diseases and being worth $10 trillion. So you should invest your entire startup fund in it." Using the concepts in this case study, how should Marcus respond?
Key Takeaway
Expected value is a powerful tool, but it requires calibrated probability inputs to produce reliable outputs. When probabilities are genuinely unknowable and outcomes are astronomically large, the product of the two is more noise than signal. Practical rationality means applying EV reasoning where evidence exists, while treating unverifiable astronomical claims with appropriate skepticism — even when the math seems to say otherwise.
Luck, well understood, is not about maximizing every EV calculation to its logical extreme. It is about systematically improving the quality of decisions in domains where probabilities are actually estimable, information is actually available, and outcomes are actually grounded in reality.
The mugger on the street is offering you a lesson, not a bet: know the limits of your framework before you trust its output.