Chapter 10: Expected Value — How Rational People Think About Risk

"You can make the perfect decision and still lose. You can make the worst decision and still win. The goal is never to guarantee a good outcome. The goal is to guarantee a good process — one that wins more often than it loses over time."

— Dr. Yuki Tanaka, at her kitchen table, dealing the first hand


The Poker Night Invitation

The text arrived on a Thursday evening, while Marcus was debugging a session-timeout bug in his chess tutoring app.

"Poker night at my place. Saturday. 7pm. Bring $20 and your ego, because I'm going to take both. — Dr. Yuki"

Marcus stared at the message. He had met Dr. Yuki Tanaka at her opening lecture six weeks earlier — the one where she'd said "luck is not a force, it's an outcome" and then calmly destroyed every assumption Marcus had about his own chess success. Since then, she'd answered exactly three of his emails, each with more questions than answers. This was the first time she'd invited him anywhere.

He texted back: "I don't really know how to play poker."

Her reply came in eleven seconds: "Good. Neither do most people who think they do."


Saturday night. Dr. Yuki's apartment was smaller than Marcus had imagined for a behavioral economist at a research university — cluttered in the way that smart people's spaces often are, with academic journals stacked next to takeout menus and a whiteboard that read VARIANCE IS NOT YOUR ENEMY in red dry-erase marker. Three other people were already seated at the kitchen table: a postdoc named Felix, a law student named Amara, and Priya, who Marcus recognized from Dr. Yuki's lecture and who gave him a small wave that said I have no idea what I'm doing here either.

Dr. Yuki set a stack of poker chips in front of each of them. "Welcome. Before we deal, one rule. Every decision you make tonight, you have to be able to explain why in terms of expected value. Not 'I had a feeling.' Not 'this hand felt lucky.' The math behind the choice. Can everyone agree to that?"

The table agreed.

"Good." She dealt the first hand. "Then let's talk about how rational people think about risk."


Marcus picked up his cards — a seven of clubs and a king of diamonds — and immediately realized he had no idea what to do with them. He glanced sideways at Priya, who was studying her own hand with the same studied blankness. Felix, the postdoc, was watching the table with quiet alertness. Amara was already reaching for a chip.

"Before anyone bets," Dr. Yuki said, "let's pause. What information do you have right now?"

"Two hole cards," Marcus said. "And no information about anyone else."

"So what can you actually calculate?"

"Nothing, really. I don't know what the flop will be. I don't know what anyone else has."

"Exactly. You're operating under deep uncertainty. Which means you're doing what humans always do in deep uncertainty — you're pattern-matching to your cards and getting a feeling. King-high feels decent. Seven-high doesn't. But feelings aren't math. What would be math?"

Felix cleared his throat. "You'd need to know the probability distribution of outcomes — what range of community cards might come, what hands your position might lead to, what hands your opponents might hold."

"And given that you can't know all of that," Dr. Yuki said, "you approximate. You use base rates, historical data, and position. That approximation — imperfect but structured — is what professionals use instead of feelings. It's expected value under uncertainty." She tapped the table. "First bet. Go."


What Is Expected Value?

Expected value (EV) is one of the most powerful concepts in all of mathematics — and one of the most misunderstood in everyday life. At its core, it answers a simple question: If you took this gamble (or made this decision) many, many times, what would the average outcome be?

The formal definition is straightforward:

Expected Value = Sum of (Probability of Outcome × Value of Outcome) for all possible outcomes

Or in mathematical notation:

EV = Σ (P(x) × V(x))

Where P(x) is the probability of outcome x, and V(x) is its value.

Let's make this concrete with the simplest possible example: a fair coin flip.

  • Outcome 1: Heads. You win $10. Probability: 0.5
  • Outcome 2: Tails. You lose $10. Probability: 0.5

EV = (0.5 × $10) + (0.5 × −$10) = $5 − $5 = $0

This bet has an expected value of zero. That means if you flipped this coin thousands of times, you'd expect to end up roughly where you started. Neither good nor bad — a "fair bet."

Now suppose someone offers you a slightly different deal: heads, you win $12; tails, you lose $10.

EV = (0.5 × $12) + (0.5 × −$10) = $6 − $5 = +$1

This bet has a positive expected value of +$1. If you could take this bet hundreds of times, you'd expect to profit, on average, $1 per flip. A rational player should take this bet.

Now suppose the deal reverses: heads, you win $8; tails, you lose $10.

EV = (0.5 × $8) + (0.5 × −$10) = $4 − $5 = −$1

Negative EV. A rational player should decline.

This is the foundational insight of expected value thinking: decisions are evaluated by their average outcome across all possibilities, not by any single result.
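The formula can be sketched in a few lines of Python; applying it to the three coin-flip bets above reproduces the arithmetic exactly:

```python
# A minimal sketch of the EV formula: sum over all outcomes of P(x) * V(x).
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs covering all possibilities."""
    return sum(p * v for p, v in outcomes)

fair_bet = [(0.5, 10), (0.5, -10)]   # heads +$10, tails -$10
good_bet = [(0.5, 12), (0.5, -10)]   # heads +$12, tails -$10
bad_bet  = [(0.5,  8), (0.5, -10)]   # heads +$8,  tails -$10

print(expected_value(fair_bet))   # 0.0
print(expected_value(good_bet))   # 1.0
print(expected_value(bad_bet))    # -1.0
```

The function is deliberately generic: any decision that can be written as a list of (probability, value) pairs can be run through it.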


A Worked Example: The Startup Pitch Contest

To make EV concrete outside the poker table, consider a scenario Marcus actually faced in real life.

A regional startup pitch competition offered a $5,000 prize to first place. Entry required a $150 application fee and two full weekends of preparation. Marcus estimated (based on the number of applicants announced and his honest self-assessment) that he had roughly a 12% chance of winning.

Should he enter?

Approach 1: Naive gut check. "$5,000 is a lot. I should try." Or: "I probably won't win, so it's not worth it." Neither is reasoning — both are feelings.

Approach 2: EV calculation. First, identify all outcomes:

  • Win ($5,000 prize, minus $150 fee = net +$4,850): probability 12%
  • Lose (lose $150 fee): probability 88%

EV = (0.12 × $4,850) + (0.88 × −$150)
EV = $582 − $132 = +$450

The expected monetary value of entering is +$450. That's a strong positive signal.

But wait — we should also account for the non-monetary costs. Two full weekends is roughly 32 hours. At Marcus's current freelance rate of $25/hour, that's $800 of opportunity cost in time alone.

Adjusted EV = +$450 − $800 = −$350

When you account for the cost of time, the decision looks different. The pitch contest might still be worth entering if Marcus values the non-monetary benefits (practice pitching, exposure, community connection) at more than $350. But the structured calculation reveals what the gut check misses: time is a cost, and that cost changes the sign of the decision.

This is EV thinking in practice — not as a veto on intuition, but as a tool that makes hidden costs and probabilities visible.
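The same calculation, scripted. The 12% estimate, the fee, and the $25/hour rate are the chapter's own numbers:

```python
# The pitch-contest decision as a reusable EV calculation.
def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

win_prob = 0.12
prize, fee = 5000, 150

monetary_ev = expected_value([(win_prob, prize - fee), (1 - win_prob, -fee)])
print(monetary_ev)   # 450.0

# Opportunity cost of time: two weekends ~ 32 hours at $25/hour.
time_cost = 32 * 25
adjusted_ev = monetary_ev - time_cost
print(adjusted_ev)   # -350.0  -- the sign of the decision flips
```

Notice that the opportunity cost is applied after the probabilistic part: the time is spent with certainty, regardless of whether Marcus wins.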


EV in Poker: Dr. Yuki's First Lesson

At Dr. Yuki's table, after the first hand, Felix had won $8 despite making what appeared to be a reckless call.

"Felix won," Dr. Yuki said. "Does anyone think Felix made a good decision?"

"He put half his stack in with a draw," Amara said. "That doesn't seem smart."

"Let's check." Dr. Yuki pulled out a notepad. "Felix, you called a $6 bet into a $12 pot. You had a flush draw — roughly a 36% chance of completing on the next card. What did you need to justify that call?"

Felix, who clearly had done this before, said: "I needed at least 2:1 odds. The pot was giving me 2:1 exactly — I was getting $12 to call $6. So the call was correct."

"The call was mathematically correct," Dr. Yuki agreed. "And he lost the hand. But over the long run, if you make that exact call every time those numbers appear, you break even or better. Tonight, variance was kind to Felix. But his decision was sound regardless of the outcome."

She looked around the table. "This is the fundamental discipline of EV thinking. You cannot control outcomes. You can control the quality of your decisions. And quality means: positive expected value."
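Felix's pot-odds reasoning reduces to one line of arithmetic: the break-even equity for a call is the call size divided by the total pot after calling. A minimal sketch:

```python
# Break-even win probability for a pot-odds call: call / (pot + call).
def breakeven_equity(call, pot):
    """Minimum win probability needed for a call to be at least EV-neutral."""
    return call / (pot + call)

need = breakeven_equity(call=6, pot=12)
print(round(need, 3))   # 0.333 -- Felix needs to win one time in three
print(0.36 >= need)     # True  -- a ~36% flush draw clears the bar
```

With 36% equity against a 33.3% requirement, the call is slightly better than break-even over the long run, exactly as Dr. Yuki says.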


Why EV Does Not Mean Guaranteed Value

Here is where most people go wrong with expected value: they treat it as a prediction about a single event rather than an average across many events.

If you flip a fair coin once, you cannot "expect" to get heads half the time. You'll either get heads or tails. The expected value of zero applies to the distribution of outcomes over a large number of trials. In any single trial, you'll get one specific result — and it may be very far from the expected value.

This is what statisticians call variance — the spread of outcomes around the expected value.

Consider two bets, both with EV = +$1:

  • Bet A: Win $2 with 50% probability, lose $0 with 50% probability.
  • Bet B: Win $1,001 with 0.1% probability, lose $0 with 99.9% probability.

Both have the same expected value. But Bet B has enormous variance — most of the time you'll win nothing, and occasionally you'll win a huge amount. Bet A is much more predictable.
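One way to quantify "more predictable" is the standard deviation of each bet. A small sketch (the means are essentially identical; the spreads are not):

```python
# Mean and standard deviation of a discrete bet.
def mean_and_std(outcomes):
    mean = sum(p * v for p, v in outcomes)
    var = sum(p * (v - mean) ** 2 for p, v in outcomes)
    return mean, var ** 0.5

bet_a = [(0.5, 2), (0.5, 0)]
bet_b = [(0.001, 1001), (0.999, 0)]

print(mean_and_std(bet_a))   # (1.0, 1.0)
print(mean_and_std(bet_b))   # mean ~1.0, standard deviation ~31.6
```

Bet B's standard deviation is roughly thirty times Bet A's: same average, wildly different experience along the way.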

For a rational decision-maker with unlimited bankroll and many opportunities to bet, both are equally attractive. But in the real world, variance matters enormously because:

  1. You may not have enough trials for the law of large numbers to operate. If you can only take a bet once, high variance bets are riskier even at the same EV.

  2. Catastrophic losses are different from ordinary losses. If Bet B's downside was "lose your house," the EV calculation misses something crucial: survival matters.

  3. Time matters. If you go bankrupt on trial #7, you can't play trial #8.

Dr. Yuki explained this to Marcus with a poker metaphor: "Imagine I offered you a bet. Flip a coin. Heads, you win $10,000. Tails, you lose everything you have — your car, your savings, your phone, your startup. The EV might be positive if $10,000 is more than you own. Should you take it?"

Marcus said no immediately.

"Why not?"

"Because if I lose, I can't recover. The startup is gone. School is harder. I'd have to start over from nothing."

"Exactly. Some losses aren't just losses. They're catastrophes — events that remove you from the game entirely. Expected value alone doesn't capture that. We need to add something."

That something is called utility theory — and it's the bridge between the math of expected value and the psychology of real human decision-making.


Research Spotlight: Variance and Decision Quality

In a landmark 1994 paper, psychologists Leda Cosmides and John Tooby argued that human reasoning evolved for environments where repetition was rare and outcomes were direct. Our ancestors didn't take the same bet thousands of times under controlled conditions. They made irreversible, one-shot decisions about predators, food, and shelter.

This evolutionary history may help explain why we're poor at abstracting across many trials (the domain where EV thinking thrives) but reasonably good at avoiding catastrophic single events (the domain where variance aversion is adaptive). The practical implication: EV thinking is a learned skill that goes against certain cognitive defaults. It requires deliberate practice precisely because it isn't natural.

Professional decision-makers — poker players, financial analysts, clinical trial designers — invest hundreds of hours learning to suppress the intuitive single-outcome frame and replace it with the probabilistic multi-outcome frame. The good news: the research also shows this skill is highly transferable. People trained in one domain of probabilistic reasoning (say, poker) perform measurably better at unrelated probabilistic reasoning tasks.


Utility Functions: When $100 Is Not Always Worth $100

In the eighteenth century, a Swiss mathematician named Daniel Bernoulli posed a deceptively simple question: why would a rational person ever pay to reduce risk, even when the insurance costs more than the expected loss?

His answer: people don't value money linearly. An extra $100 when you have $1,000 feels meaningfully different from an extra $100 when you have $100,000. The same dollar amount produces different utility — different psychological and practical value — depending on your starting position.

This gave birth to the concept of the utility function: a curve that maps money (or any other outcome) to its true subjective value.

For most people, the utility function for money is concave — meaning each additional dollar provides less additional utility than the previous one. This is called diminishing marginal utility. Going from $0 to $1,000 feels much bigger than going from $999,000 to $1,000,000.

The implications for decision-making are profound:

When utility is concave:

  • A guaranteed $50 may be preferred to a 50% chance of $100, even though both have the same EV.
  • Paying for insurance (which has negative EV) makes rational sense, because the loss you're insuring against has outsized negative utility.
  • Variance reduction (predictability) has genuine value beyond the expected value calculation.

This is why rational people sometimes take negative EV bets (insurance, warranties, hedges) and sometimes reject positive EV bets (highly volatile investments when you can't afford the downside).


Visualizing Utility: A Worked Comparison

Imagine two people, Alex and Blake, both offered the same bet:

  • 50% chance to win $200
  • 50% chance to lose $100
  • EV = (0.5 × $200) + (0.5 × −$100) = +$50

The EV is positive. Both should take it, by naive EV logic.

But now consider their situations:

  • Alex has $20,000 in savings. The $100 loss is barely noticeable. The $200 win is a nice dinner. Utility impact: small in both directions. Alex takes the bet comfortably.
  • Blake has $150 in their bank account. The $100 loss means they can't cover rent this week. The $200 win is genuinely transformative. Utility impact: enormous in the loss direction, significant in the win direction.

Even though the monetary EV is +$50, Blake's *utility EV* might be negative — because the utility of losing $100 from a $150 base is catastrophically worse than the utility of gaining $200. Blake should decline, or at minimum be deeply hesitant, despite the positive monetary EV.

This is not irrationality. It is precisely correct reasoning under a concave utility function. The rational question is never "is the monetary EV positive?" It is always "is the utility EV positive, given my current position and risk tolerance?"
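A common textbook stand-in for a concave utility function is the natural log of wealth. Under that assumption (ours, not a claim about Alex or Blake's actual psychology), the numbers split exactly as described:

```python
import math

# Log-utility sketch: utility = ln(wealth). The same +$50-EV bet looks
# different depending on the bettor's starting wealth.
def expected_utility_of_bet(wealth, outcomes):
    return sum(p * math.log(wealth + v) for p, v in outcomes)

bet = [(0.5, 200), (0.5, -100)]   # the Alex/Blake bet from above

for name, wealth in [("Alex", 20_000), ("Blake", 150)]:
    take = expected_utility_of_bet(wealth, bet)
    decline = math.log(wealth)    # utility of standing pat
    print(name, "take" if take > decline else "decline")
# Alex take
# Blake decline
```

Same bet, same monetary EV, opposite rational answers: the concavity of ln(wealth) is doing all the work.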


The St. Petersburg Paradox: When EV Breaks Down Completely

Bernoulli's most famous illustration of this problem is called the St. Petersburg Paradox. We'll give it a full treatment in Chapter 11, but here's a preview.

The game: Flip a fair coin repeatedly until it comes up tails. Count the number of flips it took. You receive $2^n, where n is the number of flips.

  • 1 flip (tails on first): You win $2. Probability: 1/2.
  • 2 flips (heads, then tails): You win $4. Probability: 1/4.
  • 3 flips: You win $8. Probability: 1/8.
  • And so on.

What is the expected value?

EV = (1/2 × $2) + (1/4 × $4) + (1/8 × $8) + ...
EV = $1 + $1 + $1 + ... = infinite

The expected value is infinite. By pure EV logic, you should pay any finite amount to play this game. But virtually no one would pay $1,000 to play — and most wouldn't pay $100.

Why not? Several reasons, but the core one is utility. The marginal value of winning $2^30 ($1 billion) instead of $2^29 ($500 million) is essentially zero to most people. The utility function flattens out at high wealth levels. So even though the dollar EV is infinite, the utility EV is finite — and it turns out to be surprisingly small.

The St. Petersburg Paradox is one of the most important results in decision theory because it shows that raw expected value, calculated in dollars, is not always the right objective function. Sometimes you need to think in utility.
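A short script makes both claims concrete: truncating the sum after n terms gives a dollar EV of exactly n dollars (one per term, growing without bound), while the expected log of the payout — a crude stand-in for utility that ignores baseline wealth — converges:

```python
import math

# Truncated St. Petersburg sums: each term contributes (1/2^n) * 2^n = $1 to
# the dollar EV, but only (1/2^n) * n*ln(2) to the expected log payout.
def st_petersburg(terms):
    dollar_ev = sum((0.5 ** n) * (2 ** n) for n in range(1, terms + 1))
    log_ev = sum((0.5 ** n) * math.log(2 ** n) for n in range(1, terms + 1))
    return dollar_ev, log_ev

for terms in (10, 30, 100):
    print(terms, st_petersburg(terms))
# The dollar EV keeps growing (10, 30, 100, ...), while the log EV
# approaches 2*ln(2) ~ 1.386 -- a certainty equivalent of only about $4
# under this simple log-of-payout utility.
```

The "$4" figure depends on the specific (and simplistic) utility assumption; Bernoulli's point survives any concave choice: the utility EV is finite even though the dollar EV is not.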


The Kelly Criterion: Optimal Bet Sizing

Marcus had won a small pot with a strong hand. He was feeling confident. He bet too much on the next hand and lost it all.

Dr. Yuki smiled. "You just learned why bet sizing matters as much as bet selection."

Even when you've correctly identified a positive EV bet, how much you bet matters enormously. Bet too little, and you leave value on the table. Bet too much, and variance destroys you before the math can work in your favor.

The mathematical solution to this problem was discovered by John L. Kelly Jr. in 1956 while working at Bell Labs. He derived the Kelly Criterion: a formula for the optimal fraction of your bankroll to bet on any positive EV opportunity.

Kelly Formula:

f* = (bp - q) / b

Where:

  • f* = the fraction of your bankroll to bet
  • b = the net odds received (what you win per dollar wagered)
  • p = probability of winning
  • q = probability of losing (= 1 − p)

Example: You have a 60% chance of winning, and the bet pays even money (win $1 per $1 wagered). Should you bet your entire bankroll?

f* = (1 × 0.60 − 0.40) / 1 = 0.20

The Kelly Criterion says to bet 20% of your bankroll. Not 100%, even though you have a clear edge.

Why? Because even with a 60% win rate, strings of losses are inevitable. If you bet 100% every time, a single loss bankrupts you. The Kelly Criterion balances growth rate against ruin risk, and it can be mathematically proven to maximize the logarithm of wealth over time — which corresponds to maximizing long-run growth.

Practical implications:

  • Never bet your entire bankroll on even the best positive EV opportunity. Ruin is irreversible.
  • The stronger your edge, the more you should bet — but there's an optimal ceiling.
  • Kelly is derived for a known probability. In real life, you rarely know p with certainty, so many practitioners use "half Kelly" (bet half the calculated amount) as a safety margin against estimation error.
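A minimal implementation of the formula, including the half-Kelly safety margin:

```python
# Kelly fraction: f* = (b*p - q) / b, clamped at zero for -EV bets.
def kelly_fraction(b, p, half=False):
    """b: net odds (profit per $1 staked); p: probability of winning."""
    q = 1 - p
    f = max((b * p - q) / b, 0.0)   # a negative Kelly fraction means: don't bet
    return f / 2 if half else f

print(round(kelly_fraction(b=1, p=0.60), 3))             # 0.2 -- the even-money example
print(round(kelly_fraction(b=1, p=0.60, half=True), 3))  # 0.1
```

The clamp at zero encodes an important property: Kelly never tells you to bet on a negative-EV opportunity, no matter the odds.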

Priya, watching from across the table, raised her hand. "Does this apply to career decisions? Like, how much of my effort should I put into any one job application?"

Dr. Yuki pointed at her. "That is exactly the right question. And we'll spend the rest of this chapter on it."


Kelly in Action: Marcus and the Marketing Blitz

After Dr. Yuki's comment, Marcus found himself doing Kelly calculations in his head about his startup decisions. He pulled out his phone and started sketching numbers.

His chess tutoring app had roughly $3,000 in the bank — money earned from app subscriptions and his own tournament winnings reinvested. He was considering a paid advertising blitz on social media platforms, spending $1,500 on targeted ads aimed at middle-school chess parents.

His estimates:

  • Probability the campaign breaks even or better: 55%
  • Net gain if it works well: +$4,000 (new subscriptions)
  • Net loss if it flops: −$1,500 (the ad spend)

EV = (0.55 × $4,000) + (0.45 × −$1,500) = $2,200 − $675 = +$1,525

Strong positive EV. He should do it, right?

Kelly Criterion: b (net odds if successful) = $4,000/$1,500 ≈ 2.67. p = 0.55. q = 0.45.

f* = (2.67 × 0.55 − 0.45) / 2.67 = (1.47 − 0.45) / 2.67 = 1.02 / 2.67 ≈ 0.38

Kelly says he should commit about 38% of his bankroll. His bankroll is $3,000, so the optimal bet is about $1,140 — not the full $1,500 he'd planned.

"Why not just spend the full $1,500 if the EV is positive?" he asked Dr. Yuki.

"Because your probability estimate is soft," she said. "You said 55%. But how confident are you in that 55%? If it's really 40%, your EV is negative and you're risking nearly half your startup's cash reserves. The Kelly Criterion is telling you to be humble about your own estimates. You know less than you think you know."

Marcus sat with that for a moment. "So half Kelly would be $570."

"Which might feel like you're leaving upside on the table. You are. But what you're buying with that conservatism is survivability — the ability to run the experiment again if this one fails."

"And if I go broke on trial seven, there's no trial eight."

"Now you understand Kelly."


The Risk Tolerance Question: When Should You Take Negative EV Bets?

Here is one of the most counterintuitive truths in all of decision theory:

Sometimes it is rational to take bets you expect to lose.

This sounds absurd until you understand the role of variance and the asymmetry of outcomes. There are several situations where accepting negative EV makes sense:

1. When You're Buying Downside Protection (Insurance)

Health insurance, car insurance, renter's insurance — virtually all consumer insurance has negative expected value. The insurance company builds a profit margin into the premium. If you buy insurance over your lifetime, you'll almost certainly pay more in premiums than you receive in claims.

And yet it's rational to buy it. Why?

Because the loss you're insuring against — a medical catastrophe, a totaled car, a house fire — has catastrophic personal impact that goes far beyond the dollar amount. The utility loss from being uninsured and experiencing a $200,000 medical event is vastly greater than the utility cost of paying $3,000/year in premiums.
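A quick log-utility check shows how a negative-EV policy can still be the rational buy. The wealth level and loss probability below are illustrative assumptions of ours, not the chapter's; only the $3,000 premium and $200,000 loss echo the text:

```python
import math

# Hypothetical numbers: $3,000 premium against a 1% chance of a $200,000
# loss, for someone with $250,000 to their name.
wealth, premium, loss, p_loss = 250_000, 3_000, 200_000, 0.01

# Dollar EV of buying the policy: pay the premium, avoid the expected loss.
dollar_ev_of_buying = -premium + p_loss * loss
print(dollar_ev_of_buying)   # -1000.0 -- you expect to lose money on the policy

# Utility comparison under ln(wealth): insuring wins anyway.
utility_insured = math.log(wealth - premium)
utility_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)
print(utility_insured > utility_uninsured)   # True -- buying is rational
```

The concave utility function penalizes the rare catastrophic outcome far more than the dollar EV does, which is exactly why the insurer's profit margin can be worth paying.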

We'll explore this in depth in Case Study 2: The Insurance Paradox.

2. When a Small Cost Buys Valuable Information

Sometimes you pay to enter a lottery, apply for a long-shot opportunity, or take a risk not primarily because you expect to win, but because the information you gain from trying is valuable. A startup founder who applies to Y Combinator knowing the acceptance rate is 1.5% isn't irrationally ignoring EV. They're buying the option value of feedback, legitimacy (even in rejection), and a small chance at a transformative outcome.

3. When Participation Itself Has Value

Some experiences have value beyond the financial outcome. Buying a lottery ticket isn't purely about EV — it buys you a week of daydreaming about what you'd do with $50 million, a shared water-cooler conversation, a small participation in a cultural ritual. If that entertainment value exceeds the ticket cost (say, $2 buys you $3 worth of enjoyment), the bet is positive EV when utility is properly measured.


When Should You Reject Positive EV Bets?

Equally important, and equally counterintuitive:

Sometimes it is rational to decline bets you expect to win.

1. When Variance Threatens Survival

The chess-playing computer that maximizes expected score will sometimes make brilliant sacrifices. The chess-playing computer that must stay solvent, or never fall below a certain score in tournament play, might rationally avoid those same sacrifices. When you cannot afford to lose, variance reduction matters more than EV maximization.

Marcus thought about this in terms of his startup. He was considering putting all of his development time — plus all of his chess winnings — into a single marketing blitz. Expected value, he estimated, was probably positive. But if it failed, the company could be dead. "That's a positive EV bet I probably shouldn't take," he told Dr. Yuki.

"Why?" she pressed.

"Because I can't survive the downside. And I have other ways to grow the company that are slower but safer."

"Say more."

"It's like Kelly. Even if EV is positive, betting too much of your bankroll — or too much of your operational runway — violates the Kelly Criterion. You should bet less, spread the risk."

Dr. Yuki set down her cards. "Marcus. That's good. That's actually really good."

2. When the Probability Estimate Is Unreliable

The Kelly formula requires you to know p, the probability of winning. In many real-world situations, your p estimate is deeply uncertain. You're not calculating a known 60% edge — you're guessing at a fuzzy 60% that might really be 40%.

Under deep uncertainty, responsible EV thinking demands heavy conservatism. Nassim Taleb calls these situations "fat-tailed" — distributions where catastrophic outcomes are more likely than your model suggests because the model itself is wrong. We'll revisit this in Case Study 1, exploring Pascal's Mugging.

3. When You Have Better Opportunities Available

If a +$1 EV bet requires your $100 of capital, and you also have a +$10 EV bet available that also requires your $100 of capital, taking the first bet means you can't take the second. Opportunity cost is real. A positive EV bet with a low return might rationally be skipped in favor of a higher-return alternative.


Myth vs. Reality

Myth: "I should always take the positive EV option."

Reality: Expected value must be weighed against variance (how spread out the outcomes are), utility (how outcomes map to your actual wellbeing), and opportunity cost (what you give up). A naive "always take positive EV" rule ignores these crucial adjustments. Rational decision-making requires EV plus context.


Dr. Yuki's Poker Framework: EV as Daily Discipline

By the end of the poker night, Marcus had lost $14, Priya had lost $7, Felix had won $20, and Amara had broken even. Dr. Yuki had won $1.

"Wait," Marcus said. "You've been teaching us this whole time and you only won a dollar?"

"Intentionally," she said. "I was giving you better odds than I should have on several hands because watching you figure out the math was worth more to me than the money. That was a negative EV decision by the textbook. But my utility function values your education."

She cleared the chips and spread out her palms on the table.

"Here is how I think about EV as a daily practice. In poker, every decision — fold, call, raise — has an expected value. You can calculate it exactly if you know the cards, and approximately even when you don't. Professionals don't play hands. They play expected value across thousands of hands. A single session is noise. A career is signal."

She paused. "Life works the same way. Most decisions you make aren't one-time events. They're part of a pattern — a way of approaching job applications, social risks, creative projects. If your pattern consistently chooses positive EV moves, you will be luckier than someone who consistently chooses negative EV moves, even if they get lucky more often in small samples."

This is the central insight of Dr. Yuki's poker framework:

  1. Identify all possible outcomes and their probabilities. Don't just think about the best case or the worst case — enumerate all of them.
  2. Assign honest values to each outcome. Not wishful values. Not worst-case catastrophizing. Calibrated, realistic assessments.
  3. Calculate the expected value. Including your utility adjustments (does this outcome threaten survival? Is variance acceptable?).
  4. Make the decision with the highest adjusted EV. Then commit to it, regardless of the short-term outcome.
  5. Update your probability estimates with new information. This is Bayesian thinking — which we'll address more fully in Chapter 27.

"The hardest part," Dr. Yuki said, "isn't the math. It's step 5. Most people don't update. They update their confidence after a win, and lower it after a loss. But that's responding to outcomes, not to evidence. Professionals update based on information, not results."


Research Spotlight: The Outcome Bias Problem

Psychologists Jonathan Baron and John Hershey (1988) demonstrated what they called "outcome bias" — the tendency to evaluate the quality of a decision based on its outcome rather than on the quality of the reasoning process that produced it.

In their experiments, subjects rated the same decision as "better" when it led to a good outcome and "worse" when it led to a bad outcome — even when the information available at the time of the decision was identical in both cases.

This is deeply irrational. A doctor who prescribes a treatment based on sound evidence is making a good decision even if the patient doesn't recover. A gambler who bets on a 1-in-100 longshot and wins has made a terrible decision that happened to work out. Conflating the quality of decisions with the quality of outcomes is one of the most pervasive cognitive errors in human judgment — and it directly undermines EV thinking.

The implication for your luck science: when evaluating your own decisions (job applications, social risks, creative projects), ask not "did this work out?" but "was this the highest EV choice I could have made with the information I had?"


The Poker Framework Applied: Nadia's Content Strategy

Nadia had been following the poker night conversation secondhand — Marcus had described it at length in their next study group meeting. And she found herself doing EV calculations about her content in the days that followed.

She'd been debating whether to spend two days producing a long-form YouTube video on "the science of why some TikToks go viral." It was the kind of deep-dive content she genuinely wanted to make, but it would take roughly 16 hours — scripting, filming, editing — and she had no guarantee it would perform.

A casual TikTok, by contrast, took maybe 30 minutes and sometimes got 10,000 views. Pure throughput logic said: make 32 TikToks instead of one YouTube video. But was that really the better EV?

She worked through it using Dr. Yuki's framework:

Option A: 32 casual TikToks over two days

  • Expected views per TikTok: ~2,000 (realistic, not wishful)
  • Total expected views: ~64,000
  • Expected new followers: ~150
  • Expected brand partnerships or collabs spawned: maybe 0.3 (low probability)
  • Skill development: Low — she's practiced this format already

Option B: 1 long-form YouTube video over two days

  • Expected views in first month: ~800 (YouTube is slower to grow)
  • Expected new subscribers: ~40
  • Long-tail views over 12 months: ~5,000 (evergreen content builds)
  • Expected collabs or brand interest from being seen as an expert: 0.15 probability × high value
  • Skill development: High — new format, new skills, portfolio signal

The short-term EV (views and followers) clearly favored TikTok. But the long-term utility-adjusted EV was much less clear — and possibly favored YouTube, particularly because Nadia's goal wasn't raw follower count but building something sustainable.

"The question isn't which option gives me more this week," she wrote in her notebook. "The question is which option, repeated consistently over 12 months, gives me the best long-term outcome. That's the Kelly question: how do I bet across my creative bankroll so I don't go broke on any single strategy?"

She made the YouTube video. It got 900 views in the first month. But three months later it had 8,000 views and had been referenced in two other creators' videos. The EV calculation, it turned out, had underestimated the long tail — a common mistake. But the decision process was sound.


Applying EV to Non-Monetary Decisions

The power of expected value extends far beyond poker chips or stock portfolios. It can (and should) be applied to any decision with uncertain outcomes.

Job Decisions

Priya was staring at two job opportunities. One was a stable but uninspiring corporate communications role at $52,000/year. The other was a junior content strategist role at an early-stage startup, paying $42,000/year but with equity and the possibility of genuine creative work.

A naive comparison focuses only on salary. An EV-based comparison requires:

  1. What are all possible outcomes of each choice?
     • Corporate: Probably stable job for 2–3 years, modest promotions, safe skill development. Low variance.
     • Startup: Could fail (50%+?), could grow modestly, could grow dramatically. High variance, higher ceiling.

  2. What is the utility value of each outcome?
     • For Priya, whose student loans are not crushing but present, and who has parents who'd let her move home if needed, the downside of the startup failing is less catastrophic than it might be for someone without that safety net.

  3. What are the non-financial values?
     • Learning velocity at the startup is probably higher.
     • Creative autonomy has genuine utility value to Priya.
     • The skill signal from startup experience is often stronger in the content/tech industry.

  4. What does the expected-value comparison say?
     • Even if the startup fails 60% of the time, the positive outcomes (both financial and experiential) might make it positive EV when properly adjusted for utility.

This is the kind of analysis that moves job decisions from pure gut feeling or salary comparison to structured reasoning.

Relationship Decisions

Nadia was considering whether to reach out to a senior creator in her niche whom she'd met briefly at a conference. What if she was rejected or ignored?

EV analysis:
  - Probability of being ignored: ~70%. Value if ignored: $0 (nothing lost, slight discomfort).
  - Probability of a reply: ~25%. Value of a reply: Potential collaboration, advice, connection — high value.
  - Probability of a great, ongoing mentorship relationship: ~5%. Value of that: Very high.

EV = (0.70 × $0) + (0.25 × moderate value) + (0.05 × high value)

Even with conservative value estimates, this is likely a strongly positive EV action. The downside (rejection, embarrassment) has low absolute cost. The upside is potentially large.
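Plugging hypothetical dollar stand-ins into that formula makes the asymmetry concrete. The numbers below are illustrative placeholders, not Nadia's actual estimates:

```python
# Hypothetical stand-ins for "moderate" and "high" value; the point
# is the structure of the calculation, not the specific numbers.
outcomes = [
    (0.70, 0),      # ignored: nothing lost beyond slight discomfort
    (0.25, 500),    # reply: advice, possible collaboration
    (0.05, 5000),   # ongoing mentorship relationship
]
ev = sum(p * v for p, v in outcomes)
print(f"EV of reaching out: ${ev:,.0f}")  # prints "EV of reaching out: $375"
```

Even if both value estimates are halved, the EV stays well above zero, which is exactly the point about underweighting the cumulative value of reaching out.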

Most people systematically overweight the probability and pain of rejection and underweight the cumulative EV of reaching out broadly. This is loss aversion at work — a topic we'll tackle head-on in Chapter 15.

Creative Projects

Marcus was debating whether to build a new feature for his app — an AI-powered coaching module — that would take six weeks of development time. He had no guarantee it would increase user retention. Was it worth it?

EV analysis for creative/entrepreneurial decisions requires:
  1. Probability the feature actually gets built well: 80% (given his current skills)
  2. Probability it improves retention: 50%
  3. Value if it improves retention significantly: Very high (potentially transformative for the startup)
  4. Value if retention is unchanged: Modest (skill development, portfolio value)
  5. Cost: Six weeks of time (high but not irreplaceable)

The EV might be positive even with the uncertainty — but Marcus also had to consider the Kelly-style question: Is this the best use of those six weeks relative to other options? Opportunity cost is a cost.
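A sketch of the comparison Marcus faces. The dollar values are hypothetical, and the alternative project (a marketing push) is invented here purely to illustrate opportunity cost:

```python
def ev(outcomes):
    """Probability-weighted average of (probability, value) outcomes."""
    return sum(p * v for p, v in outcomes)

# Chance the feature gets built well (80%) AND improves retention (50%)
p_success = 0.80 * 0.50  # = 0.40

# Hypothetical payoff figures for illustration only
ai_coaching = [(p_success, 10_000), (1 - p_success, 1_000)]
marketing_push = [(0.70, 4_000), (0.30, 500)]  # invented alternative use of the time

for name, outcomes in [("ai_coaching", ai_coaching),
                       ("marketing_push", marketing_push)]:
    print(f"{name}: EV = ${ev(outcomes):,.0f}")
```

Note that a higher EV alone doesn't settle it; the Kelly-style question is whether six weeks of a solo founder's time is a survivable bet if the feature flops.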


Lucky Break or Earned Win?

Dr. Yuki won $1 at poker while intentionally teaching rather than maximizing her own game. Were her low winnings bad luck? Poor play? A chosen sacrifice?

This question gets at something important about EV thinking: you choose your objective function. For Dr. Yuki, the evening's objective wasn't maximizing chip count — it was maximizing pedagogical value and the long-run benefit of having well-trained students. By her utility function, she had an excellent night.

Think about the last time you "lost" at something. What was your actual objective function? Was the loss relative to the stated stakes, or the real ones? Sometimes we win at the game we're actually playing while appearing to lose at the one on the surface.


The Regret Minimization Framework: An Alternative to EV

Not every decision reduces neatly to expected value calculations. Sometimes you can't enumerate outcomes. Sometimes the probabilities are truly unknowable. And sometimes the most important consideration is not the average outcome, but how you'll feel about the decision ten or twenty years from now.

Jeff Bezos famously described his decision to leave a lucrative job at a hedge fund in 1994 to start Amazon using what he called the regret minimization framework:

"I wanted to minimize the number of regrets I had. I knew that when I was 80, I was not going to regret having tried this. I was not going to regret trying to participate in this thing called the Internet that I thought was going to be a really big deal. I knew that if I failed, I wouldn't regret that. But I knew the one thing I might regret is not ever having tried."

The regret minimization framework asks: When I look back on this decision at the end of my life, which choice will I regret more — taking the risk and failing, or not taking the risk at all?

This is particularly useful when:
  - The decision is irreversible. A job passed up may not come back. A conversation never had forecloses a relationship.
  - The downside of trying is bounded. Bezos could always get another Wall Street job. The downside was real but recoverable.
  - The upside of the unrealized option is potentially massive. The regret of not trying the "crazy idea" can haunt you far longer than the regret of trying it and failing.

The regret minimization framework is not a rejection of EV thinking — it's a practical heuristic that approximates EV reasoning under uncertainty, weighted heavily toward avoiding the pain of inaction.

Priya found this framework more accessible than the pure math. "With the job choice," she told Dr. Yuki after the game, "I keep thinking about being 45 and wondering if I played it too safe when I was 22. That's worth something to me."

"It should be," Dr. Yuki said. "That's real utility. Just don't use it as an excuse to take every wild bet. Regret minimization still requires realistic probability assessment. The question isn't just 'will I regret not doing this?' It's also 'what are the realistic costs if it goes wrong?' Both sides of the ledger matter."


Myth vs. Reality

Myth: "Successful people just follow their gut. All this probability math is overthinking it."

Reality: What experts call "gut instinct" in high-stakes domains is almost always internalized EV reasoning — their nervous systems have been trained by thousands of past decisions to approximate expected value calculations quickly. Professional poker players don't consciously calculate odds on every hand; after years of practice, the calculation becomes automatic. For beginners, making the math explicit is exactly how you build the pattern recognition that later feels like intuition. "Gut feeling" is practiced EV, not a replacement for it.


EV Thinking and Luck: Closing the Loop

Before we get to the Python code, it's worth pausing to make the connection to luck explicit — because it's easy to treat this chapter as pure mathematics and miss the central thesis of this book.

Expected value thinking is a luck multiplier.

Here's why. Luck, as we defined it in Chapter 1, is an outcome shaped by factors outside an agent's control at the moment of action. What EV thinking does is ensure that, over repeated exposure to uncertain situations, you are positioned to benefit when luck runs in your direction and insulated from catastrophe when it runs against you.

Consider two people, both equally talented, both facing the same uncertain landscape of opportunities:

Person A makes decisions reactively — taking bets that feel good, avoiding bets that feel scary, updating based on whether recent outcomes were good or bad rather than whether their reasoning was sound.

Person B applies EV thinking consistently — enumerating outcomes, assigning calibrated probabilities, adjusting for utility, applying Kelly-style bet sizing to avoid ruin, updating based on information rather than emotion.

Over a single week, Person A might outperform Person B — by chance. Over a year, Person B will systematically appear "luckier." Not because luck has favored them more often. But because they've been making more positive-EV choices all along, and those choices accumulate.

Dr. Yuki said it clearly near the end of the poker night, when Marcus was tallying his losses and wondering if the math had somehow failed him:

"You're thinking about tonight. I'm thinking about the next five years of decisions you're going to make. Tonight you lost $14. But you just learned something worth more than $14. That's a positive EV night — you just haven't finished calculating it yet."


Python Preview: Building an EV Calculator

One of the most useful tools you can build — and one we'll develop fully in the course's programming sections — is a simple expected value calculator. Here's the conceptual structure and a preview of what the code does:

# expected_value_calculator.py
# A tool for calculating and comparing expected values of decisions
# Python 3.10+

def calculate_ev(outcomes: list[tuple[float, float]]) -> float:
    """
    Calculate the expected value of a decision.

    Parameters:
    -----------
    outcomes : list of (probability, value) tuples
        Each tuple represents one possible outcome.
        Probabilities should sum to 1.0.

    Returns:
    --------
    float : The expected value of the decision

    Example:
    --------
    # A bet: 60% chance to win $100, 40% chance to lose $50
    outcomes = [(0.60, 100), (0.40, -50)]
    ev = calculate_ev(outcomes)  # Returns: 40.0
    """
    total_prob = sum(p for p, v in outcomes)
    if abs(total_prob - 1.0) > 0.001:
        raise ValueError(f"Probabilities must sum to 1.0, got {total_prob:.3f}")

    return sum(prob * value for prob, value in outcomes)


def kelly_criterion(win_prob: float, win_payout: float, loss_payout: float = 1.0) -> float:
    """
    Calculate optimal bet fraction using the Kelly Criterion.

    Parameters:
    -----------
    win_prob : float
        Probability of winning (0 to 1)
    win_payout : float
        Net profit per dollar wagered on a win
    loss_payout : float
        Net loss per dollar wagered on a loss (default: 1.0)

    Returns:
    --------
    float : Optimal fraction of bankroll to bet (0 to 1)
    """
    loss_prob = 1.0 - win_prob
    # General Kelly fraction: f* = (p*b - q*l) / (b*l), where b = win_payout
    # and l = loss_payout; with l = 1 this reduces to the familiar (p*b - q) / b.
    kelly_fraction = (win_prob * win_payout - loss_prob * loss_payout) / (win_payout * loss_payout)
    return max(0.0, kelly_fraction)  # Never bet negative amounts


def compare_decisions(decisions: dict[str, list[tuple[float, float]]]) -> dict[str, float]:
    """
    Compare multiple decisions by expected value.

    Parameters:
    -----------
    decisions : dict mapping decision names to lists of (prob, value) tuples

    Returns:
    --------
    dict : Decision names mapped to their EVs, sorted by EV descending
    """
    evs = {name: calculate_ev(outcomes) for name, outcomes in decisions.items()}
    return dict(sorted(evs.items(), key=lambda x: x[1], reverse=True))


# Example: Priya's job decision analysis
priya_job_decisions = {
    "corporate_comms": [
        (0.85, 52000),   # Stable employment, $52K salary
        (0.10, 45000),   # Downsizing, reduced role
        (0.05, 0),       # Layoff in first year
    ],
    "startup_role": [
        (0.40, 42000),   # Startup survives, base salary
        (0.25, 85000),   # Startup grows, salary increase + equity
        (0.10, 200000),  # Startup succeeds significantly, equity value
        (0.25, 20000),   # Startup fails, 6-month severance, job search
    ]
}

results = compare_decisions(priya_job_decisions)
# Results would show which option has higher expected monetary value
# (Though remember: we also need to adjust for utility and non-monetary factors)

for decision, ev in results.items():
    print(f"{decision}: EV = ${ev:,.0f}")

The full implementation in Part 8 will include:
  - Monte Carlo simulation to visualize variance across thousands of trials
  - Utility function adjustments (log utility for risk-averse decisions)
  - Regret matrix calculations
  - Interactive decision comparison with multiple outcome scenarios

Even without running this code, you can see the structure: enumerate outcomes, assign probabilities and values, calculate the weighted sum. That's EV thinking made executable.
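As a taste of the Monte Carlo piece promised for Part 8, here is a minimal sketch. The function name and structure are my own, not the course's:

```python
import random

def simulate(outcomes, n_trials=10_000, seed=42):
    """Sample n_trials results from a list of (probability, value) outcomes."""
    rng = random.Random(seed)
    values = [v for _, v in outcomes]
    weights = [p for p, _ in outcomes]
    return [rng.choices(values, weights=weights)[0] for _ in range(n_trials)]

# The docstring bet from calculate_ev: 60% chance to win $100, 40% to lose $50
results = simulate([(0.60, 100), (0.40, -50)])
mean = sum(results) / len(results)
print(f"Simulated mean over {len(results):,} trials: {mean:.1f} (analytic EV: 40.0)")
```

Individual trials swing far above and below 40; that spread is the variance the chapter keeps warning about, and it only averages out over many repetitions.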


Research Spotlight: EV Thinking in Professional Domains

A 2014 study by Ido Erev and colleagues, building on decades of decision research, found that people tend to overweight small probabilities of large outcomes when those probabilities are described explicitly, but underweight them when they are learned through repeated experience. In other words, explicit calculation and intuitive, experience-based judgment produce systematically different mistakes.

The implication: conscious, explicit EV thinking makes you better at some aspects of decision-making (incorporating small-probability, large-magnitude events correctly in your planning) while intuitive experts make better moment-to-moment choices in their domains of expertise. The optimal approach combines both: build the EV habit consciously until it becomes intuition, then verify your intuitions against explicit calculation in high-stakes situations.


Putting It All Together: The EV Mindset

Marcus drove home from Dr. Yuki's apartment at 11:30 PM, fourteen dollars lighter and several thousand dollars wiser.

He replayed the evening in his head. He'd lost money, but the losses hadn't felt random. Every mistake had had a structure: he'd overestimated his probability when he was excited, underestimated his opponent's range when he was confident, and bet too much when his edge was small. None of these were luck. They were reasoning errors — systematic, correctable ones.

The next morning, he opened his startup's planning document and started thinking differently about every decision in it. Which marketing channels had the highest EV per dollar? Which features would he regret not building if the company succeeded? Where was he betting too much of his resources on a single outcome?

This is what Dr. Yuki meant by EV as a daily discipline. It's not a formula you apply once. It's a lens you put on every significant choice — one that forces you to be honest about probabilities (not wishful), honest about outcomes (not catastrophizing), and honest about what you actually value.

The four habits of EV thinking:
  1. Enumerate outcomes explicitly. Don't just think "this might work out." List the realistic scenarios.
  2. Assign calibrated probabilities. Use base rates, not hopes. We'll learn more about calibration in Chapter 6 (revisited) and Chapter 27.
  3. Measure value in utility, not just dollars. Ask: does this threaten my survival? Does variance matter here?
  4. Evaluate the process, not the outcome. After a decision resolves, assess whether your reasoning was sound — not just whether it worked.

As Dr. Yuki had said at the table, dealing the last hand: "You can make the perfect decision and still lose. You can make the worst decision and still win. The goal is never to guarantee a good outcome. The goal is to guarantee a good process — one that wins more often than it loses over time."


The Luck Ledger

One thing gained this chapter: The ability to translate risky decisions into a structured calculation — probability × value — that cuts through wishful thinking and fear alike.

One thing still uncertain: How to estimate probabilities accurately when you have little data. In real life, you rarely know if your startup idea has a 40% or 10% chance of success. The EV formula is only as good as its inputs. We'll address probability calibration more fully in Chapters 6, 11, and 27.


Chapter Summary

Expected value is the probability-weighted average of all possible outcomes. It is the foundational concept of rational decision-making under uncertainty. But raw EV is insufficient on its own:

  • Variance matters. High-variance bets are more dangerous in small samples, especially when you cannot afford to play many rounds.
  • Utility matters. The same dollar amount has different impact depending on your starting position. This is why insurance, despite negative EV, is often rational.
  • The Kelly Criterion tells you how much to bet on a positive-EV opportunity, balancing growth against ruin risk.
  • Negative EV bets are sometimes rational (insurance, optionality, entertainment). Positive EV bets are sometimes irrational (when variance threatens survival).
  • The regret minimization framework is a useful EV approximation for decisions where probabilities are unknowable and inaction has its own cost.
  • EV thinking applies to all decisions — career choices, relationships, creative projects — not just money.
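The insurance point above can be checked with a quick expected-utility calculation. This is a sketch with log utility and hypothetical numbers, showing why a negative-EV premium can still be the rational buy:

```python
import math

wealth = 50_000
loss = 40_000        # catastrophic loss with a 1% chance
premium = 600        # more than the $400 expected loss: negative EV

# Without insurance: expected *utility* under log (diminishing marginal) utility
eu_uninsured = 0.99 * math.log(wealth) + 0.01 * math.log(wealth - loss)
# With insurance: pay the premium, the loss is covered
eu_insured = math.log(wealth - premium)

print(eu_insured > eu_uninsured)  # True: insurance wins despite costing EV
```

In raw dollars the policy loses $200 on average, but under the concave utility curve the 1% catastrophe looms large enough to justify the premium.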

Key Terms

  • Expected Value (EV): The probability-weighted average outcome across all possible scenarios; the long-run average result if a decision were repeated many times.
  • Variance: The spread of outcomes around the expected value; a measure of uncertainty and risk.
  • Utility Function: A curve mapping financial or other outcomes to their true subjective value; typically concave, reflecting diminishing marginal utility.
  • Diminishing Marginal Utility: The principle that each additional unit of a resource provides less additional satisfaction than the previous unit.
  • Kelly Criterion: A formula for determining the optimal fraction of a bankroll to bet on a positive-EV opportunity, maximizing long-run growth while limiting ruin risk.
  • Regret Minimization Framework: A decision heuristic that asks which choice you will regret less at the end of your life — useful when probabilities are genuinely unknowable.
  • Outcome Bias: The cognitive error of evaluating decision quality based on outcome rather than on the reasoning process and information available at the time of the decision.
  • St. Petersburg Paradox: A thought experiment demonstrating that expected value alone fails as a decision framework when utilities are not properly specified.

Next chapter: We leave the poker table and enter the realm of mathematical surprise — where our intuitions are not just slightly wrong, but spectacularly, fascinatingly wrong. In Chapter 11, we discover why probability is the science of the counterintuitive.