Chapter 10 Quiz: Expected Value

Chapter: 10 — Expected Value: How Rational People Think About Risk
Format: 15 multiple-choice questions with hidden answers
Instructions: Work through each question before revealing the answer. Explain your reasoning in writing before checking.


Question 1

You have a bet: 70% chance to win $40, 30% chance to lose $50. What is the expected value?

A) $13.00
B) $6.50
C) −$7.00
D) $28.00

Show Answer

**Correct Answer: A) $13.00**

**Explanation:** EV = (0.70 × $40) + (0.30 × −$50) = $28 + (−$15) = **$13.00**. This is a positive EV bet; a rational player with an appropriate bankroll should take it. Note that $28 (Choice D) is just the winning outcome weighted alone — a common error that ignores the losing case. $6.50 and −$7.00 reflect arithmetic errors.
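For readers who like to check the arithmetic in code, the probability-weighted sum takes only a few lines (the `expected_value` helper is illustrative, not from the chapter):

```python
def expected_value(outcomes):
    """Sum of probability-weighted payoffs for a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Question 1's bet: 70% chance to win $40, 30% chance to lose $50
ev = expected_value([(0.70, 40), (0.30, -50)])
print(round(ev, 2))  # ≈ 13.0
```

The same helper works for every EV question in this quiz: list the outcomes, attach probabilities that sum to 1, and sum the products.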

Question 2

Which of the following best describes the concept of "variance" in the context of expected value?

A) The average outcome across all possible scenarios
B) The difference between the best and worst possible outcomes
C) The spread of outcomes around the expected value; a measure of unpredictability
D) The probability of losing money on a given bet

Show Answer

**Correct Answer: C) The spread of outcomes around the expected value; a measure of unpredictability**

**Explanation:** Variance measures how dispersed the outcomes of a bet are. Two bets can have identical expected values but very different variance — one might almost always return close to the average, while the other alternates between extreme wins and extreme losses. Understanding variance is crucial because high-variance bets are more dangerous when you have limited trials or cannot survive a catastrophic loss, even if their EV is positive.
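The "identical EV, very different variance" point is easy to demonstrate numerically. The two bets below are made-up examples, not from the chapter:

```python
def ev(outcomes):
    """Probability-weighted average payoff."""
    return sum(p * x for p, x in outcomes)

def variance(outcomes):
    """Probability-weighted squared deviation from the EV."""
    m = ev(outcomes)
    return sum(p * (x - m) ** 2 for p, x in outcomes)

# Two hypothetical bets, both with EV = $10:
steady = [(0.5, 20), (0.5, 0)]         # win $20 or win nothing
swingy = [(0.5, 1010), (0.5, -990)]    # huge win or huge loss

print(ev(steady), ev(swingy))            # 10.0 10.0 — identical EV
print(variance(steady), variance(swingy))  # 100.0 1000000.0 — wildly different spread
```

With limited trials, the `swingy` bet can ruin you long before its average has a chance to show up.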

Question 3

The Kelly Criterion formula is f* = (bp − q) / b. If you have a 55% chance of winning, the bet pays 1:1 odds, and you have a $1,000 bankroll, how much should you bet?

A) $100
B) $550
C) $10
D) $1,000

Show Answer

**Correct Answer: A) $100**

**Explanation:** Here b = 1 (1:1 odds), p = 0.55, q = 0.45. So f* = (1 × 0.55 − 0.45) / 1 = (0.55 − 0.45) / 1 = **0.10**, and 10% of $1,000 = **$100**. The Kelly Criterion never recommends betting your entire bankroll (Choice D) on a modest edge. Choice B ($550) would be 55% of the bankroll — betting your win probability, which ignores the loss side. Choice C ($10, or 1%) is too conservative and sacrifices growth rate.
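The formula translates directly into a small function; a sketch, with the same inputs as the question:

```python
def kelly_fraction(p, b):
    """Kelly Criterion: f* = (b*p - q) / b, where q = 1 - p and b is the net odds."""
    q = 1 - p
    return (b * p - q) / b

bankroll = 1_000
f = kelly_fraction(p=0.55, b=1)
print(round(f, 2), round(f * bankroll, 2))  # ≈ 0.1 → a $100 bet
```

Note that `kelly_fraction` goes negative when the edge disappears (bp < q) — a negative result means "don't bet at all," not "bet a negative amount."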

Question 4

Priya has two job offers. Offer A has a guaranteed salary of $50,000/year. Offer B has a 50% chance of $80,000/year and a 50% chance of $20,000/year. Which statement is most accurate?

A) Priya should always choose Offer B because it has higher expected value
B) Priya should always choose Offer A because certainty is better
C) Offer B has higher EV, but Priya's choice should also depend on her financial situation and risk tolerance
D) Both offers have the same expected value

Show Answer

**Correct Answer: D) Both offers have the same expected value**

**Explanation:** EV of Offer A = $50,000 (certain). EV of Offer B = (0.5 × $80,000) + (0.5 × $20,000) = $40,000 + $10,000 = **$50,000**. The offers are tied on expected value — Choice C is a trap for readers who assume the riskier offer must have the higher EV. What actually separates the offers is *variance*: Offer B can land as low as $20,000 or as high as $80,000, and how much that spread matters depends on Priya's debt, savings, and risk tolerance. Even at equal EV, the choice is not automatic. This question is designed to catch the assumption that the riskier option automatically has the higher EV — always compute before comparing.
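A quick computation confirms the two offers are tied on dollar EV; the `ev` helper is illustrative:

```python
def ev(outcomes):
    """Probability-weighted average payoff for (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

offer_a = [(1.0, 50_000)]                      # guaranteed salary
offer_b = [(0.5, 80_000), (0.5, 20_000)]       # risky salary

print(ev(offer_a), ev(offer_b))  # 50000.0 50000.0 — equal EV, very different spread
```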

Question 5

A lottery ticket costs $2 and offers a 1-in-10,000,000 chance of winning $5,000,000. What is the expected value of buying one ticket?

A) +$0.50
B) −$1.50
C) +$5,000,000
D) −$2.00

Show Answer

**Correct Answer: B) −$1.50**

**Explanation:** The $2 ticket price is paid regardless of the outcome, so EV = (1/10,000,000 × $5,000,000) − $2 = $0.50 − $2.00 = **−$1.50**. This is a negative EV bet. However, as discussed in the chapter, some people rationally buy lottery tickets because the entertainment value (the experience of daydreaming) exceeds the $1.50 loss in utility. This is a case where properly accounting for utility can make a negative-EV bet rational — just not for the financial return itself.
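The lottery EV can be checked either way — prize minus sunk ticket cost, or as two explicit net outcomes — and both give the same number:

```python
p_win = 1 / 10_000_000
prize = 5_000_000
cost = 2

# View 1: expected winnings minus the ticket price (paid no matter what)
ev_simple = p_win * prize - cost

# View 2: explicit net payoff in each branch
ev_branches = p_win * (prize - cost) + (1 - p_win) * (-cost)

print(round(ev_simple, 2), round(ev_branches, 2))  # -1.5 -1.5
```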

Question 6

"Outcome bias" is the tendency to:

A) Assign too much weight to the most extreme possible outcome
B) Evaluate a decision's quality based on its result rather than on the reasoning process
C) Overestimate the probability of positive outcomes
D) Underweight the variance of a bet relative to its expected value

Show Answer

**Correct Answer: B) Evaluate a decision's quality based on its result rather than on the reasoning process**

**Explanation:** Outcome bias, identified by Baron and Hershey (1988), is the systematic error of judging whether a decision was "good" based on whether it worked out — rather than whether the reasoning was sound given the information available at the time. A skilled poker player who loses with the best hand made a good decision. An unskilled player who wins with a terrible hand made a bad decision. The outcome doesn't retroactively determine decision quality.

Question 7

Dr. Yuki says that poker professionals "don't play hands — they play expected value across thousands of hands." What does this mean for how they evaluate a single losing session?

A) A losing session proves their strategy needs improvement
B) A losing session says little about the quality of their strategy if the individual decisions were positive-EV
C) A losing session means variance was low
D) A losing session indicates they should bet more conservatively next time

Show Answer

**Correct Answer: B) A losing session says little about the quality of their strategy if the individual decisions were positive-EV**

**Explanation:** This is the core insight of EV thinking: a single session is a small sample. With sound, positive-EV decisions, losing a given session is still expected some of the time — that's variance. The signal of strategic quality only emerges over hundreds or thousands of decisions. Adjusting strategy based on a single losing session is responding to noise, not signal. This is analogous to a content creator abandoning a good content strategy after one underperforming video, or a startup founder pivoting after one bad month.

Question 8

Which of the following correctly explains why purchasing insurance can be rational even though insurance typically has negative expected value for the buyer?

A) The insurance company makes mathematical errors in pricing premiums
B) Buyers use insurance primarily for entertainment value
C) The utility loss from a catastrophic uninsured event is disproportionately large relative to the premium cost, making the utility EV positive
D) Insurance always has positive expected value if you read the fine print carefully

Show Answer

**Correct Answer: C) The utility loss from a catastrophic uninsured event is disproportionately large relative to the premium cost, making the utility EV positive**

**Explanation:** Insurance companies profit because the dollar EV of insurance is negative for the buyer (premiums exceed expected payouts on average). However, utility functions are concave — the additional suffering caused by a $200,000 uninsured medical bill far exceeds 20 times the suffering caused by a $10,000 annual premium. When you convert dollar outcomes to utility, the insurance may actually be positive utility-EV even though it's negative dollar-EV. This is the insight Bernoulli introduced: rational decision-making requires utility, not just dollars.
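The dollar-EV/utility-EV split can be made concrete with a toy model. All the numbers below are assumed for illustration (they are not from the chapter): $300,000 of wealth, a 4% annual chance of a $200,000 loss, a $10,000 premium, and log utility standing in for a concave utility function:

```python
import math

# Hypothetical inputs — chosen so the premium exceeds the expected payout
wealth, loss, p_loss, premium = 300_000, 200_000, 0.04, 10_000

# Dollar EV favors going uninsured (expected payout 0.04 * 200k = $8k < $10k premium)
ev_uninsured = wealth - p_loss * loss    # 292000.0
ev_insured = wealth - premium            # 290000

# Concave (log) utility favors insuring: the catastrophic branch hurts disproportionately
eu_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)
eu_insured = math.log(wealth - premium)

print(ev_uninsured > ev_insured)  # True — insurance is negative dollar-EV
print(eu_insured > eu_uninsured)  # True — insurance is positive utility-EV
```

The same structure explains why the answer is C: converting dollars to utility flips the sign of the comparison.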

Question 9

The St. Petersburg Paradox reveals which fundamental problem with pure expected value reasoning?

A) That expected value calculations always produce infinite results
B) That probability estimates are always uncertain
C) That raw expected value in dollars can be infinite while rational willingness to pay is finite, showing the need for utility adjustments
D) That people are irrational and never make good decisions

Show Answer

**Correct Answer: C) That raw expected value in dollars can be infinite while rational willingness to pay is finite, showing the need for utility adjustments**

**Explanation:** The St. Petersburg game has an infinite expected dollar value (the sum of infinitely many $1 terms). Yet no rational person would pay $1,000 to play it, because the marginal utility of enormous wealth gains is negligible — you can only spend so much money before additional millions are functionally irrelevant to your wellbeing. The paradox demonstrates that *dollar EV* is the wrong metric; *utility EV* is the correct one, and the utility-EV of the game is finite. This motivates all subsequent utility theory in economics and decision science.
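The divergence is easy to see numerically. Using the standard payoff scheme (win $2^k with probability 2^-k), each term of the dollar EV contributes exactly $1, while the log-utility EV converges — this is a sketch using log utility as the concave stand-in, as Bernoulli did:

```python
import math

def dollar_ev(n_terms):
    """Truncated dollar EV: each term (1/2^k) * 2^k contributes exactly $1."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, n_terms + 1))

def log_utility_ev(n_terms):
    """Truncated expected log payoff: converges to 2*ln(2) as n grows."""
    return sum((0.5 ** k) * math.log(2 ** k) for k in range(1, n_terms + 1))

print(dollar_ev(30))   # 30.0 — one more dollar per term, without bound
ce = math.exp(log_utility_ev(60))
print(round(ce, 4))    # ≈ 4.0 — certainty equivalent under log utility
```

The finite certainty equivalent (about $4 under log utility, ignoring starting wealth) is why nobody pays $1,000 to play.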

Question 10

Jeff Bezos's regret minimization framework is most useful in which type of decision?

A) Decisions where all probabilities and outcomes can be precisely quantified
B) Decisions where the downside is larger than the upside
C) Decisions involving irreversible choices where the cost of not trying might haunt you for decades
D) Decisions that must be made within a few minutes

Show Answer

**Correct Answer: C) Decisions involving irreversible choices where the cost of not trying might haunt you for decades**

**Explanation:** The regret minimization framework is most valuable when: (1) the decision is hard to reverse, (2) inaction has its own opportunity cost that is difficult to quantify, and (3) you cannot reliably estimate probabilities. In these situations, asking "how will 80-year-old me feel about this choice?" captures the long-term utility of both action and inaction in a way that formal EV math cannot easily encode. It is less useful in decisions where probabilities and values are known, or where the stakes are too small to generate genuine long-term regret.

Question 11

Marcus has an 80% chance of completing a new app feature successfully. The feature would add an estimated $500/month in recurring revenue if successful, and $0 if he fails. The development takes one month of his full effort. Ignoring opportunity cost, what is the expected monthly revenue gain from attempting this feature?

A) $500
B) $400
C) $100
D) $0

Show Answer

**Correct Answer: B) $400**

**Explanation:** EV = (0.80 × $500) + (0.20 × $0) = $400 + $0 = **$400**. This represents the expected value of the feature attempt in terms of monthly revenue gain. Note that this ignores opportunity cost (what else could he build in that month?) — which the chapter notes is a real cost in EV analysis. The $400 EV is only the starting point; the full analysis requires comparing it to alternative uses of the development month.
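The opportunity-cost point can be sketched by comparing the feature against an alternative use of the month. The $350/month "safer feature" below is a hypothetical alternative invented for illustration:

```python
def ev(outcomes):
    """Probability-weighted average payoff for (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

feature = [(0.80, 500), (0.20, 0)]     # Marcus's risky feature
alternative = [(1.0, 350)]             # hypothetical: a sure-thing $350/month feature

print(ev(feature))                     # 400.0 per month
print(ev(feature) > ev(alternative))   # True — the risky feature wins even after
                                       # accounting for this opportunity cost
```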

Question 12

Which scenario best illustrates the principle that "some positive EV bets should be declined"?

A) A person declines a bet with 60% win probability because they think 60% isn't high enough
B) A startup founder declines a high-variance marketing bet because failure would eliminate 18 months of runway, ending the company
C) A student doesn't apply to a selective college because they fear rejection
D) An investor avoids the stock market entirely because it can go down

Show Answer

**Correct Answer: B) A startup founder declines a high-variance marketing bet because failure would eliminate 18 months of runway, ending the company**

**Explanation:** The correct case of rationally declining a positive EV bet involves survival risk — when the downside of a bet threatens your ability to continue playing at all, variance reduction can rationally outweigh EV. The founder in Choice B has identified that this particular bet's downside isn't just "losing some money" — it's catastrophic and irreversible (the company dies). Choices A and C reflect loss aversion and fear, not rational EV adjustment. Choice D is blanket risk avoidance with no specific survival-threat reasoning behind it.

Question 13

In the poker context, what is the key difference between a "bad decision" and "bad luck"?

A) A bad decision is a flaw in the reasoning process given the available information; bad luck is a bad outcome despite a sound process
B) There is no meaningful difference — all losses are either bad decisions or bad luck, never both
C) Bad luck refers to small losses; bad decisions refer to large losses
D) Bad decisions can be corrected in the same game; bad luck cannot

Show Answer

**Correct Answer: A) A bad decision is a flaw in the reasoning process given the available information; bad luck is a bad outcome despite a sound process**

**Explanation:** This is the central distinction of EV thinking. A bad decision is one where the reasoning process — given the information available — produced a negative-EV choice. The outcome is separate. A player who correctly calls with pot odds and loses got unlucky; a player who incorrectly calls against the odds and wins got lucky. EV thinking insists we evaluate the *decision*, not the *outcome*. Over many trials, bad decisions will produce bad average outcomes even if individual results vary, and good decisions will produce good average outcomes even with short-term variance.

Question 14

Nadia is considering reaching out to a prominent creator. She estimates: 65% chance of no response, 30% chance of a brief but helpful reply, 5% chance of a meaningful ongoing connection. The cost is 30 minutes of her time and some emotional risk of rejection. Which statement best captures an EV-based analysis?

A) The 65% chance of no response means the action is likely a waste of time
B) The 5% chance of a meaningful connection is too small to justify the effort
C) Even a small probability of a high-value outcome can make an action strongly positive EV, especially when the downside is bounded and small
D) Nadia should only reach out if she can increase her probability of response above 50%

Show Answer

**Correct Answer: C) Even a small probability of a high-value outcome can make an action strongly positive EV, especially when the downside is bounded and small**

**Explanation:** This scenario illustrates asymmetric upside with bounded downside — a classic positive EV structure. The downside (30 minutes + mild rejection discomfort) is small and recoverable. The upside (a meaningful mentorship connection) is potentially large. Even though most outcomes are "no response," the probability-weighted value of the small chance of a great connection often makes the action strongly positive EV. This is why people who consistently reach out, apply, and initiate get "luckier" over time — they're systematically harvesting positive-EV opportunities that others let pass due to fear of the 65% no-response rate.
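The asymmetry becomes vivid once you attach rough values to each outcome. The dollar-equivalent utilities below are entirely hypothetical — the point is the structure (small bounded downside, large rare upside), not the specific numbers:

```python
def ev(outcomes):
    """Probability-weighted average payoff for (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

# Hypothetical dollar-equivalent values, invented for illustration:
# no response = -10 (30 minutes + a mild sting), brief helpful reply = +50,
# meaningful ongoing connection = +5000.
outreach = [(0.65, -10), (0.30, 50), (0.05, 5000)]

print(round(ev(outreach), 1))  # ≈ 258.5 — strongly positive despite a 65% chance of "nothing"
```

Even if you slash the value of the rare outcome by 10x, the EV stays positive — that robustness is what makes bounded-downside outreach a systematically good bet.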

Question 15

Which of the following is the most accurate summary of EV thinking as a daily practice?

A) Calculate exact probabilities for every decision you make
B) Always choose the option with the highest expected monetary return
C) Evaluate decisions by their systematic long-run quality, using honest probability estimates, utility adjustments, and process review rather than outcome review
D) Avoid all risky decisions to preserve your position

Show Answer

**Correct Answer: C) Evaluate decisions by their systematic long-run quality, using honest probability estimates, utility adjustments, and process review rather than outcome review**

**Explanation:** EV thinking is a habit of mind, not a formula to apply mechanically. It requires: (1) honest enumeration of outcomes and their probabilities — not wishful thinking; (2) utility adjustment — recognizing that dollars and subjective value are not the same; (3) long-run thinking — understanding that any single outcome is a small sample; and (4) process review — evaluating whether your reasoning was sound, not whether it worked out. Choices A and B oversimplify; Choice D contradicts the purpose of positive EV decision-making entirely.

Chapter 10 Quiz complete. Review any questions where you were uncertain before proceeding to Chapter 11.