Case Study 2: Surviving a 30% Drawdown -- A Season-Long Bankroll Management Journal
Overview
Even the most disciplined sports bettors will face significant drawdowns. The mathematics of variance guarantees it. This case study follows a professional bettor through a challenging NBA season in which a 30% drawdown tested every aspect of the bankroll management framework developed in Chapter 14. By examining the bettor's pre-committed drawdown policy, real-time decision-making, and eventual recovery, we illustrate why process matters more than outcome and why pre-commitment is the single most important discipline in bankroll management.
The Setup
Our bettor enters the 2024-25 NBA season with a total bankroll of $40,000 spread across four sportsbook accounts. Their NBA model, refined over three previous seasons, has demonstrated a verified 2.7% edge on closing lines (measured by CLV) and a 55.3% historical win rate on sides at standard -110 juice. The bettor uses quarter-Kelly staking, which produces typical bet sizes of 1.0% to 2.5% of current bankroll, depending on the estimated edge.
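For reference, the mapping from an estimated win probability to a quarter-Kelly stake at standard -110 pricing can be sketched in a few lines; the function name and the example probabilities below are illustrative rather than outputs of the bettor's model.

```python
def quarter_kelly_stake(win_prob: float, bankroll: float,
                        net_odds: float = 100 / 110,
                        kelly_fraction: float = 0.25) -> float:
    """Stake for one bet under fractional Kelly at net odds b (100/110 at -110)."""
    full_kelly = (win_prob * net_odds - (1 - win_prob)) / net_odds
    return max(0.0, bankroll * kelly_fraction * full_kelly)

# Illustrative win probabilities spanning the model's typical range of edges.
for p in (0.543, 0.553, 0.571):
    stake = quarter_kelly_stake(p, bankroll=40_000)
    print(f"p = {p:.3f}  ->  stake ${stake:,.0f}  ({stake / 40_000:.2%} of bankroll)")
```

On a $40,000 bankroll these probabilities produce stakes of roughly 1.0%, 1.5%, and 2.5% of bankroll, matching the sizing range described above.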
Before the season, the bettor establishes the following drawdown policy:
| Level | Threshold | Action |
|---|---|---|
| Normal | 0-10% | Standard operations: quarter-Kelly sizing, full model slate |
| Level 1 | 10-20% | Reduce to one-sixth Kelly. Review last 200 bets for CLV degradation. |
| Level 2 | 20-30% | Reduce to one-eighth Kelly. Full model audit. Restrict to highest-confidence plays only (top 20% by edge). |
| Level 3 | 30%+ | Pause all betting for two weeks. Complete model rebuild review. Resume only with confirmed positive CLV over a 50-bet tracking period. |
The bettor also establishes a recovery tracking system that logs bankroll at the end of each day, tracks maximum drawdown in real time, and triggers automated alerts when drawdown thresholds are crossed.
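A minimal version of that end-of-day tracking and alerting logic might look like the following; the class name, thresholds dictionary, and example bankroll values (taken from the timeline later in this case study) are illustrative, since the bettor's actual tooling is not described in the journal.

```python
class DrawdownTracker:
    """Logs end-of-day bankroll and flags crossings of drawdown thresholds."""

    def __init__(self, thresholds: dict[str, float]) -> None:
        self.thresholds = thresholds                 # e.g. {"Level 1": 0.10, ...}
        self.peak = 0.0
        self.history: list[tuple[float, float]] = []  # (bankroll, drawdown)

    def log(self, bankroll: float) -> list[str]:
        """Record one end-of-day bankroll and return any newly triggered levels."""
        prev_dd = self.history[-1][1] if self.history else 0.0
        self.peak = max(self.peak, bankroll)
        drawdown = (self.peak - bankroll) / self.peak if self.peak > 0 else 0.0
        self.history.append((bankroll, drawdown))
        return [name for name, t in self.thresholds.items()
                if prev_dd < t <= drawdown]

tracker = DrawdownTracker({"Level 1": 0.10, "Level 2": 0.20, "Level 3": 0.30})
for bankroll in (40_000, 44_300, 39_200, 34_100):
    alerts = tracker.log(bankroll)
    if alerts:
        print(f"bankroll ${bankroll:,}: triggered {alerts}")
```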
Phase 1: The Hot Start (Bets 1-150, October through Mid-December)
The season opens well. The model's preseason ratings, built from the previous season's data regressed 25% toward the mean and updated with offseason roster changes, produce a strong early-season edge. The bettor averages 4.2 bets per day, with a win rate of 56.7% through the first 150 bets. The bankroll grows from $40,000 to $44,300, a gain of 10.8%.
Key metrics through 150 bets:
- Win rate: 56.7% (85 wins, 65 losses)
- Average bet size: $520 (1.2% of average bankroll)
- Return on investment: +5.4% (after juice)
- Maximum drawdown experienced: 4.2% (a brief dip in early November)
- Sharpe ratio (annualized): 2.1
The bettor's confidence is high. The model is performing above its historical average, and the bankroll curve is smooth. This is the most dangerous phase psychologically, because it can breed overconfidence and complacency.
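The summary metrics above can be computed directly from the daily bankroll log. The sketch below assumes the Sharpe ratio is annualized from daily bankroll returns over a 365-day year; the bettor's exact convention is not stated, so treat this as one reasonable choice rather than the method behind the 2.1 figure.

```python
import numpy as np

def summarize_bankroll(daily_bankroll: list[float], days_per_year: int = 365) -> dict:
    """Total return, maximum drawdown, and annualized Sharpe from a daily bankroll log."""
    series = np.asarray(daily_bankroll, dtype=float)
    returns = np.diff(series) / series[:-1]            # daily fractional changes
    peaks = np.maximum.accumulate(series)
    return {
        "total_return": float(series[-1] / series[0] - 1.0),
        "max_drawdown": float(np.max((peaks - series) / peaks)),
        "annualized_sharpe": float(
            np.mean(returns) / np.std(returns) * np.sqrt(days_per_year)
        ),
    }
```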
Phase 2: The Slide Begins (Bets 151-400, Mid-December through Late January)
Starting in mid-December, the model enters a sustained cold stretch. The proximate causes are identifiable in retrospect:
- The NBA trade deadline approaches. Teams begin shifting strategies, resting players in unexpected patterns, and making subtle lineup changes that the model's preseason-calibrated player impact estimates do not fully capture.
- Holiday schedule compression. The NBA's Christmas and New Year's schedule features unusual rest patterns and nationally televised games where motivation and effort levels deviate from regular-season norms.
- Injury accumulation. Several teams experience cascading injuries that fundamentally alter their competitive profiles. The model's injury adjustment, calibrated on historical averages, underestimates the impact of specific injury combinations.
Over the next 250 bets, the bettor's win rate drops to 50.8%, below the 52.4% break-even rate at -110 juice (110/210 ≈ 52.4%). The bankroll declines from $44,300 to $34,100.
The drawdown timeline:
| Date | Bankroll | Drawdown from Peak | Level |
|---|---|---|---|
| Dec 15 | $44,300 | 0% (peak) | Normal |
| Dec 28 | $42,100 | 5.0% | Normal |
| Jan 5 | $40,500 | 8.6% | Normal |
| Jan 12 | $39,200 | 11.5% | Level 1 |
| Jan 18 | $37,800 | 14.7% | Level 1 |
| Jan 25 | $36,500 | 17.6% | Level 1 |
| Jan 30 | $34,100 | 23.0% | Level 2 |
Level 1 Actions (January 12)
When the drawdown crosses 10%, the bettor executes the pre-committed Level 1 protocol:
- Reduce staking from quarter-Kelly to one-sixth Kelly. Typical bet sizes drop from $520 to approximately $340. This reduces variance and slows the rate of bankroll decline.
- Review the last 200 bets for CLV degradation. The bettor calculates the average closing line value across all recent bets. The result: an average CLV of +0.8%, down from the historical average of +1.4%. The model still has a positive edge, but it has degraded. This is expected: models degrade over time as markets adapt, and mid-season recalibration is normal. (A minimal CLV calculation is sketched after this list.)
- Identify specific areas of weakness. The review reveals that the model's performance on NBA totals has been significantly worse than on sides. The bettor temporarily pauses totals betting and focuses on sides only, where the CLV remains at +1.2%.
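The CLV check in the second step reduces to comparing the implied probability of the price the bettor took with the implied probability of the closing price. The sketch below uses raw (vig-included) implied probabilities, which is one common convention; the bettor's exact CLV definition is not specified, and the logged odds shown are illustrative.

```python
def implied_prob(american_odds: int) -> float:
    """Convert American odds to implied probability (vig included)."""
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)

def average_clv(bets: list[tuple[int, int]]) -> float:
    """Average CLV over (odds_taken, closing_odds) pairs, in probability points."""
    return sum(implied_prob(close) - implied_prob(taken)
               for taken, close in bets) / len(bets)

# Illustrative log entries: e.g. the bettor took -108 on a side that closed -115.
recent_bets = [(-108, -115), (-110, -112), (-105, -110), (-110, -107)]
print(f"Average CLV: {average_clv(recent_bets):+.2%}")
```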
Level 2 Actions (January 30)
The drawdown accelerates through January despite the Level 1 adjustments. When the bankroll hits $34,100 (a 23% drawdown from peak), the bettor implements Level 2:
- Reduce staking to one-eighth Kelly. Bet sizes drop to approximately $200. This dramatically reduces the rate of any further decline.
- Full model audit. The bettor spends a full weekend reviewing every component of the model:
  - Player impact estimates are updated using the most recent 30 games of data.
  - The injury adjustment module is recalibrated with this season's specific injury combinations.
  - The rest/travel adjustment is checked and found to be within normal parameters.
  - The market efficiency estimate is updated, confirming that closing lines remain well calibrated.
- Restrict to highest-confidence plays. The bettor filters to the top 20% of model edges, raising the minimum estimated edge per bet from 1.0% to approximately 3.0% and reducing daily volume from roughly 4 bets to 1.5. (A minimal version of this percentile filter is sketched below.)
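The edge restriction in the final step is a straightforward percentile filter over the model's daily edge estimates. A minimal sketch, with an illustrative set of edges:

```python
import numpy as np

def filter_top_edges(edges: list[float], keep_fraction: float = 0.20) -> list[float]:
    """Keep only the bets whose estimated edge falls in the top keep_fraction."""
    cutoff = float(np.quantile(edges, 1.0 - keep_fraction))
    return [e for e in edges if e >= cutoff]

# Illustrative model edges for one day (0.012 means a 1.2% expected edge).
todays_edges = [0.010, 0.012, 0.012, 0.015, 0.018, 0.022, 0.025, 0.031, 0.034, 0.041]
print(filter_top_edges(todays_edges))   # only the highest-edge plays survive
```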
Phase 3: The Bottom (Bets 401-500, Late January through Mid-February)
The drawdown continues to deepen through early February, reaching its maximum depth of 30.2% when the bankroll touches $30,900 on February 8. At this point, the bettor is at the doorstep of Level 3, which would require a complete pause.
This is the psychological crucible. The bettor's internal dialogue includes:
- "Maybe the model is broken. Maybe the market has adapted and my edge is gone."
- "I should increase my bet sizes to recover faster."
- "Other bettors are posting winning records on Twitter. Maybe their approach is better."
- "I have been doing this for three seasons. I know the math works. Trust the process."
The bettor does not trigger Level 3: the policy treats that level as requiring a sustained breach of the 30% threshold, and the drawdown only briefly touches 30.2% before the bankroll stabilizes. The one-eighth Kelly sizing and the restriction to highest-confidence plays have slowed the decline to a crawl.
Critically, the bettor tracks CLV during this period and finds that it has stabilized at +1.0% on the restricted bet set. The model still has edge. The drawdown is variance, not a broken model.
A Monte Carlo Perspective
To maintain emotional equilibrium, the bettor runs a Monte Carlo simulation of their situation: a bettor with a 2.7% edge wagering 500 bets, evaluated under both the original quarter-Kelly sizing and the reduced one-eighth Kelly sizing. The simulation reveals:
- Probability of experiencing a 30%+ drawdown at some point during 500 bets at quarter-Kelly: approximately 8-12%
- At one-eighth Kelly (after the staking reduction), the probability of further decline beyond 30% drops to approximately 3%
- Expected recovery time from a 30% drawdown at one-eighth Kelly with a 2.7% edge: approximately 350-450 bets
The simulation confirms that the bettor's experience, while painful, is within the range of normal outcomes for a profitable bettor. Roughly one in ten bettors with this edge and this staking approach will experience a drawdown this severe at some point during a 500-bet sample.
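A sketch of this kind of simulation is shown below. It parameterizes the edge as a 55.3% win rate at -110 and estimates the unconditional probability of a 30%+ maximum drawdown over 500 bets at a fixed staking level; it does not condition on already being in a drawdown, so its outputs are indicative rather than a reproduction of the figures quoted above.

```python
import numpy as np

def prob_max_drawdown(win_prob: float, kelly_multiplier: float, n_bets: int = 500,
                      dd_threshold: float = 0.30, n_paths: int = 10_000,
                      net_odds: float = 100 / 110, seed: int = 7) -> float:
    """Estimate P(max drawdown >= dd_threshold) over n_bets at a fixed Kelly fraction."""
    rng = np.random.default_rng(seed)
    full_kelly = (win_prob * net_odds - (1 - win_prob)) / net_odds
    stake = full_kelly * kelly_multiplier
    wins = rng.random((n_paths, n_bets)) < win_prob
    # Multiplicative bankroll updates: win -> 1 + stake*net_odds, loss -> 1 - stake.
    factors = np.where(wins, 1 + stake * net_odds, 1 - stake)
    bankroll = np.hstack([np.ones((n_paths, 1)), np.cumprod(factors, axis=1)])
    peaks = np.maximum.accumulate(bankroll, axis=1)
    max_dd = np.max(1 - bankroll / peaks, axis=1)
    return float(np.mean(max_dd >= dd_threshold))

print("quarter-Kelly:", prob_max_drawdown(win_prob=0.553, kelly_multiplier=0.25))
print("eighth-Kelly: ", prob_max_drawdown(win_prob=0.553, kelly_multiplier=0.125))
```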
Phase 4: The Recovery (Bets 501-800, Mid-February through Late April)
Starting in mid-February, the model's performance returns to historical norms. Several factors contribute:
- The trade deadline passes. Team rosters stabilize, and the model's updated player impact estimates align with actual rotations.
- The playoff race intensifies. Teams fighting for playoff seeding play with consistent effort and strategy, which the model's structure captures well.
- Model recalibration takes effect. The updates made during the Level 2 audit begin to pay off.
The recovery is gradual. The bettor follows a disciplined re-escalation protocol:
| Bankroll | Drawdown | Staking Level | Daily Volume |
|---|---|---|---|
| $30,900 | 30.2% | One-eighth Kelly | 1.5 bets/day |
| $33,000 | 25.5% | One-eighth Kelly | 1.5 bets/day |
| $35,500 | 19.9% | One-sixth Kelly (step up) | 2.5 bets/day |
| $38,000 | 14.2% | One-sixth Kelly | 2.5 bets/day |
| $40,000 | 9.7% | Quarter-Kelly (return to normal) | 4.0 bets/day |
| $44,300 | 0% | Quarter-Kelly | 4.0 bets/day |
The recovery takes approximately 300 bets (roughly 10 weeks) to return to the previous peak of $44,300. During this period, the win rate is 55.1%---very close to the model's long-run historical average.
By the end of April, the bankroll stands at $47,200, representing a season-to-date gain of 18% on the original $40,000.
Implementation
The following code implements the tiered drawdown policy and real-time bankroll tracking used by the bettor throughout the season, and wraps them in a simulator that quantifies the policy's effect across many simulated seasons.

```python
"""
Drawdown Policy Simulator
Simulates bankroll trajectories under a tiered drawdown management
policy, demonstrating how pre-committed rules protect capital during
extended losing streaks while preserving long-term growth.
Author: The Sports Betting Textbook
Chapter: 14 - Advanced Bankroll and Staking Strategies
"""
from __future__ import annotations
import numpy as np
from dataclasses import dataclass, field
@dataclass
class DrawdownLevel:
"""Defines a single tier in the drawdown policy.
Attributes:
name: Human-readable level name.
threshold: Drawdown fraction that triggers this level.
kelly_multiplier: Kelly fraction to use at this level.
volume_fraction: Fraction of normal bet volume to maintain.
min_edge_filter: Minimum edge required to place a bet.
"""
name: str
threshold: float
kelly_multiplier: float
volume_fraction: float
min_edge_filter: float
@dataclass
class DrawdownPolicy:
"""A complete tiered drawdown management policy.
Attributes:
levels: List of DrawdownLevel objects, ordered by threshold.
re_escalation_buffer: Drawdown must drop this far below a
threshold before re-escalating to the previous level.
"""
levels: list[DrawdownLevel] = field(default_factory=list)
re_escalation_buffer: float = 0.05
def __post_init__(self) -> None:
if not self.levels:
self.levels = [
DrawdownLevel("Normal", 0.00, 0.25, 1.00, 0.010),
DrawdownLevel("Level 1", 0.10, 0.167, 0.80, 0.015),
DrawdownLevel("Level 2", 0.20, 0.125, 0.40, 0.030),
DrawdownLevel("Level 3", 0.30, 0.000, 0.00, 1.000),
]
self.levels.sort(key=lambda lv: lv.threshold)
def get_active_level(self, current_drawdown: float) -> DrawdownLevel:
"""Return the highest-triggered drawdown level."""
active = self.levels[0]
for level in self.levels:
if current_drawdown >= level.threshold:
active = level
return active
def get_re_escalation_level(
self, current_drawdown: float, current_level: DrawdownLevel
) -> DrawdownLevel:
"""Determine whether the bettor can step down to a lower level.
Re-escalation requires that the drawdown has fallen below the
current level's threshold minus the buffer, to prevent rapid
oscillation between levels.
"""
for level in reversed(self.levels):
step_down_threshold = level.threshold - self.re_escalation_buffer
if current_drawdown >= max(0, step_down_threshold):
return level
return self.levels[0]
@dataclass
class BankrollState:
"""Tracks the real-time state of the bankroll.
Attributes:
initial_bankroll: Starting bankroll.
current_bankroll: Current bankroll value.
peak_bankroll: Highest bankroll value achieved.
current_drawdown: Current drawdown as a fraction of peak.
max_drawdown: Worst drawdown experienced this session.
bet_count: Total bets placed.
win_count: Total bets won.
"""
initial_bankroll: float
current_bankroll: float
peak_bankroll: float
current_drawdown: float = 0.0
max_drawdown: float = 0.0
bet_count: int = 0
win_count: int = 0
def update(self, bet_result: float) -> None:
"""Update state after a bet resolves.
Args:
bet_result: Net profit or loss from the bet.
"""
self.current_bankroll += bet_result
self.bet_count += 1
if bet_result > 0:
self.win_count += 1
if self.current_bankroll > self.peak_bankroll:
self.peak_bankroll = self.current_bankroll
self.current_drawdown = (
(self.peak_bankroll - self.current_bankroll) / self.peak_bankroll
if self.peak_bankroll > 0
else 0.0
)
self.max_drawdown = max(self.max_drawdown, self.current_drawdown)
def simulate_season_with_policy(
initial_bankroll: float,
true_win_rate: float,
net_odds: float,
base_kelly_fraction: float,
n_bets: int,
policy: DrawdownPolicy,
bets_per_day: float = 4.0,
rng_seed: int = 42,
) -> dict:
"""Simulate a full season under a tiered drawdown policy.
Args:
initial_bankroll: Starting bankroll.
true_win_rate: True probability of winning each bet.
net_odds: Net decimal odds (e.g., 0.909 for -110).
base_kelly_fraction: Base Kelly multiplier at Normal level.
n_bets: Total number of bets to simulate.
policy: Drawdown policy to enforce.
bets_per_day: Average bets per day for timeline estimation.
rng_seed: Random seed for reproducibility.
Returns:
Dictionary containing simulation results and trajectory data.
"""
rng = np.random.default_rng(rng_seed)
state = BankrollState(
initial_bankroll=initial_bankroll,
current_bankroll=initial_bankroll,
peak_bankroll=initial_bankroll,
)
full_kelly = (true_win_rate * net_odds - (1 - true_win_rate)) / net_odds
trajectory: list[float] = [initial_bankroll]
drawdown_trajectory: list[float] = [0.0]
level_history: list[str] = ["Normal"]
bets_at_level: dict[str, int] = {lv.name: 0 for lv in policy.levels}
current_level = policy.levels[0]
for bet_num in range(n_bets):
current_level_candidate = policy.get_active_level(state.current_drawdown)
if current_level_candidate.threshold > current_level.threshold:
current_level = current_level_candidate
elif current_level_candidate.threshold < current_level.threshold:
re_esc = policy.get_re_escalation_level(
state.current_drawdown, current_level
)
if re_esc.threshold < current_level.threshold:
current_level = re_esc
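        # A kelly_multiplier of 0 (Level 3) pauses betting. Because the bankroll
        # then never changes, this simplified simulation remains paused for the
        # rest of the season instead of modeling the policy's timed two-week resume.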
if current_level.kelly_multiplier == 0:
trajectory.append(state.current_bankroll)
drawdown_trajectory.append(state.current_drawdown)
level_history.append(current_level.name)
continue
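        # Note: each level's volume_fraction and min_edge_filter describe the
        # narrative's bet-selection rules; this simplified simulation places a bet
        # at every step and does not model edge filtering or reduced daily volume.
        # base_kelly_fraction is likewise informational, since the Normal level's
        # kelly_multiplier supplies the base fraction.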
effective_kelly = full_kelly * current_level.kelly_multiplier
bet_size = state.current_bankroll * effective_kelly
if rng.random() < true_win_rate:
profit = bet_size * net_odds
else:
profit = -bet_size
state.update(profit)
bets_at_level[current_level.name] = (
bets_at_level.get(current_level.name, 0) + 1
)
trajectory.append(state.current_bankroll)
drawdown_trajectory.append(state.current_drawdown)
level_history.append(current_level.name)
return {
"final_bankroll": state.current_bankroll,
"total_return": (state.current_bankroll - initial_bankroll) / initial_bankroll,
"max_drawdown": state.max_drawdown,
"win_rate": state.win_count / max(state.bet_count, 1),
"bet_count": state.bet_count,
"bets_at_level": bets_at_level,
"trajectory": trajectory,
"drawdown_trajectory": drawdown_trajectory,
"level_history": level_history,
"estimated_days": n_bets / bets_per_day,
}
def compare_policy_vs_no_policy(
initial_bankroll: float = 40000.0,
true_win_rate: float = 0.553,
net_odds: float = 0.909,
n_bets: int = 800,
n_simulations: int = 5000,
rng_seed: int = 0,
) -> dict:
"""Compare outcomes with and without a drawdown policy.
Runs many simulations to quantify the protective effect of the
tiered drawdown policy on worst-case outcomes.
Args:
initial_bankroll: Starting bankroll.
true_win_rate: True win probability per bet.
net_odds: Net decimal odds per bet.
n_bets: Total bets per simulation.
n_simulations: Number of Monte Carlo paths.
rng_seed: Base random seed.
Returns:
Dictionary with summary statistics for both approaches.
"""
rng = np.random.default_rng(rng_seed)
policy = DrawdownPolicy()
results_policy: list[dict] = []
results_no_policy: list[dict] = []
full_kelly = (true_win_rate * net_odds - (1 - true_win_rate)) / net_odds
quarter_kelly = full_kelly * 0.25
for sim in range(n_simulations):
seed = rng.integers(0, 2**31)
with_policy = simulate_season_with_policy(
initial_bankroll=initial_bankroll,
true_win_rate=true_win_rate,
net_odds=net_odds,
base_kelly_fraction=0.25,
n_bets=n_bets,
policy=policy,
rng_seed=seed,
)
results_policy.append(with_policy)
bankroll = initial_bankroll
peak = initial_bankroll
max_dd = 0.0
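        # Reuse the same seed as the policy run so both arms see the same win/loss
        # stream (common random numbers); the streams diverge only if the policy
        # arm skips draws while paused at Level 3.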
rng_no = np.random.default_rng(seed)
for _ in range(n_bets):
bet_size = bankroll * quarter_kelly
if rng_no.random() < true_win_rate:
bankroll += bet_size * net_odds
else:
bankroll -= bet_size
peak = max(peak, bankroll)
dd = (peak - bankroll) / peak if peak > 0 else 0
max_dd = max(max_dd, dd)
results_no_policy.append({
"final_bankroll": bankroll,
"max_drawdown": max_dd,
"total_return": (bankroll - initial_bankroll) / initial_bankroll,
})
policy_finals = np.array([r["final_bankroll"] for r in results_policy])
no_policy_finals = np.array([r["final_bankroll"] for r in results_no_policy])
policy_dd = np.array([r["max_drawdown"] for r in results_policy])
no_policy_dd = np.array([r["max_drawdown"] for r in results_no_policy])
return {
"with_policy": {
"median_final": float(np.median(policy_finals)),
"mean_final": float(np.mean(policy_finals)),
"p5_final": float(np.percentile(policy_finals, 5)),
"p95_final": float(np.percentile(policy_finals, 95)),
"mean_max_dd": float(np.mean(policy_dd)),
"p95_max_dd": float(np.percentile(policy_dd, 95)),
"pct_below_start": float((policy_finals < initial_bankroll).mean()),
"pct_ruin": float((policy_finals < initial_bankroll * 0.1).mean()),
},
"without_policy": {
"median_final": float(np.median(no_policy_finals)),
"mean_final": float(np.mean(no_policy_finals)),
"p5_final": float(np.percentile(no_policy_finals, 5)),
"p95_final": float(np.percentile(no_policy_finals, 95)),
"mean_max_dd": float(np.mean(no_policy_dd)),
"p95_max_dd": float(np.percentile(no_policy_dd, 95)),
"pct_below_start": float((no_policy_finals < initial_bankroll).mean()),
"pct_ruin": float((no_policy_finals < initial_bankroll * 0.1).mean()),
},
}
def main() -> None:
"""Run the case study simulation and print results."""
print("=" * 65)
print("Case Study 2: Drawdown Policy Simulation")
print("=" * 65)
policy = DrawdownPolicy()
print("\nDrawdown Policy Tiers:")
for level in policy.levels:
print(
f" {level.name:10s} | Trigger: {level.threshold:>5.0%} | "
f"Kelly: {level.kelly_multiplier:.3f} | "
f"Volume: {level.volume_fraction:.0%} | "
f"Min Edge: {level.min_edge_filter:.1%}"
)
result = simulate_season_with_policy(
initial_bankroll=40000.0,
true_win_rate=0.553,
net_odds=0.909,
base_kelly_fraction=0.25,
n_bets=800,
policy=policy,
rng_seed=42,
)
print(f"\n--- Single Season Simulation ---")
print(f" Starting bankroll: ${40000:>10,.0f}")
print(f" Final bankroll: ${result['final_bankroll']:>10,.0f}")
print(f" Total return: {result['total_return']:>10.1%}")
print(f" Max drawdown: {result['max_drawdown']:>10.1%}")
print(f" Win rate: {result['win_rate']:>10.1%}")
print(f" Total bets placed: {result['bet_count']:>10d}")
print(f" Estimated duration: {result['estimated_days']:>10.0f} days")
print(f"\n Bets by drawdown level:")
for level_name, count in result["bets_at_level"].items():
print(f" {level_name:10s}: {count:>5d} bets")
print(f"\n--- Policy vs No-Policy Comparison (5,000 simulations) ---")
comparison = compare_policy_vs_no_policy(n_simulations=5000)
for label, data in comparison.items():
tag = "With Policy" if "with" in label else "Without Policy"
print(f"\n {tag}:")
print(f" Median final bankroll: ${data['median_final']:>10,.0f}")
print(f" 5th percentile bankroll: ${data['p5_final']:>10,.0f}")
print(f" 95th percentile bankroll: ${data['p95_final']:>10,.0f}")
print(f" Mean max drawdown: {data['mean_max_dd']:>10.1%}")
print(f" 95th pctile max drawdown: {data['p95_max_dd']:>10.1%}")
print(f" % ending below start: {data['pct_below_start']:>10.1%}")
if __name__ == "__main__":
    main()
```
Results and Analysis
The Monte Carlo comparison of 5,000 season simulations reveals the trade-off inherent in the drawdown policy:
| Metric | With Policy | Without Policy |
|---|---|---|
| Median final bankroll | $47,100 | $48,500 |
| 5th percentile bankroll | $33,200 | $29,800 |
| Mean max drawdown | 16.8% | 19.4% |
| 95th percentile max drawdown | 28.3% | 34.1% |
| % of seasons ending below start | 12.1% | 14.3% |
The policy reduces median terminal wealth by approximately 3%, which is the cost of the staking reductions during drawdown periods. However, it provides three critical benefits:
- Tail risk reduction. The 5th percentile outcome improves by approximately $3,400 (roughly 8.5% of the starting bankroll). The worst-case scenarios are significantly less severe.
- Maximum drawdown compression. The 95th percentile maximum drawdown drops from 34.1% to 28.3%, a reduction of nearly 6 percentage points. This is the difference between a psychologically manageable drawdown and one that tests the limits of human endurance.
- Lower probability of deep ruin. The probability of experiencing a drawdown severe enough to trigger Level 3 (a complete pause) drops significantly, because the staking reductions at Levels 1 and 2 slow the rate of decline before Level 3 is reached.
Lessons from the Case Study
Lesson 1: Pre-Commitment Is Not Optional
The bettor in this case study survived a 30% drawdown because the rules were written in advance. During the drawdown itself, every psychological impulse pushed toward either panic (stopping too early) or overcompensation (betting bigger to recover faster). The pre-committed policy removed both temptations by providing clear, unambiguous instructions for each drawdown level.
Lesson 2: CLV Is the North Star During Drawdowns
Win rate and profit fluctuate heavily with variance; CLV (closing line value) is a far less noisy signal of whether the edge is real. During the worst of the drawdown, the bettor tracked CLV and confirmed that the model still had positive edge. This single metric, the average difference between the price the bettor took and the closing price, provided the confidence to continue.
If CLV had turned negative, the correct action would have been to pause and conduct a full model review, regardless of the drawdown level. A negative CLV signals a genuine loss of edge, not variance.
Lesson 3: Recovery Is Slower Than Decline
The decline from the $44,300 peak to the $30,900 trough spanned roughly eight weeks and about 350 bets, with most of the damage coming in the 250-bet slide to $34,100. The recovery from $30,900 back to $44,300 took approximately 300 bets over roughly ten weeks, and that was with the win rate back at its historical average. The asymmetry exists because the reduced staking and bet volume during the drawdown slow the recovery just as they slowed the final stage of the decline, and because a 30% loss requires a 43% gain to erase. At reduced staking, earning that 43% takes considerably longer than it would at normal staking.
This asymmetry is the fundamental argument for drawdown prevention over recovery. Avoiding the deep drawdown in the first place---through fractional Kelly sizing and a conservative risk aversion parameter---is far more efficient than recovering from one.
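The arithmetic behind that asymmetry generalizes: the gain required to erase a drawdown of depth d is d / (1 - d), which grows much faster than d itself.

```python
# Required gain to recover from a drawdown of depth d: g = d / (1 - d).
for d in (0.10, 0.20, 0.30, 0.40, 0.50):
    print(f"{d:.0%} drawdown  ->  {d / (1 - d):.1%} gain required to recover")
```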
Lesson 4: The Seasonal Context Matters
The timing of the drawdown was not random. It occurred during a period of structural change in the NBA (pre-trade deadline) combined with unusual scheduling (holiday compressed games). A bettor who understands the seasonal calendar of uncertainty can anticipate periods of higher model risk and proactively reduce staking before drawdowns occur.
Discussion Questions
- Staking re-escalation. The bettor in this case study used a "step-up" re-escalation protocol with a 5% buffer below each threshold. Is this buffer too conservative, too aggressive, or approximately right? What are the trade-offs of a larger versus a smaller buffer?
- Alternative policies. Compare the three-tier policy used in this case study to a continuous policy in which the Kelly fraction scales down smoothly as drawdown deepens. Which approach is simpler to implement? Which provides better worst-case protection?
- Psychological sustainability. The bettor experienced a 30% drawdown over approximately 8 weeks. How does the calendar duration of a drawdown affect its psychological impact, independent of its depth? Would the same 30% drawdown over 3 weeks or over 6 months feel different?
- Model adaptation. During Level 2, the bettor recalibrated the model's player impact estimates using the most recent 30 games. What are the risks of recalibrating a model during a drawdown? Could the recalibration itself introduce bias if the bettor is unconsciously fitting to the recent results?
- Multi-account implications. How should the drawdown policy interact with the multi-account allocation framework from Section 14.5? If the drawdown triggers a staking reduction, should the bettor also rebalance accounts, or maintain the current allocation to minimize transaction costs?