Chapter 14: Advanced Bankroll and Staking Strategies

"The Kelly criterion is to gambling what Einstein's mass-energy equivalence is to physics -- a deceptively simple formula that reveals a deep truth about the nature of growth under uncertainty." -- Edward O. Thorp

In Chapter 4, we introduced the basics of bankroll management and the Kelly criterion. Now we go deeper. This chapter provides the full mathematical derivation of Kelly from first principles, extends the framework to portfolios of correlated bets using Markowitz portfolio theory, analyzes optimal parlay construction, develops rigorous drawdown management tools, and addresses the practical challenges of managing a bankroll across multiple sportsbook accounts and sports.


14.1 Full Kelly Criterion Derivation

The Growth Rate Optimization Problem

The Kelly criterion answers a specific question: What fraction of your bankroll should you wager on a bet with positive expected value to maximize your long-term growth rate?

To derive it, we start from first principles.

Setup

Consider a gambler with initial wealth $W_0$ who faces a sequence of identical bets. On each bet:

  • With probability $p$, the gambler wins and receives $b$ dollars for every dollar wagered (a net profit of $b$ per dollar)
  • With probability $q = 1 - p$, the gambler loses the amount wagered

The gambler chooses to wager a fraction $f$ of their current bankroll on each bet, where $0 \leq f \leq 1$.

After one bet, the bankroll is:

$$ W_1 = \begin{cases} W_0(1 + bf) & \text{with probability } p \\ W_0(1 - f) & \text{with probability } q \end{cases} $$

After $n$ bets, if $k$ are wins and $n - k$ are losses:

$$ W_n = W_0 (1 + bf)^k (1 - f)^{n-k} $$

The Growth Rate

The key insight is to maximize the expected logarithm of wealth, not the expected wealth itself. The reason is that the logarithm of wealth after $n$ bets is:

$$ \log W_n = \log W_0 + k \log(1 + bf) + (n - k) \log(1 - f) $$

By the law of large numbers, as $n \to \infty$:

$$ \frac{k}{n} \to p \quad \text{(almost surely)} $$

Therefore, the long-run growth rate per bet is:

$$ G(f) = \lim_{n \to \infty} \frac{1}{n} \log \frac{W_n}{W_0} = p \log(1 + bf) + q \log(1 - f) $$

This is the quantity we want to maximize with respect to $f$.

Derivation via Calculus

Taking the derivative of $G(f)$ with respect to $f$ and setting it to zero:

$$ \frac{dG}{df} = \frac{pb}{1 + bf} - \frac{q}{1 - f} = 0 $$

Solving for $f$:

$$ pb(1 - f) = q(1 + bf) $$

$$ pb - pbf = q + qbf $$

$$ pb - q = pbf + qbf = bf(p + q) = bf $$

Since $p + q = 1$:

$$ f^* = \frac{pb - q}{b} = p - \frac{q}{b} = p - \frac{1 - p}{b} $$

This is the Kelly criterion:

$$ \boxed{f^* = \frac{pb - q}{b} = \frac{p(b + 1) - 1}{b}} $$

Verification: Second-Order Condition

To confirm this is a maximum, we check the second derivative:

$$ \frac{d^2G}{df^2} = -\frac{pb^2}{(1 + bf)^2} - \frac{q}{(1 - f)^2} < 0 $$

This is always negative, confirming that $f^*$ is indeed a maximum.
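
As a quick numerical sanity check (not needed for the derivation), the short sketch below evaluates $G(f)$ on a fine grid for an assumed example -- $p = 0.55$ at -110, so $b = 10/11$ -- and confirms that the grid maximizer agrees with the closed form:

import numpy as np

def growth_rate(f, p, b):
    """Long-run growth rate per bet: G(f) = p*log(1 + b*f) + q*log(1 - f)."""
    return p * np.log(1 + b * f) + (1 - p) * np.log(1 - f)

p, b = 0.55, 100 / 110            # -110 line: win ~0.909 per dollar staked
f_grid = np.linspace(0.0, 0.5, 500_001)
f_numeric = f_grid[np.argmax(growth_rate(f_grid, p, b))]   # grid-search maximizer
f_closed = (p * b - (1 - p)) / b                           # Kelly closed form

print(f"Numerical argmax: {f_numeric:.4f}")   # ~0.0550
print(f"Closed form f*:   {f_closed:.4f}")    # 0.0550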

The Growth Rate at Kelly

Substituting $f^*$ back into $G(f)$:

$$ G(f^*) = p \log(1 + bf^*) + q \log(1 - f^*) $$

Substituting $1 + bf^* = p(b + 1)$ and $1 - f^* = \frac{q(b + 1)}{b}$:

$$ G(f^*) = p \log\big(p(b + 1)\big) + q \log\left(\frac{q(b + 1)}{b}\right) = \log(b + 1) + p \log p + q \log q - q \log b $$

The $p \log p + q \log q$ term is the negative of the Shannon entropy $H(p)$ of the bet's outcome, which is the root of the Kelly criterion's connection to information theory.
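
For a concrete special case, take even-money odds ($b = 1$) with $p = 0.55$. Then $f^* = 2p - 1 = 0.10$ and

$$ G(f^*) = \log 2 + p \log p + q \log q = \log 2 - H(p) \approx 0.6931 - 0.6881 \approx 0.0050 $$

nats per bet -- about half a percent of compounded growth per favorable even-money bet, exactly the $\log 2 - H(p)$ rate from Kelly's original information-theoretic formulation.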

Kelly for American Odds

Let us convert the Kelly formula for use with American odds:

import numpy as np

def kelly_american(probability: float, american_odds: float) -> float:
    """
    Calculate the Kelly fraction for American odds.

    Parameters
    ----------
    probability : float
        Your estimated probability of winning (0 to 1)
    american_odds : float
        American odds (e.g., +150, -110)

    Returns
    -------
    float
        Optimal fraction of bankroll to wager
    """
    # Convert American odds to decimal net odds (b in Kelly formula)
    if american_odds > 0:
        b = american_odds / 100
    else:
        b = 100 / abs(american_odds)

    q = 1 - probability

    # Kelly formula: f* = (p*b - q) / b
    kelly = (probability * b - q) / b

    return max(0, kelly)  # Never bet negative (no edge)

def kelly_decimal(probability: float, decimal_odds: float) -> float:
    """
    Calculate the Kelly fraction for decimal odds.

    Parameters
    ----------
    probability : float
        Your estimated probability of winning
    decimal_odds : float
        Decimal odds (e.g., 2.50, 1.91)

    Returns
    -------
    float
        Optimal fraction of bankroll to wager
    """
    b = decimal_odds - 1  # Net odds
    q = 1 - probability
    kelly = (probability * b - q) / b
    return max(0, kelly)

# Examples at various edges
print("Kelly Criterion Examples:")
print(f"{'Prob':>6} | {'Odds':>8} | {'Edge':>6} | {'Kelly %':>8} | {'Half-Kelly':>11}")
print("-" * 55)

examples = [
    (0.53, -110, "Slight"),
    (0.55, -110, "Moderate"),
    (0.57, -110, "Strong"),
    (0.45, +150, "Dog value"),
    (0.35, +250, "Long shot"),
    (0.60, -130, "Big fav"),
]

for prob, odds, label in examples:
    k = kelly_american(prob, odds)
    if odds > 0:
        implied = 100 / (odds + 100)
    else:
        implied = abs(odds) / (abs(odds) + 100)
    edge = (prob / implied - 1) * 100
    print(f"{prob:.2f}   | {odds:>+7d} | {edge:>5.1f}% | {k*100:>7.2f}% | {k*50:>10.2f}%")

Expected output:

Kelly Criterion Examples:
  Prob |     Odds |   Edge | Kelly % | Half-Kelly
-------------------------------------------------------
0.53   |    -110 |   1.2% |    1.30% |       0.65%
0.55   |    -110 |   5.0% |    5.50% |       2.75%
0.57   |    -110 |   8.8% |    9.70% |       4.85%
0.45   |    +150 |  12.5% |    8.33% |       4.17%
0.35   |    +250 |  22.5% |    9.00% |       4.50%
0.60   |    -130 |   6.2% |    8.00% |       4.00%

Logarithmic Utility and the Justification for Kelly

Why maximize the expected logarithm of wealth rather than expected wealth itself? There are several justifications:

1. Geometric Growth. Betting is a multiplicative process. Your bankroll after $n$ bets is a product of individual returns. The logarithm converts this product to a sum, making the law of large numbers applicable:

$$ \log W_n = \log W_0 + \sum_{i=1}^n \log R_i $$

where $R_i$ is the return on bet $i$. Maximizing $E[\log R_i]$ maximizes the long-run growth rate.

2. Ruin Avoidance. Kelly never recommends betting your entire bankroll, because $\log(0) = -\infty$. Any strategy that risks ruin has $E[\log W] = -\infty$ and is therefore rejected by the Kelly framework.

3. Asymptotic Optimality. Kelly has been proven to maximize the rate of growth of capital almost surely. Any strategy that consistently wagers more than Kelly will, with probability 1, eventually be surpassed by a Kelly bettor. Any strategy that consistently wagers less than Kelly will grow more slowly.

4. Diminishing Marginal Utility. The logarithmic utility function reflects the intuitive idea that winning $1,000 when you have $1,000 is more meaningful than winning $1,000 when you have $1,000,000.

Fractional Kelly: Theory and Practice

In practice, full Kelly is too aggressive for most bettors. The reasons include:

  1. Probability estimation error: If your estimated $p$ is too high, full Kelly will over-bet
  2. Volatility aversion: Full Kelly produces drawdowns of 50%+ frequently
  3. Non-ergodic considerations: You only have one bankroll -- you cannot "average" across parallel universes
  4. Psychological sustainability: Most humans cannot tolerate full Kelly swings

The standard approach is fractional Kelly, typically betting $\alpha \times f^*$ where $\alpha \in [0.2, 0.5]$.

def fractional_kelly_analysis(
    probability: float,
    american_odds: float,
    fractions: list = None,
    n_bets: int = 1000,
    n_simulations: int = 10000
) -> dict:
    """
    Compare full and fractional Kelly strategies via simulation.

    Parameters
    ----------
    probability : float
        True win probability
    american_odds : float
        American odds
    fractions : list of float
        Kelly fractions to compare
    n_bets : int
        Number of bets to simulate
    n_simulations : int
        Number of simulation paths

    Returns
    -------
    dict with comparison metrics for each fraction
    """
    if fractions is None:
        fractions = [0.1, 0.2, 0.25, 0.33, 0.5, 0.75, 1.0]

    full_kelly = kelly_american(probability, american_odds)

    if american_odds > 0:
        b = american_odds / 100
    else:
        b = 100 / abs(american_odds)

    results = {}

    for frac in fractions:
        f = full_kelly * frac
        terminal_wealths = []

        for _ in range(n_simulations):
            wealth = 1.0
            max_wealth = 1.0
            max_drawdown = 0.0

            for _ in range(n_bets):
                if np.random.random() < probability:
                    wealth *= (1 + b * f)
                else:
                    wealth *= (1 - f)

                max_wealth = max(max_wealth, wealth)
                drawdown = (max_wealth - wealth) / max_wealth
                max_drawdown = max(max_drawdown, drawdown)

            terminal_wealths.append(wealth)

        terminal = np.array(terminal_wealths)

        # Calculate theoretical growth rate
        g = probability * np.log(1 + b * f) + (1 - probability) * np.log(1 - f)

        results[frac] = {
            'kelly_fraction': frac,
            'bet_size_pct': f * 100,
            'theoretical_growth_rate': g,
            'median_terminal': np.median(terminal),
            'mean_terminal': terminal.mean(),
            'pct_above_start': (terminal > 1.0).mean() * 100,
            'pct_below_half': (terminal < 0.5).mean() * 100,
            'worst_5pct': np.percentile(terminal, 5),
            'best_5pct': np.percentile(terminal, 95),
        }

    return results

# Compare Kelly fractions: p=0.55, odds=-110
results = fractional_kelly_analysis(0.55, -110)

print(f"Strategy Comparison: p=0.55, odds=-110, 1000 bets")
print(f"{'Fraction':>8} | {'Bet %':>6} | {'Growth':>7} | "
      f"{'Median':>8} | {'% > Start':>9} | {'% < Half':>9}")
print("-" * 65)

for frac, r in sorted(results.items()):
    print(f"{frac:>7.0%}K | {r['bet_size_pct']:>5.1f}% | "
          f"{r['theoretical_growth_rate']:>6.4f} | "
          f"{r['median_terminal']:>8.1f}x | "
          f"{r['pct_above_start']:>8.1f}% | "
          f"{r['pct_below_half']:>8.1f}%")

Expected output (approximate; the percentage columns are simulation estimates and vary from run to run):

Strategy Comparison: p=0.55, odds=-110, 1000 bets
Fraction | Bet % | Growth |   Median | % > Start | % < Half
-----------------------------------------------------------------
    10%K |   0.6% | 0.0003 |     1.3x |      94.0% |      0.0%
    20%K |   1.1% | 0.0005 |     1.6x |      93.2% |      0.0%
    25%K |   1.4% | 0.0006 |     1.8x |      92.4% |      0.1%
    33%K |   1.8% | 0.0008 |     2.1x |      91.4% |      0.4%
    50%K |   2.8% | 0.0010 |     2.8x |      89.2% |      1.9%
    75%K |   4.1% | 0.0013 |     3.6x |      85.3% |      5.3%
   100%K |   5.5% | 0.0014 |     4.0x |      80.5% |     10.8%

Key Insight: Quarter-Kelly ($\alpha = 0.25$) is a popular choice because it cuts the standard deviation of outcomes by roughly 75% while still capturing about 44% of the full Kelly growth rate (half-Kelly captures about 75% of the growth with half the volatility). The smaller stake also provides a natural buffer against probability estimation errors.
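
To see where these trade-offs come from, a minimal sketch (reusing the $p = 0.55$, -110 example) evaluates the exact growth rate at several multiples of full Kelly. For small edges the ratio follows the quadratic approximation $G(\alpha f^*)/G(f^*) \approx \alpha(2 - \alpha)$, which is also why betting roughly twice Kelly drives the growth rate to zero:

import numpy as np

def growth_rate(f, p, b):
    """Long-run growth rate per bet: G(f) = p*log(1 + b*f) + q*log(1 - f)."""
    return p * np.log(1 + b * f) + (1 - p) * np.log(1 - f)

p, b = 0.55, 100 / 110
f_star = (p * b - (1 - p)) / b            # full Kelly: 5.5% of bankroll here

print(f"{'alpha':>6} | {'bet %':>6} | {'growth/bet':>10} | {'share of full-Kelly growth':>26}")
for alpha in [0.25, 0.33, 0.50, 0.75, 1.00, 1.50, 2.00]:
    g = growth_rate(alpha * f_star, p, b)
    share = g / growth_rate(f_star, p, b)
    print(f"{alpha:>6.2f} | {alpha * f_star * 100:>5.1f}% | {g:>10.6f} | {share:>25.1%}")

The $\alpha = 2$ row makes the over-betting danger concrete: doubling the Kelly stake roughly doubles the volatility while forfeiting essentially all of the growth.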


14.2 Portfolio Theory Applied to Betting

From Single Bets to Portfolios

In practice, bettors rarely face a single bet in isolation. On any given day, you might have 5-20 simultaneous betting opportunities. How should you allocate your bankroll across these opportunities?

The answer comes from Harry Markowitz's Modern Portfolio Theory (MPT), adapted for the binary-outcome setting of sports betting.

The Bet Portfolio

Consider a portfolio of $n$ simultaneous bets. Each bet $i$ has:

  • Estimated win probability $p_i$
  • Net odds $b_i$ (decimal odds minus 1)
  • Wager fraction $f_i$ (fraction of bankroll)
  • Expected return $\mu_i = p_i b_i - (1 - p_i) = p_i(b_i + 1) - 1$
  • Return variance $\sigma_i^2 = p_i(1 - p_i)(b_i + 1)^2$

The portfolio's expected return is:

$$ \mu_P = \sum_{i=1}^n f_i \mu_i $$

The portfolio's variance is:

$$ \sigma_P^2 = \sum_{i=1}^n \sum_{j=1}^n f_i f_j \sigma_{ij} $$

where $\sigma_{ij}$ is the covariance of returns between bets $i$ and $j$.

Covariance of Bet Outcomes

For independent bets, $\sigma_{ij} = 0$ for $i \neq j$, and the portfolio simplifies considerably. However, many bets are correlated:

  • Same-game bets: A team's spread and the total are correlated
  • Same-sport bets: NFL favorites all losing on one week is more likely than independent probabilities suggest (weather, scheduling patterns)
  • Cross-sport correlations: Public sentiment can create correlated mispricing across sports
  • Systematic factors: Market-wide factors (like sharp money syndicates betting multiple games simultaneously) create correlation
import numpy as np
from scipy.optimize import minimize

def estimate_bet_covariance(
    bets: list,
    correlation_matrix: np.ndarray = None
) -> np.ndarray:
    """
    Estimate the covariance matrix of bet returns.

    Parameters
    ----------
    bets : list of dict
        Each dict: {'prob': float, 'odds': float (decimal net)}
    correlation_matrix : np.ndarray, optional
        Pairwise correlation matrix. If None, assumes independence.

    Returns
    -------
    np.ndarray
        Covariance matrix of bet returns
    """
    n = len(bets)

    # Individual variances
    variances = []
    for bet in bets:
        p = bet['prob']
        b = bet['odds']
        var = p * (1 - p) * (b + 1) ** 2
        variances.append(var)

    variances = np.array(variances)

    if correlation_matrix is None:
        # Assume independence
        return np.diag(variances)

    # Build covariance matrix from correlations and variances
    std_devs = np.sqrt(variances)
    cov_matrix = np.outer(std_devs, std_devs) * correlation_matrix

    return cov_matrix

def optimal_portfolio_allocation(
    bets: list,
    correlation_matrix: np.ndarray = None,
    max_total_fraction: float = 0.20,
    max_single_fraction: float = 0.05,
    risk_aversion: float = 2.0
) -> dict:
    """
    Find the optimal allocation across a portfolio of bets using
    mean-variance optimization (adapted Markowitz framework).

    The objective is to maximize:
        utility = expected_return - (risk_aversion / 2) * variance

    Parameters
    ----------
    bets : list of dict
        Each dict: {'name': str, 'prob': float, 'odds': float (decimal net)}
    correlation_matrix : np.ndarray, optional
        Pairwise correlation matrix
    max_total_fraction : float
        Maximum total bankroll fraction across all bets
    max_single_fraction : float
        Maximum fraction on any single bet
    risk_aversion : float
        Risk aversion parameter (higher = more conservative)

    Returns
    -------
    dict with optimal allocations and portfolio metrics
    """
    n = len(bets)

    # Expected returns
    mu = np.array([
        bet['prob'] * bet['odds'] - (1 - bet['prob'])
        for bet in bets
    ])

    # Covariance matrix
    cov = estimate_bet_covariance(bets, correlation_matrix)

    # Objective: maximize utility = mu'f - (lambda/2) * f'Cov*f
    def neg_utility(f):
        expected_return = mu @ f
        variance = f @ cov @ f
        return -(expected_return - (risk_aversion / 2) * variance)

    # Constraints
    constraints = [
        {'type': 'ineq', 'fun': lambda f: max_total_fraction - np.sum(f)},
    ]

    # Bounds: each allocation between 0 and max_single_fraction
    bounds = [(0, max_single_fraction) for _ in range(n)]

    # Initial guess: equal allocation
    f0 = np.ones(n) * min(max_total_fraction / n, max_single_fraction)

    result = minimize(
        neg_utility, f0,
        method='SLSQP',
        bounds=bounds,
        constraints=constraints
    )

    optimal_f = result.x
    portfolio_return = mu @ optimal_f
    portfolio_variance = optimal_f @ cov @ optimal_f
    portfolio_std = np.sqrt(portfolio_variance)
    sharpe = portfolio_return / portfolio_std if portfolio_std > 0 else 0

    return {
        'allocations': {
            bets[i]['name']: {
                'fraction': optimal_f[i],
                'fraction_pct': optimal_f[i] * 100,
                'expected_return': mu[i],
                'individual_kelly': kelly_decimal(bets[i]['prob'],
                                                   bets[i]['odds'] + 1),
            }
            for i in range(n)
        },
        'portfolio_expected_return': portfolio_return,
        'portfolio_std': portfolio_std,
        'portfolio_sharpe': sharpe,
        'total_allocation': optimal_f.sum(),
        'total_allocation_pct': optimal_f.sum() * 100,
    }

# Example: portfolio of 5 bets
bets = [
    {'name': 'NFL: Patriots +3', 'prob': 0.54, 'odds': 0.909},   # -110
    {'name': 'NFL: Chiefs -7', 'prob': 0.56, 'odds': 0.909},     # -110
    {'name': 'NBA: Lakers ML', 'prob': 0.48, 'odds': 1.50},      # +150
    {'name': 'NHL: Bruins ML', 'prob': 0.58, 'odds': 0.769},     # -130
    {'name': 'NFL: Over 45.5', 'prob': 0.55, 'odds': 0.952},     # -105
]

# Correlation matrix (some NFL bets are mildly correlated)
corr = np.array([
    [1.00, 0.05, 0.00, 0.00, 0.15],   # Patriots vs Chiefs: slight corr
    [0.05, 1.00, 0.00, 0.00, 0.05],   # Chiefs vs Over: slight
    [0.00, 0.00, 1.00, 0.00, 0.00],   # Lakers: independent
    [0.00, 0.00, 0.00, 1.00, 0.00],   # Bruins: independent
    [0.15, 0.05, 0.00, 0.00, 1.00],   # Over correlated with Patriots
])

result = optimal_portfolio_allocation(bets, corr, risk_aversion=3.0)

print("Optimal Portfolio Allocation")
print("=" * 60)
for name, alloc in result['allocations'].items():
    print(f"  {name:25s}: {alloc['fraction_pct']:5.2f}% "
          f"(Kelly: {alloc['individual_kelly']*100:5.2f}%)")

print(f"\nTotal Allocation: {result['total_allocation_pct']:.2f}%")
print(f"Portfolio E[Return]: {result['portfolio_expected_return']*100:.3f}%")
print(f"Portfolio Std Dev:   {result['portfolio_std']*100:.3f}%")
print(f"Portfolio Sharpe:    {result['portfolio_sharpe']:.3f}")

The Diversification Benefit

Diversification reduces portfolio variance without reducing expected return (when bets are not perfectly correlated). The mathematical principle is:

$$ \text{Var}\left(\sum_i f_i R_i\right) = \sum_i f_i^2 \text{Var}(R_i) + 2\sum_{i<j} f_i f_j \text{Cov}(R_i, R_j) $$

When bets are independent ($\text{Cov}(R_i, R_j) = 0$) and a fixed total allocation is split equally across them, the portfolio variance decreases as $1/n$. A portfolio of 10 independent bets therefore has a $\sqrt{10} \approx 3.2$ times better Sharpe ratio than the same total stake on a single bet.

def diversification_benefit(
    n_bets_range: range = range(1, 21),
    prob: float = 0.55,
    odds: float = 0.909,
    total_allocation: float = 0.10,
    correlation: float = 0.0
) -> list:
    """
    Calculate how diversification improves risk-adjusted returns.

    Parameters
    ----------
    n_bets_range : range
        Number of simultaneous bets to compare
    prob : float
        Win probability for each bet (assumed identical)
    odds : float
        Net decimal odds for each bet
    total_allocation : float
        Total bankroll fraction allocated
    correlation : float
        Pairwise correlation between all bets

    Returns
    -------
    list of dicts with portfolio metrics at each n
    """
    mu = prob * odds - (1 - prob)  # Expected return per bet
    var = prob * (1 - prob) * (odds + 1) ** 2  # Variance per bet

    results = []
    for n in n_bets_range:
        f_each = total_allocation / n  # Equal allocation

        portfolio_mu = n * f_each * mu  # = total_allocation * mu
        portfolio_var = (
            n * f_each**2 * var +
            n * (n - 1) * f_each**2 * correlation * var
        )
        portfolio_std = np.sqrt(portfolio_var)
        sharpe = portfolio_mu / portfolio_std if portfolio_std > 0 else 0

        results.append({
            'n_bets': n,
            'per_bet_fraction': f_each * 100,
            'portfolio_return': portfolio_mu * 100,
            'portfolio_std': portfolio_std * 100,
            'sharpe': sharpe,
        })

    return results

# Compare independent vs. correlated bets
print("Diversification Benefit (10% total allocation, p=0.55, -110)")
print(f"{'N Bets':>7} | {'Per Bet':>8} | {'E[Return]':>10} | "
      f"{'Std (indep)':>11} | {'Sharpe':>7} | {'Std (r=0.3)':>11} | {'Sharpe':>7}")
print("-" * 80)

results_indep = diversification_benefit(correlation=0.0)
results_corr = diversification_benefit(correlation=0.3)

for ri, rc in zip(results_indep, results_corr):
    print(f"{ri['n_bets']:>7} | {ri['per_bet_fraction']:>7.2f}% | "
          f"{ri['portfolio_return']:>9.3f}% | "
          f"{ri['portfolio_std']:>10.3f}% | {ri['sharpe']:>6.3f} | "
          f"{rc['portfolio_std']:>10.3f}% | {rc['sharpe']:>6.3f}")

Callout: The Practical Limit of Diversification

In theory, infinite diversification among uncorrelated bets would eliminate all variance. In practice, two factors limit this: (1) bets within the same sport and time period are rarely truly independent, and (2) you have finite time, attention, and bankroll. The sweet spot for most bettors is 5-15 simultaneous bets per day, enough to capture meaningful diversification while maintaining quality analysis on each bet.


14.3 Correlated Bets and Parlay Optimization

Understanding Bet Correlation

When two bet outcomes are correlated, the standard independent-bet analysis breaks down. The most common sources of correlation in sports betting:

Positive correlations:

  • Team moneyline and team spread (same team)
  • Team moneyline and the over (the winning team contributes to the total)
  • Player props within the same game (the game environment affects all of them)
  • Weather-sensitive bets (wind and rain affect multiple outcomes)

Negative correlations:

  • Home spread and away spread (opposite sides)
  • Over and under (by definition)
  • Individual player totals when one player's usage comes at another's expense

from itertools import combinations

def calculate_joint_probability(
    prob_a: float,
    prob_b: float,
    correlation: float
) -> dict:
    """
    Calculate the joint probability table for two correlated binary events.

    Uses the technique of constructing a joint distribution from
    marginals and correlation.

    Parameters
    ----------
    prob_a : float
        Marginal probability of event A
    prob_b : float
        Marginal probability of event B
    correlation : float
        Pearson correlation between the two Bernoulli outcomes

    Returns
    -------
    dict with joint probabilities
    """
    # For two Bernoulli random variables with correlation rho:
    # P(A=1, B=1) = p_a * p_b + rho * sqrt(p_a * q_a * p_b * q_b)
    q_a = 1 - prob_a
    q_b = 1 - prob_b

    # Feasible correlation range follows from the Frechet bounds on P(A=1, B=1):
    #   max(0, p_a + p_b - 1) <= P(A=1, B=1) <= min(p_a, p_b)
    denom = np.sqrt(prob_a * q_a * prob_b * q_b)
    max_corr = (min(prob_a, prob_b) - prob_a * prob_b) / denom
    min_corr = (max(0.0, prob_a + prob_b - 1.0) - prob_a * prob_b) / denom

    # Clip the requested correlation to the feasible range
    rho = np.clip(correlation, min_corr, max_corr)

    p_both = prob_a * prob_b + rho * np.sqrt(prob_a * q_a * prob_b * q_b)
    p_a_not_b = prob_a - p_both
    p_not_a_b = prob_b - p_both
    p_neither = 1 - p_both - p_a_not_b - p_not_a_b

    # Ensure non-negative (numerical stability)
    p_both = max(0, p_both)
    p_a_not_b = max(0, p_a_not_b)
    p_not_a_b = max(0, p_not_a_b)
    p_neither = max(0, p_neither)

    # Renormalize
    total = p_both + p_a_not_b + p_not_a_b + p_neither
    p_both /= total
    p_a_not_b /= total
    p_not_a_b /= total
    p_neither /= total

    return {
        'both_win': p_both,
        'a_win_b_lose': p_a_not_b,
        'a_lose_b_win': p_not_a_b,
        'both_lose': p_neither,
        'effective_correlation': rho,
    }

# Example: Team spread and game total
joint = calculate_joint_probability(
    prob_a=0.55,  # Favorite covers
    prob_b=0.52,  # Over hits
    correlation=0.15  # Mild positive correlation
)

print("Joint Probability: Spread & Total")
print(f"  Both win:      {joint['both_win']:.4f}")
print(f"  Spread only:   {joint['a_win_b_lose']:.4f}")
print(f"  Total only:    {joint['a_lose_b_win']:.4f}")
print(f"  Both lose:     {joint['both_lose']:.4f}")
print(f"  (Independent would give both_win = {0.55*0.52:.4f})")

When Do Parlays Have Positive Expected Value?

The conventional wisdom is that parlays are sucker bets because the sportsbook takes vig on each leg. However, there are specific situations where parlays can offer positive expected value:

Condition 1: Correlated outcomes not properly priced

If two outcomes are positively correlated and the sportsbook prices the parlay as if they were independent, the parlay has extra value:

$$ P(\text{both win}) > P(A) \times P(B) $$

but the parlay pays as if:

$$ \text{Parlay odds} = \text{Odds}_A \times \text{Odds}_B $$

def parlay_ev_analysis(
    legs: list,
    correlation_matrix: np.ndarray = None,
    parlay_odds_method: str = 'standard'
) -> dict:
    """
    Analyze the expected value of a parlay accounting for correlations.

    Parameters
    ----------
    legs : list of dict
        Each dict: {'name': str, 'prob': float, 'decimal_odds': float}
    correlation_matrix : np.ndarray, optional
        Pairwise correlations between legs
    parlay_odds_method : str
        'standard' (multiply odds) or 'custom' (provided parlay odds)

    Returns
    -------
    dict with parlay EV analysis
    """
    n = len(legs)

    # Calculate standard parlay odds (assumes independence)
    parlay_decimal = 1.0
    for leg in legs:
        parlay_decimal *= leg['decimal_odds']

    # True probability of all legs winning
    if correlation_matrix is None or n <= 1:
        # Assume independence
        true_all_win = 1.0
        for leg in legs:
            true_all_win *= leg['prob']
    elif n == 2:
        joint = calculate_joint_probability(
            legs[0]['prob'], legs[1]['prob'],
            correlation_matrix[0, 1]
        )
        true_all_win = joint['both_win']
    else:
        # For n > 2, use Monte Carlo simulation
        true_all_win = _simulate_correlated_parlay(legs, correlation_matrix)

    # Independent probability (what the book assumes)
    independent_all_win = 1.0
    for leg in legs:
        independent_all_win *= leg['prob']

    # Implied probability from parlay odds
    implied_prob = 1.0 / parlay_decimal

    # Expected value
    ev = true_all_win * (parlay_decimal - 1) - (1 - true_all_win)
    ev_independent = independent_all_win * (parlay_decimal - 1) - (1 - independent_all_win)

    # Correlation bonus: extra EV from correlation
    correlation_bonus = (true_all_win - independent_all_win) * parlay_decimal

    return {
        'n_legs': n,
        'parlay_decimal_odds': parlay_decimal,
        'parlay_american_odds': _decimal_to_american(parlay_decimal),
        'implied_probability': implied_prob,
        'independent_probability': independent_all_win,
        'true_probability': true_all_win,
        'ev_assuming_independent': ev_independent,
        'ev_with_correlation': ev,
        'correlation_bonus': correlation_bonus,
        'is_positive_ev': ev > 0,
    }

def _simulate_correlated_parlay(
    legs: list,
    correlation_matrix: np.ndarray,
    n_sim: int = 100000
) -> float:
    """Simulate the joint probability of all legs winning."""
    n = len(legs)
    probs = np.array([leg['prob'] for leg in legs])

    # Generate correlated uniform random variables using Gaussian copula
    L = np.linalg.cholesky(correlation_matrix)
    Z = np.random.randn(n_sim, n) @ L.T

    # Convert to uniform via CDF
    from scipy.stats import norm
    U = norm.cdf(Z)

    # Each leg wins if uniform < probability
    wins = U < probs[np.newaxis, :]
    all_win = wins.all(axis=1)

    return all_win.mean()

def _decimal_to_american(decimal_odds: float) -> float:
    """Convert decimal odds to American odds."""
    if decimal_odds >= 2.0:
        return (decimal_odds - 1) * 100
    else:
        return -100 / (decimal_odds - 1)

# Example: Correlated same-game parlay
legs = [
    {'name': 'Team A ML', 'prob': 0.62, 'decimal_odds': 1.65},
    {'name': 'Over 48.5', 'prob': 0.54, 'decimal_odds': 1.91},
]

# Positive correlation: if the favorite wins, the game is more likely to go over
corr = np.array([[1.0, 0.20], [0.20, 1.0]])

result = parlay_ev_analysis(legs, corr)

print("Parlay EV Analysis")
print(f"  Legs: {result['n_legs']}")
print(f"  Parlay Odds: {result['parlay_decimal_odds']:.3f} "
      f"({result['parlay_american_odds']:+.0f})")
print(f"  Implied Prob: {result['implied_probability']:.4f}")
print(f"  Independent Prob: {result['independent_probability']:.4f}")
print(f"  True Prob (correlated): {result['true_probability']:.4f}")
print(f"  EV (independent): {result['ev_assuming_independent']:.4f}")
print(f"  EV (correlated): {result['ev_with_correlation']:.4f}")
print(f"  Correlation Bonus: {result['correlation_bonus']:.4f}")
print(f"  Positive EV: {result['is_positive_ev']}")

Condition 2: Individual legs have sufficient edge

If each leg individually has positive expected value and the legs are independent, a standard parlay (priced as the product of the leg prices) compounds that edge, where $\text{edge}_i = p_i d_i - 1$ is the per-dollar edge on leg $i$ at decimal odds $d_i$:

$$ \text{EV}_{\text{parlay}} = \prod_i (1 + \text{edge}_i) - 1 $$

For two legs each with 5% edge: $(1.05)^2 - 1 = 10.25\%$ edge on the parlay.
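
A minimal sketch of this compounding, with independence assumed and illustrative leg prices:

def parlay_edge_from_legs(legs):
    """
    Edge of a parlay priced as the product of its leg prices, assuming
    independent legs. legs is a list of (true_prob, decimal_odds) tuples.
    """
    growth = 1.0
    for prob, dec_odds in legs:
        growth *= prob * dec_odds      # = 1 + edge of this leg
    return growth - 1.0

# Two hypothetical legs, each at decimal 1.91 (-110) with a 5% edge
legs = [(1.05 / 1.91, 1.91), (1.05 / 1.91, 1.91)]
print(f"Parlay edge: {parlay_edge_from_legs(legs):.2%}")   # 10.25%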

Condition 3: Enhanced or boosted parlays

Sportsbooks increasingly offer "odds boosts" on parlays as promotions. When the boost is large enough, it can create positive EV even on legs without individual edge.
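
A boost is worth taking only when the boosted price clears the fair price of the combined outcome, $1 / P(\text{all legs win})$. A minimal sketch with hypothetical numbers:

def boost_is_positive_ev(true_all_win_prob, boosted_decimal_odds):
    """A boosted parlay is +EV when its payout exceeds the break-even price."""
    breakeven_odds = 1.0 / true_all_win_prob
    return boosted_decimal_odds > breakeven_odds

# Hypothetical: two independent 50% legs (true parlay prob 0.25, fair odds 4.00);
# the standard price of two -110 legs is about 3.65, boosted by the book to 4.20.
print(boost_is_positive_ev(0.25, 3.65))   # False -- the standard parlay is -EV
print(boost_is_positive_ev(0.25, 4.20))   # True  -- the boost creates positive EV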

Optimal Parlay Construction

Given a set of potential bets, which combinations (if any) should be parlayed?

def find_optimal_parlays(
    bets: list,
    correlation_matrix: np.ndarray,
    max_legs: int = 4,
    min_ev: float = 0.0
) -> list:
    """
    Find all parlay combinations with positive expected value.

    Parameters
    ----------
    bets : list of dict
        Available bets with prob and decimal_odds
    correlation_matrix : np.ndarray
        Full correlation matrix for all bets
    max_legs : int
        Maximum number of legs in a parlay
    min_ev : float
        Minimum EV to include in results

    Returns
    -------
    list of parlay opportunities sorted by EV
    """
    n = len(bets)
    parlays = []

    for n_legs in range(2, min(max_legs + 1, n + 1)):
        for combo in combinations(range(n), n_legs):
            legs = [bets[i] for i in combo]
            sub_corr = correlation_matrix[np.ix_(combo, combo)]

            result = parlay_ev_analysis(legs, sub_corr)

            if result['ev_with_correlation'] >= min_ev:
                parlays.append({
                    'legs': [bets[i]['name'] for i in combo],
                    'n_legs': n_legs,
                    'ev': result['ev_with_correlation'],
                    'true_prob': result['true_probability'],
                    'parlay_odds': result['parlay_decimal_odds'],
                    'correlation_bonus': result['correlation_bonus'],
                })

    return sorted(parlays, key=lambda x: -x['ev'])

Callout: The Same-Game Parlay (SGP) Opportunity

Same-game parlays are one of the few areas where recreational-oriented sportsbooks regularly misprice correlated outcomes. The book typically prices SGPs using independence assumptions (or conservative correlation adjustments), but the true correlations between outcomes within a single game can be substantial. A bettor who accurately models within-game correlations can find consistent positive EV in SGP markets. However, the edge per parlay is often small, and the hit rate is low, requiring disciplined bankroll management.


14.4 Drawdown Management and Recovery

Maximum Drawdown Analysis

A drawdown is the peak-to-trough decline in bankroll value. Understanding drawdown behavior is critical because:

  1. Drawdowns are psychologically devastating -- even with a positive edge, extended drawdowns cause self-doubt and poor decision-making
  2. Drawdowns determine survival -- if you run out of bankroll, your edge is meaningless
  3. Drawdowns inform bankroll sizing -- your initial bankroll should be large enough to survive expected drawdowns

Mathematical Framework

For a series of bets with fraction $f$ wagered, the maximum drawdown $D$ over $n$ bets follows a distribution that depends on $f$, $p$, $b$, and $n$.

The expected maximum drawdown can be approximated by treating log-bankroll as a biased random walk (equivalently, a Brownian motion with positive drift). For a bettor with expected log-return $\mu > 0$ per bet and standard deviation $\sigma$ per bet, the large-$n$ asymptotic takes the form

$$ E[D_{\max}] \approx \frac{\sigma^2}{2\mu} \left( \log\frac{\mu^2 n}{2\sigma^2} + C \right) $$

where $C$ is a constant of order one from the analysis of Brownian-motion drawdowns. The approximation requires $\mu > 0$ and $\mu^2 n / \sigma^2$ to be large; at typical fractional-Kelly bet sizes that condition rarely holds over realistic horizons, so Monte Carlo simulation is the more reliable tool.

import numpy as np
from scipy.stats import norm

def drawdown_analysis(
    probability: float,
    american_odds: float,
    kelly_fraction: float = 0.25,
    n_bets: int = 1000,
    n_simulations: int = 20000,
    bankroll: float = 10000
) -> dict:
    """
    Comprehensive drawdown analysis via Monte Carlo simulation.

    Parameters
    ----------
    probability : float
        Win probability
    american_odds : float
        American odds
    kelly_fraction : float
        Fraction of full Kelly to wager
    n_bets : int
        Number of bets to simulate
    n_simulations : int
        Number of simulation paths
    bankroll : float
        Starting bankroll

    Returns
    -------
    dict with drawdown statistics
    """
    full_kelly = kelly_american(probability, american_odds)
    f = full_kelly * kelly_fraction

    if american_odds > 0:
        b = american_odds / 100
    else:
        b = 100 / abs(american_odds)

    max_drawdowns = []
    max_drawdown_durations = []
    recovery_times = []
    ruin_count = 0
    ruin_threshold = 0.10  # 90% drawdown = ruin

    for _ in range(n_simulations):
        wealth = bankroll
        peak = bankroll
        max_dd = 0
        max_dd_duration = 0
        current_dd_duration = 0
        recovery_time_this_sim = []
        dd_start = None

        for bet_num in range(n_bets):
            if np.random.random() < probability:
                wealth *= (1 + b * f)
            else:
                wealth *= (1 - f)

            # Drawdown tracking
            if wealth > peak:
                peak = wealth
                if dd_start is not None:
                    recovery_time_this_sim.append(bet_num - dd_start)
                    dd_start = None
                current_dd_duration = 0
            else:
                current_dd_duration += 1
                if dd_start is None:
                    dd_start = bet_num

            dd = (peak - wealth) / peak
            if dd > max_dd:
                max_dd = dd
                max_dd_duration = current_dd_duration

            # Check for ruin
            if wealth < bankroll * ruin_threshold:
                ruin_count += 1
                break

        max_drawdowns.append(max_dd)
        max_drawdown_durations.append(max_dd_duration)
        if recovery_time_this_sim:
            recovery_times.extend(recovery_time_this_sim)

    max_drawdowns = np.array(max_drawdowns)
    max_drawdown_durations = np.array(max_drawdown_durations)

    return {
        'bet_size_pct': f * 100,
        'kelly_fraction': kelly_fraction,
        'n_bets': n_bets,
        'mean_max_drawdown': max_drawdowns.mean(),
        'median_max_drawdown': np.median(max_drawdowns),
        'p90_max_drawdown': np.percentile(max_drawdowns, 90),
        'p95_max_drawdown': np.percentile(max_drawdowns, 95),
        'p99_max_drawdown': np.percentile(max_drawdowns, 99),
        'worst_drawdown': max_drawdowns.max(),
        'mean_dd_duration': max_drawdown_durations.mean(),
        'median_dd_duration': np.median(max_drawdown_durations),
        'p95_dd_duration': np.percentile(max_drawdown_durations, 95),
        'ruin_probability': ruin_count / n_simulations,
        'mean_recovery_time': np.mean(recovery_times) if recovery_times else None,
        'median_recovery_time': np.median(recovery_times) if recovery_times else None,
    }

# Compare drawdowns at different Kelly fractions
print("Drawdown Analysis: p=0.55, odds=-110, 1000 bets")
print(f"{'Kelly':>6} | {'Bet %':>6} | {'Mean DD':>8} | {'P95 DD':>8} | "
      f"{'P99 DD':>8} | {'Ruin %':>7} | {'Mean Dur':>9}")
print("-" * 70)

for kf in [0.10, 0.20, 0.25, 0.33, 0.50, 0.75, 1.00]:
    result = drawdown_analysis(0.55, -110, kelly_fraction=kf,
                                n_simulations=5000)
    print(f"{kf:>5.0%}K | {result['bet_size_pct']:>5.1f}% | "
          f"{result['mean_max_drawdown']:>7.1%} | "
          f"{result['p95_max_drawdown']:>7.1%} | "
          f"{result['p99_max_drawdown']:>7.1%} | "
          f"{result['ruin_probability']:>6.2%} | "
          f"{result['mean_dd_duration']:>8.0f}")

Recovery Time Estimation

After a drawdown, how long does it take to recover? The expected recovery time from a drawdown of depth $d$ depends on your edge and bet size:

def estimate_recovery_time(
    drawdown_depth: float,
    probability: float,
    american_odds: float,
    kelly_fraction: float = 0.25,
    confidence: float = 0.90
) -> dict:
    """
    Estimate the number of bets needed to recover from a drawdown.

    Parameters
    ----------
    drawdown_depth : float
        Current drawdown as a fraction (e.g., 0.20 = 20% below peak)
    probability : float
        Win probability
    american_odds : float
        American odds
    kelly_fraction : float
        Fraction of full Kelly
    confidence : float
        Confidence level for recovery time estimate

    Returns
    -------
    dict with recovery time estimates
    """
    full_kelly = kelly_american(probability, american_odds)
    f = full_kelly * kelly_fraction

    if american_odds > 0:
        b = american_odds / 100
    else:
        b = 100 / abs(american_odds)

    # Expected log return per bet
    mu_log = probability * np.log(1 + b * f) + (1 - probability) * np.log(1 - f)

    # Variance of log return per bet
    var_log = (probability * np.log(1 + b * f)**2 +
               (1 - probability) * np.log(1 - f)**2 - mu_log**2)
    sigma_log = np.sqrt(var_log)

    # We need log wealth to increase by -log(1 - drawdown) to recover
    target = -np.log(1 - drawdown_depth)

    # Expected recovery time (first passage time approximation)
    expected_recovery = target / mu_log if mu_log > 0 else float('inf')

    # Confidence interval on recovery time using normal approximation
    # After n bets, log wealth ~ N(n*mu, n*sigma^2)
    # We want P(sum > target) >= confidence
    # n*mu - z*sigma*sqrt(n) >= target
    z = norm.ppf(confidence)

    # Solve quadratic: n*mu - z*sigma*sqrt(n) - target = 0
    # Let u = sqrt(n): mu*u^2 - z*sigma*u - target = 0
    a_coeff = mu_log
    b_coeff = -z * sigma_log
    c_coeff = -target

    discriminant = b_coeff**2 - 4 * a_coeff * c_coeff
    if discriminant < 0 or a_coeff <= 0:
        confident_recovery = float('inf')
    else:
        u = (-b_coeff + np.sqrt(discriminant)) / (2 * a_coeff)
        confident_recovery = u**2

    return {
        'drawdown_depth': drawdown_depth,
        'drawdown_pct': drawdown_depth * 100,
        'expected_recovery_bets': expected_recovery,
        f'recovery_bets_{confidence:.0%}_confidence': confident_recovery,
        'mu_per_bet': mu_log,
        'sigma_per_bet': sigma_log,
    }

# Recovery times from various drawdown depths
print("Recovery Time Estimates (p=0.55, -110, 25% Kelly)")
print(f"{'Drawdown':>9} | {'Expected':>10} | {'90% Conf':>10}")
print("-" * 35)

for dd in [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50]:
    r = estimate_recovery_time(dd, 0.55, -110, 0.25)
    print(f"{dd:>8.0%} | {r['expected_recovery_bets']:>9.0f} | "
          f"{r['recovery_bets_90%_confidence']:>9.0f}")

The Emotional Impact of Drawdowns

Callout: The Psychology of Drawdowns

Mathematical analysis of drawdowns is necessary but not sufficient. The emotional toll of a sustained drawdown can be profound, even for experienced bettors who intellectually understand variance. Common psychological traps during drawdowns include:

  • Tilt betting: Increasing bet sizes to "get back to even" faster
  • Strategy abandonment: Switching to a new approach mid-drawdown, often at the worst possible time
  • Selective perception: Remembering losses vividly while discounting wins
  • Doubt spiral: Questioning your entire edge, which can lead to correct-but-painful decisions (stopping betting) or incorrect-but-feels-right decisions (changing strategy)

The best defense against emotional decision-making during drawdowns is a pre-commitment plan: "If my drawdown exceeds X%, I will [specific action] rather than [emotional reaction]." Write this plan when you are thinking clearly, and follow it when you are not.

Drawdown Stop-Loss Rules

Some professional bettors implement drawdown-based stop-loss rules. Here is a framework:

from dataclasses import dataclass

@dataclass
class DrawdownPolicy:
    """
    A pre-committed policy for handling drawdowns.

    Attributes
    ----------
    level_1_threshold : float
        First drawdown level (e.g., 15%)
    level_1_action : str
        Action at level 1
    level_2_threshold : float
        Second drawdown level (e.g., 25%)
    level_2_action : str
        Action at level 2
    level_3_threshold : float
        Critical drawdown level (e.g., 40%)
    level_3_action : str
        Action at level 3
    review_frequency : int
        Review interval in bets
    """
    level_1_threshold: float = 0.15
    level_1_action: str = "Reduce bet size by 50%. Review last 100 bets for process errors."
    level_2_threshold: float = 0.25
    level_2_action: str = "Reduce bet size by 75%. Full model review. Consider pausing."
    level_3_threshold: float = 0.40
    level_3_action: str = "Pause all betting. Complete audit of model and process."
    review_frequency: int = 50

    def evaluate(self, current_drawdown: float) -> str:
        """Determine the appropriate action for the current drawdown level."""
        if current_drawdown >= self.level_3_threshold:
            return f"LEVEL 3 ({current_drawdown:.1%}): {self.level_3_action}"
        elif current_drawdown >= self.level_2_threshold:
            return f"LEVEL 2 ({current_drawdown:.1%}): {self.level_2_action}"
        elif current_drawdown >= self.level_1_threshold:
            return f"LEVEL 1 ({current_drawdown:.1%}): {self.level_1_action}"
        else:
            return f"NORMAL ({current_drawdown:.1%}): Continue standard operations."

# Example usage
policy = DrawdownPolicy()
for dd in [0.05, 0.15, 0.20, 0.28, 0.45]:
    print(policy.evaluate(dd))

14.5 Multi-Account and Multi-Sport Bankroll Allocation

The Multi-Account Reality

Serious sports bettors maintain accounts at multiple sportsbooks for two reasons:

  1. Line shopping: Different books offer different prices (Chapter 12)
  2. Account longevity: Spreading action across books reduces the chance of any single book limiting your account

This creates a practical challenge: how do you allocate your total bankroll across accounts?

Bankroll Allocation Across Sportsbooks

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SportsbookAccount:
    """Represents a sportsbook account."""
    name: str
    current_balance: float
    max_deposit: float  # Maximum additional deposit allowed
    avg_juice: float  # Average vig (e.g., 0.043 for 4.3%)
    bet_limits: Dict[str, float] = field(default_factory=dict)
    # e.g., {'NFL_side': 5000, 'NBA_side': 3000}
    restriction_risk: float = 0.0  # 0=safe, 1=about to be limited
    withdrawal_speed: str = 'medium'  # 'fast', 'medium', 'slow'
    promotions_value: float = 0.0  # Monthly estimated promo value

@dataclass
class BankrollAllocator:
    """
    Manages bankroll allocation across multiple sportsbook accounts.

    Parameters
    ----------
    total_bankroll : float
        Total available bankroll (including all account balances + reserves)
    accounts : list of SportsbookAccount
    reserve_fraction : float
        Fraction of bankroll to keep as liquid reserve (not deposited)
    """
    total_bankroll: float
    accounts: List[SportsbookAccount]
    reserve_fraction: float = 0.15

    def optimal_allocation(self) -> Dict[str, float]:
        """
        Calculate optimal bankroll allocation across accounts.

        Factors considered:
        1. Juice efficiency (lower juice = more allocation)
        2. Bet limits (higher limits = more allocation)
        3. Restriction risk (higher risk = less allocation)
        4. Promotion value (higher value = more allocation)
        5. Reserve requirements
        """
        available = self.total_bankroll * (1 - self.reserve_fraction)
        n = len(self.accounts)

        # Score each account
        scores = {}
        for acct in self.accounts:
            # Lower juice is better
            juice_score = 1 - (acct.avg_juice / 0.10)  # Normalize to 10% max

            # Higher limits are better
            avg_limit = (
                np.mean(list(acct.bet_limits.values()))
                if acct.bet_limits else 1000
            )
            limit_score = min(avg_limit / 10000, 1.0)

            # Lower restriction risk is better
            restriction_score = 1 - acct.restriction_risk

            # Promotion value adds to score
            promo_score = min(acct.promotions_value / 500, 1.0)

            # Composite score
            composite = (
                0.30 * juice_score +
                0.25 * limit_score +
                0.25 * restriction_score +
                0.20 * promo_score
            )
            scores[acct.name] = max(0.01, composite)

        # Allocate proportionally to scores
        total_score = sum(scores.values())
        allocations = {}
        for acct in self.accounts:
            fraction = scores[acct.name] / total_score
            target = available * fraction

            # Cap at what the account can hold
            max_balance = acct.current_balance + acct.max_deposit
            actual = min(target, max_balance)

            allocations[acct.name] = {
                'target_balance': actual,
                'current_balance': acct.current_balance,
                'action_needed': actual - acct.current_balance,
                'fraction_of_bankroll': actual / self.total_bankroll,
                'score': scores[acct.name],
            }

        return allocations

    def rebalance_recommendations(self) -> List[str]:
        """Generate specific rebalancing recommendations."""
        allocations = self.optimal_allocation()
        recommendations = []

        for name, alloc in allocations.items():
            action = alloc['action_needed']
            if action > 100:
                recommendations.append(
                    f"DEPOSIT ${action:.0f} into {name} "
                    f"(target: ${alloc['target_balance']:.0f})"
                )
            elif action < -100:
                recommendations.append(
                    f"WITHDRAW ${abs(action):.0f} from {name} "
                    f"(target: ${alloc['target_balance']:.0f})"
                )

        reserve_target = self.total_bankroll * self.reserve_fraction
        current_reserve = self.total_bankroll - sum(
            a.current_balance for a in self.accounts
        )

        if abs(current_reserve - reserve_target) > 100:
            recommendations.append(
                f"RESERVE: Adjust to ${reserve_target:.0f} "
                f"(currently ${current_reserve:.0f})"
            )

        return recommendations

# Example: Managing 5 sportsbook accounts
accounts = [
    SportsbookAccount(
        name="Sharp Book A",
        current_balance=3000, max_deposit=10000,
        avg_juice=0.032,
        bet_limits={'NFL_side': 10000, 'NBA_side': 5000},
        restriction_risk=0.1, promotions_value=0
    ),
    SportsbookAccount(
        name="Mainstream Book B",
        current_balance=2500, max_deposit=5000,
        avg_juice=0.045,
        bet_limits={'NFL_side': 5000, 'NBA_side': 3000},
        restriction_risk=0.3, promotions_value=200
    ),
    SportsbookAccount(
        name="Promo-Heavy Book C",
        current_balance=1500, max_deposit=5000,
        avg_juice=0.050,
        bet_limits={'NFL_side': 3000, 'NBA_side': 2000},
        restriction_risk=0.2, promotions_value=500
    ),
    SportsbookAccount(
        name="Low-Limit Book D",
        current_balance=800, max_deposit=3000,
        avg_juice=0.048,
        bet_limits={'NFL_side': 1000, 'NBA_side': 500},
        restriction_risk=0.6, promotions_value=100
    ),
    SportsbookAccount(
        name="New Book E",
        current_balance=500, max_deposit=10000,
        avg_juice=0.040,
        bet_limits={'NFL_side': 5000, 'NBA_side': 3000},
        restriction_risk=0.05, promotions_value=300
    ),
]

allocator = BankrollAllocator(
    total_bankroll=20000,
    accounts=accounts,
    reserve_fraction=0.15
)

print("Optimal Bankroll Allocation")
print("=" * 60)
allocations = allocator.optimal_allocation()
for name, alloc in allocations.items():
    print(f"  {name:25s}: ${alloc['target_balance']:>7,.0f} "
          f"({alloc['fraction_of_bankroll']:.1%})")

print(f"\n  Reserve: ${20000 * 0.15:>7,.0f} (15.0%)")
print(f"\nRebalancing Recommendations:")
for rec in allocator.rebalance_recommendations():
    print(f"  -> {rec}")

Allocating by Sport

Different sports have different characteristics that affect optimal allocation:

Sport       | Season Length | Games/Week | Market Efficiency  | Recommended Allocation Strategy
------------|---------------|------------|--------------------|----------------------------------------
NFL         | Sep-Feb       | 15-16      | Very high (sides)  | Concentrated, fewer bets, larger stakes
NBA         | Oct-Jun       | 50-80      | High               | Moderate volume, moderate stakes
MLB         | Apr-Oct       | 90-105     | Moderate           | High volume, smaller stakes
NHL         | Oct-Jun       | 50-80      | Moderate           | Similar to NBA but less liquid
Soccer      | Year-round    | Varies     | Moderate-Low       | Varies widely by league
College FB  | Sep-Jan       | 40-60      | Lower              | Higher edge opportunities, lower limits
College BB  | Nov-Mar       | 50-100     | Lower              | High volume, many markets

def seasonal_bankroll_plan(
    total_bankroll: float,
    month: int,
    sports_active: dict
) -> dict:
    """
    Generate a seasonal bankroll allocation plan.

    Parameters
    ----------
    total_bankroll : float
        Total available bankroll
    month : int
        Current month (1-12)
    sports_active : dict
        {sport: {'edge': float, 'volume': int, 'efficiency': float}}
        where volume is expected bets per month
        and efficiency is 0-1 (higher = more efficient/harder)

    Returns
    -------
    dict with allocation by sport
    """
    # Filter to only active sports
    active = {
        sport: info for sport, info in sports_active.items()
        if info.get('active', True)
    }

    if not active:
        return {'error': 'No active sports'}

    # Score each sport: higher edge, higher volume, lower efficiency = more allocation
    scores = {}
    for sport, info in active.items():
        # Expected monthly profit proxy
        expected_value = info['edge'] * info['volume']
        # Discount for high efficiency (harder to maintain edge)
        discount = 1 - info['efficiency'] * 0.3
        scores[sport] = expected_value * discount

    total_score = sum(scores.values())

    # Reserve 10% as buffer
    allocatable = total_bankroll * 0.90

    allocation = {}
    for sport, score in scores.items():
        fraction = score / total_score
        amount = allocatable * fraction

        allocation[sport] = {
            'amount': amount,
            'fraction': amount / total_bankroll,
            'expected_bets': active[sport]['volume'],
            'per_bet_kelly_base': amount * 0.03,  # Approximate typical unit
            'edge': active[sport]['edge'],
        }

    return allocation

# Example: January allocation (NFL playoffs, NBA, NHL active)
sports_jan = {
    'NFL': {'edge': 0.02, 'volume': 15, 'efficiency': 0.9, 'active': True},
    'NBA': {'edge': 0.025, 'volume': 100, 'efficiency': 0.75, 'active': True},
    'NHL': {'edge': 0.03, 'volume': 60, 'efficiency': 0.65, 'active': True},
    'MLB': {'edge': 0.03, 'volume': 0, 'efficiency': 0.60, 'active': False},
    'College BB': {'edge': 0.035, 'volume': 80, 'efficiency': 0.55, 'active': True},
}

plan = seasonal_bankroll_plan(20000, 1, sports_jan)

print("January Bankroll Allocation Plan")
print("=" * 55)
for sport, info in plan.items():
    if isinstance(info, dict) and 'amount' in info:
        print(f"  {sport:15s}: ${info['amount']:>7,.0f} ({info['fraction']:>5.1%}) "
              f"| ~{info['expected_bets']} bets | Edge: {info['edge']:.1%}")

Seasonal Bankroll Management

Your bankroll needs change throughout the year as sports seasons start and end:

def annual_bankroll_calendar() -> dict:
    """
    Generate a month-by-month guide for bankroll management
    across the US sports calendar.
    """
    calendar = {
        1: {
            'label': 'January',
            'sports': ['NFL Playoffs', 'NBA', 'NHL', 'College BB'],
            'intensity': 'HIGH',
            'notes': 'NFL playoffs demand premium pricing. College BB conference play begins.',
            'allocation_tilt': 'Increase College BB. NFL small but high-conviction.',
        },
        2: {
            'label': 'February',
            'sports': ['Super Bowl', 'NBA', 'NHL', 'College BB'],
            'intensity': 'HIGH',
            'notes': 'Super Bowl props market is massive and inefficient. NBA trade deadline creates value.',
            'allocation_tilt': 'Super Bowl props. NBA post-trade deadline edges.',
        },
        3: {
            'label': 'March',
            'sports': ['NBA', 'NHL', 'College BB (March Madness)'],
            'intensity': 'VERY HIGH',
            'notes': 'March Madness is the highest-volume betting event. Many inefficient markets.',
            'allocation_tilt': 'Heavy College BB allocation. Reduce other sports slightly.',
        },
        4: {
            'label': 'April',
            'sports': ['NBA Playoffs', 'NHL Playoffs', 'MLB Opening', 'College BB Final Four'],
            'intensity': 'HIGH',
            'notes': 'Playoff pricing tightens. MLB season opens with uncertain lines.',
            'allocation_tilt': 'MLB early season value. Playoff series bets in NBA/NHL.',
        },
        5: {
            'label': 'May',
            'sports': ['NBA Playoffs', 'NHL Playoffs', 'MLB'],
            'intensity': 'MODERATE',
            'notes': 'Conference finals. MLB daily grind begins.',
            'allocation_tilt': 'Balanced allocation across active sports.',
        },
        6: {
            'label': 'June',
            'sports': ['NBA Finals', 'NHL Finals', 'MLB', 'Soccer (Euros/Copa)'],
            'intensity': 'MODERATE',
            'notes': 'Championship series. International soccer tournaments in even years.',
            'allocation_tilt': 'MLB primary. Soccer tournaments when available.',
        },
        7: {
            'label': 'July',
            'sports': ['MLB', 'Summer League'],
            'intensity': 'LOW',
            'notes': 'Slowest month. MLB only major sport. Good time for model development.',
            'allocation_tilt': 'MLB focus. Reduce total allocation; build reserves for fall.',
        },
        8: {
            'label': 'August',
            'sports': ['MLB', 'NFL Preseason', 'Soccer (Europe)'],
            'intensity': 'LOW-MODERATE',
            'notes': 'NFL preseason has exploitable markets. European soccer starts.',
            'allocation_tilt': 'MLB + early soccer. Small NFL preseason plays.',
        },
        9: {
            'label': 'September',
            'sports': ['NFL', 'College FB', 'MLB Pennant Race', 'NBA Preseason'],
            'intensity': 'HIGH',
            'notes': 'Football returns! Early NFL lines can be off. College FB openers are inefficient.',
            'allocation_tilt': 'NFL and College FB primary. MLB postseason begins.',
        },
        10: {
            'label': 'October',
            'sports': ['NFL', 'College FB', 'NBA', 'NHL', 'MLB Playoffs', 'Soccer'],
            'intensity': 'VERY HIGH',
            'notes': 'All major sports active. Maximum diversification opportunity.',
            'allocation_tilt': 'Spread broadly. This is peak diversification month.',
        },
        11: {
            'label': 'November',
            'sports': ['NFL', 'College FB', 'NBA', 'NHL', 'College BB'],
            'intensity': 'VERY HIGH',
            'notes': 'Thanksgiving NFL slate. College rivalry weeks. College BB tips off.',
            'allocation_tilt': 'Maximum allocation across all sports.',
        },
        12: {
            'label': 'December',
            'sports': ['NFL', 'College FB (Bowls)', 'NBA', 'NHL', 'College BB'],
            'intensity': 'HIGH',
            'notes': 'Bowl season. NFL playoff implications. NBA Christmas games.',
            'allocation_tilt': 'College bowl games can be inefficient. NFL primary.',
        },
    }

    return calendar
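
Before moving on, here is a minimal sketch of how the calendar above might be consumed at the start of each month. The helper name current_month_plan is purely illustrative, and calendar is the dict returned by the builder above.

from datetime import date
from typing import Optional

def current_month_plan(calendar: dict, month: Optional[int] = None) -> dict:
    """Compact planning view of one month from the seasonal calendar (illustrative helper)."""
    month = month or date.today().month
    entry = calendar[month]
    return {
        'month': entry['label'],
        'active_sports': entry['sports'],
        'intensity': entry['intensity'],
        'tilt': entry['allocation_tilt'],
    }

# Example: current_month_plan(calendar, month=10) returns the October entry,
# flagging VERY HIGH intensity and a broad-diversification tilt.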

Multi-Account Rebalancing Protocol

A practical weekly protocol for keeping account balances close to their targets:

def weekly_rebalance_check(
    allocator: BankrollAllocator,
    rebalance_threshold: float = 0.10
) -> dict:
    """
    Check if accounts need rebalancing and generate instructions.

    Parameters
    ----------
    allocator : BankrollAllocator
    rebalance_threshold : float
        Trigger rebalance if any account deviates more than this
        from target (as fraction of its target)

    Returns
    -------
    dict with rebalancing status and instructions
    """
    optimal = allocator.optimal_allocation()
    needs_rebalance = False
    instructions = []

    for name, alloc in optimal.items():
        target = alloc['target_balance']
        current = alloc['current_balance']

        if target > 0:
            deviation = abs(current - target) / target
        else:
            # No target allocation for this book: treat any remaining balance
            # as a 100% deviation so it gets flagged for withdrawal.
            deviation = 1.0 if current > 0 else 0.0

        if deviation > rebalance_threshold:
            needs_rebalance = True
            if current < target:
                instructions.append({
                    'action': 'DEPOSIT',
                    'book': name,
                    'amount': target - current,
                    'reason': f'{deviation:.0%} below target',
                })
            else:
                instructions.append({
                    'action': 'WITHDRAW',
                    'book': name,
                    'amount': current - target,
                    'reason': f'{deviation:.0%} above target',
                })

    return {
        'needs_rebalance': needs_rebalance,
        'n_adjustments': len(instructions),
        'instructions': instructions,
        'total_transfer_needed': sum(i['amount'] for i in instructions),
    }
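
As a usage sketch, the check slots naturally into a weekly routine. The snippet below assumes allocator is a BankrollAllocator instance constructed earlier in the chapter; the printing format is illustrative.

# Illustrative weekly routine: run the check and act on the transfer instructions.
result = weekly_rebalance_check(allocator, rebalance_threshold=0.10)

if result['needs_rebalance']:
    print(f"{result['n_adjustments']} transfers needed, "
          f"${result['total_transfer_needed']:,.2f} total to move")
    for step in result['instructions']:
        print(f"  {step['action']:<9} {step['book']:<20} "
              f"${step['amount']:>10,.2f}  ({step['reason']})")
else:
    print("All accounts within tolerance -- no transfers needed.")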

Callout: Tax and Legal Considerations

In many jurisdictions, gambling income is taxable. Maintaining detailed records through a bet tracker (Section 13.3) is not just good practice -- in many places it is a legal requirement. Multi-account management creates additional complexity: transfers between accounts, promotional credits, and sign-up bonuses may all have tax implications. Consult a tax professional familiar with gambling income in your jurisdiction. The bet tracking system we built in Chapter 13 can export the data needed for tax reporting.


14.6 Chapter Summary

This chapter elevated bankroll management from a basic concept to an advanced, mathematically rigorous discipline. Here are the key takeaways:

Kelly Criterion Derivation:
- The Kelly criterion maximizes the expected logarithm of wealth, which is equivalent to maximizing the long-run geometric growth rate
- The formula $f^* = (pb - q)/b$ emerges naturally from calculus optimization of the growth function
- Logarithmic utility provides ruin avoidance, asymptotic optimality, and alignment with diminishing marginal utility
- Fractional Kelly (typically 20-33% of full Kelly) is strongly recommended in practice due to probability estimation errors and psychological sustainability
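
For quick reference, a minimal sketch of that calculation for American odds; the function names and the 25% default fraction are illustrative, not a fixed recommendation.

def american_to_b(odds: int) -> float:
    """Net profit per dollar staked for American odds (e.g., -110 -> ~0.909, +150 -> 1.5)."""
    return odds / 100 if odds > 0 else 100 / abs(odds)

def kelly_stake(p: float, odds: int, fraction: float = 0.25) -> float:
    """Fractional Kelly stake: f* = (pb - q)/b, scaled by `fraction` and floored at zero."""
    b = american_to_b(odds)
    f_star = (p * b - (1 - p)) / b
    return max(0.0, f_star * fraction)

# Example: p = 0.54 at -110 gives full Kelly of about 3.4% of bankroll;
# quarter Kelly stakes roughly 0.85%.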

Portfolio Theory:
- Markowitz mean-variance optimization adapts naturally to bet portfolios
- Correlated bets reduce the diversification benefit; account for covariance in allocation
- Optimal portfolio allocation considers edge, correlation, market efficiency, liquidity, and timing
- Diversification across 5-15 simultaneous independent bets dramatically improves risk-adjusted returns
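
A small sketch of the correlation penalty in the second point: equal-weight a set of bets with a common per-bet edge, standard deviation, and pairwise correlation, and watch the portfolio Sharpe ratio fall as the correlation rises. All numbers are illustrative.

import numpy as np

def portfolio_sharpe(n_bets: int, edge: float, sd: float, rho: float) -> float:
    """Sharpe ratio of an equal-weight portfolio of bets sharing one edge, sd, and pairwise correlation."""
    w = np.full(n_bets, 1.0 / n_bets)                                    # equal weights
    cov = sd ** 2 * (np.full((n_bets, n_bets), rho) + (1 - rho) * np.eye(n_bets))
    port_mean = w @ np.full(n_bets, edge)                                # portfolio expected return
    port_sd = np.sqrt(w @ cov @ w)                                       # portfolio standard deviation
    return port_mean / port_sd

# Illustrative: ten bets with a 3% edge and unit standard deviation per unit staked.
# portfolio_sharpe(10, 0.03, 1.0, 0.00)  -> ~0.095 (independent)
# portfolio_sharpe(10, 0.03, 1.0, 0.05)  -> ~0.079 (mildly correlated)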

Correlated Bets and Parlays:
- Same-game parlays can be +EV when books fail to properly price positive correlations
- The Gaussian copula method enables simulation of correlated parlay outcomes
- Parlays compound edge but also compound vig; they are only advantageous with sufficient per-leg edge or mispriced correlations
- Systematic scanning for optimal parlay combinations can uncover hidden value
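
A minimal sketch of the edge-versus-correlation trade-off for the simplest case, a two-leg parlay. For correlated binary legs the joint win probability is $p_1 p_2 + \rho\sqrt{p_1 q_1 p_2 q_2}$, and the parlay is +EV only when that joint probability beats the implied probability of the combined price. The inputs below are illustrative.

import math

def two_leg_parlay_ev(p1: float, p2: float, rho: float, dec1: float, dec2: float) -> float:
    """Expected value per unit staked on a two-leg parlay with correlated legs.

    Joint win probability for correlated Bernoulli legs:
        P(both) = p1*p2 + rho * sqrt(p1*(1-p1) * p2*(1-p2))
    The parlay pays dec1 * dec2 (decimal odds) on a win and loses the stake otherwise.
    """
    p_both = p1 * p2 + rho * math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
    return p_both * dec1 * dec2 - 1.0

# Illustrative: two 55% legs at 1.91 (-110) each.
# two_leg_parlay_ev(0.55, 0.55, 0.00, 1.91, 1.91) -> ~+0.10 per unit (independent legs)
# two_leg_parlay_ev(0.55, 0.55, 0.25, 1.91, 1.91) -> ~+0.33 per unit (positive correlation the price ignores)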

Drawdown Management:
- Maximum drawdown analysis quantifies the worst expected decline in bankroll
- Recovery time grows rapidly with drawdown depth; a 30% drawdown can take hundreds of bets to recover
- Pre-committed drawdown policies prevent emotional decision-making during inevitable downswings
- The psychological impact of drawdowns is often more damaging than the financial impact
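
The recovery arithmetic behind the second point is worth making explicit: a drawdown of depth $d$ requires a gain of $1/(1-d) - 1$ to return to the high-water mark, and at a per-bet expected log growth rate $G$ that takes roughly $-\ln(1-d)/G$ bets. The sketch below uses the same $G(f)$ as Section 14.1; the staking level is illustrative.

import math

def recovery_estimate(drawdown: float, p: float, b: float, stake_fraction: float) -> dict:
    """Required gain and rough bet count to recover a drawdown at a given staking level."""
    required_gain = 1.0 / (1.0 - drawdown) - 1.0
    # Per-bet expected log growth at this stake, G(f) = p*log(1 + b*f) + q*log(1 - f).
    g = p * math.log(1 + b * stake_fraction) + (1 - p) * math.log(1 - stake_fraction)
    bets_to_recover = -math.log(1.0 - drawdown) / g if g > 0 else float('inf')
    return {'required_gain': required_gain, 'est_bets_to_recover': bets_to_recover}

# Illustrative: a 30% drawdown for a bettor with p = 0.54 at -110 (b ~ 0.909), staking 2% per bet:
# recovery_estimate(0.30, 0.54, 100 / 110, 0.02) -> ~42.9% gain needed, on the order of 800 bets.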

Multi-Account and Multi-Sport Allocation:
- Bankroll allocation across sportsbooks should consider juice efficiency, limits, restriction risk, and promotional value
- Seasonal allocation shifts as sports calendars change throughout the year
- Regular (weekly) rebalancing maintains optimal allocation as account balances drift
- October-November offers the greatest diversification opportunity; July is the leanest month

These advanced bankroll strategies, combined with the value betting framework from Chapter 13 and the line shopping discipline from Chapter 12, form a complete toolkit for professional-level sports betting. The mathematics provides the foundation, but success ultimately depends on disciplined execution over thousands of bets.


Exercises

Exercise 14.1: Derive the Kelly criterion for a three-outcome bet (win amount $b_1$ with probability $p_1$, win amount $b_2$ with probability $p_2$, lose with probability $q = 1 - p_1 - p_2$). Show that the optimal fraction depends on all three probabilities and both payoffs.

Exercise 14.2: Build a Monte Carlo simulation comparing the following five strategies over 5,000 bets (p=0.54, odds=-110): (a) Flat 2% of initial bankroll, (b) Flat 2% of current bankroll, (c) Full Kelly, (d) Half Kelly, (e) Quarter Kelly. Compare terminal wealth distributions, maximum drawdowns, and ruin probabilities.

Exercise 14.3: Using the portfolio optimization framework from Section 14.2, find the optimal allocation for a portfolio of 8 bets where the first 4 are NFL games (pairwise correlation 0.05) and the last 4 are NBA games (pairwise correlation 0.03), with cross-sport correlation of 0.01. Compare the portfolio Sharpe ratio to the individual bets' Sharpe ratios.

Exercise 14.4: Write a function that takes a matrix of same-game prop correlations (e.g., rushing yards, passing yards, total points, spread) and identifies the SGP combinations with the highest positive expected value. Test it on a realistic 6-prop correlation matrix.

Exercise 14.5: Create a full annual bankroll management plan for a bettor with a $25,000 bankroll, accounts at 4 sportsbooks, edges in NFL (3%), NBA (2.5%), MLB (3.5%), and College BB (4%). The plan should specify monthly allocations by sport and by book, accounting for seasonal availability and expected bet volume. Include drawdown policies and rebalancing triggers.

Exercise 14.6 (Challenge): Extend the Kelly criterion to handle the case where your probability estimate $\hat{p}$ is itself a random variable with known distribution (e.g., $\hat{p} \sim \text{Beta}(\alpha, \beta)$). Derive the "Bayesian Kelly" fraction that accounts for estimation uncertainty. Compare it to the standard fractional Kelly approach.