Chapter 34: Prop Bets and Player Markets

Proposition bets -- commonly called "props" -- represent one of the fastest-growing and most analytically rich segments of the sports betting market. Unlike traditional wagers on game outcomes (moneylines, spreads, totals), props focus on specific occurrences within a game: how many points a player will score, whether a quarterback will throw an interception, how many rebounds a center will grab, or whether the first drive of the game will result in a score. Player props, in particular, have exploded in popularity and volume, driven by the rise of daily fantasy sports culture and the proliferation of same-game parlay products offered by every major sportsbook.

For the quantitative bettor, player props present a unique combination of opportunity and complexity. The opportunity arises from the sheer breadth of markets -- a single NBA game might feature 200+ player prop lines across points, rebounds, assists, three-pointers, steals, blocks, and combined stat categories for every rostered player. The complexity comes from the multivariate nature of the modeling challenge: a player's statistical output depends on playing time, pace, usage rate, opponent defense, game script, rest, injuries to teammates, and numerous other factors that interact in non-trivial ways.

This chapter provides a comprehensive framework for modeling player props, understanding correlation structures in same-game parlays, building projection systems, identifying market inefficiencies, and deploying advanced prop strategies.


34.1 Player Prop Modeling Fundamentals

Projecting Individual Player Stats

The foundation of prop betting is the ability to generate accurate statistical projections for individual players. A projection represents our best estimate of a player's expected output in a specific statistical category for a specific game.

The basic projection formula for any counting stat follows this structure:

$$\text{Projected Stat} = \text{Playing Time} \times \text{Per-Minute Rate} \times \text{Opponent Adjustment} \times \text{Context Adjustments}$$

For example, to project LeBron James's points in a given game:

$$\text{Points} = \text{Minutes} \times \text{Points per Minute} \times \text{Opp Def Adj} \times \text{Pace Adj} \times \text{Rest Adj}$$

Each component requires careful estimation:

Playing time projection: This is often the most important and most uncertain input. A player's minutes depend on game flow (blowouts reduce starter minutes), foul trouble, injury status, coaching decisions, and back-to-back scheduling. Even a small error in minutes projection cascades through all stat projections.

Per-minute rates: A player's per-minute production rate in each statistical category. These should be calculated over a recent window (e.g., last 20 games) with appropriate weighting for recency. Season-long rates are too slow to react to mid-season changes; last-5-game rates are too noisy.

Opponent adjustments: How much easier or harder the opponent makes it to accumulate a particular stat. This requires opponent-specific defensive metrics, ideally position-adjusted.

Context adjustments: Factors like home/away, rest days, teammate absences (which can increase or decrease a player's role), and game script expectations.
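As a concrete illustration of the multiplicative structure described above, here is a minimal sketch; every rate and adjustment factor below is a hypothetical number chosen for illustration, not a fitted value:

```python
# Minimal sketch of the multiplicative projection formula.
# All inputs below are hypothetical, for illustration only.

minutes = 36.0          # projected playing time
pts_per_min = 0.75      # per-minute scoring rate (recent window)
opp_def_adj = 1.05      # opponent allows 5% more points than average
pace_adj = 1.02         # game expected to be 2% faster than average
rest_adj = 0.96         # back-to-back penalty

projected_pts = minutes * pts_per_min * opp_def_adj * pace_adj * rest_adj
print(f"Projected points: {projected_pts:.1f}")
```

Note how the back-to-back penalty and the favorable matchup partially offset each other; the adjustments compound multiplicatively rather than add.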

Pace and Usage Adjustments

Two of the most critical adjustments in player prop modeling are pace and usage.

Pace measures how quickly a team plays, typically expressed as possessions per 48 minutes in basketball or plays per game in football. A fast-paced game creates more opportunities for counting stats across the board.

The pace adjustment factor is:

$$\text{Pace Factor} = \frac{\text{Expected Game Pace}}{\text{League Average Pace}}$$

Where expected game pace is a function of both teams' pace tendencies:

$$\text{Expected Game Pace} = \frac{\text{Team A Pace} + \text{Team B Pace}}{2}$$

Usage rate measures the percentage of team possessions a player "uses" (via a field goal attempt, free throw attempt, or turnover) while on the court. High-usage players are more volatile -- they have higher ceilings and lower floors for scoring stats.

When a teammate is absent, the remaining players absorb the missing player's usage. This redistribution is not equal; star players tend to absorb more, and the specific stat categories affected depend on the missing player's role.
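One simple way to sketch this redistribution is to weight the remaining players by a power of their own usage, so that stars absorb a larger share; the exponent here is an illustrative assumption, not an empirical estimate:

```python
def redistribute_usage(remaining_usages, missing_usage, alpha=1.5):
    """Redistribute a missing player's usage among remaining players.

    Each remaining player is weighted by usage**alpha, so higher-usage
    players absorb a disproportionate share. alpha is a tunable
    assumption for illustration, not a fitted value.
    """
    weights = [u ** alpha for u in remaining_usages]
    total = sum(weights)
    return [u + missing_usage * w / total
            for u, w in zip(remaining_usages, weights)]

# A star (0.30 usage) absorbs more of a missing 0.22 usage than a
# role player (0.15 usage) does.
new_usages = redistribute_usage([0.30, 0.22, 0.15], missing_usage=0.22)
```

Total usage is conserved by construction; only its allocation across the remaining players changes.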

Opponent Defense Adjustments

Opponent defense adjustments quantify how a specific opponent affects a player's expected output. The key metrics vary by sport:

Basketball:
  • Defensive rating (points allowed per 100 possessions)
  • Position-specific defensive effectiveness (how many points/rebounds/assists do opposing point guards score against this team?)
  • Pace-adjusted defensive metrics
  • Interior vs. perimeter defense splits

Football:
  • Passing yards allowed per game (adjusted for opponent quality)
  • Rushing yards allowed per game
  • Fantasy points allowed by position
  • Red zone defensive efficiency
  • Pressure rate and sack rate (for quarterback props)

Baseball:
  • Pitcher-specific metrics (ERA, WHIP, K rate, HR rate)
  • Handedness splits (lefty-righty matchup advantages)
  • Park factors (for hitter props)
  • Bullpen quality (for late-game prop implications)

The opponent adjustment factor is typically expressed as a multiplier:

$$\text{Opp Adj} = \frac{\text{Opponent's Stat Allowed (position-specific)}}{\text{League Average Stat Allowed}}$$

Python Code for Player Prop Projections

import numpy as np
import pandas as pd
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple
from scipy.stats import poisson, norm
from scipy.optimize import minimize_scalar


@dataclass
class PlayerProfile:
    """Statistical profile for a player."""
    name: str
    team: str
    position: str
    minutes_per_game: float
    games_played: int

    # Per-minute rates (recent 20-game window)
    pts_per_min: float = 0.0
    reb_per_min: float = 0.0
    ast_per_min: float = 0.0
    stl_per_min: float = 0.0
    blk_per_min: float = 0.0
    tov_per_min: float = 0.0
    fg3_per_min: float = 0.0

    # Variance parameters (per-minute standard deviations)
    pts_std_per_min: float = 0.0
    reb_std_per_min: float = 0.0
    ast_std_per_min: float = 0.0

    # Usage and efficiency
    usage_rate: float = 0.20
    true_shooting_pct: float = 0.55


@dataclass
class GameContext:
    """Context for a specific game."""
    opponent: str
    is_home: bool
    rest_days: int
    team_pace: float           # Team's pace (poss/48)
    opponent_pace: float       # Opponent's pace
    league_avg_pace: float     # League average pace

    # Opponent defensive adjustments (multiplier vs league avg)
    opp_pts_factor: float = 1.0   # >1 means opponent allows more
    opp_reb_factor: float = 1.0
    opp_ast_factor: float = 1.0
    opp_stl_factor: float = 1.0
    opp_blk_factor: float = 1.0
    opp_fg3_factor: float = 1.0

    # Teammate absence impact
    missing_teammates_usage: float = 0.0  # Total usage of missing players
    minutes_boost: float = 0.0  # Extra minutes expected

    # Vegas game environment
    vegas_total: float = 220.0
    vegas_spread: float = 0.0   # Negative = team is favored


class PlayerPropProjector:
    """
    Projects player statistics for prop betting.

    Uses a multiplicative model with pace, opponent, and context
    adjustments applied to baseline per-minute rates.
    """

    # Home/away adjustment factors
    HOME_ADV = {
        'pts': 1.015, 'reb': 1.02, 'ast': 1.01,
        'stl': 1.005, 'blk': 1.01, 'fg3': 1.01,
    }

    # Rest day adjustments (indexed by days of rest)
    REST_ADJ = {
        0: 0.96,  # Back-to-back
        1: 1.00,  # Normal rest
        2: 1.01,  # Extra rest
        3: 1.02,  # Extended rest
    }

    def __init__(self):
        self.projections_cache: Dict[str, Dict] = {}

    def project_minutes(
        self, player: PlayerProfile, context: GameContext
    ) -> Tuple[float, float]:
        """
        Project minutes played with uncertainty.

        Returns:
            Tuple of (expected_minutes, minutes_std_dev)
        """
        base_minutes = player.minutes_per_game + context.minutes_boost

        # Rest adjustment
        rest_adj = self.REST_ADJ.get(
            min(context.rest_days, 3), 1.0
        )

        # Blowout risk adjustment based on spread
        # Large favorites or underdogs may see reduced starter minutes
        spread_abs = abs(context.vegas_spread)
        if spread_abs > 10:
            blowout_adj = 1.0 - (spread_abs - 10) * 0.005
        else:
            blowout_adj = 1.0

        # Game script: trailing teams play starters more
        # (for close games this is roughly neutral)
        expected_minutes = base_minutes * rest_adj * blowout_adj
        expected_minutes = np.clip(expected_minutes, 0, 42)  # Cap at 42

        # Minutes standard deviation (typically 4-6 minutes for starters)
        base_std = 4.5
        if expected_minutes > 34:
            base_std = 3.5  # Stars have more consistent minutes
        elif expected_minutes < 20:
            base_std = 5.5  # Role players more variable

        return expected_minutes, base_std

    def project_stat(
        self,
        player: PlayerProfile,
        context: GameContext,
        stat: str,
    ) -> Tuple[float, float]:
        """
        Project a single stat with mean and standard deviation.

        Args:
            player: Player profile with per-minute rates
            context: Game context with adjustments
            stat: Stat category ('pts', 'reb', 'ast', 'stl', 'blk', 'fg3')

        Returns:
            Tuple of (projected_value, standard_deviation)
        """
        # Get per-minute rate
        rate_map = {
            'pts': player.pts_per_min,
            'reb': player.reb_per_min,
            'ast': player.ast_per_min,
            'stl': player.stl_per_min,
            'blk': player.blk_per_min,
            'fg3': player.fg3_per_min,
        }
        std_map = {
            'pts': player.pts_std_per_min,
            'reb': player.reb_std_per_min,
            'ast': player.ast_std_per_min,
            'stl': 0.03,  # Default per-minute std
            'blk': 0.03,
            'fg3': 0.04,
        }

        per_min_rate = rate_map.get(stat, 0.0)
        per_min_std = std_map.get(stat, 0.03)

        if per_min_rate <= 0:
            return 0.0, 0.0

        # Pace adjustment
        expected_pace = (context.team_pace + context.opponent_pace) / 2
        pace_factor = expected_pace / context.league_avg_pace

        # Vegas total implied pace adjustment
        # If Vegas total is higher than pace models suggest, adjust up
        implied_pace_factor = context.vegas_total / 220.0  # Normalized to avg
        pace_factor = (pace_factor * 0.6 + implied_pace_factor * 0.4)

        # Opponent adjustment
        opp_factors = {
            'pts': context.opp_pts_factor,
            'reb': context.opp_reb_factor,
            'ast': context.opp_ast_factor,
            'stl': context.opp_stl_factor,
            'blk': context.opp_blk_factor,
            'fg3': context.opp_fg3_factor,
        }
        opp_adj = opp_factors.get(stat, 1.0)

        # Home/away adjustment
        home_adj = self.HOME_ADV.get(stat, 1.0) if context.is_home else 1.0

        # Usage redistribution for missing teammates
        if stat == 'pts' and context.missing_teammates_usage > 0:
            # Player absorbs a fraction of missing usage proportional
            # to their own usage
            absorbed_usage = (context.missing_teammates_usage *
                              (player.usage_rate / 0.80))
            usage_boost = 1.0 + absorbed_usage * 0.5
        else:
            usage_boost = 1.0

        # Minutes projection
        exp_minutes, min_std = self.project_minutes(player, context)

        # Projected stat
        adjusted_rate = (per_min_rate * pace_factor * opp_adj *
                         home_adj * usage_boost)
        projected = adjusted_rate * exp_minutes

        # Standard deviation (combines rate variance and minutes variance)
        # Var(rate * minutes) = E[min]^2 * Var(rate) + E[rate]^2 * Var(min)
        rate_variance = (per_min_std * pace_factor * opp_adj) ** 2
        variance = (
            exp_minutes ** 2 * rate_variance +
            adjusted_rate ** 2 * min_std ** 2
        )
        std_dev = np.sqrt(variance)

        return projected, std_dev

    def project_all_stats(
        self, player: PlayerProfile, context: GameContext
    ) -> Dict[str, Tuple[float, float]]:
        """Project all stats for a player in a game."""
        stats = ['pts', 'reb', 'ast', 'stl', 'blk', 'fg3']
        projections = {}
        for stat in stats:
            proj, std = self.project_stat(player, context, stat)
            projections[stat] = (round(proj, 1), round(std, 1))

        # Derived stats
        pts, pts_std = projections['pts']
        reb, reb_std = projections['reb']
        ast, ast_std = projections['ast']

        # Points + Rebounds + Assists
        pra = pts + reb + ast
        pra_std = np.sqrt(pts_std**2 + reb_std**2 + ast_std**2)
        projections['pts_reb_ast'] = (round(pra, 1), round(pra_std, 1))

        # Points + Rebounds
        pr = pts + reb
        pr_std = np.sqrt(pts_std**2 + reb_std**2)
        projections['pts_reb'] = (round(pr, 1), round(pr_std, 1))

        # Points + Assists
        pa = pts + ast
        pa_std = np.sqrt(pts_std**2 + ast_std**2)
        projections['pts_ast'] = (round(pa, 1), round(pa_std, 1))

        # Rebounds + Assists
        ra = reb + ast
        ra_std = np.sqrt(reb_std**2 + ast_std**2)
        projections['reb_ast'] = (round(ra, 1), round(ra_std, 1))

        return projections

    def evaluate_prop(
        self,
        player: PlayerProfile,
        context: GameContext,
        stat: str,
        line: float,
        over_odds: float,
        under_odds: float,
    ) -> Dict:
        """
        Evaluate a specific prop bet.

        Args:
            player: Player profile
            context: Game context
            stat: Stat category
            line: The prop line (e.g., 24.5 for points)
            over_odds: Decimal odds for over
            under_odds: Decimal odds for under

        Returns:
            Dictionary with projection, edge, and recommendation
        """
        proj, std = self.project_stat(player, context, stat)

        if std <= 0:
            return {'error': 'Unable to project this stat'}

        # Probability of over
        over_prob = 1.0 - norm.cdf(line, loc=proj, scale=std)
        under_prob = norm.cdf(line, loc=proj, scale=std)

        # For discrete stats on a whole-number line, adjust for pushes.
        # Approximate P(stat == line) by the normal density times a unit
        # width; half-point lines cannot push, so no adjustment is needed.
        if line == int(line):
            push_prob = norm.pdf(line, loc=proj, scale=std)
            over_prob = max(0, over_prob - push_prob / 2)
            under_prob = max(0, under_prob - push_prob / 2)

        # Expected value calculation
        over_implied = 1.0 / over_odds
        under_implied = 1.0 / under_odds

        over_edge = over_prob - over_implied
        under_edge = under_prob - under_implied

        over_ev = over_prob * (over_odds - 1) - (1 - over_prob)
        under_ev = under_prob * (under_odds - 1) - (1 - under_prob)

        # Recommendation
        if over_edge > under_edge and over_edge > 0.02:
            recommendation = 'OVER'
            best_edge = over_edge
            best_ev = over_ev
        elif under_edge > 0.02:
            recommendation = 'UNDER'
            best_edge = under_edge
            best_ev = under_ev
        else:
            recommendation = 'PASS'
            best_edge = max(over_edge, under_edge)
            best_ev = max(over_ev, under_ev)

        return {
            'player': player.name,
            'stat': stat,
            'line': line,
            'projection': round(proj, 1),
            'std_dev': round(std, 1),
            'over_prob': round(over_prob, 3),
            'under_prob': round(under_prob, 3),
            'over_edge': round(over_edge, 3),
            'under_edge': round(under_edge, 3),
            'over_ev_per_dollar': round(over_ev, 3),
            'under_ev_per_dollar': round(under_ev, 3),
            'recommendation': recommendation,
            'best_edge': round(best_edge, 3),
        }


# Demonstration
if __name__ == "__main__":
    # Create a player profile (e.g., a star NBA player)
    player = PlayerProfile(
        name="Jayson Tatum",
        team="BOS",
        position="SF",
        minutes_per_game=36.2,
        games_played=50,
        pts_per_min=0.74,    # ~26.8 ppg
        reb_per_min=0.23,    # ~8.3 rpg
        ast_per_min=0.13,    # ~4.7 apg
        stl_per_min=0.03,
        blk_per_min=0.02,
        fg3_per_min=0.09,    # ~3.3 per game
        pts_std_per_min=0.25,
        reb_std_per_min=0.10,
        ast_std_per_min=0.06,
        usage_rate=0.305,
        true_shooting_pct=0.598,
    )

    # Game context
    context = GameContext(
        opponent="LAL",
        is_home=True,
        rest_days=1,
        team_pace=99.5,
        opponent_pace=100.8,
        league_avg_pace=100.0,
        opp_pts_factor=1.05,   # LAL allows 5% more points than avg
        opp_reb_factor=0.98,
        opp_ast_factor=1.02,
        opp_fg3_factor=1.08,   # LAL poor at defending the three
        vegas_total=228.0,
        vegas_spread=-6.5,     # Boston favored by 6.5
    )

    projector = PlayerPropProjector()

    # Project all stats
    print("=" * 60)
    print(f"Projections: {player.name} vs {context.opponent}")
    print(f"{'Home' if context.is_home else 'Away'} | "
          f"Spread: {context.vegas_spread} | Total: {context.vegas_total}")
    print("=" * 60)

    projections = projector.project_all_stats(player, context)
    for stat, (proj, std) in projections.items():
        print(f"  {stat:>12}: {proj:6.1f} +/- {std:5.1f}")

    # Evaluate specific props
    print("\n" + "=" * 60)
    print("Prop Evaluations:")
    print("=" * 60)

    props_to_evaluate = [
        ('pts', 26.5, 1.909, 1.909),
        ('reb', 8.5, 1.909, 1.909),
        ('ast', 4.5, 1.833, 2.000),
        ('fg3', 2.5, 1.714, 2.150),
        ('pts_reb_ast', 39.5, 1.909, 1.909),
    ]

    for stat, line, over_odds, under_odds in props_to_evaluate:
        result = projector.evaluate_prop(
            player, context, stat, line, over_odds, under_odds
        )
        print(f"\n  {result['player']} {stat.upper()} {line}")
        print(f"    Projection: {result['projection']} +/- {result['std_dev']}")
        print(f"    Over Prob: {result['over_prob']:.1%} | "
              f"Under Prob: {result['under_prob']:.1%}")
        print(f"    Over Edge: {result['over_edge']:+.1%} | "
              f"Under Edge: {result['under_edge']:+.1%}")
        print(f"    Recommendation: {result['recommendation']} "
              f"(Edge: {result['best_edge']:+.1%})")

34.2 Correlation in Same-Game Parlays

Understanding Correlated Outcomes

Same-game parlays (SGPs) combine multiple selections from the same game into a single parlay wager. The critical analytical challenge is that outcomes within the same game are often correlated, while most sportsbooks price SGP legs as though they are independent or apply a crude correlation adjustment.

Positive correlation means the outcomes tend to occur together. Examples:

  • A player scoring more points AND the team winning (stars score more in wins)
  • A high game total AND both teams scoring above their individual team totals
  • A quarterback throwing for many yards AND the team covering the spread

Negative correlation means the outcomes tend not to occur together. Examples:

  • One team covering AND the other team's quarterback having a great game
  • A running back rushing for many yards AND the game going under the total (clock control)
  • A defensive player recording many tackles AND the opposing offense being effective

The true price of a same-game parlay depends on the correlation structure:

$$P(A \cap B) = P(A) \times P(B|A) \neq P(A) \times P(B) \text{ when A and B are correlated}$$

For a parlay with $n$ correlated legs:

$$P(\text{all legs win}) = P(L_1) \times P(L_2|L_1) \times P(L_3|L_1 \cap L_2) \times \cdots$$
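A minimal two-leg example makes the chain rule concrete; the conditional probability below is assumed for illustration:

```python
# Hypothetical two-leg SGP: team moneyline + star player points over.
p_win = 0.60             # P(team wins)
p_over = 0.55            # unconditional P(player goes over)
p_over_given_win = 0.62  # P(over | win) -- assumed, reflecting
                         # positive correlation between the legs

independent = p_win * p_over           # what a naive pricer assumes
correlated = p_win * p_over_given_win  # true joint probability

print(f"Independent: {independent:.3f}")
print(f"Correlated:  {correlated:.3f}")
```

Because the conditional probability exceeds the marginal, the true joint probability is higher than the independence assumption implies -- exactly the gap a correlation-aware bettor looks to exploit.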

When SGPs Have Positive Expected Value

SGPs can have positive expected value when:

  1. The book underestimates positive correlation: If two outcomes are positively correlated but the book prices them closer to independent, the true probability of both occurring is higher than the implied parlay probability. This means the parlay pays at odds that exceed the true risk.

  2. The book overestimates negative correlation: If the book applies too large a discount for negative correlation, the parlay price is too generous.

  3. Individual legs have positive EV: Even without correlation edge, if individual legs are mispriced, the SGP can amplify the edge (though it also amplifies variance).

  4. Structural pricing limitations: Some SGP pricing engines use simplified correlation models that fail to capture complex multivariate dependencies.

Correlation Matrix for Game Outcomes

Understanding the correlation structure requires empirical analysis. Here is a representative correlation matrix for NBA game outcomes:

                Team Win   Over Total   Player Pts O   Player Reb O   Player Ast O
Team Win          1.00       0.10          0.25           0.08           0.15
Over Total        0.10       1.00          0.35           0.20           0.25
Player Pts O      0.25       0.35          1.00           0.15           0.20
Player Reb O      0.08       0.20          0.15           1.00           0.05
Player Ast O      0.15       0.25          0.20           0.05           1.00

These correlations vary significantly by player type, game context, and sport. A center's rebounds might correlate differently with team wins than a guard's assists.

Python Code for Correlation Analysis

import numpy as np
from scipy.stats import multivariate_normal, norm
from itertools import combinations
from typing import Dict, List, Tuple


class SGPCorrelationAnalyzer:
    """
    Analyzes correlations between same-game parlay legs
    and identifies mispriced SGPs.
    """

    def __init__(self):
        # Empirical correlation matrix (from historical data analysis)
        # Categories: team_win, game_total_over, player_pts, player_reb,
        #             player_ast, player_3pm, opp_player_pts
        self.base_correlations = {
            ('team_win', 'game_total_over'): 0.08,
            ('team_win', 'player_pts'): 0.22,
            ('team_win', 'player_reb'): 0.06,
            ('team_win', 'player_ast'): 0.14,
            ('team_win', 'player_3pm'): 0.18,
            ('team_win', 'opp_player_pts'): -0.20,
            ('game_total_over', 'player_pts'): 0.32,
            ('game_total_over', 'player_reb'): 0.18,
            ('game_total_over', 'player_ast'): 0.22,
            ('game_total_over', 'player_3pm'): 0.25,
            ('game_total_over', 'opp_player_pts'): 0.30,
            ('player_pts', 'player_reb'): 0.12,
            ('player_pts', 'player_ast'): 0.18,
            ('player_pts', 'player_3pm'): 0.55,
            ('player_pts', 'opp_player_pts'): 0.10,
            ('player_reb', 'player_ast'): 0.04,
            ('player_reb', 'opp_player_pts'): 0.05,
            ('player_ast', 'opp_player_pts'): 0.08,
            ('player_3pm', 'player_reb'): -0.05,
            ('player_3pm', 'player_ast'): 0.15,
        }

    def get_correlation(self, leg_type_1: str, leg_type_2: str) -> float:
        """Get the correlation between two leg types."""
        key = (leg_type_1, leg_type_2)
        reverse_key = (leg_type_2, leg_type_1)

        if key in self.base_correlations:
            return self.base_correlations[key]
        elif reverse_key in self.base_correlations:
            return self.base_correlations[reverse_key]
        else:
            return 0.0  # Assume independent if no data

    def build_correlation_matrix(
        self, leg_types: List[str]
    ) -> np.ndarray:
        """Build a full correlation matrix for a set of SGP legs."""
        n = len(leg_types)
        corr_matrix = np.eye(n)

        for i in range(n):
            for j in range(i + 1, n):
                rho = self.get_correlation(leg_types[i], leg_types[j])
                corr_matrix[i, j] = rho
                corr_matrix[j, i] = rho

        # Ensure positive semi-definite
        eigenvalues = np.linalg.eigvalsh(corr_matrix)
        if np.any(eigenvalues < -1e-10):
            # Fix by nearest PSD matrix
            corr_matrix = self._nearest_psd(corr_matrix)

        return corr_matrix

    def _nearest_psd(self, matrix: np.ndarray) -> np.ndarray:
        """Find the nearest positive semi-definite matrix."""
        eigenvalues, eigenvectors = np.linalg.eigh(matrix)
        eigenvalues = np.maximum(eigenvalues, 1e-8)
        result = eigenvectors @ np.diag(eigenvalues) @ eigenvectors.T
        # Re-normalize diagonal to 1
        d = np.sqrt(np.diag(result))
        result = result / np.outer(d, d)
        return result

    def calculate_true_parlay_probability(
        self,
        leg_probs: List[float],
        leg_types: List[str],
        n_simulations: int = 500000,
    ) -> Tuple[float, float]:
        """
        Calculate the true parlay probability accounting for correlations.

        Uses Monte Carlo simulation with a multivariate normal copula
        to model the correlated outcomes.

        Args:
            leg_probs: Marginal probability of each leg winning
            leg_types: Type category for each leg (for correlation lookup)
            n_simulations: Number of Monte Carlo simulations

        Returns:
            Tuple of (correlated_probability, independent_probability)
        """
        n_legs = len(leg_probs)
        assert len(leg_types) == n_legs

        # Build correlation matrix
        corr_matrix = self.build_correlation_matrix(leg_types)

        # Simulate using Gaussian copula
        # 1. Generate correlated standard normal variates
        samples = multivariate_normal.rvs(
            mean=np.zeros(n_legs),
            cov=corr_matrix,
            size=n_simulations,
        )

        # 2. Convert to uniform via CDF
        uniform_samples = norm.cdf(samples)

        # 3. Each leg wins if uniform sample < leg probability
        wins = np.zeros((n_simulations, n_legs), dtype=bool)
        for i in range(n_legs):
            wins[:, i] = uniform_samples[:, i] < leg_probs[i]

        # 4. Parlay wins if all legs win
        all_win = np.all(wins, axis=1)
        correlated_prob = np.mean(all_win)

        # Independent probability (no correlation)
        independent_prob = np.prod(leg_probs)

        return correlated_prob, independent_prob

    def evaluate_sgp(
        self,
        legs: List[Dict],
        parlay_odds: float,
        n_simulations: int = 500000,
    ) -> Dict:
        """
        Evaluate a same-game parlay for expected value.

        Args:
            legs: List of dicts with keys 'prob', 'type', 'description'
            parlay_odds: Decimal odds offered for the parlay
            n_simulations: Monte Carlo simulations

        Returns:
            Evaluation results
        """
        leg_probs = [leg['prob'] for leg in legs]
        leg_types = [leg['type'] for leg in legs]

        corr_prob, indep_prob = self.calculate_true_parlay_probability(
            leg_probs, leg_types, n_simulations
        )

        # Implied probability from odds
        implied_prob = 1.0 / parlay_odds

        # Expected value
        ev = corr_prob * (parlay_odds - 1) - (1 - corr_prob)
        edge = corr_prob - implied_prob

        # Correlation impact
        correlation_boost = corr_prob - indep_prob

        return {
            'legs': [leg.get('description', '') for leg in legs],
            'individual_probs': [round(p, 3) for p in leg_probs],
            'independent_parlay_prob': round(indep_prob, 4),
            'correlated_parlay_prob': round(corr_prob, 4),
            'correlation_boost': round(correlation_boost, 4),
            'parlay_odds': parlay_odds,
            'implied_prob': round(implied_prob, 4),
            'edge': round(edge, 4),
            'ev_per_dollar': round(ev, 4),
            'recommendation': 'BET' if edge > 0.02 else 'PASS',
        }


# Demonstration
if __name__ == "__main__":
    analyzer = SGPCorrelationAnalyzer()

    # Example SGP: Celtics win + Tatum over points + game over total
    legs = [
        {
            'prob': 0.68,
            'type': 'team_win',
            'description': 'Celtics ML',
        },
        {
            'prob': 0.52,
            'type': 'player_pts',
            'description': 'Tatum Over 26.5 pts',
        },
        {
            'prob': 0.48,
            'type': 'game_total_over',
            'description': 'Over 228.0',
        },
    ]

    # The book offers +450 (5.50 decimal)
    result = analyzer.evaluate_sgp(legs, parlay_odds=5.50)

    print("Same-Game Parlay Analysis")
    print("=" * 50)
    for key, val in result.items():
        print(f"  {key}: {val}")

    # Compare positively correlated vs negatively correlated
    print("\n\nCorrelation Impact Examples:")
    print("=" * 50)

    # Positively correlated: Team win + star player over points
    pos_corr = analyzer.calculate_true_parlay_probability(
        [0.60, 0.55], ['team_win', 'player_pts']
    )
    print(f"\nTeam Win + Player Pts Over (positive correlation):")
    print(f"  Independent: {0.60*0.55:.4f}")
    print(f"  Correlated:  {pos_corr[0]:.4f}")
    print(f"  Boost:       {pos_corr[0] - 0.60*0.55:+.4f}")

    # Negatively correlated: Team win + opponent player over points
    neg_corr = analyzer.calculate_true_parlay_probability(
        [0.60, 0.55], ['team_win', 'opp_player_pts']
    )
    print(f"\nTeam Win + Opp Player Pts Over (negative correlation):")
    print(f"  Independent: {0.60*0.55:.4f}")
    print(f"  Correlated:  {neg_corr[0]:.4f}")
    print(f"  Boost:       {neg_corr[0] - 0.60*0.55:+.4f}")

The key insight from this analysis is that positively correlated legs make the parlay more likely than the independent calculation suggests. If the sportsbook prices the SGP using an independent (or insufficiently correlated) model, the bettor captures the difference as edge.


34.3 Building Player Projection Systems

A complete player projection system goes beyond single-stat estimates to produce a coherent, multivariate projection for each player in each game. This section describes the full architecture.

Minutes and Touches Projection

The foundation of any projection system is the playing time model. Minutes are the great multiplier -- all counting stats scale roughly linearly with minutes played.

A robust minutes model should account for:

  1. Baseline minutes: The player's typical workload over the last 20-30 games
  2. Injury/rest status: Players listed as "probable" or coming off rest may see adjusted minutes
  3. Game script: Expected blowouts reduce star minutes; competitive games extend them
  4. Foul trouble risk: Players with high foul rates have more minutes variance
  5. Coaching tendencies: Some coaches have rigid rotation patterns; others adjust heavily

For football, the analogous concept is "opportunity share" -- the percentage of team plays in which a player is targeted/carries. For baseball, it is plate appearances or batters faced.
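A minimal opportunity-share calculation for football might look like this (the inputs are hypothetical):

```python
def opportunity_share(targets, carries, team_plays):
    """Fraction of team plays on which the player gets a target or carry.

    This is the football analogue of a minutes projection: all counting
    stats scale with it. Inputs here are hypothetical.
    """
    return (targets + carries) / team_plays

# A pass-catching back with 8 targets and 4 carries on 65 team plays.
share = opportunity_share(targets=8, carries=4, team_plays=65)
```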

Stat Rate Models

Once we have a minutes (or opportunities) projection, we model the per-unit-time production rates:

Simple approach: Use a weighted average of recent per-minute rates, with exponential decay weighting:

$$\text{Rate}_t = \frac{\sum_{i=1}^{n} w_i \cdot r_i}{\sum_{i=1}^{n} w_i}$$

Where $w_i = \lambda^{n-i}$ and $\lambda \approx 0.95$ (more weight on recent games).

Advanced approach: Use a Bayesian hierarchical model that pools information across players at the same position to produce more stable estimates, especially for players with limited game samples.
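A small sketch combining the decay-weighted rate above with pseudo-count shrinkage toward a league-average prior; the prior weight of five games and the sample rates are illustrative assumptions:

```python
import numpy as np

def shrunk_rate(per_min_rates, league_avg, decay=0.95, prior_games=5):
    """Decay-weighted per-minute rate shrunk toward a league-average prior.

    per_min_rates is ordered oldest to newest; prior_games is the
    pseudo-sample weight placed on the prior (an assumed choice).
    """
    r = np.asarray(per_min_rates, dtype=float)
    w = decay ** np.arange(len(r))[::-1]      # recent games weighted most
    observed = np.average(r, weights=w)
    n_eff = w.sum()
    # Pseudo-count shrinkage: more effective data, less pull to the prior
    return (n_eff * observed + prior_games * league_avg) / (n_eff + prior_games)

# A scorer running hot over 8 games is pulled partway toward league average
rates = [0.70, 0.75, 0.72, 0.80, 0.78, 0.74, 0.82, 0.77]
print(round(shrunk_rate(rates, league_avg=0.48), 3))
```

With only eight games of data, the estimate lands well below the raw weighted average -- exactly the stabilizing behavior the hierarchical approach formalizes.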

Game Script Impact

Game script -- the expected flow and score trajectory of the game -- significantly affects player projections:

Leading teams: Tend to run the ball more (football), control pace (basketball), and protect leads. This reduces counting stats for skill players but increases running back touches.

Trailing teams: Tend to pass more (football), play faster (basketball), and take risks. This increases passing stats but may reduce efficiency.

Close games: Starters play more minutes, and the highest-usage players see the most opportunities.

Blowouts: Starters sit early, and bench players see extended run. This is one of the biggest risks in player props -- a star player can have a great first three quarters and then sit the fourth, falling short of a prop line.

The Vegas spread and total provide direct estimates of expected game script:

  • Large favorite + high total = expected to lead comfortably, likely to rest starters
  • Small favorite + high total = competitive, high-scoring game, good for counting stats
  • Underdog + low total = grind-it-out game, limited opportunities
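The bullets above can be collapsed into a crude classifier; every threshold below is an illustrative assumption, not a calibrated boundary:

```python
def classify_game_script(spread, total,
                         big_spread=9.5, high_total=230.0, low_total=218.0):
    """Bucket a game by its Vegas spread and total (thresholds assumed)."""
    big_fav = abs(spread) >= big_spread
    if big_fav and total >= high_total:
        return "likely lead, blowout/rest risk"
    if not big_fav and total >= high_total:
        return "competitive shootout: good for counting stats"
    if total <= low_total:
        return "grind-it-out: limited opportunities"
    return "neutral"

print(classify_game_script(-12.5, 233.0))
print(classify_game_script(-2.5, 234.0))
print(classify_game_script(4.5, 215.0))
```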

Python Code for a Complete Player Prop Projection System

import numpy as np
import pandas as pd
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple
from scipy.stats import norm, pearsonr
from scipy.optimize import minimize


@dataclass
class GameLog:
    """Single game log entry for a player."""
    date: str
    opponent: str
    is_home: bool
    minutes: float
    pts: int
    reb: int
    ast: int
    stl: int
    blk: int
    tov: int
    fg3m: int
    fga: int
    fgm: int
    fta: int
    ftm: int
    team_score: int
    opp_score: int
    pace: float
    usage_rate: float


@dataclass
class TeamDefense:
    """Opponent defensive profile by position."""
    team: str
    # Points allowed by position (as multiplier vs league avg)
    pts_allowed_pg: float = 1.0
    pts_allowed_sg: float = 1.0
    pts_allowed_sf: float = 1.0
    pts_allowed_pf: float = 1.0
    pts_allowed_c: float = 1.0
    # Rebounding allowed
    reb_allowed_guards: float = 1.0
    reb_allowed_forwards: float = 1.0
    reb_allowed_centers: float = 1.0
    # Pace factor
    pace: float = 100.0
    # Defensive rating
    def_rating: float = 110.0


class ComprehensiveProjectionSystem:
    """
    Complete player prop projection system.

    Produces correlated multivariate projections using:
    1. Recency-weighted per-minute rates
    2. Opponent-specific adjustments
    3. Game environment adjustments
    4. Bayesian stabilization for small samples
    5. Correlation-aware uncertainty estimates
    """

    STAT_COLUMNS = ['pts', 'reb', 'ast', 'stl', 'blk', 'fg3m', 'tov']
    POSITION_MAP = {
        'PG': 'pg', 'SG': 'sg', 'SF': 'sf', 'PF': 'pf', 'C': 'c'
    }

    # League-average per-minute rates (priors for Bayesian stabilization)
    LEAGUE_AVG_RATES = {
        'pts': 0.48, 'reb': 0.19, 'ast': 0.10,
        'stl': 0.03, 'blk': 0.02, 'fg3m': 0.05, 'tov': 0.06,
    }

    # Prior weight (equivalent games of league average data)
    PRIOR_WEIGHT = 5

    def __init__(self):
        self.player_logs: Dict[str, List[GameLog]] = {}
        self.team_defenses: Dict[str, TeamDefense] = {}

    def add_game_log(self, player_id: str, log: GameLog):
        """Add a game log entry for a player."""
        if player_id not in self.player_logs:
            self.player_logs[player_id] = []
        self.player_logs[player_id].append(log)

    def set_team_defense(self, defense: TeamDefense):
        """Set defensive profile for a team."""
        self.team_defenses[defense.team] = defense

    def _get_recency_weights(
        self, n_games: int, decay: float = 0.95
    ) -> np.ndarray:
        """Generate exponential decay weights for recent games."""
        weights = np.array([decay ** i for i in range(n_games)])
        return weights[::-1]  # Most recent game gets highest weight

    def _calculate_stabilized_rate(
        self,
        observed_rates: np.ndarray,
        weights: np.ndarray,
        stat: str,
    ) -> Tuple[float, float]:
        """
        Calculate Bayesian-stabilized per-minute rate.

        Combines observed data with a league-average prior to produce
        more stable estimates, especially important for low-sample stats.

        Returns:
            Tuple of (stabilized_rate, standard_error)
        """
        prior_rate = self.LEAGUE_AVG_RATES.get(stat, 0.05)
        prior_weight = self.PRIOR_WEIGHT

        # Weighted observed rate
        total_weight = np.sum(weights)
        if total_weight == 0:
            return prior_rate, prior_rate * 0.5

        observed_mean = np.average(observed_rates, weights=weights)
        observed_var = np.average(
            (observed_rates - observed_mean) ** 2, weights=weights
        )
        observed_std = np.sqrt(observed_var) if observed_var > 0 else 0.01

        # Bayesian update (conjugate normal)
        prior_precision = prior_weight / (prior_rate * 0.3) ** 2
        obs_precision = total_weight / max(observed_std ** 2, 1e-6)

        post_precision = prior_precision + obs_precision
        post_mean = (
            (prior_precision * prior_rate + obs_precision * observed_mean)
            / post_precision
        )
        post_std = 1.0 / np.sqrt(post_precision)

        return post_mean, post_std

    def project_player(
        self,
        player_id: str,
        position: str,
        opponent: str,
        is_home: bool,
        vegas_total: float,
        vegas_spread: float,
        rest_days: int = 1,
        n_recent_games: int = 20,
        missing_teammates_usage: float = 0.0,
    ) -> Dict:
        """
        Generate a complete multivariate projection for a player.

        Returns projections for all stat categories with means,
        standard deviations, and correlation structure.
        """
        logs = self.player_logs.get(player_id, [])
        if len(logs) < 3:
            return {'error': f'Insufficient data for {player_id}'}

        # Use most recent N games
        recent_logs = logs[-n_recent_games:]
        n = len(recent_logs)
        weights = self._get_recency_weights(n)

        # Step 1: Project minutes
        minutes_array = np.array([g.minutes for g in recent_logs])
        base_minutes = np.average(minutes_array, weights=weights)
        minutes_std = np.sqrt(np.average(
            (minutes_array - base_minutes) ** 2, weights=weights
        ))

        # Adjust minutes for context
        rest_adj = {0: 0.96, 1: 1.0, 2: 1.01, 3: 1.02}.get(
            min(rest_days, 3), 1.0
        )
        spread_abs = abs(vegas_spread)
        blowout_adj = max(0.90, 1.0 - max(0, spread_abs - 10) * 0.005)
        usage_boost_minutes = min(missing_teammates_usage * 3.0, 3.0)

        proj_minutes = base_minutes * rest_adj * blowout_adj + usage_boost_minutes
        proj_minutes = np.clip(proj_minutes, 0, 42)

        # Step 2: Calculate per-minute rates for each stat
        rates = {}
        rate_stds = {}

        for stat in self.STAT_COLUMNS:
            per_min_rates = np.array([
                getattr(g, stat) / max(g.minutes, 1)
                for g in recent_logs
            ])
            rate, rate_se = self._calculate_stabilized_rate(
                per_min_rates, weights, stat
            )
            rates[stat] = rate
            rate_stds[stat] = rate_se

        # Step 3: Apply opponent adjustments
        defense = self.team_defenses.get(opponent)
        opp_factors = {}
        if defense:
            pos_key = self.POSITION_MAP.get(position, 'sf')

            pts_factor = getattr(defense, f'pts_allowed_{pos_key}', 1.0)
            opp_factors['pts'] = pts_factor

            if position in ('PG', 'SG'):
                opp_factors['reb'] = defense.reb_allowed_guards
            elif position in ('PF', 'C'):
                opp_factors['reb'] = defense.reb_allowed_centers
            else:
                opp_factors['reb'] = defense.reb_allowed_forwards

            # Pace adjustment
            team_pace = np.mean([g.pace for g in recent_logs[-10:]])
            expected_pace = (team_pace + defense.pace) / 2
            pace_adj = expected_pace / 100.0
        else:
            opp_factors = {stat: 1.0 for stat in self.STAT_COLUMNS}
            pace_adj = 1.0

        # Vegas-implied pace
        vegas_pace_adj = vegas_total / 220.0
        combined_pace_adj = 0.6 * pace_adj + 0.4 * vegas_pace_adj

        # Step 4: Generate projections
        projections = {}
        for stat in self.STAT_COLUMNS:
            opp_adj = opp_factors.get(stat, 1.0)
            home_adj = 1.015 if is_home else 1.0

            # Usage redistribution
            if stat == 'pts' and missing_teammates_usage > 0:
                avg_usage = np.mean([g.usage_rate for g in recent_logs[-10:]])
                usage_adj = 1.0 + missing_teammates_usage * (avg_usage / 0.80) * 0.5
            else:
                usage_adj = 1.0

            adjusted_rate = (
                rates[stat] * combined_pace_adj * opp_adj * home_adj * usage_adj
            )
            proj_value = adjusted_rate * proj_minutes

            # Standard deviation combines rate uncertainty and minutes uncertainty
            rate_var = (rate_stds[stat] * combined_pace_adj * opp_adj) ** 2
            var = (
                proj_minutes ** 2 * rate_var +
                adjusted_rate ** 2 * minutes_std ** 2
            )
            proj_std = np.sqrt(var)

            projections[stat] = {
                'mean': round(proj_value, 1),
                'std': round(proj_std, 1),
                'rate_per_min': round(adjusted_rate, 4),
            }

        # Step 5: Calculate correlation matrix between stats
        stat_game_values = {}
        for stat in self.STAT_COLUMNS:
            stat_game_values[stat] = np.array([
                getattr(g, stat) for g in recent_logs
            ])

        corr_matrix = np.eye(len(self.STAT_COLUMNS))
        if n >= 5:
            for i, stat_i in enumerate(self.STAT_COLUMNS):
                for j in range(i + 1, len(self.STAT_COLUMNS)):
                    stat_j = self.STAT_COLUMNS[j]
                    x = stat_game_values[stat_i]
                    y = stat_game_values[stat_j]
                    # pearsonr is undefined for constant inputs; leave r = 0
                    if np.std(x) > 0 and np.std(y) > 0:
                        r, _ = pearsonr(x, y)
                        corr_matrix[i, j] = r
                        corr_matrix[j, i] = r

        # Step 6: Derived combination projections
        combo_projections = {}
        stat_idx = {s: i for i, s in enumerate(self.STAT_COLUMNS)}

        combos = {
            'pts_reb_ast': ['pts', 'reb', 'ast'],
            'pts_reb': ['pts', 'reb'],
            'pts_ast': ['pts', 'ast'],
            'reb_ast': ['reb', 'ast'],
        }

        for combo_name, stats in combos.items():
            combo_mean = sum(projections[s]['mean'] for s in stats)

            # Variance with correlations
            combo_var = 0
            for s in stats:
                combo_var += projections[s]['std'] ** 2
            for k in range(len(stats)):
                for m in range(k + 1, len(stats)):
                    idx_k = stat_idx[stats[k]]
                    idx_m = stat_idx[stats[m]]
                    cov = (corr_matrix[idx_k, idx_m] *
                           projections[stats[k]]['std'] *
                           projections[stats[m]]['std'])
                    combo_var += 2 * cov

            combo_std = np.sqrt(max(combo_var, 0))
            combo_projections[combo_name] = {
                'mean': round(combo_mean, 1),
                'std': round(combo_std, 1),
            }

        return {
            'player_id': player_id,
            'minutes': {
                'mean': round(proj_minutes, 1),
                'std': round(minutes_std, 1),
            },
            'stats': projections,
            'combos': combo_projections,
            'correlation_matrix': corr_matrix.tolist(),
            'stat_order': self.STAT_COLUMNS,
            'adjustments': {
                'pace': round(combined_pace_adj, 3),
                'rest': rest_adj,
                'blowout': round(blowout_adj, 3),
            },
        }


def evaluate_prop_line(
    projection: Dict, stat: str, line: float,
    over_odds: float = 1.909, under_odds: float = 1.909,
) -> Dict:
    """Evaluate a prop line against a projection."""
    if stat in projection.get('stats', {}):
        mean = projection['stats'][stat]['mean']
        std = projection['stats'][stat]['std']
    elif stat in projection.get('combos', {}):
        mean = projection['combos'][stat]['mean']
        std = projection['combos'][stat]['std']
    else:
        return {'error': f'Unknown stat: {stat}'}

    if std <= 0:
        return {'error': 'Zero standard deviation'}

    over_prob = 1.0 - norm.cdf(line, loc=mean, scale=std)
    under_prob = norm.cdf(line, loc=mean, scale=std)

    over_implied = 1.0 / over_odds
    under_implied = 1.0 / under_odds

    over_edge = over_prob - over_implied
    under_edge = under_prob - under_implied

    if over_edge > under_edge and over_edge > 0:
        best_side = 'OVER'
        best_edge = over_edge
    elif under_edge > 0:
        best_side = 'UNDER'
        best_edge = under_edge
    else:
        best_side = 'PASS'
        best_edge = max(over_edge, under_edge)

    return {
        'stat': stat,
        'line': line,
        'projection': mean,
        'std': std,
        'over_prob': round(over_prob, 3),
        'under_prob': round(under_prob, 3),
        'over_edge': round(over_edge, 3),
        'under_edge': round(under_edge, 3),
        'best_side': best_side,
        'best_edge': round(best_edge, 3),
    }


# Demonstration with synthetic data
if __name__ == "__main__":
    np.random.seed(42)

    system = ComprehensiveProjectionSystem()

    # Generate synthetic game logs for a star player
    for i in range(30):
        minutes = np.random.normal(36, 3)
        minutes = np.clip(minutes, 20, 42)
        pts_rate = np.random.normal(0.74, 0.15)
        reb_rate = np.random.normal(0.23, 0.06)
        ast_rate = np.random.normal(0.13, 0.04)

        log = GameLog(
            date=f"2026-01-{i+1:02d}",
            opponent=np.random.choice(["LAL", "MIA", "GSW", "MIL", "PHX"]),
            is_home=np.random.random() > 0.5,
            minutes=round(minutes, 1),
            pts=int(max(0, pts_rate * minutes + np.random.normal(0, 3))),
            reb=int(max(0, reb_rate * minutes + np.random.normal(0, 2))),
            ast=int(max(0, ast_rate * minutes + np.random.normal(0, 1.5))),
            stl=int(max(0, np.random.poisson(1.2))),
            blk=int(max(0, np.random.poisson(0.7))),
            tov=int(max(0, np.random.poisson(2.5))),
            fg3m=int(max(0, np.random.poisson(3.0))),
            fga=int(max(0, np.random.normal(20, 3))),
            fgm=int(max(0, np.random.normal(10, 2))),
            fta=int(max(0, np.random.normal(6, 2))),
            ftm=int(max(0, np.random.normal(5, 2))),
            team_score=int(np.random.normal(112, 10)),
            opp_score=int(np.random.normal(108, 10)),
            pace=np.random.normal(100, 3),
            usage_rate=np.random.normal(0.30, 0.03),
        )
        system.add_game_log("tatum_01", log)

    # Set opponent defense
    system.set_team_defense(TeamDefense(
        team="LAL",
        pts_allowed_sf=1.06,
        reb_allowed_forwards=0.98,
        pace=101.5,
        def_rating=112.0,
    ))

    # Generate projection
    projection = system.project_player(
        player_id="tatum_01",
        position="SF",
        opponent="LAL",
        is_home=True,
        vegas_total=228.0,
        vegas_spread=-6.5,
        rest_days=1,
    )

    print("PLAYER PROJECTION")
    print("=" * 50)
    print(f"Minutes: {projection['minutes']['mean']} "
          f"+/- {projection['minutes']['std']}")
    print(f"\nAdjustments: {projection['adjustments']}")
    print(f"\nStat Projections:")
    for stat, vals in projection['stats'].items():
        print(f"  {stat:>6}: {vals['mean']:6.1f} +/- {vals['std']:5.1f} "
              f"(rate: {vals['rate_per_min']:.4f}/min)")
    print(f"\nCombo Projections:")
    for combo, vals in projection['combos'].items():
        print(f"  {combo:>12}: {vals['mean']:6.1f} +/- {vals['std']:5.1f}")

    # Evaluate props
    print(f"\nProp Evaluations:")
    print("-" * 50)
    props = [
        ('pts', 26.5), ('reb', 8.5), ('ast', 4.5),
        ('fg3m', 2.5), ('pts_reb_ast', 39.5),
    ]
    for stat, line in props:
        result = evaluate_prop_line(projection, stat, line)
        print(f"  {stat:>12} {line:>5.1f}: "
              f"proj={result['projection']:5.1f}, "
              f"O={result['over_prob']:.1%}, U={result['under_prob']:.1%}, "
              f"Best: {result['best_side']} ({result['best_edge']:+.1%})")



34.4 Prop Market Inefficiencies

Player prop markets, despite growing sophistication, contain persistent inefficiencies that systematic bettors can exploit. Understanding where and why these inefficiencies arise is as important as the technical modeling itself.

Alternates vs. Standard Lines

Most sportsbooks offer "alternate" prop lines in addition to the standard line. For example, if the standard points line for a player is 24.5 at -110/-110, the book might also offer:

  • Over 20.5 at -250
  • Over 22.5 at -155
  • Over 26.5 at +120
  • Over 28.5 at +175
  • Over 30.5 at +280

These alternate lines are typically derived from the standard line using a fixed model. If the book's underlying distribution assumption is wrong -- particularly in the tails -- alternate lines can be significantly mispriced.
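For intuition, here is how a book working from a single normal model would generate fair (pre-margin) odds at each alternate line; the mean of 24.5 and sigma of 7.5 are assumed values for illustration:

```python
from scipy.stats import norm

# Assumed book model behind a standard 24.5-point line
mu, sigma = 24.5, 7.5
for line in (20.5, 22.5, 24.5, 26.5, 28.5, 30.5):
    p_over = 1 - norm.cdf(line, mu, sigma)       # fair over probability
    print(f"over {line}: fair prob {p_over:.3f}, fair odds {1 / p_over:.2f}")
```

Every alternate inherits the same distributional assumption, so a single error in that assumption (wrong sigma, wrong tail shape) mispriced the entire ladder at once.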

Common mispricing patterns in alternates:

  1. Fat tails underpriced: Books often use normal distributions for player props, but actual stat distributions tend to have fatter tails (more extreme outcomes than a normal distribution predicts). This means high-value alternates (e.g., over 30.5 points for a player projected at 25) may be underpriced.

  2. Correlation with minutes ignored: Alternate over lines at high thresholds implicitly require the player to play heavy minutes. If there is a strong reason to expect above-average minutes (competitive game, no blowout risk), high alternates become more attractive.

  3. Ceiling games: Some players have a higher ceiling than their average suggests. A player who averages 22 points but has scored 35+ in 15% of games may present value at the +35.5 alternate even if the standard 22.5 line is efficiently priced.
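A quick illustration of point 1: fit a single normal to the mean and variance of a two-component mixture and compare tail probabilities at a high alternate line. The 85/15 split and component parameters are invented for illustration:

```python
from scipy.stats import norm

# Two-component model: 85% "normal" games, 15% ceiling games
mix_w, mu_lo, sd_lo, mu_hi, sd_hi = 0.15, 24.0, 5.0, 36.0, 4.0
mix_mean = (1 - mix_w) * mu_lo + mix_w * mu_hi
# Match the single-normal model to the mixture's mean and variance
mix_var = ((1 - mix_w) * (sd_lo**2 + mu_lo**2)
           + mix_w * (sd_hi**2 + mu_hi**2) - mix_mean**2)
normal_sd = mix_var ** 0.5

line = 36.5
p_normal = 1 - norm.cdf(line, mix_mean, normal_sd)
p_mix = ((1 - mix_w) * (1 - norm.cdf(line, mu_lo, sd_lo))
         + mix_w * (1 - norm.cdf(line, mu_hi, sd_hi)))
print(f"P(over {line}): normal model {p_normal:.3f}, mixture {p_mix:.3f}")
```

Both models share the same mean and variance, yet the mixture assigns noticeably more probability to the high alternate -- the gap a normal-model book gives away in the tail.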

Less-Liquid Props

The most liquid prop markets (star player points, rebounds, assists) tend to be the most efficiently priced because they attract the most betting action and the most attention from sharp bettors. Less-liquid markets often harbor larger inefficiencies:

Defensive stats (steals, blocks): Highly variable, harder to project, and less attention from the market. A player averaging 1.8 steals per game facing a turnover-prone opponent might present significant value on the steals over.

Role player props: The market pays less attention to the eighth man's rebounds than to the star's points. Role player lines often show less movement and less precision.

Cross-sport props: Baseball pitcher strikeout props, hockey assist props, and soccer shot props are all less liquid and less efficiently priced than mainstream football and basketball props.

New or returning players: When a player returns from injury or joins a new team, the market is slow to calibrate. The player's role and minutes may be uncertain, creating opportunities for bettors who can project the situation accurately.

Derivative Markets

Derivative markets -- props derived from other props or game outcomes -- can contain structural mispricings:

First basket / first touchdown scorer: These markets carry enormous margins (20-40%) and are generally poor value. However, specific players in specific situations can still present edge.

Double-double / triple-double props: These require hitting thresholds in multiple categories simultaneously and are sensitive to the correlation structure between stats. If the book's correlation model is inaccurate, these props can be mispriced.

Race to X points: These props depend on scoring rate and game flow dynamics. A fast-starting team facing a slow-starting opponent may present systematic value.
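To illustrate the correlation sensitivity of double-double props, a Monte Carlo sketch under an assumed bivariate normal for points and rebounds; the projections, standard deviations, and 0.35 correlation are illustrative, not estimates for any real player:

```python
import numpy as np

rng = np.random.default_rng(0)

def double_double_prob(rho, n=200_000):
    """P(pts >= 10 and reb >= 10) under a bivariate normal with corr rho."""
    mean = np.array([22.0, 9.5])             # points, rebounds projections
    std = np.array([6.5, 3.2])
    cov = np.array([[std[0]**2, rho * std[0] * std[1]],
                    [rho * std[0] * std[1], std[1]**2]])
    draws = rng.multivariate_normal(mean, cov, size=n)
    return float(np.mean((draws[:, 0] >= 10) & (draws[:, 1] >= 10)))

for rho in (0.0, 0.35):
    print(f"rho={rho:.2f}: P(double-double) ~ {double_double_prob(rho):.3f}")
```

Because both thresholds must be cleared simultaneously, positive correlation raises the joint probability above the independent product -- a book pricing from marginals alone will shade this prop wrong.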

Python Analysis of Prop Market Efficiency

import numpy as np
from scipy.stats import norm, poisson
from typing import Dict, List, Tuple
from dataclasses import dataclass


@dataclass
class PropLine:
    """A prop line from the market."""
    player: str
    stat: str
    line: float
    over_odds: float
    under_odds: float
    book: str


class PropMarketAnalyzer:
    """
    Analyzes prop market efficiency by comparing book lines
    to model projections and identifying systematic biases.
    """

    def __init__(self):
        self.analysis_results: List[Dict] = []

    def implied_probability(self, decimal_odds: float) -> float:
        """Convert decimal odds to implied probability."""
        return 1.0 / decimal_odds if decimal_odds > 0 else 0.0

    def calculate_vig(self, over_odds: float, under_odds: float) -> float:
        """Calculate the vig (margin) on a prop line."""
        total_implied = self.implied_probability(over_odds) + \
                        self.implied_probability(under_odds)
        return total_implied - 1.0

    def extract_book_projection(
        self, line: float, over_odds: float, under_odds: float,
        stat: str = '',
    ) -> Tuple[float, float]:
        """
        Extract the book's implied mean and standard deviation
        from a prop line and odds.

        Assumes the book uses a normal distribution model. A single
        line gives one equation in two unknowns (mu, sigma), so sigma
        must be supplied from typical per-game variability for the stat.
        """
        over_prob = self.implied_probability(over_odds)
        under_prob = self.implied_probability(under_odds)

        # Remove vig (proportional method)
        total = over_prob + under_prob
        fair_under = under_prob / total

        # Typical per-game standard deviations by stat category
        sigma_map = {
            'pts': 7.5, 'reb': 3.5, 'ast': 2.8,
            'stl': 1.1, 'blk': 1.0, 'fg3m': 1.5,
            'tov': 1.2, 'pts_reb_ast': 9.0,
        }
        sigma = sigma_map.get(stat, 7.0)

        # Under the normal model, fair_under = Phi((line - mu) / sigma),
        # so mu = line - Phi^{-1}(fair_under) * sigma. For -110/-110
        # props the z-score is ~0 and the line sits at the mean;
        # skewed odds shift the implied mean away from the line.
        z = norm.ppf(fair_under)
        implied_mean = line - z * sigma
        return implied_mean, sigma

    def find_alternate_value(
        self,
        projection_mean: float,
        projection_std: float,
        alternates: List[PropLine],
        stat: str,
    ) -> List[Dict]:
        """
        Find value in alternate prop lines.

        Compares model probability to implied probability for each
        alternate line.
        """
        opportunities = []

        for alt in alternates:
            if alt.stat != stat:
                continue

            # Model probability
            model_over = 1.0 - norm.cdf(alt.line, loc=projection_mean,
                                         scale=projection_std)
            model_under = norm.cdf(alt.line, loc=projection_mean,
                                    scale=projection_std)

            # Book implied
            book_over = self.implied_probability(alt.over_odds)
            book_under = self.implied_probability(alt.under_odds)
            total_impl = book_over + book_under
            fair_over = book_over / total_impl
            fair_under = book_under / total_impl

            over_edge = model_over - fair_over
            under_edge = model_under - fair_under

            vig = self.calculate_vig(alt.over_odds, alt.under_odds)

            best_side = 'OVER' if over_edge > under_edge else 'UNDER'
            best_edge = max(over_edge, under_edge)

            if best_edge > 0.02:  # Minimum 2% edge
                opportunities.append({
                    'player': alt.player,
                    'stat': stat,
                    'line': alt.line,
                    'book': alt.book,
                    'best_side': best_side,
                    'edge': round(best_edge, 3),
                    'model_prob': round(
                        model_over if best_side == 'OVER' else model_under, 3
                    ),
                    'fair_book_prob': round(
                        fair_over if best_side == 'OVER' else fair_under, 3
                    ),
                    'odds': alt.over_odds if best_side == 'OVER' else alt.under_odds,
                    'vig': round(vig, 3),
                    'ev_per_dollar': round(
                        (model_over * (alt.over_odds - 1) - (1 - model_over))
                        if best_side == 'OVER'
                        else (model_under * (alt.under_odds - 1) - (1 - model_under)),
                        3
                    ),
                })

        # Sort by edge
        opportunities.sort(key=lambda x: x['edge'], reverse=True)
        return opportunities

    def analyze_tail_distribution(
        self,
        game_logs: List[float],
        stat_name: str,
    ) -> Dict:
        """
        Analyze whether a player's stat distribution has fatter tails
        than a normal distribution.
        """
        data = np.array(game_logs)
        n = len(data)
        mean = np.mean(data)
        std = np.std(data, ddof=1)

        # Kurtosis (excess kurtosis; normal = 0)
        kurtosis = float(np.mean(((data - mean) / std) ** 4) - 3)

        # Skewness
        skewness = float(np.mean(((data - mean) / std) ** 3))

        # Percentage of games in tails (beyond 1.5 std)
        upper_tail_count = np.sum(data > mean + 1.5 * std)
        lower_tail_count = np.sum(data < mean - 1.5 * std)
        expected_tail = n * (1 - norm.cdf(1.5) + norm.cdf(-1.5))

        tail_ratio = (upper_tail_count + lower_tail_count) / max(expected_tail, 1)

        # Interpret results
        if kurtosis > 1.0:
            tail_assessment = "FAT_TAILS"
            alt_strategy = "High alternates may be underpriced by books using normal models"
        elif kurtosis < -0.5:
            tail_assessment = "THIN_TAILS"
            alt_strategy = "High alternates may be overpriced; focus on standard lines"
        else:
            tail_assessment = "NEAR_NORMAL"
            alt_strategy = "Standard pricing models likely accurate"

        return {
            'stat': stat_name,
            'mean': round(mean, 1),
            'std': round(std, 1),
            'skewness': round(skewness, 3),
            'kurtosis': round(kurtosis, 3),
            'tail_assessment': tail_assessment,
            'upper_tail_games': int(upper_tail_count),
            'lower_tail_games': int(lower_tail_count),
            'expected_tail_games': round(expected_tail, 1),
            'tail_ratio': round(tail_ratio, 2),
            'strategy_note': alt_strategy,
            'sample_size': n,
        }


# Demonstration
if __name__ == "__main__":
    np.random.seed(42)
    analyzer = PropMarketAnalyzer()

    # Generate synthetic game logs with fat tails
    # Use a mixture model: 80% normal games, 20% ceiling games
    n_games = 50
    pts_logs = []
    for _ in range(n_games):
        if np.random.random() < 0.20:
            pts_logs.append(np.random.normal(32, 4))  # Ceiling game
        else:
            pts_logs.append(np.random.normal(24, 5))  # Normal game

    tail_analysis = analyzer.analyze_tail_distribution(pts_logs, 'pts')
    print("Tail Distribution Analysis")
    print("=" * 50)
    for k, v in tail_analysis.items():
        print(f"  {k}: {v}")

    # Evaluate alternate lines
    proj_mean = np.mean(pts_logs)
    proj_std = np.std(pts_logs, ddof=1)

    alternates = [
        PropLine("Player X", "pts", 20.5, 1.400, 3.000, "BookA"),
        PropLine("Player X", "pts", 22.5, 1.588, 2.400, "BookA"),
        PropLine("Player X", "pts", 24.5, 1.909, 1.909, "BookA"),
        PropLine("Player X", "pts", 26.5, 2.300, 1.625, "BookA"),
        PropLine("Player X", "pts", 28.5, 2.900, 1.435, "BookA"),
        PropLine("Player X", "pts", 30.5, 3.800, 1.286, "BookA"),
        PropLine("Player X", "pts", 32.5, 5.000, 1.200, "BookA"),
        PropLine("Player X", "pts", 34.5, 7.000, 1.125, "BookA"),
    ]

    opps = analyzer.find_alternate_value(proj_mean, proj_std, alternates, 'pts')

    print(f"\nAlternate Line Opportunities (projection: {proj_mean:.1f} "
          f"+/- {proj_std:.1f}):")
    print("-" * 60)
    for opp in opps:
        print(f"  {opp['best_side']} {opp['line']}: "
              f"Edge={opp['edge']:+.1%}, "
              f"Model={opp['model_prob']:.1%}, "
              f"Book={opp['fair_book_prob']:.1%}, "
              f"EV={opp['ev_per_dollar']:+.3f}")



34.5 Advanced Prop Strategies

With the foundational modeling and market analysis tools in place, we can now explore more sophisticated prop betting strategies.

Stacking

Stacking is the practice of combining multiple correlated prop bets to amplify expected value when you have a view on the game environment. The concept is borrowed from daily fantasy sports.

Positive game environment stack: When you expect a high-scoring, fast-paced game, you might bet the over on the game total AND the over on multiple players' points props. Each individual bet might have a modest edge, but because they are positively correlated, a favorable game environment will cause multiple bets to win simultaneously.

Negative game environment stack: Conversely, when you expect a low-scoring game, stacking unders across multiple props and the game total under exploits the same correlation in the opposite direction.

Player stack: When you believe a specific player will have a ceiling game (due to matchup, minutes opportunity, usage boost from missing teammates), you can stack multiple props for that player -- points, assists, three-pointers, PRA combo. Because all of a player's stats correlate with their minutes and general performance level, a big game produces wins across all legs.

The mathematics of stacking cut both ways. Expected value is additive -- the total edge is $\sum E_i$ -- but positive covariance between the bets increases the variance of the combined position, so outcomes cluster: when the game-environment read is right, many legs win together, and when it is wrong, many lose together. Stacking is therefore a way to concentrate a high-conviction view, not a free improvement in risk-adjusted return, and stakes should be sized with the covariance in mind.
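A sketch of how pairwise correlation between win indicators changes the profit distribution of a stack; the 55% win probabilities, -110 odds, stake sizes, and 0.4 correlation are all illustrative assumptions:

```python
import numpy as np

def stack_profit_stats(probs, odds, stakes, rho):
    """Mean and std of total profit for simultaneous bets whose win
    indicators share a pairwise correlation rho (an assumed value)."""
    p = np.asarray(probs)
    o = np.asarray(odds)
    s = np.asarray(stakes)
    mean = np.sum(s * (o * p - 1.0))          # expected profit is additive
    sd_i = s * o * np.sqrt(p * (1 - p))       # per-bet profit std dev
    n = len(p)
    corr = np.full((n, n), rho)
    np.fill_diagonal(corr, 1.0)
    var = sd_i @ corr @ sd_i                  # covariance-aware portfolio var
    return mean, np.sqrt(var)

# Three over bets, each 55% to win at 1.909 (about -110), $100 each
probs, odds, stakes = [0.55] * 3, [1.909] * 3, [100.0] * 3
for rho in (0.0, 0.4):
    m, sd = stack_profit_stats(probs, odds, stakes, rho)
    print(f"rho={rho}: EV=${m:.2f}, std=${sd:.2f}")
```

The expected profit is identical in both cases; only the spread of outcomes widens as correlation rises, which is why fractional Kelly sizing for a stack should treat it closer to one large bet than three small ones.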

Contrarian Props

Contrarian prop strategies exploit the tendency of the betting public to overvalue certain types of bets:

Unders are underbet: The recreational betting public overwhelmingly prefers overs. They want to cheer for the player to have a big game. This systematic bias means that unders can carry slightly better value on average.

Low-profile players are underbet: The public gravitates toward star players' props. Role player props -- particularly in less popular games -- attract less recreational action and may be more efficiently priced or even tilted toward offering value.

Revenge narrative fade: When a player faces his former team, the public loves the "revenge game" narrative and piles onto the overs. But data shows that revenge game narratives have minimal predictive value. Fading the public on these spots can be profitable.

Game Environment Exploitation

The most powerful prop strategies do not just project individual players in isolation -- they project the entire game environment and identify which players benefit most.

Pace-up spots: When two fast-paced teams meet and the Vegas total is elevated, all counting stats (points, rebounds, assists) are boosted. But the market does not always fully adjust individual player lines for pace-up environments. Identifying these games and systematically betting overs can produce consistent edge.

Negative game script for trailing team: When a significant underdog is expected to trail, their pass-catchers and quarterback benefit from increased passing volume. The spread tells you this, but the player prop lines may not fully adjust.

Defensive weakness exploitation: When a team has a specific defensive weakness (e.g., they allow the most three-pointers in the league), opposing players who specialize in that area benefit disproportionately. Mapping defensive weaknesses to opposing player strengths creates systematic opportunities.
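Before the full strategy class, a back-of-the-envelope sketch shows how much a pace-up environment can move an over probability under a normal model. Every number here (baseline projection, standard deviation, totals) is hypothetical:

```python
from scipy.stats import norm

# Hypothetical points prop in a pace-up game.
baseline_mean = 25.0          # projection at league-average pace
sigma = 6.0                   # assumed per-game standard deviation
line = 25.5

pace_factor = 237.0 / 222.0   # Vegas total vs. league-average total

# Scale the projection by the pace factor; sigma held fixed for simplicity.
adjusted_mean = baseline_mean * pace_factor

p_over_base = 1 - norm.cdf(line, loc=baseline_mean, scale=sigma)
p_over_adj = 1 - norm.cdf(line, loc=adjusted_mean, scale=sigma)

print(f"Pace factor: {pace_factor:.3f}")
print(f"P(over {line}) at baseline pace: {p_over_base:.3f}")
print(f"P(over {line}) pace-adjusted:    {p_over_adj:.3f}")
```

A roughly 7% pace bump moves the over probability from about 47% to about 58% -- enough to cross the ~52.4% breakeven at standard -110 pricing if the book has not repriced the line.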

Python Code for Advanced Prop Strategies

import numpy as np
from typing import Dict, List
from dataclasses import dataclass
from scipy.stats import norm


@dataclass
class PropBet:
    """A single prop bet."""
    player: str
    stat: str
    line: float
    side: str  # 'over' or 'under'
    odds: float
    model_prob: float
    edge: float
    stake: float = 0.0


class AdvancedPropStrategy:
    """
    Implements advanced prop betting strategies including
    stacking, contrarian approaches, and game environment exploitation.
    """

    def __init__(self, bankroll: float = 10000.0, kelly_fraction: float = 0.25):
        self.bankroll = bankroll
        self.kelly_fraction = kelly_fraction

    def identify_stacking_opportunities(
        self,
        player_projections: Dict[str, Dict],
        game_environment: Dict,
        available_props: List[PropBet],
    ) -> Dict:
        """
        Identify optimal stacking strategies based on game environment.

        When the game environment favors a particular direction (high-scoring,
        pace-up, etc.), identifies which props benefit most and recommends
        a correlated stack.
        """
        # Classify game environment
        vegas_total = game_environment.get('vegas_total', 220)
        league_avg_total = game_environment.get('league_avg_total', 220)
        pace_factor = vegas_total / league_avg_total

        spread = game_environment.get('spread', 0)
        expected_blowout = abs(spread) > 10

        # Determine environment type
        if pace_factor > 1.05:
            env_type = 'PACE_UP'
            preferred_side = 'over'
            boost = pace_factor - 1.0
        elif pace_factor < 0.95:
            env_type = 'PACE_DOWN'
            preferred_side = 'under'
            boost = 1.0 - pace_factor
        else:
            env_type = 'NEUTRAL'
            preferred_side = None
            boost = 0.0

        # Filter props that align with environment
        aligned_props = []
        for prop in available_props:
            if preferred_side and prop.side == preferred_side and prop.edge > 0:
                # Boost edge estimate due to environmental alignment
                env_boost = boost * 0.3  # 30% of pace deviation adds to edge
                adjusted_edge = prop.edge + env_boost
                aligned_props.append({
                    'prop': prop,
                    'base_edge': prop.edge,
                    'env_boost': round(env_boost, 3),
                    'adjusted_edge': round(adjusted_edge, 3),
                })

        # Sort by adjusted edge
        aligned_props.sort(key=lambda x: x['adjusted_edge'], reverse=True)

        # Build optimal stack (top N correlated props)
        max_stack_size = 5
        stack = aligned_props[:max_stack_size]

        # Calculate stack metrics
        if stack:
            stack_edge_sum = sum(s['adjusted_edge'] for s in stack)
            avg_edge = stack_edge_sum / len(stack)

            # Relative std of the average leg outcome. Positive correlation
            # makes this larger than the independent case (1/sqrt(n)), i.e.
            # the stack diversifies less than n independent bets would.
            avg_correlation = 0.15  # Typical intra-game correlation
            n_legs = len(stack)
            diversification_ratio = np.sqrt(
                n_legs + n_legs * (n_legs - 1) * avg_correlation
            ) / n_legs
        else:
            stack_edge_sum = 0
            avg_edge = 0
            diversification_ratio = 1.0

        return {
            'environment': env_type,
            'pace_factor': round(pace_factor, 3),
            'expected_blowout': expected_blowout,
            'preferred_side': preferred_side,
            'n_aligned_props': len(aligned_props),
            'stack': [
                {
                    'player': s['prop'].player,
                    'stat': s['prop'].stat,
                    'line': s['prop'].line,
                    'side': s['prop'].side,
                    'odds': s['prop'].odds,
                    'base_edge': s['base_edge'],
                    'env_boost': s['env_boost'],
                    'adjusted_edge': s['adjusted_edge'],
                }
                for s in stack
            ],
            'stack_metrics': {
                'total_edge': round(stack_edge_sum, 3),
                'avg_edge': round(avg_edge, 3),
                'n_legs': len(stack),
                'correlation_factor': round(diversification_ratio, 3),
            },
        }

    def contrarian_analysis(
        self,
        props: List[Dict],
    ) -> List[Dict]:
        """
        Identify contrarian prop opportunities.

        Looks for spots where public betting percentages suggest
        the line has been moved toward the popular side, creating
        value on the unpopular side.
        """
        opportunities = []

        for prop in props:
            public_pct_over = prop.get('public_pct_over', 50)
            model_prob_over = prop.get('model_prob_over', 50)
            line = prop.get('line', 0)
            over_odds = prop.get('over_odds', 1.909)
            under_odds = prop.get('under_odds', 1.909)

            # Heavy public action on one side
            if public_pct_over > 70:
                # Public loves the over -- check if under has value
                under_prob = 1.0 - model_prob_over / 100.0
                under_implied = 1.0 / under_odds
                under_edge = under_prob - under_implied

                if under_edge > 0.02:
                    opportunities.append({
                        'player': prop.get('player', ''),
                        'stat': prop.get('stat', ''),
                        'line': line,
                        'side': 'UNDER',
                        'edge': round(under_edge, 3),
                        'public_on_opposite': f"{public_pct_over}% on OVER",
                        'contrarian_signal': 'STRONG',
                        'odds': under_odds,
                    })

            elif public_pct_over < 30:
                # Public on the under -- check if over has value
                over_prob = model_prob_over / 100.0
                over_implied = 1.0 / over_odds
                over_edge = over_prob - over_implied

                if over_edge > 0.02:
                    opportunities.append({
                        'player': prop.get('player', ''),
                        'stat': prop.get('stat', ''),
                        'line': line,
                        'side': 'OVER',
                        'edge': round(over_edge, 3),
                        'public_on_opposite': f"{100-public_pct_over}% on UNDER",
                        'contrarian_signal': 'STRONG',
                        'odds': over_odds,
                    })

        opportunities.sort(key=lambda x: x['edge'], reverse=True)
        return opportunities

    def game_script_exploitation(
        self,
        team: str,
        spread: float,
        total: float,
        players: List[Dict],
    ) -> List[Dict]:
        """
        Exploit game script projections to find mispriced player props.

        When a team is expected to trail significantly, their passing
        game benefits. When leading, the running game benefits.
        """
        recommendations = []

        # Calculate expected game script
        expected_margin = -spread  # Positive = team leads
        expected_total = total
        team_implied_score = (expected_total + expected_margin) / 2

        # Classify game script
        if expected_margin > 7:
            script = 'EXPECTED_BLOWOUT_LEAD'
            passing_adj = 0.92  # Less passing when leading big
            rushing_adj = 1.12  # More rushing to kill clock
            minutes_adj = 0.95  # Starters may rest
        elif expected_margin > 3:
            script = 'EXPECTED_MODERATE_LEAD'
            passing_adj = 0.97
            rushing_adj = 1.05
            minutes_adj = 1.00
        elif expected_margin < -7:
            script = 'EXPECTED_TRAILING'
            passing_adj = 1.10  # More passing to catch up
            rushing_adj = 0.90  # Less rushing
            minutes_adj = 1.02  # Starters play more
        elif expected_margin < -3:
            script = 'EXPECTED_SLIGHT_UNDERDOG'
            passing_adj = 1.03
            rushing_adj = 0.97
            minutes_adj = 1.01
        else:
            script = 'COMPETITIVE'
            passing_adj = 1.00
            rushing_adj = 1.00
            minutes_adj = 1.00

        for player in players:
            player_type = player.get('type', 'balanced')
            name = player.get('name', '')
            proj_mean = player.get('projection', 0)
            proj_std = player.get('std', 0)
            prop_line = player.get('prop_line', 0)
            over_odds = player.get('over_odds', 1.909)
            under_odds = player.get('under_odds', 1.909)
            stat = player.get('stat', 'pts')

            # Apply game script adjustment
            if player_type == 'passer':
                adj = passing_adj
            elif player_type == 'rusher':
                adj = rushing_adj
            elif player_type == 'receiver':
                adj = passing_adj * 0.8 + 1.0 * 0.2  # Partially tied to pass volume
            else:
                adj = 1.0

            adjusted_mean = proj_mean * adj * minutes_adj
            if proj_std <= 0:
                continue

            over_prob = 1.0 - norm.cdf(prop_line, loc=adjusted_mean, scale=proj_std)
            under_prob = norm.cdf(prop_line, loc=adjusted_mean, scale=proj_std)

            # Strip the vig so edges are measured against the book's no-vig
            # (fair) probabilities rather than raw implied odds. Note this is
            # a looser standard than contrarian_analysis, which compares the
            # model to the vig-inclusive implied probability.
            over_implied = 1.0 / over_odds
            under_implied = 1.0 / under_odds
            total_implied = over_implied + under_implied
            fair_over = over_implied / total_implied
            fair_under = under_implied / total_implied

            over_edge = over_prob - fair_over
            under_edge = under_prob - fair_under

            best_side = 'OVER' if over_edge > under_edge else 'UNDER'
            best_edge = max(over_edge, under_edge)

            if best_edge > 0.02:
                recommendations.append({
                    'player': name,
                    'stat': stat,
                    'type': player_type,
                    'game_script': script,
                    'adjustment': round(adj * minutes_adj, 3),
                    'base_projection': round(proj_mean, 1),
                    'adjusted_projection': round(adjusted_mean, 1),
                    'prop_line': prop_line,
                    'best_side': best_side,
                    'edge': round(best_edge, 3),
                })

        recommendations.sort(key=lambda x: x['edge'], reverse=True)
        return recommendations


# Demonstration
if __name__ == "__main__":
    np.random.seed(42)
    strategy = AdvancedPropStrategy(bankroll=10000)

    # === Stacking Demo ===
    print("STACKING ANALYSIS")
    print("=" * 60)

    game_env = {
        'vegas_total': 235.0,
        'league_avg_total': 220.0,
        'spread': -3.5,
    }

    props = [
        PropBet("Player A", "pts", 26.5, "over", 1.909, 0.55, 0.03),
        PropBet("Player B", "pts", 22.5, "over", 1.909, 0.54, 0.02),
        PropBet("Player C", "reb", 10.5, "over", 1.909, 0.53, 0.01),
        PropBet("Player A", "ast", 5.5, "over", 2.000, 0.52, 0.02),
        PropBet("Player D", "pts", 18.5, "over", 1.909, 0.56, 0.04),
        PropBet("Player A", "fg3", 2.5, "over", 1.800, 0.54, 0.03),
    ]

    stack_result = strategy.identify_stacking_opportunities(
        {}, game_env, props
    )

    print(f"Environment: {stack_result['environment']} "
          f"(pace factor: {stack_result['pace_factor']})")
    print(f"Aligned props: {stack_result['n_aligned_props']}")
    print(f"\nRecommended Stack:")
    for s in stack_result['stack']:
        print(f"  {s['player']} {s['stat']} {s['side']} {s['line']} "
              f"@ {s['odds']} | Edge: {s['adjusted_edge']:+.1%} "
              f"(base: {s['base_edge']:+.1%}, boost: {s['env_boost']:+.1%})")
    print(f"\nStack Metrics: {stack_result['stack_metrics']}")

    # === Contrarian Demo ===
    print("\n\nCONTRARIAN ANALYSIS")
    print("=" * 60)

    public_props = [
        {'player': 'Star Guard', 'stat': 'pts', 'line': 28.5,
         'public_pct_over': 82, 'model_prob_over': 45,
         'over_odds': 1.952, 'under_odds': 1.870},
        {'player': 'Center', 'stat': 'reb', 'line': 11.5,
         'public_pct_over': 75, 'model_prob_over': 48,
         'over_odds': 1.909, 'under_odds': 1.909},
        {'player': 'Role Player', 'stat': 'pts', 'line': 14.5,
         'public_pct_over': 25, 'model_prob_over': 62,
         'over_odds': 1.870, 'under_odds': 1.952},
    ]

    contrarian = strategy.contrarian_analysis(public_props)
    for opp in contrarian:
        print(f"  {opp['player']} {opp['stat']} {opp['line']}: "
              f"{opp['side']} (Edge: {opp['edge']:+.1%}) | "
              f"Public: {opp['public_on_opposite']} | "
              f"Signal: {opp['contrarian_signal']}")

    # === Game Script Demo ===
    print("\n\nGAME SCRIPT EXPLOITATION")
    print("=" * 60)

    players = [
        {'name': 'QB Smith', 'type': 'passer', 'stat': 'pass_yds',
         'projection': 275, 'std': 55, 'prop_line': 259.5,
         'over_odds': 1.909, 'under_odds': 1.909},
        {'name': 'RB Jones', 'type': 'rusher', 'stat': 'rush_yds',
         'projection': 72, 'std': 28, 'prop_line': 69.5,
         'over_odds': 1.909, 'under_odds': 1.909},
        {'name': 'WR Davis', 'type': 'receiver', 'stat': 'rec_yds',
         'projection': 68, 'std': 30, 'prop_line': 62.5,
         'over_odds': 1.909, 'under_odds': 1.909},
    ]

    # Team is 10-point underdog
    script_recs = strategy.game_script_exploitation(
        team="UNDERDOGS",
        spread=10.0,  # 10-point underdog
        total=44.5,
        players=players,
    )

    for rec in script_recs:
        print(f"  {rec['player']} ({rec['type']}): {rec['stat']} "
              f"{rec['best_side']} {rec['prop_line']} | "
              f"Edge: {rec['edge']:+.1%} | "
              f"Script: {rec['game_script']} (adj: {rec['adjustment']}x) | "
              f"Proj: {rec['base_projection']} -> {rec['adjusted_projection']}")

34.6 Chapter Summary

Player props and same-game parlays represent one of the richest areas for quantitative sports bettors. The breadth of markets, the complexity of the modeling challenge, and the structural inefficiencies in how books price these markets create abundant opportunities for those with the right analytical tools.

Key takeaways:

  1. Player prop projections are multiplicative models. The core formula -- playing time multiplied by per-minute rate multiplied by adjustments -- is simple in structure but complex in execution. Each component (minutes, rates, opponent adjustment, pace, game script) requires careful estimation with appropriate uncertainty quantification.

  2. Correlation is the key to same-game parlay analysis. Books that price SGP legs as (approximately) independent create systematic edge when you can correctly model the positive or negative correlations between outcomes. The Gaussian copula approach provides a tractable framework for modeling these dependencies.

  3. A complete projection system requires multivariate thinking. A player's stats are not independent; they share common drivers (minutes, pace, usage). Modeling these joint dependencies produces more accurate projections and enables better evaluation of combination props (PRA, points+rebounds, etc.).

  4. Market inefficiencies concentrate in specific areas. Alternate lines (especially high alternates for fat-tailed distributions), less-liquid props (defensive stats, role players), and derivative markets (double-doubles, first scorer) tend to harbor larger mispricings than standard star-player props.

  5. Advanced strategies exploit game-level views. Stacking correlated props when you have a strong game environment view, going contrarian against heavy public action, and adjusting projections for expected game script are all strategies that go beyond single-player-single-stat analysis to capture systematic edge.

  6. Variance management is critical in prop betting. Because individual player outcomes are highly variable (a player might score 15 or 35 points on any given night), prop betting requires a diversified approach. Betting many small-edge props across many games produces a more reliable return stream than concentrating on a few high-conviction plays.
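The diversification point can be checked with a small Monte Carlo sketch. The per-bet win probability, odds, and independence assumption are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Same total stake and per-bet edge, spread over many bets vs. a few.
# Outcomes assumed independent (bets in different games).
odds = 1.909
p_win = 0.545          # ~4% expected value per unit staked at these odds
total_stake = 100.0
n_trials = 20000

def roi_distribution(n_bets: int) -> np.ndarray:
    """Simulate total ROI when total_stake is split evenly over n_bets."""
    stake = total_stake / n_bets
    wins = rng.binomial(n_bets, p_win, size=n_trials)
    profit = wins * stake * (odds - 1) - (n_bets - wins) * stake
    return profit / total_stake

for n in (5, 50, 500):
    roi = roi_distribution(n)
    print(f"{n:>4} bets | mean ROI {roi.mean():+.3f} | "
          f"std {roi.std():.3f} | P(loss) {np.mean(roi < 0):.2f}")
```

Mean ROI is the same in each case; only the dispersion changes. Spreading the bankroll across hundreds of small-edge props sharply reduces both the ROI standard deviation and the probability of a losing period.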

The tools and frameworks presented in this chapter form the foundation for a systematic prop betting operation. In the next chapter, we will extend these concepts to futures and season-long markets, where the time horizon is longer but many of the same modeling principles apply.