Chapter 33: Live and In-Play Betting

Live betting -- also called in-play or in-game betting -- has transformed the sports wagering landscape from a pre-event activity into a continuous, dynamic marketplace that operates throughout the duration of a sporting event. Where traditional pre-game betting provides a single snapshot of market opinion before kickoff, live betting creates a constantly evolving price discovery mechanism that responds to every play, possession, and momentum shift in real time. This chapter opens Part VII of our textbook by laying the foundation for understanding how live markets function, how to build real-time models that update as events unfold, and how to identify and exploit the fleeting inefficiencies that arise when bookmakers struggle to keep pace with rapidly changing game states.

The rise of live betting has been nothing short of revolutionary. In mature European markets, in-play wagering now accounts for over 70% of total handle on football (soccer) matches. In the United States, the share is growing rapidly, with some operators reporting that live betting constitutes 40-50% of total sports handle as of the mid-2020s. This growth is driven by technology -- faster data feeds, sophisticated algorithms, and mobile platforms that put live markets in every bettor's pocket. For the quantitative bettor, live betting represents both an enormous opportunity and a formidable challenge. The opportunity lies in the sheer volume of markets and the speed at which prices must be set; the challenge lies in the infrastructure, latency management, and real-time modeling capabilities required to compete.

This chapter will equip you with the theoretical foundations and practical tools to build a live betting operation. We will cover the market microstructure of live betting, Bayesian updating techniques for real-time probability estimation, latency and execution management, mispricing detection, and the architecture of a complete live betting system.


33.1 The Live Betting Landscape

How Live Betting Works

Live betting markets open when a sporting event begins (or shortly before) and remain active throughout the contest, with odds adjusting continuously based on the evolving game state. Unlike pre-game markets where odds are posted hours or days in advance and may be adjusted a handful of times, live markets can see hundreds or thousands of price changes during a single event.

The fundamental mechanism works as follows:

  1. Initial pricing: The live market typically opens at or near the closing pre-game line, adjusted for any last-minute information (late scratches, weather changes, etc.).
  2. Continuous updating: As the game progresses, the bookmaker's trading engine ingests real-time data feeds and recalculates odds based on the current score, time remaining, game situation, and statistical models.
  3. Suspension and reopening: During key moments -- a goal in soccer, a touchdown in football, a run-scoring play in baseball -- the market is briefly suspended (typically 2-30 seconds) while the book recalculates. The market then reopens at new odds reflecting the changed game state.
  4. Market closure: Live betting on a particular market may close before the event ends (e.g., the full-game spread market might close in the fourth quarter) or remain open until the final whistle.
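The open-suspend-reopen cycle above can be sketched as a toy state machine. This is a simplification for intuition only; `LiveMarket` and its statuses are illustrative, not any book's actual API:

```python
from enum import Enum, auto

class MarketStatus(Enum):
    OPEN = auto()
    SUSPENDED = auto()
    CLOSED = auto()

class LiveMarket:
    """Toy model of a live market's open/suspend/reopen lifecycle."""

    def __init__(self, opening_odds: dict):
        self.odds = dict(opening_odds)   # e.g. {'home': 2.10, 'away': 1.74}
        self.status = MarketStatus.OPEN

    def suspend(self):
        # Triggered by a key event (goal, touchdown, etc.)
        self.status = MarketStatus.SUSPENDED

    def reopen(self, new_odds: dict):
        # Book reopens at odds reflecting the new game state
        self.odds = dict(new_odds)
        self.status = MarketStatus.OPEN

    def close(self):
        self.status = MarketStatus.CLOSED

    def accepts_bets(self) -> bool:
        return self.status is MarketStatus.OPEN

market = LiveMarket({'home': 2.10, 'away': 1.74})
market.suspend()                               # goal scored -> suspended
assert not market.accepts_bets()
market.reopen({'home': 1.45, 'away': 2.75})    # reopens at repriced odds
assert market.accepts_bets()
```

During the suspended window no wagers are accepted; the gap between the old and new odds is exactly where latency-advantaged bettors try to act.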

Market Structure

Live betting markets are structurally different from pre-game markets in several important ways:

Wider margins: Because of the risk of adverse selection (bettors with faster data exploiting stale lines), live betting margins are typically wider than pre-game margins. A pre-game NFL spread might carry a 4.5% margin (implied probabilities summing to 104.5%), while the same live spread might carry a 6-8% margin.
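The margin arithmetic is worth making concrete. A short sketch (the odds values are illustrative) that converts decimal odds to implied probabilities and computes the overround:

```python
def implied_prob(decimal_odds: float) -> float:
    """Implied probability of a single price."""
    return 1.0 / decimal_odds

def overround(odds: list[float]) -> float:
    """Sum of implied probabilities minus 1: the book's margin."""
    return sum(implied_prob(o) for o in odds) - 1.0

# Pre-game spread at -110 both sides (decimal 1.909): ~4.8% margin
pre = overround([1.909, 1.909])
# A wider live price, e.g. -115 both sides (decimal 1.870): ~7.0% margin
live = overround([1.870, 1.870])
print(f"pre-game margin: {pre:.3f}, live margin: {live:.3f}")
```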

Lower limits: To manage exposure to latency-advantaged bettors, books generally offer lower maximum bet sizes on live markets. Where a pre-game NFL side might accept $10,000-$50,000 at a major book, the live equivalent might cap at $500-$5,000.

Asymmetric acceptance: Many books employ a "bet acceptance" model where live wagers are not immediately confirmed. Instead, the bet enters a queue and is accepted or rejected within a few seconds, during which the book verifies that the line has not moved. This "bet behind" or "delay" mechanism is a critical feature of live market microstructure.
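A minimal sketch of the acceptance-delay logic, assuming the book simply re-checks the price at the end of the delay window (real books apply more nuanced rules, e.g. partial acceptance or resubmission at the new price):

```python
def resolve_delayed_bet(submitted_odds: float, odds_after_delay: float,
                        accept_worse: bool = False) -> str:
    """Toy version of the in-play acceptance delay: the book re-checks
    the price after the delay window before confirming the wager."""
    if odds_after_delay >= submitted_odds:
        # Price unchanged, or moved in the bettor's favor
        return 'accepted'
    # Price moved against the bettor during the delay
    return 'accepted_at_new_odds' if accept_worse else 'rejected'

assert resolve_delayed_bet(1.91, 1.91) == 'accepted'
assert resolve_delayed_bet(1.91, 1.80) == 'rejected'
```

The `accept_worse` flag mirrors the "accept odds changes" toggle many books expose; leaving it off protects you from being filled at a worse price than your model approved.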

Market depth: The number of available live markets varies enormously by sport and book. A major European soccer match might have 200+ live markets, while a mid-week MLS match might have only the core markets (match result, total goals, next goal).

Odds Updating Frequency

The frequency of odds updates depends on the sport, the specific market, and the sophistication of the bookmaker's trading engine:

| Sport | Typical Update Frequency | Suspension Duration | Key Trigger Events |
|---|---|---|---|
| Soccer | Every 1-5 seconds | 5-30 sec (goals) | Goals, red cards, penalties |
| NFL Football | Every play (5-40 sec gaps) | 5-15 sec (scores) | Scores, turnovers, injuries |
| NBA Basketball | Every 3-10 seconds | 2-10 sec (timeouts) | Scoring runs, foul trouble |
| MLB Baseball | Every pitch or at-bat | 5-15 sec (runs) | Runs, pitching changes |
| Tennis | Every point | 2-10 sec (games/sets) | Break points, set changes |
| NHL Hockey | Every 5-15 seconds | 5-20 sec (goals) | Goals, penalties, pulls |
| College Football | Every play | 5-15 sec | Scores, turnovers |
| College Basketball | Every 5-15 seconds | 2-10 sec | Scoring runs, fouls |

Sports Suitability for Live Betting

Not all sports are equally amenable to live betting. Several factors determine how suitable a sport is for in-play wagering:

Scoring frequency: Sports with frequent scoring changes (basketball, tennis) create more opportunities for mispricing but also require faster model updates. Low-scoring sports (soccer, hockey) have fewer dramatic shifts but each scoring event has an outsized impact.

Game state complexity: Sports with complex game states (football with down, distance, field position, score, time) provide richer modeling opportunities. Simpler game states (tennis: sets, games, points) are easier to model accurately but leave less room for edge.

Natural pauses: Sports with built-in stoppages (football between plays, baseball between pitches) give bookmakers time to update lines, reducing mispricing windows. Continuous-play sports (soccer, basketball) force books to update on the fly, creating more opportunities for latency-advantaged bettors.

Data availability: The quality and speed of real-time data feeds varies by sport. Major US professional leagues have official data partnerships, but lower-tier events may rely on slower, less reliable feeds.

Predictability of remaining game: Some sports become increasingly predictable as the game progresses (basketball with a large lead and little time remaining), while others maintain uncertainty longer (soccer where a single goal can change the outcome).

A rough ranking of sports by live betting opportunity for the quantitative bettor:

  1. Tennis -- Point-by-point data, well-understood probability models, frequent scoring
  2. Soccer -- Massive live handle in European markets, goal-based state changes create clear mispricing windows
  3. NFL Football -- Rich game state, natural pauses allow careful analysis, but sophisticated books
  4. NBA Basketball -- High scoring frequency, but fast pace means prices update quickly
  5. MLB Baseball -- Pitch-by-pitch data enables granular modeling, pitching changes create opportunities
  6. NHL Hockey -- Lower liquidity but interesting dynamics around power plays and goaltender pulls

33.2 Real-Time Model Updating

The core technical challenge of live betting is maintaining an accurate probability estimate that updates as new information arrives during a game. This section covers the mathematical frameworks for real-time model updating.

Bayesian Updating During Games

Bayesian inference provides the natural framework for live betting models. We begin with a prior probability (our pre-game estimate) and update it as we observe game events.

The fundamental equation is Bayes' theorem:

$$P(H|E) = \frac{P(E|H) \cdot P(H)}{P(E)}$$

Where:

  - $P(H)$ is the prior probability of our hypothesis (e.g., "Team A wins")
  - $P(E|H)$ is the likelihood of observing the evidence given the hypothesis
  - $P(E)$ is the marginal probability of the evidence
  - $P(H|E)$ is the posterior probability after observing the evidence

In a live betting context, this becomes iterative. After each game event, our posterior becomes the new prior for the next update:

$$P(\text{Win}|S_t) = \frac{P(S_t|\text{Win}) \cdot P(\text{Win}|S_{t-1})}{P(S_t)}$$

Where $S_t$ represents the game state at time $t$.
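A single iteration of this update with made-up numbers makes the mechanics concrete. Suppose our pre-game prior is $P(\text{Team A wins}) = 0.60$, and Team A concedes the first goal, with illustrative likelihoods $P(\text{concede first}\,|\,\text{win}) = 0.35$ and $P(\text{concede first}\,|\,\text{loss}) = 0.65$:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One iteration of Bayes' theorem for a binary hypothesis."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / evidence

# Team A concedes the first goal: the 0.60 prior drops to ~0.447,
# and this posterior becomes the prior for the next game event.
p = bayes_update(0.60, 0.35, 0.65)
print(round(p, 3))
```

Chaining these updates event by event is exactly the iterative scheme in the equation above.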

Win Probability Models

Win probability models estimate the likelihood of each outcome given the current game state. The general approach is:

  1. Define the state space: Identify all relevant variables that describe the current game situation.
  2. Build a historical database: Collect data on game outcomes from similar states.
  3. Fit a model: Use logistic regression, random forests, or neural networks to estimate win probability as a function of the state variables.
  4. Update in real time: As the game state changes, feed the new state into the model to get updated probabilities.

For NFL football, a basic win probability model might use:

  - Score differential
  - Time remaining
  - Down and distance
  - Field position
  - Timeouts remaining
  - Home/away
  - Pre-game strength estimate (e.g., Elo rating or power rating)

For NBA basketball:

  - Score differential
  - Time remaining
  - Possession
  - Foul situation
  - Pre-game strength estimate
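Steps 1-4 can be sketched end to end on synthetic data. Everything here is an assumption for illustration: the generating process, the coefficients, and the interaction feature are invented, not fitted league parameters:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Step 2 (stand-in): synthetic "historical database" of game states
score_diff = rng.normal(0, 8, n)        # home minus away
frac_remaining = rng.uniform(0, 1, n)   # fraction of game left
strength = rng.normal(0, 4, n)          # pre-game rating edge (points)

# Assumed generating process: a lead matters more as time runs out
logit = 0.25 * score_diff / np.sqrt(frac_remaining + 0.05) + 0.10 * strength
home_win = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Step 3: fit a logistic regression on the state variables
X = np.column_stack([score_diff, frac_remaining,
                     score_diff / np.sqrt(frac_remaining + 0.05), strength])
model = LogisticRegression(max_iter=1000).fit(X, home_win)

# Step 4: query with a live state -- home up 6, 25% of game left, +2 edge
state = np.array([[6.0, 0.25, 6.0 / np.sqrt(0.30), 2.0]])
print(f"P(home win) = {model.predict_proba(state)[0, 1]:.3f}")
```

The interaction term (score differential scaled by time remaining) is the workhorse feature: the same 6-point lead is worth far more win probability in the fourth quarter than in the first.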

State-Space Models

State-space models provide a more sophisticated framework for tracking latent game dynamics. The basic structure is:

State equation (how the hidden state evolves): $$x_t = f(x_{t-1}, u_t) + w_t$$

Observation equation (how we observe the state): $$y_t = g(x_t) + v_t$$

Where:

  - $x_t$ is the latent state (e.g., true team strength at time $t$)
  - $u_t$ is a control input (e.g., known game events)
  - $w_t$ is process noise
  - $y_t$ is the observed score or game event
  - $v_t$ is observation noise

The Kalman filter (for linear systems) or particle filter (for nonlinear systems) can be used to estimate the latent state in real time.

In a live betting context, the latent state might represent the "true" scoring rate differential between two teams, which we update as we observe actual scoring events. This allows us to distinguish between a team that is genuinely dominant (high latent scoring rate) and one that has been lucky (a few fortunate bounces that do not reflect sustained superiority).
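A minimal one-dimensional Kalman filter for this latent scoring-rate idea (all parameter values are illustrative assumptions, not calibrated figures):

```python
import numpy as np

class ScoringRateKalman:
    """1-D Kalman filter tracking the latent per-minute scoring-rate
    differential (home minus away)."""

    def __init__(self, prior_rate: float, prior_var: float,
                 process_var: float = 1e-4, obs_var: float = 4.0):
        self.x = prior_rate   # latent rate estimate (pts/min)
        self.P = prior_var    # variance of that estimate
        self.Q = process_var  # drift of the true rate (process noise)
        self.R = obs_var      # noise in one minute's point differential

    def update(self, observed_minute_diff: float) -> float:
        # Predict: true rate assumed to follow a slow random walk
        self.P += self.Q
        # Update: blend the prediction with the noisy observation
        K = self.P / (self.P + self.R)               # Kalman gain
        self.x += K * (observed_minute_diff - self.x)
        self.P *= (1 - K)
        return self.x

# Prior: home favored by 5 pts per 48 min => ~0.104 pts/min
kf = ScoringRateKalman(prior_rate=5 / 48, prior_var=0.06)
rng = np.random.default_rng(1)
for _ in range(24):                        # first half, minute by minute
    kf.update(rng.normal(5 / 48, 2.0))     # noisy per-minute differentials
print(f"estimated rate: {kf.x:.3f} pts/min, variance: {kf.P:.4f}")
```

Because the per-minute observations are very noisy relative to the prior, the gain is small: a team that has "won" a few lucky minutes moves the latent estimate only slightly, which is exactly the lucky-versus-dominant distinction described above.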

Python Code for Live Probability Updates

The following implementation demonstrates a Bayesian win probability model for an NBA basketball game:

import numpy as np
from scipy.stats import norm
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class GameState:
    """Represents the current state of a basketball game."""
    home_score: int = 0
    away_score: int = 0
    period: int = 1
    time_remaining_seconds: float = 2880.0  # 48 minutes
    home_possessions: int = 0
    away_possessions: int = 0
    home_fouls: int = 0
    away_fouls: int = 0
    home_timeouts: int = 7
    away_timeouts: int = 7

    @property
    def score_diff(self) -> int:
        return self.home_score - self.away_score

    @property
    def total_time(self) -> float:
        return 2880.0  # regulation seconds

    @property
    def fraction_remaining(self) -> float:
        return max(self.time_remaining_seconds / self.total_time, 0.001)

    @property
    def elapsed_fraction(self) -> float:
        return 1.0 - self.fraction_remaining


class BayesianWinProbabilityModel:
    """
    Real-time win probability model using Bayesian updating.

    The model maintains a prior on the home team's 'true' scoring
    rate advantage (points per 48 minutes relative to opponent),
    and updates this as scoring events are observed.
    """

    def __init__(self, pregame_spread: float, pregame_total: float):
        """
        Initialize with pre-game market information.

        Args:
            pregame_spread: Pre-game point spread (negative = home favored)
            pregame_total: Pre-game over/under total
        """
        # Prior on home team scoring rate advantage (points per game)
        # Spread gives us the expected advantage
        self.prior_mean = -pregame_spread  # Convert to home advantage
        self.prior_std = 12.0  # Prior uncertainty (typical NBA game std dev)
        self.pregame_total = pregame_total

        # Current posterior parameters
        self.posterior_mean = self.prior_mean
        self.posterior_std = self.prior_std

        # Scoring rate parameters
        self.expected_pace = pregame_total / 48.0  # Points per minute total
        self.scoring_noise_per_minute = 2.5  # Noise in scoring rate

        # History tracking
        self.probability_history: List[Tuple[float, float]] = []
        self.state_history: List[GameState] = []

    def update(self, state: GameState) -> float:
        """
        Update win probability given current game state.

        Uses Bayesian conjugate update:
        - Prior: N(mu_prior, sigma_prior^2)
        - Likelihood based on observed scoring differential
        - Posterior: N(mu_post, sigma_post^2)

        Returns:
            Home team win probability
        """
        elapsed = state.elapsed_fraction
        remaining = state.fraction_remaining

        if elapsed < 0.01:
            # Game just started, use prior
            win_prob = self._prior_win_probability()
        else:
            # Observed scoring differential
            observed_diff = state.score_diff

            # Expected differential at this point based on prior
            expected_diff = self.prior_mean * elapsed

            # Observation precision (inverse variance)
            # More time elapsed = more precise observation of the rate:
            # observed_diff over (elapsed * 48) minutes has variance
            # noise^2 * elapsed * 48; dividing by elapsed to get a
            # full-game rate scales that variance by 1/elapsed^2
            obs_variance = (self.scoring_noise_per_minute ** 2) * 48.0 / elapsed
            obs_precision = 1.0 / obs_variance

            # Prior precision
            prior_precision = 1.0 / (self.prior_std ** 2)

            # Posterior (conjugate normal update)
            post_precision = prior_precision + obs_precision
            self.posterior_std = np.sqrt(1.0 / post_precision)
            self.posterior_mean = (
                (prior_precision * self.prior_mean +
                 obs_precision * (observed_diff / elapsed))
                / post_precision
            )

            # Win probability: P(home wins) given posterior on scoring rate
            # Expected final margin = posterior_mean (full game rate)
            # But we need to account for both observed and remaining
            expected_final_diff = observed_diff + self.posterior_mean * remaining
            remaining_variance = (self.scoring_noise_per_minute ** 2) * (remaining * 48.0)

            # Also account for posterior uncertainty
            total_std = np.sqrt(
                remaining_variance + (self.posterior_std * remaining) ** 2
            )

            if total_std < 0.01:
                # Game essentially over
                win_prob = 1.0 if observed_diff > 0 else (0.5 if observed_diff == 0 else 0.0)
            else:
                # P(final_diff > 0) where final_diff ~ N(expected_final_diff, total_std^2)
                win_prob = float(norm.sf(0, loc=expected_final_diff, scale=total_std))

        # Handle overtime possibility for close games near end
        if state.time_remaining_seconds < 120 and abs(state.score_diff) <= 3:
            win_prob = self._adjust_for_overtime(win_prob, state)

        # Store history
        time_elapsed = state.total_time - state.time_remaining_seconds
        self.probability_history.append((time_elapsed, win_prob))
        self.state_history.append(state)

        return win_prob

    def _prior_win_probability(self) -> float:
        """Calculate win probability from prior alone."""
        return float(norm.sf(0, loc=self.prior_mean, scale=self.prior_std))

    def _adjust_for_overtime(self, base_prob: float, state: GameState) -> float:
        """Adjust win probability for overtime possibility in close games."""
        diff = state.score_diff
        seconds_left = state.time_remaining_seconds

        # Rough estimate of tie probability at end of regulation
        # Based on scoring rate and remaining time
        pts_remaining_std = self.scoring_noise_per_minute * np.sqrt(seconds_left / 60.0)
        if pts_remaining_std < 0.1:
            return base_prob

        # P(tie at end) approximation
        tie_prob = norm.pdf(0, loc=diff, scale=max(pts_remaining_std, 1.0)) * 2.0
        tie_prob = min(tie_prob, 0.3)

        # In overtime, roughly 50/50 adjusted for team strength
        # 5-minute OT: mean scales by 5/48 of the full-game rate,
        # std by sqrt(5 minutes) of per-minute scoring noise
        ot_home_win = norm.sf(0, loc=self.posterior_mean * (5.0 / 48.0),
                              scale=self.scoring_noise_per_minute * np.sqrt(5.0))

        # Blend
        adjusted = base_prob * (1 - tie_prob) + tie_prob * ot_home_win
        return float(adjusted)

    def get_fair_spread(self, state: GameState) -> float:
        """Calculate the fair live spread given current game state."""
        remaining = state.fraction_remaining
        expected_remaining_diff = self.posterior_mean * remaining
        return -(state.score_diff + expected_remaining_diff)

    def get_fair_total(self, state: GameState) -> float:
        """Calculate the fair live total given current game state."""
        remaining = state.fraction_remaining
        current_total = state.home_score + state.away_score
        expected_remaining = self.pregame_total * remaining
        return current_total + expected_remaining


class LiveGameSimulator:
    """Simulates a basketball game for testing the model."""

    def __init__(self, true_home_advantage: float = 5.0, pace: float = 4.5):
        self.true_advantage = true_home_advantage  # points per 48 min
        self.pace = pace  # total points per minute
        self.state = GameState()

    def simulate_minute(self) -> GameState:
        """Simulate one minute of game action."""
        home_rate = (self.pace / 2) + (self.true_advantage / 48.0) / 2
        away_rate = (self.pace / 2) - (self.true_advantage / 48.0) / 2

        home_points = max(0, int(np.random.normal(home_rate, 1.2)))
        away_points = max(0, int(np.random.normal(away_rate, 1.2)))

        self.state.home_score += home_points
        self.state.away_score += away_points
        self.state.time_remaining_seconds = max(
            0, self.state.time_remaining_seconds - 60
        )

        if self.state.time_remaining_seconds <= 2160:
            self.state.period = 2
        if self.state.time_remaining_seconds <= 1440:
            self.state.period = 3
        if self.state.time_remaining_seconds <= 720:
            self.state.period = 4

        return GameState(
            home_score=self.state.home_score,
            away_score=self.state.away_score,
            period=self.state.period,
            time_remaining_seconds=self.state.time_remaining_seconds,
            home_timeouts=self.state.home_timeouts,
            away_timeouts=self.state.away_timeouts,
        )


# Demonstration
if __name__ == "__main__":
    np.random.seed(42)

    # Initialize model with pre-game market data
    # Home team favored by 5 points, total of 220
    model = BayesianWinProbabilityModel(pregame_spread=-5.0, pregame_total=220.0)

    # Simulate a game
    sim = LiveGameSimulator(true_home_advantage=5.0, pace=220 / 48.0)

    print(f"{'Time':>8} {'Score':>10} {'Win Prob':>10} {'Spread':>8} {'Total':>8}")
    print("-" * 50)

    for minute in range(49):
        if minute == 0:
            state = GameState()
        else:
            state = sim.simulate_minute()

        win_prob = model.update(state)
        fair_spread = model.get_fair_spread(state)
        fair_total = model.get_fair_total(state)

        if minute % 6 == 0 or minute == 48:
            elapsed = f"{minute}:00"
            score = f"{state.home_score}-{state.away_score}"
            print(f"{elapsed:>8} {score:>10} {win_prob:>10.3f} "
                  f"{fair_spread:>+8.1f} {fair_total:>8.1f}")

    print(f"\nFinal: Home {'wins' if state.home_score > state.away_score else 'loses'} "
          f"{state.home_score}-{state.away_score}")

This model demonstrates several key principles. First, we start with a prior derived from market information (the pre-game spread and total). Second, as the game progresses, we use the observed scoring differential to update our belief about the true scoring rate advantage. Third, the uncertainty in our estimate naturally decreases as we observe more of the game, causing the win probability to converge toward 0 or 1 as the game nears completion.


33.3 Latency and Execution Challenges

In live betting, the speed at which you receive information, process it, and execute bets is often the difference between profit and loss. This section examines the latency landscape and strategies for managing execution challenges.

Data Feed Delays

Real-time sports data arrives through several channels, each with different latency characteristics:

Television broadcast: The slowest option, typically 5-15 seconds behind real time due to production delay and satellite/cable transmission. Never suitable for live betting.

Official league data feeds: Services like Sportradar, Genius Sports, and Stats Perform provide official data with latency typically in the 1-6 second range from the actual event occurrence. These are the gold standard for most bettors.

Court-side / stadium data: Some operators and data providers have scouts physically present at events who report data with sub-second latency. This is the fastest generally available option but requires significant infrastructure.

Derived data: Some advanced operations use computer vision and audio processing to extract game state from broadcast feeds with minimal delay, potentially achieving 1-3 second latency.

The latency hierarchy creates a pecking order in live betting markets:

Court-side scouts (~0.5 s) > Official data feeds (~1-6 s) > Broadcast (~5-15 s)

If you are operating on broadcast data while others have court-side feeds, you are structurally disadvantaged. Your entire strategy must account for this.

Execution Speed

Even with fast data, execution speed matters enormously. The execution pipeline has several components:

  1. Data reception (0-500ms): Receiving and parsing the data feed
  2. Model computation (10-500ms): Running your model to generate updated probabilities
  3. Decision logic (1-50ms): Comparing your fair price to the offered price and deciding whether to bet
  4. API call (50-2000ms): Submitting the bet to the sportsbook
  5. Bet acceptance (0-10000ms): The book's acceptance/rejection delay

Total round-trip latency from event occurrence to bet acceptance can range from under 1 second (professional operations) to 30+ seconds (manual bettors using broadcast data).
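Plugging plausible values from the ranges above into the pipeline (the numbers are illustrative, not measurements) shows how the stages add up and how much of the book's repricing window is left to act in:

```python
# Illustrative latency budget, in milliseconds, one value per pipeline stage
budget = {
    'data_reception':  200,
    'model_compute':   100,
    'decision_logic':   10,
    'api_call':        300,
    'bet_acceptance': 3000,
}
total_ms = sum(budget.values())
print(f"round trip: {total_ms} ms ({total_ms / 1000:.1f} s)")

# The window you can exploit is the book's repricing lag minus your
# round trip (assumed lag of 5 s here)
book_update_lag_ms = 5000
exploitable_ms = book_update_lag_ms - total_ms
print(f"exploitable window: {exploitable_ms} ms")
```

Note that the acceptance delay dominates the budget: shaving milliseconds off model computation is pointless if the book holds your bet in a queue for several seconds.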

API Latency

Most sportsbooks that offer programmatic access do so through REST APIs or WebSocket connections. The characteristics differ:

REST APIs:

  - Each bet placement requires a new HTTP request
  - Typical latency: 100-500ms per request
  - Subject to rate limiting
  - Simpler to implement

WebSocket connections:

  - Persistent connection reduces overhead
  - Typical latency: 20-100ms per message
  - Real-time odds streaming possible
  - More complex implementation

Automated vs. Manual Live Betting

The choice between automated and manual live betting depends on your edge source:

Automated systems are necessary when:

  - Your edge depends on speed (exploiting stale lines)
  - You need to bet across many simultaneous events
  - The decision logic is well-defined and algorithmic
  - You are operating in high-frequency update sports (basketball, tennis)

Manual betting can work when:

  - Your edge comes from subjective assessment (e.g., noticing a player is injured)
  - You are betting on lower-frequency events (e.g., halftime markets)
  - The mispricing window is long enough (minutes, not seconds)
  - You are trading on information not captured in standard data feeds

In practice, a hybrid approach often works best: automated systems handle the high-frequency opportunities while the human operator monitors for qualitative edges and adjusts model parameters.

Python Code for Low-Latency Bet Submission

import asyncio
import aiohttp
import time
import json
import hashlib
import hmac
import numpy as np
from dataclasses import dataclass
from typing import Optional, Dict, Any, List
from collections import deque
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


@dataclass
class BetOrder:
    """Represents a bet to be placed."""
    event_id: str
    market_id: str
    selection_id: str
    odds: float
    stake: float
    side: str  # 'back' or 'lay'
    timestamp_created: float = 0.0
    timestamp_submitted: float = 0.0
    timestamp_confirmed: float = 0.0
    status: str = 'pending'
    book_response: Optional[Dict] = None

    def __post_init__(self):
        if self.timestamp_created == 0.0:
            self.timestamp_created = time.time()


class LowLatencyBetExecutor:
    """
    Manages low-latency bet execution across multiple sportsbooks.

    Uses async I/O and connection pooling to minimize execution latency.
    Tracks timing metrics for performance analysis.
    """

    def __init__(self, config: Dict[str, Any]):
        self.config = config
        self.session: Optional[aiohttp.ClientSession] = None
        self.pending_orders: deque = deque(maxlen=1000)
        self.completed_orders: List[BetOrder] = []
        self.latency_history: deque = deque(maxlen=10000)

        # Rate limiting
        self.requests_per_second = config.get('rate_limit', 10)
        self.last_request_time = 0.0
        self.min_interval = 1.0 / self.requests_per_second

    async def initialize(self):
        """Set up persistent connection pool."""
        connector = aiohttp.TCPConnector(
            limit=20,              # Max simultaneous connections
            ttl_dns_cache=300,     # DNS cache TTL
            keepalive_timeout=30,  # Keep connections alive
            enable_cleanup_closed=True,
        )
        timeout = aiohttp.ClientTimeout(
            total=10,       # Total timeout
            connect=2,      # Connection timeout
            sock_read=5,    # Read timeout
        )
        self.session = aiohttp.ClientSession(
            connector=connector,
            timeout=timeout,
            headers={
                'Content-Type': 'application/json',
                'Authorization': f'Bearer {self.config["api_key"]}',
            }
        )

    async def close(self):
        """Clean up connections."""
        if self.session:
            await self.session.close()

    def _sign_request(self, payload: str) -> str:
        """Generate HMAC signature for request authentication."""
        secret = self.config.get('api_secret', '').encode()
        return hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()

    async def _enforce_rate_limit(self):
        """Ensure we don't exceed rate limits."""
        now = time.time()
        elapsed = now - self.last_request_time
        if elapsed < self.min_interval:
            await asyncio.sleep(self.min_interval - elapsed)
        self.last_request_time = time.time()

    async def submit_bet(self, order: BetOrder) -> BetOrder:
        """
        Submit a single bet with minimal latency.

        Returns the order with updated status and timing information.
        """
        if not self.session:
            await self.initialize()

        await self._enforce_rate_limit()

        payload = {
            'event_id': order.event_id,
            'market_id': order.market_id,
            'selection_id': order.selection_id,
            'odds': order.odds,
            'stake': order.stake,
            'side': order.side,
            'accept_odds_change': self.config.get('accept_odds_change', False),
        }

        payload_str = json.dumps(payload, sort_keys=True)
        signature = self._sign_request(payload_str)

        headers = {'X-Signature': signature}

        order.timestamp_submitted = time.time()

        try:
            async with self.session.post(
                f"{self.config['base_url']}/bets",
                json=payload,
                headers=headers,
            ) as response:
                order.timestamp_confirmed = time.time()
                result = await response.json()
                order.book_response = result

                if response.status == 200:
                    order.status = 'accepted'
                elif response.status == 409:
                    order.status = 'odds_changed'
                elif response.status == 429:
                    order.status = 'rate_limited'
                else:
                    order.status = 'rejected'

        except asyncio.TimeoutError:
            order.timestamp_confirmed = time.time()
            order.status = 'timeout'
        except Exception as e:
            order.timestamp_confirmed = time.time()
            order.status = f'error: {str(e)}'

        # Record latency
        submission_latency = order.timestamp_submitted - order.timestamp_created
        execution_latency = order.timestamp_confirmed - order.timestamp_submitted
        total_latency = order.timestamp_confirmed - order.timestamp_created

        self.latency_history.append({
            'submission_latency': submission_latency,
            'execution_latency': execution_latency,
            'total_latency': total_latency,
            'status': order.status,
            'timestamp': order.timestamp_confirmed,
        })

        self.completed_orders.append(order)

        logger.info(
            f"Bet {order.status}: {order.event_id}/{order.market_id} "
            f"@ {order.odds:.3f} for ${order.stake:.2f} "
            f"(latency: {total_latency*1000:.0f}ms)"
        )

        return order

    async def submit_batch(self, orders: List[BetOrder]) -> List[BetOrder]:
        """Submit multiple bets concurrently."""
        tasks = [self.submit_bet(order) for order in orders]
        return await asyncio.gather(*tasks)

    def get_latency_stats(self) -> Dict[str, float]:
        """Calculate latency statistics from recent history."""
        if not self.latency_history:
            return {}

        total_latencies = [h['total_latency'] for h in self.latency_history]
        exec_latencies = [h['execution_latency'] for h in self.latency_history]

        return {
            'mean_total_ms': np.mean(total_latencies) * 1000,
            'p50_total_ms': np.percentile(total_latencies, 50) * 1000,
            'p95_total_ms': np.percentile(total_latencies, 95) * 1000,
            'p99_total_ms': np.percentile(total_latencies, 99) * 1000,
            'mean_exec_ms': np.mean(exec_latencies) * 1000,
            'acceptance_rate': sum(
                1 for h in self.latency_history if h['status'] == 'accepted'
            ) / len(self.latency_history),
            'sample_size': len(self.latency_history),
        }


# Example usage
async def demo_execution():
    config = {
        'base_url': 'https://api.example-sportsbook.com/v1',
        'api_key': 'your_api_key_here',
        'api_secret': 'your_api_secret_here',
        'rate_limit': 10,
        'accept_odds_change': False,
    }

    executor = LowLatencyBetExecutor(config)
    await executor.initialize()

    # Create a bet order
    order = BetOrder(
        event_id='nba_20260218_lal_bos',
        market_id='live_spread',
        selection_id='home',
        odds=1.909,  # -110
        stake=100.0,
        side='back',
    )

    # Submit
    result = await executor.submit_bet(order)
    print(f"Result: {result.status}")
    print(f"Latency: {(result.timestamp_confirmed - result.timestamp_created)*1000:.0f}ms")

    # Print stats
    stats = executor.get_latency_stats()
    for key, val in stats.items():
        print(f"  {key}: {val}")

    await executor.close()


if __name__ == "__main__":
    asyncio.run(demo_execution())

Key design principles in this implementation:

  1. Connection pooling: Reusing TCP connections eliminates the overhead of establishing new connections for each bet.
  2. Async I/O: Non-blocking operations allow the system to submit multiple bets concurrently without waiting for each to complete.
  3. Latency tracking: Every bet records precise timestamps at each stage, enabling detailed performance analysis.
  4. Rate limiting: Built-in rate limiting prevents API bans while maximizing throughput.

33.4 Identifying Mispricing Windows

The most profitable live betting opportunities arise when the bookmaker's odds do not accurately reflect the true probability of an outcome. These mispricings are typically brief, lasting seconds to minutes, and arise from predictable causes.

When Books Are Slow to Adjust

Bookmakers use automated trading engines to update live odds, but these systems have inherent limitations:

Model complexity tradeoffs: Sportsbooks balance model accuracy against computation speed. A more complex model might better capture game dynamics but takes longer to recalculate, creating brief windows where the posted odds lag behind reality.

Multi-market consistency: When a touchdown is scored in an NFL game, the book must simultaneously update the moneyline, spread, total, team totals, half markets, quarter markets, and numerous prop markets. Ensuring consistency across all these markets takes time, and some markets may be updated before others.
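To see why consistency matters, note that the total and the expected margin jointly determine the team totals. A hypothetical helper (real trading engines must also reconcile moneylines, props, and derivative markets):

```python
def implied_team_totals(game_total: float, home_margin: float) -> tuple:
    """Derive team totals consistent with a game total and an expected
    home margin (the negative of the home spread). Illustrative only."""
    home = (game_total + home_margin) / 2.0
    away = (game_total - home_margin) / 2.0
    return home, away

# A 224.5 total with the home side expected to win by 3.5 implies
# team totals of 114.0 and 110.5.
```

If a touchdown moves the spread but a team-total market is repriced from the old margin, the two markets briefly imply contradictory numbers -- exactly the kind of window a fast bettor can exploit.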

Manual intervention: For unusual events -- a bench-clearing brawl, a power outage, a serious injury -- the automated system may not know how to react, and a human trader must step in. This can create longer mispricing windows.

Momentum Events

Momentum shifts create some of the most significant mispricing opportunities in live betting:

Scoring runs: In basketball, a 10-0 run might cause the market to overreact, pricing in a continuation of the run that is unlikely to persist. Mean reversion is a powerful force -- after a 10-0 run, the trailing team's scoring rate typically reverts toward its baseline.

Key plays in football: A long touchdown pass shifts the score dramatically, and the market must reprice. But the market sometimes overcorrects, particularly if the big play was a low-probability event (e.g., a 70-yard pass off a tipped ball) rather than evidence of sustained offensive dominance.

Breaks of serve in tennis: A break of serve is a significant event, but the market sometimes over-adjusts, particularly early in a match when a single break is less decisive.
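One way to formalize this mean-reversion intuition is to shrink the observed in-run scoring rate toward the pre-game baseline, weighting by how little evidence a short run carries. The helper and the constants in the example are illustrative, not empirically fitted:

```python
def shrunk_rate(observed_rate: float, n_obs: int,
                prior_rate: float, prior_strength: float) -> float:
    """Empirical-Bayes style shrinkage: a rate observed over n_obs
    possessions is pulled toward the pre-game prior, with
    prior_strength acting as a pseudo-sample size."""
    w = n_obs / (n_obs + prior_strength)
    return w * observed_rate + (1.0 - w) * prior_rate

# A 10-0 run over 8 possessions (1.25 pts/possession) barely moves
# a 1.10 pts/possession baseline when the prior is worth about 50
# possessions of evidence: the shrunk estimate is roughly 1.12.
```

A market that reprices as though the run-rate will persist is implicitly setting `prior_strength` near zero -- the overreaction that a disciplined model can fade.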

In-Game Injuries

In-game injuries are among the most valuable information edges in live betting:

  1. Observation advantage: If you are watching the game and see a key player limp off the field, you may recognize the significance before the data feed reports it.
  2. Severity assessment: Market reaction to injuries often follows a pattern -- initial overreaction followed by correction, or initial underreaction followed by gradual adjustment. If you can assess injury severity faster than the market, you have an edge.
  3. Substitution patterns: In some sports, the impact of losing a specific player can be precisely quantified. In basketball, you can calculate the expected drop in team efficiency when a star player leaves the game.
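As a hedged sketch of that quantification: prorate the star's on/off net-rating difference over the remaining minutes, then re-price the win probability under a normal approximation of the final margin. The 11-point standard deviation is a rough full-game figure and all numbers are illustrative:

```python
import math


def expected_margin_shift(net_rating_drop: float, minutes_left: float) -> float:
    """Net rating is quoted per 48 minutes; prorate over time remaining."""
    return net_rating_drop / 48.0 * minutes_left


def win_prob_from_margin(expected_margin: float, sd: float = 11.0) -> float:
    """Normal approximation: P(final margin > 0)."""
    return 0.5 * (1.0 + math.erf(expected_margin / (sd * math.sqrt(2.0))))

# A star worth +8 points per 48 minutes exits with 18 minutes left:
# expected_margin_shift(8.0, 18.0) -> 3.0 points off the expected margin.
```

In practice the margin standard deviation shrinks as the game progresses, so a production model would scale `sd` with the time remaining rather than use a fixed value.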

Weather Changes

For outdoor sports, mid-game weather changes can create significant mispricing:

  • Wind picking up in football or baseball changes passing/hitting dynamics
  • Rain starting in soccer changes scoring likelihood and favors defensive teams
  • Temperature drops in evening games can affect ball flight and player performance

Python Code for Detecting Stale Lines

import numpy as np
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Optional
from collections import deque
import time


@dataclass
class OddsSnapshot:
    """A snapshot of odds from a single book at a point in time."""
    book_id: str
    market_id: str
    odds_home: float
    odds_away: float
    timestamp: float
    implied_prob_home: float = 0.0
    implied_prob_away: float = 0.0

    def __post_init__(self):
        # Convert decimal odds to implied probabilities
        self.implied_prob_home = 1.0 / self.odds_home if self.odds_home > 0 else 0.0
        self.implied_prob_away = 1.0 / self.odds_away if self.odds_away > 0 else 0.0


@dataclass
class StaleLine:
    """Represents a detected stale line opportunity."""
    book_id: str
    market_id: str
    direction: str  # 'home' or 'away'
    stale_odds: float
    fair_odds: float
    implied_edge: float
    confidence: float
    staleness_seconds: float
    timestamp: float


class StaleLineDetector:
    """
    Detects stale lines by comparing odds across books and against
    a real-time model. Uses multiple detection methods:

    1. Cross-book comparison: Identifies books that haven't moved
       with the consensus.
    2. Model-based: Compares posted odds to model-generated fair odds.
    3. Velocity-based: Detects when a book's update frequency drops
       below normal, suggesting the line is frozen.
    """

    def __init__(self, min_edge: float = 0.03, min_confidence: float = 0.7):
        """
        Args:
            min_edge: Minimum implied edge to flag (default 3%)
            min_confidence: Minimum confidence to report (default 70%)
        """
        self.min_edge = min_edge
        self.min_confidence = min_confidence

        # Store odds history per book per market
        self.odds_history: Dict[str, Dict[str, deque]] = {}

        # Consensus tracking
        self.consensus_history: Dict[str, deque] = {}

        # Detected opportunities
        self.opportunities: List[StaleLine] = []

    def ingest_odds(self, snapshot: OddsSnapshot):
        """Process a new odds snapshot from a book."""
        book_id = snapshot.book_id
        market_id = snapshot.market_id

        if book_id not in self.odds_history:
            self.odds_history[book_id] = {}
        if market_id not in self.odds_history[book_id]:
            self.odds_history[book_id][market_id] = deque(maxlen=500)

        self.odds_history[book_id][market_id].append(snapshot)

        # Update consensus
        self._update_consensus(market_id, snapshot.timestamp)

    def _update_consensus(self, market_id: str, current_time: float):
        """Calculate consensus odds across all books for a market."""
        if market_id not in self.consensus_history:
            self.consensus_history[market_id] = deque(maxlen=500)

        # Gather most recent odds from each book
        recent_odds = []
        for book_id, markets in self.odds_history.items():
            if market_id in markets and markets[market_id]:
                latest = markets[market_id][-1]
                # Only include if recent (within last 30 seconds)
                if current_time - latest.timestamp < 30.0:
                    recent_odds.append(latest)

        if len(recent_odds) < 2:
            return

        # Consensus = average implied probability (margin-adjusted)
        home_probs = [s.implied_prob_home for s in recent_odds]
        away_probs = [s.implied_prob_away for s in recent_odds]

        # Remove margin by normalizing
        total_probs = [h + a for h, a in zip(home_probs, away_probs)]
        fair_home_probs = [h / t for h, t in zip(home_probs, total_probs)]

        consensus_home_prob = np.median(fair_home_probs)
        consensus_away_prob = 1.0 - consensus_home_prob

        self.consensus_history[market_id].append({
            'timestamp': current_time,
            'home_prob': consensus_home_prob,
            'away_prob': consensus_away_prob,
            'n_books': len(recent_odds),
        })

    def detect_stale_lines(
        self, market_id: str, model_fair_prob: Optional[float] = None
    ) -> List[StaleLine]:
        """
        Scan all books for stale lines on a specific market.

        Args:
            market_id: The market to check
            model_fair_prob: Optional model-generated fair probability for home

        Returns:
            List of detected stale line opportunities
        """
        opportunities = []

        if market_id not in self.consensus_history:
            return opportunities
        if not self.consensus_history[market_id]:
            return opportunities

        consensus = self.consensus_history[market_id][-1]
        current_time = consensus['timestamp']

        for book_id, markets in self.odds_history.items():
            if market_id not in markets or not markets[market_id]:
                continue

            latest = markets[market_id][-1]
            staleness = current_time - latest.timestamp

            # Method 1: Cross-book comparison
            total_implied = latest.implied_prob_home + latest.implied_prob_away
            fair_home = latest.implied_prob_home / total_implied
            fair_away = latest.implied_prob_away / total_implied

            home_diff = consensus['home_prob'] - fair_home
            away_diff = consensus['away_prob'] - fair_away

            # If book's odds imply lower probability than consensus,
            # the book's odds are too generous
            if home_diff > self.min_edge:
                confidence = self._calculate_confidence(
                    home_diff, staleness, consensus['n_books']
                )
                if confidence >= self.min_confidence:
                    opportunities.append(StaleLine(
                        book_id=book_id,
                        market_id=market_id,
                        direction='home',
                        stale_odds=latest.odds_home,
                        fair_odds=1.0 / consensus['home_prob'],
                        implied_edge=home_diff,
                        confidence=confidence,
                        staleness_seconds=staleness,
                        timestamp=current_time,
                    ))

            if away_diff > self.min_edge:
                confidence = self._calculate_confidence(
                    away_diff, staleness, consensus['n_books']
                )
                if confidence >= self.min_confidence:
                    opportunities.append(StaleLine(
                        book_id=book_id,
                        market_id=market_id,
                        direction='away',
                        stale_odds=latest.odds_away,
                        fair_odds=1.0 / consensus['away_prob'],
                        implied_edge=away_diff,
                        confidence=confidence,
                        staleness_seconds=staleness,
                        timestamp=current_time,
                    ))

            # Method 2: Model-based comparison
            if model_fair_prob is not None:
                model_away_prob = 1.0 - model_fair_prob

                model_home_diff = model_fair_prob - fair_home
                model_away_diff = model_away_prob - fair_away

                if model_home_diff > self.min_edge:
                    confidence = self._calculate_confidence(
                        model_home_diff, staleness, 1, model_based=True
                    )
                    if confidence >= self.min_confidence:
                        opportunities.append(StaleLine(
                            book_id=book_id,
                            market_id=market_id,
                            direction='home',
                            stale_odds=latest.odds_home,
                            fair_odds=1.0 / model_fair_prob,
                            implied_edge=model_home_diff,
                            confidence=confidence,
                            staleness_seconds=staleness,
                            timestamp=current_time,
                        ))

                if model_away_diff > self.min_edge:
                    confidence = self._calculate_confidence(
                        model_away_diff, staleness, 1, model_based=True
                    )
                    if confidence >= self.min_confidence:
                        opportunities.append(StaleLine(
                            book_id=book_id,
                            market_id=market_id,
                            direction='away',
                            stale_odds=latest.odds_away,
                            fair_odds=1.0 / model_away_prob,
                            implied_edge=model_away_diff,
                            confidence=confidence,
                            staleness_seconds=staleness,
                            timestamp=current_time,
                        ))

            # Method 3: Velocity-based staleness detection
            stale_velocity = self._check_update_velocity(book_id, market_id)
            if stale_velocity and staleness > 10:
                # Book hasn't updated in a while; flag both sides
                for direction, diff in [('home', home_diff), ('away', away_diff)]:
                    if diff > self.min_edge * 0.5:  # Lower threshold for velocity
                        opportunities.append(StaleLine(
                            book_id=book_id,
                            market_id=market_id,
                            direction=direction,
                            stale_odds=(latest.odds_home if direction == 'home'
                                        else latest.odds_away),
                            fair_odds=1.0 / (consensus['home_prob']
                                             if direction == 'home'
                                             else consensus['away_prob']),
                            implied_edge=diff,
                            confidence=0.6,
                            staleness_seconds=staleness,
                            timestamp=current_time,
                        ))

        self.opportunities.extend(opportunities)
        return opportunities

    def _calculate_confidence(
        self, edge: float, staleness: float, n_books: int,
        model_based: bool = False
    ) -> float:
        """Calculate confidence in a stale line detection."""
        # Higher edge = higher confidence
        edge_factor = min(edge / 0.10, 1.0)

        # More staleness = higher confidence (line hasn't updated)
        stale_factor = min(staleness / 20.0, 1.0)

        # More books in consensus = higher confidence
        book_factor = min(n_books / 5.0, 1.0)

        # Model-based detections get a slight penalty (model might be wrong)
        model_penalty = 0.85 if model_based else 1.0

        confidence = (
            0.4 * edge_factor +
            0.3 * stale_factor +
            0.3 * book_factor
        ) * model_penalty

        return min(confidence, 0.99)

    def _check_update_velocity(self, book_id: str, market_id: str) -> bool:
        """Check if a book's update frequency has dropped abnormally."""
        history = self.odds_history.get(book_id, {}).get(market_id, deque())

        if len(history) < 5:
            return False

        # Calculate recent inter-update intervals
        recent = list(history)[-10:]
        intervals = [
            recent[i].timestamp - recent[i-1].timestamp
            for i in range(1, len(recent))
        ]

        if len(intervals) < 3:
            return False

        avg_interval = np.mean(intervals[:-1])
        last_interval = intervals[-1]

        # If last interval is 3x the average, the line might be stale
        return last_interval > avg_interval * 3.0


# Demonstration
if __name__ == "__main__":
    detector = StaleLineDetector(min_edge=0.02, min_confidence=0.5)

    # Simulate odds from multiple books
    np.random.seed(42)
    base_time = time.time()

    market = "nba_lal_bos_moneyline"

    # Ingest snapshots in timestamp order: Book A updates every 2s for
    # the full window, while Book B stops updating halfway through,
    # leaving a stale line. (Ingesting one book's full history before
    # the other's would leave the consensus timestamped at the stale
    # book's last update, masking the staleness.)
    for i in range(20):
        t = base_time + i * 2
        prob = 0.55 + i * 0.005 + np.random.normal(0, 0.005)
        detector.ingest_odds(OddsSnapshot(
            book_id="book_a", market_id=market,
            odds_home=1.0 / (prob * 1.04),  # 4% margin
            odds_away=1.0 / ((1 - prob) * 1.04),
            timestamp=t,
        ))
        if i < 10:  # Book B goes stale after its tenth update
            detector.ingest_odds(OddsSnapshot(
                book_id="book_b", market_id=market,
                odds_home=1.0 / (prob * 1.05),  # 5% margin
                odds_away=1.0 / ((1 - prob) * 1.05),
                timestamp=t,
            ))

    # Detect stale lines
    opps = detector.detect_stale_lines(market, model_fair_prob=0.65)

    print(f"Detected {len(opps)} stale line opportunities:")
    for opp in opps:
        print(f"  Book: {opp.book_id}, Direction: {opp.direction}, "
              f"Edge: {opp.implied_edge:.1%}, "
              f"Confidence: {opp.confidence:.1%}, "
              f"Stale: {opp.staleness_seconds:.1f}s")

This detector implements three complementary approaches. Cross-book comparison identifies when one book diverges from the consensus. Model-based detection flags lines that differ from your own fair price estimate. Velocity-based detection catches lines that have stopped updating, even if they have not yet diverged significantly from consensus.


33.5 Building a Live Betting Framework

Bringing together all the components discussed in this chapter, we now outline the architecture for a complete live betting system. A production live betting system consists of several interconnected components, each with specific responsibilities.

Architecture Overview

A well-designed live betting system has the following layered architecture:

Layer 1: Data Ingestion
  ├── Odds Feed Handler (multiple books)
  ├── Event Data Feed Handler (Sportradar, etc.)
  ├── Supplementary Data (weather, injuries, lineups)
  └── Data Normalization & Storage

Layer 2: Analytics Engine
  ├── Win Probability Model
  ├── Fair Odds Calculator
  ├── Stale Line Detector
  └── Edge Estimator

Layer 3: Decision Engine
  ├── Kelly Criterion Sizing
  ├── Risk Management
  ├── Portfolio Constraints
  └── Execution Priority Queue

Layer 4: Execution Layer
  ├── Book API Connectors
  ├── Order Management
  ├── Latency Monitoring
  └── Confirmation Tracking

Layer 5: Monitoring & Logging
  ├── Real-Time Dashboard
  ├── P&L Tracking
  ├── Performance Metrics
  └── Alert System

Data Feeds

The data ingestion layer must handle multiple concurrent data streams with different formats, frequencies, and reliability characteristics. Key considerations include:

Redundancy: Never rely on a single data source. Use multiple feeds and cross-validate. If one feed goes down, the system should gracefully degrade.

Normalization: Different data providers use different event IDs, market names, and odds formats. A normalization layer translates everything into a common internal representation.

Timestamping: Every piece of incoming data must be timestamped with both the provider's timestamp and a local receipt timestamp. This enables latency analysis and helps resolve conflicts between data sources.

Buffering: Incoming data should be buffered in a message queue (e.g., Redis, Kafka) to decouple data ingestion from processing. This prevents a slow analytics engine from causing data loss.
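A minimal sketch of the normalization and timestamping ideas follows; the schema and provider names are hypothetical. The layer maps provider identifiers onto canonical ones, converts odds to a single format, and attaches both the provider's timestamp and a local receipt timestamp:

```python
import time
from typing import Dict, Tuple


def american_to_decimal(american: int) -> float:
    """Convert American odds to decimal odds."""
    if american > 0:
        return 1.0 + american / 100.0
    return 1.0 + 100.0 / abs(american)


class FeedNormalizer:
    """Translates provider-specific messages into one internal schema."""

    def __init__(self):
        # (provider, provider_event_id) -> canonical event id
        self.event_map: Dict[Tuple[str, str], str] = {}

    def register(self, provider: str, provider_id: str, canonical: str):
        self.event_map[(provider, provider_id)] = canonical

    def normalize(self, provider: str, raw: dict) -> dict:
        canonical = self.event_map[(provider, raw['event_id'])]
        odds = raw['odds']
        if raw.get('odds_format') == 'american':
            odds = american_to_decimal(odds)
        return {
            'event_id': canonical,
            'odds': odds,
            'provider_ts': raw['timestamp'],  # provider's clock
            'received_ts': time.time(),       # local receipt clock
            'source': provider,
        }
```

A message quoting -110 from a hypothetical provider comes out as decimal 1.909 under the canonical event ID, with both timestamps preserved for latency analysis.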

Decision Engine

The decision engine sits between the analytics layer and the execution layer. It receives model outputs (fair probabilities, edge estimates) and determines which bets to place, at what size, and with which priority.

Key components include:

Edge threshold: Only bets with expected edge above a minimum threshold are considered. This threshold should account for the vig (margin) and the expected slippage between the quoted odds and the actually received odds.

Kelly sizing: Bet size is determined by the Kelly criterion (or a fractional Kelly variant) based on the estimated edge and the confidence in that estimate. See Chapter 24 for the full Kelly framework.

Risk management: Position limits, correlation constraints, and maximum exposure rules prevent catastrophic losses. For example, you might limit total exposure to any single game to 5% of bankroll.

Priority queue: When multiple opportunities arise simultaneously, the system must prioritize based on expected edge, confidence, time sensitivity, and execution probability.
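Such a queue can be sketched with `heapq`; the scoring function here (edge times confidence, boosted by urgency) is one plausible choice, not a canonical one:

```python
import heapq
import itertools


class OpportunityQueue:
    """Max-priority queue: the highest-scoring opportunity pops first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps insertion order

    @staticmethod
    def score(edge: float, confidence: float, seconds_to_expiry: float) -> float:
        # Favor high-edge, high-confidence bets, boosted when the
        # opportunity is about to expire
        urgency = 1.0 / max(seconds_to_expiry, 0.1)
        return edge * confidence * (1.0 + urgency)

    def push(self, opportunity, edge, confidence, seconds_to_expiry):
        s = self.score(edge, confidence, seconds_to_expiry)
        # heapq is a min-heap, so negate the score for max-first ordering
        heapq.heappush(self._heap, (-s, next(self._counter), opportunity))

    def pop(self):
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)
```

Under this scoring, a 3% edge expiring in one second outranks a 5% edge with ten seconds of life -- the urgency boost encodes the fact that the smaller edge vanishes first.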

Execution Layer

The execution layer translates decisions into actual bets placed at sportsbooks. This is the most latency-sensitive component and requires careful engineering.

Python Code for a Complete Live Betting Framework

import asyncio
import numpy as np
import time
import json
import logging
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Callable, Any
from collections import deque
from enum import Enum

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class EventType(Enum):
    SCORE_CHANGE = "score_change"
    ODDS_UPDATE = "odds_update"
    GAME_STATE = "game_state"
    INJURY = "injury"
    TIMEOUT = "timeout"
    PERIOD_CHANGE = "period_change"


@dataclass
class LiveEvent:
    """Represents any event in the live data stream."""
    event_type: EventType
    event_id: str
    market_id: str
    data: Dict[str, Any]
    timestamp: float = field(default_factory=time.time)
    source: str = "unknown"


@dataclass
class BettingOpportunity:
    """Represents a detected betting opportunity."""
    event_id: str
    market_id: str
    book_id: str
    direction: str
    model_prob: float
    book_prob: float
    edge: float
    confidence: float
    recommended_stake: float
    odds_offered: float
    fair_odds: float
    timestamp: float
    expiry_seconds: float = 5.0


@dataclass
class PortfolioState:
    """Current state of the betting portfolio."""
    bankroll: float = 10000.0
    open_positions: Dict[str, float] = field(default_factory=dict)
    total_exposure: float = 0.0
    daily_pnl: float = 0.0
    total_bets: int = 0
    winning_bets: int = 0


class LiveDataIngestion:
    """Layer 1: Handles data ingestion from multiple sources."""

    def __init__(self):
        self.subscribers: List[Callable] = []
        self.event_buffer: deque = deque(maxlen=10000)
        self.latest_odds: Dict[str, Dict[str, Dict]] = {}  # market -> book -> odds

    def subscribe(self, callback: Callable):
        """Register a callback for new events."""
        self.subscribers.append(callback)

    async def process_event(self, event: LiveEvent):
        """Process an incoming event and notify subscribers."""
        self.event_buffer.append(event)

        if event.event_type == EventType.ODDS_UPDATE:
            market = event.market_id
            book = event.data.get('book_id', 'unknown')
            if market not in self.latest_odds:
                self.latest_odds[market] = {}
            self.latest_odds[market][book] = {
                'odds_home': event.data.get('odds_home'),
                'odds_away': event.data.get('odds_away'),
                'timestamp': event.timestamp,
            }

        for callback in self.subscribers:
            try:
                await callback(event)
            except Exception as e:
                logger.error(f"Subscriber error: {e}")

    def get_consensus_odds(self, market_id: str) -> Optional[Dict]:
        """Calculate consensus odds across all books for a market."""
        if market_id not in self.latest_odds:
            return None

        books = self.latest_odds[market_id]
        if not books:
            return None

        home_probs = []
        away_probs = []
        for book_data in books.values():
            oh = book_data['odds_home']
            oa = book_data['odds_away']
            if oh and oa and oh > 1 and oa > 1:
                total = (1 / oh) + (1 / oa)
                home_probs.append((1 / oh) / total)
                away_probs.append((1 / oa) / total)

        if not home_probs:
            return None

        return {
            'home_prob': np.median(home_probs),
            'away_prob': np.median(away_probs),
            'n_books': len(home_probs),
        }


class AnalyticsEngine:
    """Layer 2: Real-time analytics and model updates."""

    def __init__(self, data_layer: LiveDataIngestion):
        self.data_layer = data_layer
        self.models: Dict[str, Any] = {}  # event_id -> model
        self.fair_prices: Dict[str, Dict] = {}  # market_id -> fair prices

    def register_model(self, event_id: str, model):
        """Register a win probability model for an event."""
        self.models[event_id] = model

    async def on_event(self, event: LiveEvent):
        """Process a game event and update models."""
        event_id = event.event_id

        if event_id in self.models:
            model = self.models[event_id]

            if event.event_type in (EventType.SCORE_CHANGE, EventType.GAME_STATE):
                # Update the model with new game state
                game_state = event.data.get('game_state')
                if game_state:
                    win_prob = model.update(game_state)
                    self.fair_prices[event.market_id] = {
                        'home_prob': win_prob,
                        'away_prob': 1.0 - win_prob,
                        'timestamp': event.timestamp,
                        'model_id': type(model).__name__,
                    }
                    logger.debug(
                        f"Updated {event.market_id}: "
                        f"home={win_prob:.3f}, away={1-win_prob:.3f}"
                    )

    def get_fair_price(self, market_id: str) -> Optional[Dict]:
        """Get the current model fair price for a market."""
        return self.fair_prices.get(market_id)


class DecisionEngine:
    """Layer 3: Determines which bets to place and sizing."""

    def __init__(
        self,
        analytics: AnalyticsEngine,
        data_layer: LiveDataIngestion,
        portfolio: PortfolioState,
    ):
        self.analytics = analytics
        self.data_layer = data_layer
        self.portfolio = portfolio

        # Configuration
        self.min_edge = 0.03        # 3% minimum edge
        self.kelly_fraction = 0.25  # Quarter Kelly
        self.max_bet_pct = 0.02     # Max 2% of bankroll per bet
        self.max_event_exposure = 0.05  # Max 5% per event
        self.max_total_exposure = 0.20  # Max 20% total

    async def evaluate_opportunity(
        self, event: LiveEvent
    ) -> Optional[BettingOpportunity]:
        """Evaluate whether a new odds update creates an opportunity."""
        if event.event_type != EventType.ODDS_UPDATE:
            return None

        market_id = event.market_id
        book_id = event.data.get('book_id', 'unknown')

        # Get model fair price
        fair_price = self.analytics.get_fair_price(market_id)
        if not fair_price:
            return None

        # Get the book's current odds
        odds_home = event.data.get('odds_home', 0)
        odds_away = event.data.get('odds_away', 0)

        if not odds_home or not odds_away or odds_home <= 1 or odds_away <= 1:
            return None

        # Calculate edge for each direction
        book_home_prob = 1.0 / odds_home
        book_away_prob = 1.0 / odds_away
        total_implied = book_home_prob + book_away_prob

        # Remove vig for comparison
        book_fair_home = book_home_prob / total_implied
        book_fair_away = book_away_prob / total_implied

        model_home = fair_price['home_prob']
        model_away = fair_price['away_prob']

        # Check home side
        home_edge = model_home - book_fair_home
        away_edge = model_away - book_fair_away

        best_direction = None
        best_edge = 0
        best_odds = 0
        best_model_prob = 0
        best_book_prob = 0

        if home_edge > away_edge and home_edge > self.min_edge:
            best_direction = 'home'
            best_edge = home_edge
            best_odds = odds_home
            best_model_prob = model_home
            best_book_prob = book_fair_home
        elif away_edge > self.min_edge:
            best_direction = 'away'
            best_edge = away_edge
            best_odds = odds_away
            best_model_prob = model_away
            best_book_prob = book_fair_away

        if best_direction is None:
            return None

        # Calculate confidence (blend of edge magnitude, consensus agreement,
        # and model freshness)
        consensus = self.data_layer.get_consensus_odds(market_id)
        consensus_agreement = 0.5
        if consensus:
            consensus_prob = (consensus['home_prob'] if best_direction == 'home'
                              else consensus['away_prob'])
            consensus_agreement = 1.0 - abs(best_model_prob - consensus_prob) / 0.1
            consensus_agreement = max(0.3, min(1.0, consensus_agreement))

        confidence = min(0.95, 0.5 + best_edge * 5 + consensus_agreement * 0.2)

        # Kelly sizing
        # Edge = p * b - q, where p = model prob, b = net odds, q = 1 - p
        net_odds = best_odds - 1
        p = best_model_prob
        q = 1 - p
        kelly_full = (p * net_odds - q) / net_odds
        kelly_bet = max(0, kelly_full * self.kelly_fraction)

        # Apply constraints
        max_bet = self.portfolio.bankroll * self.max_bet_pct
        event_exposure = self.portfolio.open_positions.get(event.event_id, 0)
        event_limit = (self.portfolio.bankroll * self.max_event_exposure
                       - event_exposure)
        total_limit = (self.portfolio.bankroll * self.max_total_exposure
                       - self.portfolio.total_exposure)

        stake = min(
            kelly_bet * self.portfolio.bankroll,
            max_bet,
            max(0, event_limit),
            max(0, total_limit),
        )

        if stake < 1.0:  # Minimum bet size
            return None

        return BettingOpportunity(
            event_id=event.event_id,
            market_id=market_id,
            book_id=book_id,
            direction=best_direction,
            model_prob=best_model_prob,
            book_prob=best_book_prob,
            edge=best_edge,
            confidence=confidence,
            recommended_stake=round(stake, 2),
            odds_offered=best_odds,
            fair_odds=1.0 / best_model_prob,
            timestamp=time.time(),
        )


class ExecutionLayer:
    """Layer 4: Handles bet placement and order management."""

    def __init__(self, portfolio: PortfolioState):
        self.portfolio = portfolio
        self.order_queue: asyncio.Queue = asyncio.Queue()
        self.execution_log: List[Dict] = []

    async def execute(self, opportunity: BettingOpportunity) -> Dict:
        """Execute a betting opportunity."""
        start_time = time.time()

        # In production, this would call the sportsbook API
        # Here we simulate the execution
        result = await self._simulate_execution(opportunity)

        execution_time = time.time() - start_time

        log_entry = {
            'timestamp': time.time(),
            'event_id': opportunity.event_id,
            'market_id': opportunity.market_id,
            'book_id': opportunity.book_id,
            'direction': opportunity.direction,
            'stake': opportunity.recommended_stake,
            'odds': opportunity.odds_offered,
            'edge': opportunity.edge,
            'status': result['status'],
            'execution_ms': execution_time * 1000,
        }
        self.execution_log.append(log_entry)

        if result['status'] == 'accepted':
            self.portfolio.total_bets += 1
            self.portfolio.total_exposure += opportunity.recommended_stake
            event_id = opportunity.event_id
            self.portfolio.open_positions[event_id] = (
                self.portfolio.open_positions.get(event_id, 0)
                + opportunity.recommended_stake
            )

        logger.info(
            f"Execution: {result['status']} | "
            f"{opportunity.direction} @ {opportunity.odds_offered:.3f} "
            f"${opportunity.recommended_stake:.2f} | "
            f"Edge: {opportunity.edge:.1%} | "
            f"Time: {execution_time*1000:.0f}ms"
        )

        return result

    async def _simulate_execution(self, opp: BettingOpportunity) -> Dict:
        """Simulate bet execution with realistic acceptance rates."""
        # Simulate API latency
        await asyncio.sleep(np.random.uniform(0.05, 0.3))

        # Acceptance probability decreases with edge (books reject sharp action)
        acceptance_prob = max(0.3, 1.0 - opp.edge * 5)
        accepted = np.random.random() < acceptance_prob

        return {
            'status': 'accepted' if accepted else 'rejected',
            'odds_received': opp.odds_offered if accepted else None,
            'stake_accepted': opp.recommended_stake if accepted else 0,
        }


class LiveBettingSystem:
    """
    Complete live betting system that ties all layers together.

    This is the main orchestrator that coordinates data ingestion,
    analytics, decision making, and execution.
    """

    def __init__(self, initial_bankroll: float = 10000.0):
        self.portfolio = PortfolioState(bankroll=initial_bankroll)
        self.data_layer = LiveDataIngestion()
        self.analytics = AnalyticsEngine(self.data_layer)
        self.decision = DecisionEngine(
            self.analytics, self.data_layer, self.portfolio
        )
        self.execution = ExecutionLayer(self.portfolio)

        # Wire up event processing pipeline
        self.data_layer.subscribe(self.analytics.on_event)
        self.data_layer.subscribe(self._on_odds_event)

        # Performance tracking
        self.opportunities_found = 0
        self.bets_placed = 0
        self.bets_accepted = 0

    async def _on_odds_event(self, event: LiveEvent):
        """Route incoming events through the decision pipeline.

        The decision engine returns None for events that are not odds
        updates or that present no actionable edge."""
        opportunity = await self.decision.evaluate_opportunity(event)

        if opportunity:
            self.opportunities_found += 1
            result = await self.execution.execute(opportunity)

            self.bets_placed += 1
            if result['status'] == 'accepted':
                self.bets_accepted += 1

    async def ingest_event(self, event: LiveEvent):
        """Main entry point for new data."""
        await self.data_layer.process_event(event)

    def get_status(self) -> Dict:
        """Get current system status."""
        return {
            'bankroll': self.portfolio.bankroll,
            'total_exposure': self.portfolio.total_exposure,
            'exposure_pct': (self.portfolio.total_exposure
                             / self.portfolio.bankroll * 100),
            'opportunities_found': self.opportunities_found,
            'bets_placed': self.bets_placed,
            'bets_accepted': self.bets_accepted,
            'acceptance_rate': (self.bets_accepted / max(1, self.bets_placed)
                                * 100),
            'open_positions': len(self.portfolio.open_positions),
        }


# Demonstration
async def run_demo():
    np.random.seed(42)
    system = LiveBettingSystem(initial_bankroll=10000.0)

    # Register a simple model (using the BayesianWinProbabilityModel from 33.2)
    model = BayesianWinProbabilityModel(pregame_spread=-4.0, pregame_total=220.0)
    system.analytics.register_model("game_001", model)

    # Simulate a game with live odds updates
    sim = LiveGameSimulator(true_home_advantage=6.0, pace=220 / 48.0)
    base_time = time.time()

    print("Running live betting simulation...")
    print("=" * 70)

    for minute in range(48):
        state = sim.simulate_minute()

        # Send game state update
        await system.ingest_event(LiveEvent(
            event_type=EventType.GAME_STATE,
            event_id="game_001",
            market_id="game_001_ml",
            data={'game_state': state},
            timestamp=base_time + minute * 60,
        ))

        # Simulate odds from 3 books with varying latency
        model_prob = system.analytics.fair_prices.get(
            "game_001_ml", {}
        ).get('home_prob', 0.55)

        for book in ['book_a', 'book_b', 'book_c']:
            # Each book quotes the fair probability plus noise, plus a small
            # positive bias that stands in for stale (lagged) prices
            noise = np.random.normal(0, 0.02)
            lag_bias = np.random.uniform(0, 0.03)
            book_prob = float(np.clip(model_prob + noise + lag_bias, 0.05, 0.95))

            # Embed the bookmaker margin so implied probabilities sum to `margin`
            margin = 1.04 + np.random.uniform(0, 0.02)
            odds_home = 1.0 / (book_prob * margin)
            odds_away = 1.0 / ((1 - book_prob) * margin)

            await system.ingest_event(LiveEvent(
                event_type=EventType.ODDS_UPDATE,
                event_id="game_001",
                market_id="game_001_ml",
                data={
                    'book_id': book,
                    'odds_home': odds_home,
                    'odds_away': odds_away,
                },
                timestamp=base_time + minute * 60 + np.random.uniform(0, 2),
            ))

        if minute % 12 == 0:
            status = system.get_status()
            print(f"\n--- Minute {minute} ---")
            print(f"Score: {state.home_score}-{state.away_score}")
            print(f"Opportunities: {status['opportunities_found']}, "
                  f"Bets: {status['bets_placed']}, "
                  f"Accepted: {status['bets_accepted']}")
            print(f"Exposure: ${status['total_exposure']:.2f} "
                  f"({status['exposure_pct']:.1f}%)")

    # Final summary
    print("\n" + "=" * 70)
    print("SIMULATION COMPLETE")
    status = system.get_status()
    for key, val in status.items():
        print(f"  {key}: {val}")


if __name__ == "__main__":
    asyncio.run(run_demo())

This framework demonstrates the complete architecture in a single runnable program. In production, each layer would be a separate service communicating through message queues, with proper error handling, persistence, and monitoring. The key insight is that the layered architecture allows each component to be developed, tested, and optimized independently.

Production Considerations

When deploying a live betting system in production, several additional concerns arise:

Reliability: The system must handle network failures, API outages, and data feed interruptions gracefully. Circuit breakers should pause betting when data quality degrades.
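One way to sketch the circuit-breaker idea (the `DataQualityBreaker` name and all thresholds are illustrative, not from the chapter's implementation):

```python
import time


class DataQualityBreaker:
    """Pauses betting when the feed goes stale or the recent error rate
    spikes. All thresholds are illustrative defaults, not recommendations."""

    def __init__(self, max_staleness_s: float = 5.0,
                 max_error_rate: float = 0.2, window: int = 50):
        self.max_staleness_s = max_staleness_s
        self.max_error_rate = max_error_rate
        self.window = window
        self.last_tick = time.time()
        self.results: list = []  # True = clean message, False = parse/API error

    def record(self, ok: bool) -> None:
        """Call once per feed message."""
        self.last_tick = time.time()
        self.results = (self.results + [ok])[-self.window:]

    def betting_allowed(self) -> bool:
        if time.time() - self.last_tick > self.max_staleness_s:
            return False  # feed frozen: stop placing bets
        if self.results:
            error_rate = 1.0 - sum(self.results) / len(self.results)
            if error_rate > self.max_error_rate:
                return False  # too many bad messages in the window
        return True
```

The decision engine would consult `betting_allowed()` before evaluating any opportunity, so degraded data quietly pauses the whole pipeline rather than producing bets from garbage inputs.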

State management: If the system crashes mid-game, it needs to recover its state -- open positions, model parameters, and portfolio exposure -- from persistent storage.
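A minimal sketch of crash-safe checkpointing (function names and the JSON format are illustrative; a production system might use a database instead):

```python
import json
import os
import tempfile


def checkpoint_state(state: dict, path: str) -> None:
    """Atomically persist system state (open positions, exposure, model
    parameters) so a crashed process can recover. Writing to a temp file
    and then renaming avoids leaving a half-written checkpoint behind."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX


def restore_state(path: str, default: dict) -> dict:
    """Load the last checkpoint, falling back to `default` on a cold
    start or a corrupted file."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return default
```

Checkpointing after every accepted bet keeps the recovery window to at most one wager.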

Monitoring: Real-time dashboards should show key metrics: latency percentiles, acceptance rates, edge distribution, exposure, and P&L. Alerts should fire on anomalous conditions (all bets rejected, a frozen data feed, extreme model outputs).
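The latency-percentile metric can be computed directly from the `execution_ms` values the execution layer already logs; a minimal sketch:

```python
import numpy as np


def latency_report(samples_ms: list) -> dict:
    """Summarize end-to-end pipeline latency. Tail percentiles (p95/p99)
    matter more than the mean: a bet that arrives late is a bet at stale
    odds, even if the median looks healthy."""
    a = np.asarray(samples_ms, dtype=float)
    return {
        'p50_ms': float(np.percentile(a, 50)),
        'p95_ms': float(np.percentile(a, 95)),
        'p99_ms': float(np.percentile(a, 99)),
        'max_ms': float(a.max()),
    }
```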

Backtesting: Before deploying any live strategy, it must be thoroughly backtested on historical live data, including realistic execution assumptions (acceptance rates, latency, slippage).
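A minimal sketch of what "realistic execution assumptions" can mean when settling a backtested bet (the rejection and slippage parameters are illustrative, mirroring the simulator above rather than any real book):

```python
import numpy as np


def settle_with_execution_model(win: bool, stake: float, odds: float,
                                edge: float, rng: np.random.Generator,
                                rejection_slope: float = 5.0,
                                slippage: float = 0.01) -> float:
    """Settle one hypothetical bet under pessimistic execution assumptions:
    sharper (higher-edge) bets are rejected more often, and accepted bets
    fill at slightly worse odds. Returns realized P&L (0.0 if rejected)."""
    p_accept = max(0.3, 1.0 - rejection_slope * edge)
    if rng.random() >= p_accept:
        return 0.0                          # book rejected the bet
    filled_odds = odds * (1.0 - slippage)   # execute at a worse price
    return stake * (filled_odds - 1.0) if win else -stake
```

Running the same strategy with and without this haircut quickly reveals how much of its paper edge survives contact with execution.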

Compliance: Ensure your operation complies with all applicable regulations regarding automated betting, API usage terms of service, and reporting requirements.


33.6 Chapter Summary

This chapter laid the groundwork for Part VII by establishing the fundamental concepts, tools, and architecture required for live betting.

Key takeaways:

  1. Live betting is structurally different from pre-game betting. Wider margins, lower limits, and bet acceptance delays create a unique market microstructure. Understanding this structure is essential before attempting to trade live markets.

  2. Bayesian updating provides the natural framework for live models. Starting with a prior from pre-game markets and updating iteratively as game events unfold, we can maintain a real-time estimate of outcome probabilities. The precision of our estimate naturally increases as more of the game is observed.

  3. Latency is a first-class concern. The speed of your data feed, model computation, and execution pipeline directly determines which opportunities you can capture. Understanding the latency hierarchy -- from court-side scouts to broadcast delays -- is essential for realistic strategy design.

  4. Mispricing windows arise from predictable causes. Books are slow to adjust after scoring events, momentum shifts, injuries, and weather changes. By building detectors for these specific scenarios, you can systematically identify and exploit brief periods of market inefficiency.

  5. A complete live betting system requires careful architecture. The five-layer design -- data ingestion, analytics, decision engine, execution, and monitoring -- provides a framework for building a robust, scalable operation. Each layer can be developed and optimized independently while communicating through well-defined interfaces.

  6. Risk management is paramount in live betting. The speed and volume of live betting amplify both gains and losses. Kelly sizing, position limits, and correlation-aware risk management are not optional extras -- they are survival requirements.
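The Kelly sizing in takeaway 6 reduces to a one-line formula: for decimal odds d and win probability p, the full-Kelly fraction is f* = (p(d-1) - (1-p))/(d-1). A minimal sketch with fractional-Kelly scaling (the 0.5 default is a common conservative choice, not a chapter prescription):

```python
def kelly_fraction(p: float, decimal_odds: float,
                   fraction: float = 0.5) -> float:
    """Fractional Kelly stake as a share of bankroll for a binary bet.
    Returns 0.0 when the model sees no positive edge."""
    b = decimal_odds - 1.0                  # net odds received on a win
    full_kelly = (p * b - (1.0 - p)) / b    # classic Kelly criterion
    return max(0.0, full_kelly * fraction)
```

At p = 0.55 and even money (decimal 2.00) this gives a full-Kelly fraction of 0.10, so the half-Kelly default stakes 5% of bankroll.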

In subsequent chapters, we will build on this foundation to explore specific live betting strategies, prop betting markets, and futures markets that leverage many of the same real-time modeling and execution principles introduced here.