
Chapter 9: Defensive Metrics and Analysis

Introduction

Defense wins championships—or so the saying goes. But how do we measure defensive performance beyond simple points allowed? Traditional statistics like tackles and sacks tell only part of the story. Advanced defensive analytics dig deeper, measuring not just what defenders did, but how well they did it relative to opportunity.

This chapter explores the comprehensive landscape of defensive metrics, from basic counting stats to sophisticated measures of coverage and pressure. You'll learn to evaluate individual defenders, unit performance, and how to adjust for opponent quality. By the end, you'll have the tools to identify truly elite defenses and the players who make them so.

Learning Objectives

After completing this chapter, you will be able to:

  1. Calculate and interpret traditional defensive statistics
  2. Apply advanced metrics like EPA allowed and success rate allowed
  3. Evaluate pass rush effectiveness using pressure rate and win rate
  4. Analyze coverage performance with target-based metrics
  5. Adjust defensive metrics for opponent and situation
  6. Build comprehensive defensive evaluation systems

9.1 Traditional Defensive Statistics

Basic Counting Statistics

Traditional defensive stats form the foundation of defensive evaluation. While limited on their own, they provide important context.

Tackles: The most basic defensive statistic, credited when a defender brings the ball carrier to the ground and ends the play.

import pandas as pd
import numpy as np
from typing import Dict, List, Tuple, Optional
from dataclasses import dataclass

@dataclass
class DefensiveStats:
    """Container for basic defensive statistics."""
    player_name: str
    position: str
    games: int
    solo_tackles: int
    assisted_tackles: int
    tackles_for_loss: int
    sacks: float
    interceptions: int
    pass_breakups: int
    forced_fumbles: int
    fumble_recoveries: int

    @property
    def total_tackles(self) -> float:
        """Calculate total tackles (solo + 0.5 * assisted)."""
        return self.solo_tackles + 0.5 * self.assisted_tackles

    @property
    def tackles_per_game(self) -> float:
        """Calculate tackles per game."""
        return self.total_tackles / self.games if self.games > 0 else 0


class TraditionalDefensiveCalculator:
    """
    Calculate traditional defensive statistics.

    Includes tackles, sacks, turnovers, and per-game rates.
    """

    def __init__(self, players: List[DefensiveStats]):
        self.players = players

    def calculate_team_stats(self) -> Dict:
        """Calculate team-level defensive totals."""
        return {
            'total_tackles': sum(p.total_tackles for p in self.players),
            'tackles_for_loss': sum(p.tackles_for_loss for p in self.players),
            'sacks': sum(p.sacks for p in self.players),
            'interceptions': sum(p.interceptions for p in self.players),
            'pass_breakups': sum(p.pass_breakups for p in self.players),
            'forced_fumbles': sum(p.forced_fumbles for p in self.players),
            'fumble_recoveries': sum(p.fumble_recoveries for p in self.players),
            'total_turnovers': (sum(p.interceptions for p in self.players) +
                               sum(p.fumble_recoveries for p in self.players))
        }

    def rank_players(self, stat: str, top_n: int = 10) -> pd.DataFrame:
        """Rank players by specified statistic."""
        data = []
        for p in self.players:
            value = getattr(p, stat, None)
            if value is not None:
                data.append({
                    'player': p.player_name,
                    'position': p.position,
                    'games': p.games,
                    stat: value,
                    f'{stat}_per_game': value / p.games if p.games > 0 else 0
                })

        df = pd.DataFrame(data)
        return df.nlargest(top_n, stat)


# Example usage
print("=" * 70)
print("TRADITIONAL DEFENSIVE STATISTICS")
print("=" * 70)

# Sample defensive players
sample_defenders = [
    DefensiveStats("Marcus Johnson", "LB", 12, 78, 32, 12, 3.5, 1, 5, 2, 1),
    DefensiveStats("Tyler Williams", "DT", 12, 28, 18, 8, 6.0, 0, 2, 1, 0),
    DefensiveStats("Devon Smith", "CB", 12, 42, 8, 2, 0.0, 4, 12, 1, 0),
    DefensiveStats("Jordan Davis", "S", 12, 52, 22, 4, 1.0, 2, 8, 2, 1),
    DefensiveStats("Alex Brown", "DE", 12, 35, 14, 15, 9.5, 0, 3, 3, 1),
]

calc = TraditionalDefensiveCalculator(sample_defenders)
team_stats = calc.calculate_team_stats()

print("\nTeam Defensive Totals:")
for stat, value in team_stats.items():
    print(f"  {stat}: {value}")

print("\nTop Tacklers (by solo tackles):")
print(calc.rank_players('solo_tackles', 5).to_string(index=False))

Limitations of Traditional Statistics

Traditional defensive stats suffer from several limitations:

  1. Context-blind: A tackle on a 15-yard gain is counted the same as a tackle for loss
  2. Opportunity-dependent: Linebackers get more tackle opportunities than cornerbacks
  3. No quality adjustment: Doesn't account for opponent strength
  4. Missing coverage info: Tackles don't tell us about coverage success
  5. Scheme-dependent: 3-4 vs 4-3 creates different stat distributions

The Tackle Problem:

def demonstrate_tackle_limitations():
    """
    Show why raw tackle numbers can be misleading.
    """
    # Two linebackers with same tackle totals, different impact
    lb1 = {
        'name': 'Run-Stopper LB',
        'tackles': 95,
        'avg_tackle_depth': 1.2,    # Stops runs early
        'missed_tackles': 8,
        'tackles_per_opportunity': 0.82
    }

    lb2 = {
        'name': 'Cleanup LB',
        'tackles': 95,
        'avg_tackle_depth': 4.8,    # Tackles after big gains
        'missed_tackles': 22,
        'tackles_per_opportunity': 0.68
    }

    print("Two linebackers with identical tackle totals:")
    print("-" * 50)
    print(f"{'Metric':<25} {lb1['name']:<15} {lb2['name']:<15}")
    print("-" * 50)

    for metric in ['tackles', 'avg_tackle_depth', 'missed_tackles',
                   'tackles_per_opportunity']:
        print(f"{metric:<25} {lb1[metric]:<15} {lb2[metric]:<15}")

    print("\nConclusion: LB1 is far more impactful despite same tackle total")

demonstrate_tackle_limitations()

9.2 Advanced Defensive Metrics

EPA Allowed

Expected Points Added (EPA) allowed measures the value a defense gives up on each play. Negative EPA allowed is good for the defense.

class DefensiveEPACalculator:
    """
    Calculate EPA allowed by defense.

    EPA allowed = EP after play - EP before play
    Negative values indicate good defensive plays.
    """

    def __init__(self):
        # Simplified EP table (yard line -> expected points)
        self.ep_table = self._build_ep_table()

    def _build_ep_table(self) -> Dict[int, float]:
        """Build expected points by yard line."""
        ep = {}
        for yard in range(1, 100):
            # Simplified: linear interpolation
            # Own 1 = -1.5 EP, Own 50 = 2.0 EP, Opp 1 = 6.0 EP
            if yard <= 50:
                ep[yard] = -1.5 + (yard / 50) * 3.5
            else:
                ep[yard] = 2.0 + ((yard - 50) / 49) * 4.0
        return ep

    def get_ep(self, yard_line: int, down: int, distance: int) -> float:
        """
        Get expected points for a game state.

        Simplified model using yard line and down/distance adjustments.
        """
        base_ep = self.ep_table.get(yard_line, 0)

        # Down adjustments
        down_adj = {1: 0, 2: -0.3, 3: -0.7, 4: -1.2}
        base_ep += down_adj.get(down, 0)

        # Distance adjustment (longer = harder)
        if distance > 10:
            base_ep -= 0.5

        return base_ep

    def calculate_play_epa(self, play: Dict) -> float:
        """
        Calculate EPA for a single play from defensive perspective.

        Parameters:
        -----------
        play : dict
            Keys: yard_line_before, yard_line_after, down_before, down_after,
                  distance_before, distance_after, turnover, touchdown
        """
        # EP before play (offense's perspective)
        ep_before = self.get_ep(
            play['yard_line_before'],
            play['down_before'],
            play['distance_before']
        )

        # Handle special cases
        if play.get('turnover', False):
            # Turnover: defense gets ball
            new_yard = 100 - play['yard_line_after']
            ep_after = -self.get_ep(new_yard, 1, 10)
        elif play.get('touchdown', False):
            ep_after = 7.0
        else:
            ep_after = self.get_ep(
                play['yard_line_after'],
                play['down_after'],
                play['distance_after']
            )

        # EPA allowed = how much offense gained
        epa_allowed = ep_after - ep_before

        return round(epa_allowed, 2)

    def calculate_defensive_epa(self, plays: List[Dict]) -> Dict:
        """Calculate total and per-play EPA allowed."""
        if not plays:
            return {'error': 'No plays'}

        epas = [self.calculate_play_epa(p) for p in plays]
        stops = sum(1 for e in epas if e < 0)  # negative EPA allowed = win for the defense

        return {
            'total_plays': len(plays),
            'total_epa_allowed': round(sum(epas), 1),
            'epa_per_play': round(np.mean(epas), 3),
            'defensive_wins': stops,
            'defensive_win_rate': round(stops / len(epas) * 100, 1)
        }


# Example
epa_calc = DefensiveEPACalculator()

# Yard lines are measured from the offense's own goal line (matching the EP
# table above), so a gain moves the ball to a higher yard number.
sample_plays = [
    {'yard_line_before': 25, 'yard_line_after': 28, 'down_before': 1,
     'down_after': 2, 'distance_before': 10, 'distance_after': 7},
    {'yard_line_before': 28, 'yard_line_after': 28, 'down_before': 2,
     'down_after': 3, 'distance_before': 7, 'distance_after': 7},
    {'yard_line_before': 28, 'yard_line_after': 50, 'down_before': 3,
     'down_after': 1, 'distance_before': 7, 'distance_after': 10, 'turnover': True},
]

print("\n" + "=" * 70)
print("DEFENSIVE EPA ANALYSIS")
print("=" * 70)

for i, play in enumerate(sample_plays):
    epa = epa_calc.calculate_play_epa(play)
    print(f"Play {i+1}: EPA Allowed = {epa:+.2f}")

Success Rate Allowed

Success rate allowed measures the percentage of plays where the offense achieves success (based on down and distance).

class DefensiveSuccessRateCalculator:
    """
    Calculate success rate allowed by defense.

    Success is defined as:
    - 1st down: Gain 40%+ of needed yards
    - 2nd down: Gain 50%+ of needed yards
    - 3rd/4th down: Convert first down
    """

    def __init__(self):
        self.thresholds = {
            1: 0.40,
            2: 0.50,
            3: 1.00,
            4: 1.00
        }

    def is_successful_offense(self, play: Dict) -> bool:
        """Determine if offense was successful on play."""
        down = play['down']
        distance = play['distance']
        yards_gained = play['yards_gained']

        threshold = self.thresholds.get(down, 1.0)
        required = distance * threshold

        return yards_gained >= required

    def calculate_success_rate_allowed(self, plays: List[Dict]) -> Dict:
        """
        Calculate defensive success rate allowed.

        Returns overall and by-down breakdown.
        """
        total = len(plays)
        successful_offense = sum(1 for p in plays if self.is_successful_offense(p))

        # By down
        by_down = {}
        for down in [1, 2, 3, 4]:
            down_plays = [p for p in plays if p['down'] == down]
            if down_plays:
                down_success = sum(1 for p in down_plays
                                  if self.is_successful_offense(p))
                by_down[down] = {
                    'plays': len(down_plays),
                    'success_allowed': down_success,
                    'rate': round(down_success / len(down_plays) * 100, 1)
                }

        return {
            'total_plays': total,
            'success_allowed': successful_offense,
            'success_rate_allowed': round(successful_offense / total * 100, 1),
            'stop_rate': round((total - successful_offense) / total * 100, 1),
            'by_down': by_down
        }


# Example
success_calc = DefensiveSuccessRateCalculator()

sample_defensive_plays = [
    {'down': 1, 'distance': 10, 'yards_gained': 3},   # Stop
    {'down': 2, 'distance': 7, 'yards_gained': 4},    # Success (offense)
    {'down': 3, 'distance': 3, 'yards_gained': 2},    # Stop
    {'down': 1, 'distance': 10, 'yards_gained': 6},   # Success
    {'down': 2, 'distance': 4, 'yards_gained': 1},    # Stop
]

result = success_calc.calculate_success_rate_allowed(sample_defensive_plays)

print("\n" + "=" * 70)
print("DEFENSIVE SUCCESS RATE ALLOWED")
print("=" * 70)
print(f"Success Rate Allowed: {result['success_rate_allowed']}%")
print(f"Stop Rate: {result['stop_rate']}%")
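
The returned dictionary also carries a by-down breakdown, which is often more revealing than the overall rate. Here is a standalone sketch of that per-down loop with the success thresholds re-implemented inline (the `offense_successful` helper is hypothetical, mirroring `is_successful_offense` above):

```python
# Hypothetical standalone helper mirroring is_successful_offense above.
def offense_successful(down, distance, yards_gained):
    thresholds = {1: 0.40, 2: 0.50, 3: 1.00, 4: 1.00}
    return yards_gained >= distance * thresholds.get(down, 1.0)

plays = [
    {'down': 1, 'distance': 10, 'yards_gained': 3},
    {'down': 2, 'distance': 7, 'yards_gained': 4},
    {'down': 3, 'distance': 3, 'yards_gained': 2},
    {'down': 1, 'distance': 10, 'yards_gained': 6},
    {'down': 2, 'distance': 4, 'yards_gained': 1},
]

# Per-down success rate allowed, the same breakdown stored under 'by_down'
for down in sorted({p['down'] for p in plays}):
    subset = [p for p in plays if p['down'] == down]
    wins = sum(offense_successful(**p) for p in subset)
    print(f"Down {down}: {wins}/{len(subset)} successful "
          f"({wins / len(subset) * 100:.1f}% allowed)")
```

A defense can post a respectable overall stop rate while still bleeding third-down conversions, which this view exposes.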

9.3 Pass Rush Analysis

Pressure Rate and Win Rate

Pressure rate measures how often the defense pressures the quarterback. Pass rush win rate (PRWR) measures how often rushers beat their blockers.

class PassRushAnalyzer:
    """
    Analyze pass rush effectiveness.

    Metrics:
    - Pressure rate: QB pressures / dropbacks
    - Sack rate: Sacks / dropbacks
    - Pass rush win rate: Block wins / pass rush snaps
    - Time to pressure: Average time to generate pressure
    """

    def __init__(self):
        # Benchmarks for evaluation
        self.benchmarks = {
            'pressure_rate': {'elite': 35, 'good': 30, 'average': 25},
            'sack_rate': {'elite': 8, 'good': 6, 'average': 5},
            'prwr': {'elite': 50, 'good': 45, 'average': 40}
        }

    def analyze_team_pass_rush(self, plays: List[Dict]) -> Dict:
        """
        Analyze team pass rush effectiveness.

        Parameters:
        -----------
        plays : list
            Each play needs: dropback (bool), pressure (bool), sack (bool),
                            hurry (bool), hit (bool)
        """
        dropbacks = [p for p in plays if p.get('dropback', False)]

        if not dropbacks:
            return {'error': 'No dropbacks found'}

        n = len(dropbacks)

        # Pressure components
        pressures = sum(1 for p in dropbacks if p.get('pressure', False))
        sacks = sum(1 for p in dropbacks if p.get('sack', False))
        hurries = sum(1 for p in dropbacks if p.get('hurry', False))
        hits = sum(1 for p in dropbacks if p.get('hit', False))

        # Rates
        pressure_rate = pressures / n * 100
        sack_rate = sacks / n * 100
        hurry_rate = hurries / n * 100

        # Results when pressured
        pressured_plays = [p for p in dropbacks if p.get('pressure', False)]
        if pressured_plays:
            sack_when_pressured = sum(1 for p in pressured_plays
                                      if p.get('sack', False))
            conversion_rate = sack_when_pressured / len(pressured_plays) * 100
        else:
            conversion_rate = 0

        return {
            'dropbacks': n,
            'pressures': pressures,
            'pressure_rate': round(pressure_rate, 1),
            'sacks': sacks,
            'sack_rate': round(sack_rate, 1),
            'hurries': hurries,
            'hurry_rate': round(hurry_rate, 1),
            'hits': hits,
            'sack_conversion_rate': round(conversion_rate, 1),
            'evaluation': self._evaluate(pressure_rate, sack_rate)
        }

    def _evaluate(self, pressure_rate: float, sack_rate: float) -> str:
        """Evaluate pass rush (simplified: pressure rate drives the grade;
        sack_rate is accepted for future extension)."""
        if pressure_rate >= self.benchmarks['pressure_rate']['elite']:
            return 'Elite'
        elif pressure_rate >= self.benchmarks['pressure_rate']['good']:
            return 'Good'
        elif pressure_rate >= self.benchmarks['pressure_rate']['average']:
            return 'Average'
        else:
            return 'Below Average'

    def analyze_individual_rusher(self, rusher_snaps: List[Dict]) -> Dict:
        """
        Analyze individual pass rusher.

        Each snap needs: pass_rush (bool), pressure (bool),
                        sack (bool), win (bool), time_to_pressure
        """
        rush_snaps = [s for s in rusher_snaps if s.get('pass_rush', False)]

        if not rush_snaps:
            return {'error': 'No pass rush snaps'}

        n = len(rush_snaps)

        pressures = sum(1 for s in rush_snaps if s.get('pressure', False))
        sacks = sum(1 for s in rush_snaps if s.get('sack', False))
        wins = sum(1 for s in rush_snaps if s.get('win', False))

        # Time to pressure (only on successful pressures)
        pressure_snaps = [s for s in rush_snaps if s.get('pressure', False)]
        if pressure_snaps:
            avg_time = np.mean([s.get('time_to_pressure', 3.0)
                               for s in pressure_snaps])
        else:
            avg_time = None

        return {
            'rush_snaps': n,
            'pressures': pressures,
            'pressure_rate': round(pressures / n * 100, 1),
            'sacks': sacks,
            'sack_rate': round(sacks / n * 100, 1),
            'wins': wins,
            'win_rate': round(wins / n * 100, 1),
            'avg_time_to_pressure': round(avg_time, 2) if avg_time is not None else 'N/A'
        }


# Example
print("\n" + "=" * 70)
print("PASS RUSH ANALYSIS")
print("=" * 70)

# Generate sample dropback data
np.random.seed(42)
sample_dropbacks = []

for _ in range(400):  # Season worth of dropbacks
    pressure = np.random.random() < 0.28
    sack = pressure and np.random.random() < 0.22
    hurry = pressure and not sack and np.random.random() < 0.45
    hit = pressure and not sack and np.random.random() < 0.30

    sample_dropbacks.append({
        'dropback': True,
        'pressure': pressure,
        'sack': sack,
        'hurry': hurry,
        'hit': hit
    })

rush_analyzer = PassRushAnalyzer()
team_rush = rush_analyzer.analyze_team_pass_rush(sample_dropbacks)

print("\nTeam Pass Rush Results:")
print(f"  Dropbacks Faced: {team_rush['dropbacks']}")
print(f"  Pressure Rate: {team_rush['pressure_rate']}%")
print(f"  Sack Rate: {team_rush['sack_rate']}%")
print(f"  Hurry Rate: {team_rush['hurry_rate']}%")
print(f"  Sack Conversion: {team_rush['sack_conversion_rate']}%")
print(f"  Evaluation: {team_rush['evaluation']}")
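
`analyze_individual_rusher` applies the same logic at the player level. A standalone sketch with synthetic snap data follows (the snap dictionary shape matches the method's docstring; all probabilities are illustrative assumptions, not league benchmarks):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic snaps for a single edge rusher; every rate below is an
# invented illustration, not a measured value.
snaps = []
for _ in range(300):
    win = rng.random() < 0.45                # beat the blocker
    pressure = win and rng.random() < 0.55   # win converted into pressure
    snaps.append({
        'pass_rush': True,
        'win': win,
        'pressure': pressure,
        'sack': pressure and rng.random() < 0.20,
        'time_to_pressure': round(rng.normal(2.6, 0.4), 2) if pressure else None,
    })

n = len(snaps)
wins = sum(s['win'] for s in snaps)
pressures = sum(s['pressure'] for s in snaps)
sacks = sum(s['sack'] for s in snaps)
times = [s['time_to_pressure'] for s in snaps if s['pressure']]

print(f"Win rate: {wins / n * 100:.1f}%")
print(f"Pressure rate: {pressures / n * 100:.1f}%")
print(f"Sack rate: {sacks / n * 100:.1f}%")
print(f"Avg time to pressure: {np.mean(times):.2f}s")
```

Because pressure here is generated only on snaps the rusher wins, the sketch also illustrates why win rate is usually the larger number: many wins come after the ball is already out and never register as pressures.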

Blitz Analysis

Blitzing—sending more than the standard four rushers—creates risk-reward tradeoffs that analytics can quantify.

class BlitzAnalyzer:
    """
    Analyze blitz effectiveness and tendencies.
    """

    def analyze_blitz_effectiveness(self, plays: List[Dict]) -> Dict:
        """
        Compare performance when blitzing vs. not blitzing.

        Each play needs: blitz (bool), plus standard passing metrics
        """
        blitz_plays = [p for p in plays if p.get('blitz', False)]
        no_blitz = [p for p in plays if not p.get('blitz', False)]

        def analyze_subset(subset, name):
            if not subset:
                return {}

            pressures = sum(1 for p in subset if p.get('pressure', False))
            sacks = sum(1 for p in subset if p.get('sack', False))
            completions = sum(1 for p in subset if p.get('completion', False))
            yards = sum(p.get('yards', 0) for p in subset)

            # EPA if available
            if 'epa' in subset[0]:
                avg_epa = np.mean([p['epa'] for p in subset])
            else:
                avg_epa = None

            return {
                'plays': len(subset),
                'pressure_rate': round(pressures / len(subset) * 100, 1),
                'sack_rate': round(sacks / len(subset) * 100, 1),
                'completion_rate': round(completions / len(subset) * 100, 1),
                'yards_per_play': round(yards / len(subset), 2),
                'avg_epa': round(avg_epa, 3) if avg_epa is not None else 'N/A'
            }

        blitz_analysis = analyze_subset(blitz_plays, 'blitz')
        no_blitz_analysis = analyze_subset(no_blitz, 'no_blitz')

        # Blitz rate
        total = len(plays)
        blitz_rate = len(blitz_plays) / total * 100 if total > 0 else 0

        return {
            'blitz_rate': round(blitz_rate, 1),
            'when_blitzing': blitz_analysis,
            'when_not_blitzing': no_blitz_analysis
        }


# Example
print("\n" + "=" * 70)
print("BLITZ ANALYSIS")
print("=" * 70)

# Generate blitz data
blitz_plays = []
for _ in range(400):
    blitz = np.random.random() < 0.35  # 35% blitz rate

    if blitz:
        pressure = np.random.random() < 0.42  # Higher pressure when blitzing
        completion = np.random.random() < 0.58  # Lower completion
        yards = np.random.choice([0, 5, 8, 15, 25, 40], p=[0.2, 0.3, 0.2, 0.15, 0.1, 0.05])
    else:
        pressure = np.random.random() < 0.22
        completion = np.random.random() < 0.65
        yards = np.random.choice([0, 5, 8, 12, 18], p=[0.15, 0.35, 0.25, 0.15, 0.1])

    sack = pressure and np.random.random() < 0.20

    blitz_plays.append({
        'blitz': blitz,
        'pressure': pressure,
        'sack': sack,
        'completion': completion and not sack,
        'yards': 0 if sack else yards,
        'epa': np.random.normal(-0.1 if pressure else 0.1, 0.5)
    })

blitz_analyzer = BlitzAnalyzer()
blitz_result = blitz_analyzer.analyze_blitz_effectiveness(blitz_plays)

print(f"\nBlitz Rate: {blitz_result['blitz_rate']}%")
print("\nWhen Blitzing:")
for k, v in blitz_result['when_blitzing'].items():
    print(f"  {k}: {v}")
print("\nWhen NOT Blitzing:")
for k, v in blitz_result['when_not_blitzing'].items():
    print(f"  {k}: {v}")

9.4 Coverage Analysis

Target-Based Coverage Metrics

Modern coverage evaluation focuses on what happens when defenders are targeted, not just their tackle totals.

class CoverageAnalyzer:
    """
    Analyze coverage performance using target-based metrics.

    Key metrics:
    - Targets: Times thrown at when in coverage
    - Completion rate allowed: Completions / targets
    - Yards per target: Yards allowed per target
    - Passer rating allowed: QB rating when targeted
    - Coverage grade: Composite evaluation
    """

    def __init__(self):
        self.benchmarks = {
            'completion_rate': {'elite': 52, 'good': 58, 'average': 64},
            'yards_per_target': {'elite': 5.5, 'good': 7.0, 'average': 8.5},
            'passer_rating': {'elite': 70, 'good': 85, 'average': 95}
        }

    def analyze_defender_coverage(self, targets: List[Dict]) -> Dict:
        """
        Analyze a defender's coverage performance.

        Parameters:
        -----------
        targets : list
            Each target needs: completion (bool), yards, touchdown (bool),
                              interception (bool), pass_breakup (bool)
        """
        if not targets:
            return {'error': 'No targets'}

        n = len(targets)

        # Basic counts
        completions = sum(1 for t in targets if t.get('completion', False))
        yards = sum(t.get('yards', 0) for t in targets)
        touchdowns = sum(1 for t in targets if t.get('touchdown', False))
        interceptions = sum(1 for t in targets if t.get('interception', False))
        pass_breakups = sum(1 for t in targets if t.get('pass_breakup', False))

        # Rates
        comp_rate = completions / n * 100
        yards_per_target = yards / n
        yards_per_completion = yards / completions if completions > 0 else 0
        td_rate = touchdowns / n * 100
        int_rate = interceptions / n * 100

        # Passer rating allowed (NFL formula)
        passer_rating = self._calculate_passer_rating_allowed(
            n, completions, yards, touchdowns, interceptions
        )

        # Coverage score
        coverage_score = self._calculate_coverage_score(
            comp_rate, yards_per_target, passer_rating
        )

        return {
            'targets': n,
            'completions': completions,
            'completion_rate': round(comp_rate, 1),
            'yards': yards,
            'yards_per_target': round(yards_per_target, 2),
            'yards_per_completion': round(yards_per_completion, 2),
            'touchdowns': touchdowns,
            'td_rate': round(td_rate, 1),
            'interceptions': interceptions,
            'int_rate': round(int_rate, 1),
            'pass_breakups': pass_breakups,
            'passer_rating_allowed': round(passer_rating, 1),
            'coverage_score': round(coverage_score, 1),
            'evaluation': self._evaluate(coverage_score)
        }

    def _calculate_passer_rating_allowed(self, targets: int, completions: int,
                                         yards: int, tds: int, ints: int) -> float:
        """Calculate passer rating allowed using NFL formula."""
        if targets == 0:
            return 0

        comp_pct = completions / targets * 100
        ypa = yards / targets
        td_pct = tds / targets * 100
        int_pct = ints / targets * 100

        a = max(0, min(((comp_pct - 30) / 20), 2.375))
        b = max(0, min(((ypa - 3) / 4), 2.375))
        c = max(0, min((td_pct / 5), 2.375))
        d = max(0, min((2.375 - (int_pct / 4)), 2.375))

        return ((a + b + c + d) / 6) * 100

    def _calculate_coverage_score(self, comp_rate: float,
                                   yards_per_target: float,
                                   passer_rating: float) -> float:
        """Calculate composite coverage score (0-100)."""
        # Invert metrics (lower raw values are better for defense), capping each
        # component to the 0-100 range
        comp_score = max(0, min(100, (75 - comp_rate) / 25 * 100))
        yds_score = max(0, min(100, (10 - yards_per_target) / 5 * 100))
        rating_score = max(0, min(100, (120 - passer_rating) / 60 * 100))

        return comp_score * 0.35 + yds_score * 0.35 + rating_score * 0.30

    def _evaluate(self, score: float) -> str:
        """Convert coverage score to grade."""
        if score >= 75:
            return 'Elite'
        elif score >= 60:
            return 'Good'
        elif score >= 45:
            return 'Average'
        else:
            return 'Below Average'


# Example
print("\n" + "=" * 70)
print("COVERAGE ANALYSIS")
print("=" * 70)

# Generate target data for a cornerback
np.random.seed(42)
cb_targets = []

for _ in range(75):  # Season worth of targets
    completion = np.random.random() < 0.58

    if completion:
        yards = np.random.choice([3, 8, 12, 18, 35, 50], p=[0.2, 0.35, 0.25, 0.12, 0.05, 0.03])
        touchdown = yards >= 35 and np.random.random() < 0.3
        interception = False
        pbu = False
    else:
        yards = 0
        touchdown = False
        interception = np.random.random() < 0.12
        pbu = not interception and np.random.random() < 0.50

    cb_targets.append({
        'completion': completion,
        'yards': yards,
        'touchdown': touchdown,
        'interception': interception,
        'pass_breakup': pbu
    })

coverage_analyzer = CoverageAnalyzer()
cb_result = coverage_analyzer.analyze_defender_coverage(cb_targets)

print("\nCornerback Coverage Stats:")
print(f"  Targets: {cb_result['targets']}")
print(f"  Completion Rate Allowed: {cb_result['completion_rate']}%")
print(f"  Yards/Target: {cb_result['yards_per_target']}")
print(f"  Passer Rating Allowed: {cb_result['passer_rating_allowed']}")
print(f"  Interceptions: {cb_result['interceptions']}")
print(f"  Pass Breakups: {cb_result['pass_breakups']}")
print(f"  Coverage Score: {cb_result['coverage_score']}/100")
print(f"  Evaluation: {cb_result['evaluation']}")

Zone vs. Man Coverage Analysis

Different coverage schemes create different statistical patterns.

class CoverageSchemeAnalyzer:
    """
    Analyze defensive performance by coverage type.
    """

    def analyze_by_coverage(self, plays: List[Dict]) -> Dict:
        """
        Compare performance in man vs zone coverage.

        Each play needs: coverage_type ('man' or 'zone'), plus outcome data
        """
        man_plays = [p for p in plays if p.get('coverage_type') == 'man']
        zone_plays = [p for p in plays if p.get('coverage_type') == 'zone']

        def analyze_scheme(plays_subset, name):
            if not plays_subset:
                return {}

            n = len(plays_subset)
            completions = sum(1 for p in plays_subset if p.get('completion', False))
            yards = sum(p.get('yards', 0) for p in plays_subset)

            return {
                'plays': n,
                'percentage': round(n / len(plays) * 100, 1),
                'completion_rate': round(completions / n * 100, 1),
                'yards_per_play': round(yards / n, 2),
                'epa_per_play': round(np.mean([p.get('epa', 0) for p in plays_subset]), 3)
            }

        return {
            'man_coverage': analyze_scheme(man_plays, 'man'),
            'zone_coverage': analyze_scheme(zone_plays, 'zone'),
            'recommendation': self._recommend_coverage(
                analyze_scheme(man_plays, 'man'),
                analyze_scheme(zone_plays, 'zone')
            )
        }

    def _recommend_coverage(self, man_stats: Dict, zone_stats: Dict) -> str:
        """Recommend optimal coverage based on results."""
        if not man_stats or not zone_stats:
            return 'Insufficient data'

        man_epa = man_stats.get('epa_per_play', 0)
        zone_epa = zone_stats.get('epa_per_play', 0)

        if man_epa < zone_epa - 0.05:
            return 'Favor man coverage'
        elif zone_epa < man_epa - 0.05:
            return 'Favor zone coverage'
        else:
            return 'Mixed coverage effective'
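
Unlike the earlier analyzers, CoverageSchemeAnalyzer has no worked example above, so here is a minimal standalone sketch of the man-vs-zone comparison it automates, using synthetic plays (the completion rates and EPA distributions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic coverage snaps: man allows fewer completions in this toy data.
# Every rate below is an illustrative assumption.
plays = []
for _ in range(200):
    man = rng.random() < 0.4
    completion = rng.random() < (0.55 if man else 0.65)
    yards = int(rng.choice([0, 6, 11, 22], p=[0.3, 0.4, 0.2, 0.1])) if completion else 0
    plays.append({'coverage_type': 'man' if man else 'zone',
                  'completion': completion, 'yards': yards,
                  'epa': rng.normal(-0.05 if man else 0.05, 0.4)})

# The same per-scheme split analyze_by_coverage performs internally
for scheme in ('man', 'zone'):
    subset = [p for p in plays if p['coverage_type'] == scheme]
    comp_rate = sum(p['completion'] for p in subset) / len(subset) * 100
    avg_epa = np.mean([p['epa'] for p in subset])
    print(f"{scheme:>4}: {len(subset):>3} plays, "
          f"{comp_rate:.1f}% completions allowed, {avg_epa:+.3f} EPA/play")
```

Feeding this same list of dictionaries to `analyze_by_coverage` would add the usage percentages and the EPA-based coverage recommendation.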

9.5 Run Defense Analysis

Yards Before Contact (Defensive Perspective)

From the defense's perspective, yards before contact measures how well the front seven controls gaps and meets runners.

class RunDefenseAnalyzer:
    """
    Analyze run defense effectiveness.

    Metrics:
    - Yards per carry allowed
    - Yards before contact allowed (gap control)
    - Stuff rate (stops at or behind line)
    - Success rate allowed
    - Run defense grade
    """

    def __init__(self):
        self.benchmarks = {
            'ypc_allowed': {'elite': 3.5, 'good': 4.0, 'average': 4.5},
            'stuff_rate': {'elite': 22, 'good': 18, 'average': 15},
            'ybc_allowed': {'elite': 1.8, 'good': 2.2, 'average': 2.8}
        }

    def analyze_run_defense(self, rushes: List[Dict]) -> Dict:
        """
        Analyze team run defense.

        Each rush needs: yards_gained, yards_before_contact,
                        gap (optional), formation (optional)
        """
        if not rushes:
            return {'error': 'No rushes'}

        n = len(rushes)

        # Basic metrics
        total_yards = sum(r['yards_gained'] for r in rushes)
        ypc = total_yards / n

        # Yards before contact
        total_ybc = sum(r.get('yards_before_contact', 0) for r in rushes)
        avg_ybc = total_ybc / n

        # Stuff rate (0 or negative yards)
        stuffs = sum(1 for r in rushes if r['yards_gained'] <= 0)
        stuff_rate = stuffs / n * 100

        # TFL rate
        tfls = sum(1 for r in rushes if r['yards_gained'] < 0)
        tfl_rate = tfls / n * 100

        # Explosive runs allowed (10+ yards)
        explosive = sum(1 for r in rushes if r['yards_gained'] >= 10)
        explosive_rate = explosive / n * 100

        # Success rate allowed
        success_allowed = sum(1 for r in rushes if self._is_successful_rush(r))
        success_rate_allowed = success_allowed / n * 100

        # Calculate grade
        grade = self._calculate_run_defense_grade(ypc, stuff_rate, avg_ybc)

        return {
            'rushes': n,
            'yards_allowed': total_yards,
            'ypc_allowed': round(ypc, 2),
            'avg_ybc_allowed': round(avg_ybc, 2),
            'stuff_rate': round(stuff_rate, 1),
            'tfl_rate': round(tfl_rate, 1),
            'explosive_rate': round(explosive_rate, 1),
            'success_rate_allowed': round(success_rate_allowed, 1),
            'run_defense_grade': grade
        }

    def _is_successful_rush(self, rush: Dict) -> bool:
        """Determine if rush was successful for offense."""
        down = rush.get('down', 1)
        distance = rush.get('distance', 10)
        yards = rush['yards_gained']

        if down == 1:
            return yards >= distance * 0.4
        elif down == 2:
            return yards >= distance * 0.5
        else:
            return yards >= distance

    def _calculate_run_defense_grade(self, ypc: float, stuff_rate: float,
                                      ybc: float) -> Dict:
        """Calculate composite run defense grade."""
        # Score each component (0-100, higher is better for defense)
        # Clamp each component to 0-100 so an extreme input can't dominate
        ypc_score = max(0, min(100, (5.5 - ypc) / 2 * 100))
        stuff_score = min(100, stuff_rate / 25 * 100)
        ybc_score = max(0, min(100, (3.5 - ybc) / 2 * 100))

        composite = ypc_score * 0.40 + stuff_score * 0.35 + ybc_score * 0.25

        if composite >= 75:
            letter = 'A'
        elif composite >= 60:
            letter = 'B'
        elif composite >= 45:
            letter = 'C'
        elif composite >= 30:
            letter = 'D'
        else:
            letter = 'F'

        return {'score': round(composite, 1), 'letter': letter}

    def analyze_by_gap(self, rushes: List[Dict]) -> Dict:
        """Analyze run defense by gap."""
        gap_results = {}

        for gap in ['A', 'B', 'C', 'outside']:
            gap_rushes = [r for r in rushes if r.get('gap') == gap]
            if gap_rushes:
                n = len(gap_rushes)
                yards = sum(r['yards_gained'] for r in gap_rushes)
                stuffs = sum(1 for r in gap_rushes if r['yards_gained'] <= 0)

                gap_results[gap] = {
                    'rushes': n,
                    'ypc': round(yards / n, 2),
                    'stuff_rate': round(stuffs / n * 100, 1)
                }

        return gap_results


# Example
print("\n" + "=" * 70)
print("RUN DEFENSE ANALYSIS")
print("=" * 70)

# Generate run defense data
np.random.seed(42)
run_plays = []

for _ in range(350):  # Season of rushing attempts faced
    ybc = max(-2, np.random.normal(2.4, 1.5))
    yac = max(-1, np.random.exponential(1.8))
    yards = ybc + yac

    run_plays.append({
        'yards_gained': round(yards, 1),
        'yards_before_contact': round(max(0, ybc), 1),
        'gap': np.random.choice(['A', 'B', 'C', 'outside'], p=[0.25, 0.35, 0.25, 0.15]),
        'down': np.random.choice([1, 2, 3], p=[0.45, 0.35, 0.20]),
        'distance': np.random.choice([10, 7, 5, 3, 2], p=[0.4, 0.2, 0.2, 0.1, 0.1])
    })

run_analyzer = RunDefenseAnalyzer()
run_result = run_analyzer.analyze_run_defense(run_plays)

print("\nRun Defense Results:")
print(f"  Rushes Faced: {run_result['rushes']}")
print(f"  YPC Allowed: {run_result['ypc_allowed']}")
print(f"  Avg YBC Allowed: {run_result['avg_ybc_allowed']}")
print(f"  Stuff Rate: {run_result['stuff_rate']}%")
print(f"  Explosive Rate: {run_result['explosive_rate']}%")
print(f"  Grade: {run_result['run_defense_grade']['letter']} "
      f"({run_result['run_defense_grade']['score']}/100)")

print("\nBy Gap:")
gap_analysis = run_analyzer.analyze_by_gap(run_plays)
for gap, stats in gap_analysis.items():
    print(f"  {gap} Gap: {stats['ypc']} YPC, {stats['stuff_rate']}% Stuff Rate")
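The success-rate rule buried inside the analyzer (40% of the distance on first down, 50% on second, 100% on third or fourth) is worth checking in isolation. A minimal standalone version of that rule:

```python
def is_successful_rush(down: int, distance: int, yards: float) -> bool:
    """Offense 'wins' the down if it gains enough of the distance:
    40% on 1st down, 50% on 2nd, 100% on 3rd or 4th."""
    thresholds = {1: 0.4, 2: 0.5}
    return yards >= distance * thresholds.get(down, 1.0)

# 1st-and-10: 4 yards is the minimum success
print(is_successful_rush(1, 10, 4))   # True
print(is_successful_rush(1, 10, 3))   # False
# 3rd-and-2 requires the full distance
print(is_successful_rush(3, 2, 1))    # False
```

From the defense's perspective, success rate allowed is just the complement: every rush that fails this test is a defensive win on the down.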

9.6 Opponent Adjustments

Strength of Schedule

Raw defensive statistics can mislead without context: holding an elite offense to 5.5 yards per play is a stronger performance than holding a bottom-tier offense to 5.0. Adjusting for the quality of the offenses faced puts both on the same scale.

class OpponentAdjuster:
    """
    Adjust defensive statistics for opponent quality.

    Uses opponent offensive rankings to contextualize performance.
    """

    def __init__(self, league_averages: Dict):
        """
        Initialize with league averages.

        Parameters:
        -----------
        league_averages : dict
            League average for key metrics (ypp, epa_per_play, etc.)
        """
        self.league_avg = league_averages

    def calculate_opponent_adjustment(self,
                                       opponent_stats: Dict,
                                       metric: str) -> float:
        """
        Calculate adjustment factor for opponent.

        Returns multiplier > 1 for strong opponents, < 1 for weak.
        """
        opponent_value = opponent_stats.get(metric, self.league_avg.get(metric, 1))
        league_value = self.league_avg.get(metric, 1)

        if league_value == 0:
            return 1.0

        return opponent_value / league_value

    def adjust_defensive_stats(self,
                                raw_stats: Dict,
                                game_opponents: List[Dict]) -> Dict:
        """
        Adjust defensive statistics for opponents faced.

        Parameters:
        -----------
        raw_stats : dict
            Raw defensive statistics
        game_opponents : list
            List of opponent stats for each game
        """
        # Calculate average opponent adjustment
        adjustments = []
        for opp in game_opponents:
            adj = self.calculate_opponent_adjustment(opp, 'off_epa_per_play')
            adjustments.append(adj)

        avg_adjustment = np.mean(adjustments)

        # Adjust raw stats
        adjusted = {}

        # EPA crosses zero, so a multiplicative adjustment misbehaves (dividing
        # a negative EPA by a factor > 1 makes it look worse, not better).
        # Adjust additively instead: subtract how far the average opponent sat
        # above the league baseline.
        if 'epa_per_play' in raw_stats:
            league_epa = self.league_avg.get('off_epa_per_play', 0)
            avg_opp_epa = np.mean([
                o.get('off_epa_per_play', league_epa) for o in game_opponents
            ])
            adjusted['adjusted_epa_per_play'] = round(
                raw_stats['epa_per_play'] - (avg_opp_epa - league_epa), 3
            )

        # Stats where lower is better (yards allowed)
        for stat in ['yards_per_play', 'ypc_allowed']:
            if stat in raw_stats:
                # Divide by adjustment (harder opponents = better adjusted stats)
                adjusted[f'adjusted_{stat}'] = round(
                    raw_stats[stat] / avg_adjustment, 3
                )

        # Stats where higher is better (pressure rate, INT rate)
        for stat in ['pressure_rate', 'int_rate', 'stuff_rate']:
            if stat in raw_stats:
                # Multiply by adjustment
                adjusted[f'adjusted_{stat}'] = round(
                    raw_stats[stat] * avg_adjustment, 1
                )

        return {
            'raw_stats': raw_stats,
            'opponent_adjustment': round(avg_adjustment, 3),
            'adjusted_stats': adjusted,
            'schedule_strength': self._rate_schedule(avg_adjustment)
        }

    def _rate_schedule(self, adjustment: float) -> str:
        """Rate schedule strength."""
        if adjustment >= 1.15:
            return 'Very Hard'
        elif adjustment >= 1.05:
            return 'Hard'
        elif adjustment >= 0.95:
            return 'Average'
        elif adjustment >= 0.85:
            return 'Easy'
        else:
            return 'Very Easy'


# Example
print("\n" + "=" * 70)
print("OPPONENT-ADJUSTED STATISTICS")
print("=" * 70)

league_averages = {
    'off_epa_per_play': 0.05,
    'off_ypp': 5.8,
    'off_success_rate': 45
}

raw_defense = {
    'yards_per_play': 5.2,
    'epa_per_play': -0.08,
    'ypc_allowed': 4.1,
    'pressure_rate': 32,
    'int_rate': 3.2,
    'stuff_rate': 19
}

# Opponents faced (above average offenses)
opponents = [
    {'off_epa_per_play': 0.12},  # Strong offense
    {'off_epa_per_play': 0.08},
    {'off_epa_per_play': -0.02},  # Weak offense
    {'off_epa_per_play': 0.15},   # Elite offense
    {'off_epa_per_play': 0.05},   # Average
    {'off_epa_per_play': 0.10},
]

adjuster = OpponentAdjuster(league_averages)
adjusted_result = adjuster.adjust_defensive_stats(raw_defense, opponents)

print(f"\nOpponent Adjustment Factor: {adjusted_result['opponent_adjustment']}")
print(f"Schedule Strength: {adjusted_result['schedule_strength']}")

print("\nRaw vs Adjusted:")
for stat, value in adjusted_result['adjusted_stats'].items():
    raw_stat = stat.replace('adjusted_', '')
    raw_value = raw_defense.get(raw_stat, 'N/A')
    print(f"  {raw_stat}: {raw_value} → {value} (adjusted)")
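The adjuster above weights every game equally. If games differ in the number of plays faced, a play-weighted average of the opponent factors is a small refinement; this sketch assumes a hypothetical per-game `plays` field that the example data does not carry:

```python
import numpy as np

def play_weighted_adjustment(opponents, league_epa=0.05):
    """Weight each opponent's strength factor by plays faced, so a
    40-play blowout counts less than a 75-play shootout.
    `plays` is a hypothetical per-game field (defaults to 60)."""
    factors = np.array([o['off_epa_per_play'] / league_epa for o in opponents])
    weights = np.array([o.get('plays', 60) for o in opponents], dtype=float)
    return float(np.average(factors, weights=weights))

opponents = [
    {'off_epa_per_play': 0.10, 'plays': 70},  # strong offense, long game
    {'off_epa_per_play': 0.02, 'plays': 50},  # weak offense, short game
]
print(round(play_weighted_adjustment(opponents), 3))  # 1.333
```

An unweighted mean of the two factors (2.0 and 0.4) would give 1.2; weighting by plays pulls the schedule rating toward the longer game against the stronger offense.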

9.7 Comprehensive Defensive Evaluation

Building a Complete Defensive Grade

class ComprehensiveDefensiveEvaluator:
    """
    Complete defensive evaluation combining all metrics.

    Components:
    - Pass defense (coverage + pass rush)
    - Run defense
    - Situational performance
    - Turnover generation
    """

    def __init__(self):
        self.weights = {
            'pass_defense': 0.45,
            'run_defense': 0.30,
            'situational': 0.15,
            'turnovers': 0.10
        }

    def evaluate_defense(self, data: Dict) -> Dict:
        """
        Generate comprehensive defensive evaluation.

        Parameters:
        -----------
        data : dict
            All defensive statistics needed for evaluation
        """
        # Component grades
        pass_grade = self._grade_pass_defense(data)
        run_grade = self._grade_run_defense(data)
        situational_grade = self._grade_situational(data)
        turnover_grade = self._grade_turnovers(data)

        # Composite score
        composite = (
            pass_grade['score'] * self.weights['pass_defense'] +
            run_grade['score'] * self.weights['run_defense'] +
            situational_grade['score'] * self.weights['situational'] +
            turnover_grade['score'] * self.weights['turnovers']
        )

        # Identify strengths and weaknesses
        components = {
            'Pass Defense': pass_grade['score'],
            'Run Defense': run_grade['score'],
            'Situational': situational_grade['score'],
            'Turnovers': turnover_grade['score']
        }

        strengths = [k for k, v in components.items() if v >= 70]
        weaknesses = [k for k, v in components.items() if v < 50]

        return {
            'pass_defense': pass_grade,
            'run_defense': run_grade,
            'situational': situational_grade,
            'turnovers': turnover_grade,
            'composite_score': round(composite, 1),
            'overall_grade': self._score_to_grade(composite),
            'strengths': strengths,
            'weaknesses': weaknesses,
            'summary': self._generate_summary(components, composite)
        }

    def _grade_pass_defense(self, data: Dict) -> Dict:
        """Grade pass defense component."""
        # Metrics: EPA/dropback, completion rate allowed, pressure rate
        epa_score = max(0, min(100, (-data.get('pass_epa_per_play', 0) + 0.15) / 0.3 * 100))
        comp_score = max(0, min(100, (70 - data.get('completion_rate_allowed', 65)) / 20 * 100))
        pressure_score = min(100, data.get('pressure_rate', 25) / 40 * 100)

        score = epa_score * 0.40 + comp_score * 0.35 + pressure_score * 0.25

        return {'score': round(score, 1), 'grade': self._score_to_grade(score)}

    def _grade_run_defense(self, data: Dict) -> Dict:
        """Grade run defense component."""
        ypc_score = max(0, min(100, (5.0 - data.get('ypc_allowed', 4.5)) / 2 * 100))
        stuff_score = min(100, data.get('stuff_rate', 18) / 25 * 100)
        success_score = max(0, min(100, (50 - data.get('run_success_allowed', 45)) / 15 * 100))

        score = ypc_score * 0.40 + stuff_score * 0.35 + success_score * 0.25

        return {'score': round(score, 1), 'grade': self._score_to_grade(score)}

    def _grade_situational(self, data: Dict) -> Dict:
        """Grade situational defense."""
        third_score = max(0, min(100, (45 - data.get('third_down_conv_allowed', 40)) / 15 * 100))
        redzone_score = max(0, min(100, (60 - data.get('redzone_td_allowed', 55)) / 20 * 100))

        score = third_score * 0.60 + redzone_score * 0.40

        return {'score': round(score, 1), 'grade': self._score_to_grade(score)}

    def _grade_turnovers(self, data: Dict) -> Dict:
        """Grade turnover generation."""
        int_score = min(100, data.get('int_rate', 2.5) / 4 * 100)
        fumble_score = min(100, data.get('forced_fumble_rate', 1.5) / 3 * 100)

        score = int_score * 0.70 + fumble_score * 0.30

        return {'score': round(score, 1), 'grade': self._score_to_grade(score)}

    def _score_to_grade(self, score: float) -> str:
        """Convert numeric score to letter grade."""
        if score >= 90:
            return 'A+'
        elif score >= 85:
            return 'A'
        elif score >= 80:
            return 'A-'
        elif score >= 75:
            return 'B+'
        elif score >= 70:
            return 'B'
        elif score >= 65:
            return 'B-'
        elif score >= 60:
            return 'C+'
        elif score >= 55:
            return 'C'
        elif score >= 50:
            return 'C-'
        elif score >= 45:
            return 'D+'
        elif score >= 40:
            return 'D'
        else:
            return 'F'

    def _generate_summary(self, components: Dict, composite: float) -> str:
        """Generate text summary of defensive performance."""
        grade = self._score_to_grade(composite)

        best = max(components, key=components.get)
        worst = min(components, key=components.get)

        return (f"Overall {grade} defense. Strongest in {best} "
                f"({components[best]:.0f}), needs improvement in {worst} "
                f"({components[worst]:.0f}).")


# Example comprehensive evaluation
print("\n" + "=" * 70)
print("COMPREHENSIVE DEFENSIVE EVALUATION")
print("=" * 70)

sample_defense_data = {
    # Pass defense
    'pass_epa_per_play': -0.05,
    'completion_rate_allowed': 61.5,
    'pressure_rate': 29.8,

    # Run defense
    'ypc_allowed': 4.0,
    'stuff_rate': 20.5,
    'run_success_allowed': 42.3,

    # Situational
    'third_down_conv_allowed': 36.8,
    'redzone_td_allowed': 52.4,

    # Turnovers
    'int_rate': 3.1,
    'forced_fumble_rate': 1.8
}

evaluator = ComprehensiveDefensiveEvaluator()
evaluation = evaluator.evaluate_defense(sample_defense_data)

print(f"\nOverall Grade: {evaluation['overall_grade']} "
      f"({evaluation['composite_score']}/100)")

print("\nComponent Grades:")
print(f"  Pass Defense: {evaluation['pass_defense']['grade']} "
      f"({evaluation['pass_defense']['score']})")
print(f"  Run Defense: {evaluation['run_defense']['grade']} "
      f"({evaluation['run_defense']['score']})")
print(f"  Situational: {evaluation['situational']['grade']} "
      f"({evaluation['situational']['score']})")
print(f"  Turnovers: {evaluation['turnovers']['grade']} "
      f"({evaluation['turnovers']['score']})")

print(f"\nStrengths: {', '.join(evaluation['strengths'])}")
print(f"Weaknesses: {', '.join(evaluation['weaknesses'])}")
print(f"\nSummary: {evaluation['summary']}")
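The component weights (45/30/15/10) are a modeling choice, not a law, so it is worth knowing how sensitive the composite is to them. A quick sketch with illustrative component scores (not the ones computed above), rescaling the non-pass weights pro rata as the pass-defense weight varies:

```python
# Illustrative component scores: strong run defense, weak turnover generation
component_scores = {'pass': 55.0, 'run': 80.0, 'situational': 70.0, 'turnovers': 40.0}
base_weights = {'pass': 0.45, 'run': 0.30, 'situational': 0.15, 'turnovers': 0.10}

def composite(weights, scores):
    """Weighted sum of component scores."""
    return sum(weights[k] * scores[k] for k in scores)

for pass_w in (0.35, 0.45, 0.55):
    # Rescale the remaining weights so they still sum to 1 - pass_w
    scale = (1 - pass_w) / (1 - base_weights['pass'])
    w = {k: (pass_w if k == 'pass' else v * scale)
         for k, v in base_weights.items()}
    print(f"pass weight {pass_w:.2f} -> composite {composite(w, component_scores):.1f}")
```

For a defense whose pass and run grades diverge this much, a ten-point shift in the pass-defense weight moves the composite by roughly a point and a half in each direction; balanced defenses barely move at all.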

9.8 Individual Defender Evaluation

Position-Specific Metrics

Different positions require different evaluation frameworks: pass rush win rate is the right lens for an edge rusher, while a cornerback's value shows up in targets avoided and passer rating allowed.

class PositionEvaluator:
    """
    Position-specific defensive player evaluation.
    """

    def evaluate_edge_rusher(self, stats: Dict) -> Dict:
        """Evaluate edge rusher (DE/OLB)."""
        # Key metrics: Pass rush win rate, pressure rate, run defense grade
        prwr_score = min(100, stats.get('pass_rush_win_rate', 12) / 20 * 100)
        pressure_score = min(100, stats.get('pressure_rate', 10) / 15 * 100)
        run_score = min(100, stats.get('run_defense_grade', 60) / 80 * 100)

        composite = prwr_score * 0.45 + pressure_score * 0.35 + run_score * 0.20

        return {
            'position': 'Edge',
            'pass_rush_score': round(prwr_score, 1),
            'pressure_score': round(pressure_score, 1),
            'run_defense_score': round(run_score, 1),
            'composite': round(composite, 1),
            'primary_role': 'Pass Rusher' if prwr_score > run_score else 'Run Defender'
        }

    def evaluate_interior_dl(self, stats: Dict) -> Dict:
        """Evaluate interior defensive lineman (DT/NT)."""
        # Key metrics: Run stuff rate, interior pressure, double team resistance
        stuff_score = min(100, stats.get('stuff_rate', 8) / 15 * 100)
        pressure_score = min(100, stats.get('interior_pressure_rate', 5) / 10 * 100)
        dt_score = min(100, stats.get('double_team_win_rate', 30) / 50 * 100)

        composite = stuff_score * 0.45 + pressure_score * 0.25 + dt_score * 0.30

        return {
            'position': 'Interior DL',
            'run_stuff_score': round(stuff_score, 1),
            'pressure_score': round(pressure_score, 1),
            'double_team_score': round(dt_score, 1),
            'composite': round(composite, 1)
        }

    def evaluate_linebacker(self, stats: Dict) -> Dict:
        """Evaluate linebacker (ILB/MLB)."""
        # Key metrics: Run stop rate, coverage grade, blitz productivity
        run_score = min(100, stats.get('run_stop_rate', 5) / 10 * 100)
        coverage_score = min(100, stats.get('coverage_grade', 55) / 80 * 100)
        blitz_score = min(100, stats.get('blitz_pressure_rate', 25) / 40 * 100)
        tackle_score = max(0, min(100, (stats.get('tackle_rate', 80) - 60) / 30 * 100))

        composite = (run_score * 0.35 + coverage_score * 0.35 +
                    blitz_score * 0.15 + tackle_score * 0.15)

        return {
            'position': 'Linebacker',
            'run_defense_score': round(run_score, 1),
            'coverage_score': round(coverage_score, 1),
            'blitz_score': round(blitz_score, 1),
            'tackling_score': round(tackle_score, 1),
            'composite': round(composite, 1),
            'three_down_lb': coverage_score >= 60
        }

    def evaluate_cornerback(self, stats: Dict) -> Dict:
        """Evaluate cornerback."""
        # Key metrics: Coverage grade, targets/coverage snap, passer rating allowed
        coverage_score = min(100, stats.get('coverage_grade', 65) / 90 * 100)
        target_score = max(0, min(100, (25 - stats.get('target_rate', 18)) / 15 * 100))
        rating_score = max(0, min(100, (120 - stats.get('passer_rating_allowed', 85)) / 60 * 100))
        playmaker_score = min(100, (stats.get('int_rate', 2) +
                                    stats.get('pbu_rate', 15)) / 25 * 100)

        composite = (coverage_score * 0.40 + rating_score * 0.30 +
                    target_score * 0.15 + playmaker_score * 0.15)

        return {
            'position': 'Cornerback',
            'coverage_score': round(coverage_score, 1),
            'passer_rating_score': round(rating_score, 1),
            'target_avoidance': round(target_score, 1),
            'playmaking': round(playmaker_score, 1),
            'composite': round(composite, 1),
            'shutdown_corner': coverage_score >= 75 and rating_score >= 70
        }

    def evaluate_safety(self, stats: Dict) -> Dict:
        """Evaluate safety (FS/SS)."""
        # Key metrics: Coverage, run support, range/ball hawking
        coverage_score = min(100, stats.get('coverage_grade', 60) / 85 * 100)
        run_score = min(100, stats.get('run_defense_grade', 55) / 75 * 100)
        range_score = min(100, stats.get('tackles_20_plus_yards', 5) / 10 * 100)
        turnover_score = min(100, (stats.get('interceptions', 2) +
                                   stats.get('forced_fumbles', 1)) / 5 * 100)

        composite = (coverage_score * 0.40 + run_score * 0.30 +
                    range_score * 0.15 + turnover_score * 0.15)

        # Determine safety type
        if run_score > coverage_score + 10:
            safety_type = 'Box Safety (SS)'
        elif coverage_score > run_score + 10:
            safety_type = 'Free Safety (FS)'
        else:
            safety_type = 'Versatile Safety'

        return {
            'position': 'Safety',
            'coverage_score': round(coverage_score, 1),
            'run_support_score': round(run_score, 1),
            'range_score': round(range_score, 1),
            'turnover_score': round(turnover_score, 1),
            'composite': round(composite, 1),
            'safety_type': safety_type
        }


# Example position evaluations
print("\n" + "=" * 70)
print("POSITION-SPECIFIC EVALUATIONS")
print("=" * 70)

pos_evaluator = PositionEvaluator()

# Edge rusher
edge_stats = {
    'pass_rush_win_rate': 16.5,
    'pressure_rate': 12.3,
    'run_defense_grade': 72
}
edge_eval = pos_evaluator.evaluate_edge_rusher(edge_stats)
print(f"\nEdge Rusher: {edge_eval['composite']}/100")
print(f"  Primary Role: {edge_eval['primary_role']}")

# Cornerback
cb_stats = {
    'coverage_grade': 78,
    'target_rate': 15.2,
    'passer_rating_allowed': 72.5,
    'int_rate': 3.5,
    'pbu_rate': 18.2
}
cb_eval = pos_evaluator.evaluate_cornerback(cb_stats)
print(f"\nCornerback: {cb_eval['composite']}/100")
print(f"  Shutdown Corner: {'Yes' if cb_eval['shutdown_corner'] else 'No'}")

# Linebacker
lb_stats = {
    'run_stop_rate': 7.2,
    'coverage_grade': 62,
    'blitz_pressure_rate': 32,
    'tackle_rate': 85
}
lb_eval = pos_evaluator.evaluate_linebacker(lb_stats)
print(f"\nLinebacker: {lb_eval['composite']}/100")
print(f"  Three-Down LB: {'Yes' if lb_eval['three_down_lb'] else 'No'}")
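The safety evaluator classifies players by role using a 10-point gap between run-support and coverage scores. That rule is simple enough to isolate; a minimal standalone mirror of the logic in `evaluate_safety`:

```python
def classify_safety(coverage_score: float, run_score: float,
                    gap: float = 10.0) -> str:
    """Label a safety's role from the gap between component scores:
    run-heavy -> box safety, coverage-heavy -> free safety, else versatile."""
    if run_score > coverage_score + gap:
        return 'Box Safety (SS)'
    if coverage_score > run_score + gap:
        return 'Free Safety (FS)'
    return 'Versatile Safety'

print(classify_safety(coverage_score=80, run_score=55))  # Free Safety (FS)
print(classify_safety(coverage_score=60, run_score=75))  # Box Safety (SS)
print(classify_safety(coverage_score=70, run_score=65))  # Versatile Safety
```

The 10-point threshold is a tunable cutoff; a smaller gap labels more players as specialists, a larger one labels more as versatile.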

Summary

This chapter covered the comprehensive landscape of defensive analytics:

  1. Traditional Statistics: Tackles, sacks, and turnovers form the foundation but have significant limitations
  2. Advanced Metrics: EPA allowed and success rate allowed provide context-aware evaluation
  3. Pass Rush Analysis: Pressure rate, sack rate, and win rate measure front effectiveness
  4. Coverage Metrics: Target-based statistics reveal true coverage ability
  5. Run Defense: Yards before contact and stuff rate measure front seven control
  6. Opponent Adjustments: Raw stats must be adjusted for schedule strength
  7. Comprehensive Evaluation: Combining all components into unified grades
  8. Position-Specific Analysis: Different positions require different metrics

Key Takeaways

  • Context matters: A tackle after a 15-yard gain is not the same as a tackle for loss
  • Separate components: Pass defense, run defense, and turnovers require different metrics
  • Adjust for opponents: Raw statistics can be misleading without context
  • Position-specific evaluation: Edge rushers and cornerbacks need different frameworks
  • Combine metrics: The best evaluations integrate multiple data sources

Looking Ahead

Chapter 10 explores Special Teams Analytics, including field goal probability, punt analysis, and return game evaluation. These often-overlooked phases significantly impact game outcomes.


References

  1. Burke, B. (2014). "Expected Points and Expected Points Added Explained"
  2. Baldwin, B. (2019). "nflfastR: Functions to Efficiently Access NFL Play-by-Play Data"
  3. PFF (Pro Football Focus). "Coverage Grading Methodology"
  4. Football Outsiders. "DVOA Methodology"
  5. Sports Info Solutions. "Broken Tackles and Pressure Attribution"