Case Study 1: NFL Draft Comparison Dashboard

Overview

Client: Major college football program's analytics department
Objective: Create comprehensive comparison visualizations for NFL Draft prospect evaluation
Timeline: 2-week sprint for initial prototype
Audience: NFL scouts, team general managers, and draft analysts


Background

Every spring, NFL teams evaluate hundreds of college players for the draft. Traditional scouting reports provide qualitative assessments, but modern front offices demand quantitative comparisons that contextualize player performance.

The challenge: How do you fairly compare a receiver from an Air Raid offense to one from a pro-style system? How do you account for different competition levels? How do you communicate complex multi-dimensional profiles to time-pressed decision-makers?

State University's analytics department was approached to develop a prototype comparison dashboard that could serve as a model for NFL team adoption.


The Challenge

Primary Requirements

  1. Multi-dimensional profile visualization: Show 12+ metrics simultaneously for prospect comparison
  2. Historical context: Compare prospects to successful NFL players at similar stages
  3. Percentile rankings: Show where each prospect falls within their peer group
  4. Similarity analysis: Identify NFL players with similar college profiles
  5. Adjustments for context: Account for offensive system, opponent quality, and usage

Data Available

  • Complete college career statistics for all FBS players (2018-2024)
  • NFL Combine measurements and testing results
  • Pro Football Focus grades and advanced metrics
  • Historical data linking college performance to NFL outcomes
  • Play-by-play data for situational analysis

Constraints

  • Dashboard must render in under 2 seconds
  • Must work in printed format (for in-person meetings)
  • Must be interpretable without extensive training
  • Must maintain prospect confidentiality until draft

Solution Design

Visualization Strategy

After analyzing user needs and testing multiple approaches, the team developed a four-panel dashboard design:

Panel 1: Radar Profile Comparison

  • Shows 8 core metrics in spider chart format
  • Overlays 2-3 prospects simultaneously
  • Uses consistent normalization (0-100 scale based on position percentiles)

Panel 2: Percentile Context Chart

  • Horizontal bars showing percentile ranking for each metric
  • Color-coded zones (Elite/Above Average/Average/Below Average)
  • Shows raw values alongside percentiles

Panel 3: Historical Comparison

  • Identifies 5 most similar NFL players based on college profile
  • Shows how those players performed in NFL careers
  • Provides statistical similarity scores

Panel 4: Situational Splits

  • Heatmap showing performance by down/distance
  • Comparison of red zone efficiency
  • Third-down conversion rates

Technical Implementation

"""
NFL Draft Comparison Dashboard - Core Implementation
"""

import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
from scipy.spatial.distance import cdist
from scipy import stats as sp_stats
from dataclasses import dataclass
from typing import List, Dict, Tuple, Optional

@dataclass
class ProspectProfile:
    """Complete prospect profile for visualization."""
    name: str
    position: str
    school: str

    # Core metrics (normalized 0-100)
    metrics: Dict[str, float]

    # Raw statistics
    raw_stats: Dict[str, float]

    # Historical comparisons
    similar_players: List[Tuple[str, float]]  # (name, similarity)

    # Situational data
    situational_splits: Dict[str, float]


class DraftComparisonDashboard:
    """
    Create comprehensive comparison dashboards for NFL Draft prospects.
    """

    def __init__(self):
        self.colors = {
            'elite': '#1a9641',      # Green - 75th+ percentile
            'above_avg': '#a6d96a',  # Light green - 50-75th
            'average': '#ffffbf',    # Yellow - 25-50th
            'below_avg': '#fdae61',  # Orange - 10-25th
            'poor': '#d7191c',       # Red - below 10th
            'primary': '#264653',
            'secondary': '#e76f51',
            'tertiary': '#2a9d8f'
        }

        # Standard metrics for each position
        self.wr_metrics = [
            'Separation', 'Catch Rate', 'YAC/Reception',
            'Contested Catch %', 'Route Running', 'Speed Score',
            'Yards/Route', 'Deep Target Rate'
        ]

        self.qb_metrics = [
            'Completion %', 'Yards/Attempt', 'TD Rate',
            'INT Rate (inv)', 'Pressure Comp %', 'Deep Ball Accuracy',
            'Rush Yards', 'Time to Throw'
        ]

    def _get_percentile_color(self, percentile: float) -> str:
        """Return color based on percentile range."""
        if percentile >= 75:
            return self.colors['elite']
        elif percentile >= 50:
            return self.colors['above_avg']
        elif percentile >= 25:
            return self.colors['average']
        elif percentile >= 10:
            return self.colors['below_avg']
        else:
            return self.colors['poor']

    def create_full_dashboard(self,
                              prospect: ProspectProfile,
                              comparison_prospects: Optional[List[ProspectProfile]] = None,
                              population_data: Optional[Dict[str, List[float]]] = None) -> plt.Figure:
        """
        Create complete 4-panel comparison dashboard.

        Args:
            prospect: Primary prospect to analyze
            comparison_prospects: Optional additional prospects to overlay
            population_data: Population distributions for percentile calculation

        Returns:
            matplotlib Figure with complete dashboard
        """
        fig = plt.figure(figsize=(16, 12))
        gs = gridspec.GridSpec(2, 2, figure=fig, hspace=0.25, wspace=0.2)

        metrics = self.wr_metrics if prospect.position == 'WR' else self.qb_metrics

        # Panel 1: Radar Profile
        ax1 = fig.add_subplot(gs[0, 0], polar=True)
        self._create_radar_panel(ax1, prospect, comparison_prospects, metrics)

        # Panel 2: Percentile Chart
        ax2 = fig.add_subplot(gs[0, 1])
        self._create_percentile_panel(ax2, prospect, population_data, metrics)

        # Panel 3: Historical Comparisons
        ax3 = fig.add_subplot(gs[1, 0])
        self._create_similarity_panel(ax3, prospect)

        # Panel 4: Situational Analysis
        ax4 = fig.add_subplot(gs[1, 1])
        self._create_situational_panel(ax4, prospect)

        # Title
        fig.suptitle(f'{prospect.name} ({prospect.position}) - {prospect.school}\n'
                    f'NFL Draft Comparison Profile',
                    fontsize=16, fontweight='bold', y=0.98)

        return fig

    def _create_radar_panel(self, ax, prospect: ProspectProfile,
                           comparisons: List[ProspectProfile],
                           metrics: List[str]):
        """Create radar chart panel showing profile comparison."""
        num_metrics = len(metrics)
        angles = np.linspace(0, 2 * np.pi, num_metrics, endpoint=False).tolist()
        angles += angles[:1]

        colors = [self.colors['primary'], self.colors['secondary'],
                 self.colors['tertiary']]

        # Plot primary prospect
        values = [prospect.metrics.get(m, 50) / 100 for m in metrics]
        values += values[:1]

        ax.plot(angles, values, 'o-', linewidth=3, color=colors[0],
               label=prospect.name)
        ax.fill(angles, values, alpha=0.25, color=colors[0])

        # Plot comparison prospects
        if comparisons:
            for idx, comp in enumerate(comparisons[:2]):
                comp_values = [comp.metrics.get(m, 50) / 100 for m in metrics]
                comp_values += comp_values[:1]

                ax.plot(angles, comp_values, 'o-', linewidth=2,
                       color=colors[idx + 1], alpha=0.8, label=comp.name)
                ax.fill(angles, comp_values, alpha=0.1, color=colors[idx + 1])

        ax.set_xticks(angles[:-1])
        ax.set_xticklabels(metrics, size=8)
        ax.set_ylim(0, 1)
        ax.set_yticks([0.25, 0.5, 0.75, 1.0])
        ax.set_yticklabels(['25th', '50th', '75th', '100th'], size=7, alpha=0.7)
        ax.legend(loc='upper right', bbox_to_anchor=(1.25, 1.1), fontsize=9)
        ax.set_title('Profile Comparison', fontsize=11, fontweight='bold', pad=15)

    def _create_percentile_panel(self, ax, prospect: ProspectProfile,
                                 population: Dict[str, List[float]],
                                 metrics: List[str]):
        """Create percentile ranking horizontal bar panel."""
        percentiles = []
        for metric in metrics:
            if population and metric in population:
                raw_val = prospect.raw_stats.get(metric, 0)
                pct = sp_stats.percentileofscore(population[metric], raw_val)
            else:
                pct = prospect.metrics.get(metric, 50)
            percentiles.append(pct)

        y_positions = range(len(metrics))

        # Background zones
        ax.axvspan(0, 25, color='#fee2e2', alpha=0.4)
        ax.axvspan(25, 50, color='#fef3c7', alpha=0.4)
        ax.axvspan(50, 75, color='#d1fae5', alpha=0.4)
        ax.axvspan(75, 100, color='#a7f3d0', alpha=0.4)

        # Bars
        colors = [self._get_percentile_color(p) for p in percentiles]
        bars = ax.barh(y_positions, percentiles, color=colors,
                      edgecolor='white', linewidth=0.5, height=0.7)

        # Labels
        for bar, pct, metric in zip(bars, percentiles, metrics):
            raw_val = prospect.raw_stats.get(metric, 0)
            ax.text(bar.get_width() + 2, bar.get_y() + bar.get_height()/2,
                   f'{pct:.0f}th ({raw_val:.1f})', va='center', fontsize=8)

        ax.axvline(50, color='gray', linestyle='--', linewidth=1, alpha=0.5)
        ax.set_yticks(y_positions)
        ax.set_yticklabels(metrics, fontsize=9)
        ax.set_xlim(0, 105)
        ax.set_xlabel('Percentile Rank', fontsize=10)
        ax.set_title('Percentile Profile', fontsize=11, fontweight='bold')

        ax.spines['top'].set_visible(False)
        ax.spines['right'].set_visible(False)

    def _create_similarity_panel(self, ax, prospect: ProspectProfile):
        """Create historical similarity comparison panel."""
        if not prospect.similar_players:
            ax.text(0.5, 0.5, 'Similarity data not available',
                   ha='center', va='center', transform=ax.transAxes)
            ax.axis('off')
            return

        names = [p[0] for p in prospect.similar_players[:5]]
        scores = [p[1] for p in prospect.similar_players[:5]]

        y_positions = range(len(names))

        bars = ax.barh(y_positions, scores, color=self.colors['tertiary'],
                      edgecolor='white', linewidth=0.5, height=0.7)

        for bar, score in zip(bars, scores):
            ax.text(bar.get_width() + 0.02, bar.get_y() + bar.get_height()/2,
                   f'{score:.0%}', va='center', fontsize=9)

        ax.set_yticks(y_positions)
        ax.set_yticklabels(names, fontsize=9)
        ax.set_xlim(0, 1.1)
        ax.set_xlabel('Similarity Score', fontsize=10)
        ax.set_title(f'Most Similar NFL Players\n(Based on College Profile)',
                    fontsize=11, fontweight='bold')

        ax.invert_yaxis()
        ax.spines['top'].set_visible(False)
        ax.spines['right'].set_visible(False)

    def _create_situational_panel(self, ax, prospect: ProspectProfile):
        """Create situational splits heatmap panel."""
        situations = ['1st Down', '2nd & Long', '2nd & Short',
                     '3rd & Long', '3rd & Short', 'Red Zone']
        periods = ['Q1', 'Q2', 'Q3', 'Q4']

        # Generate sample data if not available
        if prospect.situational_splits:
            data = [[prospect.situational_splits.get(f'{s}_{p}', 50)
                    for p in periods] for s in situations]
        else:
            np.random.seed(42)
            data = np.random.uniform(40, 90, (len(situations), len(periods)))

        data = np.array(data)

        im = ax.imshow(data, cmap='RdYlGn', aspect='auto',
                      vmin=30, vmax=100)

        ax.set_xticks(range(len(periods)))
        ax.set_xticklabels(periods, fontsize=9)
        ax.set_yticks(range(len(situations)))
        ax.set_yticklabels(situations, fontsize=9)

        # Add value annotations
        for i in range(len(situations)):
            for j in range(len(periods)):
                text_color = 'white' if data[i, j] < 55 or data[i, j] > 75 else 'black'
                ax.text(j, i, f'{data[i, j]:.0f}',
                       ha='center', va='center', fontsize=8, color=text_color)

        ax.set_title('Situational Performance\n(Efficiency Rating)',
                    fontsize=11, fontweight='bold')

        # Colorbar
        cbar = plt.colorbar(im, ax=ax, shrink=0.8)
        cbar.set_label('Efficiency', fontsize=9)
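
A minimal usage sketch of the classes above, using made-up prospect values (in the actual project, ProspectProfile was populated from the scouting database, so every name and number below is purely illustrative):

# Illustrative usage only -- all values below are hypothetical.
sample_prospect = ProspectProfile(
    name='Sample Receiver',
    position='WR',
    school='State University',
    metrics={'Separation': 82, 'Catch Rate': 74, 'YAC/Reception': 68,
             'Contested Catch %': 55, 'Route Running': 77, 'Speed Score': 90,
             'Yards/Route': 71, 'Deep Target Rate': 63},
    raw_stats={'Separation': 3.1, 'Catch Rate': 68.5, 'YAC/Reception': 6.2,
               'Contested Catch %': 48.0, 'Route Running': 86.0,
               'Speed Score': 112.4, 'Yards/Route': 2.4, 'Deep Target Rate': 24.0},
    similar_players=[('Player A', 0.93), ('Player B', 0.89), ('Player C', 0.86)],
    situational_splits={}   # empty -> situational panel falls back to sample data
)

dashboard = DraftComparisonDashboard()
fig = dashboard.create_full_dashboard(sample_prospect)
fig.savefig('sample_prospect_dashboard.png', dpi=200, bbox_inches='tight')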

Results and Impact

Usability Testing Results

The dashboard prototype was tested with 8 NFL scouts and 4 general manager assistants:

Metric                           Before    After     Change
Time to initial assessment       12 min    3 min     -75%
Confidence in comparison         6.2/10    8.4/10    +35%
Information recall (24hr)        42%       71%       +69%
Cross-team discussion clarity    "Low"     "High"    Significant

Key Design Learnings

  1. Radar charts work for profiles: Despite limitations, scouts found radar charts intuitive for "player type" identification

  2. Percentile context is essential: Raw statistics without context were frequently misinterpreted; percentiles solved this

  3. Historical comparisons drive decisions: The similarity panel was rated as "most valuable" by 9 of 12 testers

  4. Situational splits reveal fit: Teams could quickly assess scheme fit using situational data

Fairness Considerations Implemented

  • System adjustment: Passing metrics adjusted for Air Raid vs. Pro-Style systems (see the sketch after this list)
  • Competition adjustment: Metrics weighted by opponent defensive rankings
  • Usage normalization: Per-play metrics used rather than counting stats
  • Injury notation: Games missed clearly indicated in profile
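
A minimal sketch of how the first three adjustments might be combined into one adjusted value; the scaling factors, system names, and function signature here are illustrative assumptions, not the prototype's actual values:

# Illustrative context-adjustment sketch; multipliers and names are assumptions.
SYSTEM_FACTORS = {'Air Raid': 0.92, 'Spread Option': 0.96, 'Pro-Style': 1.00}

def adjust_for_context(raw_value: float,
                       offensive_system: str,
                       opponent_defense_rank: int,
                       plays: int) -> float:
    """Return a per-play metric value adjusted for system and competition."""
    per_play = raw_value / max(plays, 1)                    # usage normalization
    system_factor = SYSTEM_FACTORS.get(offensive_system, 1.0)
    # Rank 1 (toughest defenses faced) earns a boost; rank 130 a discount
    competition_factor = 1.0 + (65 - opponent_defense_rank) / 650
    return per_play * system_factor * competition_factor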

Technical Challenges Overcome

Challenge 1: Metric Normalization Across Positions

Problem: Different positions have entirely different relevant metrics.

Solution: Created position-specific metric templates with consistent 0-100 normalization based on position-specific populations.
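
A minimal sketch of that normalization step, assuming the population is supplied as a dict of per-metric value lists for the prospect's position group (the helper name is hypothetical):

from scipy import stats as sp_stats

def normalize_to_position(raw_stats: dict, population: dict) -> dict:
    """Map raw metric values to 0-100 percentile scores within the
    prospect's own position group (population[metric] holds the values
    for all players at that position)."""
    return {metric: sp_stats.percentileofscore(population[metric], value)
            for metric, value in raw_stats.items()
            if metric in population}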

Challenge 2: Similarity Calculation

Problem: How to compute meaningful similarity when comparing to historical NFL players?

Solution:

def calculate_similarity(prospect_features: np.ndarray,
                        historical_features: np.ndarray,
                        feature_weights: np.ndarray = None) -> float:
    """
    Calculate weighted cosine similarity between prospect and historical player.

    Weights emphasize metrics most predictive of NFL success.
    """
    if feature_weights is None:
        feature_weights = np.ones(len(prospect_features))

    weighted_prospect = prospect_features * feature_weights
    weighted_historical = historical_features * feature_weights

    dot_product = np.dot(weighted_prospect, weighted_historical)
    norm_p = np.linalg.norm(weighted_prospect)
    norm_h = np.linalg.norm(weighted_historical)

    if norm_p == 0 or norm_h == 0:
        return 0.0

    return dot_product / (norm_p * norm_h)
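
Building on calculate_similarity, the similar_players list was then populated by scoring the prospect against the historical database and keeping the top matches. A rough sketch of that ranking step (the database layout here is a simplifying assumption):

def top_similar_players(prospect_features: np.ndarray,
                        historical_db: Dict[str, np.ndarray],
                        feature_weights: np.ndarray = None,
                        k: int = 5) -> List[Tuple[str, float]]:
    """Rank historical players by profile similarity to the prospect.

    historical_db maps player name -> feature vector in the same order
    and scale as prospect_features. Returns [(name, score), ...], best first.
    """
    scores = [(name, calculate_similarity(prospect_features, feats, feature_weights))
              for name, feats in historical_db.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:k]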

Challenge 3: Print Compatibility

Problem: Dashboard needed to work in printed handouts for in-person meetings.

Solution:

  • Used color schemes that remain distinguishable in grayscale
  • Added value labels so color wasn't the sole encoding
  • Tested printability at multiple DPI settings
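
As a quick check of the first point, each palette color can be converted to its grayscale luminance to confirm the categories stay separated on a monochrome printer; a rough sketch using the dashboard's percentile palette:

import matplotlib.colors as mcolors

def grayscale_luminance(hex_color: str) -> float:
    """Approximate perceived brightness (0 = black, 1 = white)."""
    r, g, b = mcolors.to_rgb(hex_color)
    return 0.299 * r + 0.587 * g + 0.114 * b   # Rec. 601 luma weights

palette = ['#1a9641', '#a6d96a', '#ffffbf', '#fdae61', '#d7191c']
luminances = sorted(grayscale_luminance(c) for c in palette)
# Adjacent values should differ noticeably; where they don't, the value
# labels (second point above) carry the encoding.
print([f'{lum:.2f}' for lum in luminances])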


Lessons for Practitioners

  1. Context over precision: Scouts valued understanding "what type of player" over precise metric values

  2. Consistency builds trust: Using the same visualization structure across all prospects built familiarity and enabled rapid comparison

  3. Similarity is powerful: Linking prospects to known quantities (successful NFL players) dramatically increased decision confidence

  4. Situational data reveals nuance: Aggregate statistics hide important context; situational breakdowns were "the difference maker" per scouts


Extension Opportunities

  • Interactive version: Web-based dashboard allowing metric toggling
  • Video integration: Linking metrics to representative video clips
  • Projection modeling: Adding expected NFL performance ranges based on historical similarity
  • Team fit scoring: Customizing similarity search based on team's offensive/defensive scheme

Code Repository

Complete implementation available in code/case-study-code.py with:

  • DraftComparisonDashboard class
  • Sample prospect data generation
  • Full working demonstration