In This Chapter
- Introduction
- 17.1 Offensive Rating: Points Per 100 Possessions
- 17.2 Team vs. Individual Offensive Rating
- 17.3 Play Type Analysis
- 17.4 Spacing and Floor Balance Metrics
- 17.5 Ball Movement and Passing Analytics
- 17.6 Half-Court vs. Transition Offense Efficiency
- 17.7 Assist Networks and Ball Distribution
- 17.8 Shot Creation vs. Shot Conversion
- 17.9 Integrating Offensive Efficiency Metrics
- 17.10 Case Studies in Offensive Excellence
- 17.11 Practical Applications
- Summary
- Key Formulas Reference
- Further Reading
Chapter 17: Team Offensive Efficiency
Introduction
The evolution of basketball analytics has fundamentally transformed how we evaluate offensive performance. Where once points per game served as the primary metric for offensive prowess, modern analysis demands a more sophisticated approach that accounts for pace, possession efficiency, and the multidimensional nature of team offense. This chapter explores the comprehensive framework for measuring and analyzing team offensive efficiency, from foundational calculations to advanced play-type breakdowns and spatial analytics.
Understanding offensive efficiency requires moving beyond raw counting statistics to examine how teams generate and convert scoring opportunities. A team scoring 120 points might be less efficient than one scoring 105, depending on how many possessions each team used. This possession-based approach forms the cornerstone of modern offensive analysis and allows for meaningful comparisons across eras, teams, and playing styles.
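The arithmetic behind that comparison is worth making concrete. A minimal sketch using the point totals from the paragraph above, with possession counts assumed purely for illustration:

```python
# Hypothetical totals: the 120- and 105-point figures from the text,
# with assumed possession counts for each team.
team_a_points, team_a_poss = 120, 110  # faster-paced team
team_b_points, team_b_poss = 105, 90   # slower-paced team

ortg_a = team_a_points / team_a_poss * 100  # points per 100 possessions
ortg_b = team_b_points / team_b_poss * 100

# Team B scores 15 fewer points yet is the more efficient offense.
print(round(ortg_a, 1), round(ortg_b, 1))
```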
The modern NBA has witnessed a tactical revolution driven by efficiency analysis. Teams have shifted toward three-point shooting, rim attacks, and spacing-oriented offenses based on expected value calculations. Understanding these principles enables analysts to evaluate not just what teams do, but why their choices maximize offensive production.
17.1 Offensive Rating: Points Per 100 Possessions
17.1.1 The Foundation of Efficiency Measurement
Offensive Rating (ORtg) represents the number of points a team scores per 100 possessions. This pace-adjusted metric provides the fundamental basis for comparing offensive performance across different contexts. The formula is straightforward:
$$\text{Offensive Rating} = \frac{\text{Points Scored}}{\text{Possessions}} \times 100$$
The critical challenge lies in accurately estimating possessions. The standard possession formula, refined over decades of analytical work, is:
$$\text{Possessions} = \text{FGA} + 0.44 \times \text{FTA} - \text{OREB} + \text{TOV}$$
The 0.44 coefficient for free throw attempts accounts for the various situations in which free throws occur:
- Two-shot fouls (most common)
- Three-shot fouls (on three-point attempts)
- And-one opportunities (no additional possession cost)
- Technical free throws (separate possession)
This coefficient represents the average proportion of free throw trips that constitute a new possession, empirically derived from league-wide data.
17.1.2 Advanced Possession Estimation
For more precise analysis, particularly at the individual player level, enhanced possession formulas incorporate additional factors:
$$\text{Possessions}_{adj} = \text{FGA} + 0.44 \times \text{FTA} - 1.07 \times \frac{\text{OREB} \times (\text{FGA} - \text{FGM})}{\text{FGA} - \text{FGM} + \text{OREB}} + \text{TOV}$$
This adjustment accounts for the team's offensive rebounding rate when allocating missed shot possessions.
17.1.3 Historical Context and Benchmarks
Understanding offensive rating requires context. League average offensive ratings have fluctuated significantly:
| Era | League Average ORtg | Notable High |
|---|---|---|
| 1980s | 107.5 | 1981-82 Nuggets (115.8) |
| 1990s | 106.3 | 1995-96 Bulls (115.2) |
| 2000s | 106.1 | 2004-05 Suns (114.5) |
| 2010s | 107.8 | 2018-19 Warriors (115.9) |
| 2020s | 112.0+ | Multiple 117+ teams |
The recent surge in offensive efficiency reflects tactical evolution: increased three-point shooting, faster pace, and analytics-driven shot selection emphasizing high-value attempts.
17.1.4 Python Implementation
import numpy as np
import pandas as pd
from typing import Tuple, Dict, Optional
def calculate_possessions(fga: int, fta: int, oreb: int, tov: int,
fgm: Optional[int] = None,
use_advanced: bool = False) -> float:
"""
Calculate team possessions using standard or advanced formula.
Parameters:
-----------
fga : int - Field goal attempts
fta : int - Free throw attempts
oreb : int - Offensive rebounds
tov : int - Turnovers
fgm : int, optional - Field goals made (required for advanced formula)
use_advanced : bool - Whether to use advanced possession formula
Returns:
--------
float - Estimated possessions
"""
if use_advanced and fgm is not None:
# Advanced formula with offensive rebounding adjustment
missed_fg = fga - fgm
if missed_fg + oreb > 0:
oreb_factor = 1.07 * (oreb * missed_fg) / (missed_fg + oreb)
else:
oreb_factor = 0
possessions = fga + 0.44 * fta - oreb_factor + tov
else:
# Standard formula
possessions = fga + 0.44 * fta - oreb + tov
return possessions
def calculate_offensive_rating(points: int, possessions: float) -> float:
"""
Calculate offensive rating (points per 100 possessions).
Parameters:
-----------
points : int - Total points scored
possessions : float - Total possessions
Returns:
--------
float - Offensive rating
"""
if possessions == 0:
return 0.0
return (points / possessions) * 100
def team_offensive_summary(team_stats: Dict) -> Dict:
"""
Calculate comprehensive offensive efficiency metrics for a team.
Parameters:
-----------
team_stats : dict - Dictionary containing team statistics
Required keys: points, fga, fgm, fta, ftm, fg3a, fg3m, oreb, tov
Returns:
--------
dict - Dictionary of offensive efficiency metrics
"""
possessions = calculate_possessions(
team_stats['fga'], team_stats['fta'],
team_stats['oreb'], team_stats['tov'],
team_stats['fgm'], use_advanced=True
)
ortg = calculate_offensive_rating(team_stats['points'], possessions)
# Shooting efficiency metrics
efg_pct = (team_stats['fgm'] + 0.5 * team_stats['fg3m']) / team_stats['fga']
ts_pct = team_stats['points'] / (2 * (team_stats['fga'] + 0.44 * team_stats['fta']))
# Turnover rate
tov_rate = team_stats['tov'] / possessions * 100
# Offensive rebounding rate (requires opponent data)
# oreb_rate = oreb / (oreb + opponent_dreb) * 100
# Free throw rate
ft_rate = team_stats['fta'] / team_stats['fga']
return {
'offensive_rating': round(ortg, 1),
'possessions': round(possessions, 1),
'efg_pct': round(efg_pct, 3),
'ts_pct': round(ts_pct, 3),
'tov_rate': round(tov_rate, 1),
'ft_rate': round(ft_rate, 3),
'points_per_possession': round(ortg / 100, 3)
}
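As a quick sanity check on the standard formula, here is a worked example with an invented box score (all numbers hypothetical):

```python
# Hypothetical team box score (invented for illustration)
points, fga, fta, oreb, tov = 115, 88, 22, 10, 14

# Standard possession estimate: FGA + 0.44 * FTA - OREB + TOV
possessions = fga + 0.44 * fta - oreb + tov  # 88 + 9.68 - 10 + 14

# Offensive rating: points per 100 possessions
ortg = points / possessions * 100

print(round(possessions, 2), round(ortg, 1))
```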
17.2 Team vs. Individual Offensive Rating
17.2.1 Conceptual Differences
While team offensive rating directly measures points per 100 possessions, individual offensive rating requires allocation of team production to individual players. This introduces significant methodological complexity because basketball is inherently a team sport where individual contributions interweave.
Dean Oliver's pioneering work in "Basketball on Paper" established the framework for individual offensive rating. The calculation involves:
- Individual Scoring Production: Points produced through shooting, assisted and unassisted
- Assist Production: Points created for teammates
- Offensive Rebounding: Second-chance points generated
$$\text{Individual ORtg} = \frac{\text{Points Produced}}{\text{Individual Possessions Used}} \times 100$$
17.2.2 The Points Produced Framework
Individual points produced accounts for:
Scoring Points: $$\text{ScPoss} = (\text{FGM} + (1 - (1 - \text{FT\%})^2) \times 0.44 \times \text{FTA} \times \text{Play\%})$$
Where Play% represents the proportion of made shots that were unassisted.
Assist Points: $$\text{AST\_Pts} = \text{AST} \times \text{Teammate FG Value}$$
The teammate field goal value considers whether assists led to two-point or three-point makes.
17.2.3 Contextual Adjustments
Individual offensive ratings must be interpreted in context:
- Usage Rate: Higher usage typically correlates with lower efficiency due to diminishing returns
- Role: Primary creators face different defenses than catch-and-shoot players
- Teammates: Playing alongside skilled players elevates individual metrics
- Competition: Opponent quality affects efficiency
The relationship between usage and efficiency follows a predictable pattern:
$$\text{Expected ORtg} = \alpha - \beta \times \text{USG\%}$$
Where elite players exceed this expectation while maintaining high usage.
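The linear relationship above can be estimated from player-season data with ordinary least squares. A minimal sketch using numpy on invented data points (the sample values and the resulting α and β are illustrative, not league-calibrated):

```python
import numpy as np

# Invented (usage %, offensive rating) pairs for illustration
usg = np.array([18.0, 22.0, 26.0, 30.0, 34.0])
ortg = np.array([116.0, 113.5, 111.0, 108.5, 106.0])

# Fit ORtg = alpha - beta * USG%; np.polyfit returns (slope, intercept)
slope, alpha = np.polyfit(usg, ortg, 1)
beta = -slope  # usage penalty per percentage point of usage

# A hypothetical player at 30% usage posting a 114 ORtg beats expectation
expected = alpha - beta * 30.0
residual = 114.0 - expected  # positive => exceeds the usage curve
print(round(alpha, 2), round(beta, 3), round(residual, 1))
```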
17.2.4 On-Off Differential Analysis
A powerful approach to individual offensive impact measures team performance with and without a player:
$$\text{On-Off ORtg} = \text{Team ORtg}_{player\_on} - \text{Team ORtg}_{player\_off}$$
This captures a player's total offensive impact, including both direct production and effects on teammates.
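A minimal sketch of the on-off calculation from stint-level data; the column names and stint totals below are hypothetical, and real lineup data would come from play-by-play processing:

```python
import pandas as pd

# Hypothetical stints: points and possessions with the player on/off the floor
stints = pd.DataFrame({
    'player_on':   [True, True, False, False],
    'points':      [34, 28, 20, 25],
    'possessions': [28, 25, 20, 24],
})

# Aggregate each split, then compute points per 100 possessions
grouped = stints.groupby('player_on')[['points', 'possessions']].sum()
ortg = grouped['points'] / grouped['possessions'] * 100

# Positive value => team scores better with the player on the floor
on_off = ortg[True] - ortg[False]
print(round(on_off, 1))
```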
def calculate_individual_ortg(player_stats: Dict, team_stats: Dict) -> Dict:
"""
Estimate individual offensive rating using simplified Oliver method.
Parameters:
-----------
player_stats : dict - Individual player statistics
team_stats : dict - Team statistics for context
Returns:
--------
dict - Individual offensive metrics
"""
# Simplified individual possessions
fga = player_stats['fga']
fta = player_stats['fta']
tov = player_stats['tov']
ast = player_stats['ast']
# Estimate team assist percentage on player's makes
team_ast_rate = team_stats.get('ast_rate', 0.55)
# Individual possessions used
ind_poss = fga + 0.44 * fta + tov + ast * 0.5 * (1 - team_ast_rate)
# Simplified points produced
fgm = player_stats['fgm']
fg3m = player_stats['fg3m']
ftm = player_stats['ftm']
# Direct scoring points
scoring_pts = 2 * (fgm - fg3m) + 3 * fg3m + ftm
# Assist points (simplified: assume average of 2.2 points per assist)
ast_pts = ast * 0.5 * 2.2
total_pts_produced = scoring_pts + ast_pts
if ind_poss > 0:
ind_ortg = (total_pts_produced / ind_poss) * 100
else:
ind_ortg = 0
# Usage rate
team_poss = team_stats.get('possessions', 100)
minutes_pct = player_stats.get('minutes', 36) / 48
usage = ind_poss / (team_poss * minutes_pct) * 100 if team_poss > 0 else 0
return {
'individual_ortg': round(ind_ortg, 1),
'usage_rate': round(usage, 1),
'points_produced': round(total_pts_produced, 1),
'possessions_used': round(ind_poss, 1)
}
17.3 Play Type Analysis
17.3.1 The Play Type Framework
Modern tracking data enables granular analysis of offensive efficiency by play type. The NBA and tracking providers categorize possessions into distinct offensive actions:
- Pick and Roll Ball Handler: The ball handler in screen actions
- Pick and Roll Roll Man: The screener rolling or popping
- Isolation: One-on-one play without screens
- Post-Up: Back-to-basket play in the paint
- Spot-Up: Catch-and-shoot opportunities
- Transition: Fast break and early offense
- Off-Screen: Coming off pindowns or curl actions
- Cut: Basket cuts and movement without the ball
- Hand-Off: Dribble hand-off actions
- Putback: Offensive rebound attempts
- Miscellaneous: Uncategorized plays
17.3.2 Efficiency by Play Type
League-average efficiency varies dramatically by play type:
| Play Type | PPP | Frequency | eFG% |
|---|---|---|---|
| Transition | 1.12 | 15% | 56% |
| Cut | 1.28 | 6% | 64% |
| Putback | 1.05 | 5% | 52% |
| Spot-Up | 0.96 | 18% | 48% |
| Pick and Roll (BH) | 0.91 | 20% | 44% |
| Isolation | 0.88 | 8% | 43% |
| Pick and Roll (RM) | 1.10 | 8% | 55% |
| Post-Up | 0.87 | 5% | 43% |
| Off-Screen | 0.98 | 5% | 49% |
| Hand-Off | 0.93 | 4% | 46% |
17.3.3 Frequency-Efficiency Tradeoff
Teams face a fundamental tradeoff: the most efficient plays (cuts, transition) cannot be run at will, while controllable plays (isolation, pick-and-roll) typically yield lower efficiency. Optimal offensive design maximizes both:
$$\text{Offensive Value} = \sum_{i} (\text{Frequency}_i \times \text{PPP}_i)$$
Subject to constraints on play availability and defensive adjustment.
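The expected-value sum above can be evaluated directly from a play-type profile. A sketch using the league-average PPP values from the table in 17.3.2; the team frequencies are a hypothetical profile assumed to cover all possessions:

```python
# League-average PPP by play type (values from the table above)
league_ppp = {
    'transition': 1.12, 'cut': 1.28, 'putback': 1.05, 'spot_up': 0.96,
    'pnr_ball_handler': 0.91, 'isolation': 0.88, 'pnr_roll_man': 1.10,
    'post_up': 0.87, 'off_screen': 0.98, 'handoff': 0.93,
}

# Hypothetical team frequencies (sum to 1.0 across the profile)
freq = {
    'transition': 0.17, 'cut': 0.07, 'putback': 0.05, 'spot_up': 0.20,
    'pnr_ball_handler': 0.18, 'isolation': 0.06, 'pnr_roll_man': 0.09,
    'post_up': 0.04, 'off_screen': 0.07, 'handoff': 0.07,
}

# Offensive value = sum of frequency * PPP, i.e. expected points per possession
expected_ppp = sum(freq[pt] * league_ppp[pt] for pt in freq)
print(round(expected_ppp * 100, 1))  # scaled to an offensive-rating-like number
```

Shifting frequency from low-PPP play types toward high-PPP ones raises the sum, which is the expected-value logic behind the modern shift toward transition and spot-up attempts.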
17.3.4 Play Type Versatility Index
Teams can be evaluated on their ability to generate efficient offense across multiple play types:
$$\text{PTV Index} = \sum_{i} w_i \times \frac{\text{Team PPP}_i}{\text{League PPP}_i}$$
Where weights reflect play frequency and importance.
class PlayTypeAnalyzer:
"""Analyze team offensive efficiency by play type."""
PLAY_TYPES = [
'transition', 'isolation', 'pick_roll_ball_handler',
'pick_roll_roll_man', 'post_up', 'spot_up', 'handoff',
'cut', 'off_screen', 'putback', 'misc'
]
# League average PPP by play type (example values)
LEAGUE_AVERAGES = {
'transition': 1.12,
'isolation': 0.88,
'pick_roll_ball_handler': 0.91,
'pick_roll_roll_man': 1.10,
'post_up': 0.87,
'spot_up': 0.96,
'handoff': 0.93,
'cut': 1.28,
'off_screen': 0.98,
'putback': 1.05,
'misc': 0.90
}
def __init__(self, team_play_type_data: Dict):
"""
Initialize analyzer with team play type data.
Parameters:
-----------
team_play_type_data : dict
Keys are play types, values are dicts with 'possessions',
'points', 'frequency'
"""
self.data = team_play_type_data
def calculate_ppp(self, play_type: str) -> float:
"""Calculate points per possession for a play type."""
if play_type not in self.data:
return 0.0
pt_data = self.data[play_type]
if pt_data['possessions'] == 0:
return 0.0
return pt_data['points'] / pt_data['possessions']
def get_play_type_efficiency(self) -> pd.DataFrame:
"""Get efficiency metrics for all play types."""
results = []
for pt in self.PLAY_TYPES:
if pt in self.data:
ppp = self.calculate_ppp(pt)
league_avg = self.LEAGUE_AVERAGES.get(pt, 1.0)
results.append({
'play_type': pt,
'possessions': self.data[pt]['possessions'],
'points': self.data[pt]['points'],
'ppp': round(ppp, 3),
'frequency': self.data[pt]['frequency'],
'vs_league': round(ppp - league_avg, 3),
'relative_efficiency': round(ppp / league_avg, 3)
})
return pd.DataFrame(results)
def calculate_versatility_index(self) -> float:
"""
Calculate Play Type Versatility Index.
Higher values indicate efficient offense across multiple play types.
"""
total_score = 0
total_weight = 0
for pt in self.PLAY_TYPES:
if pt in self.data:
freq = self.data[pt]['frequency']
ppp = self.calculate_ppp(pt)
league_avg = self.LEAGUE_AVERAGES.get(pt, 1.0)
# Weight by frequency, score by relative efficiency
if league_avg > 0:
total_score += freq * (ppp / league_avg)
total_weight += freq
if total_weight > 0:
return round(total_score / total_weight * 100, 1)
return 0.0
def identify_strengths_weaknesses(self, threshold: float = 0.05) -> Dict:
"""
Identify play types where team excels or struggles.
Parameters:
-----------
threshold : float - Minimum difference from league average
Returns:
--------
dict with 'strengths' and 'weaknesses' lists
"""
strengths = []
weaknesses = []
for pt in self.PLAY_TYPES:
if pt in self.data and self.data[pt]['frequency'] >= 0.03:
ppp = self.calculate_ppp(pt)
league_avg = self.LEAGUE_AVERAGES.get(pt, 1.0)
diff = ppp - league_avg
if diff >= threshold:
strengths.append((pt, round(diff, 3)))
elif diff <= -threshold:
weaknesses.append((pt, round(diff, 3)))
return {
'strengths': sorted(strengths, key=lambda x: x[1], reverse=True),
'weaknesses': sorted(weaknesses, key=lambda x: x[1])
}
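To make the versatility index concrete, here is a standalone sketch computing the frequency-weighted relative-efficiency score for a two-play-type profile; the numbers are invented, and the `PlayTypeAnalyzer` above performs the same calculation over all categories:

```python
# Hypothetical (frequency, team PPP, league-average PPP) profile
profile = [
    (0.60, 0.95, 0.91),  # pick-and-roll ball handler: above league average
    (0.40, 1.20, 1.12),  # transition: also above league average
]

# Frequency-weighted relative efficiency, scaled so 100 = league average
total_score = sum(f * (team / league) for f, team, league in profile)
total_weight = sum(f for f, _, _ in profile)

ptv_index = total_score / total_weight * 100
print(round(ptv_index, 1))
```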
17.4 Spacing and Floor Balance Metrics
17.4.1 The Geometry of Modern Offense
Offensive spacing has become a central focus of modern basketball strategy. Proper spacing creates driving lanes, opens passing windows, and forces defensive help rotations. Tracking data enables precise measurement of spacing patterns.
Average Spacing measures the mean distance between all offensive players:
$$\text{Avg Spacing} = \frac{2}{n(n-1)} \sum_{i<j} d(p_i, p_j)$$
Where $d(p_i, p_j)$ is the Euclidean distance between players $i$ and $j$.
17.4.2 Convex Hull Analysis
The convex hull of offensive player positions represents the area they collectively control:
$$\text{Offensive Hull Area} = \text{Area}(\text{ConvexHull}(P_1, P_2, P_3, P_4, P_5))$$
Larger hull areas indicate better spacing, though optimal values depend on offensive strategy.
17.4.3 Paint Density and Three-Point Coverage
Effective offense balances interior and perimeter presence:
Paint Touch Rate:
$$\text{Paint Touch Rate} = \frac{\text{Possessions with Paint Touch}}{\text{Total Possessions}}$$
Three-Point Gravity:
$$\text{3PT Gravity} = \frac{\sum_{i} \text{Shooter}_i \times \text{Attention}_i}{\text{Total Defensive Attention}}$$
Where attention reflects defensive positioning toward three-point threats.
17.4.4 Floor Balance Index
Floor balance measures how evenly players distribute across the court:
$$\text{FBI} = 1 - \frac{\sigma_{zones}}{\sigma_{max}}$$
Where $\sigma_{zones}$ is the standard deviation of player positions across court zones and $\sigma_{max}$ is the theoretical maximum.
17.5 Ball Movement and Passing Analytics
17.5.1 The Value of Ball Movement
Ball movement creates defensive breakdowns, generates open shots, and optimizes shot selection. Teams that move the ball effectively tend to get higher-quality looks. Key metrics include:
Passes Per Possession:
$$\text{PPP}_{passes} = \frac{\text{Total Passes}}{\text{Total Possessions}}$$
Average Touch Time:
$$\text{ATT} = \frac{\sum \text{Individual Touch Durations}}{\text{Total Touches}}$$
Lower touch times generally indicate better ball movement, though context matters.
17.5.2 Pass Quality Metrics
Not all passes are equal. Quality metrics evaluate the value added by passing:
Potential Assist Rate:
$$\text{PAR} = \frac{\text{Passes Leading to Shots}}{\text{Total Passes}}$$
Assist Conversion Rate:
$$\text{ACR} = \frac{\text{Actual Assists}}{\text{Potential Assists}}$$
Secondary Assist Value:
A pass leading to the assist pass creates indirect value:
$$\text{Hockey Assist Value} = \sum \text{Points from 2-pass sequences}$$
17.5.3 Expected Assist Model
Modern tracking enables expected assist modeling:
$$E[\text{Assist}] = P(\text{Shot Attempt}) \times P(\text{Make}|\text{Attempt}) \times \text{Pass Quality Factor}$$
Where the pass quality factor incorporates defender positioning, shot location, and shooter skill.
17.5.4 Ball Movement Efficiency Score
Combining metrics into a comprehensive ball movement score:
$$\text{BMES} = \alpha \cdot \text{Passes Per Poss} + \beta \cdot (1 - \text{ATT}) + \gamma \cdot \text{AST Rate} + \delta \cdot \text{Openness Created}$$
With weights calibrated to predict offensive efficiency.
17.6 Half-Court vs. Transition Offense Efficiency
17.6.1 Fundamentally Different Contexts
Transition and half-court offense represent fundamentally different offensive environments:
Transition Offense:
- Defense not set
- Numerical advantages possible
- Quick decision-making required
- Higher efficiency potential
- Limited play calling
Half-Court Offense:
- Set defense
- Even numbers
- Structured plays available
- Lower but more consistent efficiency
- Greater tactical complexity
17.6.2 Measuring Transition Opportunities
Transition rate captures how often teams push pace:
$$\text{Transition Rate} = \frac{\text{Transition Possessions}}{\text{Total Possessions}}$$
Transition Efficiency:
$$\text{Trans ORtg} = \frac{\text{Transition Points}}{\text{Transition Possessions}} \times 100$$
Teams typically average 110-115 offensive rating in transition versus 105-110 in half-court sets.
17.6.3 Transition Opportunity Generation
Teams generate transition through:
Defensive Rebounds:
$$\text{DREB Trans Rate} = \frac{\text{Trans Poss from DREB}}{\text{DREB}}$$
Steals and Turnovers:
$$\text{Turnover Trans Rate} = \frac{\text{Trans Poss from Steals}}{\text{Opponent TOV}}$$
Made Baskets:
$$\text{Made Basket Trans Rate} = \frac{\text{Trans Poss after Opponent Make}}{\text{Opponent FGM}}$$
17.6.4 Early Offense Value
Between pure transition and set half-court lies "early offense"--attacking before the defense fully sets:
$$\text{Early Offense Window} = 8\text{-}14 \text{ seconds remaining on shot clock}$$
Early offense efficiency typically falls between transition and half-court.
17.7 Assist Networks and Ball Distribution
Assist networks reveal the structural patterns of offensive collaboration. Network analysis techniques from social network theory provide powerful tools for understanding team offense:
Key Network Metrics:
Degree Centrality: Number of connections (passing partners)
$$C_D(v) = \frac{\text{deg}(v)}{n-1}$$
Betweenness Centrality: Frequency on shortest paths between other nodes
$$C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}$$
Clustering Coefficient: Tendency for connected players to connect
$$C_C(v) = \frac{2 \times \text{triangles}(v)}{\text{deg}(v)(\text{deg}(v)-1)}$$
Different offensive systems produce distinct network topologies:
Hub-and-Spoke (Point Guard Dominant):
- High centrality for primary ball handler
- Few triangles
- Efficient but predictable
Distributed (Motion Offense):
- Relatively even centrality
- Many interconnections
- Harder to defend but requires high IQ
Hybrid (Star with Movement):
- One or two primary hubs
- Secondary connections between wings
- Balances predictability and efficiency
Not all assists are equal in creation difficulty:
$$\text{Assist Quality} = \text{Shot Difficulty}_{assisted} - \text{Shot Difficulty}_{avg}$$
Playmakers who consistently generate easy shots for teammates provide more value than those whose assists come on difficult shots.
Entropy measures the evenness of ball distribution:
$$H = -\sum_{i=1}^{n} p_i \log(p_i)$$
Where $p_i$ is player $i$'s share of team touches. Higher entropy indicates more distributed offense.
17.8 Shot Creation vs. Shot Conversion
Offensive production results from two distinct skills: creating high-quality shot opportunities, and converting the opportunities created. Teams can excel at one while struggling with the other, creating distinct offensive profiles and optimization opportunities.
Expected points quantifies shot quality independent of conversion:
$$xPTS = \sum_{i} P(\text{Make}|\text{Shot}_i) \times \text{Value}_i$$
Where $P(\text{Make}|\text{Shot}_i)$ is based on shot location, defender distance, shot type, and other factors.
Shot Creation Value:
$$\text{SCV} = xPTS_{team} - xPTS_{league\_avg}$$
Shot Conversion Value:
$$\text{SConV} = \text{Actual PTS} - xPTS_{team}$$
Shooting percentages exhibit significant game-to-game variance. Decomposing results into skill and variance:
$$\text{Actual Results} = \text{True Skill} + \text{Variance}$$
For three-point shooting with true percentage $p$:
$$\text{Variance}(\text{3P\%}) = \frac{p(1-p)}{n}$$
Over small samples, shooting results mix signal and noise:
$$\text{Regressed 3P\%} = \frac{n \cdot \text{Actual} + k \cdot \text{Prior}}{n + k}$$
Where $k$ represents the regression factor (approximately 200-300 attempts for three-pointers).
Understanding a team's shot diet reveals strategic choices: modern offenses maximize restricted-area and three-point attempts while minimizing mid-range shots.
17.9 Integrating Offensive Efficiency Metrics
Dean Oliver's Four Factors framework provides a comprehensive view of offensive efficiency:
Effective Field Goal Percentage (eFG%):
$$eFG\% = \frac{FGM + 0.5 \times 3PM}{FGA}$$
Turnover Rate (TOV%):
$$TOV\% = \frac{TOV}{FGA + 0.44 \times FTA + TOV}$$
Offensive Rebounding Rate (OREB%):
$$OREB\% = \frac{OREB}{OREB + Opp\_DREB}$$
Free Throw Rate (FTr):
$$FTr = \frac{FTM}{FGA}$$
Research suggests these factors explain approximately 90% of variance in offensive efficiency, with approximate weights:
- eFG%: 40%
- TOV%: 25%
- OREB%: 20%
- FTr: 15%
Combining multiple metrics into a single score:
$$\text{Composite ORtg} = \sum_{i} w_i \times \frac{X_i - \mu_i}{\sigma_i}$$
Where each metric is z-score normalized and weighted by importance. Teams can also be classified by offensive style.
17.10 Case Studies in Offensive Excellence
The Golden State Warriors
The Warriors dynasty redefined offensive basketball:
Key Metrics:
- Offensive Rating: 114.5 (2015-16), 115.9 (2016-17)
- eFG%: 56.3% (2016-17, historical high)
- Pace: 99.3 possessions/game
Offensive Identity:
- Elite three-point shooting (highest volume and percentage)
- Motion offense generating open looks
- Transition excellence (1.18 PPP)
- Stephen Curry/Klay Thompson gravity creating space
Network Characteristics:
- Highly distributed (entropy: 0.87)
- Multiple playmakers (Curry, Green, Durant)
- High assist rate (67% of baskets assisted)
The Brooklyn Nets
The Nets represented star-driven offense at its peak:
Key Metrics:
- Offensive Rating: 118.6 (historically elite)
- Isolation PPP: 1.05 (far above league average)
- eFG%: 55.4%
Offensive Identity:
- Three elite scorers (Durant, Harden, Irving)
- Spacing around isolation plays
- High-efficiency mid-range from Durant
- Pick-and-roll with Harden as primary initiator
Network Characteristics:
- Hub-based (centralization: 0.42)
- Durant and Harden combining for 58% of assists
- Lower total assists but higher quality
The San Antonio Spurs
The Spurs Finals performance showcased motion offense perfection:
Key Metrics:
- Offensive Rating: 110.5 (regular season), 116.8 (Finals)
- Assist Rate: 65%
- Ball Movement: 318 passes per game (highest tracked)
Offensive Identity:
- Player movement and cutting
- Side-to-side ball movement
- No dominant usage player
- Attack-closeouts philosophy
Network Characteristics:
- Extremely distributed (entropy: 0.92)
- Five players with 3+ assists per game
- League-leading hockey assists
17.11 Practical Applications
Offensive efficiency analysis informs defensive game plans. Understanding offensive profiles guides personnel decisions. Real-time efficiency analysis supports coaching.
Summary
Team offensive efficiency analysis provides a comprehensive framework for understanding and optimizing scoring production. From the foundational concept of offensive rating to advanced play type breakdowns and network analysis, modern analytics offers unprecedented insight into offensive performance. Key principles include:
- Pace-adjusted metrics enable meaningful comparisons
- Multiple factors contribute to efficiency, with shooting being most important
- Play type analysis reveals strategic choices and optimization opportunities
- Spacing and ball movement create quality shot opportunities
- Network analysis illuminates collaborative offensive patterns
- Shot creation and conversion represent distinct skills
The integration of these analytical approaches enables teams to maximize offensive production through personnel, strategy, and in-game decision-making.
Further Reading
For deeper exploration of offensive efficiency analysis, consider:
17.4.5 Python Implementation
import numpy as np
from scipy.spatial import ConvexHull
from typing import List, Tuple
class SpacingAnalyzer:
"""Analyze offensive spacing and floor balance."""
# Court dimensions (feet)
COURT_LENGTH = 94
COURT_WIDTH = 50
PAINT_WIDTH = 16
PAINT_LENGTH = 19
THREE_POINT_DISTANCE = 23.75 # NBA (corner is 22)
def __init__(self, player_positions: List[Tuple[float, float]]):
"""
Initialize with player positions.
Parameters:
-----------
player_positions : list of (x, y) tuples
Five offensive player positions in feet from baseline
"""
self.positions = np.array(player_positions)
def calculate_average_spacing(self) -> float:
"""Calculate mean distance between all player pairs."""
n = len(self.positions)
total_distance = 0
pair_count = 0
for i in range(n):
for j in range(i + 1, n):
dist = np.linalg.norm(self.positions[i] - self.positions[j])
total_distance += dist
pair_count += 1
return total_distance / pair_count if pair_count > 0 else 0
def calculate_hull_area(self) -> float:
"""Calculate convex hull area of player positions."""
if len(self.positions) < 3:
return 0.0
try:
hull = ConvexHull(self.positions)
return hull.volume # In 2D, volume gives area
except Exception:
return 0.0
def count_players_in_paint(self) -> int:
"""Count players in the paint area."""
count = 0
        # Paint extends PAINT_LENGTH feet out from the far baseline
        paint_x_min = self.COURT_LENGTH - self.PAINT_LENGTH
paint_y_min = (self.COURT_WIDTH - self.PAINT_WIDTH) / 2
paint_y_max = (self.COURT_WIDTH + self.PAINT_WIDTH) / 2
for pos in self.positions:
if (pos[0] >= paint_x_min and
paint_y_min <= pos[1] <= paint_y_max):
count += 1
return count
def count_three_point_shooters(self) -> int:
"""Count players positioned beyond the three-point line."""
count = 0
basket_pos = np.array([self.COURT_LENGTH - 5.25, self.COURT_WIDTH / 2])
for pos in self.positions:
dist_to_basket = np.linalg.norm(pos - basket_pos)
# Account for corner three being closer
if pos[1] < 3 or pos[1] > self.COURT_WIDTH - 3:
threshold = 22
else:
threshold = self.THREE_POINT_DISTANCE
if dist_to_basket >= threshold:
count += 1
return count
def calculate_floor_balance(self) -> Dict:
"""
Calculate floor balance metrics.
Returns dictionary with:
- horizontal_balance: Even distribution left to right
- vertical_balance: Distribution from baseline to midcourt
- quadrant_distribution: Players in each court quadrant
"""
# Split court into quadrants
mid_x = self.COURT_LENGTH / 2 + self.COURT_LENGTH / 4
mid_y = self.COURT_WIDTH / 2
quadrants = {'Q1': 0, 'Q2': 0, 'Q3': 0, 'Q4': 0}
for pos in self.positions:
if pos[0] >= mid_x:
if pos[1] >= mid_y:
quadrants['Q1'] += 1 # Far right
else:
quadrants['Q4'] += 1 # Far left
else:
if pos[1] >= mid_y:
quadrants['Q2'] += 1 # Near right
else:
quadrants['Q3'] += 1 # Near left
# Calculate balance scores
left_right = abs(sum([quadrants['Q1'], quadrants['Q2']]) -
sum([quadrants['Q3'], quadrants['Q4']]))
horizontal_balance = 1 - (left_right / 5)
near_far = abs(sum([quadrants['Q1'], quadrants['Q4']]) -
sum([quadrants['Q2'], quadrants['Q3']]))
vertical_balance = 1 - (near_far / 5)
return {
'horizontal_balance': round(horizontal_balance, 3),
'vertical_balance': round(vertical_balance, 3),
'quadrant_distribution': quadrants
}
def comprehensive_spacing_report(self) -> Dict:
"""Generate comprehensive spacing analysis."""
return {
'average_spacing_feet': round(self.calculate_average_spacing(), 1),
'hull_area_sqft': round(self.calculate_hull_area(), 1),
'players_in_paint': self.count_players_in_paint(),
'three_point_shooters': self.count_three_point_shooters(),
'floor_balance': self.calculate_floor_balance()
}
17.5.5 Python Implementation
class PassingAnalyzer:
"""Analyze team ball movement and passing patterns."""
def __init__(self, passing_data: pd.DataFrame):
"""
Initialize with passing tracking data.
Parameters:
-----------
passing_data : DataFrame with columns:
passer_id, receiver_id, pass_type, distance,
lead_to_shot, lead_to_make, touch_time
"""
self.data = passing_data
def calculate_passes_per_possession(self, possessions: int) -> float:
"""Calculate average passes per team possession."""
total_passes = len(self.data)
return total_passes / possessions if possessions > 0 else 0
def calculate_average_touch_time(self) -> float:
"""Calculate mean touch time in seconds."""
return self.data['touch_time'].mean()
def calculate_potential_assist_rate(self) -> float:
"""Calculate proportion of passes leading to shots."""
passes_to_shots = self.data['lead_to_shot'].sum()
total_passes = len(self.data)
return passes_to_shots / total_passes if total_passes > 0 else 0
def calculate_assist_conversion(self) -> float:
"""Calculate conversion rate of potential assists to actual assists."""
potential_assists = self.data['lead_to_shot'].sum()
actual_assists = self.data['lead_to_make'].sum()
return actual_assists / potential_assists if potential_assists > 0 else 0
def get_player_passing_profiles(self) -> pd.DataFrame:
"""Aggregate passing metrics by player."""
profiles = self.data.groupby('passer_id').agg({
'pass_type': 'count',
'distance': 'mean',
'lead_to_shot': 'sum',
'lead_to_make': 'sum',
'touch_time': 'mean'
}).rename(columns={
'pass_type': 'total_passes',
'distance': 'avg_pass_distance',
'lead_to_shot': 'potential_assists',
'lead_to_make': 'assists',
'touch_time': 'avg_touch_time'
})
profiles['potential_ast_rate'] = (
profiles['potential_assists'] / profiles['total_passes']
)
profiles['ast_conversion'] = (
profiles['assists'] / profiles['potential_assists']
).fillna(0)
return profiles.round(3)
def build_passing_network(self) -> Dict:
"""
Build player passing network.
Returns:
--------
dict with 'nodes' (player metrics) and 'edges' (passer-receiver pairs)
"""
# Count passes between each player pair
edge_counts = self.data.groupby(['passer_id', 'receiver_id']).agg({
'pass_type': 'count',
'lead_to_make': 'sum'
}).reset_index()
edge_counts.columns = ['passer', 'receiver', 'passes', 'assists']
# Node metrics (total passes made and received)
passes_made = self.data.groupby('passer_id').size()
passes_received = self.data.groupby('receiver_id').size()
nodes = pd.DataFrame({
'passes_made': passes_made,
'passes_received': passes_received
}).fillna(0)
return {
'nodes': nodes.to_dict(),
'edges': edge_counts.to_dict('records')
}
def calculate_ball_movement_score(self, possessions: int,
weights: Dict = None) -> float:
"""
Calculate comprehensive ball movement efficiency score.
Parameters:
-----------
possessions : int - Total team possessions
weights : dict - Optional custom weights for components
Returns:
--------
float - Ball movement efficiency score (0-100 scale)
"""
if weights is None:
weights = {'passes': 0.25, 'touch_time': 0.25,
'potential_ast': 0.25, 'conversion': 0.25}
# Normalize each component to 0-1 scale
ppp = self.calculate_passes_per_possession(possessions)
ppp_norm = min(ppp / 5, 1) # Assume 5 passes/poss is excellent
att = self.calculate_average_touch_time()
att_norm = max(0, 1 - att / 4) # Lower touch time is better
par = self.calculate_potential_assist_rate()
par_norm = min(par / 0.3, 1) # 30% potential assist rate is excellent
acr = self.calculate_assist_conversion()
acr_norm = min(acr / 0.6, 1) # 60% conversion is excellent
score = (weights['passes'] * ppp_norm +
weights['touch_time'] * att_norm +
weights['potential_ast'] * par_norm +
weights['conversion'] * acr_norm)
return round(score * 100, 1)
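To make the weighting concrete, here is a minimal hand computation of the ball movement score using hypothetical team numbers (3.2 passes per possession, 2.4 seconds average touch time, a 27% potential assist rate, and a 55% assist conversion rate):

```python
# Hypothetical team inputs (for illustration only)
ppp, att, par, acr = 3.2, 2.4, 0.27, 0.55

# Normalize each component exactly as calculate_ball_movement_score does
ppp_norm = min(ppp / 5, 1)       # 5 passes/possession treated as excellent
att_norm = max(0, 1 - att / 4)   # shorter touch time scores higher
par_norm = min(par / 0.3, 1)     # 30% potential assist rate treated as excellent
acr_norm = min(acr / 0.6, 1)     # 60% conversion treated as excellent

# Equal default weights, scaled to 0-100
score = 100 * (0.25 * ppp_norm + 0.25 * att_norm +
               0.25 * par_norm + 0.25 * acr_norm)
print(round(score, 1))  # → 71.4
```

A team can therefore post a strong composite score while being merely average on one component, which is why inspecting the four normalized values individually is usually more informative than the headline number.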
17.6 Half-Court vs. Transition Offense Efficiency
17.6.1 Fundamentally Different Contexts
17.6.2 Measuring Transition Opportunities
17.6.3 Transition Opportunity Generation
17.6.4 Early Offense Value
| Offense Type | Shot Clock Window | Typical ORtg |
|---|---|---|
| Transition | 18-24 seconds | 112-118 |
| Early Offense | 14-18 seconds | 108-112 |
| Half-Court | 0-14 seconds | 104-108 |
class TransitionAnalyzer:
"""Analyze transition vs half-court offensive efficiency."""
    TRANSITION_CUTOFF = 18     # Shot clock remaining of 18+ seconds marks transition
    EARLY_OFFENSE_CUTOFF = 14  # Early offense window: 14-18 seconds remaining
def __init__(self, possession_data: pd.DataFrame):
"""
Initialize with possession-level data.
Parameters:
-----------
possession_data : DataFrame with columns:
shot_clock_start, points, possession_type,
transition_source, outcome
"""
self.data = possession_data
def classify_possessions(self) -> pd.DataFrame:
"""Classify possessions by tempo category."""
df = self.data.copy()
def classify(row):
if row['shot_clock_start'] >= self.TRANSITION_CUTOFF:
return 'transition'
elif row['shot_clock_start'] >= self.EARLY_OFFENSE_CUTOFF:
return 'early_offense'
else:
return 'half_court'
df['tempo_category'] = df.apply(classify, axis=1)
return df
def calculate_efficiency_by_tempo(self) -> pd.DataFrame:
"""Calculate offensive rating by tempo category."""
classified = self.classify_possessions()
results = classified.groupby('tempo_category').agg({
'points': 'sum',
'possession_type': 'count'
}).rename(columns={'possession_type': 'possessions'})
results['ortg'] = (results['points'] / results['possessions'] * 100).round(1)
results['frequency'] = (results['possessions'] /
results['possessions'].sum()).round(3)
return results
def transition_generation_analysis(self) -> Dict:
"""Analyze sources of transition opportunities."""
trans_poss = self.data[self.data['shot_clock_start'] >= self.TRANSITION_CUTOFF]
if len(trans_poss) == 0:
return {}
source_breakdown = trans_poss['transition_source'].value_counts(normalize=True)
return {
'total_transition_possessions': len(trans_poss),
'transition_rate': round(len(trans_poss) / len(self.data), 3),
'source_breakdown': source_breakdown.to_dict(),
'transition_ortg': round(trans_poss['points'].sum() /
len(trans_poss) * 100, 1)
}
def shot_clock_efficiency_curve(self, bins: int = 8) -> pd.DataFrame:
"""Calculate efficiency as function of shot clock."""
df = self.data.copy()
df['shot_clock_bin'] = pd.cut(df['shot_clock_start'], bins=bins)
results = df.groupby('shot_clock_bin').agg({
'points': ['sum', 'count']
})
results.columns = ['points', 'possessions']
results['ortg'] = (results['points'] / results['possessions'] * 100).round(1)
return results
def optimal_pace_analysis(self) -> Dict:
"""
Analyze optimal pace based on efficiency differentials.
Returns:
--------
dict with recommendations based on team's transition vs half-court gap
"""
efficiency = self.calculate_efficiency_by_tempo()
trans_ortg = efficiency.loc['transition', 'ortg'] if 'transition' in efficiency.index else 110
hc_ortg = efficiency.loc['half_court', 'ortg'] if 'half_court' in efficiency.index else 105
gap = trans_ortg - hc_ortg
if gap > 10:
recommendation = "Strongly favor pushing pace"
elif gap > 5:
recommendation = "Moderate advantage to transition"
elif gap > 0:
recommendation = "Slight pace advantage, situational"
else:
recommendation = "Half-court offense competitive; pace neutral"
return {
'transition_ortg': trans_ortg,
'half_court_ortg': hc_ortg,
'efficiency_gap': round(gap, 1),
'recommendation': recommendation
}
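Using the shot-clock windows from the table above, tempo classification reduces to two threshold checks. A standalone sketch (the shot-clock readings are hypothetical):

```python
TRANSITION_CUTOFF = 18     # per the table: transition begins at 18+ seconds remaining
EARLY_OFFENSE_CUTOFF = 14  # early offense: 14-18 seconds remaining

def tempo_category(shot_clock_start: float) -> str:
    """Bucket a possession by the shot clock reading when the offense begins."""
    if shot_clock_start >= TRANSITION_CUTOFF:
        return 'transition'
    elif shot_clock_start >= EARLY_OFFENSE_CUTOFF:
        return 'early_offense'
    return 'half_court'

# Hypothetical shot-clock readings for five possessions
readings = [22, 19, 16, 11, 6]
print([tempo_category(s) for s in readings])
# → ['transition', 'transition', 'early_offense', 'half_court', 'half_court']
```

The same thresholds drive `classify_possessions` above; the function form simply makes the boundary behavior (inclusive lower bounds) explicit.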
17.7 Assist Networks and Ball Distribution
17.7.1 Network Analysis Framework
17.7.2 Team Network Topology
17.7.3 Assist Opportunity Differential
17.7.4 Ball Distribution Entropy
import networkx as nx
from collections import defaultdict
class AssistNetworkAnalyzer:
"""Analyze team assist networks and ball distribution."""
def __init__(self, assist_data: pd.DataFrame):
"""
Initialize with assist tracking data.
Parameters:
-----------
assist_data : DataFrame with columns:
passer_id, scorer_id, play_type, shot_value
"""
self.data = assist_data
self.graph = self._build_network()
    def _build_network(self) -> nx.DiGraph:
        """Build directed graph from assist data."""
        G = nx.DiGraph()
        # Add edges weighted by assist frequency; also store an inverse-frequency
        # 'distance' for path-based metrics (frequent pairs = short distance)
        edge_weights = self.data.groupby(['passer_id', 'scorer_id']).size()
        for (passer, scorer), weight in edge_weights.items():
            G.add_edge(passer, scorer, weight=weight, distance=1 / weight)
        return G
def calculate_centrality_metrics(self) -> pd.DataFrame:
"""Calculate network centrality for each player."""
# Degree centrality (normalized)
in_degree = nx.in_degree_centrality(self.graph)
out_degree = nx.out_degree_centrality(self.graph)
        # Betweenness centrality: networkx treats edge weight as a distance,
        # so use the inverse-frequency 'distance' attribute rather than raw counts
        betweenness = nx.betweenness_centrality(self.graph, weight='distance')
# PageRank (importance in network)
pagerank = nx.pagerank(self.graph, weight='weight')
players = list(self.graph.nodes())
return pd.DataFrame({
'player_id': players,
'assists_given_centrality': [out_degree.get(p, 0) for p in players],
'assists_received_centrality': [in_degree.get(p, 0) for p in players],
'betweenness': [betweenness.get(p, 0) for p in players],
'pagerank': [pagerank.get(p, 0) for p in players]
}).round(4)
def identify_playmaker_hierarchy(self) -> List:
"""
Rank players by playmaking importance.
Returns:
--------
list of (player_id, playmaking_score) tuples, sorted descending
"""
assist_counts = self.data.groupby('passer_id').size()
points_created = self.data.groupby('passer_id')['shot_value'].sum()
# Combined playmaking score
playmaking_scores = {}
for player in assist_counts.index:
assists = assist_counts[player]
points = points_created[player]
playmaking_scores[player] = assists * 0.4 + points * 0.3
# Add centrality bonus
centrality = self.calculate_centrality_metrics()
for _, row in centrality.iterrows():
player = row['player_id']
if player in playmaking_scores:
playmaking_scores[player] += row['betweenness'] * 20
return sorted(playmaking_scores.items(), key=lambda x: x[1], reverse=True)
def calculate_network_entropy(self) -> float:
"""
Calculate entropy of assist distribution.
Higher values indicate more distributed playmaking.
"""
assist_counts = self.data.groupby('passer_id').size()
total_assists = assist_counts.sum()
if total_assists == 0:
return 0.0
probs = assist_counts / total_assists
entropy = -np.sum(probs * np.log(probs + 1e-10))
# Normalize by maximum possible entropy
max_entropy = np.log(len(assist_counts))
normalized_entropy = entropy / max_entropy if max_entropy > 0 else 0
return round(normalized_entropy, 3)
def detect_common_combinations(self, min_frequency: int = 5) -> pd.DataFrame:
"""
Find most frequent assist combinations.
Parameters:
-----------
min_frequency : int - Minimum assists to include combination
Returns:
--------
DataFrame of frequent passer-scorer combinations
"""
combinations = self.data.groupby(['passer_id', 'scorer_id']).agg({
'play_type': 'count',
'shot_value': 'sum'
}).rename(columns={
'play_type': 'frequency',
'shot_value': 'total_points'
})
combinations = combinations[combinations['frequency'] >= min_frequency]
combinations['avg_points'] = (combinations['total_points'] /
combinations['frequency']).round(2)
return combinations.sort_values('frequency', ascending=False)
def analyze_network_topology(self) -> Dict:
"""
Classify team's offensive network structure.
Returns:
--------
dict with topology classification and metrics
"""
centrality = self.calculate_centrality_metrics()
# Measure centralization
out_centrality = centrality['assists_given_centrality']
max_centrality = out_centrality.max()
mean_centrality = out_centrality.mean()
centralization = max_centrality - mean_centrality
# Network density
density = nx.density(self.graph)
# Classify topology
if centralization > 0.3 and density < 0.4:
topology = "Hub-and-Spoke"
description = "Primary ball handler dominant offense"
elif centralization < 0.15 and density > 0.5:
topology = "Distributed"
description = "Motion-based, egalitarian offense"
else:
topology = "Hybrid"
description = "Balanced between primary creators and movement"
return {
'topology': topology,
'description': description,
'centralization_score': round(centralization, 3),
'network_density': round(density, 3),
'entropy': self.calculate_network_entropy()
}
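To see what the normalized entropy captures, compare a hub-and-spoke assist distribution against an egalitarian one. The assist counts below are made up, and the helper mirrors the logic of `calculate_network_entropy` in plain Python:

```python
import math

def normalized_assist_entropy(assist_counts):
    """Shannon entropy of the assist share distribution, scaled to [0, 1]."""
    total = sum(assist_counts)
    probs = [c / total for c in assist_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(probs))  # maximum when all shares are equal
    return entropy / max_entropy if max_entropy > 0 else 0.0

hub = [120, 15, 10, 8, 7]        # one dominant playmaker
balanced = [32, 31, 33, 32, 32]  # committee approach

print(round(normalized_assist_entropy(hub), 3))       # well below 1
print(round(normalized_assist_entropy(balanced), 3))  # very close to 1
```

The balanced roster lands near the theoretical maximum while the hub offense sits far below it, which is exactly the separation the metric is designed to expose.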
17.8 Shot Creation vs. Shot Conversion
17.8.1 Separating Creation and Finishing
17.8.2 Expected Points Framework
17.8.3 Luck and Variance Adjustment
17.8.4 Team Shot Profile Analysis
| Zone | League FG% | Value (pts/shot) | Optimal Share |
|---|---|---|---|
| Restricted Area | 63% | 1.26 | High |
| Paint (non-RA) | 40% | 0.80 | Low |
| Mid-Range | 42% | 0.84 | Low |
| Corner 3 | 39% | 1.17 | Moderate |
| Above Break 3 | 36% | 1.08 | Moderate-High |
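The value column is simply make probability times shot value, so the table's numbers can be reproduced directly:

```python
# (FG%, shot value) per zone, matching the table above
zones = {
    'restricted_area': (0.63, 2),
    'paint_non_ra':    (0.40, 2),
    'mid_range':       (0.42, 2),
    'corner_3':        (0.39, 3),
    'above_break_3':   (0.36, 3),
}

# Expected points per attempt = make probability x shot value
values = {zone: round(p * v, 2) for zone, (p, v) in zones.items()}
print(values)
# → {'restricted_area': 1.26, 'paint_non_ra': 0.8, 'mid_range': 0.84,
#    'corner_3': 1.17, 'above_break_3': 1.08}
```

This is the core arithmetic behind the three-point revolution: a 39% corner three is worth more per attempt than a 42% mid-range jumper despite the lower make rate.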
class ShotCreationAnalyzer:
"""Analyze shot creation vs shot conversion."""
# League average make probability by shot zone
ZONE_PROBABILITIES = {
'restricted_area': 0.63,
'paint_non_ra': 0.40,
'mid_range': 0.42,
'corner_3': 0.39,
'above_break_3': 0.36,
'other': 0.35
}
ZONE_VALUES = {
'restricted_area': 2,
'paint_non_ra': 2,
'mid_range': 2,
'corner_3': 3,
'above_break_3': 3,
'other': 2
}
def __init__(self, shot_data: pd.DataFrame):
"""
Initialize with shot-level data.
Parameters:
-----------
shot_data : DataFrame with columns:
shot_zone, made, defender_distance, shot_type, player_id
"""
self.data = shot_data
def calculate_expected_points(self, row: pd.Series) -> float:
"""Calculate expected points for a single shot."""
base_prob = self.ZONE_PROBABILITIES.get(row['shot_zone'], 0.40)
value = self.ZONE_VALUES.get(row['shot_zone'], 2)
# Adjust for defender distance
defender_adj = 0
if 'defender_distance' in row:
if row['defender_distance'] > 6: # Wide open
defender_adj = 0.08
elif row['defender_distance'] > 4: # Open
defender_adj = 0.04
elif row['defender_distance'] < 2: # Tight
defender_adj = -0.06
adjusted_prob = base_prob + defender_adj
return adjusted_prob * value
def team_shot_creation_analysis(self) -> Dict:
"""
Analyze team's shot creation quality.
Returns:
--------
dict with expected points metrics and creation value
"""
self.data['xpts'] = self.data.apply(self.calculate_expected_points, axis=1)
self.data['actual_pts'] = self.data.apply(
lambda x: self.ZONE_VALUES.get(x['shot_zone'], 2) if x['made'] else 0,
axis=1
)
total_xpts = self.data['xpts'].sum()
total_actual = self.data['actual_pts'].sum()
total_shots = len(self.data)
        # League baseline: unweighted mean expected points across zones
        # (a rough approximation; a possession-weighted mean would be stricter)
        league_baseline = sum(
            self.ZONE_PROBABILITIES[z] * self.ZONE_VALUES[z]
            for z in self.ZONE_PROBABILITIES
        ) / len(self.ZONE_PROBABILITIES) * total_shots
return {
'expected_points': round(total_xpts, 1),
'actual_points': round(total_actual, 1),
'shot_creation_value': round(total_xpts - league_baseline, 1),
'shot_conversion_value': round(total_actual - total_xpts, 1),
'xpts_per_shot': round(total_xpts / total_shots, 3),
'actual_pts_per_shot': round(total_actual / total_shots, 3)
}
def shot_profile_analysis(self) -> pd.DataFrame:
"""Analyze team shot distribution by zone."""
zone_stats = self.data.groupby('shot_zone').agg({
'made': ['sum', 'count'],
'xpts': 'sum'
})
zone_stats.columns = ['makes', 'attempts', 'xpts']
zone_stats['fg_pct'] = (zone_stats['makes'] / zone_stats['attempts']).round(3)
zone_stats['frequency'] = (zone_stats['attempts'] /
zone_stats['attempts'].sum()).round(3)
zone_stats['xfg_pct'] = zone_stats.apply(
lambda x: self.ZONE_PROBABILITIES.get(x.name, 0.40), axis=1
)
zone_stats['actual_vs_expected'] = (
zone_stats['fg_pct'] - zone_stats['xfg_pct']
).round(3)
return zone_stats
def player_shot_creation_breakdown(self) -> pd.DataFrame:
"""Breakdown shot creation and conversion by player."""
self.data['xpts'] = self.data.apply(self.calculate_expected_points, axis=1)
self.data['actual_pts'] = self.data.apply(
lambda x: self.ZONE_VALUES.get(x['shot_zone'], 2) if x['made'] else 0,
axis=1
)
player_stats = self.data.groupby('player_id').agg({
'xpts': 'sum',
'actual_pts': 'sum',
'made': 'count'
}).rename(columns={'made': 'shots'})
player_stats['creation_value'] = (
player_stats['xpts'] -
player_stats['shots'] * 1.0 # Assume league avg ~1.0 xpts/shot
).round(1)
player_stats['conversion_value'] = (
player_stats['actual_pts'] - player_stats['xpts']
).round(1)
return player_stats.sort_values('creation_value', ascending=False)
def regression_adjusted_shooting(self, prior_rate: float = None,
regression_factor: int = 250) -> pd.DataFrame:
"""
Calculate regression-adjusted shooting percentages.
Parameters:
-----------
prior_rate : float - Prior shooting percentage (league average if None)
regression_factor : int - Sample size for 50% regression
Returns:
--------
DataFrame with raw and regressed shooting by zone
"""
if prior_rate is None:
# Use zone-specific priors
use_zone_priors = True
else:
use_zone_priors = False
zone_stats = self.data.groupby('shot_zone').agg({
'made': ['sum', 'count']
})
zone_stats.columns = ['makes', 'attempts']
zone_stats['raw_pct'] = zone_stats['makes'] / zone_stats['attempts']
def regress(row):
if use_zone_priors:
prior = self.ZONE_PROBABILITIES.get(row.name, 0.40)
else:
prior = prior_rate
n = row['attempts']
raw = row['raw_pct']
regressed = (n * raw + regression_factor * prior) / (n + regression_factor)
return regressed
zone_stats['regressed_pct'] = zone_stats.apply(regress, axis=1).round(3)
zone_stats['raw_pct'] = zone_stats['raw_pct'].round(3)
return zone_stats
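The regression formula itself is worth seeing in isolation. With a 250-shot regression factor, a hot start barely moves the estimate off the prior, while the same rate over a larger sample earns real trust (the shot counts below are hypothetical):

```python
def regress_to_prior(makes: int, attempts: int,
                     prior: float, regression_factor: int = 250) -> float:
    """Blend an observed rate with a prior, weighted by sample size."""
    raw = makes / attempts
    return (attempts * raw + regression_factor * prior) / (attempts + regression_factor)

# Hot start: 12-of-20 (60%) on corner threes vs. a 39% league prior
print(round(regress_to_prior(12, 20, 0.39), 3))    # → 0.406

# Sustained: 120-of-200 (60%) pulls the estimate much further
print(round(regress_to_prior(120, 200, 0.39), 3))  # → 0.483
```

Note that even 200 attempts at 60% only moves the estimate to about 48%, a useful reminder of how skeptical a 250-shot prior is of extreme shooting stretches.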
17.9 Integrating Offensive Efficiency Metrics
17.9.1 The Four Factors Framework
17.9.2 Composite Offensive Efficiency Score
17.9.3 Offensive Archetype Classification
| Archetype | Primary Characteristics |
|---|---|
| Pace-and-Space | High 3PA rate, fast pace, transition focus |
| Half-Court Grinders | Methodical, post play, mid-range |
| Motion Offense | High assist rate, ball movement, cuts |
| Star-Driven ISO | High isolation frequency, elite creator |
| Inside-Out | Paint attacks, kick-outs, balance |
17.9.4 Building a Complete Team Offensive Profile
class TeamOffensiveProfile:
"""Comprehensive team offensive analysis combining all metrics."""
def __init__(self, team_id: str):
"""Initialize profile for a team."""
self.team_id = team_id
self.metrics = {}
def calculate_four_factors(self, stats: Dict) -> Dict:
"""Calculate Dean Oliver's Four Factors."""
# Effective Field Goal Percentage
efg = (stats['fgm'] + 0.5 * stats['fg3m']) / stats['fga']
# Turnover Rate
possessions = stats['fga'] + 0.44 * stats['fta'] + stats['tov']
tov_rate = stats['tov'] / possessions
# Offensive Rebounding Rate
oreb_rate = stats['oreb'] / (stats['oreb'] + stats['opp_dreb'])
# Free Throw Rate
ft_rate = stats['ftm'] / stats['fga']
return {
'efg_pct': round(efg, 3),
'tov_rate': round(tov_rate, 3),
'oreb_rate': round(oreb_rate, 3),
'ft_rate': round(ft_rate, 3)
}
def calculate_offensive_rating(self, stats: Dict) -> float:
"""Calculate team offensive rating."""
possessions = calculate_possessions(
stats['fga'], stats['fta'], stats['oreb'], stats['tov'],
stats['fgm'], use_advanced=True
)
return round(stats['points'] / possessions * 100, 1)
def classify_offensive_style(self, play_type_data: Dict,
tempo_data: Dict) -> str:
"""
Classify team's offensive archetype.
Parameters:
-----------
play_type_data : dict - Play type frequencies and efficiencies
tempo_data : dict - Transition vs half-court splits
Returns:
--------
str - Offensive archetype classification
"""
# Check for pace-and-space
trans_rate = tempo_data.get('transition_rate', 0.15)
three_rate = play_type_data.get('spot_up', {}).get('frequency', 0.15)
if trans_rate > 0.18 and three_rate > 0.20:
return "Pace-and-Space"
# Check for star-driven ISO
iso_rate = play_type_data.get('isolation', {}).get('frequency', 0.08)
if iso_rate > 0.12:
return "Star-Driven ISO"
# Check for motion offense
cut_rate = play_type_data.get('cut', {}).get('frequency', 0.05)
off_screen_rate = play_type_data.get('off_screen', {}).get('frequency', 0.04)
if cut_rate > 0.08 or off_screen_rate > 0.07:
return "Motion Offense"
# Check for post-oriented
post_rate = play_type_data.get('post_up', {}).get('frequency', 0.05)
if post_rate > 0.08:
return "Half-Court Grinders"
return "Inside-Out"
def generate_comprehensive_report(self,
team_stats: Dict,
play_type_data: Dict,
tempo_data: Dict,
shot_data: pd.DataFrame) -> Dict:
"""
Generate complete offensive profile report.
Returns:
--------
dict - Comprehensive offensive analysis
"""
# Basic efficiency
ortg = self.calculate_offensive_rating(team_stats)
four_factors = self.calculate_four_factors(team_stats)
# Play type analysis
pta = PlayTypeAnalyzer(play_type_data)
play_type_efficiency = pta.get_play_type_efficiency()
versatility = pta.calculate_versatility_index()
        # Tempo analysis: splits arrive pre-computed via tempo_data
# Shot profile
sca = ShotCreationAnalyzer(shot_data)
creation_analysis = sca.team_shot_creation_analysis()
shot_profile = sca.shot_profile_analysis()
        # Style classification
        style = self.classify_offensive_style(play_type_data, tempo_data)
        strengths_weaknesses = pta.identify_strengths_weaknesses()
        return {
            'team_id': self.team_id,
            'offensive_rating': ortg,
            'four_factors': four_factors,
            'offensive_style': style,
            'play_type_versatility': versatility,
            'shot_creation_value': creation_analysis['shot_creation_value'],
            'shot_conversion_value': creation_analysis['shot_conversion_value'],
            'key_strengths': strengths_weaknesses['strengths'][:3],
            'key_weaknesses': strengths_weaknesses['weaknesses'][:3]
        }
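The Four Factors calculation needs nothing beyond a box score, so it can be verified by hand. A sketch using hypothetical season totals (every number below is invented for illustration):

```python
# Hypothetical season totals for one team
stats = {'fgm': 3400, 'fga': 7300, 'fg3m': 1100, 'fta': 1900,
         'ftm': 1500, 'oreb': 850, 'opp_dreb': 2700, 'tov': 1150}

# Effective FG%: made threes count 1.5x
efg = (stats['fgm'] + 0.5 * stats['fg3m']) / stats['fga']

# Turnover rate uses the same possession estimate as calculate_four_factors
possessions = stats['fga'] + 0.44 * stats['fta'] + stats['tov']
tov_rate = stats['tov'] / possessions

# Offensive rebounding rate: share of available offensive boards
oreb_rate = stats['oreb'] / (stats['oreb'] + stats['opp_dreb'])

# Free throw rate: made free throws per field goal attempt
ft_rate = stats['ftm'] / stats['fga']

print({k: round(v, 3) for k, v in
       [('efg_pct', efg), ('tov_rate', tov_rate),
        ('oreb_rate', oreb_rate), ('ft_rate', ft_rate)]})
# → {'efg_pct': 0.541, 'tov_rate': 0.124, 'oreb_rate': 0.239, 'ft_rate': 0.205}
```

These four numbers, weighted roughly in the order shooting, turnovers, rebounding, free throws, summarize most of what separates efficient offenses from inefficient ones.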
17.10 Case Studies in Offensive Excellence
17.10.1 The 2015-17 Golden State Warriors
17.10.2 The 2020-21 Brooklyn Nets
17.10.3 The 2013-14 San Antonio Spurs
17.11 Practical Applications
17.11.1 Scouting and Game Planning
17.11.2 Roster Construction
17.11.3 In-Game Decision Making
Summary
Key Formulas Reference
| Metric | Formula |
|---|---|
| Offensive Rating | $\frac{Points}{Possessions} \times 100$ |
| Possessions | $FGA + 0.44 \times FTA - OREB + TOV$ |
| eFG% | $\frac{FGM + 0.5 \times 3PM}{FGA}$ |
| True Shooting % | $\frac{PTS}{2 \times (FGA + 0.44 \times FTA)}$ |
| TOV Rate | $\frac{TOV}{Possessions}$ |
| OREB% | $\frac{OREB}{OREB + Opp\_DREB}$ |
| FT Rate | $\frac{FTM}{FGA}$ |
| PPP | $\frac{Points}{\text{Play Type Possessions}}$ |
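As a closing check on the reference formulas, here is a sketch computing the core metrics from one hypothetical game box score:

```python
# Hypothetical single-game team box score
pts, fga, fgm, fg3m, fta, oreb, tov = 114, 88, 42, 14, 22, 10, 13

# Possessions = FGA + 0.44*FTA - OREB + TOV
poss = fga + 0.44 * fta - oreb + tov

# Offensive Rating = points per 100 possessions
ortg = pts / poss * 100

# Effective field goal % and true shooting %
efg = (fgm + 0.5 * fg3m) / fga
ts = pts / (2 * (fga + 0.44 * fta))

print(round(poss, 1), round(ortg, 1), round(efg, 3), round(ts, 3))
# → 100.7 113.2 0.557 0.584
```

Note that true shooting exceeds eFG% here because it credits the team's trips to the free throw line, which eFG% ignores by construction.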
Further Reading