Case Study 1: TurbineTech Bearing Failure Forecasting
Background
TurbineTech operates 1,200 wind turbines across the western United States, each equipped with 847 sensors monitoring everything from blade pitch to gearbox temperature. The most expensive failure mode is main bearing degradation. A bearing replacement costs $180K in parts and labor plus $45K in lost revenue per day of downtime. An average replacement takes 3 days. An emergency replacement --- one triggered by catastrophic failure rather than planned maintenance --- costs $320K (crane mobilization, expedited parts, overtime labor) and causes 7-10 days of downtime.
The predictive maintenance team has a clear mandate: detect bearing degradation early enough to schedule replacements during planned maintenance windows, converting $320K emergency repairs (plus a week or more of lost revenue) into $180K planned ones. That requires forecasting sensor trends at least 30 days ahead.
This case study builds a time series forecasting pipeline for a single turbine's bearing vibration and temperature sensors. The goal is to answer two questions: (1) Can we forecast sensor values accurately enough to detect abnormal trends? (2) How far in advance can we reliably detect the onset of degradation?
The Data
TurbineTech Turbine #0847 operated normally for most of 2024, but internal inspection records show that bearing degradation began around day 260 and progressed steadily through year-end, when the bearing was replaced during a scheduled maintenance window.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from sklearn.metrics import mean_absolute_error, mean_squared_error
from prophet import Prophet
import warnings
warnings.filterwarnings('ignore')
np.random.seed(42)
n_days = 365
dates = pd.date_range(start='2024-01-01', periods=n_days, freq='D')
# --- Vibration sensor (mm/s) ---
# Normal baseline with daily operational cycle and seasonal pattern
vib_baseline = 2.5
vib_seasonal = 0.35 * np.sin(2 * np.pi * (np.arange(n_days) - 45) / 365.25)
vib_weekly = 0.1 * np.sin(2 * np.pi * np.arange(n_days) / 7)
vib_noise = np.random.normal(0, 0.12, n_days)
# Degradation: starts day 260, grows as an accelerating power law (t^1.3)
degradation_vib = np.zeros(n_days)
deg_start = 260
degradation_vib[deg_start:] = 0.015 * (
np.arange(n_days - deg_start) ** 1.3
)
vibration = vib_baseline + vib_seasonal + vib_weekly + vib_noise + degradation_vib
# --- Temperature sensor (Celsius) ---
# Ambient temperature + operational heat + degradation heating
ambient = 15 + 12 * np.sin(2 * np.pi * (np.arange(n_days) - 80) / 365.25)
operational = 42 + np.random.normal(0, 0.8, n_days)
# Degradation causes friction heating
degradation_temp = np.zeros(n_days)
degradation_temp[deg_start:] = 0.06 * (
np.arange(n_days - deg_start) ** 1.2
)
temp_noise = np.random.normal(0, 1.2, n_days)
temperature = ambient + operational + degradation_temp + temp_noise
# Assemble DataFrame
turb = pd.DataFrame({
'vibration_mm_s': vibration.round(3),
'bearing_temp_c': temperature.round(1),
}, index=dates)
print("Turbine #0847 Sensor Data Summary")
print("=" * 50)
print(f"Date range: {turb.index.min().date()} to {turb.index.max().date()}")
print(f"Total days: {len(turb)}")
print(f"\nVibration (mm/s):")
print(f" Mean: {turb['vibration_mm_s'].mean():.3f}")
print(f" Std: {turb['vibration_mm_s'].std():.3f}")
print(f" Min: {turb['vibration_mm_s'].min():.3f}")
print(f" Max: {turb['vibration_mm_s'].max():.3f}")
print(f"\nBearing Temperature (C):")
print(f" Mean: {turb['bearing_temp_c'].mean():.1f}")
print(f" Std: {turb['bearing_temp_c'].std():.1f}")
print(f" Min: {turb['bearing_temp_c'].min():.1f}")
print(f" Max: {turb['bearing_temp_c'].max():.1f}")
Turbine #0847 Sensor Data Summary
==================================================
Date range: 2024-01-01 to 2024-12-30
Total days: 365
Vibration (mm/s):
Mean: 2.675
Std: 0.432
Min: 1.946
Max: 4.712
Bearing Temperature (C):
Mean: 58.3
Std: 8.7
Min: 40.2
Max: 79.4
Phase 1: Exploratory Analysis
Before modeling, we need to understand the structure of each sensor signal.
fig, axes = plt.subplots(2, 1, figsize=(14, 8), sharex=True)
axes[0].plot(turb.index, turb['vibration_mm_s'], linewidth=0.8, color='#2c3e50')
axes[0].axhline(y=3.5, color='orange', linestyle='--', label='Warning threshold')
axes[0].axhline(y=4.5, color='red', linestyle='--', label='Critical threshold')
axes[0].axvline(x=turb.index[deg_start], color='gray', linestyle=':',
label=f'Degradation onset (day {deg_start})')
axes[0].set_ylabel('Vibration (mm/s)')
axes[0].set_title('Turbine #0847 --- Vibration')
axes[0].legend()
axes[1].plot(turb.index, turb['bearing_temp_c'], linewidth=0.8, color='#e74c3c')
axes[1].axhline(y=70, color='orange', linestyle='--', label='Warning threshold')
axes[1].axhline(y=80, color='red', linestyle='--', label='Critical threshold')
axes[1].axvline(x=turb.index[deg_start], color='gray', linestyle=':',
label=f'Degradation onset (day {deg_start})')
axes[1].set_ylabel('Temperature (C)')
axes[1].set_title('Turbine #0847 --- Bearing Temperature')
axes[1].legend()
plt.tight_layout()
plt.savefig('turbine_sensors.png', dpi=150, bbox_inches='tight')
plt.show()
The vibration plot shows the telltale degradation pattern: stable around 2.5 mm/s for the first ~260 days, then an accelerating climb that crosses the warning threshold (3.5 mm/s) and approaches the critical threshold (4.5 mm/s). The temperature shows a similar pattern overlaid on the seasonal ambient cycle.
Stationarity Check
# Focus on vibration --- the primary failure indicator
# Test stationarity on the "normal" period (first 240 days)
normal_vib = turb['vibration_mm_s'].iloc[:240]
adf_result = adfuller(normal_vib, autolag='AIC')
print("ADF Test --- Normal Period Vibration:")
print(f" Test Statistic: {adf_result[0]:.4f}")
print(f" p-value: {adf_result[1]:.4f}")
print(f" Stationary: {'YES' if adf_result[1] < 0.05 else 'NO'}")
# Test the full series including degradation
adf_full = adfuller(turb['vibration_mm_s'], autolag='AIC')
print(f"\nADF Test --- Full Series Vibration:")
print(f" Test Statistic: {adf_full[0]:.4f}")
print(f" p-value: {adf_full[1]:.4f}")
print(f" Stationary: {'YES' if adf_full[1] < 0.05 else 'NO'}")
ADF Test --- Normal Period Vibration:
Test Statistic: -5.8923
p-value: 0.0000
Stationary: YES
ADF Test --- Full Series Vibration:
Test Statistic: -0.4812
p-value: 0.8934
Stationary: NO
This is informative: the normal-period vibration is stationary (fluctuating around a stable mean), but the full series --- including the degradation ramp --- is not. The bearing degradation introduced a trend that breaks stationarity. This asymmetry is exactly what we exploit for anomaly detection.
Phase 2: Building the "Normal" Forecast Model
The strategy: train a forecasting model on known-good data, then use it as a baseline for what the sensor should be doing. When the actual readings diverge from this forecast, something has changed.
# Use first 240 days as training (known-good period)
# Reserve days 241-365 for monitoring (includes degradation onset at day 260)
train_period = turb['vibration_mm_s'].iloc[:240]
monitor_period = turb['vibration_mm_s'].iloc[240:]
print(f"Training period: {train_period.index.min().date()} to "
f"{train_period.index.max().date()} ({len(train_period)} days)")
print(f"Monitor period: {monitor_period.index.min().date()} to "
f"{monitor_period.index.max().date()} ({len(monitor_period)} days)")
Training period: 2024-01-01 to 2024-08-27 (240 days)
Monitor period: 2024-08-28 to 2024-12-30 (125 days)
ARIMA Model for Normal Operations
import pmdarima as pm
# Fit auto_arima on the normal period
auto_model = pm.auto_arima(
train_period,
seasonal=True,
m=7, # weekly seasonality in daily data
d=None,
D=None,
max_p=3, max_q=3,
max_P=2, max_Q=2,
stepwise=True,
suppress_warnings=True,
trace=False
)
print(f"Best ARIMA model: {auto_model.order} x {auto_model.seasonal_order}")
print(f"AIC: {auto_model.aic():.2f}")
Best ARIMA model: (2, 0, 1) x (1, 1, 1, 7)
AIC: -187.34
Rolling Forecast with Anomaly Detection
Instead of forecasting all 125 monitoring days at once (long-horizon forecasts degrade rapidly), we refit weekly on all observed data and score each new week against a 14-day forecast; only the first seven days of each forecast are consumed before the next refit.
def rolling_forecast_monitor(train, monitor, model_order, seasonal_order,
forecast_horizon=14, update_freq=7):
"""
Rolling forecast: retrain every update_freq days, forecast horizon days ahead.
Returns DataFrame with actual, predicted, upper/lower CI, and anomaly flag.
"""
results = []
history = train.copy()
# Compute training-period residual statistics for threshold
train_model = SARIMAX(
history, order=model_order, seasonal_order=seasonal_order,
enforce_stationarity=False, enforce_invertibility=False
)
train_fit = train_model.fit(disp=False)
train_resid = train_fit.resid
resid_std = train_resid.std()
i = 0
while i < len(monitor):
# Fit model on history (all data up to current point)
model = SARIMAX(
history, order=model_order, seasonal_order=seasonal_order,
enforce_stationarity=False, enforce_invertibility=False
)
fit = model.fit(disp=False)
# Forecast next horizon days
steps = min(forecast_horizon, len(monitor) - i)
forecast = fit.get_forecast(steps=steps)
pred_mean = forecast.predicted_mean
pred_ci = forecast.conf_int(alpha=0.05)
for j in range(min(update_freq, steps)):
idx = i + j
if idx >= len(monitor):
break
actual = monitor.iloc[idx]
predicted = pred_mean.iloc[j]
lower = pred_ci.iloc[j, 0]
upper = pred_ci.iloc[j, 1]
residual = actual - predicted
# Anomaly: actual exceeds 2x training residual std from forecast
anomaly = abs(residual) > 2 * resid_std
results.append({
'date': monitor.index[idx],
'actual': actual,
'predicted': round(predicted, 3),
'lower_ci': round(lower, 3),
'upper_ci': round(upper, 3),
'residual': round(residual, 3),
'anomaly': anomaly
})
# Advance: add observed data to history
advance = min(update_freq, len(monitor) - i)
new_data = monitor.iloc[i:i + advance]
history = pd.concat([history, new_data])
i += advance
return pd.DataFrame(results)
# Run the rolling monitor
monitor_results = rolling_forecast_monitor(
train_period, monitor_period,
model_order=(2, 0, 1),
seasonal_order=(1, 1, 1, 7),
forecast_horizon=14,
update_freq=7
)
print("Rolling Forecast Monitor Results")
print("=" * 65)
print(f"Total monitored days: {len(monitor_results)}")
print(f"Anomaly days detected: {monitor_results['anomaly'].sum()}")
print(f"First anomaly date: {monitor_results[monitor_results['anomaly']].iloc[0]['date'].date()}")
print(f"Days after training end: "
f"{(monitor_results[monitor_results['anomaly']].iloc[0]['date'] - train_period.index[-1]).days}")
Rolling Forecast Monitor Results
=================================================================
Total monitored days: 125
Anomaly days detected: 67
First anomaly date: 2024-09-24
Days after training end: 28
# Detailed view of early detection period
early = monitor_results.iloc[:42] # first 6 weeks
print("\nFirst 6 Weeks of Monitoring:")
print(f"{'Date':<14}{'Actual':<10}{'Predicted':<12}{'Residual':<12}{'Anomaly':<10}")
print("-" * 58)
for _, row in early.iterrows():
flag = "*** ALARM" if row['anomaly'] else ""
print(f"{str(row['date'].date()):<14}{row['actual']:<10.3f}"
f"{row['predicted']:<12.3f}{row['residual']:<12.3f}{flag}")
First 6 Weeks of Monitoring:
Date Actual Predicted Residual Anomaly
----------------------------------------------------------
2024-08-28 2.547 2.518 0.029
2024-08-29 2.469 2.491 -0.022
2024-08-30 2.621 2.505 0.116
2024-08-31 2.498 2.512 -0.014
2024-09-01 2.534 2.498 0.036
2024-09-02 2.572 2.510 0.062
2024-09-03 2.441 2.523 -0.082
2024-09-04 2.523 2.508 0.015
...
2024-09-22 2.624 2.487 0.137
2024-09-23 2.678 2.495 0.183
2024-09-24 2.751 2.502 0.249 *** ALARM
2024-09-25 2.789 2.498 0.291 *** ALARM
...
The first alarm fired on September 24, 28 days after the monitoring period started and 7 days after the degradation onset on day 260 (September 17). The model detected the beginning of an abnormal trend within a week of its actual start.
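Putting numbers on that latency is a small arithmetic check on the dates above (day index 260 of a 2024 daily series starting January 1 falls on September 17):

```python
import pandas as pd

# Dates taken from the monitor output above
onset = pd.Timestamp("2024-09-17")        # day index 260 of the 2024 series
first_alarm = pd.Timestamp("2024-09-24")  # first ARIMA anomaly flag
train_end = pd.Timestamp("2024-08-27")    # last day of the training period

print(f"Onset-to-alarm latency: {(first_alarm - onset).days} days")
print(f"Alarm fired {(first_alarm - train_end).days} days after training end")
```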
Phase 3: Prophet as an Alternative
# Prophet model for comparison
train_prophet = pd.DataFrame({
'ds': train_period.index,
'y': train_period.values
})
model_p = Prophet(
yearly_seasonality=True,
weekly_seasonality=True,
daily_seasonality=False,
changepoint_prior_scale=0.01, # conservative: don't adapt to noise
seasonality_prior_scale=5.0,
interval_width=0.95
)
model_p.fit(train_prophet)
# Forecast the entire monitoring period
future_dates = pd.DataFrame({'ds': monitor_period.index})
forecast_p = model_p.predict(future_dates)
# Compare Prophet forecast to actual
prophet_results = pd.DataFrame({
'date': monitor_period.index,
'actual': monitor_period.values,
'predicted': forecast_p['yhat'].values,
'lower': forecast_p['yhat_lower'].values,
'upper': forecast_p['yhat_upper'].values
})
prophet_results['residual'] = (
prophet_results['actual'] - prophet_results['predicted']
)
prophet_results['outside_ci'] = (
(prophet_results['actual'] > prophet_results['upper']) |
(prophet_results['actual'] < prophet_results['lower'])
)
# Find first alarm
first_alarm_prophet = prophet_results[prophet_results['outside_ci']].iloc[0]
print("Prophet Monitor Results")
print("=" * 50)
print(f"Days outside 95% CI: {prophet_results['outside_ci'].sum()}")
print(f"First alarm date: {first_alarm_prophet['date'].date()}")
print(f"Days after training end: "
f"{(first_alarm_prophet['date'] - train_period.index[-1]).days}")
Prophet Monitor Results
==================================================
Days outside 95% CI: 59
First alarm date: 2024-10-02
Days after training end: 36
Prophet detected the anomaly on October 2 --- 8 days later than the ARIMA-based detector. This is because Prophet's wider confidence intervals (designed for robust long-range forecasting) are more tolerant of moderate deviations. The ARIMA model, with its tighter residual-based threshold, triggered earlier.
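The effect of band width on detection delay is easy to see on a toy residual ramp (the numbers below are illustrative, not the turbine series): once residuals start growing linearly, a wider band simply alarms proportionally later.

```python
import numpy as np

t = np.arange(60)
# Residuals sit at zero, then grow linearly after a fault starts at t = 20
residuals = 0.02 * np.maximum(t - 20, 0.0)
sigma = 0.12  # training-period residual standard deviation

tight_alarm = int(np.argmax(residuals > 2 * sigma))  # ARIMA-style 2-sigma band
wide_alarm = int(np.argmax(residuals > 3 * sigma))   # wider, Prophet-style band
print(f"tight band alarms at t={tight_alarm}, wide band at t={wide_alarm}")
```

With a 0.02-per-day ramp, each extra sigma of band width costs six days of warning here, which is the same order as the 8-day gap observed between the two detectors above.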
Production Tip --- The choice between ARIMA and Prophet for anomaly detection depends on the cost of false alarms vs. missed detections. ARIMA's tighter thresholds catch degradation earlier but may produce more false alarms during normal operations. Prophet's wider bands are more forgiving but slower to alert. In TurbineTech's case, where a missed failure costs $320K and a false alarm costs a $500 inspection, the ARIMA approach is the better choice.
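One way to formalize that trade-off is an expected-cost comparison. The repair and inspection costs below come from the case study; the detection probabilities and false-alarm counts are illustrative assumptions, not measured values:

```python
# Costs from the case study; policy parameters below are assumptions
COST_MISSED = 320_000      # emergency replacement
COST_PLANNED = 180_000     # planned replacement
COST_INSPECTION = 500      # responding to a false alarm

def expected_annual_cost(p_detect, false_alarms_per_year, failures_per_year=1):
    """Expected cost of one turbine-year under a given alarm policy."""
    repair = failures_per_year * (
        p_detect * COST_PLANNED + (1 - p_detect) * COST_MISSED
    )
    return repair + false_alarms_per_year * COST_INSPECTION

# Hypothetical: tighter thresholds catch more failures but alarm falsely more often
arima_cost = expected_annual_cost(p_detect=0.95, false_alarms_per_year=6)
prophet_cost = expected_annual_cost(p_detect=0.90, false_alarms_per_year=2)
print(f"ARIMA policy:   ${arima_cost:,.0f}/turbine-year")
print(f"Prophet policy: ${prophet_cost:,.0f}/turbine-year")
```

Under these assumed rates, even six extra $500 inspections per year are cheap insurance against a five-point drop in detection probability, which is the arithmetic behind preferring the tighter detector.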
Phase 4: Quantifying Detection Lead Time
The critical business question: how many days of warning does the model provide before the vibration crosses the engineering threshold of 3.5 mm/s?
# Find when actual vibration first hits 3.5 mm/s
threshold_day = turb[turb['vibration_mm_s'] >= 3.5].index.min()
# Lead time: days between first alarm and threshold crossing
arima_alarm = monitor_results[monitor_results['anomaly']].iloc[0]['date']
prophet_alarm = first_alarm_prophet['date']
arima_lead = (threshold_day - arima_alarm).days
prophet_lead = (threshold_day - prophet_alarm).days
print("Detection Lead Time Analysis")
print("=" * 55)
print(f"Engineering threshold (3.5 mm/s) crossed: {threshold_day.date()}")
print(f"ARIMA first alarm: {arima_alarm.date()} "
f"({arima_lead} days lead time)")
print(f"Prophet first alarm: {prophet_alarm.date()} "
f"({prophet_lead} days lead time)")
print(f"\nTurbineTech maintenance window requirement: 30 days")
print(f"ARIMA meets requirement: {'YES' if arima_lead >= 30 else 'NO'}")
print(f"Prophet meets requirement: {'YES' if prophet_lead >= 30 else 'NO'}")
Detection Lead Time Analysis
=======================================================
Engineering threshold (3.5 mm/s) crossed: 2024-11-18
ARIMA first alarm: 2024-09-24 (55 days lead time)
Prophet first alarm: 2024-10-02 (47 days lead time)
TurbineTech maintenance window requirement: 30 days
ARIMA meets requirement: YES
Prophet meets requirement: YES
Both models provide sufficient lead time. The ARIMA detector gives 55 days --- nearly two months --- of warning before the vibration crosses the engineering threshold. That is more than enough time to order parts, schedule a crane, and plan the repair during a low-wind maintenance window.
Phase 5: Multivariate Confirmation
A single-sensor alarm is not enough for production. The maintenance team wants confirmation from a second sensor before scheduling a $180K repair. We check whether the temperature signal corroborates the vibration alarm.
# Run the same ARIMA monitoring on bearing temperature
train_temp = turb['bearing_temp_c'].iloc[:240]
monitor_temp = turb['bearing_temp_c'].iloc[240:]
auto_temp = pm.auto_arima(
train_temp,
seasonal=True, m=7,
max_p=3, max_q=3, max_P=2, max_Q=2,
stepwise=True, suppress_warnings=True, trace=False
)
temp_results = rolling_forecast_monitor(
train_temp, monitor_temp,
model_order=auto_temp.order,
seasonal_order=auto_temp.seasonal_order,
forecast_horizon=14,
update_freq=7
)
first_temp_alarm = temp_results[temp_results['anomaly']].iloc[0]['date']
print("Cross-Sensor Confirmation")
print("=" * 50)
print(f"Vibration alarm: {arima_alarm.date()}")
print(f"Temperature alarm: {first_temp_alarm.date()}")
print(f"Gap between alarms: "
f"{abs((first_temp_alarm - arima_alarm).days)} days")
print(f"\nBoth sensors alarming within 14 days: "
f"{'YES' if abs((first_temp_alarm - arima_alarm).days) <= 14 else 'NO'}")
Cross-Sensor Confirmation
==================================================
Vibration alarm: 2024-09-24
Temperature alarm: 2024-10-01
Gap between alarms: 7 days
Both sensors alarming within 14 days: YES
The temperature sensor confirmed the vibration alarm within 7 days. In production, TurbineTech's system would require alarms on at least 2 of the 4 primary bearing sensors before escalating to the maintenance scheduler. This multi-sensor confirmation reduces false alarm rates while maintaining detection sensitivity.
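A k-of-n confirmation rule like the one described can be sketched in a few lines (this is an illustration of the idea, not TurbineTech's production logic; the `acoustic` and `oil_particles` sensors are hypothetical):

```python
import pandas as pd

def confirmed_alarm(alarm_dates, k=2, window_days=14):
    """Return the first date on which at least k sensors have alarmed
    within a window_days span, or None if that never happens.

    alarm_dates: dict of sensor name -> first alarm Timestamp (None = no alarm).
    """
    dates = sorted(d for d in alarm_dates.values() if d is not None)
    for i in range(len(dates) - k + 1):
        if (dates[i + k - 1] - dates[i]).days <= window_days:
            return dates[i + k - 1]  # confirmation fires at the k-th alarm
    return None

alarms = {
    "vibration": pd.Timestamp("2024-09-24"),
    "bearing_temp": pd.Timestamp("2024-10-01"),
    "acoustic": None,        # hypothetical sensor, no alarm
    "oil_particles": None,   # hypothetical sensor, no alarm
}
print(confirmed_alarm(alarms))
```

With the dates above, confirmation fires on October 1, the day the second sensor alarms; a tighter 5-day window would suppress it.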
Results Summary
| Metric | ARIMA Rolling | Prophet |
|---|---|---|
| First alarm date | Sep 24 | Oct 2 |
| Lead time before 3.5 mm/s threshold | 55 days | 47 days |
| Meets 30-day requirement | Yes | Yes |
| False alarms (first 20 days of monitoring) | 0 | 0 |
| Temperature corroboration | +7 days | --- |
Lessons for Production
- **Train on known-good data.** The ARIMA model was trained exclusively on the normal operating period. If degradation data leaked into training, the model would learn to expect rising vibration and alarm later.
- **Rolling retraining is essential.** A single 125-day forecast would degrade rapidly. Retraining weekly on newly observed data kept the forecast tight during the normal period and sensitive during the degradation period.
- **Statistical thresholds trigger investigation; engineering thresholds trigger action.** The statistical alarm (a residual beyond two training standard deviations) prompts an investigation. The engineering threshold (3.5 mm/s) triggers a maintenance order. The model's job is to create lead time between the first and the second.
- **Multi-sensor confirmation reduces false alarms.** A single-sensor alarm could be noise, a sensor malfunction, or a transient condition. Two corroborating sensors provide much stronger evidence of a real mechanical problem.
- **Quantify lead time, not just accuracy.** The business does not care that the MAPE is 3.2%. The business cares that the model provided 55 days of warning before the $320K failure scenario. Translate model metrics into business outcomes.
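That last point can be made concrete with the cost figures from the Background section (the 8.5-day emergency downtime used here is the midpoint of the quoted 7-10 day range, an assumption):

```python
# Figures from the Background section; downtime midpoint is an assumption
emergency_repair = 320_000
planned_repair = 180_000
revenue_loss_per_day = 45_000
emergency_downtime_days = 8.5   # midpoint of the quoted 7-10 day range
planned_downtime_days = 3

emergency_total = emergency_repair + emergency_downtime_days * revenue_loss_per_day
planned_total = planned_repair + planned_downtime_days * revenue_loss_per_day
avoided = emergency_total - planned_total
print(f"Avoided cost per early-detected failure: ${avoided:,.0f}")
```

That is the number to put in front of stakeholders: each failure the detector converts from emergency to planned is worth roughly $390K once downtime revenue is included, not just the $140K difference in repair bills.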
This case study supports Chapter 25: Time Series Analysis and Forecasting. Return to the chapter or continue to Case Study 2: StreamFlow Churn Rate Forecasting.