Case Study 32-2: Two Warehouses, One Spreadsheet, No Answers

Character: Nadia Vasquez, Operations Analyst at Bright Thread Co.
Setting: A small e-commerce apparel-accessories business with two warehouses (Chicago and Phoenix)


The Problem

Bright Thread Co. sells accessories — bags, belts, wallets, scarves — through their own website and three marketplace channels. They operate two fulfillment warehouses: one in Chicago (serving East and Midwest customers) and one in Phoenix (serving West and Southwest). Combined SKU count: 312 active items. Combined team: 18 people.

Nadia Vasquez joined Bright Thread as their first operations analyst eight months ago. Her predecessor had managed inventory through a combination of gut instinct and a master Excel file that nobody fully trusted. Nadia's first month on the job had included three stockout incidents that required shipping products from Chicago to Phoenix at overnight freight rates — expensive workarounds for problems that better inventory management would have prevented.

She was hired partly because of her Excel skills. But after her first two months, she concluded that what she actually needed was Python.


The Data

Bright Thread's ERP system could export CSV files for any date range. Nadia pulled three files covering the previous six months:

  • bright_thread_inventory.csv — current stock levels per SKU per warehouse
  • bright_thread_sales.csv — daily sales transactions with SKU, channel, quantity, warehouse
  • bright_thread_pos.csv — purchase order history with supplier, order date, promised date, receipt date, and quantities

import pandas as pd
import numpy as np
import math
from scipy import stats

# Load all three data sources
inventory_df = pd.read_csv("bright_thread_inventory.csv")
sales_df = pd.read_csv("bright_thread_sales.csv", parse_dates=["sale_date"])
po_df = pd.read_csv(
    "bright_thread_pos.csv",
    parse_dates=["order_date", "promised_date", "receipt_date"],
)

print("=== DATA OVERVIEW ===")
print(
    f"Inventory records: {len(inventory_df)} "
    f"({inventory_df['sku_id'].nunique()} SKUs × {inventory_df['warehouse'].nunique()} warehouses)"
)
print(f"Sales transactions: {len(sales_df):,}")
print(f"Purchase orders: {len(po_df)}")
print(f"\nWarehouses: {sorted(inventory_df['warehouse'].unique())}")
print(f"SKU count: {inventory_df['sku_id'].nunique()}")
print(f"Supplier count: {po_df['supplier_id'].nunique()}")

Output:

=== DATA OVERVIEW ===
Inventory records: 624 (312 SKUs × 2 warehouses)
Sales transactions: 28,441
Purchase orders: 847
Warehouses: ['Chicago', 'Phoenix']
SKU count: 312
Supplier count: 9


Step 1: Inventory Health by Warehouse

The first thing Nadia wanted to understand was the basic health of each warehouse: how much inventory, how fast it was moving, and which SKUs were causing problems.

# Calculate demand velocity over the past 90 days
analysis_date = sales_df["sale_date"].max()
cutoff_date = analysis_date - pd.Timedelta(days=90)
recent_sales = sales_df[sales_df["sale_date"] >= cutoff_date]

# Aggregate to SKU-warehouse level
velocity = (
    recent_sales
    .groupby(["sku_id", "warehouse"])["quantity"]
    .sum()
    .reset_index()
    .rename(columns={"quantity": "units_sold_90d"})
)
velocity["avg_daily_demand"] = velocity["units_sold_90d"] / 90

# Merge with current inventory
enriched = inventory_df.merge(velocity, on=["sku_id", "warehouse"], how="left")
enriched["units_sold_90d"] = enriched["units_sold_90d"].fillna(0)
enriched["avg_daily_demand"] = enriched["avg_daily_demand"].fillna(0)

# Days of supply
enriched["days_of_supply"] = enriched.apply(
    lambda row: (
        row["current_stock"] / row["avg_daily_demand"]
        if row["avg_daily_demand"] > 0
        else float("inf")
    ),
    axis=1,
)

# Summary by warehouse
warehouse_summary = enriched.groupby("warehouse").agg(
    total_skus=("sku_id", "count"),
    total_inventory_value=("inventory_value", "sum"),
    skus_with_zero_stock=("current_stock", lambda x: (x == 0).sum()),
    avg_days_of_supply=("days_of_supply", lambda x: x[x != float("inf")].mean()),
).round(1)

print("\n=== WAREHOUSE INVENTORY SUMMARY ===")
print(warehouse_summary.to_string())

Output:

=== WAREHOUSE INVENTORY SUMMARY ===
           total_skus  total_inventory_value  skus_with_zero_stock  avg_days_of_supply
warehouse
Chicago           312             $284,200.0                    18                38.4
Phoenix           312             $196,800.0                    31                29.1

Phoenix was running leaner — fewer days of supply on average and more stockout SKUs. This aligned with Nadia's experience: the three emergency inter-warehouse transfers had all been from Chicago to Phoenix.
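That pattern suggests a cheaper fix than overnight freight: a scheduled transfer whenever one warehouse is nearly out of a SKU the other holds in surplus. A minimal sketch, assuming the `enriched` frame from Step 1 — the `transfer_candidates` helper, its thresholds, and the demo values are illustrative, not part of Nadia's actual analysis:

```python
import pandas as pd

def transfer_candidates(enriched: pd.DataFrame, min_days_surplus: float = 45.0,
                        max_days_short: float = 7.0) -> pd.DataFrame:
    """Flag SKUs nearly out at one warehouse but comfortably stocked at the
    other -- candidates for a routine transfer instead of emergency freight."""
    wide = enriched.pivot(index="sku_id", columns="warehouse",
                          values="days_of_supply")
    short_phx = (wide["Phoenix"] <= max_days_short) & (wide["Chicago"] >= min_days_surplus)
    short_chi = (wide["Chicago"] <= max_days_short) & (wide["Phoenix"] >= min_days_surplus)
    return wide[short_phx | short_chi]

# Tiny synthetic frame in the shape of `enriched` (illustrative values only)
demo = pd.DataFrame({
    "sku_id": ["BT-0014", "BT-0014", "SC-0082", "SC-0082"],
    "warehouse": ["Chicago", "Phoenix", "Chicago", "Phoenix"],
    "days_of_supply": [60.0, 2.0, 10.0, 12.0],
})
print(transfer_candidates(demo))  # BT-0014 qualifies; SC-0082 does not
```

In practice the transfer quantity would also be capped so the donor warehouse keeps its own safety stock.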


Step 2: ABC Analysis — What Actually Drives the Business

Next, Nadia ranked every SKU by consumption value to separate the handful of items that drive the business from the long tail.

# Consumption value per SKU over the data window, both warehouses combined
# (six months of sales here; "annual" names are kept for the standard ABC framing)
annual_velocity = (
    sales_df
    .groupby("sku_id")["quantity"]
    .sum()
    .reset_index()
    .rename(columns={"quantity": "total_units_sold"})
)

# Merge with cost data
sku_costs = inventory_df.groupby("sku_id")["unit_cost"].first().reset_index()
annual_velocity = annual_velocity.merge(sku_costs, on="sku_id", how="left")
annual_velocity["annual_consumption_value"] = (
    annual_velocity["total_units_sold"] * annual_velocity["unit_cost"]
)

# ABC classification
annual_velocity = annual_velocity.sort_values(
    "annual_consumption_value", ascending=False
).reset_index(drop=True)

total_value = annual_velocity["annual_consumption_value"].sum()
annual_velocity["value_share"] = annual_velocity["annual_consumption_value"] / total_value
annual_velocity["cumulative_share"] = annual_velocity["value_share"].cumsum()

def assign_abc(cumulative_share: float) -> str:
    if cumulative_share <= 0.80:
        return "A"
    elif cumulative_share <= 0.95:
        return "B"
    else:
        return "C"

annual_velocity["abc_category"] = annual_velocity["cumulative_share"].apply(assign_abc)

# Summary
abc_summary = annual_velocity.groupby("abc_category").agg(
    sku_count=("sku_id", "count"),
    total_value=("annual_consumption_value", "sum"),
).reset_index()
abc_summary["sku_pct"] = (
    abc_summary["sku_count"] / inventory_df["sku_id"].nunique() * 100
).round(1)
abc_summary["value_pct"] = (abc_summary["total_value"] / total_value * 100).round(1)

print("\n=== ABC ANALYSIS ===")
print(abc_summary.to_string(index=False))

Output:

=== ABC ANALYSIS ===
 abc_category  sku_count  total_value  sku_pct  value_pct
             A         51   $331,200.0     16.3       79.8
            B         82    $61,900.0     26.3       14.9
            C        179    $22,000.0     57.4        5.3

The pattern matched what theory predicts: 16% of SKUs driving 80% of consumption value. Nadia noticed immediately that the 179 Class C SKUs together generated only $22,000 in annual consumption value — an average of $123 per SKU. Many of those were likely candidates for elimination or consolidation.


Step 3: Identifying Slow-Moving SKUs

# Last sale date per SKU
last_sale_date = (
    sales_df
    .groupby("sku_id")["sale_date"]
    .max()
    .reset_index()
    .rename(columns={"sale_date": "last_sale_date"})
)

# Merge with total inventory
total_inventory_by_sku = (
    inventory_df
    .groupby("sku_id")
    .agg(
        total_stock=("current_stock", "sum"),
        inventory_value=("inventory_value", "sum"),
        unit_cost=("unit_cost", "first"),
    )
    .reset_index()
)

slob_analysis = total_inventory_by_sku.merge(
    last_sale_date, on="sku_id", how="left"
)
slob_analysis = slob_analysis.merge(
    annual_velocity[["sku_id", "abc_category"]], on="sku_id", how="left"
)

slob_analysis["days_since_last_sale"] = (
    analysis_date - slob_analysis["last_sale_date"]
).dt.days

def classify_slob(days: float) -> str:
    if pd.isna(days) or days > 180:
        return "Potentially Obsolete"
    elif days > 90:
        return "Slow-Moving"
    else:
        return "Active"

slob_analysis["status"] = slob_analysis["days_since_last_sale"].apply(classify_slob)

# Summary
status_summary = slob_analysis.groupby("status").agg(
    sku_count=("sku_id", "count"),
    total_inventory_value=("inventory_value", "sum"),
).reset_index().sort_values("total_inventory_value", ascending=False)

print("\n=== SLOW-MOVING INVENTORY ANALYSIS ===")
print(status_summary.to_string(index=False))

# Working capital at risk
at_risk_value = slob_analysis[slob_analysis["status"] != "Active"]["inventory_value"].sum()
print(f"\nTotal working capital in slow-moving/obsolete inventory: ${at_risk_value:,.0f}")

Output:

=== SLOW-MOVING INVENTORY ANALYSIS ===
               status  sku_count  total_inventory_value
               Active        241             $434,800.0
         Slow-Moving         42              $31,200.0
 Potentially Obsolete         29              $15,000.0

Total working capital in slow-moving/obsolete inventory: $46,200

$46,200 tied up in 71 SKUs that had not sold in 90+ days. For a company with 18 employees and tight working capital, this was meaningful.
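Because Step 3 merged `abc_category` into `slob_analysis`, one more cut is nearly free: status by ABC class. It matters because a slow-moving Class A item points to a forecasting or assortment problem, while a slow-moving Class C item is simply a liquidation candidate. A sketch on synthetic rows shaped like `slob_analysis` (values are illustrative):

```python
import pandas as pd

# Synthetic rows in the shape of slob_analysis (illustrative values only)
slob = pd.DataFrame({
    "sku_id": ["BT-0014", "SC-0082", "BG-0019", "WL-0044", "SC-0107"],
    "abc_category": ["A", "A", "B", "C", "C"],
    "status": ["Active", "Active", "Slow-Moving", "Potentially Obsolete", "Slow-Moving"],
    "inventory_value": [4200, 3100, 900, 350, 410],
})

# SKU counts per status and class, plus a value-weighted version
counts = pd.crosstab(slob["status"], slob["abc_category"])
value = pd.pivot_table(slob, values="inventory_value", index="status",
                       columns="abc_category", aggfunc="sum", fill_value=0)
print(counts)
print(value)
```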


Step 4: Reorder Analysis — Which SKUs to Buy Now

# Calculate reorder parameters for all SKUs with meaningful demand
SERVICE_LEVEL_A = 0.97  # 97% for Class A
SERVICE_LEVEL_B = 0.95  # 95% for Class B
SERVICE_LEVEL_C = 0.90  # 90% for Class C

# Get lead time parameters from PO history
lead_time_stats = po_df.copy()
lead_time_stats["lead_time_days"] = (
    lead_time_stats["receipt_date"] - lead_time_stats["order_date"]
).dt.days

lt_by_supplier = lead_time_stats.groupby("supplier_id")["lead_time_days"].agg(
    mean_lt="mean",
    std_lt="std",
).fillna({"std_lt": 2.0})

# For simplicity, calculate at SKU level using company-wide average lead time
# (in practice, you would join to the specific supplier for each SKU)
company_avg_lt = lead_time_stats["lead_time_days"].mean()
company_std_lt = lead_time_stats["lead_time_days"].std()

# Look up each SKU's ABC category once, rather than rebuilding the index
# inside the loop
abc_by_sku = annual_velocity.set_index("sku_id")["abc_category"]

reorder_results = []

for warehouse in ["Chicago", "Phoenix"]:
    warehouse_inv = enriched[enriched["warehouse"] == warehouse].copy()

    for _, row in warehouse_inv.iterrows():
        if row["avg_daily_demand"] <= 0:
            continue

        # Determine service level by ABC category
        abc_cat = abc_by_sku.get(row["sku_id"], "C")
        service_level = (
            SERVICE_LEVEL_A if abc_cat == "A"
            else SERVICE_LEVEL_B if abc_cat == "B"
            else SERVICE_LEVEL_C
        )

        z_score = stats.norm.ppf(service_level)
        demand_std = row["avg_daily_demand"] * 0.25  # assume 25% demand variability

        safety_stock = z_score * math.sqrt(
            company_avg_lt * demand_std ** 2
            + row["avg_daily_demand"] ** 2 * company_std_lt ** 2
        )

        reorder_point = row["avg_daily_demand"] * company_avg_lt + safety_stock

        eoq = math.sqrt(
            2 * row["avg_daily_demand"] * 365 * 42  # $42 estimated order cost
            / (row["unit_cost"] * 0.22)  # 22% holding cost
        ) if row.get("unit_cost", 0) > 0 else row.get("reorder_quantity", 50)

        reorder_results.append({
            "sku_id": row["sku_id"],
            "warehouse": warehouse,
            "abc_category": abc_cat,
            "current_stock": row["current_stock"],
            "avg_daily_demand": round(row["avg_daily_demand"], 2),
            "days_of_supply": round(row["days_of_supply"], 1),
            "safety_stock": round(safety_stock, 0),
            "reorder_point": round(reorder_point, 0),
            "eoq": round(eoq, 0),
            "needs_reorder": row["current_stock"] <= reorder_point,
        })

reorder_df = pd.DataFrame(reorder_results)

# Reorder alerts
alerts = reorder_df[reorder_df["needs_reorder"]].sort_values("days_of_supply")
print("\n=== REORDER ALERTS ===")
print(f"Total SKU-warehouse combinations needing reorder: {len(alerts)}")
print("\nBy warehouse:")
print(alerts.groupby("warehouse")["needs_reorder"].count().to_string())
print("\nBy ABC category:")
print(alerts.groupby("abc_category")["needs_reorder"].count().to_string())
print("\nTop 10 most urgent (lowest days of supply):")
print(
    alerts.head(10)[
        ["sku_id", "warehouse", "abc_category", "current_stock",
         "reorder_point", "days_of_supply", "eoq"]
    ].to_string(index=False)
)

Output:

=== REORDER ALERTS ===
Total SKU-warehouse combinations needing reorder: 38

By warehouse:
warehouse
Chicago    14
Phoenix    24

By ABC category:
abc_category
A     9
B    17
C    12

Top 10 most urgent (lowest days of supply):
     sku_id warehouse abc_category  current_stock  reorder_point  days_of_supply   eoq
  BT-0014      Phoenix            A             12           45.2             1.8   120
  SC-0082      Phoenix            A              8           31.8             2.1    95
  BG-0019      Phoenix            B              3           18.4             2.7    75
  BT-0014      Chicago            A             18           45.2             3.2   120
  WL-0044      Phoenix            A             15           38.9             4.1    88
  SC-0082      Chicago            A             22           31.8             5.8    95
  BG-0031      Phoenix            B              7           19.2             6.3    68
  WL-0052      Phoenix            B              4           14.6             6.8    55
  BG-0019      Chicago            B              9           18.4             7.2    75
  SC-0107      Phoenix            C             22           27.1             9.4    62

Nine Class A items needed reorder attention — including two that were under 5 days of supply in Phoenix. Nadia immediately sent the top five items to her purchasing contact with a request for expedited shipping on the Phoenix allocation.
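The two formulas inside the loop are the standard ones, and they are easier to sanity-check pulled out as standalone functions: safety stock covers variability in both demand and lead time, z·sqrt(LT·σd² + d²·σLT²), and EOQ balances ordering cost against holding cost, sqrt(2DS/H). A sketch using the same assumptions as the loop (25% demand CV, $42 order cost, 22% holding rate; the example inputs are illustrative):

```python
import math
from scipy import stats

def safety_stock(avg_daily_demand: float, demand_std: float,
                 avg_lead_time: float, lead_time_std: float,
                 service_level: float) -> float:
    """Safety stock when both demand and lead time vary:
    z * sqrt(LT * sigma_d^2 + d^2 * sigma_LT^2)."""
    z = stats.norm.ppf(service_level)
    return z * math.sqrt(
        avg_lead_time * demand_std ** 2
        + avg_daily_demand ** 2 * lead_time_std ** 2
    )

def eoq(annual_demand: float, order_cost: float,
        unit_cost: float, holding_rate: float) -> float:
    """Economic order quantity: sqrt(2 * D * S / H), H = unit cost * holding rate."""
    return math.sqrt(2 * annual_demand * order_cost / (unit_cost * holding_rate))

# Illustrative SKU: 5 units/day at 97% service, 14-day lead time (std 3 days)
ss = safety_stock(5.0, 5.0 * 0.25, 14.0, 3.0, 0.97)
q = eoq(5.0 * 365, 42.0, 18.0, 0.22)
print(round(ss, 1), round(q))
```

Pulling the formulas out also makes it trivial to re-run the loop with different order-cost or holding-rate assumptions.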


Step 5: Supplier Scorecard

The PO history also let Nadia grade each supplier on delivery performance.

po_df["on_time"] = po_df["receipt_date"] <= po_df["promised_date"]
po_df["fill_ratio"] = (
    po_df["received_quantity"] / po_df["ordered_quantity"].replace(0, pd.NA)
).clip(upper=1.0)
po_df["lead_time_days"] = (po_df["receipt_date"] - po_df["order_date"]).dt.days

supplier_scorecard = po_df.groupby("supplier_id").agg(
    order_count=("supplier_id", "count"),
    on_time_delivery_rate=("on_time", lambda x: round(x.mean() * 100, 1)),
    fill_rate=("fill_ratio", lambda x: round(x.mean() * 100, 1)),
    avg_lead_time=("lead_time_days", "mean"),
    std_lead_time=("lead_time_days", "std"),
).round(2).reset_index()

supplier_scorecard["cv_lead_time"] = (
    supplier_scorecard["std_lead_time"] / supplier_scorecard["avg_lead_time"]
).round(3)

def grade(row: pd.Series) -> str:
    score = (row["on_time_delivery_rate"] + row["fill_rate"]) / 2
    if score >= 95:
        return "A"
    elif score >= 85:
        return "B"
    elif score >= 75:
        return "C"
    else:
        return "D"

supplier_scorecard["grade"] = supplier_scorecard.apply(grade, axis=1)

print("\n=== SUPPLIER SCORECARD ===")
print(
    supplier_scorecard[
        ["supplier_id", "order_count", "on_time_delivery_rate",
         "fill_rate", "avg_lead_time", "cv_lead_time", "grade"]
    ].sort_values("on_time_delivery_rate", ascending=False).to_string(index=False)
)

Output:

=== SUPPLIER SCORECARD ===
 supplier_id  order_count  on_time_delivery_rate  fill_rate  avg_lead_time  cv_lead_time  grade
     SUP-001          124                   97.6       99.1           12.4         0.124      A
     SUP-006           89                   96.6       98.8           18.2         0.132      A
     SUP-003           78                   95.1       99.4           10.8         0.149      A
     SUP-008           44                   93.2       97.6           14.1         0.198      B
     SUP-007           96                   91.8       98.2           16.8         0.211      B
     SUP-005          112                   88.4       95.8           21.4         0.318      B
     SUP-002           67                   84.2       93.4           17.9         0.287      C
     SUP-009           33                   81.7       97.2           22.6         0.391      C
     SUP-004          204                   72.3       89.4           28.4         0.443      D

SUP-004 was a clear problem: a 72.3% on-time delivery rate (OTDR), an 89.4% fill rate, and a 28-day average lead time with a lead-time CV of 0.44 — all significantly worse than every other supplier. And SUP-004 had the most purchase orders of any supplier (204 over the analysis period), meaning this unreliability touched a large share of Bright Thread's supply chain.
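How much does that variability cost? Plugging the scorecard's lead-time numbers into the Step 4 safety-stock formula for a hypothetical SKU shows the gap (the 5-units/day demand figure is illustrative):

```python
import math
from scipy import stats

def ss_needed(d: float, sigma_d: float, lt: float, sigma_lt: float,
              service_level: float = 0.95) -> float:
    """Safety stock for a given supplier's lead-time profile."""
    z = stats.norm.ppf(service_level)
    return z * math.sqrt(lt * sigma_d ** 2 + d ** 2 * sigma_lt ** 2)

# Illustrative SKU: 5 units/day with 25% demand CV
d, sigma_d = 5.0, 1.25

# Lead-time stats from the scorecard: std dev = mean * CV
sup_001 = ss_needed(d, sigma_d, 12.4, 12.4 * 0.124)
sup_004 = ss_needed(d, sigma_d, 28.4, 28.4 * 0.443)

print(f"SUP-001 buffer: {sup_001:.0f} units, SUP-004 buffer: {sup_004:.0f} units")
```

Holding the same 95% service level against SUP-004's lead-time profile takes roughly seven times the buffer stock that SUP-001's does, which is the mechanism behind the thin inventory margins on SUP-004-sourced items.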


Step 6: The Excel Dashboard Output

Her colleagues lived in Excel, so the final step was packaging every analysis into one multi-sheet workbook.

output_path = "bright_thread_supply_chain_report.xlsx"

with pd.ExcelWriter(output_path, engine="openpyxl") as writer:
    # Summary sheet
    summary_data = pd.DataFrame([
        {"Metric": "Total SKUs", "Chicago": 312, "Phoenix": 312, "Combined": 312},
        {"Metric": "Total Inventory Value ($)", "Chicago": 284200, "Phoenix": 196800, "Combined": 481000},
        {"Metric": "SKUs Below Reorder Point", "Chicago": 14, "Phoenix": 24, "Combined": 38},
        {"Metric": "Class A SKUs Below ROP", "Chicago": 4, "Phoenix": 5, "Combined": 9},
        {"Metric": "Slow-Moving Inventory ($)", "Chicago": "N/A", "Phoenix": "N/A", "Combined": 46200},
        {"Metric": "Supplier Grade D Count", "Chicago": "N/A", "Phoenix": "N/A", "Combined": 1},
    ])
    summary_data.to_excel(writer, sheet_name="Summary", index=False)

    # Full reorder alerts
    alerts.to_excel(writer, sheet_name="Reorder Alerts", index=False)

    # ABC analysis
    (
        annual_velocity
        .merge(inventory_df.groupby("sku_id")["inventory_value"].sum().reset_index(), on="sku_id")
        .to_excel(writer, sheet_name="ABC Analysis", index=False)
    )

    # SLOB
    slob_analysis.to_excel(writer, sheet_name="Slow-Moving SKUs", index=False)

    # Supplier scorecard
    supplier_scorecard.to_excel(writer, sheet_name="Supplier Scorecard", index=False)

print(f"\nDashboard saved to: {output_path}")
print("Sheets: Summary, Reorder Alerts, ABC Analysis, Slow-Moving SKUs, Supplier Scorecard")


What Nadia Took to Her Manager

Nadia presented three action items, ordered by urgency:

Immediate (this week): Place emergency orders for the 9 Class A SKUs below their reorder point, prioritizing Phoenix allocation. Total estimated order value: $28,400. This resolves the immediate stockout risk and avoids repeat overnight transfer freight costs.

Near-term (30 days): Initiate a formal supplier review with SUP-004. Their 72% OTDR and 89% fill rate are materially below the rest of the supplier base. The 28-day average lead time combined with high variability is a primary contributor to Phoenix's thin inventory buffer.

Strategic (next quarter): Review the 71 slow-moving and potentially obsolete SKUs ($46,200 combined). Liquidate the oldest 29 items (no sales in 180+ days), and run a clearance promotion on the 42 slow-movers. Freeing this working capital provides budget headroom to increase safety stock on the Class A items that have been running dangerously thin.


The Key Lesson From Bright Thread

Nadia's analysis took approximately four hours of Python work, plus two hours of explanation and discussion. Before Python, this analysis would have taken three to four days of manual spreadsheet work — if anyone had attempted it at all.

The multi-warehouse dimension adds real complexity to supply chain analytics. SKU-warehouse combinations double the number of records, and the same SKU can be a reorder emergency in Phoenix while fully stocked in Chicago. Python handles this naturally through groupby operations and warehouse-level filtering.
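A toy example of that divergence, assuming the column names from Step 4:

```python
import pandas as pd

# Toy frame: one SKU, two warehouses, reorder check evaluated independently
df = pd.DataFrame({
    "sku_id": ["BT-0014", "BT-0014"],
    "warehouse": ["Chicago", "Phoenix"],
    "current_stock": [80, 6],
    "reorder_point": [45, 45],
})
df["needs_reorder"] = df["current_stock"] <= df["reorder_point"]
print(df.groupby("warehouse")["needs_reorder"].any().to_string())
# Chicago: fully stocked; Phoenix: reorder emergency -- same SKU
```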

The SLOB analysis and supplier scorecard are not glamorous analyses. They are the operational fundamentals that prevent a business from quietly accumulating problems that only become visible when a customer calls to complain. Running them regularly — monthly or quarterly — is far more valuable than any one-time deep-dive.


Python Techniques Demonstrated

  • Multi-source CSV data loading and joining
  • SKU-warehouse level aggregation using groupby() on multiple columns
  • ABC classification applied to consumption value (units × cost, not revenue)
  • Days-of-supply calculation with float("inf") handling for zero-demand SKUs
  • Safety stock calculation differentiated by ABC tier (different service levels)
  • EOQ estimation with assumed cost parameters
  • SLOB classification using apply() with date arithmetic
  • Supplier scorecard with OTDR, fill rate, and lead time CV
  • Multi-sheet Excel output using pd.ExcelWriter and openpyxl