Key Takeaways — Chapter 29

1. Capacity Planning Is a Business Discipline That Uses Technical Data

The technical measurement — MSU, CPU seconds, RMF data — is the easy part. The hard part is translating those numbers into language that finance and business leadership can act on. A capacity plan that exists only in the systems programmer's head is not a capacity plan. It's institutional risk. Effective capacity planning connects technical measurement to business growth projections to financial budgets to procurement timelines.

2. MSU Is the Unit That Matters for Money

IBM mainframe software costs are driven by MSU consumption, specifically the Rolling Four-Hour Average (R4HA). Every MSU of R4HA costs approximately $12–18 per month in combined MLC charges. Understanding what drives your R4HA — and managing it — is the single most valuable capacity planning skill. A 10% R4HA reduction on a 3,000 MSU system frees 300 MSU, worth roughly $43,000–$65,000 per year at those rates.
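The savings arithmetic is worth making explicit. This is a sketch using the $12–18 per MSU per month range quoted above; substitute your own contracted MLC rates:

```python
def annual_mlc_savings(baseline_msu, reduction_pct, rate_low=12.0, rate_high=18.0):
    """Annual MLC savings from a sustained R4HA reduction, as a (low, high) range.

    Rates are illustrative dollars per MSU per month, not contract figures.
    """
    msu_saved = baseline_msu * reduction_pct
    return msu_saved * rate_low * 12, msu_saved * rate_high * 12

low, high = annual_mlc_savings(3000, 0.10)
print(f"${low:,.0f} - ${high:,.0f} per year")  # $43,200 - $64,800 per year
```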

3. zIIP Offload Is the Biggest Cost Optimization Lever

Work that runs on zIIP engines doesn't count toward MSU for MLC pricing. Converting workloads from GP to zIIP-eligible paths — by restructuring DB2 access to use DRDA, deploying z/OS Connect for API processing, or shifting encryption to IPSec — reduces your software bill without reducing throughput. Traditional COBOL batch is not zIIP-eligible, which makes it the most expensive workload type per CPU second.

4. Different Workloads Have Different Capacity Profiles

Online, batch, DB2, and MQ workloads each have distinct peak patterns, growth rates, and optimization levers. A single "system growth rate" is meaningless. Characterize each workload family independently using WLM service classes, forecast each one separately, and watch for workload mix shifts — the industry trend is away from batch dominance toward API and integration workloads.

5. History + Business Intelligence = Useful Forecasts

Linear regression on historical data tells you what would happen if nothing changed. Seasonal adjustment accounts for predictable patterns. Business-driven projections (acquisitions, new products, migrations, regulatory changes) account for step-function changes that no trend can predict. You need all three components for a forecast that's actually useful, and you should present the result as a range of scenarios (low, baseline, high), not a single number.
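The three components can be combined in a simple model. Everything below — the history series, seasonal factors, and step changes — is invented for illustration; a real forecast would draw on SMF/RMF history and the business inputs described in this chapter:

```python
def forecast_msu(history, months_ahead, seasonal, step_changes, growth_factor=1.0):
    """Project MSU: fitted linear trend x seasonal factor + business step changes."""
    n = len(history)
    # Least-squares slope/intercept on the historical series (index = month).
    xbar = (n - 1) / 2
    ybar = sum(history) / n
    slope = (sum((i - xbar) * (y - ybar) for i, y in enumerate(history))
             / sum((i - xbar) ** 2 for i in range(n)))
    intercept = ybar - slope * xbar
    out = []
    for m in range(months_ahead):
        t = n + m
        trend = intercept + slope * growth_factor * t   # what-if-nothing-changed line
        base = trend * seasonal[t % 12]                  # predictable seasonal pattern
        base += sum(msu for start, msu in step_changes if start <= t)  # step changes
        out.append(round(base, 1))
    return out

history = [2800 + 15 * i for i in range(24)]               # 24 months, +15 MSU/month
seasonal = [1.0] * 12; seasonal[10] = seasonal[11] = 1.15  # invented Nov/Dec peak
steps = [(30, 200)]                                        # acquisition: +200 MSU at month 30
print(forecast_msu(history, 12, seasonal, steps))          # baseline scenario
```

Running the same model with a scaled `growth_factor` (say 0.8 and 1.3) yields the low and high scenarios that turn a single number into a defensible range.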

6. The R4HA Determines Your Bill — Manage It

Your monthly software bill is set by the single worst four-hour period in the month. This creates a strong financial incentive to flatten peaks: spread batch workloads across the window, shift non-critical processing to off-peak hours, use WLM capping for non-essential workloads during peak, and activate Capacity on Demand for predictable seasonal peaks.
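A minimal sketch of the R4HA calculation itself, using hourly MSU samples (actual sub-capacity reporting works from SMF data at finer granularity; all figures here are illustrative):

```python
def r4ha_peak(hourly_msu):
    """Peak rolling four-hour average over a month of hourly MSU readings."""
    windows = [sum(hourly_msu[i:i + 4]) / 4 for i in range(len(hourly_msu) - 3)]
    return max(windows)

# A flat 2,500 MSU day with one 4-hour batch spike to 3,400 MSU.
day = [2500] * 24
day[1:5] = [3400] * 4
print(r4ha_peak(day * 30))  # 3400.0
```

The single spike, not the daily average, sets the peak: flattening those four hours is worth more than shaving the other twenty.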

7. Every Change Has a Capacity Signature

New programs, DB2 index changes, SQL modifications, platform migrations, version upgrades, and encryption enablement all affect capacity. The danger is cumulative: twenty individually "negligible" changes add up to a material capacity increase. Formal Capacity Impact Assessments for changes above a threshold (10 MSU at CNB) prevent death by a thousand paper cuts.
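The cumulative-drift arithmetic is simple but easy to ignore. Assuming a $15 per MSU per month rate (a midpoint of the range quoted earlier; the change sizes are invented):

```python
small_changes_msu = [3] * 20        # twenty changes, each under a 10 MSU threshold
total = sum(small_changes_msu)      # 60 MSU of untracked cumulative growth
annual_cost = total * 15 * 12       # at an assumed $15/MSU/month MLC rate
print(f"{total} MSU -> ~${annual_cost:,.0f}/year")  # 60 MSU -> ~$10,800/year
```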

8. The Annual Cycle Creates Discipline

Data collection, business input, modeling, review, and monitoring — executed as a repeatable annual cycle with quarterly reviews and exception reporting. Without the process, capacity planning is ad hoc, inconsistent, and easily neglected. The process ensures that forecasts are updated, business inputs are gathered, and procurement timelines are respected.

9. Plan for the 75th Percentile, Not the 50th

The cost asymmetry between under-provisioning (emergency upgrades, SLA violations, batch failures) and over-provisioning (slightly higher monthly charges) justifies a planning bias toward having more capacity than you need. Plan for the 75th percentile scenario. The 3:00 AM phone call you prevent is worth far more than the marginal cost of headroom.
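One way to make the 75th-percentile target concrete, using invented scenario peaks from a forecast model:

```python
def percentile(values, p):
    """Linearly interpolated percentile (p in 0-100) of a list of values."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

scenario_peaks = [3100, 3150, 3200, 3250, 3400, 3500, 3650]  # modeled peak MSU
print(f"Plan-to (p75): {percentile(scenario_peaks, 75):.0f} MSU")  # 3450
print(f"Median (p50):  {percentile(scenario_peaks, 50):.0f} MSU")  # 3250
```

The 200 MSU gap between the two targets is the headroom you are buying; price it against the cost of one emergency upgrade.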

10. Modernization Temporarily Increases Capacity Requirements

The strangler fig pattern — running both old and new systems plus integration — creates a "transition hump" where total capacity consumption increases before it decreases. Data synchronization between the mainframe and cloud is the hidden cost driver. Capacity planning for modernization must account for this transition period, not just the end state.
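The transition hump can be sketched with a toy model. Every ramp rate and overhead figure here is an assumption; the point is the shape, not the numbers:

```python
def transition_capacity(months, start_msu=3000, sync_overhead=300):
    """Total MSU per month during a strangler-fig migration (toy model)."""
    profile = []
    for m in range(months):
        legacy = start_msu * (1 - m / months)        # linear decommission of old system
        bridge = sync_overhead if m > 0 else 0        # data-sync/replication load, once live
        api = 150 if m > 0 else 0                     # integration/routing overhead
        profile.append(round(legacy + bridge + api))
    return profile

hump = transition_capacity(12)
print(max(hump), ">", hump[0])  # 3200 > 3000: mid-migration exceeds the start
```

Note that the hump peaks early, when nearly all the legacy load remains but the synchronization and routing overhead is already live — exactly when capacity is most often assumed to be falling.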

11. The Courage to Present Bad News

The capacity planner's job is to present the data honestly, quantify the cost of inaction, and let leadership make informed decisions. When the forecast says you'll exceed capacity and the budget is frozen, the capacity planner must present the risk clearly. Nobody should be able to say they weren't warned. This is the part of the job that spreadsheets can't do.