Chapter 29 Quiz

Question 1

What is the primary unit IBM uses for mainframe software pricing calculations?

A) MIPS (Millions of Instructions Per Second)
B) CPU seconds
C) MSU (Millions of Service Units)
D) Service units

Answer: C MSU (Millions of Service Units) is IBM's official capacity unit for software pricing. While MIPS is still used informally and CPU seconds are the fundamental measurement unit, MSU is what SCRT reports and what determines your Monthly License Charge. MSU is hardware-independent, meaning 1 MSU on a z14 represents nominally the same workload capacity as 1 MSU on a z16.


Question 2

The Rolling Four-Hour Average (R4HA) is used by IBM to determine the Monthly License Charge. How is the R4HA calculated?

A) The average MSU consumption over any four consecutive hours, taking the monthly maximum
B) The average of the four highest hours in the month
C) The four-hour period with the highest total MSU consumption
D) The rolling average of all four-hour windows, taking the monthly minimum

Answer: A The R4HA is calculated by computing the average MSU consumption for every possible four-hour window in the month, then taking the highest such average as the billing point. This means your monthly bill is determined by your worst four consecutive hours — creating a strong incentive to flatten peak consumption.
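The R4HA computation described above can be sketched in a few lines of Python. The hourly MSU samples are invented for illustration; a real run would cover every hour of the billing month, taken from SMF interval data:

```python
# Hypothetical hourly MSU samples for illustration only.
hourly_msu = [420, 450, 610, 890, 920, 870, 640, 480, 430, 410]

def r4ha_peak(samples):
    """Peak rolling four-hour average: average every window of four
    consecutive hourly samples, then take the maximum average."""
    windows = (samples[i:i + 4] for i in range(len(samples) - 3))
    return max(sum(w) / 4 for w in windows)

peak = r4ha_peak(hourly_msu)  # the billing point for this sample
```

Note that shaving the single worst four-hour window lowers the bill even if total monthly consumption is unchanged, which is exactly the incentive the answer describes.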


Question 3

Which workload type is NOT eligible for zIIP processing?

A) DB2 DRDA distributed queries
B) Traditional COBOL batch processing
C) z/OS Connect Java-based API processing
D) IPSec encryption processing

Answer: B Traditional COBOL batch processing runs on general purpose (GP) processors and is not zIIP-eligible. This is a critical distinction for capacity planning: batch COBOL consumes MSU-rated capacity. DB2 DRDA, z/OS Connect, and IPSec are all zIIP-eligible, so the portion of their processing that is redirected to zIIP engines does not count toward MLC pricing.


Question 4

Which RMF/SMF record type provides processor activity data used for capacity planning?

A) SMF Type 30
B) SMF Type 42
C) SMF Type 70
D) SMF Type 89

Answer: C SMF Type 70 provides processor activity data including CPU utilization by LPAR, by engine type (GP, zIIP), and MSU consumption. This is the primary data source for capacity planning. Type 30 is job accounting, Type 42 is storage management, and Type 89 is used by SCRT for sub-capacity reporting.


Question 5

Why is a peak GP utilization of 89.7% on an LPAR a capacity concern, even though 10.3% headroom remains?

A) z/OS cannot use more than 90% of processor capacity
B) WLM begins throttling lower-priority work above 85%, degrading service levels
C) zIIP processing stops above 85% utilization
D) SMF recording becomes unreliable above 85% utilization

Answer: B Above approximately 85% GP utilization, WLM's ability to manage workload priorities effectively degrades. WLM must start making hard trade-offs — throttling lower-priority work aggressively, delaying batch in favor of online, and potentially missing service class goals. Industry best practice is to keep peak GP utilization below 85% to preserve WLM's management headroom.


Question 6

A capacity planner presents a forecast as a single number: "We will need 3,200 MSU by Q4." What is wrong with this approach?

A) MSU should be expressed as a range to reflect forecast uncertainty
B) The forecast should be in MIPS, not MSU
C) Capacity should be forecasted annually, not quarterly
D) A single number is acceptable if the confidence level is stated

Answer: A A point forecast implies certainty that doesn't exist. Effective capacity planning presents scenarios — typically conservative, expected, and aggressive — that bound the likely range of outcomes. This gives leadership the information needed to make risk-appropriate decisions. A forecast of "3,000–3,500 MSU by Q4" is more actionable than "3,200 MSU" because it communicates the uncertainty and allows planning for multiple outcomes.
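A scenario forecast of this shape can be produced mechanically. The starting MSU level and the three growth rates below are assumptions invented for the example, not figures from the chapter:

```python
# Illustrative inputs only: current MSU and per-scenario growth
# rates are assumptions, not chapter data.
current_msu = 2800

growth_rates = {
    "conservative": 0.07,
    "expected": 0.14,
    "aggressive": 0.25,
}

forecast = {name: round(current_msu * (1 + rate))
            for name, rate in growth_rates.items()}
# Report the conservative-to-aggressive span as the planning range
# rather than quoting the "expected" number alone.
```

With these assumed rates the planning range works out to roughly 3,000 to 3,500 MSU, which is the form of forecast the answer recommends.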


Question 7

At least how many years of historical data do you need to calculate meaningful seasonal indices for capacity forecasting?

A) 6 months
B) 1 year
C) 2 years
D) 5 years

Answer: C Two years provides two data points per calendar month — the minimum needed to distinguish seasonal patterns from random variation. One year gives only one observation per month, which can't separate signal from noise. Three years is better, but two is the minimum for meaningful seasonal analysis.
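A minimal sketch of why two years is the floor: with two observations per month, a seasonal index can average out one-off spikes. The monthly MSU figures below are invented to show a year-end peak, not taken from the chapter:

```python
# Two years of monthly average MSU (January..December); values are
# illustrative assumptions.
year1 = [100, 95, 98, 102, 105, 110, 108, 107, 104, 112, 120, 130]
year2 = [104, 99, 101, 106, 110, 115, 112, 111, 108, 118, 126, 138]

overall_mean = sum(year1 + year2) / 24

# Index per month: the average of that month's two observations
# divided by the overall mean. An index above 1.0 marks a seasonal
# peak; below 1.0, a seasonal trough.
seasonal_index = [((a + b) / 2) / overall_mean
                  for a, b in zip(year1, year2)]
```

With one year there is a single observation per month, so a random spike is indistinguishable from a genuine seasonal peak.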


Question 8

Under IBM's Tailored Fit Pricing Enterprise Consumption Solution (TFP/ECS), what happens when consumption exceeds the committed MSU level?

A) IBM automatically adds capacity and bills at the base rate
B) Overage charges apply at a rate higher than the base commitment rate
C) The excess consumption is not billed until the next contract renewal
D) The commitment level automatically adjusts upward

Answer: B Under TFP/ECS, consumption above the committed MSU level incurs overage charges at a rate that is typically higher than the base commitment rate. This creates an incentive to set the commitment level accurately — too low and you pay overage rates; too high and you're paying for unused committed capacity.
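The billing shape can be illustrated with a simplified model. The rates and MSU figures below are placeholders, and real TFP/ECS contracts have terms this sketch ignores:

```python
def ecs_charge(consumed, committed, base_rate, overage_rate):
    """Simplified TFP/ECS billing model: the committed MSU are
    billed at the base rate whether or not they are used, and
    consumption above the commitment is billed at the higher
    overage rate. All rates here are illustrative."""
    overage = max(0, consumed - committed)
    return committed * base_rate + overage * overage_rate

# Illustrative rates: overage priced 50% above the base rate.
under = ecs_charge(consumed=2900, committed=3000,
                   base_rate=100, overage_rate=150)
over = ecs_charge(consumed=3200, committed=3000,
                  base_rate=100, overage_rate=150)
```

The two calls show both failure modes from the answer: under-consumption still pays for the full commitment, while consumption above it pays a premium on the excess.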


Question 9

Which of the following R4HA management techniques typically provides the LARGEST MSU reduction?

A) WLM capping of development/test LPARs
B) Batch schedule optimization to spread peak workload
C) zIIP offload of eligible workloads
D) Workload balancing across LPARs

Answer: C zIIP offload provides the largest MSU reduction because it removes entire categories of workload from the GP MSU calculation permanently. Batch scheduling changes redistribute existing MSU consumption across time, which helps with R4HA but doesn't reduce total consumption. WLM capping and LPAR balancing are important but typically yield smaller reductions. The example in the chapter showed a 70% GP reduction for a single workload through zIIP offload.


Question 10

When estimating the capacity impact of a new COBOL program, what factor should be applied to test environment measurements to project production performance?

A) 0.8x (production is more efficient than test)
B) 1.0x (test and production are equivalent)
C) 1.1–1.3x (production data is messier and volumes are higher)
D) 2.0x (always double the test measurement)

Answer: C A test-to-production factor of 1.1–1.3x is standard practice because production data is typically messier than test data (more edge cases, more data skew, more contention), and production volumes may differ from test volumes. The exact factor depends on how realistic the test environment is — a well-configured performance test environment with production-volume data may need only a 1.1x factor, while a unit test measurement may need 1.5x or more.
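The projection arithmetic is simple enough to sketch. The function name, CPU figures, and volumes below are hypothetical:

```python
def project_production_cpu(test_cpu_seconds, test_volume,
                           prod_volume, factor=1.2):
    """Scale a test measurement to a production estimate.

    factor is the test-to-production multiplier: 1.1-1.3 for a
    realistic performance test environment, 1.5 or more for
    unit-test-level measurements, per the rule of thumb above.
    """
    cpu_per_record = test_cpu_seconds / test_volume
    return cpu_per_record * prod_volume * factor

# Hypothetical numbers: 600 CPU seconds to process 100,000 test
# records, projected to 1.5 million production records.
estimate = project_production_cpu(600, 100_000, 1_500_000,
                                  factor=1.2)
```

The volume scaling and the uncertainty factor are deliberately separate inputs, so a planner can defend each assumption independently.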


Question 11

Lisa Tran's rule is "No DB2 change goes to production without a capacity impact assessment." Which DB2 change has the MOST unpredictable capacity impact?

A) Adding a new table
B) Adding an index to a high-volume insert table
C) Upgrading the DB2 version
D) Increasing the buffer pool size

Answer: C DB2 version upgrades change the optimizer's behavior, which can cause access path changes for any query in the system. While most changes improve performance, optimizer regressions for specific queries can cause significant and unpredictable capacity increases. New indexes have predictable costs (insert overhead) and benefits (query optimization). New tables and buffer pool changes have limited capacity impact.


Question 12

The Sub-Capacity Reporting Tool (SCRT) reads which SMF record types?

A) Type 30 and Type 70
B) Type 70 and Type 89
C) Type 72 and Type 89
D) Type 30 and Type 72

Answer: B SCRT reads SMF Type 70 records (processor activity and LPAR utilization data from PR/SM) and Type 89 records (product usage data identifying which products ran on each LPAR) to calculate billable MSU for each product on each LPAR. Accurate SCRT reporting is a financial control — errors can result in over-billing or retroactive charges from IBM.


Question 13

A shop's workload mix has shifted over five years from 62% batch / 25% online / 5% API to 44% batch / 28% online / 22% API. What is the PRIMARY capacity planning implication?

A) Total capacity requirements are decreasing
B) The peak hour may be shifting from nighttime to daytime
C) zIIP engines are no longer needed
D) Batch performance tuning is no longer important

Answer: B As the workload mix shifts from batch-dominated (nighttime peak) to online/API-dominated, the overall system peak may shift from the batch window (typically 1-4 AM) to the online peak (typically 12-2 PM). This affects R4HA calculations, WLM configuration, and capacity sizing. Batch performance tuning remains important, but the growing API workload — which is more zIIP-eligible — changes both when and how capacity is consumed.


Question 14

What is the recommended minimum retention period for hourly capacity data? For daily capacity data?

A) 30 days hourly, 1 year daily
B) 90 days hourly, 3 years daily
C) 7 days hourly, 90 days daily
D) 1 year hourly, 5 years daily

Answer: B Hourly data retained for 90 days provides sufficient granularity for peak pattern analysis and intra-day trend identification. Daily data retained for 3 years provides enough history for year-over-year comparison and meaningful seasonal analysis (which requires at least 2 years). Less than 18 months of daily data produces unreliable seasonal models.


Question 15

Kwame Mensah's rule of thumb is to "plan for the 75th percentile scenario, not the 50th." This bias toward over-provisioning is justified because:

A) IBM requires 25% headroom for warranty purposes
B) The cost of unexpected capacity shortage is 5–10x greater than equivalent over-provisioning
C) WLM requires 25% of capacity for its own overhead
D) Seasonal peaks always exceed forecasts by exactly 25%

Answer: B The cost asymmetry between under-provisioning and over-provisioning justifies planning above the median scenario. A capacity shortage leads to emergency upgrades (2-3x the cost of planned upgrades), SLA violations, batch window failures, and potential regulatory findings. Modest over-provisioning costs extra in monthly charges but avoids these far more expensive consequences.
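The asymmetry argument can be made concrete with a toy expected-cost comparison. Every dollar figure and probability below is an illustrative assumption, not chapter data:

```python
# Toy model of the cost asymmetry. Shortage cost reflects emergency
# upgrades and SLA penalties; over-provisioning cost is the extra
# monthly charge for unused committed capacity. Figures invented.
shortage_cost = 500_000
overprovision_cost = 75_000

# Probability that actual demand exceeds the provisioned level.
p_short_at_p50 = 0.50   # provisioning at the median scenario
p_short_at_p75 = 0.25   # provisioning at the 75th percentile

expected_cost_p50 = p_short_at_p50 * shortage_cost
expected_cost_p75 = (p_short_at_p75 * shortage_cost
                     + overprovision_cost)
```

Even after paying for the extra headroom, the 75th-percentile plan has the lower expected cost under these assumptions, because the shortage penalty dominates.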


Question 16

In the capacity planning annual cycle, business input (meeting with business unit leaders, application teams, and modernization teams) occurs during which phase?

A) Month 1-2 (Data Collection)
B) Month 2-3 (Business Input)
C) Month 3-4 (Modeling)
D) Month 4 (Review and Approval)

Answer: B Business input occurs in Month 2-3 of the annual cycle, after historical data has been collected and analyzed (Month 1-2) but before modeling begins (Month 3-4). This sequencing ensures that the technical trend analysis is ready to inform business discussions, and that business inputs are available before the forecasting models are built.


Question 17

A Tier 1 capacity exception (requiring response within 4 hours) includes all of the following EXCEPT:

A) GP utilization exceeds 90% for any 1-hour period
B) R4HA exceeds the planned Aggressive scenario
C) Monthly R4HA trend deviating from forecast by 3%
D) Any LPAR hits its defined capacity ceiling

Answer: C A 3% monthly R4HA trend deviation is a Tier 3 exception (weekly review). Tier 1 exceptions represent immediate capacity threats: 90%+ utilization for an hour, R4HA exceeding the Aggressive scenario, zIIP utilization above 85%, or hitting the defined capacity ceiling. Tier 1 exceptions require immediate response because they indicate the system is approaching or at its operational limits.


Question 18

When converting batch processing from nightly batch to real-time online, the typical capacity impact is:

A) Total CPU decreases because real-time is more efficient
B) Total CPU increases but the peak shifts from nighttime to daytime
C) Total CPU is unchanged but distributed differently across the day
D) Total CPU decreases because there's no batch overhead

Answer: B Batch-to-online conversion typically increases total CPU because real-time processing has per-transaction overhead (commit frequency, thread management, synchronous I/O) that batch amortizes across many records. However, the peak shifts from the batch window (nighttime) to the online window (daytime). This can reduce the batch R4HA while potentially increasing the online R4HA — a capacity planning trade-off that must be analyzed carefully.


Question 19

The Capacity Impact Assessment (CIA) template at CNB requires all of the following EXCEPT:

A) Volume assumptions and projected growth rate
B) Source code review of all affected COBOL programs
C) R4HA impact analysis including CoD requirements
D) Non-CPU impact (storage, memory, I/O)

Answer: B The CIA template requires change description, affected workloads, volume assumptions, CPU impact estimate (GP and zIIP), non-CPU impact, R4HA impact, and approval signatures. It does not require source code review — the CIA is a capacity assessment, not a code review. The CPU impact estimate may come from measurement, benchmarks, analogies, or vendor estimates, not from code analysis.


Question 20

Diane Okoye at Pinnacle Health tracks "capacity cost per claim" as a metric. This metric is valuable because:

A) It converts a technical measurement into a business efficiency metric
B) It satisfies HIPAA regulatory requirements
C) It replaces the need for MSU-based capacity planning
D) IBM uses it for pricing negotiations

Answer: A "Capacity cost per claim" divides total mainframe cost by claims processed, creating a business-meaningful efficiency metric. This metric allows business leadership to see that mainframe costs are justified by business output — even if total costs are rising, cost per claim may be declining due to efficiency improvements. It bridges the gap between technical capacity metrics and business value, helping justify continued mainframe investment.