Chapter 26 Quiz: Batch Performance at Scale
Question 1
What is the first step in the batch performance optimization priority stack?
A) Optimize COBOL compiler options (OPT level) B) Tune DFSORT parameters (MAINSIZE, HIPRMAX) C) Eliminate unnecessary work (remove jobs, skip steps, filter early) D) Increase buffer counts (BUFNO, BUFNI, BUFND)
Answer: C
Explanation: Priority 1 is always eliminating unnecessary work. The fastest I/O is the one you never issue. The fastest job is the one you don't run. Pinnacle Health discovered 14% of their batch volume was producing unread reports. Removing those jobs cost nothing and recovered 22 minutes from the critical path — more than any tuning could achieve.
Question 2
A batch program shows this performance decomposition: CPU 12%, I/O 68%, DB2 14%, Other 6%. Which optimization should you apply first?
A) Rewrite the COBOL logic for efficiency B) Change the compiler to OPT(2) C) Tune buffers, BLKSIZE, and access methods D) Adjust DB2 commit frequency
Answer: C
Explanation: The program is I/O-bound (68% I/O wait). Buffer tuning, BLKSIZE optimization, and access method selection address I/O directly. Compiler optimization (B) affects the 12% CPU component. DB2 tuning (D) affects the 14% DB2 component. COBOL rewriting (A) also addresses CPU only. Always optimize the dominant component first.
Question 3
What is the optimal BLKSIZE for a sequential file with LRECL=200, RECFM=FB on a 3390 DASD device using half-track blocking?
A) 27998 B) 27800 C) 28000 D) 200
Answer: B
Explanation: The formula is FLOOR(27998 / LRECL) × LRECL = FLOOR(27998 / 200) × 200 = 139 × 200 = 27800. This packs 139 records per block with two blocks per track. 27998 (A) would leave a 198-byte fragment that can't hold another record. 28000 (C) exceeds the half-track limit. 200 (D) is one record per block — catastrophically inefficient.
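The half-track arithmetic generalizes to any fixed record length; a quick sketch, using Python purely as a calculator:

```python
# Largest block that (a) fits in a 3390 half track (27,998 bytes after
# device overhead) and (b) is an exact multiple of the record length.
def half_track_blksize(lrecl: int, half_track: int = 27998) -> int:
    return (half_track // lrecl) * lrecl

print(half_track_blksize(200))  # 27800: 139 records per block
print(half_track_blksize(133))  # 27930: a common print-line LRECL
```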
Question 4
A VSAM KSDS cluster has a 3-level index with 18,600 index records. With BUFNI=1 (default), how many index I/O operations are required per random read?
A) 1 B) 2 C) 3 D) 18,600
Answer: C
Explanation: A 3-level KSDS index requires traversing all three levels to locate a data record: one I/O for the high-level index, one for the intermediate level, and one for the sequence set. With BUFNI=1, only one index CI is cached at a time, so each level requires a physical I/O. Setting BUFNI high enough to cache the full index (here, BUFNI=18600) eliminates all three I/Os once the buffers are populated; even buffering just the top two levels cuts the cost to a single sequence-set I/O per read.
Question 5
Why is DFSORT typically 2-4x faster than a COBOL SORT verb for the same sort operation?
A) DFSORT uses a better sorting algorithm B) DFSORT operates below the access method layer and uses hardware-aware I/O scheduling C) DFSORT runs on zIIP processors exclusively D) DFSORT uses more CPU cycles but finishes faster due to parallelism
Answer: B
Explanation: DFSORT bypasses QSAM/BSAM overhead by reading and writing physical blocks directly. It uses memory-mapped I/O, parallel I/O scheduling, and hardware-aware sort algorithms tuned to the CPU cache hierarchy. While DFSORT can exploit zIIP for some work (C), it doesn't run exclusively on zIIP. The performance advantage comes primarily from the optimized I/O path.
Question 6
What does the INCLUDE statement do in DFSORT?
A) Includes a copybook in the DFSORT control statements B) Filters input records before the sort phase C) Includes additional sort keys D) Includes records from a secondary input file
Answer: B
Explanation: INCLUDE COND filters input records before they enter the sort phase. Records that don't match the condition are excluded entirely — they are not sorted, not written to work datasets, and not written to the output. This is critical for performance: filtering 40% of records before sort reduces sort time by approximately 40%.
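As a sketch, the filtering described above might appear in a sort step's SYSIN like this (the field positions, the C'TX' value, and the sort key are hypothetical, not taken from the chapter):

```jcl
//SYSIN DD *
* Filter BEFORE the sort phase: excluded records are never sorted,
* never written to SORTWK, and never written to SORTOUT.
  INCLUDE COND=(1,2,CH,EQ,C'TX')
  SORT FIELDS=(3,10,CH,A)
/*
```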
Question 7
Which DFSORT parameter has the most impact on sort performance for large files?
A) DYNALLOC (dynamic work dataset allocation) B) FILSZ (estimated file size) C) MAINSIZE=MAX (maximum memory allocation) D) EXPOLD (limit on DFSORT's use of old expanded storage)
Answer: C
Explanation: MAINSIZE controls how much memory DFSORT uses, which directly determines the number of merge passes required. More memory means fewer merge passes, and each eliminated merge pass reduces elapsed time significantly. Going from 3 merge passes to 1 can cut elapsed time in half. DYNALLOC, FILSZ, and EXPOLD are helpful but have much smaller impact.
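A hedged sketch of how these options might be coded together on an OPTION statement (the FILSZ estimate, DYNALLOC values, and sort key are illustrative assumptions, not from the chapter):

```jcl
//SYSIN DD *
* Let DFSORT take as much storage as the region allows; more main
* storage means fewer intermediate merge passes over the work files.
  OPTION MAINSIZE=MAX,FILSZ=E50000000,DYNALLOC=(SYSDA,8)
  SORT FIELDS=(1,8,CH,A)
/*
```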
Question 8
What happens when you specify FASTSRT as a COBOL compiler option?
A) The COBOL runtime sorts records using a faster algorithm B) The COBOL runtime delegates SORT I/O directly to DFSORT, bypassing COBOL file handling C) SORT operations are automatically parallelized across multiple CPs D) The SORT verb is replaced with an inline sort at compilation time
Answer: B
Explanation: FASTSRT causes the COBOL runtime to hand off SORT input and output I/O to DFSORT's optimized I/O path instead of using COBOL's internal QSAM-based file handling. This eliminates double buffering and format conversion overhead, typically improving SORT performance by 30-50%. It requires USING/GIVING (not INPUT/OUTPUT PROCEDURE).
Question 9
Which of the following DISQUALIFIES a COBOL SORT from FASTSRT optimization?
A) Using the ON ASCENDING KEY clause B) Having more than one sort key C) Specifying INPUT PROCEDURE instead of USING D) Sorting more than 10 million records
Answer: C
Explanation: INPUT PROCEDURE (and OUTPUT PROCEDURE) prevent FASTSRT because DFSORT cannot intercept procedural I/O — the COBOL program is doing custom processing on records as they enter or leave the sort. FASTSRT requires USING and GIVING, which allow DFSORT to handle all file I/O directly. Multiple sort keys (B) and large record counts (D) are fully supported.
Question 10
What is the recommended commit frequency for a DB2 batch program processing 20 million records at CNB?
A) Every record (maximum safety) B) Every 100 records C) Every 1,000 to 5,000 records D) Never (one commit at end)
Answer: C
Explanation: Committing every 1,000 to 5,000 records balances throughput against recovery risk and lock escalation. Every-record commits (A) add 2-5ms of overhead per record, which at 20 million records is 40,000-100,000 seconds of pure commit overhead. Never committing (D) risks lock escalation (the held-lock count exceeds LOCKMAX, and DB2 escalates to a tablespace-level lock) and means the entire run must be reprocessed if the program abends.
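The overhead figures above are straight multiplication; a small sketch, with the record count and per-commit costs taken from the explanation:

```python
# Total commit overhead for a batch run, given a per-commit cost in ms.
def commit_overhead_seconds(records: int, interval: int, ms_per_commit: float) -> float:
    return (records // interval) * ms_per_commit / 1000.0

# Every-record commits at 2-5 ms each across 20 million records:
print(commit_overhead_seconds(20_000_000, 1, 2))      # 40000.0 seconds
print(commit_overhead_seconds(20_000_000, 1, 5))      # 100000.0 seconds
# Committing every 5,000 records instead:
print(commit_overhead_seconds(20_000_000, 5_000, 5))  # 20.0 seconds
```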
Question 11
A batch program commits every 500 records. The installation LOCKMAX is 10,000. If each record update acquires one page lock, will lock escalation occur?
A) Yes — 500 locks per commit exceeds LOCKMAX B) No — 500 locks are released at each commit, well below LOCKMAX C) It depends on whether row locking or page locking is configured D) Lock escalation is not related to commit frequency
Answer: B
Explanation: With commits every 500 records, a maximum of 500 page locks are held at any time (assuming one lock per record). At each COMMIT, all 500 locks are released. Since 500 is well below the LOCKMAX of 10,000, lock escalation will not occur. If commits were every 15,000 records, the 10,000 lock threshold would be exceeded between commits.
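The escalation check reduces to comparing peak held locks against LOCKMAX; a minimal sketch of that reasoning:

```python
# DB2 releases page/row locks at COMMIT, so what matters for lock
# escalation is the peak count held BETWEEN commits, not the run total.
def will_escalate(commit_interval: int, locks_per_record: int, lockmax: int) -> bool:
    return commit_interval * locks_per_record > lockmax

print(will_escalate(500, 1, 10_000))     # False: 500 locks, well under LOCKMAX
print(will_escalate(15_000, 1, 10_000))  # True: threshold crossed between commits
```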
Question 12
What does FOR FETCH ONLY on a DB2 cursor enable in a batch program?
A) Faster fetches by skipping data validation B) Sequential prefetch and avoidance of intent exclusive locks C) Automatic parallel query execution D) Reduced logging for read operations
Answer: B
Explanation: FOR FETCH ONLY tells DB2 the cursor won't update rows, which enables sequential prefetch without lock compatibility concerns and avoids acquiring intent exclusive (IX) locks — only intent share (IS) locks are needed. This is critical for batch read cursors processing millions of rows, as IX locks block other updaters.
Question 13
Which SMF record type provides the definitive CPU and elapsed time data for a batch job step?
A) SMF Type 14/15 (Dataset activity) B) SMF Type 30 (Job/step accounting) C) SMF Type 16 (DFSORT statistics) D) SMF Type 101 (DB2 accounting)
Answer: B
Explanation: SMF Type 30 (subtype 4 for step termination) contains CPU time (TCB + SRB), elapsed time, EXCP counts, I/O connect time, page-in counts, and other resource consumption metrics for the complete job step. Types 14/15 provide dataset-level detail, Type 16 provides DFSORT-specific data, and Type 101 provides DB2-specific data — all valuable but narrower in scope.
Question 14
You see high "Other" wait time (40%+) in a batch job's performance decomposition. What is the most likely cause?
A) Slow COBOL logic B) Insufficient buffer allocation C) ENQ contention, WLM delays, or GRS serialization D) DB2 lock waits
Answer: C
Explanation: "Other" wait time represents everything that isn't CPU, I/O, or DB2. The most common causes are ENQ contention (two jobs competing for DISP=OLD on the same dataset), WLM delays (workload manager throttling batch for online priority), GRS ring delays (global resource serialization in a sysplex), and operator reply waits. These are systemic issues, not application issues.
Question 15
What is Hiperbatch?
A) A faster version of the BSAM access method B) A data caching facility that keeps sequential dataset blocks in processor storage for reuse by multiple jobs C) A parallel sort facility that uses Hiperspace for sort work D) A COBOL compiler option that generates hypervisor-optimized code
Answer: B
Explanation: Hiperbatch, managed through the Data Lookaside Facility (DLF), caches sequential dataset blocks in Hiperspaces. When the first job reads a dataset, the blocks are cached. Subsequent jobs reading the same dataset find the blocks in cache and avoid DASD I/O entirely. At CNB, Hiperbatch for five frequently-read datasets eliminated 2.8 billion EXCPs per month.
Question 16
Which of the following workloads is zIIP-eligible in a COBOL batch program?
A) COBOL PERFORM loops and arithmetic B) QSAM READ and WRITE operations C) DB2 SQL execution D) VSAM random reads
Answer: C
Explanation: Of the four, only DB2 SQL execution can be zIIP-eligible: SQL that reaches DB2 through the DRDA protocol qualifies for offload, and the child tasks of parallel queries are also eligible. COBOL compute logic (A), QSAM I/O (B), and VSAM I/O (D) all run on General Purpose (GP) processors. For a batch program that's 60% DB2-bound, up to 60% of the CPU cost can shift to zIIP.
Question 17
The Enterprise COBOL compiler option OPT(2) provides the best runtime performance. Why would you NOT use it for all programs?
A) OPT(2) produces larger object modules B) OPT(2) compromises source-level debugging accuracy C) OPT(2) is not supported on z/OS 2.5+ D) OPT(2) changes the program's output
Answer: B
Explanation: OPT(2) performs aggressive optimizations including code motion, loop optimization, PERFORM inlining, and dead code elimination. These optimizations move and rearrange generated code, making source-level debugging (step-by-step execution, variable inspection) unreliable. The program's output is unchanged (D is wrong), and object module size changes little either way, so size is not the real drawback (A is wrong). The standard practice is OPT(1) for development/test and OPT(2) for production.
Question 18
A sequential batch file is currently defined with BLKSIZE=4096 and LRECL=200. After changing to BLKSIZE=27800, approximately how much will the EXCP count decrease?
A) 10% B) 50% C) 85% D) 99%
Answer: C
Explanation: With BLKSIZE=4096 and LRECL=200, each block holds 20 records. With BLKSIZE=27800, each block holds 139 records. The ratio is 20/139 = 0.144, meaning BLKSIZE=27800 requires only 14.4% of the EXCPs — an 85.6% reduction. This is why BLKSIZE optimization is one of the highest-ROI tuning actions for I/O-bound batch.
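The reduction follows directly from records per block; as a sketch:

```python
# EXCP reduction from re-blocking a fixed-length sequential file.
def excp_reduction_pct(lrecl: int, old_blksize: int, new_blksize: int) -> float:
    old_recs = old_blksize // lrecl   # records per block before: 20
    new_recs = new_blksize // lrecl   # records per block after: 139
    return (1 - old_recs / new_recs) * 100

print(round(excp_reduction_pct(200, 4096, 27800), 1))  # 85.6
```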
Question 19
At CNB, the batch performance project achieved a 39.4% reduction in critical path elapsed time. Which optimization layer contributed the most to EXCP reduction?
A) COBOL compiler optimization (OPT levels) B) DB2 commit frequency tuning C) Buffer tuning and Hiperbatch D) zIIP offload
Answer: C
Explanation: The EXCP count dropped from 48.2M to 18.7M — a 61.2% reduction. This came primarily from buffer tuning (Section 26.2) increasing BUFNO/BUFNI/BUFND to reduce I/O operations, and Hiperbatch (Section 26.7) caching frequently-read datasets in data spaces. Compiler optimization (A) reduces CPU, not EXCP. DB2 tuning (B) affects DB2 wait time. zIIP offload (D) shifts CPU between processor types without reducing I/O.
Question 20
Why does Lisa Tran mandate SSRANGE (subscript range checking) in production batch even though it costs ~2% CPU overhead?
A) It's required by the COBOL language standard B) A subscript error in the GL posting job would corrupt the general ledger — the risk far outweighs the 2% cost C) SSRANGE is needed for OPT(2) to work correctly D) Without SSRANGE, DB2 calls fail
Answer: B
Explanation: SSRANGE adds runtime bounds checking for table subscripts and reference modification. The 2% CPU increase on a 10-minute job is 12 seconds. A subscript error that goes undetected in a 50-million-record batch run could corrupt the entire dataset — resulting in a Sev-1 incident, regulatory filing delays, and a 3 AM phone call. The risk-reward calculation overwhelmingly favors keeping SSRANGE on in production.
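The cost side of the trade-off is tiny, which is the point; a one-line sketch using the ~2% overhead figure from the explanation above:

```python
# Approximate SSRANGE cost for a job, given its elapsed time in seconds
# and the ~2% overhead figure cited in the explanation.
def ssrange_cost_seconds(job_seconds: float, overhead_pct: float = 2.0) -> float:
    return job_seconds * overhead_pct / 100.0

print(ssrange_cost_seconds(600))  # 12.0 seconds on a 10-minute job
```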