Chapter 26 Exercises: Batch Performance at Scale


Section 26.1 — The Performance Mindset

Exercise 1: Performance Decomposition

Given the following SMF Type 30 data for a batch job step:

SMF30CPT (CPU time):           4 min 12 sec
SMF30AET (Elapsed time):       28 min 45 sec
SMF30TEP (I/O connect):        2 min 18 sec
SMF30TIS (I/O disconnect):     16 min 33 sec
SMF30_DB2_CLASS2:              3 min 48 sec

a) Calculate the performance decomposition (CPU%, I/O%, DB2%, Other%).
b) What performance profile does this job fit?
c) Which optimization strategy from the priority stack (Section 26.1) would you apply first?
d) If you could reduce the I/O wait by 50%, what would the new elapsed time be (assuming other components remain unchanged)?
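If you want to sanity-check your decomposition arithmetic, a small Python helper captures the calculation. The function names and the sample inputs below are illustrative, not the SMF values from the exercise:

```python
def to_seconds(minutes, seconds):
    """Convert a minutes/seconds pair (as reported above) to total seconds."""
    return minutes * 60 + seconds

def decompose(cpu, io_connect, io_disconnect, db2, elapsed):
    """Return each component as a percentage of elapsed time.
    'Other' is whatever the measured components do not account for
    (initiation delay, paging, WLM queue time, and so on)."""
    other = elapsed - (cpu + io_connect + io_disconnect + db2)
    parts = {"CPU": cpu, "I/O connect": io_connect,
             "I/O disconnect": io_disconnect, "DB2": db2, "Other": other}
    return {name: round(100.0 * t / elapsed, 1) for name, t in parts.items()}

# Illustrative numbers only (not the exercise's data):
pcts = decompose(cpu=300, io_connect=120, io_disconnect=600,
                 db2=150, elapsed=1200)
```

The percentages always sum to 100 because "Other" is computed as the residual.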

Exercise 2: Priority Stack Application

A shop has five critical-path batch jobs with the following profiles:

Job      Elapsed   CPU%   I/O%   DB2%   Other%
──────────────────────────────────────────────
JOB-A    45 min    15     68     10      7
JOB-B    30 min    55     22     15      8
JOB-C    25 min    12      8     72      8
JOB-D    20 min    18     14     11     57
JOB-E    35 min    10     75      5     10

a) Classify each job's performance profile.
b) Rank the jobs by expected optimization ROI (which should be tuned first?).
c) For JOB-D, what does the 57% "Other" suggest? List three possible causes.
d) If the total critical path is 155 minutes (all five jobs serial), what's a realistic target after optimization?

Exercise 3: Unnecessary Work Elimination

Review this batch job stream:

Step 1: Extract all transactions from CICS journal (50M records)
Step 2: Sort transactions by account number
Step 3: Validate all transactions (check codes, ranges, cross-references)
Step 4: Sort validated transactions by transaction type
Step 5: Create debit extract (filter debits from sorted file)
Step 6: Create credit extract (filter credits from sorted file)
Step 7: Sort debits by amount (descending)
Step 8: Sort credits by amount (descending)
Step 9: Merge debit and credit reports with headers/footers

a) Identify at least three steps that could be eliminated or combined.
b) Rewrite the job stream using DFSORT/ICETOOL to minimize the number of passes.
c) Estimate the elapsed time reduction (qualitative — "significant," "moderate," "minimal" — with justification).


Section 26.2 — I/O Optimization

Exercise 4: BLKSIZE Calculation

Calculate the optimal half-track BLKSIZE for each of the following:

LRECL   RECFM   Optimal BLKSIZE   Records/Block   Track Utilization
───────────────────────────────────────────────────────────────────
   80   FB          _____              ___              ___%
  133   FBA         _____              ___              ___%
  250   FB          _____              ___              ___%
  400   FB          _____              ___              ___%
 1024   FB          _____              ___              ___%

Show your work using the formula: Optimal BLKSIZE = FLOOR(27998 / LRECL) × LRECL

For each, calculate track utilization as: (BLKSIZE × 2) / 56664 × 100
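Both formulas are easy to script. This sketch assumes a 3390 device with the constants given above; the function name is my own:

```python
TRACK_CAPACITY = 56664   # usable bytes per 3390 track (two half-track blocks)
HALF_TRACK = 27998       # largest half-track block on a 3390

def half_track_blksize(lrecl):
    """Largest BLKSIZE <= 27998 that is an exact multiple of LRECL,
    plus records per block and percent track utilization."""
    recs_per_block = HALF_TRACK // lrecl          # FLOOR(27998 / LRECL)
    blksize = recs_per_block * lrecl
    utilization = blksize * 2 / TRACK_CAPACITY * 100
    return blksize, recs_per_block, round(utilization, 1)
```

Running it for each LRECL in the table lets you check your hand-worked answers.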

Exercise 5: Buffer Impact Analysis

A sequential file has:

  - 100 million records
  - LRECL=200, BLKSIZE=27800 (139 records/block)
  - Total blocks: 100,000,000 / 139 = 719,425 blocks (rounded up)

With QSAM, each EXCP reads one block (simplified). Calculate:

a) EXCP count with BUFNO=5 (assuming 1 EXCP per block, no read-ahead optimization)
b) EXCP count with BUFNO=20 (assuming read-ahead reduces EXCPs by grouping channel programs: EXCP ≈ (total blocks / BUFNO) × adjustment factor of 3.5)
c) If each EXCP takes 0.15ms average (cache hit), what's the total I/O time difference?
d) What's the memory cost of increasing from BUFNO=5 to BUFNO=20?
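One reading of the simplified EXCP model in parts (a) and (b) can be expressed as a short Python sketch. The function names and the parenthesization of the adjustment-factor formula are my assumptions:

```python
def blocks_needed(records, recs_per_block):
    """Blocks to hold the file, rounding up for a final partial block."""
    return -(-records // recs_per_block)   # ceiling division

def excp_estimate(blocks, bufno):
    """Simplified model from the exercise: BUFNO=5 issues one EXCP per
    block; larger BUFNO groups channel programs, approximated here as
    (blocks / BUFNO) * 3.5 (my reading of the adjustment factor)."""
    if bufno <= 5:
        return blocks
    return round(blocks / bufno * 3.5)
```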

Exercise 6: VSAM Buffer Optimization

A VSAM KSDS cluster has:

  - 8 million records
  - CI size (data): 4096 bytes
  - CI size (index): 2048 bytes
  - Index levels: 3
  - Total index records: 18,600
  - Average random reads per batch run: 8 million (full file scan via key)

Calculate:

a) Index I/Os per random read with BUFNI=1 (default)
b) Total index I/Os for the full batch run with BUFNI=1
c) Memory required to cache the entire index (BUFNI=18600)
d) Total index I/Os eliminated with full index caching
e) At 0.15ms per EXCP (cache hit), how much elapsed time does full index caching save?
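The index-I/O arithmetic follows directly from the cluster attributes. A hedged sketch, treating a fully buffered index as eliminating index EXCPs after the initial load (a simplification; the helpers are my own):

```python
def index_ios(random_reads, index_levels, fully_cached):
    """Index I/Os for a run of keyed reads. With BUFNI=1 every read
    walks all index levels; with the whole index buffered we treat
    index I/O as zero after the initial load (a simplification)."""
    return 0 if fully_cached else random_reads * index_levels

def index_cache_bytes(index_records, index_ci_size):
    """Approximate virtual storage needed to buffer the entire index."""
    return index_records * index_ci_size
```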

Exercise 7: DISP Contention Analysis

Three batch jobs run concurrently:

JOB-X:
  //INPUT1  DD DSN=PROD.TRANS.FILE,DISP=SHR
  //OUTPUT1 DD DSN=PROD.DEBIT.EXTRACT,DISP=(NEW,CATLG)
  //LOOKUP  DD DSN=PROD.ACCT.MASTER,DISP=SHR

JOB-Y:
  //INPUT1  DD DSN=PROD.TRANS.FILE,DISP=SHR
  //OUTPUT1 DD DSN=PROD.CREDIT.EXTRACT,DISP=(NEW,CATLG)
  //UPDATE  DD DSN=PROD.ACCT.MASTER,DISP=OLD

JOB-Z:
  //INPUT1  DD DSN=PROD.ACCT.MASTER,DISP=SHR
  //OUTPUT1 DD DSN=PROD.ACCT.REPORT,DISP=(NEW,CATLG)

a) Can JOB-X and JOB-Y run concurrently? Why or why not?
b) Can JOB-X and JOB-Z run concurrently? Why or why not?
c) Can JOB-Y and JOB-Z run concurrently? Why or why not?
d) What's the optimal scheduling order to minimize total elapsed time?


Section 26.3 — DFSORT Mastery

Exercise 8: DFSORT Control Statement Writing

Write DFSORT control statements for each operation:

a) Sort a file by positions 1-10 (character, ascending) and 15-22 (packed decimal, descending). Include only records where position 85 = 'A' or 'B'.

b) Copy a file (no sort) but reformat records: extract positions 1-10, 25-32, and 50-69. Add a literal 'EXTRACT' starting at position 41 of the output.

c) Sort a file and produce summary records with one output record per unique key (positions 1-10), showing the count of input records and the total of the packed decimal field at positions 15-22.

Exercise 9: ICETOOL Multi-Operation Design

You have a transaction file (LRECL=200, RECFM=FB) with:

  Position 1-10:    Account number (CH)
  Position 11-18:   Transaction amount (PD, S9(13)V99)
  Position 19:      Transaction type (C = credit, D = debit)
  Position 20-27:   Transaction date (CH, YYYYMMDD)
  Position 28-47:   Description (CH)

Write ICETOOL control statements to produce three outputs in a single job step:

  1. Debits over $10,000, sorted by amount descending
  2. Credits over $50,000, sorted by account number
  3. Daily summary: one record per date with total debits, total credits, and net

Exercise 10: DFSORT Performance Tuning

Given:

  - Input file: 80 million records, LRECL=500
  - Region size: 512 MB
  - Available sort work DASD: 5 volumes

a) Calculate the theoretical number of merge passes with MAINSIZE=64M.
b) Calculate the theoretical number of merge passes with MAINSIZE=MAX (assume 400 MB available after LE overhead).
c) Estimate the elapsed time difference between (a) and (b), assuming each merge pass adds approximately 40% of the initial sort time.
d) How many DYNALLOC work datasets should you specify, and why?

Exercise 11: COBOL SORT to DFSORT Conversion

Convert this COBOL SORT with INPUT PROCEDURE to standalone DFSORT:

       SORT SORT-WORK
           ON ASCENDING KEY SW-ACCT-NUM
           INPUT PROCEDURE IS 1000-FILTER-INPUT
           GIVING SORTED-OUTPUT-FILE.

       1000-FILTER-INPUT SECTION.
           OPEN INPUT TRANSACTION-FILE.
           PERFORM UNTIL WS-EOF
               READ TRANSACTION-FILE
                   AT END SET WS-EOF TO TRUE
               END-READ
               IF NOT WS-EOF
                   IF (TR-TRANS-TYPE = 'P' OR 'S')
                   AND TR-AMOUNT > ZERO
                       MOVE TR-RECORD TO SW-RECORD
                       RELEASE SW-RECORD
                   END-IF
               END-IF
           END-PERFORM.
           CLOSE TRANSACTION-FILE.

a) Write the equivalent DFSORT control statements.
b) What COBOL compiler option change enables the performance improvement?
c) If the original COBOL SORT took 14 minutes, estimate the DFSORT elapsed time.


Section 26.4 — Compiler Optimization

Exercise 12: OPT Level Decision Matrix

For each scenario, recommend an OPT level and justify:

a) A brand-new batch program in unit testing
b) A production batch program on the critical path (12,000 lines, compute-intensive)
c) A production batch program being debugged for an intermittent data corruption issue
d) A batch program that runs once a month for 3 minutes
e) A batch program that processes 200 million records nightly

Exercise 13: FASTSRT Qualification Analysis

Examine each COBOL SORT and determine if it qualifies for FASTSRT:

a)

       SORT SORT-FILE ON ASCENDING KEY SF-KEY
           USING INPUT-FILE
           GIVING OUTPUT-FILE.

b)

       SORT SORT-FILE ON ASCENDING KEY SF-KEY
           INPUT PROCEDURE IS FILTER-RECORDS
           GIVING OUTPUT-FILE.

c)

       OPEN INPUT INPUT-FILE.
       SORT SORT-FILE ON ASCENDING KEY SF-KEY
           USING INPUT-FILE
           GIVING OUTPUT-FILE.

d)

       SORT SORT-FILE ON ASCENDING KEY SF-KEY
           USING INPUT-FILE
           OUTPUT PROCEDURE IS ADD-TRAILERS.

For each that does NOT qualify, explain what change would enable FASTSRT qualification.

Exercise 14: Generated Code Analysis

Given this COBOL inner loop executed 50 million times:

       PERFORM VARYING WS-IDX FROM 1 BY 1
           UNTIL WS-IDX > WS-RECORD-COUNT
           MOVE WS-INPUT-AMOUNT(WS-IDX) TO WS-WORK-AMT
           IF WS-WORK-AMT > WS-THRESHOLD
               ADD WS-WORK-AMT TO WS-TOTAL-OVER
               ADD 1 TO WS-COUNT-OVER
           END-IF
           ADD WS-WORK-AMT TO WS-GRAND-TOTAL
       END-PERFORM.

With definitions:

       01 WS-IDX               PIC S9(8) COMP.
       01 WS-RECORD-COUNT      PIC S9(8) COMP-3.
       01 WS-WORK-AMT          PIC S9(9)V99 COMP-3.
       01 WS-THRESHOLD         PIC S9(9)V99 COMP-3.
       01 WS-TOTAL-OVER        PIC S9(13)V99 COMP-3.
       01 WS-COUNT-OVER        PIC S9(8) COMP.
       01 WS-GRAND-TOTAL       PIC S9(13)V99 COMP-3.
       01 WS-INPUT-TABLE.
          05 WS-INPUT-AMOUNT   PIC S9(9)V99 COMP-3
                               OCCURS 50000000 TIMES.

a) Identify the data format mismatch that causes conversion overhead in the loop control.
b) How would you fix it without changing the algorithm?
c) With OPT(2), which operations would the compiler likely optimize?
d) Estimate the CPU time impact of fixing the format mismatch (assume 0.02 microseconds per conversion, 50 million iterations).


Section 26.5 — DB2 Batch Performance

Exercise 15: Commit Frequency Analysis

A batch program updates 20 million rows in a DB2 table. Each update takes 0.3ms of DB2 time and 0.02ms of CPU time. A COMMIT takes 3ms.

Calculate:

Commit Strategy   Number of Commits   DB2 Time   Commit Time   Total DB2+Commit   Locks at Peak
───────────────────────────────────────────────────────────────────────────────────────────────
Every record            ___              ___          ___             ___                1
Every 100               ___              ___          ___             ___              100
Every 1,000             ___              ___          ___             ___            1,000
Every 5,000             ___              ___          ___             ___            5,000
Every 50,000            ___              ___          ___             ___           50,000

If LOCKMAX is set to 10,000, which commit frequencies will cause lock escalation?
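The table columns follow mechanically from the per-operation costs. A Python sketch of the model (the names are illustrative, and the peak-lock figure assumes locks accumulate from one COMMIT to the next and are then released):

```python
def commit_cost(rows, ms_per_update, ms_per_commit, commit_interval):
    """Totals for one commit-frequency choice: number of COMMITs,
    combined DB2 + commit time in seconds, and peak locks held."""
    commits = -(-rows // commit_interval)        # ceiling division
    total_ms = rows * ms_per_update + commits * ms_per_commit
    return {"commits": commits,
            "total_sec": total_ms / 1000.0,
            "peak_locks": min(rows, commit_interval)}
```

Run it once per row of the table to fill in the blanks, then compare peak_locks against LOCKMAX.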

Exercise 16: Prefetch Verification

You have EXPLAIN output for a batch cursor:

ACCESS TYPE:  I (Index scan)
INDEX NAME:   XACT_DATE_IX
MATCHING COLS: 1
PREFETCH:     S (Sequential)
SORT NEEDED:  N
LOCK MODE:    IS (Intent Share)

a) Is this access path appropriate for a batch program reading 5 million rows? Why or why not?
b) What does PREFETCH=S indicate?
c) If you changed the query to add FOR FETCH ONLY, what effect would it have on LOCK MODE?
d) What access type would you expect for a full tablespace scan, and when would that be preferable to the index scan shown?

Exercise 17: Partition Parallelism Design

A TRANSACTIONS table is partitioned by TRANS_DATE (one partition per month, 12 partitions for the year). A batch query summarizes all transactions for the year.

a) What is the maximum degree of parallelism DB2 can apply?
b) If the query runs in 120 minutes serial, estimate elapsed time with DEGREE(ANY) and 6 available CPs.
c) What BIND parameter enables parallelism?
d) Why would you NOT use DEGREE(ANY) for a CICS program accessing the same table?


Section 26.6 — Performance Analysis

Exercise 18: SMF Data Interpretation

Given these SMF Type 30 fields for three consecutive runs of the same batch job:

Run     CPU(sec)  Elapsed(sec)  EXCP      DB2_Class2(sec)  Page-ins
─────────────────────────────────────────────────────────────────────
Mon     312       1,845         892,000   488              12
Tue     315       2,410         891,500   492              8,340
Wed     308       1,822         893,200   485              15

a) Tuesday's elapsed time is 30% higher than Monday and Wednesday. CPU, EXCP, and DB2 are stable. What happened?
b) What does the page-in count of 8,340 on Tuesday tell you?
c) What RMF report would you check to confirm your hypothesis?
d) What corrective action would you recommend?

Exercise 19: Trend Analysis

Weekly SMF data for a critical-path batch job over 8 weeks:

Week   Elapsed(min)  CPU(min)  EXCP(K)   Records(M)
─────────────────────────────────────────────────────
1      42.0          6.2       1,240     48.2
2      43.1          6.4       1,275     49.5
3      44.5          6.6       1,312     50.8
4      45.2          6.7       1,340     51.9
5      46.8          6.9       1,382     53.4
6      47.9          7.1       1,418     54.8
7      49.3          7.3       1,460     56.1
8      50.5          7.5       1,498     57.6

a) Calculate the weekly growth rate for elapsed time, CPU, EXCP, and records.
b) Is elapsed time growing faster than, slower than, or proportional to record volume?
c) At this rate, when will the job exceed 60 minutes (the limit before it delays the next critical-path job)?
d) What's the volume elasticity (% change in elapsed per % change in records)?
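For parts (a) and (d), the growth-rate and elasticity calculations can be sketched as below. The function names are my own, and the series in the tests are made-up values, not the job's data:

```python
def weekly_growth_rate(series):
    """Average week-over-week growth rate, as a fraction."""
    rates = [(b - a) / a for a, b in zip(series, series[1:])]
    return sum(rates) / len(rates)

def elasticity(dependent, volume):
    """Percent change in the dependent metric per percent change in
    volume, comparing the first and last observations."""
    dep_pct = (dependent[-1] - dependent[0]) / dependent[0]
    vol_pct = (volume[-1] - volume[0]) / volume[0]
    return dep_pct / vol_pct
```

Feed each SMF column from the table into these helpers to answer (a), (b), and (d).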


Section 26.7 — Advanced Techniques

Exercise 20: Hiperbatch ROI Analysis

A batch window has these datasets read by multiple jobs:

Dataset              Size(GB)  Read Count  EXCP per Read
────────────────────────────────────────────────────────
PROD.EOD.TRANS       12.5      6           4,500,000
PROD.ACCT.MASTER     3.2       8           1,150,000
PROD.RATE.TABLES     0.4       15          145,000
PROD.CODE.TABLES     0.1       22          36,000

a) Calculate total EXCP without Hiperbatch (sum of EXCP per read × read count for each dataset).
b) Calculate total EXCP with Hiperbatch (first read from DASD, subsequent reads from cache).
c) If average EXCP time is 0.15ms (cache hit) and 3ms (DASD miss), what's the elapsed time saved?
d) How much data space memory is required (approximate: sum of dataset sizes)?
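The model in parts (a) through (c) can be sketched in Python, using the exercise's simplifying assumption that only the first read of each dataset goes to DASD. The dataset name and figures in the test are hypothetical:

```python
def hiperbatch_savings(datasets, ms_miss=3.0, ms_hit=0.15):
    """EXCPs and elapsed time with and without Hiperbatch, assuming
    only the first read of each dataset is satisfied from DASD and
    every later read comes from the data space at cache-hit speed.
    `datasets` maps name -> (read_count, excp_per_read)."""
    without = sum(reads * excp for reads, excp in datasets.values())
    first_reads = sum(excp for _, excp in datasets.values())
    cached = without - first_reads
    ms_without = without * ms_miss
    ms_with = first_reads * ms_miss + cached * ms_hit
    return {"excp_without": without,
            "excp_with": first_reads,
            "saved_sec": (ms_without - ms_with) / 1000.0}
```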

Exercise 21: zIIP Offload Analysis

A batch job has this profile:

Total CPU time:     45 min (GP)
  COBOL compute:    12 min  (not zIIP-eligible)
  QSAM I/O:          8 min  (not zIIP-eligible)
  DB2 SQL:          22 min  (zIIP-eligible)
  XML processing:    3 min  (zIIP-eligible)

a) What is the maximum zIIP offload percentage?
b) If zIIP offload is 100% efficient, what is the remaining GP CPU time?
c) If the shop pays $1,200/MSU/month and the batch job consumes 15 MSU, what is the monthly cost savings from zIIP offload?
d) At $150,000 per zIIP engine, what is the payback period if this is the only batch job benefiting?
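The offload arithmetic in parts (a) through (c) reduces to a few ratios. A hedged sketch with made-up inputs (real MSU savings depend on the shop's software pricing model, which this ignores):

```python
def ziip_offload(eligible_min, total_min, msu, dollars_per_msu):
    """Maximum offload fraction, remaining GP CPU minutes, and a rough
    monthly saving if eligible work moves to zIIP at 100% efficiency."""
    fraction = eligible_min / total_min
    remaining_gp = total_min - eligible_min
    monthly_saving = msu * fraction * dollars_per_msu
    return fraction, remaining_gp, monthly_saving
```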

Exercise 22: Combined Optimization Scenario

A batch job has the following baseline:

Elapsed: 55 minutes
  CPU:    8 min (15%) — COBOL compute, OPT(0)
  I/O:   35 min (64%) — Sequential reads, BUFNO=5, BLKSIZE=4096
  DB2:    9 min (16%) — Commit every record, no prefetch
  Other:  3 min  (5%) — WLM delays

Apply optimizations from each section and estimate the new elapsed time:

a) I/O optimization: Change BLKSIZE to 27800 (LRECL=200) and BUFNO=20. Assume EXCP reduction of 75% and I/O time reduction of 60%.
b) Compiler optimization: Change to OPT(2). Assume CPU reduction of 30%.
c) DB2 optimization: Change commit to every 5,000 records. Assume DB2 time reduction of 40%.
d) Calculate the new elapsed time and total improvement percentage.
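One way to organize part (d): treat each component independently and apply the assumed fractional reductions, leaving untouched components as they are. A minimal sketch (the function name is my own):

```python
def combined_elapsed(components, reductions):
    """New elapsed time after applying a fractional reduction to each
    named component independently; components without an entry in
    `reductions` are left unchanged."""
    return sum(minutes * (1 - reductions.get(name, 0.0))
               for name, minutes in components.items())
```

Plug in the baseline minutes and the assumed reduction fractions from (a) through (c) to get the new total.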


Integration Exercises

Exercise 23: Performance Audit

You're asked to audit a batch program with this JCL:

//STEP01   EXEC PGM=BATCHPGM,REGION=64M
//STEPLIB  DD DSN=PROD.LOADLIB,DISP=SHR
//INPUT1   DD DSN=PROD.DAILY.TRANS,DISP=SHR
//INPUT2   DD DSN=PROD.ACCT.MASTER,DISP=SHR
//OUTPUT1  DD DSN=PROD.DAILY.REPORT,DISP=(NEW,CATLG),
//            SPACE=(TRK,(100,10)),
//            DCB=(RECFM=FB,LRECL=133,BLKSIZE=133)
//SORTWORK DD DSN=PROD.SORT.WORK,DISP=(NEW,DELETE),
//            SPACE=(CYL,(50,10)),UNIT=SYSDA
//SYSOUT   DD SYSOUT=*

Identify at least six performance problems and provide the corrected JCL for each.

Exercise 24: End-to-End Optimization Plan

Given this five-step batch job stream on the critical path:

Step 1: COBOL extract (read journal, write flat file)     — 22 min, I/O-bound
Step 2: COBOL sort (SORT verb, INPUT PROCEDURE filter)    — 18 min, I/O-bound
Step 3: COBOL validate (read sorted, lookup VSAM, write)  — 35 min, mixed
Step 4: COBOL post (read validated, update DB2)           — 28 min, DB2-bound
Step 5: COBOL report (read DB2, format, write report)     — 15 min, I/O-bound

Total critical path contribution: 118 minutes

Design a complete optimization plan using techniques from every section of this chapter:

a) For each step, identify the primary optimization technique.
b) Estimate the improved elapsed time for each step.
c) Identify steps that could be combined or replaced with DFSORT.
d) Calculate the expected total elapsed time after optimization.
e) What measurements would you take to verify the improvements?

Exercise 25: CNB Scenario — New Business Requirement

CNB is launching a new product that will add 20 million additional transactions per day (a 40% increase over the current 50 million). The new transactions have the same record layout as existing transactions.

Using the optimized batch window from Section 26.7 (188-minute critical path, 162-minute margin):

a) Estimate the impact on each critical-path job (use volume elasticity of 0.85).
b) Calculate the new critical path duration.
c) Will the batch window still complete within the 375-minute effective window?
d) What optimization would you apply first if the answer to (c) is "barely" or "no"?
e) Design a monitoring plan to track the volume growth impact over the first 90 days.