Case Study 1: Optimizing a Slow End-of-Day Batch Cycle
Background
Sovereign State Bancorp (SSB) operates a nightly end-of-day (EOD) batch cycle that must complete within a 6-hour window between 10:00 PM and 4:00 AM. The batch cycle consists of 14 job steps that perform transaction posting, interest accrual, fee assessment, statement generation, regulatory reporting, and master file backup. For the past 18 months, the cycle has been creeping steadily longer. What once completed in 4 hours and 12 minutes now takes 5 hours and 48 minutes. At the current rate of growth, the cycle will breach the 6-hour window within three months.
The CTO authorized a performance tuning engagement with three goals: identify the specific bottlenecks, implement targeted optimizations, and reduce the batch cycle to under 4 hours.
This case study documents the systematic analysis of each bottleneck and the specific tuning actions that achieved a final elapsed time of 2 hours and 47 minutes -- a 52% improvement.
Step 1: Profiling the Batch Cycle
The team's first action was to instrument every job step with precise timing. They analyzed SMF type 30 records (job step activity) to build a time breakdown:
| Step | Program | Function | Elapsed | CPU | Wait I/O | % of Total |
|---|---|---|---|---|---|---|
| 1 | SORT | Sort daily transactions | 47 min | 8 min | 38 min | 13.5% |
| 2 | SSBPOST1 | Post transactions | 112 min | 42 min | 68 min | 32.2% |
| 3 | SSBINT01 | Interest accrual | 68 min | 61 min | 5 min | 19.5% |
| 4 | SSBFEE01 | Fee assessment | 23 min | 9 min | 13 min | 6.6% |
| 5 | SSBSTMT1 | Statement generation | 38 min | 12 min | 25 min | 10.9% |
| 6-14 | Various | Reporting, backup | 60 min | 18 min | 40 min | 17.3% |
| Total | | | 348 min | 150 min | 189 min | 100% |
Three steps immediately stood out:
- Step 1 (SORT): 47 minutes, dominated by I/O wait (81% of elapsed time). The sort was I/O-bound.
- Step 2 (SSBPOST1): 112 minutes, the single largest step, with significant I/O wait (61%). The posting program was both I/O-bound and CPU-bound.
- Step 3 (SSBINT01): 68 minutes, almost entirely CPU time (90%). The interest accrual was CPU-bound.
These three steps consumed 65% of the total batch elapsed time. Optimizing them would have the greatest impact.
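The triage itself is simple arithmetic once the SMF numbers are in hand. A short sketch (Python rather than the shop's COBOL, using the elapsed minutes from the table above) reproduces the shares that pointed the team at these three steps:

```python
# Per-step elapsed minutes from the SMF type 30 breakdown above.
steps = {
    "SORT": 47, "SSBPOST1": 112, "SSBINT01": 68,
    "SSBFEE01": 23, "SSBSTMT1": 38, "STEPS 6-14": 60,
}
total = sum(steps.values())  # 348 minutes

def share(name):
    """Percent of total batch elapsed time, one decimal place."""
    return round(100 * steps[name] / total, 1)

focus = ["SSBPOST1", "SSBINT01", "SORT"]  # the single-step hot spots
print({n: share(n) for n in focus})       # 32.2, 19.5, 13.5 -> ~65% combined
```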
Bottleneck 1: I/O-Bound Sort (Step 1)
Analysis
The daily transaction sort processed 5.2 million records (250 bytes each, 1.3 GB total) and sorted them by account number and transaction date. The sort JCL was:
//SORTTRN EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SORTIN DD DSN=SSB.CORE.TRAN.DAILY,
// DISP=SHR
//SORTOUT DD DSN=SSB.CORE.TRAN.SORTED,
// DISP=(NEW,CATLG,DELETE),
// UNIT=SYSDA,
// SPACE=(CYL,(150,30)),
// DCB=(RECFM=FB,LRECL=250,BLKSIZE=5000)
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(100))
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(100))
//SYSIN DD *
SORT FIELDS=(1,10,CH,A,17,8,CH,A)
/*
The team identified three problems:
Problem A: Small block size. The BLKSIZE of 5,000 bytes meant only 20 records per block (20 x 250 = 5,000). With 5.2 million records, this produced 260,000 blocks. Each block required a separate I/O operation, and the small block size wasted DASD space due to inter-block gaps.
Problem B: Insufficient sort work space. Two sort work datasets of 100 cylinders each (200 total) were insufficient for a 1.3 GB file. DFSORT needed multiple merge passes because it could not hold enough data in the work files for a single-pass merge.
Problem C: Single volume for sort work. Both SORTWK datasets were on the same DASD volume, creating I/O contention as DFSORT simultaneously read from one work file while writing to another.
Optimization
//SORTTRN EXEC PGM=SORT,
// PARM='MAINSIZE=MAX,FILSZ=E5200000'
//SYSOUT DD SYSOUT=*
//SORTIN DD DSN=SSB.CORE.TRAN.DAILY,
// DISP=SHR,
// BUFNO=20
//SORTOUT DD DSN=SSB.CORE.TRAN.SORTED,
// DISP=(NEW,CATLG,DELETE),
// UNIT=SYSDA,
// SPACE=(CYL,(150,30),RLSE),
// DCB=(RECFM=FB,LRECL=250,BLKSIZE=27750),
// BUFNO=20
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,(300)),VOL=SER=SORT01
//SORTWK02 DD UNIT=SYSDA,SPACE=(CYL,(300)),VOL=SER=SORT02
//SORTWK03 DD UNIT=SYSDA,SPACE=(CYL,(300)),VOL=SER=SORT03
//SYSIN DD *
SORT FIELDS=(1,10,CH,A,17,8,CH,A)
OPTION MAINSIZE=MAX
/*
Fix A: Optimal block size. Increased BLKSIZE from 5,000 to 27,750 (111 records per block, fitting within the 3390 half-track limit of 27,998 bytes). This reduced the block count from 260,000 to 46,847 -- an 82% reduction in I/O operations. Each I/O now transfers 5.5 times more data.
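The blocking arithmetic is easy to verify. A quick sketch (pure arithmetic, using the record counts and the 3390 half-track limit quoted above):

```python
LRECL = 250
HALF_TRACK = 27998        # 3390 half-track block limit, in bytes
RECORDS = 5_200_000

old_per_block = 5000 // LRECL                  # 20 records per 5,000-byte block
new_blksize = (HALF_TRACK // LRECL) * LRECL    # largest multiple of LRECL that fits
new_per_block = new_blksize // LRECL           # 111 records per block

old_blocks = -(-RECORDS // old_per_block)      # ceiling division -> 260,000
new_blocks = -(-RECORDS // new_per_block)      # -> 46,847
print(new_blksize, old_blocks, new_blocks)     # 27750 260000 46847
```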
Fix B: Adequate sort work space. Increased from 200 cylinders to 900 cylinders across three work files. This allows DFSORT to complete the sort in a single merge pass, eliminating the multi-pass overhead.
Fix C: Separate volumes for sort work. Each SORTWK dataset is placed on a different DASD volume (SORT01, SORT02, SORT03) via VOL=SER, eliminating I/O contention between simultaneous read and write operations.
Fix D: MAINSIZE=MAX and FILSZ hint. The MAINSIZE=MAX parameter tells DFSORT to use all available memory for sort workspace, maximizing the amount of data that can be sorted in memory before spilling to work files. The FILSZ parameter provides the estimated record count so DFSORT can optimize its algorithm selection from the start.
Fix E: BUFNO=20. Increasing the buffer count from the default (typically 5) to 20 enables deeper read-ahead and write-behind, keeping the channel busy while the CPU processes records. Note that DFSORT normally manages its own SORTIN/SORTOUT buffering and may override BUFNO; the setting chiefly benefits the QSAM files read and written by application steps, where the team applied it as well (see the posting JCL below in Bottleneck 2).
Result
| Metric | Before | After | Improvement |
|---|---|---|---|
| Elapsed time | 47 min | 11 min | 77% reduction |
| I/O operations | 520,000 | 96,000 | 82% reduction |
| Merge passes | 3 | 1 | 67% reduction |
Bottleneck 2: I/O-Bound Transaction Posting (Step 2)
Analysis
The posting program SSBPOST1 reads the sorted transaction file and updates the VSAM KSDS account master for each transaction. With 5.2 million transactions across 1.8 million unique accounts, the program performed:
- 5.2 million sequential reads from the transaction file
- 5.2 million random reads from the VSAM KSDS (one per transaction, even though the sort had already grouped each account's transactions adjacently)
- 5.2 million random rewrites to the VSAM KSDS
The team examined the VSAM file statistics with IDCAMS LISTCAT, supplemented by the posting step's buffer settings, and found alarming numbers:
SPLITS-CI: 847,291
SPLITS-CA: 1,247
FREESPACE-CI: 0%
FREESPACE-CA: 0%
CI-SIZE: 2048
BUFND: 2
BUFNI: 1
Problem A: Massive CI splits. With zero free space remaining in control intervals, every insert of a new account triggered a CI split. Even though the posting program only updates existing records (no inserts during posting), the preceding months of online activity had consumed all free space and fragmented the file.
Problem B: Undersized control intervals. A CI size of 2,048 bytes holds only 4 records of 500 bytes each (with CIDF and RDF overhead). Random reads for sequential accounts often land in different CIs, generating separate I/O operations even though the records are logically adjacent.
Problem C: Minimal buffers. BUFND=2 (data buffers) and BUFNI=1 (index buffers) meant that virtually every VSAM access required a physical I/O. No buffering benefit was achieved.
Optimization
First, the team reorganized the VSAM file and redefined it with better parameters:
//SSBVREOR JOB (ACCT),'SSB VSAM REORG',
// CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1),
// NOTIFY=&SYSUID
//*
//*================================================================*
//* STEP 1: EXPORT ACCOUNT MASTER TO FLAT FILE
//*================================================================*
//EXPORT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//INFILE DD DSN=SSB.CORE.MAST.ACCT.CLUSTER,DISP=SHR
//OUTFILE DD DSN=SSB.CORE.MAST.ACCT.EXPORT,
// DISP=(NEW,CATLG,DELETE),
// UNIT=SYSDA,
// SPACE=(CYL,(1000,200),RLSE),
// DCB=(RECFM=FB,LRECL=500,BLKSIZE=27500)
//SYSIN DD *
REPRO INFILE(INFILE) -
OUTFILE(OUTFILE)
/*
//*
//*================================================================*
//* STEP 2: DELETE AND REDEFINE WITH OPTIMIZED PARAMETERS
//*================================================================*
//REDEFINE EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DELETE SSB.CORE.MAST.ACCT.CLUSTER -
CLUSTER PURGE
DEFINE CLUSTER ( -
NAME(SSB.CORE.MAST.ACCT.CLUSTER) -
INDEXED -
RECORDS(1800000 200000) -
RECORDSIZE(500 500) -
KEYS(10 0) -
FREESPACE(20 10) -
SHAREOPTIONS(2 3) -
SPEED ) -
DATA ( -
NAME(SSB.CORE.MAST.ACCT.DATA) -
CONTROLINTERVALSIZE(4096) -
VOLUMES(VSAM01 VSAM02) ) -
INDEX ( -
NAME(SSB.CORE.MAST.ACCT.INDEX) -
CONTROLINTERVALSIZE(2048) -
VOLUMES(VSAM01) )
/*
//*
//*================================================================*
//* STEP 3: RELOAD FROM EXPORT
//*================================================================*
//RELOAD EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//INFILE DD DSN=SSB.CORE.MAST.ACCT.EXPORT,DISP=SHR
//OUTFILE DD DSN=SSB.CORE.MAST.ACCT.CLUSTER,DISP=SHR
//SYSIN DD *
REPRO INFILE(INFILE) -
OUTFILE(OUTFILE)
/*
Second, the team modified the posting program's JCL to increase VSAM buffers:
//POSTING EXEC PGM=SSBPOST1
//STEPLIB DD DSN=SSB.CORE.PROD.LOADLIB,DISP=SHR
//TRANIN DD DSN=SSB.CORE.TRAN.SORTED,
// DISP=SHR,
// BUFNO=20
//ACCTMAST DD DSN=SSB.CORE.MAST.ACCT.CLUSTER,
// DISP=SHR,
// AMP=('BUFND=30,BUFNI=10')
//SYSOUT DD SYSOUT=*
Fix A: VSAM reorganization. Exporting, deleting, redefining with FREESPACE(20 10), and reloading eliminated all accumulated CI/CA splits and restored free space: 20% within each CI, and 10% of the CIs in each CA left empty for future inserts. The team scheduled this reorganization to run weekly on Saturday nights.
Fix B: Increased CI size. Changing from 2,048 to 4,096 bytes doubled the number of records per CI from 4 to 8. This means sequential processing touches half as many CIs, and adjacent account records are more likely to be in the same CI, reducing random I/O.
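The records-per-CI figures follow from VSAM's control information overhead: a 4-byte CIDF plus, for fixed-length records, two 3-byte RDFs. A quick arithmetic check of the 2,048- and 4,096-byte cases:

```python
def records_per_ci(ci_size: int, recsize: int) -> int:
    """Fixed-length records: CI capacity minus a 4-byte CIDF
    and two 3-byte RDFs, divided by the record size."""
    CIDF, RDF = 4, 3
    return (ci_size - CIDF - 2 * RDF) // recsize

print(records_per_ci(2048, 500), records_per_ci(4096, 500))  # 4 8
```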
Fix C: Increased VSAM buffers. BUFND=30 and BUFNI=10 dramatically improved buffer hit rates. With 30 data buffers, the program can cache 30 CIs (240 account records) in memory simultaneously. Since the sorted transaction file groups transactions by account number, most consecutive account lookups hit the buffer pool rather than requiring physical I/O. The 10 index buffers cache the upper levels of the VSAM index tree, eliminating repeated index I/O for the same index set records.
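The storage cost of the larger buffer pool is modest and easy to estimate: each data buffer holds one data CI and each index buffer one index CI, using the CI sizes from the DEFINE above. A sketch:

```python
BUFND, DATA_CI = 30, 4096    # data buffers x data CI size
BUFNI, INDEX_CI = 10, 2048   # index buffers x index CI size

data_pool = BUFND * DATA_CI      # 122,880 bytes
index_pool = BUFNI * INDEX_CI    #  20,480 bytes
total_kb = (data_pool + index_pool) / 1024
print(total_kb)  # 140.0 KB of virtual storage -- trivial against the I/O saved
```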
Third, the team optimized the COBOL program itself:
*================================================================*
* ORIGINAL CODE: ONE READ AND REWRITE PER TRANSACTION
*================================================================*
* 2000-PROCESS-TRANSACTION.
* READ ACCTMAST INTO WS-ACCOUNT-REC
* KEY IS WS-TRANS-ACCOUNT
* ADD WS-TRANS-AMOUNT TO WS-ACCT-BALANCE
* REWRITE ACCTMAST-REC FROM WS-ACCOUNT-REC
* READ TRANIN INTO WS-TRANS-REC
* AT END SET END-OF-TRANS TO TRUE
* END-READ.
*================================================================*
* OPTIMIZED CODE: BATCH TRANSACTIONS FOR SAME ACCOUNT
* READ ACCOUNT ONCE, APPLY ALL ITS TRANSACTIONS, REWRITE ONCE
*================================================================*
2000-PROCESS-ACCOUNT-GROUP.
* READ THE ACCOUNT MASTER RECORD ONCE
* (AM-ACCOUNT-KEY IS THE RECORD KEY NAMED IN THE SELECT CLAUSE)
    MOVE WS-TRANS-ACCOUNT TO AM-ACCOUNT-KEY
    READ ACCTMAST INTO WS-ACCOUNT-REC
        INVALID KEY
            PERFORM 8000-HANDLE-MISSING-ACCOUNT
    END-READ
MOVE WS-TRANS-ACCOUNT TO WS-CURRENT-ACCOUNT
MOVE ZERO TO WS-GROUP-TRANS-COUNT
* APPLY ALL TRANSACTIONS FOR THIS ACCOUNT
PERFORM UNTIL END-OF-TRANS
OR WS-TRANS-ACCOUNT
NOT = WS-CURRENT-ACCOUNT
ADD 1 TO WS-GROUP-TRANS-COUNT
EVALUATE WS-TRANS-TYPE
WHEN 'DP'
ADD WS-TRANS-AMOUNT
TO WS-ACCT-BALANCE
WHEN 'WD'
SUBTRACT WS-TRANS-AMOUNT
FROM WS-ACCT-BALANCE
WHEN 'FE'
SUBTRACT WS-TRANS-AMOUNT
FROM WS-ACCT-BALANCE
WHEN 'IN'
ADD WS-TRANS-AMOUNT
TO WS-ACCT-BALANCE
WHEN 'TI'
ADD WS-TRANS-AMOUNT
TO WS-ACCT-BALANCE
WHEN 'TO'
SUBTRACT WS-TRANS-AMOUNT
FROM WS-ACCT-BALANCE
END-EVALUATE
ADD 1 TO WS-TOTAL-TRANS-COUNT
PERFORM 8100-READ-TRANSACTION
END-PERFORM
* REWRITE THE ACCOUNT RECORD ONCE
MOVE WS-CURRENT-DATE TO WS-ACCT-LAST-ACTIVITY
REWRITE ACCTMAST-REC FROM WS-ACCOUNT-REC
INVALID KEY
PERFORM 8200-HANDLE-REWRITE-ERROR
END-REWRITE
ADD 1 TO WS-ACCOUNTS-PROCESSED.
Fix D: Account grouping in COBOL. The original program read the account master, applied one transaction, and rewrote the account -- for every single transaction. With an average of 2.9 transactions per account, this meant 5.2 million reads and 5.2 million rewrites. The optimized version reads each account once, applies all its transactions in a loop, and rewrites once. This reduced VSAM I/O from 10.4 million operations to 3.6 million -- a 65% reduction.
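The control-break pattern generalizes beyond COBOL. A Python sketch of the same idea (hypothetical record layout; itertools.groupby stands in for the sorted-file control break) shows why the I/O count falls from two operations per transaction to two per account:

```python
from itertools import groupby

# Sorted transactions as (account, amount); the sort guarantees each
# account's transactions are adjacent, so one sequential pass suffices.
trans = [("A1", 50), ("A1", -20), ("A1", 5), ("A2", 100), ("A3", -10)]

reads = rewrites = 0
for account, group in groupby(trans, key=lambda t: t[0]):
    balance = 0              # stands in for the single READ of the master record
    reads += 1
    for _, amount in group:
        balance += amount    # apply every transaction for this account in memory
    rewrites += 1            # stands in for the single REWRITE
print(reads, rewrites)       # 3 3 -- versus 5 and 5 with per-transaction I/O
```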
Result
| Metric | Before | After | Improvement |
|---|---|---|---|
| Elapsed time | 112 min | 38 min | 66% reduction |
| VSAM reads | 5,200,000 | 1,800,000 | 65% reduction |
| VSAM rewrites | 5,200,000 | 1,800,000 | 65% reduction |
| CI splits (weekly) | 847,291 | < 100 | 99.99% reduction |
Bottleneck 3: CPU-Bound Interest Accrual (Step 3)
Analysis
The interest accrual program SSBINT01 calculated daily interest for all 1.8 million savings and CD accounts. Unlike the posting step, I/O was not the bottleneck -- the program spent 90% of its time in CPU processing. The team examined the COBOL source and found:
*================================================================*
* ORIGINAL INTEREST CALCULATION - PERFORMANCE PROBLEMS
*================================================================*
2000-CALCULATE-INTEREST.
MOVE WS-ANNUAL-RATE TO WS-DISPLAY-RATE
COMPUTE WS-DAILY-RATE =
FUNCTION NUMVAL(WS-DISPLAY-RATE) / 365
COMPUTE WS-DAILY-INTEREST ROUNDED =
WS-ACCT-BALANCE * WS-DAILY-RATE
IF WS-ACCT-TYPE = 'SAV'
ADD WS-DAILY-INTEREST
TO WS-SAVINGS-TOTAL
ELSE IF WS-ACCT-TYPE = 'CHK'
ADD WS-DAILY-INTEREST
TO WS-CHECKING-TOTAL
ELSE IF WS-ACCT-TYPE = 'MMA'
ADD WS-DAILY-INTEREST
TO WS-MONEY-MARKET-TOTAL
ELSE IF WS-ACCT-TYPE = 'CDR'
ADD WS-DAILY-INTEREST
TO WS-CD-TOTAL
END-IF END-IF END-IF END-IF.
Problem A: FUNCTION NUMVAL in a loop. The NUMVAL intrinsic function converts a display numeric string to a numeric value. Called 1.8 million times, it consumed significant CPU because NUMVAL performs character parsing and validation on every call. Since the rate table has only 12 distinct rates, NUMVAL was doing the same conversion millions of times.
Problem B: Nested IF instead of EVALUATE. The nested IF chain evaluates conditions sequentially. For CD accounts (the last condition), all three preceding comparisons must fail before the CD comparison is reached. With 40% of accounts being CDs, this meant 40% of records required four comparisons instead of a direct jump.
Problem C: Compiler optimization not enabled. The program was compiled with default options, which do not include the OPTIMIZE compiler option.
Optimization
*================================================================*
* OPTIMIZED INTEREST CALCULATION
*================================================================*
01 WS-RATE-TABLE.
    05 WS-RATE-ENTRY OCCURS 12 TIMES.
        10 WS-RT-RATE-CODE PIC X(4).
* ANNUAL RATE CONVERTED ONCE AT TABLE LOAD (PICTURE ASSUMED)
        10 WS-RT-ANNUAL-RATE PIC S9(2)V9(6) COMP-3.
        10 WS-RT-DAILY-RATE PIC SV9(12) COMP-3.
1500-LOAD-RATE-TABLE.
* PRE-COMPUTE DAILY RATES ONCE AT STARTUP
PERFORM VARYING WS-RATE-IDX FROM 1 BY 1
UNTIL WS-RATE-IDX > WS-RATE-COUNT
COMPUTE WS-RT-DAILY-RATE(WS-RATE-IDX) =
WS-RT-ANNUAL-RATE(WS-RATE-IDX) / 365
END-PERFORM.
2000-CALCULATE-INTEREST.
* LOOK UP PRE-COMPUTED DAILY RATE (NO NUMVAL)
PERFORM VARYING WS-RATE-IDX FROM 1 BY 1
UNTIL WS-RATE-IDX > WS-RATE-COUNT
OR WS-RT-RATE-CODE(WS-RATE-IDX)
= WS-ACCT-RATE-CODE
CONTINUE
END-PERFORM
COMPUTE WS-DAILY-INTEREST ROUNDED =
WS-ACCT-BALANCE *
WS-RT-DAILY-RATE(WS-RATE-IDX)
EVALUATE WS-ACCT-TYPE
WHEN 'SAV'
ADD WS-DAILY-INTEREST
TO WS-SAVINGS-TOTAL
WHEN 'CHK'
ADD WS-DAILY-INTEREST
TO WS-CHECKING-TOTAL
WHEN 'MMA'
ADD WS-DAILY-INTEREST
TO WS-MONEY-MARKET-TOTAL
WHEN 'CDR'
ADD WS-DAILY-INTEREST
TO WS-CD-TOTAL
END-EVALUATE.
Fix A: Pre-computed rate table. Instead of calling NUMVAL 1.8 million times to convert the same 12 rates, the program now converts the rates once during initialization and stores the pre-computed daily rates in a table. The per-record lookup is a simple table search -- orders of magnitude faster than NUMVAL.
Fix B: EVALUATE replaces nested IF. The EVALUATE statement generates a branch table in the compiled code, providing constant-time dispatch regardless of which account type is being processed. The nested IF required sequential comparison, with worst-case performance for the most common account type.
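Both fixes are instances of general patterns: hoist invariant conversion out of the loop, and replace a sequential condition chain with a table dispatch. A Python analogue (rates, codes, and balances invented for illustration; the /365 mirrors the COBOL above):

```python
# Fix A analogue: parse each rate string once, outside the per-record loop.
annual_rates = {"R01": "3.25", "R02": "4.10"}            # illustrative rate table
daily_rate = {code: float(r) / 365 for code, r in annual_rates.items()}

# Fix B analogue: a dict keyed by account type gives constant-time
# dispatch, replacing the sequential if/elif chain.
totals = dict.fromkeys(["SAV", "CHK", "MMA", "CDR"], 0.0)

accounts = [("SAV", "R01", 1000.0), ("CDR", "R02", 5000.0)]  # type, rate code, balance
for acct_type, rate_code, balance in accounts:
    interest = balance * daily_rate[rate_code]   # no per-record string parsing
    totals[acct_type] += interest                # one hash lookup, no branch chain
```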
Fix C: OPTIMIZE(2) compiler option. The JCL was updated to compile with aggressive optimization:
//COMPILE EXEC PGM=IGYCRCTL,
// PARM='RENT,APOST,DATA(31),OPT(2),ARCH(12)'
The OPT(2) option enables the Enterprise COBOL compiler's full optimization suite: dead code elimination, common subexpression elimination, strength reduction, and loop optimization. The ARCH(12) option generates code optimized for the z14 processor architecture, using hardware decimal instructions and vector operations where applicable.
Result
| Metric | Before | After | Improvement |
|---|---|---|---|
| Elapsed time | 68 min | 19 min | 72% reduction |
| CPU time | 61 min | 16 min | 74% reduction |
| CPU per record | 2.03 ms | 0.53 ms | 74% reduction |
Combined Results
| Step | Before | After | Improvement |
|---|---|---|---|
| Sort | 47 min | 11 min | 77% |
| Posting | 112 min | 38 min | 66% |
| Interest | 68 min | 19 min | 72% |
| Fee assessment | 23 min | 21 min | 9% (minor tuning) |
| Statement gen | 38 min | 29 min | 24% (buffer tuning) |
| Other steps | 60 min | 49 min | 18% (buffer tuning) |
| Total | 348 min | 167 min | 52% reduction |
The total batch elapsed time dropped from 5 hours 48 minutes to 2 hours 47 minutes, well under the 4-hour target and providing substantial headroom for future growth.
Lessons Learned
1. Profile Before Optimizing
The team's initial instinct was to focus on the posting program because it was the most complex. Profiling revealed that the sort step -- a utility invocation with a one-line control statement -- consumed more time than expected due to trivially fixable I/O configuration problems. Without profiling, the team would have spent weeks optimizing COBOL code while ignoring the easy wins in JCL parameters.
2. Block Size Is the Single Most Impactful I/O Parameter
Changing the sort output block size from 5,000 to 27,750 reduced I/O operations by 82%. This single change, requiring no code modification and no application testing, accounted for a significant portion of the sort step's improvement. Every sequential file in a batch cycle should be checked for optimal blocking.
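The check is mechanical enough to script. A sketch of the half-track rule for 3390 DASD (the two LRECLs from this case study reproduce the BLKSIZEs used above):

```python
def half_track_blksize(lrecl: int, limit: int = 27998) -> int:
    """Largest BLKSIZE at or under the 3390 half-track limit that is
    a whole multiple of LRECL (for fixed-blocked datasets)."""
    return (limit // lrecl) * lrecl

print(half_track_blksize(250), half_track_blksize(500))  # 27750 27500
```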
3. VSAM Reorganization Is Maintenance, Not Optional
The VSAM account master had accumulated 847,291 CI splits over 18 months of operation with zero free space. Regular reorganization (weekly or monthly) is a required maintenance activity, not an optional optimization. The team added a weekly VSAM reorganization job to the Saturday batch schedule.
4. COBOL Algorithm Changes Outperform Infrastructure Tuning
The account grouping optimization in the posting program reduced VSAM I/O by 65% -- a larger improvement than buffer tuning alone could achieve. Algorithm optimization and infrastructure tuning are complementary; the best results come from doing both.
5. Compiler Optimization Is Free Performance
Enabling OPT(2) and ARCH(12) on the interest accrual program reduced CPU time by 74% without changing a single line of source code. Many mainframe shops compile with default options because "it has always been that way." A systematic review of compiler options across all production programs can yield substantial CPU savings.
Discussion Questions
1. The posting optimization groups transactions by account and applies them all before rewriting. What happens if the program abends after applying three of five transactions for an account but before the REWRITE? How would you implement restart logic for this grouped processing approach?
2. The VSAM reorganization exports to a flat file, deletes and redefines the cluster, and reloads. During this process, the VSAM file is unavailable. How would you minimize or eliminate this outage window for a 24/7 banking operation?
3. The OPT(2) compiler option performs aggressive optimization that can change the order of operations. For financial arithmetic with packed decimal fields, could this reordering affect the results of calculations? Under what circumstances might you choose OPT(1) over OPT(2)?
4. The BUFNO=20 and AMP BUFND=30 settings consume additional virtual storage. Calculate the approximate memory impact of these buffer settings and discuss how you would determine the optimal buffer count without over-allocating memory.
5. The sort step was improved by 77% through I/O optimization alone. If the input file doubled in size (10.4 million records), would the current configuration still complete in an acceptable time? What additional optimizations would you consider?
Connection to Chapter Concepts
This case study demonstrates several key concepts from Chapter 32:
- Performance profiling (Section: Identifying Performance Bottlenecks): The SMF type 30 analysis and elapsed/CPU/wait time breakdown illustrate the systematic approach to identifying bottlenecks before attempting optimization.
- I/O optimization (Section: Optimizing I/O Performance): Block size optimization, VSAM buffer tuning (BUFND/BUFNI), and sort work file placement demonstrate the major I/O tuning techniques.
- VSAM performance (Section: VSAM Performance Tuning): CI size selection, free space management, CI/CA split analysis, and periodic reorganization address the full lifecycle of VSAM performance management.
- COBOL compiler options (Section: Compiler Options for Performance): OPT(2) and ARCH(n) compiler options demonstrate how the Enterprise COBOL compiler's optimization capabilities can significantly reduce CPU consumption.
- Algorithm optimization (Section: Efficient COBOL Coding Techniques): The pre-computed rate table, account grouping pattern, and EVALUATE replacement for nested IF illustrate COBOL-specific coding techniques that reduce CPU and I/O.