Chapter 32: Quiz – Performance Tuning for COBOL Programs
Test your knowledge of COBOL performance metrics, compiler options, file I/O optimization, DB2/CICS performance, and batch optimization with the following 25 questions.
Question 1 (Multiple Choice)
Which compiler option enables the COBOL compiler to apply advanced optimization techniques such as dead code elimination, loop optimization, and common subexpression elimination?
- A) OPTIMIZE(0)
- B) OPTIMIZE(FULL)
- C) TEST(ALL)
- D) LIST
Answer
**B) OPTIMIZE(FULL)** `OPTIMIZE(FULL)` (or `OPT(FULL)` / `OPT(2)`) tells the compiler to apply the full range of optimization techniques to the generated machine code. This includes eliminating redundant computations, optimizing loop structures, removing dead code, and improving register allocation. The trade-off is longer compile times and reduced debugging capability.
Question 2 (True/False)
Increasing the block size (BLKSIZE) of a sequential file always reduces the number of physical I/O operations (EXCPs) required to read the file.
Answer
**True.** A larger block size means more logical records per physical block, which means fewer physical I/O operations are needed to read the entire file. For example, changing BLKSIZE from 800 (1 record at LRECL=800) to 27,200 (34 records) reduces EXCPs by approximately 97%. However, the benefit has a practical ceiling — BLKSIZE cannot exceed 32,760 for non-VSAM datasets (or the track capacity), and very large blocks require more memory for buffers.
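For illustration, a minimal COBOL sketch (file and record names are hypothetical): coding `BLOCK CONTAINS 0 RECORDS` in the FD leaves the block size to the JCL or the dataset label, so blocking can be tuned toward large, efficient blocks without recompiling the program.
FD  INPUT-FILE
    RECORDING MODE IS F
    BLOCK CONTAINS 0 RECORDS
    RECORD CONTAINS 800 CHARACTERS.
01  INPUT-RECORD             PIC X(800).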
Question 3 (Multiple Choice)
What does the COBOL compiler option FASTSRT do?
- A) Uses a faster sorting algorithm within the COBOL runtime
- B) Allows DFSORT to directly read input files and write output files, bypassing COBOL I/O routines
- C) Sorts data in memory instead of using work files
- D) Parallelizes the sort operation across multiple processors
Answer
**B) Allows DFSORT to directly read input files and write output files, bypassing COBOL I/O routines.** With `FASTSRT`, when a COBOL SORT statement uses `USING` and/or `GIVING` phrases, DFSORT handles the file I/O directly using its own optimized routines instead of passing records through the COBOL program. This eliminates the overhead of the COBOL I/O interface and can reduce sort elapsed time by 20–40%. FASTSRT does not apply when INPUT PROCEDURE or OUTPUT PROCEDURE is used.
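For reference, a minimal sketch of a SORT coded in the FASTSRT-eligible form (file and key names are hypothetical):
SORT SORT-WORK-FILE
    ON ASCENDING KEY SD-ACCT-NUM
    USING  INPUT-FILE
    GIVING SORTED-FILE.
Because this statement uses USING and GIVING rather than INPUT PROCEDURE or OUTPUT PROCEDURE, DFSORT can perform the file I/O itself when the program is compiled with FASTSRT.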
Question 4 (Code Analysis)
Examine the following COBOL code:
PERFORM VARYING WS-IDX FROM 1 BY 1
UNTIL WS-IDX > 50000
IF WS-SEARCH-KEY = WS-TABLE-KEY(WS-IDX)
MOVE WS-TABLE-DATA(WS-IDX) TO WS-RESULT
SET WS-FOUND TO TRUE
END-IF
END-PERFORM.
What is the primary performance problem with this code?
- A) The PERFORM VARYING loop is inherently slow
- B) The linear search continues through all 50,000 entries even after finding a match — there is no early exit
- C) The table is too large for a PERFORM loop
- D) WS-IDX should be COMP-3 instead of DISPLAY
Answer
**B) The linear search continues through all 50,000 entries even after finding a match — there is no early exit.** The loop has no exit condition when the key is found. Even after setting `WS-FOUND` to TRUE and moving the result, the PERFORM continues checking all remaining entries. The fix is to add `OR WS-FOUND` to the UNTIL condition:
SET WS-NOT-FOUND TO TRUE.
PERFORM VARYING WS-IDX FROM 1 BY 1
UNTIL WS-IDX > 50000
OR WS-FOUND
IF WS-SEARCH-KEY = WS-TABLE-KEY(WS-IDX)
MOVE WS-TABLE-DATA(WS-IDX) TO WS-RESULT
SET WS-FOUND TO TRUE
END-IF
END-PERFORM.
Even better, if the table is sorted, use `SEARCH ALL` for a binary search, reducing average comparisons from 25,000 to approximately 16.
Question 5 (Multiple Choice)
In VSAM tuning, what does increasing the number of data buffers (BUFND) primarily improve?
- A) CPU utilization
- B) The likelihood that a needed control interval is already in memory, reducing physical I/O
- C) The maximum record size that can be processed
- D) The number of concurrent users
Answer
**B) The likelihood that a needed control interval is already in memory, reducing physical I/O.** Each VSAM data buffer holds one Control Interval (CI). More buffers mean more CIs can be cached in memory. For sequential access, additional buffers enable read-ahead (prefetching CIs before they are needed). For random access, more buffers increase the "hit ratio" — the probability that a requested CI is already in a buffer. This directly reduces EXCP count and I/O wait time.
Question 6 (True/False)
The COBOL compiler option TRUNC(OPT) generates more efficient code for binary (COMP) fields than TRUNC(STD), but it requires that the program never assigns values exceeding the PICTURE clause range to binary fields.
Answer
**True.** With `TRUNC(STD)`, the compiler generates extra instructions after every binary arithmetic operation to truncate the result to fit the PICTURE clause (e.g., PIC 9(4) is truncated to 4 digits by dividing by 10000 and keeping the remainder). With `TRUNC(OPT)`, the compiler assumes values always fit the PICTURE clause and uses the native binary representation without truncation. This is faster but can produce incorrect results if a binary field ever exceeds its PICTURE range.
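A small illustration of the risk, using a hypothetical halfword counter:
01  WS-COUNT                 PIC 9(4) COMP VALUE 9999.
    ADD 1 TO WS-COUNT.
Under `TRUNC(STD)` the result is truncated to four decimal digits, so WS-COUNT becomes 0; under `TRUNC(OPT)` the truncation instructions are omitted and the halfword will typically hold 10000 (the result is formally unpredictable). The two options agree only while values stay within the PICTURE range, which is exactly the condition stated in the question.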
Question 7 (Multiple Choice)
A COBOL program uses the following data definition for a loop counter:
01 WS-COUNTER PIC 9(7).
Why is this less efficient than PIC 9(7) COMP or PIC 9(8) COMP for a counter used in a tight loop?
- A) DISPLAY format takes more disk space
- B) DISPLAY format numbers require conversion to binary for each arithmetic operation, while COMP numbers use native binary arithmetic
- C) DISPLAY format is limited to 7 digits
- D) DISPLAY format cannot be used with PERFORM VARYING
Answer
**B) DISPLAY format numbers require conversion to binary for each arithmetic operation, while COMP numbers use native binary arithmetic.** A PIC 9(7) DISPLAY field stores each digit as a separate EBCDIC character. Every ADD, SUBTRACT, or comparison requires the hardware to convert from zoned decimal to binary, perform the operation, and convert back. A COMP (binary) field is stored in native machine binary format, and arithmetic uses fast hardware binary instructions directly. For a counter incremented millions of times, this overhead is significant. Using PIC 9(8) COMP is ideal because it fits in a fullword (4 bytes), aligning with the hardware's native 32-bit arithmetic.
Question 8 (Code Analysis)
Analyze the performance difference between these two COBOL approaches for reading and processing a sequential file:
Approach A:
PERFORM UNTIL END-OF-FILE
READ INPUT-FILE INTO WS-RECORD
AT END SET END-OF-FILE TO TRUE
END-READ
IF NOT END-OF-FILE
PERFORM PROCESS-RECORD
END-IF
END-PERFORM.
Approach B:
READ INPUT-FILE INTO WS-RECORD
AT END SET END-OF-FILE TO TRUE
END-READ.
PERFORM UNTIL END-OF-FILE
PERFORM PROCESS-RECORD
READ INPUT-FILE INTO WS-RECORD
AT END SET END-OF-FILE TO TRUE
END-READ
END-PERFORM.
Which approach is more efficient and why?
- A) Approach A, because it has fewer lines of code
- B) Approach B, because it eliminates the `IF NOT END-OF-FILE` check on every iteration
- C) They are identical in performance
- D) Approach A, because it uses a simpler loop structure
Answer
**B) Approach B, because it eliminates the `IF NOT END-OF-FILE` check on every iteration.** Approach A performs an unnecessary conditional check (`IF NOT END-OF-FILE`) on every single iteration. For a file with 10 million records, that is 10 million extra comparisons. Approach B uses the "read-ahead" pattern: the first READ is performed before the loop, and each iteration processes the current record and reads the next. The loop terminates naturally when the READ hits end-of-file. While the per-iteration savings is small, it accumulates over millions of records. More importantly, Approach B is considered better structured COBOL style.
Question 9 (True/False)
In DB2, using SELECT * in a COBOL program is less efficient than selecting only the specific columns needed, even if the program uses all columns in the table.
Answer
**True.** Even when all columns are needed, `SELECT *` has performance implications:
1. DB2 must resolve `*` to the actual column list at execution time.
2. If the table structure changes (columns added), `SELECT *` may retrieve more data than expected, potentially causing COBOL data truncation or record layout mismatches.
3. `SELECT *` prevents DB2 from using index-only access (where all needed columns exist in the index, eliminating the need to read the base table).
4. Explicit column lists make the program self-documenting and easier for DB2 to optimize.
Always code explicit column lists in production COBOL programs.
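A hedged sketch of the preferred style (table, column, and host-variable names are hypothetical):
EXEC SQL
    SELECT ACCT_NUMBER, ACCT_BALANCE
    INTO  :WS-ACCT-NUM, :WS-BALANCE
    FROM  ACCOUNT_MASTER
    WHERE ACCT_NUMBER = :WS-HOST-ACCT
END-EXEC.
If an index exists on (ACCT_NUMBER, ACCT_BALANCE), DB2 can potentially satisfy this query with index-only access; the equivalent `SELECT *` cannot.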
Question 10 (Multiple Choice)
A COBOL batch job has the following SMF statistics: CPU time = 5 minutes, elapsed time = 90 minutes, I/O wait time = 82 minutes. This job is:
- A) CPU-bound
- B) I/O-bound
- C) Contention-bound
- D) Well-balanced
Answer
**B) I/O-bound** The job spends 82 out of 90 minutes (91%) waiting for I/O operations. CPU utilization is only 5 out of 90 minutes (5.6%). This is a classic I/O-bound profile. Tuning efforts should focus on reducing I/O: increasing block sizes, adding VSAM buffers, using DFSORT for sort operations, and ensuring optimal DASD placement. Compiler optimizations would have minimal impact since CPU is not the bottleneck.
Question 11 (Multiple Choice)
What is the primary performance benefit of using NUMPROC(PFD) (Preferred sign) over NUMPROC(NOPFD)?
- A) It allows larger numeric fields
- B) It eliminates sign validation instructions on packed decimal and zoned decimal operations, reducing CPU usage
- C) It enables parallel numeric processing
- D) It uses floating-point arithmetic instead of decimal
Answer
**B) It eliminates sign validation instructions on packed decimal and zoned decimal operations, reducing CPU usage.** With `NUMPROC(NOPFD)`, the compiler generates extra instructions before and after every decimal arithmetic operation to validate and correct the sign nibble (ensuring it is the preferred X'C' for positive or X'D' for negative). With `NUMPROC(PFD)`, these instructions are omitted because the compiler assumes all signs are already in the preferred format. For a program performing millions of decimal operations (common in financial calculations), this can reduce CPU time by 5–15%.
Question 12 (True/False)
For VSAM KSDS files accessed primarily in random mode by CICS transactions, a larger Control Interval (CI) size generally improves performance.
Answer
**False.** For random access, a smaller CI size is often better because each random read brings in one CI. A large CI (e.g., 32,768 bytes) means reading a large amount of data to retrieve a single record, wasting I/O bandwidth. A CI size that is a modest multiple of the average record size (e.g., 4,096 or 8,192 for 300–500 byte records) is typically optimal for random access. Larger CI sizes benefit sequential access where reading many consecutive records is the goal. The optimal CI size depends on the access pattern mix.
Question 13 (Code Analysis)
A COBOL program contains the following SQL in a loop that executes 100,000 times:
PERFORM VARYING WS-IDX FROM 1 BY 1
UNTIL WS-IDX > WS-TRAN-COUNT
MOVE WS-TRAN-ACCT(WS-IDX) TO WS-HOST-ACCT
EXEC SQL
SELECT ACCT_BALANCE
INTO :WS-BALANCE
FROM ACCOUNT_MASTER
WHERE ACCT_NUMBER = :WS-HOST-ACCT
END-EXEC
IF WS-BALANCE > 0
PERFORM UPDATE-ACCOUNT
END-IF
END-PERFORM.
What optimization technique would most improve this code's DB2 performance?
- A) Adding an index on ACCT_NUMBER
- B) Using a multi-row FETCH with a cursor instead of individual singleton SELECTs
- C) Moving the SQL outside the loop
- D) Using DISPLAY instead of COMP for WS-BALANCE
Answer
**B) Using a multi-row FETCH with a cursor instead of individual singleton SELECTs.** Each singleton SELECT incurs overhead: SQL parsing (if not cached), lock acquisition, buffer pool access, and network communication between the COBOL address space and the DB2 subsystem. With 100,000 iterations, this overhead is multiplied 100,000 times. A multi-row FETCH retrieves multiple rows per DB2 call, dramatically reducing the call overhead. Alternatively, if the transaction accounts can be loaded into a temporary table, a single JOIN query could replace all 100,000 individual SELECTs. For example:
EXEC SQL
DECLARE ACCT_CURSOR CURSOR WITH ROWSET POSITIONING FOR
SELECT ACCT_NUMBER, ACCT_BALANCE
FROM ACCOUNT_MASTER
WHERE ACCT_NUMBER IN
(SELECT TRAN_ACCT FROM SESSION.TRAN_WORK)
END-EXEC.
EXEC SQL OPEN ACCT_CURSOR END-EXEC.
EXEC SQL
FETCH NEXT ROWSET FROM ACCT_CURSOR
FOR :WS-FETCH-COUNT ROWS
INTO :WS-ACCT-ARRAY, :WS-BAL-ARRAY
END-EXEC.
Question 14 (Multiple Choice)
Which of the following is the most effective way to reduce the elapsed time of a COBOL SORT operation on a large file?
- A) Increase the COBOL program's WORKING-STORAGE size
- B) Use FASTSRT with USING/GIVING and allocate sufficient sort work space on fast DASD
- C) Use DISPLAY format instead of COMP for sort keys
- D) Add OPTIMIZE(FULL) to the compiler options
Answer
**B) Use FASTSRT with USING/GIVING and allocate sufficient sort work space on fast DASD.** The most impactful sort optimizations are:
1. **FASTSRT**: Lets DFSORT handle I/O directly, bypassing COBOL overhead.
2. **USING/GIVING**: Enables FASTSRT (INPUT/OUTPUT PROCEDURE disables it).
3. **Sort work space**: Adequate SORTWK allocations on separate DASD volumes allow DFSORT to parallelize I/O during the merge phase.
4. **DFSORT options**: Parameters like MAINSIZE, HIPRMAX, and DYNALLOC let DFSORT use memory efficiently.
Compiler options like OPTIMIZE(FULL) affect CPU time but have minimal impact on sort elapsed time, which is dominated by I/O.
Question 15 (True/False)
In a CICS environment, using LOCAL-STORAGE SECTION instead of WORKING-STORAGE SECTION in a COBOL program can improve performance in a multi-user environment.
Answer
**True.** In CICS, WORKING-STORAGE is allocated once per program load and shared across all tasks using that program (for threadsafe reentrant programs). LOCAL-STORAGE is allocated fresh for each task invocation and automatically freed when the task ends. LOCAL-STORAGE eliminates the need for CICS to manage serialized access to shared WORKING-STORAGE, reducing contention in high-volume transaction environments. However, the allocation/deallocation overhead means LOCAL-STORAGE is primarily beneficial for data that is truly task-specific, while shared read-only data (like lookup tables) should remain in WORKING-STORAGE.
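A minimal sketch of the split, with hypothetical data names: per-task work fields go in LOCAL-STORAGE (acquired and freed on each invocation), while the shared read-only lookup table stays in WORKING-STORAGE.
WORKING-STORAGE SECTION.
01  WS-RATE-TABLE.
    05  WS-RATE-ENTRY        OCCURS 100 TIMES
                             PIC S9(3)V99 COMP-3.
LOCAL-STORAGE SECTION.
01  LS-WORK-AREAS.
    05  LS-TRAN-AMOUNT       PIC S9(9)V99 COMP-3.
    05  LS-ACCT-NUMBER       PIC X(10).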
Question 16 (Multiple Choice)
What is the performance impact of VSAM CI (Control Interval) splits?
- A) CI splits have no performance impact
- B) CI splits cause temporary locking but no lasting impact
- C) CI splits cause records to be physically out of sequence, degrading sequential access performance and increasing random access I/O
- D) CI splits only affect index performance, not data access
Answer
**C) CI splits cause records to be physically out of sequence, degrading sequential access performance and increasing random access I/O.** When a CI split occurs, approximately half the records in the full CI are moved to a new CI (possibly in a different physical location). This means sequential reading must now jump to a non-adjacent CI, losing the benefit of sequential prefetch and increasing rotational delay. For random access, the index must be updated to reflect the new CI locations, adding overhead. Accumulated CI splits progressively degrade performance until the VSAM file is reorganized (REPRO out, DELETE, DEFINE, REPRO in).
Question 17 (Code Analysis)
Compare the performance of these two COBOL approaches for a table lookup:
Approach A — Sequential SEARCH:
SET WS-TBL-IDX TO 1.
SEARCH WS-TABLE-ENTRY
AT END
SET WS-NOT-FOUND TO TRUE
WHEN WS-TBL-KEY(WS-TBL-IDX) = WS-SEARCH-KEY
MOVE WS-TBL-DATA(WS-TBL-IDX) TO WS-RESULT
SET WS-FOUND TO TRUE
END-SEARCH.
Approach B — Binary SEARCH ALL:
SEARCH ALL WS-TABLE-ENTRY
AT END
SET WS-NOT-FOUND TO TRUE
WHEN WS-TBL-KEY(WS-TBL-IDX) = WS-SEARCH-KEY
MOVE WS-TBL-DATA(WS-TBL-IDX) TO WS-RESULT
SET WS-FOUND TO TRUE
END-SEARCH.
If the table has 100,000 entries and lookups are performed 5 million times, estimate the comparative performance.
Answer
**Approach B (SEARCH ALL) is dramatically faster.**
- **Sequential SEARCH (A):** Average comparisons per lookup = 100,000 / 2 = 50,000. Total comparisons = 5,000,000 x 50,000 = 250 billion comparisons.
- **Binary SEARCH ALL (B):** Maximum comparisons per lookup = log2(100,000) = approximately 17. Total comparisons = 5,000,000 x 17 = 85 million comparisons.
**Improvement ratio:** 250,000,000,000 / 85,000,000 = approximately 2,941 times fewer comparisons.
The prerequisite for SEARCH ALL is that the table must be sorted by the key field and the table definition must include `ASCENDING KEY IS WS-TBL-KEY`. If the table is loaded unsorted, it must be sorted before SEARCH ALL can be used. The one-time cost of sorting the table is negligible compared to the savings from 5 million binary searches.
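For completeness, a sketch of the table definition that SEARCH ALL requires (PICTURE sizes are hypothetical; the entry, key, data, and index names match the code above):
01  WS-LOOKUP-TABLE.
    05  WS-TABLE-ENTRY       OCCURS 100000 TIMES
                             ASCENDING KEY IS WS-TBL-KEY
                             INDEXED BY WS-TBL-IDX.
        10  WS-TBL-KEY       PIC X(10).
        10  WS-TBL-DATA      PIC X(50).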
Question 18 (True/False)
Adding more VSAM index buffers (BUFNI) primarily improves performance for sequential access patterns.
Answer
**False.** Additional index buffers primarily benefit **random** access patterns. During random access, VSAM must traverse the index (from the highest-level index set down through lower-level index records to the sequence set) to locate the target CI. More index buffers mean more index records are cached in memory, reducing the physical I/O needed for index traversal. For sequential access, the index is consulted less frequently (only when moving to a new CA), so extra index buffers have less impact. Sequential access benefits more from additional data buffers (BUFND) for read-ahead.
Question 19 (Multiple Choice)
A COBOL program performs the following operation inside a loop that executes 10 million times:
MOVE SPACES TO WS-OUTPUT-RECORD.
MOVE WS-ACCT-NUM TO OUT-ACCT-NUM.
MOVE WS-ACCT-NAME TO OUT-ACCT-NAME.
MOVE WS-BALANCE TO OUT-BALANCE.
What optimization could reduce the CPU time of this loop?
- A) Replace individual MOVEs with a single group MOVE
- B) Use INITIALIZE instead of MOVE SPACES
- C) Replace `MOVE SPACES TO WS-OUTPUT-RECORD` with moving only the fields that change, or restructure the record so that only varying fields need to be updated
- D) Use COMP-3 for all fields
Answer
**C) Replace `MOVE SPACES TO WS-OUTPUT-RECORD` with moving only the fields that change, or restructure the record so that only varying fields need to be updated.** The `MOVE SPACES TO WS-OUTPUT-RECORD` initializes the entire record (potentially hundreds of bytes) to spaces on every iteration, only to immediately overwrite portions of it with actual data. If the record layout has fixed filler areas between fields, these are being needlessly cleared and then left as spaces anyway. Options to optimize (option 1 is sketched below):
1. Initialize the record once before the loop and only move fields that change.
2. If all fields are populated every iteration, skip the SPACES initialization entirely.
3. Use reference modification to move data directly to specific positions without a full-record clear.
For 10 million iterations, eliminating an unnecessary 200-byte MOVE SPACES saves approximately 2 billion bytes of memory writes.
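A minimal sketch of option 1, where the loop condition and the output paragraph (WS-END-OF-INPUT, WRITE-AND-READ-NEXT) are hypothetical stand-ins for the program's actual control logic:
MOVE SPACES TO WS-OUTPUT-RECORD.
PERFORM UNTIL WS-END-OF-INPUT
    MOVE WS-ACCT-NUM   TO OUT-ACCT-NUM
    MOVE WS-ACCT-NAME  TO OUT-ACCT-NAME
    MOVE WS-BALANCE    TO OUT-BALANCE
    PERFORM WRITE-AND-READ-NEXT
END-PERFORM.
This is safe because each MOVE fully replaces its receiving field (alphanumeric receivers are space-padded on the right), so every data field is refreshed on every iteration and any FILLER between fields only needed clearing once.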
Question 20 (Code Analysis)
A developer compiles a COBOL program with these options for production:
CBL OPTIMIZE(FULL),TRUNC(OPT),NUMPROC(PFD),FASTSRT,NOTEST
And another developer compiles the same program with:
CBL OPTIMIZE(0),TRUNC(STD),NUMPROC(NOPFD),NOFASTSRT,TEST
If the program processes 20 million packed decimal records with SORT and heavy arithmetic, estimate the relative performance difference.
- A) Less than 5% difference
- B) 10–20% difference
- C) 30–60% difference
- D) No difference — compiler options do not affect runtime performance
Answer
**C) 30–60% difference** Each option contributes to the gap:
- **OPTIMIZE(FULL) vs. (0):** 10–25% CPU improvement through instruction optimization.
- **TRUNC(OPT) vs. (STD):** 5–15% CPU improvement by eliminating truncation code on binary operations.
- **NUMPROC(PFD) vs. (NOPFD):** 5–15% CPU improvement by eliminating sign validation on packed decimal operations (significant with 20 million records of decimal arithmetic).
- **FASTSRT vs. NOFASTSRT:** 20–40% improvement in sort elapsed time by bypassing COBOL I/O.
- **NOTEST vs. TEST:** 5–10% CPU improvement by eliminating debug hooks.
The combined effect is cumulative and depends on the program's specific mix of operations. For a program dominated by decimal arithmetic and sorting, the total CPU time difference can easily reach 30–60%. The elapsed time improvement from FASTSRT alone can be substantial.
Question 21 (True/False)
In DB2, a correlated subquery that executes once for each row of the outer query is generally less efficient than an equivalent JOIN.
Answer
**True.** A correlated subquery is re-evaluated for every row of the outer query, which can result in millions of subquery executions for a large outer result set. An equivalent JOIN typically allows DB2 to use more efficient access strategies such as merge join or hash join, processing both tables in fewer passes. For example:
Inefficient correlated subquery:
SELECT A.ACCT_NUM, A.BALANCE
FROM ACCOUNTS A
WHERE A.BALANCE > (SELECT AVG(B.BALANCE)
FROM ACCOUNTS B
WHERE B.BRANCH = A.BRANCH)
More efficient JOIN/derived table:
SELECT A.ACCT_NUM, A.BALANCE
FROM ACCOUNTS A
JOIN (SELECT BRANCH, AVG(BALANCE) AS AVG_BAL
FROM ACCOUNTS
GROUP BY BRANCH) B
ON A.BRANCH = B.BRANCH
WHERE A.BALANCE > B.AVG_BAL
Question 22 (Multiple Choice)
Which approach is most efficient for a COBOL batch program that needs to match records between two large sorted files (10 million records each)?
- A) For each record in File A, perform a sequential search through File B
- B) Load all of File B into a WORKING-STORAGE table and use SEARCH ALL for each File A record
- C) Use a co-sequential (balanced-line) matching algorithm that reads both files in parallel
- D) Use a random-access VSAM READ for each File A record against File B
Answer
**C) Use a co-sequential (balanced-line) matching algorithm that reads both files in parallel.** The balanced-line algorithm reads both sorted files simultaneously, advancing through each based on key comparison. Since both files are sorted, each record is read exactly once, resulting in approximately 20 million total reads (10M + 10M). This is O(n+m) complexity.
- **Option A** would be O(n*m) — 100 trillion comparisons.
- **Option B** would require 10 million entries in memory (likely exceeding available storage) and 10 million binary searches.
- **Option D** would require 10 million random VSAM reads, each involving index traversal and random I/O.
The co-sequential approach is the classic mainframe batch pattern for matching sorted files and is by far the most efficient for large file matching.
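A minimal sketch of the key-comparison core of a balanced-line match, assuming both files are sorted by account key and that hypothetical paragraphs READ-FILE-A and READ-FILE-B read the next record and set the corresponding key field to HIGH-VALUES at end of file:
PERFORM READ-FILE-A.
PERFORM READ-FILE-B.
PERFORM UNTIL FILE-A-KEY = HIGH-VALUES
          AND FILE-B-KEY = HIGH-VALUES
    EVALUATE TRUE
        WHEN FILE-A-KEY = FILE-B-KEY
            PERFORM PROCESS-MATCHED-PAIR
            PERFORM READ-FILE-A
            PERFORM READ-FILE-B
        WHEN FILE-A-KEY < FILE-B-KEY
            PERFORM PROCESS-A-ONLY
            PERFORM READ-FILE-A
        WHEN OTHER
            PERFORM PROCESS-B-ONLY
            PERFORM READ-FILE-B
    END-EVALUATE
END-PERFORM.
Setting the key to HIGH-VALUES at end of file lets the same comparison logic drain whichever file still has records; the loop ends only when both keys are HIGH-VALUES.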
Question 23 (Code Analysis)
Analyze the performance implications of this COBOL DB2 cursor usage:
PERFORM PROCESS-BRANCHES
VARYING WS-BR-IDX FROM 1 BY 1
UNTIL WS-BR-IDX > WS-BRANCH-COUNT.
PROCESS-BRANCHES.
MOVE WS-BRANCH-CODE(WS-BR-IDX) TO WS-HOST-BRANCH.
EXEC SQL
OPEN ACCT_CURSOR
END-EXEC.
PERFORM FETCH-ACCOUNTS UNTIL SQLCODE = 100.
EXEC SQL
CLOSE ACCT_CURSOR
END-EXEC.
FETCH-ACCOUNTS.
EXEC SQL
FETCH ACCT_CURSOR
INTO :WS-ACCT-NUM, :WS-ACCT-BAL
END-EXEC.
If there are 500 branches with an average of 10,000 accounts each, what is the performance concern?
Answer
The cursor is opened and closed 500 times (once per branch), and 5 million individual FETCH operations are performed (500 branches x 10,000 accounts). The performance concerns are:
1. **Repeated OPEN/CLOSE overhead**: Each OPEN requires DB2 to establish the cursor position, and each CLOSE releases resources. 500 OPEN/CLOSE cycles add significant overhead.
2. **Single-row FETCH**: Each FETCH retrieves one row, requiring 5 million DB2 calls. Each call involves address space switching between the COBOL program and the DB2 subsystem.
**Optimizations:**
1. **Eliminate the loop**: Rewrite the SQL to process all branches in a single cursor OPEN with `WHERE BRANCH_CODE IN (...)` or by using a work table.
2. **Multi-row FETCH**: Use `FETCH FOR n ROWS` to retrieve multiple rows per call, reducing the number of DB2 calls by a factor of n (e.g., FETCH 100 rows at a time reduces calls from 5 million to 50,000).
3. **Single SQL**: If possible, rewrite as a single query with ORDER BY BRANCH_CODE, ACCT_NUMBER and process the results in a single pass.
Question 24 (True/False)
Allocating DFSORT work datasets (SORTWKnn) on separate physical DASD volumes improves sort performance by enabling parallel I/O during the merge phase.
Answer
**True.** DFSORT uses work datasets during the intermediate merge phases of sorting. When SORTWK datasets are on separate physical volumes (ideally on separate channels), DFSORT can read from and write to multiple work files simultaneously, overlapping I/O operations. This parallelism significantly reduces sort elapsed time for large files. Best practice is to allocate 3–6 SORTWK datasets on separate volumes with sufficient space (at least 2x the input file size distributed across all SORTWK files).
Question 25 (Multiple Choice)
A CICS transaction has a response time of 3 seconds. The performance breakdown shows: CPU = 0.05s, DB2 wait = 0.10s, VSAM I/O = 0.05s, task suspend for string waits = 2.70s, other = 0.10s. What is the most likely cause and the recommended fix?
- A) The DB2 queries need optimization; add indexes
- B) The VSAM files need reorganization; reduce CI splits
- C) The CICS VSAM file string number (STRINGS parameter) is too low, causing tasks to wait for file access; increase the STRINGS value
- D) The program is CPU-intensive; use OPTIMIZE(FULL)