Chapter 23 Quiz: Batch Window Engineering


Question 1

What is the critical path in a batch window?

A) The job that takes the longest to execute
B) The longest chain of dependent jobs measured by total elapsed time
C) The path through the batch window that uses the most CPU
D) The sequence of jobs that have the highest priority in the scheduler

Answer: B

Explanation: The critical path is the longest path through the job dependency DAG measured by total elapsed time. It determines the minimum possible batch window duration. A single long job (A) isn't the critical path unless it's part of the longest dependent chain. CPU usage (C) and scheduler priority (D) are unrelated to critical path analysis.
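The critical-path idea can be sketched as a longest-path computation over the job dependency DAG. A minimal sketch in Python (job names, durations, and dependencies here are hypothetical, not from the chapter):

```python
from functools import lru_cache

# Hypothetical job durations (minutes) and predecessor lists.
duration = {"A": 60, "B": 90, "C": 120, "D": 30}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

@lru_cache(maxsize=None)
def finish(job):
    # Earliest finish = own duration + latest predecessor finish.
    return duration[job] + max((finish(p) for p in preds[job]), default=0)

# The critical path length is the latest finish over all jobs:
# here A -> C -> D = 60 + 120 + 30 = 210 minutes.
critical_path_len = max(finish(j) for j in duration)
print(critical_path_len)  # 210
```

Note that B (90 min) is longer than C's successor D, yet A→C→D is still the critical path — the longest *chain*, not the longest *job*, sets the window's lower bound.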


Question 2

A batch window has 5 independent paths. Their elapsed times are: 180, 220, 195, 240, and 210 minutes. Assuming unlimited resources, what is the minimum possible batch window duration?

A) 1,045 minutes (sum of all paths)
B) 240 minutes (longest single path)
C) 220 minutes (median path)
D) 209 minutes (average of all paths)

Answer: B

Explanation: If all paths are independent and resources are unlimited, they can all run in parallel. The window finishes when the longest path completes — 240 minutes. The sum (A) would be the time if all paths ran serially. The median and average are statistically interesting but irrelevant to scheduling.
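The arithmetic, as a sketch — with unlimited resources, independent paths overlap completely, so the window is the max, not the sum:

```python
# Path elapsed times from the question, in minutes.
paths = {"P1": 180, "P2": 220, "P3": 195, "P4": 240, "P5": 210}

serial_window = sum(paths.values())    # if the paths ran one after another
parallel_window = max(paths.values())  # if all paths run concurrently

print(serial_window, parallel_window)  # 1045 240
```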


Question 3

Job EOD-010 has an earliest start time of 2:25 AM and a latest start time of 3:20 AM. What is its slack?

A) 25 minutes
B) 35 minutes
C) 55 minutes
D) 0 minutes (it's on the critical path)

Answer: C

Explanation: Slack = Latest Start - Earliest Start = 3:20 - 2:25 = 55 minutes. This means EOD-010 can be delayed by up to 55 minutes without affecting the overall batch window completion time.
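A minimal sketch of the slack calculation, using the times from the question:

```python
from datetime import datetime

# Slack = latest start - earliest start.
fmt = "%H:%M"
earliest = datetime.strptime("02:25", fmt)
latest = datetime.strptime("03:20", fmt)

slack_minutes = int((latest - earliest).total_seconds() // 60)
print(slack_minutes)  # 55
```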


Question 4

Which of the following is NOT a hidden dependency that can affect batch window scheduling?

A) Two jobs both requiring DISP=OLD on the same dataset
B) Two jobs updating overlapping rows in the same DB2 table
C) Two jobs both reading the same sequential file with DISP=SHR
D) Two jobs both needing tape drives when available drives are exhausted

Answer: C

Explanation: DISP=SHR allows concurrent read access — no contention occurs. DISP=OLD (A) causes dataset enqueue serialization. DB2 row conflicts (B) cause lock contention. Tape drive exhaustion (D) forces one job to wait. Only shared reads (C) create no hidden dependency.


Question 5

A COBOL batch program processes 10 million records. Per-record timing: CPU = 0.02 ms, I/O = 0.05 ms, DB2 = 0.30 ms. What is the approximate elapsed time?

A) 3.3 minutes
B) 16.7 minutes
C) 50.0 minutes
D) 61.7 minutes

Answer: D

Explanation: Total time per record = 0.02 + 0.05 + 0.30 = 0.37 ms. Total time = 10,000,000 × 0.37 ms = 3,700,000 ms = 3,700 seconds ≈ 61.7 minutes. Some overlap between CPU and I/O may occur in practice, but the question asks for the approximate elapsed time assuming sequential processing.
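The same arithmetic as a sketch, assuming strictly sequential per-record processing with no CPU/I-O overlap:

```python
# Per-record component times from the question, in milliseconds.
records = 10_000_000
cpu_ms, io_ms, db2_ms = 0.02, 0.05, 0.30

per_record_ms = cpu_ms + io_ms + db2_ms          # 0.37 ms/record
total_minutes = records * per_record_ms / 1000 / 60

print(round(total_minutes, 1))  # 61.7
```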


Question 6

In the above scenario (Question 5), which optimization would yield the greatest elapsed time reduction?

A) Rewriting the COBOL business logic to reduce CPU time by 50%
B) Increasing BUFNO from 5 to 30 to reduce I/O time by 40%
C) Adding an index to reduce DB2 SQL time by 30%
D) Splitting the job into 2 parallel streams

Answer: D

Explanation:

  • A: saves 50% of the CPU component (0.01 ms/record) → saves ~1.7 minutes.
  • B: saves 40% of the I/O component (0.02 ms/record) → saves ~3.3 minutes.
  • C: saves 30% of the DB2 component (0.09 ms/record) → saves ~15 minutes.
  • D: cuts elapsed roughly in half → saves ~30 minutes.

Parallelization (D) wins because it divides all components, while individual optimizations only affect one component.
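A sketch comparing the four options' savings, using the Question 5 timings and record count:

```python
# 10M records; savings in minutes for each proposed optimization.
records = 10_000_000

def minutes(ms_per_record):
    # Convert a per-record cost in milliseconds to total elapsed minutes.
    return records * ms_per_record / 1000 / 60

baseline = minutes(0.02 + 0.05 + 0.30)          # ~61.7 minutes
savings = {
    "A: -50% CPU":  minutes(0.02 * 0.50),       # ~1.7 min saved
    "B: -40% I/O":  minutes(0.05 * 0.40),       # ~3.3 min saved
    "C: -30% DB2":  minutes(0.30 * 0.30),       # ~15 min saved
    "D: 2 streams": baseline / 2,               # halves every component
}
for option, saved in savings.items():
    print(f"{option}: saves ~{saved:.1f} min")
```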


Question 7

What is the primary purpose of the scheduler's resource management feature?

A) To track CPU utilization for billing
B) To prevent more jobs from running simultaneously than the infrastructure can support
C) To automatically tune job performance
D) To generate resource usage reports for capacity planning

Answer: B

Explanation: Resource management controls concurrency. When you define RESOURCE(DB2-BATCH-THREADS) QUANTITY(8), the scheduler ensures no more than 8 jobs requiring that resource run simultaneously. This prevents overload and ensures predictable performance.


Question 8

Rob Calloway's team found 127 unnecessary dependencies in CNB's batch window. What was the primary impact of removing them?

A) Individual jobs ran faster
B) The critical path shortened by 47 minutes
C) DB2 lock contention decreased
D) I/O throughput improved

Answer: B

Explanation: Removing unnecessary dependencies allowed previously serialized jobs to run in parallel, shortening the critical path. No individual job changed (A). DB2 (C) and I/O (D) weren't directly affected — the improvement came purely from scheduling optimization.


Question 9

A batch job currently takes 50 minutes to process 10 million records. You split it into 4 parallel jobs by key range. What is the THEORETICAL minimum elapsed time?

A) 12.5 minutes
B) 12.5 minutes plus merge overhead
C) 50 minutes (parallelization doesn't help a single job)
D) 25 minutes (only 2x speedup is realistic)

Answer: B

Explanation: Each parallel job processes 2.5 million records ≈ 12.5 minutes. But you need a merge/reconciliation step afterward. The theoretical minimum is 12.5 minutes plus whatever the merge takes. Perfect 4x speedup (A) ignores merge overhead. Option C is wrong — parallelization absolutely helps when you split data across instances. Option D is an arbitrary limit.
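A sketch of the split arithmetic — the merge cost is unspecified in the question, so the value here is purely hypothetical:

```python
# Splitting by key range divides the data-dependent work across streams,
# but a merge/reconciliation step is added back onto the critical path.
total_minutes = 50
streams = 4
merge_minutes = 5  # hypothetical merge cost, not given in the question

parallel_elapsed = total_minutes / streams + merge_minutes
print(parallel_elapsed)  # 17.5
```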


Question 10

Why are time-based dependencies described as "critical path killers"?

A) They force jobs to use more CPU time
B) They create idle gaps in the schedule where no work is running
C) They cause jobs to fail if the time constraint isn't met
D) They prevent the scheduler from optimizing job placement

Answer: B

Explanation: A time dependency like "don't start before 02:00 AM" creates dead time if predecessors finish before that. If predecessors complete at 01:15 AM, 45 minutes of window are wasted. These idle gaps directly extend the critical path without any productive work being done.


Question 11

What is the recommended first step when a batch window is running out of capacity?

A) Upgrade the hardware
B) Extend the window by moving the online start time later
C) Clean up unnecessary dependencies in the job DAG
D) Rewrite the slowest COBOL programs

Answer: C

Explanation: Dependency cleanup is lowest cost, lowest risk, and often highest impact. Hardware upgrades (A) are expensive. Window extension (B) requires business approval and doesn't fix the root cause. Program rewrites (D) are high-effort and risky. Always start with the dependency graph.


Question 12

A checkpoint/restart mechanism in a COBOL batch program must save which of the following?

A) Only the current record count
B) The current position in the input, all accumulators, and enough state to resume processing
C) A complete copy of all working storage
D) Only the DB2 cursor position

Answer: B

Explanation: A checkpoint must capture everything needed to resume processing: input position (key or record count), all running totals and accumulators, and any state variables that affect processing logic. Just the record count (A) loses accumulators. All working storage (C) is overkill and technically difficult. Just the cursor (D) misses non-DB2 state.


Question 13

At 4:30 AM, a critical-path job fails with 90 minutes remaining in the window. The job was 60% complete (checkpoint at 55%). Remaining jobs after this one total 50 minutes. Restart from checkpoint would take 30 minutes. Restart from scratch would take 65 minutes. What should you do?

A) Restart from scratch — it's the safest option
B) Restart from checkpoint — 30 + 50 = 80 minutes, fits within 90
C) Skip the job and run remaining jobs — 50 minutes fits within 90
D) Extend the window — call the CIO

Answer: B

Explanation: Restart from checkpoint (30 min) + remaining jobs (50 min) = 80 minutes, within the 90-minute window with 10 minutes to spare. Restart from scratch (A) = 65 + 50 = 115 minutes — exceeds the window. Skipping (C) may violate data integrity. Extending (D) is a last resort.
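The decision reduces to arithmetic against the remaining window, sketched here with the numbers from the question:

```python
# Minutes remaining in the batch window at the failure.
window_remaining = 90
downstream_jobs = 50  # jobs still to run after this one

options = {
    "checkpoint restart": 30 + downstream_jobs,  # 80 -> fits
    "scratch restart":    65 + downstream_jobs,  # 115 -> blows the window
}
for name, total in options.items():
    verdict = "fits" if total <= window_remaining else "misses"
    print(f"{name}: {total} min ({verdict})")
```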


Question 14

What is "volume elasticity" in batch window capacity planning?

A) The ability of a job to handle varying record sizes
B) How much a job's elapsed time changes per unit of volume growth
C) The maximum volume a job can process before failing
D) The relationship between data compression and processing speed

Answer: B

Explanation: Volume elasticity measures the sensitivity of job duration to volume changes. An elasticity of 1.0 means duration scales linearly with volume (double records = double time). Elasticity < 1.0 means there's fixed overhead that doesn't scale with volume.
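A sketch of the elasticity calculation, using hypothetical before/after measurements (not figures from the chapter):

```python
def elasticity(elapsed_old, elapsed_new, volume_old, volume_new):
    # Elasticity = (% change in elapsed time) / (% change in volume).
    pct_elapsed = (elapsed_new - elapsed_old) / elapsed_old
    pct_volume = (volume_new - volume_old) / volume_old
    return pct_elapsed / pct_volume

# Duration grew 20% while volume grew 25%: fixed overhead absorbs some of
# the growth, so elasticity < 1.0.
print(round(elasticity(50, 60, 1_000_000, 1_250_000), 2))  # 0.8
```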


Question 15

Which DB2 isolation level eliminates ALL lock contention for batch read-only jobs?

A) CS (Cursor Stability)
B) RR (Repeatable Read)
C) UR (Uncommitted Read)
D) RS (Read Stability)

Answer: C

Explanation: ISOLATION(UR) — uncommitted read — takes no locks at all. The tradeoff is that you may read uncommitted data, but for batch reporting jobs that run after all updates are complete, this is safe and eliminates all lock contention with update jobs. CS (A), RR (B), and RS (D) all take some form of read lock.


Question 16

GDG catalog serialization is a concern during batch processing because:

A) GDG generations can only be read sequentially
B) Multiple jobs referencing the same GDG base serialize during OPEN for catalog updates
C) GDGs use more disk space than regular sequential files
D) GDG LIMIT parameter restricts how many jobs can read simultaneously

Answer: B

Explanation: When jobs OPEN a GDG dataset, the ICF catalog must be accessed to resolve the relative generation number. If multiple jobs reference the same GDG base simultaneously, they serialize on the catalog access — even if they're reading different generations. This can add seconds to minutes of delay in high-concurrency environments.


Question 17

A batch window dashboard shows the following at midnight:

  • Jobs completed: 280/500
  • Critical path: 2 minutes behind schedule
  • DB2 thread utilization: 92%
  • Projected completion: 5:48 AM (window ends 6:00 AM)

What is the appropriate response?

A) No action — projected completion is before the deadline
B) Investigate DB2 thread utilization — 92% suggests contention that may worsen
C) Immediately split critical-path jobs into parallel streams
D) Extend the window proactively

Answer: B

Explanation: While the projection shows completion before 6:00 AM, the 92% DB2 thread utilization is a warning sign. If any additional jobs need DB2 threads, queueing will increase. The 2-minute delay and tight 12-minute margin, combined with high DB2 utilization, warrant investigation — not emergency action (C, D), but not complacency either (A).


Question 18

What is the maximum number of jobs that can run simultaneously if you have 8 initiators and the following resource constraints?

  • RESOURCE(DB2-THREAD) QUANTITY(4)
  • RESOURCE(TAPE-DRIVE) QUANTITY(2)
  • All jobs require 1 initiator and 1 DB2 thread
  • 2 jobs also require 1 tape drive each

A) 4 (limited by DB2 threads)
B) 6 (4 DB2-only + 2 tape-capable)
C) 8 (limited by initiators)
D) 2 (limited by tape drives)

Answer: A

Explanation: Every job requires a DB2 thread, and only 4 are available. Even though 8 initiators exist, the DB2 thread constraint limits concurrency to 4 jobs. The tape drives further constrain which specific jobs can run within those 4 slots, but the maximum concurrent count is 4.
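The constraint arithmetic as a sketch — when every job needs one unit of each shared resource, concurrency is capped by the tightest pool:

```python
# Resource pools from the question.
initiators = 8
db2_threads = 4

# Every job needs 1 initiator and 1 DB2 thread, so both pools bind all
# jobs; the binding constraint is the smaller one.
max_concurrent = min(initiators, db2_threads)
print(max_concurrent)  # 4

# The 2 tape drives only constrain WHICH jobs fill those 4 slots
# (at most 2 tape-using jobs at once), not the total count.
```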


Question 19

Which of the following batch window metrics should be tracked as a TREND rather than a point-in-time measurement?

A) Last night's batch completion time
B) The number of batch job failures last month
C) Nightly batch completion time plotted over 90 days
D) Current count of jobs in the scheduler

Answer: C

Explanation: A single night's completion time (A) is a data point. The failure count (B) is a summary statistic. The current job count (D) is a snapshot. Only completion time plotted over 90 days (C) reveals a trend — gradually increasing completion times indicate growing volume that will eventually break the window.


Question 20

CNB's batch window went from 270 minutes (critical path) to 420 minutes after a 30% volume increase. No individual job was significantly slower in isolation. This demonstrates that:

A) DB2 had a performance regression
B) The batch window is a scheduling problem where cumulative growth across many jobs compounds along the critical path
C) The hardware needed to be upgraded
D) COBOL programs don't scale well with volume

Answer: B

Explanation: This is the chapter's threshold concept. No single job was dramatically slower — each grew proportionally with volume. But those increases sum along the critical path: when every job on the path stretches by a modest amount, the window stretches by the total, here 150 minutes. The problem is the scheduling structure (the length and composition of the critical path), not any individual program's performance. This is why the batch window is a scheduling problem, not a performance problem.