Quiz: DB2 Application Patterns
Question 1
What is the primary purpose of the WITH HOLD clause on a cursor declaration in a batch program?
A) To prevent the cursor from being closed by an application CLOSE statement B) To maintain cursor position across COMMIT operations C) To hold locks on fetched rows until the cursor is explicitly closed D) To prevent other programs from accessing the same table
Answer: B Explanation: WITH HOLD allows a cursor to survive a COMMIT. Without it, every COMMIT closes the cursor, requiring it to be reopened and repositioned. This is essential for the commit-checkpoint batch pattern where you commit periodically but continue processing from the same cursor position.
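In COBOL the clause goes on the cursor declaration; a minimal sketch (table, column, and host-variable names are illustrative):

```cobol
           EXEC SQL
               DECLARE ACCT-CSR CURSOR WITH HOLD FOR
                   SELECT ACCOUNT_KEY, BALANCE
                   FROM   ACCOUNTS
                   WHERE  ACCOUNT_KEY > :WS-RESTART-KEY
                   ORDER BY ACCOUNT_KEY
           END-EXEC.
      *    After a COMMIT the cursor remains open and positioned;
      *    the next FETCH continues with the following row.
```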
Question 2
A batch program processes 5 million rows with a commit interval of 2,000. The program abends after committing 3,000,000 rows (1,500 commits) and processing 1,847 additional rows. On restart, how many rows have been durably committed?
A) 3,001,847 B) 3,000,000 C) 5,000,000 D) 0
Answer: B Explanation: Only committed rows are durable. The 1,847 rows processed after the last commit are rolled back by DB2's automatic rollback during abend processing. The restart table records the key at the 3,000,000th row (the last commit point), and the restart logic resumes from that key.
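The restart logic this describes can be sketched as follows (the BATCH_RESTART layout is illustrative):

```cobol
      *    On restart: recover the last committed key...
           EXEC SQL
               SELECT LAST_KEY
               INTO   :WS-RESTART-KEY
               FROM   BATCH_RESTART
               WHERE  JOB_NAME = :WS-JOB-NAME
           END-EXEC.
      *    ...and open the driving cursor past it. The 3,000,000
      *    committed rows are skipped; the 1,847 uncommitted rows
      *    are reprocessed because their changes were rolled back.
```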
Question 3
Which of the following is NOT a factor in calculating the optimal commit interval?
A) Lock escalation threshold (NUMLKTS) B) Active log space consumption C) The number of columns in the table D) Restart granularity requirements
Answer: C Explanation: The optimal commit interval is determined by lock escalation thresholds, log consumption (which depends on row size, not column count per se), restart granularity, and elapsed time per interval. The number of columns is not directly relevant — it's the total row size and the number of locks that matter.
Question 4
What is the "drain lock" problem associated with WITH HOLD cursors?
A) DB2 drains (empties) the buffer pool when a WITH HOLD cursor is open B) DB2 maintains a lock that prevents utilities like REORG from running on the tablespace C) The cursor drains system memory by holding too many row images D) Other programs cannot INSERT into the table while the cursor is open
Answer: B Explanation: A WITH HOLD cursor retains its claim on the tablespace across commits in order to preserve cursor position. Utilities that require exclusive access (REORG, LOAD REPLACE, etc.) must drain all claims before they can run, so they are blocked for as long as the cursor stays open. The mitigation is periodic cursor close-and-reopen cycles, which release the claim and give waiting utilities a window to break in.
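A sketch of the close-and-reopen mitigation (counter and cursor names are illustrative):

```cobol
           IF WS-COMMITS-SINCE-OPEN >= WS-REOPEN-INTERVAL
               EXEC SQL CLOSE ACCT-CSR END-EXEC
               MOVE ZERO TO WS-COMMITS-SINCE-OPEN
      *        Utilities waiting on the tablespace can now run.
               EXEC SQL OPEN ACCT-CSR END-EXEC
      *        The cursor predicate (key > :WS-LAST-KEY)
      *        repositions processing where it left off.
           END-IF.
```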
Question 5
In the commit-checkpoint pattern, why is the restart key stored in a separate BATCH_RESTART table rather than in a flat file?
A) Flat files are slower to read B) The restart table update is within the same DB2 commit scope as the data changes, ensuring atomicity C) DB2 tables have better compression than flat files D) Flat files cannot store timestamp values
Answer: B Explanation: The critical property is atomicity. When the BATCH_RESTART table is updated in the same EXEC SQL COMMIT as the data changes, both succeed or both fail. If the restart key were in a flat file, a commit could succeed but the file write could fail (or vice versa), leaving the restart state inconsistent with the data state.
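The atomicity argument in code form (column names are illustrative):

```cobol
      *    Data changes for this interval were made earlier in
      *    the same unit of work.
           EXEC SQL
               UPDATE BATCH_RESTART
               SET    LAST_KEY  = :WS-LAST-KEY,
                      ROWS_DONE = :WS-ROW-COUNT
               WHERE  JOB_NAME  = :WS-JOB-NAME
           END-EXEC.
           EXEC SQL COMMIT END-EXEC.
      *    The restart-key update and the data changes commit or
      *    roll back together; a flat-file write could never
      *    participate in the unit of work.
```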
Question 6
A positioned update (WHERE CURRENT OF cursor-name) has which advantage over a searched update (WHERE primary-key = value)?
A) It can update any column, not just those in the FOR UPDATE OF clause B) It avoids the overhead of re-locating the row through the index C) It holds fewer locks D) It works with cursors that are not declared FOR UPDATE
Answer: B Explanation: A positioned update uses the cursor's current position to locate the row directly, avoiding an additional index probe. However, this advantage comes with a trade-off: the cursor must be declared FOR UPDATE, which prevents DB2 from using read-only optimizations like list prefetch and certain parallel access paths.
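A minimal positioned-update pair (names are illustrative):

```cobol
           EXEC SQL
               DECLARE UPD-CSR CURSOR FOR
                   SELECT ACCOUNT_KEY, BALANCE
                   FROM   ACCOUNTS
                   WHERE  BRANCH_ID = :WS-BRANCH
                   FOR UPDATE OF BALANCE
           END-EXEC.
      *    ...after a successful FETCH from UPD-CSR...
           EXEC SQL
               UPDATE ACCOUNTS
               SET    BALANCE = :WS-NEW-BALANCE
               WHERE  CURRENT OF UPD-CSR
           END-EXEC.
```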
Question 7
Why is the "universal cursor" (a single cursor with many OR-ed optional predicates) an anti-pattern?
A) DB2 does not allow OR in WHERE clauses B) The optimizer must choose a single access path at BIND time that works for all predicate combinations, typically resulting in a tablespace scan C) OR predicates cause lock escalation D) Universal cursors cannot use WITH HOLD
Answer: B Explanation: Static SQL is bound once, and the optimizer chooses one access path. With optional predicates like (:WS-ACCT = '' OR ACCOUNT_KEY = :WS-ACCT), the optimizer cannot assume that ACCOUNT_KEY will be provided, so it cannot reliably choose the index on ACCOUNT_KEY. The result is typically a tablespace scan. The cursor pool pattern — multiple cursors, each optimized for a specific access path — is the correct alternative.
Question 8
What is the primary defense against SQL injection in COBOL dynamic SQL?
A) Input length validation B) Using DYNAMICRULES(BIND) C) Parameter markers (?) in the PREPARE statement with values supplied via USING on EXECUTE D) Encrypting the SQL statement before PREPARE
Answer: C Explanation: Parameter markers are bound as typed values, not interpolated as text. They cannot contain SQL syntax, cannot modify the query structure, and cannot inject additional statements. This is Layer 1 of the five-layer injection prevention strategy. The other layers (input validation, DYNAMICRULES, audit logging, statement caching) provide defense in depth but are secondary.
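A sketch of the pattern (statement text and host variables are illustrative; WS-SQL-TEXT is a VARCHAR host structure):

```cobol
           MOVE 'UPDATE ACCOUNTS SET STATUS = ?
      -        ' WHERE ACCOUNT_KEY = ?' TO WS-SQL-TEXT.
           EXEC SQL PREPARE DYN-STMT FROM :WS-SQL-TEXT END-EXEC.
      *    The values are bound as typed data, never spliced into
      *    the statement text, so they cannot alter the query.
           EXEC SQL EXECUTE DYN-STMT
                    USING :WS-STATUS, :WS-ACCT-KEY
           END-EXEC.
```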
Question 9
When using dynamic SQL with PREPARE/EXECUTE, what does the DESCRIBE statement do?
A) Describes the table structure to the optimizer B) Returns metadata about the columns in a prepared SELECT statement's result set C) Validates the SQL syntax without executing it D) Describes the parameters required by the prepared statement
Answer: B Explanation: DESCRIBE populates the SQLDA (SQL Descriptor Area) with metadata about the columns returned by a prepared SELECT statement — column names, types, and lengths. This is used by true ad-hoc query tools where the program doesn't know the result structure at compile time. Most COBOL dynamic SQL programs know the structure and don't need DESCRIBE.
Question 10
In a CICS-DB2 environment, what is a "protected thread"?
A) A DB2 thread that is encrypted for security B) A DB2 thread that remains allocated after a CICS task ends, waiting for the next task with the same transaction ID C) A DB2 thread that cannot be cancelled by the DBA D) A DB2 thread that is protected from deadlock detection
Answer: B Explanation: Protected threads (configured via PROTECTNUM on DB2ENTRY) remain allocated between CICS tasks, avoiding the overhead of thread creation and plan loading for the next task. They consume DB2 resources even when idle, so they should only be used for high-volume transactions where the thread creation overhead is significant.
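Protected threads are configured on the DB2ENTRY resource definition; a sketch (resource names and limits are illustrative):

```
CEDA DEFINE DB2ENTRY(PAYENTRY) GROUP(BANKDB2)
     TRANSID(PAYE)  PLAN(PAYPLAN)
     THREADLIMIT(8) PROTECTNUM(4)
     THREADWAIT(POOL)
```

PROTECTNUM(4) keeps up to four threads allocated between tasks; THREADWAIT(POOL) lets overflow work fall through to pool threads when all entry threads are busy.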
Question 11
What is the recommended approach when a batch program receives SQLCODE -911 or -913 (deadlock or timeout)?
A) Issue EXEC CICS ABEND immediately B) Roll back the current unit of work and retry up to a configured maximum C) Ignore the error and continue processing the next row D) Increase the lock timeout value in the DB2 subsystem parameters
Answer: B Explanation: Deadlocks and timeouts are normal in high-concurrency environments. The correct response is to roll back (DB2 has already rolled back the unit of work when it returns -911; for -913 the application must issue the ROLLBACK itself), wait briefly, and retry. A retry limit (typically 3) prevents infinite loops. Treating these as fatal errors is the anti-pattern described in section 12.7.
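A retry skeleton (paragraph names and the retry limit are illustrative):

```cobol
           EVALUATE SQLCODE
               WHEN 0
                   MOVE ZERO TO WS-RETRY-COUNT
               WHEN -911
               WHEN -913
                   EXEC SQL ROLLBACK END-EXEC
                   ADD 1 TO WS-RETRY-COUNT
                   IF WS-RETRY-COUNT > 3
                       PERFORM 9000-FATAL-ERROR
                   ELSE
      *                brief delay, then redo the current unit
      *                of work from the last commit point
                       PERFORM 8100-WAIT-AND-RETRY
                   END-IF
               WHEN OTHER
                   PERFORM 9000-FATAL-ERROR
           END-EVALUATE.
```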
Question 12
Which thread type should be used for a CICS transaction that executes 4,000 times per hour with an average DB2 time of 5ms?
A) Pool thread — the volume is too low for entry threads B) Entry thread with PROTECTNUM based on peak volume analysis C) Pool thread with THREADWAIT(NO) for fast failure D) Entry thread with PROTECTNUM equal to the maximum concurrent transactions
Answer: B Explanation: At 4,000 transactions per hour (about 1.1 per second) with only 5ms of DB2 time each, the cost of creating a thread and loading the plan for every task dominates the actual SQL work. An entry thread with PROTECTNUM of 4-8, sized from time-of-day peak analysis, keeps threads allocated between tasks and eliminates that overhead for this high-volume transaction. Pool threads would work but with avoidable per-task overhead; setting PROTECTNUM to the maximum concurrent count wastes resources during off-peak hours.
Question 13
What is the advantage of DB2 temporal tables over application-managed history?
A) Temporal tables are faster because they use asynchronous writes B) History is maintained automatically by DB2 for all UPDATE and DELETE operations, eliminating the need for application history logic in every program C) Temporal tables don't consume any additional disk space D) Temporal tables allow you to modify historical data
Answer: B Explanation: Temporal tables automate history maintenance at the DB2 engine level. Every UPDATE and DELETE automatically captures the previous version in the history table. This eliminates the need for every COBOL program to include its own history insertion logic — a significant reduction in code and a guarantee that no program can modify a row without history being captured.
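At the DDL level, enabling system-period versioning looks roughly like this (DB2 12 for z/OS style; table and column names are illustrative):

```sql
ALTER TABLE ACCOUNTS
    ADD COLUMN SYS_START TIMESTAMP(12) NOT NULL
        GENERATED ALWAYS AS ROW BEGIN;
ALTER TABLE ACCOUNTS
    ADD COLUMN SYS_END TIMESTAMP(12) NOT NULL
        GENERATED ALWAYS AS ROW END;
ALTER TABLE ACCOUNTS
    ADD COLUMN TRANS_START TIMESTAMP(12)
        GENERATED ALWAYS AS TRANSACTION START ID;
ALTER TABLE ACCOUNTS
    ADD PERIOD SYSTEM_TIME (SYS_START, SYS_END);

CREATE TABLE ACCOUNTS_HIST LIKE ACCOUNTS;

ALTER TABLE ACCOUNTS
    ADD VERSIONING USE HISTORY TABLE ACCOUNTS_HIST;
```

From then on, every UPDATE and DELETE writes the prior row image to ACCOUNTS_HIST with no application code involved.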
Question 14
Why is partition detach preferred over mass DELETE for archiving old data?
A) Partition detach is a metadata-only operation that completes in milliseconds regardless of data volume B) DELETE cannot remove data from partitioned tables C) Partition detach automatically compresses the archived data D) DELETE requires the table to be taken offline
Answer: A Explanation: ALTER TABLE ... DETACH PARTITION is a metadata operation — it removes the partition from the table's definition without touching the data pages. A DELETE of millions of rows generates millions of log records, holds locks for the duration, and leaves empty pages that must be reclaimed by REORG. Detach is instantaneous and clean.
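For example (partition and target-table names are illustrative):

```sql
-- Metadata-only: the partition's data pages are untouched
ALTER TABLE TRANSACTION_HISTORY
    DETACH PARTITION PART_2019 INTO ARCH_2019;
```

Compare a mass DELETE of the same rows, which would log every deleted row, hold locks for the duration, and leave empty pages for REORG to reclaim.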
Question 15
In the cursor pool pattern for CICS, why are multiple cursors preferred over a single parameterized cursor?
A) CICS limits each program to one cursor per SQL call B) Each cursor can be backed by a different index, allowing the optimizer to choose the optimal access path for each search mode C) Multiple cursors consume less memory than a single cursor D) DB2 can execute multiple cursors in parallel
Answer: B Explanation: The cursor pool pattern declares separate cursors for each search mode (by name, by MRN, by DOB+name, etc.), each with predicates that align with a specific index. The optimizer can choose the optimal index for each cursor because the predicates are fixed. A single cursor with optional predicates forces the optimizer to choose one access path that works for all combinations, typically resulting in a tablespace scan.
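A two-cursor excerpt of the pattern (table, index assumptions, and host variables are illustrative):

```cobol
      *    Backed by the index on (LAST_NAME, FIRST_NAME)
           EXEC SQL DECLARE PAT-BY-NAME CURSOR FOR
               SELECT PATIENT_ID, LAST_NAME, FIRST_NAME
               FROM   PATIENTS
               WHERE  LAST_NAME = :WS-LNAME
           END-EXEC.
      *    Backed by the unique index on MRN
           EXEC SQL DECLARE PAT-BY-MRN CURSOR FOR
               SELECT PATIENT_ID, LAST_NAME, FIRST_NAME
               FROM   PATIENTS
               WHERE  MRN = :WS-MRN
           END-EXEC.
      *    At run time, EVALUATE the search mode and OPEN only
      *    the matching cursor; each has its own access path.
```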
Question 16
A CICS transaction holds a DB2 thread while waiting 500ms for an MQ response. This is an example of which anti-pattern?
A) The uncommitted marathon B) The thread hog C) The universal cursor D) The cursor that ate the workfile
Answer: B Explanation: The thread hog anti-pattern occurs when a CICS transaction holds a DB2 thread during non-DB2 work. The 500ms MQ wait wastes a DB2 thread that could serve other transactions. The fix is to restructure: perform MQ operations first, then enter a tight DB2 sequence (SQL operations followed immediately by SYNCPOINT).
Question 17
What does DYNAMICRULES(BIND) control?
A) Whether dynamic SQL statements are cached B) The authorization context for dynamic SQL — it inherits the static SQL (plan) authorization rather than the user's personal authorization C) Whether BIND operations can use dynamic SQL D) The maximum length of dynamic SQL statements
Answer: B Explanation: DYNAMICRULES(BIND) restricts dynamic SQL to the same authorization as the plan's static SQL. This means dynamic SQL can only access objects that the plan owner is authorized for, not everything the end user might have access to. This is Layer 3 of the injection prevention strategy — limiting the blast radius of any injected SQL.
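The option is set at bind time; for example (collection and member names are illustrative):

```
BIND PACKAGE(BANKCOLL) MEMBER(CLAIMPGM) -
     DYNAMICRULES(BIND) ISOLATION(CS)
```

Any dynamic SQL the program prepares then runs under the package owner's authorization, not the end user's.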
Question 18
A scrollable cursor (DECLARE ... SCROLL CURSOR FOR) on a result set of 500,000 rows causes performance problems because:
A) Scrollable cursors hold exclusive locks on all rows B) DB2 must materialize the entire result set in a workfile before the first FETCH C) Scrollable cursors cannot use indexes D) Each FETCH PRIOR requires a full table scan
Answer: B Explanation: Scrollable cursors require DB2 to build the complete result set in workfile space before returning the first row. For large result sets, this means significant TEMP space consumption, an initial delay, and potential impact on other workfile users. The mitigation is to always limit scrollable cursor result sets with FETCH FIRST n ROWS ONLY.
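The mitigation in cursor form (names and the row limit are illustrative):

```cobol
           EXEC SQL
               DECLARE SCR-CSR INSENSITIVE SCROLL CURSOR FOR
                   SELECT ACCOUNT_KEY, BALANCE
                   FROM   ACCOUNTS
                   WHERE  BRANCH_ID = :WS-BRANCH
                   ORDER BY BALANCE DESC
                   FETCH FIRST 200 ROWS ONLY
           END-EXEC.
      *    The workfile materializes at most 200 rows, not 500,000.
```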
Question 19
In the commit-checkpoint pattern, why does the restart table use MERGE instead of separate SELECT/INSERT/UPDATE logic?
A) MERGE is faster because it avoids a separate SELECT B) MERGE handles both the first-run case (INSERT) and restart case (UPDATE) atomically in a single statement, reducing code and eliminating race conditions C) MERGE uses less log space than separate statements D) DB2 requires MERGE for restart tables
Answer: B Explanation: MERGE combines the "if not exists, insert; if exists, update" logic into a single atomic statement. On the first run, there's no row in the restart table, so MERGE inserts. On subsequent commits (and restarts), the row exists, so MERGE updates. This eliminates the need for a SELECT to check existence followed by conditional INSERT or UPDATE — less code, no race condition between the check and the modification.
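One way to write the restart MERGE (column names are illustrative):

```sql
MERGE INTO BATCH_RESTART R
USING (VALUES (:WS-JOB-NAME, :WS-LAST-KEY))
      AS N (JOB_NAME, LAST_KEY)
ON    R.JOB_NAME = N.JOB_NAME
WHEN MATCHED THEN
    UPDATE SET R.LAST_KEY       = N.LAST_KEY,
               R.LAST_COMMIT_TS = CURRENT TIMESTAMP
WHEN NOT MATCHED THEN
    INSERT (JOB_NAME, LAST_KEY, LAST_COMMIT_TS)
    VALUES (N.JOB_NAME, N.LAST_KEY, CURRENT TIMESTAMP);
```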
Question 20
You are designing the DB2 architecture for the HA Banking System. The nightly batch must process 2 million transactions in 3.5 hours. The ACCOUNTS table has 8 partitions. What is the most effective architectural approach?
A) A single batch program with a commit interval of 10,000 rows B) Eight parallel batch jobs, each processing one partition with its own commit-checkpoint cycle C) A set-based UPDATE that processes all 2 million rows in a single SQL statement D) A CICS-initiated batch process using protected threads
Answer: B Explanation: Partition-aware parallel processing divides the work across 8 jobs, each processing ~250,000 rows independently. Each job has its own commit-checkpoint cycle, its own lock scope (limited to one partition), and can be restarted independently. This approach reduces elapsed time by ~8x compared to serial processing, contains lock scope to prevent escalation across partitions, and provides granular restartability. A single program (A) would take too long; a single SQL statement (C) has no commit checkpointing, risk of lock escalation, and no restartability; CICS batch (D) is inappropriate for this volume.
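Each of the eight jobs can drive the same program with its partition's key range (host variables are illustrative; the range values would come from the job's parm and match the partition limit keys):

```cobol
           EXEC SQL
               DECLARE PART-CSR CURSOR WITH HOLD FOR
                   SELECT ACCOUNT_KEY, BALANCE
                   FROM   ACCOUNTS
                   WHERE  ACCOUNT_KEY BETWEEN :WS-PART-LO
                                          AND :WS-PART-HI
                     AND  ACCOUNT_KEY > :WS-RESTART-KEY
                   ORDER BY ACCOUNT_KEY
           END-EXEC.
      *    Locks stay within one partition; each job commits and
      *    restarts independently via its own BATCH_RESTART row.
```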