Chapter 23 Quiz: Advanced DB2 Programming

Test your understanding of dynamic SQL, scrollable cursors, stored procedures, multi-row operations, LOBs, packages/plans, and SQL tuning in COBOL/DB2 programs. Each question is followed by a hidden answer -- try to answer before revealing it.


Question 1

What is the fundamental difference between EXECUTE IMMEDIATE and PREPARE/EXECUTE for dynamic SQL in COBOL?

Show Answer **EXECUTE IMMEDIATE** combines the prepare and execute steps into a single statement. It is used for SQL statements that are executed only once and that do not return result sets (typically DDL statements like GRANT, CREATE, or non-query DML like INSERT/UPDATE/DELETE without host variable output). **PREPARE/EXECUTE** is a two-step process. PREPARE compiles the SQL text into an executable form and stores it in a statement name. EXECUTE then runs the prepared statement, optionally with host variables in the USING clause. This approach is more efficient when the same SQL is executed multiple times (the PREPARE cost is paid once), and it supports parameter markers (?), which allow different values to be substituted on each EXECUTE without re-preparing. For any SQL that returns a result set (SELECT), you must use PREPARE followed by OPEN CURSOR -- EXECUTE IMMEDIATE cannot process SELECT statements.
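The contrast can be sketched in COBOL as follows (WS-SQL-TEXT, STMT1, and the table and host variable names are illustrative; WS-SQL-TEXT would be a VARCHAR-style host variable holding the statement text):

       MOVE 'DELETE FROM WORK_TBL' TO WS-SQL-TEXT
       EXEC SQL
           EXECUTE IMMEDIATE :WS-SQL-TEXT
       END-EXEC

       MOVE 'UPDATE ACCOUNTS SET STATUS = ? WHERE ACCT_ID = ?'
         TO WS-SQL-TEXT
       EXEC SQL
           PREPARE STMT1 FROM :WS-SQL-TEXT
       END-EXEC
       EXEC SQL
           EXECUTE STMT1 USING :WS-STATUS, :WS-ACCT-ID
       END-EXEC

The EXECUTE IMMEDIATE form pays the full prepare cost on every execution; the PREPARE/EXECUTE form pays it once and then substitutes new values for the parameter markers on each EXECUTE.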

Question 2

Which of the following correctly declares a scrollable cursor in DB2 for z/OS?

A) DECLARE CSR1 CURSOR SCROLLABLE FOR SELECT ...
B) DECLARE CSR1 SCROLL CURSOR FOR SELECT ...
C) DECLARE CSR1 CURSOR WITH SCROLL FOR SELECT ...
D) DECLARE CSR1 DYNAMIC SCROLL CURSOR FOR SELECT ...

Show Answer **B) `DECLARE CSR1 SCROLL CURSOR FOR SELECT ...`** The correct DB2 for z/OS syntax places the keyword SCROLL before CURSOR. You can also specify sensitivity: `DECLARE CSR1 SENSITIVE STATIC SCROLL CURSOR FOR SELECT ...` or `INSENSITIVE SCROLL CURSOR`. Option A uses the wrong keyword order. Option C uses a non-standard WITH clause. Option D omits the required sensitivity keyword -- DB2 for z/OS supports `SENSITIVE DYNAMIC SCROLL`, but bare `DYNAMIC SCROLL` is not valid syntax.

Question 3

A COBOL program performs a multi-row FETCH and receives SQLCODE = +100. The SQLERRD(3) field contains the value 47. What does this mean?

Show Answer When a multi-row FETCH returns SQLCODE +100, it means the end of the result set was reached during the fetch. However, **some rows were still returned** in this final fetch. The value in **SQLERRD(3)** indicates the actual number of rows fetched. In this case, 47 rows were returned in the final fetch even though the program may have requested a larger rowset (e.g., 100 rows). The program should process these 47 rows before closing the cursor. Ignoring the partial rowset would mean losing the last 47 rows of the result set. The correct pattern is:
       EXEC SQL
           FETCH NEXT ROWSET FROM CSR1
             FOR 100 ROWS
             INTO :HV-ARRAY
       END-EXEC
       IF SQLCODE = 0 OR SQLCODE = 100
           MOVE SQLERRD(3) TO WS-ROWS-FETCHED
           PERFORM VARYING WS-IDX FROM 1 BY 1
             UNTIL WS-IDX > WS-ROWS-FETCHED
               PERFORM 500-PROCESS-ROW
           END-PERFORM
       END-IF

Question 4

What is the purpose of parameter markers (?) in dynamic SQL, and why are they preferred over string concatenation for building SQL with variable values?

Show Answer Parameter markers are placeholders in a dynamic SQL statement that are replaced with actual host variable values at EXECUTE time. For example:
       MOVE 'SELECT BALANCE FROM ACCOUNTS WHERE ACCT_ID = ?'
         TO WS-SQL-STMT
       EXEC SQL PREPARE STMT1 FROM :WS-SQL-STMT END-EXEC
       EXEC SQL EXECUTE STMT1 USING :WS-ACCT-ID END-EXEC
Parameter markers are preferred over string concatenation for three reasons:

1. **SQL injection prevention**: Concatenating user input directly into SQL text allows malicious values (e.g., `' OR 1=1 --`) to alter the query logic. Parameter markers are bound separately and are never interpreted as SQL syntax.
2. **Performance**: When the SQL text is identical across executions (only the parameter values change), DB2 can reuse the prepared access path from the dynamic statement cache. With string concatenation, each unique SQL text requires a separate prepare.
3. **Data type handling**: DB2 manages type conversion and quoting for parameter values automatically. String concatenation requires the programmer to handle quoting, escaping, decimal formatting, and date formatting manually.

Question 5

True or False: A DB2 stored procedure written in COBOL runs in the same address space as the calling COBOL program.

Show Answer **False.** A DB2 stored procedure written in COBOL runs in a **DB2 stored procedure address space** (WLM-managed), not in the caller's address space. When a COBOL program issues `EXEC SQL CALL 'MYPROC' ... END-EXEC`, DB2 routes the call to a WLM-managed stored procedure address space where the COBOL stored procedure load module resides. This is different from a COBOL `CALL 'MYPROC'` statement, which invokes a subprogram in the same address space. The DB2 stored procedure runs under DB2's control, has its own thread, and can access DB2 resources independently. The calling program and the stored procedure communicate through parameter passing and result sets, not through shared memory. On DB2 for z/OS, stored procedures can run in different WLM environments with different priorities and resource limits, providing workload isolation.
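For illustration, the procedure definition itself records which WLM environment the COBOL load module runs in. A hedged sketch (the procedure, load module, and environment names are made up):

       CREATE PROCEDURE BANK.CALC_INTEREST
           (IN  P_ACCT_ID  CHAR(10),
            OUT P_INTEREST DECIMAL(11,2))
           LANGUAGE COBOL
           EXTERNAL NAME CALCINT
           PARAMETER STYLE GENERAL
           WLM ENVIRONMENT WLMENV1;

When a caller issues `EXEC SQL CALL BANK.CALC_INTEREST ... END-EXEC`, DB2 schedules the CALCINT load module in the WLMENV1 address space, not in the caller's.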

Question 6

A COBOL program uses FETCH ROWSET to retrieve 100 rows at a time. What must be true about the host variables used in the INTO clause?

Show Answer The host variables in the INTO clause must be declared as **arrays** (OCCURS clause) with a dimension at least as large as the rowset size. Each column in the SELECT list requires a corresponding array host variable. If indicator variables are used, they must also be arrays of the same dimension. Note that COBOL does not permit OCCURS at level 01, so the arrays are declared as subordinate items of a group. For example, to fetch 100 rows with three columns:
       01  HV-ROWSET-AREA.
           05  HV-ACCT-ID    PIC X(10)
                             OCCURS 100 TIMES.
           05  HV-BALANCE    PIC S9(11)V99 COMP-3
                             OCCURS 100 TIMES.
           05  HV-STATUS     PIC X(01)
                             OCCURS 100 TIMES.
           05  IND-BALANCE   PIC S9(04) COMP
                             OCCURS 100 TIMES.
The FETCH statement references the arrays without subscripts -- DB2 populates the array elements automatically:
       EXEC SQL
           FETCH NEXT ROWSET FROM CSR1
             FOR 100 ROWS
             INTO :HV-ACCT-ID, :HV-BALANCE :IND-BALANCE,
                  :HV-STATUS
       END-EXEC

Question 7

What is the difference between ACQUIRE(USE) and ACQUIRE(ALLOCATE) in a DB2 BIND command?

Show Answer **ACQUIRE(USE)** causes DB2 to acquire locks on a table space or partition only when the application first accesses that object. This is the default and is generally preferred because it minimizes the duration of locks and reduces contention. **ACQUIRE(ALLOCATE)** causes DB2 to acquire locks on **all** table spaces and partitions referenced by the plan at the time the plan is first allocated (when the thread is created). Note that ACQUIRE is a BIND PLAN option, not a BIND PACKAGE option, and ACQUIRE(ALLOCATE) requires RELEASE(DEALLOCATE). This acquires locks earlier than necessary, potentially increasing contention, but it guarantees that all needed resources are available upfront, avoiding lock-related failures later in the transaction. ACQUIRE(ALLOCATE) is sometimes used in critical batch programs where failing partway through due to a lock timeout would be more expensive than holding locks from the start. For most online applications, ACQUIRE(USE) is the better choice because it keeps lock durations short.
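A hedged sketch of the bind for a critical batch plan (plan and DBRM member names are made up):

       BIND PLAN(BATCHPLN) MEMBER(RECONPGM) -
            ACQUIRE(ALLOCATE) RELEASE(DEALLOCATE) -
            ISOLATION(CS)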

Question 8

A bank's COBOL program stores scanned check images in a DB2 BLOB column. Which of the following is the correct way to declare a BLOB host variable in COBOL?

A) 01 WS-IMAGE PIC X(5000000).
B) 01 WS-IMAGE SQL TYPE IS BLOB(5M).
C) 01 WS-IMAGE USAGE IS BLOB(5000000).
D) 01 WS-IMAGE PIC X(5000000) USAGE DISPLAY.

Show Answer **B) `01 WS-IMAGE SQL TYPE IS BLOB(5M).`** The `SQL TYPE IS BLOB(n)` declaration is the correct way to declare a BLOB host variable in COBOL. The DB2 precompiler expands this into a group structure with a length field and a data field. The expansion looks approximately like:
       01  WS-IMAGE.
           49  WS-IMAGE-LENGTH  PIC S9(09) COMP.
           49  WS-IMAGE-DATA    PIC X(5000000).
Option A would work for small binary data but does not carry the length information that DB2 expects for LOB processing. Option C uses invalid syntax. Option D is just a regular alphanumeric field. The SQL TYPE IS declaration tells the precompiler to handle the variable as a LOB with proper length management.

Question 9

What is the purpose of the RELEASE(COMMIT) vs. RELEASE(DEALLOCATE) bind option?

Show Answer **RELEASE(COMMIT)** causes DB2 to release all locks and close all cursors (except those declared WITH HOLD) at each COMMIT point. This minimizes the duration of locks and reduces contention with other applications. It is the preferred setting for most online transaction programs. **RELEASE(DEALLOCATE)** causes DB2 to hold locks until the application plan is deallocated -- typically when the thread terminates or the program ends. Locks accumulate across multiple commits and are not released until the program finishes. RELEASE(DEALLOCATE) can improve performance in batch programs that repeatedly access the same pages because it avoids the overhead of releasing and re-acquiring locks at each commit. However, it holds locks for much longer, which increases contention with other applications. For online systems with many concurrent users, RELEASE(COMMIT) is almost always correct.
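As a sketch, the option is specified at bind time; a long-running batch package might be bound like this (collection and member names are illustrative):

       BIND PACKAGE(BATCHCOL) MEMBER(POSTPGM) -
            RELEASE(DEALLOCATE) ISOLATION(CS)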

Question 10

In a COBOL stored procedure, what is the difference between COMMIT ON RETURN YES and COMMIT ON RETURN NO?

Show Answer **COMMIT ON RETURN YES** means that DB2 will automatically issue a COMMIT when the stored procedure returns to the caller successfully (SQLCODE >= 0). This is useful when the stored procedure represents a complete unit of work and the caller does not need to do additional work within the same transaction. **COMMIT ON RETURN NO** (the default) means that DB2 does not commit when the stored procedure returns. The caller retains control of the transaction boundary and can do additional work before committing or can roll back the stored procedure's changes. This is essential when the caller needs to invoke multiple stored procedures or perform additional SQL as part of a single atomic transaction. For a fund transfer stored procedure that debits one account and credits another, you would typically use COMMIT ON RETURN NO so that the calling program can verify both operations succeeded before committing.
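For the fund-transfer example, the option appears on the CREATE PROCEDURE definition. A hedged sketch with made-up names:

       CREATE PROCEDURE BANK.XFER_FUNDS
           (IN P_FROM_ACCT CHAR(10),
            IN P_TO_ACCT   CHAR(10),
            IN P_AMOUNT    DECIMAL(11,2))
           LANGUAGE COBOL
           EXTERNAL NAME XFERPGM
           PARAMETER STYLE GENERAL
           WLM ENVIRONMENT WLMENV1
           COMMIT ON RETURN NO;

With COMMIT ON RETURN NO, the caller can verify both the debit and the credit before issuing its own COMMIT.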

Question 11

What SQLCODE indicates a unique constraint violation, and how should a COBOL program handle it when inserting a transaction record?

Show Answer **SQLCODE -803** indicates a unique constraint violation (duplicate key on INSERT or UPDATE). When a COBOL program inserting a transaction record receives this SQLCODE, it should:

1. **Not treat it as an unexpected error** -- this is a predictable business condition (e.g., a duplicate transaction ID).
2. **Check SQLERRM** to identify which index caused the violation (SQLERRM contains the index name).
3. **Take appropriate business action**: generate a new unique key and retry, reject the transaction with a user-friendly message, or log it for manual review.
       EXEC SQL
           INSERT INTO TRANSACTIONS
           (TRANS_ID, ACCT_ID, AMOUNT, TRANS_DATE)
           VALUES
           (:WS-TRANS-ID, :WS-ACCT-ID,
            :WS-AMOUNT, CURRENT DATE)
       END-EXEC
       EVALUATE SQLCODE
           WHEN 0
               CONTINUE
           WHEN -803
               PERFORM 900-HANDLE-DUPLICATE
           WHEN OTHER
               PERFORM 999-SQL-ERROR
       END-EVALUATE

Question 12

A program uses PREPARE to compile a dynamic SELECT statement. The user runs the same query 50 times with different parameter values. How many times should the PREPARE be executed?

Show Answer The PREPARE should be executed **once**. After the SQL text is prepared into a statement name, the same prepared statement can be executed multiple times with different parameter values via the USING clause. Re-preparing the same SQL text 50 times would waste CPU because DB2 would parse and optimize the statement each time (though DB2's dynamic statement cache may mitigate some of this cost). The correct pattern is:
       EXEC SQL
           PREPARE STMT1 FROM :WS-SQL-TEXT
       END-EXEC
       PERFORM 50 TIMES
           ACCEPT WS-PARAM-VALUE FROM CONSOLE
           EXEC SQL
               EXECUTE STMT1 USING :WS-PARAM-VALUE
           END-EXEC
       END-PERFORM
If the SQL text itself changes (e.g., different columns or tables), then a new PREPARE is needed. But if only the parameter values change, a single PREPARE with multiple EXECUTEs is both correct and efficient.

Question 13

What is the role of the SQLDA (SQL Descriptor Area) in fully dynamic SQL?

Show Answer The SQLDA is a data structure that describes the columns of a dynamic SQL result set when the column count, names, and types are not known at compile time. It is used with the DESCRIBE statement after a PREPARE to obtain metadata about the result set. The SQLDA contains:

- **SQLN**: The number of SQLVAR entries allocated (set by the program before DESCRIBE)
- **SQLD**: The actual number of columns in the result set (set by DB2 after DESCRIBE)
- **SQLVAR array**: One entry per column, each containing:
  - SQLTYPE: The data type code (e.g., 452 for CHAR, 484 for DECIMAL)
  - SQLLEN: The length or precision/scale
  - SQLDATA: Pointer to the host variable buffer
  - SQLIND: Pointer to the indicator variable
  - SQLNAME: The column name

After DESCRIBE fills the SQLDA, the program allocates appropriate buffers for each column based on SQLTYPE and SQLLEN, sets the SQLDATA pointers, and then uses `FETCH USING DESCRIPTOR :SQLDA-NAME` to retrieve rows into those buffers. This is the technique used by ad-hoc query tools that cannot know the result set structure at compile time.
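The overall flow can be sketched as follows, assuming CSR1 was declared with `DECLARE CSR1 CURSOR FOR STMT1` and that 600-ALLOCATE-BUFFERS is a made-up paragraph performing the SET ADDRESS logic that points each SQLDATA entry at a buffer:

       EXEC SQL
           PREPARE STMT1 FROM :WS-SQL-TEXT
       END-EXEC
       EXEC SQL
           DESCRIBE STMT1 INTO :SQLDA
       END-EXEC
       PERFORM 600-ALLOCATE-BUFFERS
       EXEC SQL
           OPEN CSR1
       END-EXEC
       EXEC SQL
           FETCH CSR1 USING DESCRIPTOR :SQLDA
       END-EXEC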

Question 14

What is the difference between a DB2 DBRM (Database Request Module) and a package?

Show Answer A **DBRM** is the output of the DB2 precompiler. When the COBOL source containing EXEC SQL statements is precompiled, the precompiler extracts all SQL statements and places them into a DBRM, which is a member of a partitioned data set. The DBRM contains the SQL text but no access path information -- it is a raw, unbound representation of the SQL. A **package** is the result of BINDing a DBRM. During BIND, DB2's optimizer analyzes each SQL statement in the DBRM, selects access paths (index scans, table space scans, join methods), and stores the optimized execution plans in the DB2 catalog. The package is a DB2 catalog object, not a PDS member. The relationship is: COBOL source -> Precompile -> DBRM -> BIND -> Package -> included in Plan. A DBRM is a build-time artifact. A package is a runtime artifact that DB2 uses to execute the SQL. When table statistics change or indexes are added, rebinding the DBRM into a new package allows DB2 to choose better access paths without changing the COBOL source.
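The DBRM-to-package-to-plan chain can be sketched with DSN subcommands (collection, member, plan, and qualifier names are made up):

       BIND PACKAGE(BANKCOL) MEMBER(ACCTPGM) -
            QUALIFIER(PROD) ISOLATION(CS) ACTION(REPLACE)
       BIND PLAN(BANKPLAN) PKLIST(BANKCOL.*) -
            ACTION(REPLACE)

After a REORG or RUNSTATS, rerunning the BIND PACKAGE (or a REBIND) lets the optimizer pick new access paths from the same DBRM without touching the COBOL source.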

Question 15

True or False: When using a scrollable cursor with FETCH ABSOLUTE, if you specify a position beyond the last row, DB2 returns SQLCODE -802.

Show Answer **False.** When FETCH ABSOLUTE specifies a position beyond the last row of the result set, DB2 returns **SQLCODE +100** (not found), not -802. SQLCODE -802 indicates a data conversion or numeric overflow error, which is unrelated to cursor positioning. For example, if the result set has 200 rows and you execute `FETCH ABSOLUTE 250 FROM CSR1`, DB2 returns SQLCODE +100 and the cursor is positioned **after the last row**. A subsequent FETCH PRIOR would then return the last row (row 200). Similarly, `FETCH ABSOLUTE 0` positions the cursor before the first row and also returns SQLCODE +100. `FETCH ABSOLUTE -1` fetches the last row (negative values count from the end).
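A short sketch of positioned fetches against a scrollable cursor (the host variable names are illustrative):

       EXEC SQL
           FETCH ABSOLUTE :WS-ROW-NUM FROM CSR1
             INTO :WS-ACCT-ID, :WS-BALANCE
       END-EXEC
       IF SQLCODE = +100
           EXEC SQL
               FETCH PRIOR FROM CSR1
                 INTO :WS-ACCT-ID, :WS-BALANCE
           END-EXEC
       END-IF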

Question 16

A COBOL program calls a stored procedure that returns a result set. What DB2 statements must the calling program use to process the result set?

Show Answer The calling program must use three statements to process a result set from a stored procedure:

1. **ASSOCIATE LOCATORS** -- Associates result set locators with the procedure call. A locator is a handle that references the result set returned by the procedure.
2. **ALLOCATE CURSOR** -- Allocates a cursor based on the result set locator. This gives the calling program a cursor it can FETCH from.
3. **FETCH** -- Uses the allocated cursor to retrieve rows from the result set.
       01  WS-RS-LOC1      SQL TYPE IS RESULT-SET-LOCATOR VARYING.

       EXEC SQL
           CALL BANK.ACCTLIST(:WS-BRANCH, :WS-STATUS,
                              :WS-COUNT)
       END-EXEC

       EXEC SQL
           ASSOCIATE LOCATORS (:WS-RS-LOC1)
             WITH PROCEDURE BANK.ACCTLIST
       END-EXEC

       EXEC SQL
           ALLOCATE CSR1 CURSOR
             FOR RESULT SET :WS-RS-LOC1
       END-EXEC

       EXEC SQL
           FETCH CSR1
             INTO :WS-ACCT-ID, :WS-BALANCE
       END-EXEC
If the stored procedure returns multiple result sets, you declare multiple locator variables and associate them in order.

Question 17

What does the DYNAMICRULES(BIND) option control, and how does it differ from DYNAMICRULES(RUN)?

Show Answer DYNAMICRULES controls the **authorization ID** and **default qualifier** used for dynamic SQL statements at runtime. **DYNAMICRULES(BIND)** means that dynamic SQL statements use the authorization ID and default qualifier that were specified at BIND time (the OWNER and QUALIFIER bind options). This makes dynamic SQL behave like static SQL from an authorization perspective -- the end user's authority is not checked; instead, the binder's authority applies. **DYNAMICRULES(RUN)** means that dynamic SQL statements use the authorization ID of the user running the program (the primary auth ID of the current thread). The default qualifier for unqualified table names is also the runtime user's ID. For a banking application, DYNAMICRULES(BIND) is often preferred because:

- It allows the application to control table access through the bind authority rather than granting each user direct table privileges.
- It prevents users from executing arbitrary SQL against tables they should not access directly.
- It ensures that unqualified table names resolve consistently regardless of who runs the program.
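A sketch of the bind (collection, member, and authorization IDs are made up):

       BIND PACKAGE(BANKCOL) MEMBER(DYNQRY) -
            OWNER(BANKAPP) QUALIFIER(PROD) -
            DYNAMICRULES(BIND)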

Question 18

Explain the difference between SENSITIVE STATIC SCROLL and INSENSITIVE SCROLL cursors.

Show Answer **INSENSITIVE SCROLL** creates a snapshot of the result set at OPEN time. The result set is materialized into a temporary table. Changes made to the underlying base tables after the cursor is opened (by the same or other transactions) are **not visible** through the cursor. The result set is fixed and immutable. This provides consistent, repeatable reads but consumes temporary storage for the materialized result. **SENSITIVE STATIC SCROLL** also materializes the result set at OPEN time, but it checks whether the base table rows have been updated or deleted when they are fetched. If a row has been updated, the cursor returns the **new values**. If a row has been deleted, DB2 reports a **hole** (the row is detected as missing, and the FETCH returns SQLCODE +222). New rows inserted after the cursor was opened are **not visible** (the set of row positions is fixed at OPEN time). For a bank statement viewer, INSENSITIVE is usually appropriate because you want a consistent point-in-time view of the transactions. For a teller queue application where updated status should be visible, SENSITIVE STATIC might be preferred.
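For the teller queue case, a SENSITIVE STATIC cursor with a hole check might look like this (the table and paragraph names are illustrative):

       EXEC SQL
           DECLARE CSR1 SENSITIVE STATIC SCROLL CURSOR FOR
             SELECT ACCT_ID, STATUS
               FROM TELLER_QUEUE
       END-EXEC

       EXEC SQL
           OPEN CSR1
       END-EXEC

       EXEC SQL
           FETCH SENSITIVE NEXT FROM CSR1
             INTO :WS-ACCT-ID, :WS-STATUS
       END-EXEC
       IF SQLCODE = +222
           PERFORM 510-SKIP-DELETED-ROW
       END-IF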

Question 19

A COBOL program encounters SQLCODE -911 during an UPDATE statement. What happened, and what should the program do?

Show Answer SQLCODE -911 indicates that the current unit of work has been **rolled back** by DB2 due to a **deadlock or lock timeout**. Specifically:

- DB2 detected a deadlock condition (two or more transactions waiting for each other's locks) and chose this transaction as the victim, OR
- A lock request exceeded the lock timeout threshold (IRLMRWT or LOCKTIME parameter)

When -911 occurs, DB2 has **already rolled back** the entire unit of work -- all changes since the last COMMIT are undone. The program should:

1. **Not issue a ROLLBACK** (it has already been done by DB2).
2. **Log the event** with the SQLCODE and SQLERRM for diagnostic purposes.
3. **Retry the entire unit of work** from the last COMMIT point (re-read, re-process, re-update).
4. **Limit retries** to avoid infinite loops (typically 3 attempts).
5. If retries are exhausted, **report the failure** and proceed to the next transaction or terminate.

This differs from SQLCODE -913, which indicates a deadlock or timeout where DB2 has **not** rolled back -- the program must issue an explicit ROLLBACK.
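A retry skeleton consistent with these rules (the paragraph names and the retry limit of 3 are illustrative):

       MOVE 0 TO WS-RETRY-COUNT
       MOVE 'N' TO WS-DONE
       PERFORM UNTIL WS-DONE = 'Y'
                  OR WS-RETRY-COUNT > 3
           PERFORM 400-UPDATE-UNIT-OF-WORK
           EVALUATE SQLCODE
               WHEN 0
                   EXEC SQL COMMIT END-EXEC
                   MOVE 'Y' TO WS-DONE
               WHEN -911
                   ADD 1 TO WS-RETRY-COUNT
                   PERFORM 910-LOG-DEADLOCK
               WHEN OTHER
                   PERFORM 999-SQL-ERROR
           END-EVALUATE
       END-PERFORM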

Question 20

What is the purpose of the FOR n ROWS clause on an INSERT statement, and how does it differ from executing n individual INSERT statements in a loop?

Show Answer The `INSERT ... FOR n ROWS` clause performs a **multi-row INSERT**, sending multiple rows to DB2 in a single operation. The host variables must be arrays (OCCURS n TIMES), and DB2 inserts all n rows with a single network round-trip and a single SQL execution.
       EXEC SQL
           INSERT INTO TRANSACTIONS
           (TRANS_ID, ACCT_ID, AMOUNT)
           VALUES (:HV-TRANS-ID, :HV-ACCT-ID, :HV-AMOUNT)
           FOR 100 ROWS
       END-EXEC
Compared to executing 100 individual INSERT statements in a PERFORM loop, multi-row INSERT provides:

1. **Reduced overhead**: One SQL execution instead of 100, meaning one PREPARE/access path evaluation instead of 100.
2. **Reduced network traffic**: One message to DB2 instead of 100 (critical when the DB2 subsystem is on a different LPAR or system).
3. **Better buffer pool efficiency**: DB2 can batch the page modifications, reducing log writes and buffer pool operations.
4. **Significant elapsed time reduction**: In benchmarks, multi-row INSERT is typically 3-10x faster than single-row INSERT loops for large batch loads.

The tradeoff is that error handling is less granular -- if one row fails, you need to check SQLERRD(3) to determine how many rows were successfully inserted before the failure.

Question 21

Explain the role of LOB locators and when they should be used instead of materializing the entire LOB into a host variable.

Show Answer A **LOB locator** is a lightweight handle (a 4-byte token) that represents a LOB value without actually transferring the LOB data to the application's memory. It is declared in COBOL as:
       01  WS-DOC-LOC  SQL TYPE IS CLOB-LOCATOR.
LOB locators should be used instead of full materialization when:

1. **The LOB is very large**: Materializing a 10 MB BLOB into WORKING-STORAGE would consume 10 MB of application memory. A locator uses 4 bytes.
2. **Only a portion is needed**: Using `SUBSTR(:WS-DOC-LOC, 1, 200)` with a locator lets you extract a preview without reading the entire LOB.
3. **The LOB is being passed to another SQL operation**: You can use a locator to INSERT a LOB into another table or pass it to a stored procedure without materializing it in the application.
4. **Multiple operations on the same LOB**: You can perform LENGTH, SUBSTR, CONCAT, and POSITION operations on the locator without repeated data transfer.

LOB locators are valid only within the unit of work in which they are created. A COMMIT or ROLLBACK invalidates all active locators. You can explicitly free a locator with `FREE LOCATOR :WS-DOC-LOC` to release DB2 resources before the commit.
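A short usage sketch, assuming WS-PREVIEW is a VARCHAR(200) host variable and the DOCUMENTS table and its columns are made up:

       EXEC SQL
           SELECT DOC_TEXT
             INTO :WS-DOC-LOC
             FROM DOCUMENTS
            WHERE DOC_ID = :WS-DOC-ID
       END-EXEC
       EXEC SQL
           SET :WS-PREVIEW = SUBSTR(:WS-DOC-LOC, 1, 200)
       END-EXEC
       EXEC SQL
           FREE LOCATOR :WS-DOC-LOC
       END-EXEC

Only the 200-byte preview crosses into application storage; the full CLOB stays inside DB2.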

Question 22

What is the significance of the CURRENT DEGREE special register and the DEGREE(ANY) bind option for COBOL batch programs?

Show Answer The **DEGREE** bind option and **CURRENT DEGREE** special register control whether DB2 uses **query parallelism** to execute SQL statements. **DEGREE(ANY)** (specified at BIND time) tells DB2 that it may use parallel processing for queries in this package or plan. DB2 can split a query across multiple parallel tasks (CP parallelism for CPU-bound queries, or I/O parallelism for I/O-bound queries) to reduce elapsed time. **CURRENT DEGREE** is a special register that can be set at runtime:
       EXEC SQL SET CURRENT DEGREE = 'ANY' END-EXEC
For COBOL batch programs that process large tables (millions of rows), enabling parallelism can dramatically reduce elapsed time -- a full table scan that takes 2 hours on a single task might complete in 30 minutes with 4-way parallelism. However, parallelism increases total CPU consumption (parallel overhead) and requires more buffer pool and memory resources. It should be used for long-running batch queries, not for high-volume short online transactions. The DBA should verify that sufficient system resources are available before enabling DEGREE(ANY) for batch workloads.

Question 23

A development team is debating whether to use a stored procedure or an application-level COBOL program for a complex account reconciliation process. List three advantages of each approach.

Show Answer **Advantages of a stored procedure:**

1. **Reduced network traffic**: All SQL executes within the DB2 address space. A reconciliation that requires 50 SQL statements only needs one network round-trip (the CALL) instead of 50. This is especially beneficial when the application runs on a different LPAR or system.
2. **Encapsulated business logic**: The reconciliation rules are defined in one place. Multiple calling programs (batch, online, web service) can invoke the same procedure, ensuring consistent behavior.
3. **Atomic operations**: The stored procedure can perform multiple updates within a single unit of work, and with COMMIT ON RETURN YES, the entire operation is committed atomically when the procedure completes.

**Advantages of an application-level COBOL program:**

1. **Easier debugging**: Standard COBOL debugging tools (Xpediter, Debug Tool) work more naturally with application-level programs. Debugging stored procedures requires additional setup for the WLM-managed address space.
2. **Greater control over commit scope**: The application can combine the reconciliation with other operations (file I/O, MQ messages, CICS operations) in a single transaction boundary.
3. **Simpler deployment**: The program is compiled, linked, and deployed like any other COBOL load module. Stored procedures require additional steps (procedure definition in DB2 catalog, WLM environment configuration, GRANT EXECUTE).

Question 24

What are the three forms of the GRANT statement needed to allow a COBOL stored procedure to be created, bound, and executed by different teams?

Show Answer Three distinct GRANT statements are needed:

1. **GRANT to the developer** who creates the stored procedure definition in the DB2 catalog:

       GRANT CREATEIN ON SCHEMA BANK TO DEVTEAM;

   This allows the team to execute the `CREATE PROCEDURE` statement that registers the procedure in the DB2 catalog.

2. **GRANT to the binder** who binds the DBRM into a package with the procedure's SQL:

       GRANT BINDADD TO BINDUSER;
       GRANT SELECT, INSERT, UPDATE ON BANK.ACCOUNTS TO BINDUSER;

   The binder needs BINDADD authority to create packages, plus the specific table privileges for all SQL in the package (or DBADM on the database).

3. **GRANT to the callers** who execute the stored procedure at runtime:

       GRANT EXECUTE ON PROCEDURE BANK.CALC_INTEREST TO APPUSER;

   The caller only needs EXECUTE authority on the procedure -- they do not need direct table privileges because the procedure's package was bound with the binder's authority.

This separation of duties is a key security feature: end users never have direct table access, only the ability to invoke controlled operations through stored procedures.

Question 25

What happens to a WITH HOLD cursor across a COMMIT, and why is this important for COBOL batch programs?

Show Answer A cursor declared **WITH HOLD** remains open across COMMIT points. After a COMMIT, the cursor stays positioned on the last fetched row, and subsequent FETCH operations continue from that position. Without WITH HOLD, a COMMIT closes all open cursors, and the program would need to reopen the cursor and reposition to continue processing. This is critical for COBOL batch programs that process millions of rows with periodic commits. A typical pattern is:
       EXEC SQL
           DECLARE CSRBATCH CURSOR WITH HOLD FOR
             SELECT ACCT_ID, BALANCE
               FROM ACCOUNTS
              WHERE STATUS = 'A'
              ORDER BY ACCT_ID
       END-EXEC

       EXEC SQL
           OPEN CSRBATCH
       END-EXEC

       PERFORM UNTIL SQLCODE NOT = 0
           EXEC SQL FETCH CSRBATCH
             INTO :WS-ACCT-ID, :WS-BALANCE END-EXEC
           IF SQLCODE = 0
               PERFORM 500-PROCESS-ACCOUNT
               ADD 1 TO WS-ROW-COUNT
               IF WS-ROW-COUNT >= 1000
                   EXEC SQL COMMIT END-EXEC
                   MOVE 0 TO WS-ROW-COUNT
               END-IF
           END-IF
       END-PERFORM
Without WITH HOLD, committing every 1,000 rows would close the cursor, and the program would lose its position. The program would have to reopen the cursor and skip over already-processed rows, which is both complex and inefficient.