Chapter 28: Advanced DB2 Programming

"The first time you write a SQL query in COBOL, you think you understand DB2. The first time that query runs in production against 40 million rows during peak hours, you realize you've only just begun." — Maria Chen, senior developer, GlobalBank

In your introductory COBOL course, you likely wrote static SQL statements embedded in COBOL programs — simple SELECT, INSERT, UPDATE, and DELETE operations with hardcoded table and column names. Those fundamentals serve you well, but production COBOL-DB2 applications demand far more. They require queries that adapt to user input at runtime, cursors that scroll in any direction, stored procedures that encapsulate complex business logic on the database server, and carefully tuned access paths that keep a system responsive under load.

This chapter takes you beyond the basics of embedded SQL into the techniques that production COBOL-DB2 programmers use daily. We will explore dynamic SQL for flexible reporting, scrollable cursors for interactive browsing, stored procedures for server-side logic, and the performance considerations that separate a program that works from one that works well.

28.1 A Quick Review: Static SQL in COBOL

Before we advance, let us make sure our foundation is solid. Static SQL is SQL whose complete text is known at compile time. The DB2 precompiler processes the SQL and extracts it into a database request module (DBRM); the BIND step then validates the statements against the DB2 catalog and produces an optimized access plan.

           EXEC SQL
               SELECT ACCT_BALANCE, ACCT_STATUS
               INTO :WS-BALANCE, :WS-STATUS
               FROM ACCOUNT
               WHERE ACCT_NUMBER = :WS-ACCT-NUM
           END-EXEC

Static SQL is predictable, fast, and secure — DB2 optimizes it once during BIND, and the access plan is reused for every execution. But static SQL has a fundamental limitation: you cannot change the table name, column list, or WHERE clause structure at runtime.

💡 Key Insight: Static SQL is not inferior to dynamic SQL — it is preferred whenever the query structure is known at compile time. Dynamic SQL introduces complexity and security considerations. Use it only when static SQL genuinely cannot meet your requirements.

SQLCA and SQLCODE: Your Error Dashboard

Every embedded SQL statement populates the SQL Communication Area (SQLCA). The most critical field is SQLCODE:

  SQLCODE   Meaning
  -------   ---------------------------------------------
  0         Successful execution
  +100      No row found (SELECT) or no more rows (FETCH)
  < 0       Error — check specific code
  -803      Duplicate key on INSERT
  -811      SELECT returned more than one row
  -904      Resource unavailable (lock timeout)
  -911      Deadlock or timeout — rollback occurred
  -922      Authorization failure

       01  WS-SQLCODE              PIC S9(9) COMP.

           EXEC SQL
               INCLUDE SQLCA
           END-EXEC

           MOVE SQLCODE TO WS-SQLCODE
           EVALUATE TRUE
               WHEN SQLCODE = 0
                   CONTINUE
               WHEN SQLCODE = +100
                   SET WS-NO-DATA-FOUND TO TRUE
               WHEN SQLCODE < 0
                   PERFORM 9000-DB2-ERROR-HANDLER
           END-EVALUATE

⚠️ Production Rule: Always check SQLCODE after every SQL statement. In production, an unchecked -911 (deadlock rollback) can cause data corruption if the program continues processing as if the update succeeded.

28.2 Dynamic SQL: Queries That Adapt at Runtime

Dynamic SQL allows your program to construct, prepare, and execute SQL statements whose text is determined at runtime. This is essential for:

  • Ad hoc reporting where users choose columns, tables, and filter criteria
  • Generic utilities that operate on any table
  • Applications where the query structure changes based on business rules

DB2 provides three mechanisms for dynamic SQL, each suited to different scenarios.

EXECUTE IMMEDIATE: Fire and Forget

EXECUTE IMMEDIATE is the simplest form of dynamic SQL. It prepares and executes a SQL statement in a single step. Use it for statements that return no result set and execute only once.

       WORKING-STORAGE SECTION.
       01  WS-SQL-STMT             PIC X(500).
       01  WS-TABLE-NAME           PIC X(18).
       01  WS-ARCHIVE-DATE         PIC X(10).

       PROCEDURE DIVISION.
           MOVE 'TXN_HISTORY' TO WS-TABLE-NAME
           MOVE '2024-01-01' TO WS-ARCHIVE-DATE

           STRING 'DELETE FROM '
                  WS-TABLE-NAME
                  ' WHERE TXN_DATE < '''
                  WS-ARCHIVE-DATE
                  ''''
               DELIMITED BY SIZE
               INTO WS-SQL-STMT
           END-STRING

           EXEC SQL
               EXECUTE IMMEDIATE :WS-SQL-STMT
           END-EXEC

           EVALUATE SQLCODE
               WHEN 0
                   DISPLAY 'Rows deleted: ' SQLERRD(3)
               WHEN +100
                   DISPLAY 'No rows matched the criteria'
               WHEN OTHER
                   DISPLAY 'Error: SQLCODE = ' SQLCODE
                   PERFORM 9000-DB2-ERROR-HANDLER
           END-EVALUATE

🔴 Security Warning — SQL Injection: When constructing dynamic SQL from user input, you must validate and sanitize every value. Never concatenate raw user input into a SQL string. In the example above, WS-TABLE-NAME and WS-ARCHIVE-DATE should be validated against an allowed list before use. SQL injection is not just a web application problem — it exists wherever dynamic SQL meets untrusted input.
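The allow-list check is easy to prototype off the mainframe. This Python sketch (the table list, date rule, and function name are all invented for illustration) shows the shape of the validation that should run before any string assembly:

```python
import re

# Hypothetical allow-list: the only tables this purge utility may touch.
ALLOWED_TABLES = {"TXN_HISTORY", "TXN_ARCHIVE"}
DATE_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def build_purge_statement(table_name, cutoff_date):
    """Build the DELETE text only from inputs that pass validation."""
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"table not in allow-list: {table_name!r}")
    if not DATE_PATTERN.match(cutoff_date):
        raise ValueError(f"bad date literal: {cutoff_date!r}")
    # Safe to assemble: both pieces passed strict checks above.
    return f"DELETE FROM {table_name} WHERE TXN_DATE < '{cutoff_date}'"

print(build_purge_statement("TXN_HISTORY", "2024-01-01"))
```

A value like "ACCOUNT; DROP TABLE ACCOUNT" never reaches the statement text: it is rejected before concatenation, which is exactly the discipline the COBOL STRING example needs.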

PREPARE and EXECUTE: Reusable Dynamic Statements

When you need to execute a dynamic SQL statement multiple times — perhaps with different parameter values — use PREPARE to compile the statement once, then EXECUTE it repeatedly with parameter markers.

       WORKING-STORAGE SECTION.
       01  WS-DYN-STMT             PIC X(500).
       01  WS-ACCT-STATUS          PIC X(1).
       01  WS-BRANCH-CODE          PIC X(4).
       01  WS-ROWS-UPDATED         PIC S9(9) COMP.

       PROCEDURE DIVISION.
      *--------------------------------------------------------------
      * Build a dynamic UPDATE with parameter markers
      *--------------------------------------------------------------
           MOVE 'UPDATE ACCOUNT SET ACCT_STATUS = ? '
             &  'WHERE BRANCH_CODE = ? '
             &  'AND ACCT_STATUS <> ?'
               TO WS-DYN-STMT

           EXEC SQL
               PREPARE STMT1 FROM :WS-DYN-STMT
           END-EXEC

           IF SQLCODE NOT = 0
               DISPLAY 'PREPARE failed: ' SQLCODE
               PERFORM 9000-DB2-ERROR-HANDLER
               STOP RUN
           END-IF

      *--------------------------------------------------------------
      * Execute with different parameters for each branch
      *--------------------------------------------------------------
           MOVE 'I' TO WS-ACCT-STATUS

           PERFORM VARYING WS-BRANCH-IDX
               FROM 1 BY 1
               UNTIL WS-BRANCH-IDX > WS-BRANCH-COUNT

               MOVE WS-BRANCH-TABLE(WS-BRANCH-IDX)
                   TO WS-BRANCH-CODE

               EXEC SQL
                   EXECUTE STMT1
                       USING :WS-ACCT-STATUS,
                             :WS-BRANCH-CODE,
                             :WS-ACCT-STATUS
               END-EXEC

               EVALUATE SQLCODE
                   WHEN 0
                       ADD SQLERRD(3) TO WS-ROWS-UPDATED
                   WHEN +100
                       CONTINUE
                   WHEN OTHER
                       PERFORM 9000-DB2-ERROR-HANDLER
               END-EVALUATE
           END-PERFORM

The parameter markers (?) serve the same purpose as host variables in static SQL — they let DB2 bind values safely without string concatenation, eliminating SQL injection risks.

📊 Performance Note: PREPARE is expensive — it invokes the DB2 optimizer to build an access plan. Always PREPARE once and EXECUTE many times. Never PREPARE inside a loop.
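The prepare-once, execute-many discipline is universal across database APIs. For readers experimenting off-mainframe, this Python sqlite3 sketch (an invented ACCOUNT table stands in for DB2) mirrors the PREPARE/EXECUTE loop above:

```python
import sqlite3

# sqlite3 stands in for DB2; the ACCOUNT table is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCOUNT (ACCT_STATUS TEXT, BRANCH_CODE TEXT)")
conn.executemany("INSERT INTO ACCOUNT VALUES (?, ?)",
                 [("A", "B001"), ("A", "B002"), ("I", "B001")])

# One statement text with ? markers, compiled once and executed per
# branch: the analogue of PREPARE STMT1 / EXECUTE STMT1 USING ...
STMT = ("UPDATE ACCOUNT SET ACCT_STATUS = ? "
        "WHERE BRANCH_CODE = ? AND ACCT_STATUS <> ?")
rows_updated = 0
for branch in ("B001", "B002"):
    cur = conn.execute(STMT, ("I", branch, "I"))
    rows_updated += cur.rowcount      # plays the role of SQLERRD(3)

print(rows_updated)                   # 2
```

The driver compiles the statement once and binds fresh values on each execution, so the loop pays the optimization cost a single time.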

PREPARE with CURSOR: Dynamic Queries That Return Results

For dynamic SELECT statements, you combine PREPARE with a cursor declaration:

       WORKING-STORAGE SECTION.
       01  WS-QUERY-STMT           PIC X(1000).
       01  WS-STMT-PTR             PIC S9(4) COMP.
       01  WS-COL-LIST             PIC X(200).
       01  WS-WHERE-CLAUSE         PIC X(500).
       01  WS-ORDER-CLAUSE         PIC X(100).

       01  WS-REPORT-LINE.
           05  WS-RPT-ACCT-NUM    PIC X(10).
           05  FILLER              PIC X(2) VALUE SPACES.
           05  WS-RPT-NAME        PIC X(30).
           05  FILLER              PIC X(2) VALUE SPACES.
           05  WS-RPT-BALANCE     PIC Z(7)9.99-.

       PROCEDURE DIVISION.
       1000-BUILD-QUERY.
      *--------------------------------------------------------------
      * Build dynamic query based on user selections
      *--------------------------------------------------------------
           EVALUATE WS-REPORT-TYPE
               WHEN 'SUMMARY'
                   MOVE 'ACCT_NUMBER, ACCT_NAME, ACCT_BALANCE'
                       TO WS-COL-LIST
               WHEN 'DETAIL'
                   MOVE 'ACCT_NUMBER, ACCT_NAME, ACCT_BALANCE, '
                     &  'OPEN_DATE, BRANCH_CODE, ACCT_TYPE'
                       TO WS-COL-LIST
               WHEN OTHER
                   DISPLAY 'Invalid report type'
                   STOP RUN
           END-EVALUATE

      *    WS-STMT-PTR (PIC S9(4) COMP) tracks the next append
      *    position; STRING WITH POINTER extends the statement
      *    without naming WS-QUERY-STMT as both source and target,
      *    which would be overlapping operands (undefined behavior)
           MOVE SPACES TO WS-QUERY-STMT
           MOVE 1 TO WS-STMT-PTR

           STRING 'SELECT '           DELIMITED BY SIZE
                  WS-COL-LIST         DELIMITED BY '  '
                  ' FROM ACCOUNT'     DELIMITED BY SIZE
                  ' WHERE 1=1'        DELIMITED BY SIZE
               INTO WS-QUERY-STMT
               WITH POINTER WS-STMT-PTR
           END-STRING

      *    Add optional filters
           IF WS-FILTER-BRANCH NOT = SPACES
               STRING ' AND BRANCH_CODE = ?' DELIMITED BY SIZE
                   INTO WS-QUERY-STMT
                   WITH POINTER WS-STMT-PTR
               END-STRING
           END-IF

           IF WS-FILTER-MIN-BAL NOT = ZEROS
               STRING ' AND ACCT_BALANCE >= ?' DELIMITED BY SIZE
                   INTO WS-QUERY-STMT
                   WITH POINTER WS-STMT-PTR
               END-STRING
           END-IF

           STRING ' ORDER BY ACCT_BALANCE DESC' DELIMITED BY SIZE
               INTO WS-QUERY-STMT
               WITH POINTER WS-STMT-PTR
           END-STRING

      *--------------------------------------------------------------
      * Prepare and open cursor
      *--------------------------------------------------------------
           EXEC SQL
               PREPARE RPTQUERY FROM :WS-QUERY-STMT
           END-EXEC

           IF SQLCODE NOT = 0
               DISPLAY 'PREPARE failed: ' SQLCODE
               PERFORM 9000-DB2-ERROR-HANDLER
               STOP RUN
           END-IF

           EXEC SQL
               DECLARE RPT-CURSOR CURSOR FOR RPTQUERY
           END-EXEC

      *    The USING list must supply exactly as many host variables,
      *    in order, as ? markers were appended to the statement text
           EXEC SQL
               OPEN RPT-CURSOR
                   USING :WS-FILTER-BRANCH,
                         :WS-FILTER-MIN-BAL
           END-EXEC

           PERFORM 2000-FETCH-AND-PRINT
               UNTIL WS-END-OF-DATA

           EXEC SQL
               CLOSE RPT-CURSOR
           END-EXEC.

💡 The WHERE 1=1 Trick: Notice the WHERE 1=1 in the base query. This seemingly pointless condition lets you append additional filters with AND without worrying about whether a WHERE clause already exists. It is an extremely common pattern in dynamic SQL construction. DB2's optimizer recognizes 1=1 as a tautology and eliminates it from the access plan at zero cost.
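The same pattern is easy to study in any language that assembles SQL text. A Python sketch (column and filter names invented) that keeps the parameter list in step with the ? markers:

```python
def build_report_query(branch=None, min_balance=None):
    """Assemble a filtered query and its parameter list, WHERE 1=1 style."""
    sql = "SELECT ACCT_NUMBER, ACCT_NAME, ACCT_BALANCE FROM ACCOUNT WHERE 1=1"
    params = []
    if branch:                        # optional filter 1
        sql += " AND BRANCH_CODE = ?"
        params.append(branch)
    if min_balance is not None:       # optional filter 2
        sql += " AND ACCT_BALANCE >= ?"
        params.append(min_balance)
    sql += " ORDER BY ACCT_BALANCE DESC"
    return sql, params
```

Because every filter appends both its marker and its value in the same branch, the marker count and the parameter list can never drift apart.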

DESCRIBE: Discovering Result Set Structure

When your program does not know the columns a dynamic query will return, you can use DESCRIBE to interrogate the result set structure via the SQL Descriptor Area (SQLDA):

       WORKING-STORAGE SECTION.
       01  WS-SQLDA.
           05  SQLDAID         PIC X(8) VALUE 'SQLDA   '.
           05  SQLDABC         PIC S9(9) COMP VALUE 0.
           05  SQLN            PIC S9(4) COMP VALUE 50.
           05  SQLD            PIC S9(4) COMP VALUE 0.
           05  SQLVAR OCCURS 50 TIMES.
               10  SQLTYPE     PIC S9(4) COMP.
               10  SQLLEN      PIC S9(4) COMP.
               10  SQLDATA     POINTER.
               10  SQLIND      POINTER.
               10  SQLNAME.
                   15  SQLNAMEL PIC S9(4) COMP.
                   15  SQLNAMEC PIC X(30).

       PROCEDURE DIVISION.
           EXEC SQL
               PREPARE DYNSTMT FROM :WS-QUERY-STMT
           END-EXEC

           EXEC SQL
               DESCRIBE DYNSTMT INTO :WS-SQLDA
           END-EXEC

           DISPLAY 'Query returns ' SQLD ' columns:'
           PERFORM VARYING WS-COL-IDX FROM 1 BY 1
               UNTIL WS-COL-IDX > SQLD
               DISPLAY '  Column ' WS-COL-IDX ': '
                   SQLNAMEC(WS-COL-IDX)
                   ' Type=' SQLTYPE(WS-COL-IDX)
                   ' Len='  SQLLEN(WS-COL-IDX)
           END-PERFORM
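Client APIs outside the mainframe expose the same metadata. In Python's sqlite3, cursor.description plays the role the SQLDA fills after DESCRIBE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCOUNT (ACCT_NUMBER TEXT, ACCT_BALANCE REAL)")

# cursor.description is the SQLDA analogue: one entry per result
# column, populated only once the statement text is known.
cur = conn.execute("SELECT ACCT_NUMBER, ACCT_BALANCE FROM ACCOUNT")
columns = [d[0] for d in cur.description]
print(len(columns), columns)          # 2 ['ACCT_NUMBER', 'ACCT_BALANCE']
```

As with DESCRIBE, the program discovers the shape of the result set at runtime rather than hardcoding it.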

28.3 DCLGEN: Your Bridge Between DB2 and COBOL

The Declaration Generator (DCLGEN) creates COBOL data structures that mirror DB2 table definitions. In production shops, DCLGEN output is stored in copybook libraries and included in programs via the SQL INCLUDE statement.

      *--------------------------------------------------------------
      * DCLGEN output for ACCOUNT table
      * Generated: 2024-11-15  Table: GLOBALBANK.ACCOUNT
      *--------------------------------------------------------------
           EXEC SQL DECLARE ACCOUNT TABLE
               ( ACCT_NUMBER     CHAR(10)       NOT NULL,
                 ACCT_NAME       VARCHAR(50)     NOT NULL,
                 ACCT_TYPE       CHAR(2)         NOT NULL,
                 ACCT_BALANCE    DECIMAL(13,2)   NOT NULL
                                 WITH DEFAULT 0,
                 ACCT_STATUS     CHAR(1)         NOT NULL
                                 WITH DEFAULT 'A',
                 OPEN_DATE       DATE            NOT NULL,
                 CLOSE_DATE      DATE,
                 BRANCH_CODE     CHAR(4)         NOT NULL,
                 LAST_ACTIVITY   TIMESTAMP       NOT NULL
               ) END-EXEC.

      *--------------------------------------------------------------
      * COBOL host variable declarations
      *--------------------------------------------------------------
       01  DCLACCOUNT.
           10  ACCT-NUMBER         PIC X(10).
           10  ACCT-NAME.
               49  ACCT-NAME-LEN  PIC S9(4) COMP.
               49  ACCT-NAME-TEXT PIC X(50).
           10  ACCT-TYPE           PIC X(2).
           10  ACCT-BALANCE        PIC S9(11)V9(2) COMP-3.
           10  ACCT-STATUS         PIC X(1).
           10  OPEN-DATE           PIC X(10).
           10  CLOSE-DATE          PIC X(10).
           10  BRANCH-CODE         PIC X(4).
           10  LAST-ACTIVITY       PIC X(26).

DCLGEN Best Practices

Maria Chen maintains strict DCLGEN standards at GlobalBank:

  1. Regenerate after every DDL change. When a column is added, altered, or dropped, regenerate DCLGEN and recompile all affected programs.
  2. Store DCLGEN in a dedicated PDS. GLOBALBANK.DB2.DCLGEN is separate from application copybooks.
  3. Never hand-edit DCLGEN output. If you need a different layout, create a separate working-storage structure and MOVE fields from the DCLGEN structure.
  4. Use INCLUDE, not COPY. EXEC SQL INCLUDE is processed by the DB2 precompiler; COPY is processed by the COBOL compiler. They look similar but operate at different stages.
       WORKING-STORAGE SECTION.
      *--------------------------------------------------------------
      * Include DCLGEN — processed by DB2 precompiler
      *--------------------------------------------------------------
           EXEC SQL INCLUDE DCLACCT END-EXEC.
           EXEC SQL INCLUDE SQLCA   END-EXEC.

Best Practice: Keep your DCLGEN library versioned and synchronized with DDL changes. At GlobalBank, the database change management process requires simultaneous updates to DDL scripts, DCLGEN members, and a master cross-reference spreadsheet that maps tables to programs.

28.4 NULL Handling with Indicator Variables

In DB2, any nullable column can contain the special value NULL, meaning "unknown" or "not applicable." COBOL has no native concept of NULL, so DB2 uses indicator variables — small integer fields that accompany each host variable.

       01  WS-CLOSE-DATE           PIC X(10).
       01  WS-CLOSE-DATE-IND       PIC S9(4) COMP.

           EXEC SQL
               SELECT CLOSE_DATE
               INTO :WS-CLOSE-DATE :WS-CLOSE-DATE-IND
               FROM ACCOUNT
               WHERE ACCT_NUMBER = :WS-ACCT-NUM
           END-EXEC

           EVALUATE TRUE
               WHEN WS-CLOSE-DATE-IND = 0
                   DISPLAY 'Account closed on: ' WS-CLOSE-DATE
               WHEN WS-CLOSE-DATE-IND = -1
                   DISPLAY 'Account is still open (NULL close date)'
               WHEN WS-CLOSE-DATE-IND = -2
                   DISPLAY 'NULL returned after a conversion error'
               WHEN WS-CLOSE-DATE-IND > 0
                   DISPLAY 'Value truncated; original length was '
                       WS-CLOSE-DATE-IND
           END-EVALUATE

Indicator Variable Values

  Value   Meaning
  -----   ----------------------------------------------------
  0       Column contains a valid non-NULL value
  -1      Column is NULL
  -2      Column is NULL because of a data conversion error
  > 0     Value was truncated; the indicator holds the
          original length (VARCHAR and similar types)
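The decode logic is mechanical enough to capture in a few lines. A Python sketch of the same rules (function name invented):

```python
def decode_indicator(value, indicator):
    """Interpret a DB2 output indicator alongside its host variable."""
    if indicator == 0:
        return value                  # valid non-NULL value
    if indicator in (-1, -2):
        return None                   # NULL (-2: from a conversion error)
    if indicator > 0:
        # Truncated: the indicator holds the original length, and the
        # host variable holds the shortened value.
        return value
    raise ValueError(f"unexpected indicator value: {indicator}")
```

In COBOL the same decision appears as the EVALUATE shown earlier; the point is that the pair (value, indicator) must always travel together.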

Setting NULL on INSERT or UPDATE

To insert a NULL value, set the indicator variable to -1:

           MOVE -1 TO WS-CLOSE-DATE-IND

           EXEC SQL
               INSERT INTO ACCOUNT
                   (ACCT_NUMBER, ACCT_NAME, ACCT_TYPE,
                    ACCT_BALANCE, ACCT_STATUS, OPEN_DATE,
                    CLOSE_DATE, BRANCH_CODE, LAST_ACTIVITY)
               VALUES
                   (:ACCT-NUMBER,
                    :ACCT-NAME,
                    :ACCT-TYPE,
                    :ACCT-BALANCE,
                    :ACCT-STATUS,
                    :OPEN-DATE,
                    :WS-CLOSE-DATE :WS-CLOSE-DATE-IND,
                    :BRANCH-CODE,
                    CURRENT TIMESTAMP)
           END-EXEC

Indicator Arrays for Multi-Column Operations

When working with many nullable columns, use indicator arrays to keep your code manageable:

       01  WS-INDICATORS.
           05  WS-IND              PIC S9(4) COMP
                                   OCCURS 10 TIMES.

      *    RENAMES cannot name a subscripted item, so overlay the
      *    array with REDEFINES to give key entries readable names
      *    (entries 7 and 8 occupy bytes 13-16 of the 20-byte table)
       01  WS-IND-NAMES REDEFINES WS-INDICATORS.
           05  FILLER              PIC X(12).
           05  WS-IND-CLOSE-DATE   PIC S9(4) COMP.
           05  WS-IND-LAST-STMT    PIC S9(4) COMP.
           05  FILLER              PIC X(4).

⚠️ Common Bug: If you SELECT a nullable column without an indicator variable and the column is NULL, DB2 returns SQLCODE -305. This is one of the most common runtime errors in COBOL-DB2 programs. Defensive programming demands indicator variables for every nullable column.

NULL-Safe Comparison Logic

NULL introduces three-valued logic into your SQL: TRUE, FALSE, and UNKNOWN. A WHERE clause condition that evaluates to UNKNOWN is treated as FALSE — the row is excluded. This means WHERE CLOSE_DATE = '2024-12-31' will never match a row where CLOSE_DATE is NULL, even though that might be the logical intent.
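Three-valued logic is not unique to DB2; sqlite3 follows the same rules, so the exclusion is easy to demonstrate (illustrative table and data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCOUNT (ACCT_NUMBER TEXT, CLOSE_DATE TEXT)")
conn.executemany("INSERT INTO ACCOUNT VALUES (?, ?)",
                 [("A1", "2024-12-31"), ("A2", None), ("A3", None)])

# '=' against a NULL column evaluates to UNKNOWN, so those rows drop out
eq_count = conn.execute(
    "SELECT COUNT(*) FROM ACCOUNT WHERE CLOSE_DATE = '2024-12-31'"
).fetchone()[0]

# only IS NULL finds them
null_count = conn.execute(
    "SELECT COUNT(*) FROM ACCOUNT WHERE CLOSE_DATE IS NULL"
).fetchone()[0]

print(eq_count, null_count)           # 1 2
```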

In COBOL, you must handle NULL comparisons explicitly:

      *--------------------------------------------------------------
      * ANTI-PATTERN: This will never find NULL close dates
      *--------------------------------------------------------------
           EXEC SQL
               SELECT COUNT(*)
               INTO :WS-COUNT
               FROM ACCOUNT
               WHERE CLOSE_DATE = :WS-TARGET-DATE
           END-EXEC
      *    If WS-TARGET-DATE is set to represent "no date,"
      *    this query returns 0 even if many accounts have NULL

      *--------------------------------------------------------------
      * CORRECT: Use IS NULL for NULL comparison
      *--------------------------------------------------------------
           IF WS-SEARCH-FOR-NULL = 'Y'
               EXEC SQL
                   SELECT COUNT(*)
                   INTO :WS-COUNT
                   FROM ACCOUNT
                   WHERE CLOSE_DATE IS NULL
               END-EXEC
           ELSE
               EXEC SQL
                   SELECT COUNT(*)
                   INTO :WS-COUNT
                   FROM ACCOUNT
                   WHERE CLOSE_DATE = :WS-TARGET-DATE
               END-EXEC
           END-IF

The COALESCE Function for NULL Defaults

DB2's COALESCE function returns the first non-NULL argument. It is invaluable for providing default values in reports and calculations:

           EXEC SQL
               SELECT ACCT_NUMBER,
                      ACCT_NAME,
                      COALESCE(CLOSE_DATE, '9999-12-31')
               INTO :WS-ACCT-NUM,
                    :WS-ACCT-NAME,
                    :WS-CLOSE-DATE
               FROM ACCOUNT
               WHERE BRANCH_CODE = :WS-BRANCH
           END-EXEC
      *    CLOSE_DATE will be '9999-12-31' instead of NULL
      *    No indicator variable needed with COALESCE

💡 When to Use COALESCE vs. Indicator Variables: COALESCE is convenient when you want a default value and do not care about distinguishing NULL from the default. Indicator variables are necessary when your program logic must know whether a value was genuinely NULL (as opposed to a specific value that happens to match the default). In production, many shops use both: COALESCE in report queries and indicator variables in transaction processing.
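COALESCE behaves identically in other SQL dialects, which makes the trade-off easy to see off-mainframe (sqlite3, invented data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCOUNT (ACCT_NUMBER TEXT, CLOSE_DATE TEXT)")
conn.executemany("INSERT INTO ACCOUNT VALUES (?, ?)",
                 [("A1", "2024-06-30"), ("A2", None)])

# NULL close dates come back as the sentinel, so no indicator handling
# is needed; the cost is that a genuine 9999-12-31 would be ambiguous.
rows = conn.execute(
    "SELECT ACCT_NUMBER, COALESCE(CLOSE_DATE, '9999-12-31') "
    "FROM ACCOUNT ORDER BY ACCT_NUMBER"
).fetchall()
print(rows)    # [('A1', '2024-06-30'), ('A2', '9999-12-31')]
```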

Try It Yourself: NULL Handling Exercise

In your Student Mainframe Lab, create a table with three nullable columns (a date, a varchar, and a decimal). Insert five rows: two with all values populated, one with the date NULL, one with the varchar NULL, and one with all three NULL. Then write a COBOL program that:

  1. Selects all rows with indicator variables for each nullable column
  2. For each row, displays either the value or "NULL" based on the indicator
  3. Uses COALESCE to produce a "clean" report where NULLs are replaced with defaults
  4. Counts the number of NULL values per column

28.5 Scrollable Cursors

Standard cursors are forward-only — you can only FETCH the next row. Scrollable cursors let you move in any direction through the result set, which is essential for interactive browse screens where users page forward and backward.

Declaring a Scrollable Cursor

           EXEC SQL
               DECLARE ACCT-BROWSE SCROLL CURSOR FOR
                   SELECT ACCT_NUMBER, ACCT_NAME, ACCT_BALANCE
                   FROM ACCOUNT
                   WHERE BRANCH_CODE = :WS-BRANCH-CODE
                   ORDER BY ACCT_NAME
                   FOR FETCH ONLY
           END-EXEC

The SCROLL keyword enables bidirectional movement. FOR FETCH ONLY tells DB2 that no updates will occur through this cursor, allowing it to optimize the access path.

Scrollable FETCH Operations

      *    Position at the first row
           EXEC SQL
               FETCH FIRST ACCT-BROWSE
               INTO :WS-ACCT-NUM, :WS-ACCT-NAME, :WS-BALANCE
           END-EXEC

      *    Move forward one row (same as standard FETCH)
           EXEC SQL
               FETCH NEXT ACCT-BROWSE
               INTO :WS-ACCT-NUM, :WS-ACCT-NAME, :WS-BALANCE
           END-EXEC

      *    Move backward one row
           EXEC SQL
               FETCH PRIOR ACCT-BROWSE
               INTO :WS-ACCT-NUM, :WS-ACCT-NAME, :WS-BALANCE
           END-EXEC

      *    Jump to the last row
           EXEC SQL
               FETCH LAST ACCT-BROWSE
               INTO :WS-ACCT-NUM, :WS-ACCT-NAME, :WS-BALANCE
           END-EXEC

      *    Jump to a specific row position
           EXEC SQL
               FETCH ABSOLUTE :WS-ROW-NUMBER ACCT-BROWSE
               INTO :WS-ACCT-NUM, :WS-ACCT-NAME, :WS-BALANCE
           END-EXEC

      *    Move relative to current position (skip 5 rows forward)
           MOVE +5 TO WS-OFFSET
           EXEC SQL
               FETCH RELATIVE :WS-OFFSET ACCT-BROWSE
               INTO :WS-ACCT-NUM, :WS-ACCT-NAME, :WS-BALANCE
           END-EXEC
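The six fetch orientations reduce to index arithmetic over a materialized result set. A toy Python model (class name invented; DB2's exact positioning rules after a failed fetch differ slightly):

```python
class ScrollCursor:
    """Toy model of a SCROLL cursor over a fetched result list."""

    def __init__(self, rows):
        self.rows = rows
        self.pos = 0                          # 0 = before the first row

    def _fetch(self, pos):
        if 1 <= pos <= len(self.rows):
            self.pos = pos
            return self.rows[pos - 1]
        return None                           # the SQLCODE +100 analogue

    def first(self):
        return self._fetch(1)                 # FETCH FIRST

    def last(self):
        return self._fetch(len(self.rows))    # FETCH LAST

    def next(self):
        return self._fetch(self.pos + 1)      # FETCH NEXT

    def prior(self):
        return self._fetch(self.pos - 1)      # FETCH PRIOR

    def absolute(self, n):
        return self._fetch(n)                 # FETCH ABSOLUTE :n

    def relative(self, k):
        return self._fetch(self.pos + k)      # FETCH RELATIVE :k

cur = ScrollCursor(["r1", "r2", "r3", "r4", "r5"])
# prints: r1 r2 r4 r3 r5
print(cur.first(), cur.next(), cur.relative(2), cur.prior(), cur.last())
```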

Scrollable Cursor Sensitivity

DB2 offers three sensitivity options:

  • INSENSITIVE: The result set is a snapshot taken when the cursor opens. Changes made by other transactions are not visible. This guarantees a stable view but uses a temporary result table.
  • SENSITIVE STATIC: Your own positioned updates and deletes are visible, and committed updates and deletes by other transactions appear when a row is re-fetched; rows inserted by other transactions are not visible.
  • SENSITIVE DYNAMIC: All committed changes, including inserts, are visible, but this can be resource-intensive.

           EXEC SQL
               DECLARE STABLE-BROWSE INSENSITIVE SCROLL CURSOR FOR
                   SELECT ACCT_NUMBER, ACCT_NAME, ACCT_BALANCE
                   FROM ACCOUNT
                   WHERE BRANCH_CODE = :WS-BRANCH
                   ORDER BY ACCT_NAME
           END-EXEC

📊 Performance Consideration: Scrollable cursors consume more resources than forward-only cursors because DB2 must maintain positional information. Use them only when you genuinely need bidirectional navigation — typically in CICS online browse transactions. For batch sequential processing, a standard forward-only cursor is always more efficient.

28.6 Multi-Row FETCH and INSERT

Processing one row at a time incurs overhead for each FETCH call — context switching between the COBOL runtime and the DB2 engine. Multi-row FETCH retrieves multiple rows in a single call, dramatically reducing this overhead in batch programs.
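The same block-at-a-time idea appears in client APIs as fetchmany. A Python sqlite3 sketch (invented table) in which len() of each rowset plays the role of SQLERRD(3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCOUNT (ACCT_NUMBER INTEGER)")
conn.executemany("INSERT INTO ACCOUNT VALUES (?)",
                 [(n,) for n in range(250)])

cur = conn.execute("SELECT ACCT_NUMBER FROM ACCOUNT")
total_rows = 0
fetch_calls = 0
while True:
    rowset = cur.fetchmany(100)       # up to 100 rows per call
    if not rowset:
        break                         # the SQLCODE +100 analogue
    fetch_calls += 1
    total_rows += len(rowset)         # plays the role of SQLERRD(3)

print(fetch_calls, total_rows)        # 3 250
```

Three calls retrieve 250 rows (100 + 100 + 50) where row-at-a-time fetching would have made 250; the final, short rowset is processed like any other.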

Multi-Row FETCH with Rowset Cursors

       WORKING-STORAGE SECTION.
       01  WS-ROWSET-SIZE          PIC S9(9) COMP VALUE 100.
       01  WS-ROWS-FETCHED         PIC S9(9) COMP.

      *    Multi-row FETCH needs one elementary array per column,
      *    not an array of row structures
       01  WS-ACCT-ARRAYS.
           05  WS-ARR-ACCT-NUM    PIC X(10) OCCURS 100 TIMES.
           05  WS-ARR-ACCT-NAME   PIC X(50) OCCURS 100 TIMES.
           05  WS-ARR-BALANCE     PIC S9(11)V9(2) COMP-3
                                  OCCURS 100 TIMES.
           05  WS-ARR-STATUS      PIC X(1) OCCURS 100 TIMES.

       01  WS-IND-ARRAYS.
           05  WS-IND-ACCT-NUM    PIC S9(4) COMP OCCURS 100 TIMES.
           05  WS-IND-ACCT-NAME   PIC S9(4) COMP OCCURS 100 TIMES.
           05  WS-IND-BALANCE     PIC S9(4) COMP OCCURS 100 TIMES.
           05  WS-IND-STATUS      PIC S9(4) COMP OCCURS 100 TIMES.

       PROCEDURE DIVISION.
           EXEC SQL
               DECLARE BATCH-CURSOR CURSOR
                   WITH ROWSET POSITIONING FOR
                   SELECT ACCT_NUMBER, ACCT_NAME,
                          ACCT_BALANCE, ACCT_STATUS
                   FROM ACCOUNT
                   WHERE BRANCH_CODE = :WS-BRANCH-CODE
                   ORDER BY ACCT_NUMBER
           END-EXEC

           EXEC SQL
               OPEN BATCH-CURSOR
           END-EXEC

           PERFORM UNTIL SQLCODE = +100
                EXEC SQL
                    FETCH NEXT ROWSET FROM BATCH-CURSOR
                        FOR :WS-ROWSET-SIZE ROWS
                    INTO :WS-ARR-ACCT-NUM  :WS-IND-ACCT-NUM,
                         :WS-ARR-ACCT-NAME :WS-IND-ACCT-NAME,
                         :WS-ARR-BALANCE   :WS-IND-BALANCE,
                         :WS-ARR-STATUS    :WS-IND-STATUS
                END-EXEC

               IF SQLCODE = 0 OR SQLCODE = +100
                   MOVE SQLERRD(3) TO WS-ROWS-FETCHED
                   PERFORM VARYING WS-ROW-IDX
                       FROM 1 BY 1
                       UNTIL WS-ROW-IDX > WS-ROWS-FETCHED
                       PERFORM 3000-PROCESS-ACCOUNT-ROW
                   END-PERFORM
               ELSE
                   PERFORM 9000-DB2-ERROR-HANDLER
               END-IF
           END-PERFORM

           EXEC SQL
               CLOSE BATCH-CURSOR
           END-EXEC

Multi-Row INSERT

Similarly, you can insert multiple rows in a single operation:

       01  WS-INSERT-COUNT         PIC S9(9) COMP VALUE 50.
      *    As with multi-row FETCH, one elementary array per column
       01  WS-TXN-ARRAYS.
           05  WS-INS-TXN-ID      PIC X(15) OCCURS 50 TIMES.
           05  WS-INS-ACCT-NUM    PIC X(10) OCCURS 50 TIMES.
           05  WS-INS-AMOUNT      PIC S9(11)V9(2) COMP-3
                                  OCCURS 50 TIMES.
           05  WS-INS-TXN-TYPE    PIC X(2) OCCURS 50 TIMES.

       PROCEDURE DIVISION.
      *    ... (populate the array) ...

           EXEC SQL
               INSERT INTO TXN_HISTORY
                   (TXN_ID, ACCT_NUMBER, TXN_AMOUNT, TXN_TYPE)
               VALUES (:WS-INS-TXN-ID,
                       :WS-INS-ACCT-NUM,
                       :WS-INS-AMOUNT,
                       :WS-INS-TXN-TYPE)
               FOR :WS-INSERT-COUNT ROWS
           END-EXEC

           IF SQLCODE = 0
               DISPLAY 'Inserted ' SQLERRD(3) ' rows'
           ELSE
               PERFORM 9000-DB2-ERROR-HANDLER
           END-IF

💡 Batch Tuning Rule of Thumb: Multi-row FETCH with a rowset size of 100 typically reduces elapsed time by 30-50% compared to single-row FETCH in batch programs. Beyond 100 rows, the gains diminish while memory usage increases. Profile your specific workload to find the optimal size.

Try It Yourself: Multi-Row FETCH Performance Comparison

In your Student Mainframe Lab (or GnuCOBOL with a local database), create a table with 10,000 rows and write two programs:

  1. A single-row FETCH program that processes all rows one at a time
  2. A multi-row FETCH program with a rowset size of 100

Record the elapsed time of each. You should see a significant improvement with multi-row FETCH.

Multi-Row FETCH Error Handling Considerations

When a multi-row FETCH encounters an error on one row within the rowset, DB2 may still return data for the other rows. The SQLERRD(3) field tells you how many rows were successfully fetched. Always process SQLERRD(3) rows, even when SQLCODE is not zero:

           EXEC SQL
               FETCH NEXT ROWSET FROM BATCH-CURSOR
                   FOR :WS-ROWSET-SIZE ROWS
               INTO :WS-ARR-ACCT-NUM,
                    :WS-ARR-ACCT-NAME,
                    :WS-ARR-BALANCE,
                    :WS-ARR-STATUS
           END-EXEC

      *    Process whatever was fetched, even on error
           IF SQLERRD(3) > 0
               MOVE SQLERRD(3) TO WS-ROWS-FETCHED
               PERFORM VARYING WS-ROW-IDX
                   FROM 1 BY 1
                   UNTIL WS-ROW-IDX > WS-ROWS-FETCHED
                   PERFORM 3000-PROCESS-ROW
               END-PERFORM
           END-IF

      *    Then handle any error condition (+100 is normal end)
           IF SQLCODE < 0
               PERFORM 9000-DB2-ERROR-HANDLER
           END-IF

Choosing the Right Rowset Size

The optimal rowset size depends on several factors that Tomás Rivera tracks at MedClaim:

  Factor               Smaller rowset (10-50)       Larger rowset (100-500)
  -------------------  ---------------------------  --------------------------
  Memory per task      Lower                        Higher
  SQL call overhead    More calls, more overhead    Fewer calls, less overhead
  Lock duration        Shorter (commit more often)  Longer
  Restart granularity  Finer                        Coarser
  Optimal for          Online (CICS) transactions   Batch with large volumes

For CICS programs, a rowset size of 10-20 is typical because the program must complete quickly and memory is shared among thousands of tasks. For batch programs, 100-200 is the sweet spot. Beyond 500, the diminishing returns rarely justify the increased memory and working storage complexity.

      *--------------------------------------------------------------
      * Memory calculation for rowset sizing
      * Each row: 10 (acct) + 50 (name) + 7 (bal) + 1 (status) = 68 bytes
      * Plus indicator array: 4 fields x 2 bytes = 8 bytes per row
      * Total per row: 76 bytes
      *
      * Rowset 100: 7,600 bytes
      * Rowset 500: 38,000 bytes
      * Rowset 1000: 76,000 bytes
      *--------------------------------------------------------------
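The arithmetic in the comment above generalizes to a one-line helper (Python, helper name invented):

```python
def rowset_bytes(row_bytes, ind_fields, rowset_size, ind_bytes=2):
    """Working-storage footprint of one rowset: data plus indicators."""
    return (row_bytes + ind_fields * ind_bytes) * rowset_size

# 68 data bytes + 4 two-byte indicators = 76 bytes per row
print(rowset_bytes(68, 4, 100))       # 7600
print(rowset_bytes(68, 4, 500))       # 38000
```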

28.7 Stored Procedures: Server-Side Business Logic

A stored procedure is a named routine stored in the DB2 catalog and executed on the database server. From COBOL, you invoke a stored procedure with the CALL statement. Stored procedures offer several advantages:

  • Reduced network traffic: Multiple SQL operations execute on the server with a single call from the client
  • Centralized business logic: Rules encoded in stored procedures are enforced regardless of which application accesses the data
  • Security: Users can be granted EXECUTE authority on a procedure without direct access to underlying tables

Calling a Stored Procedure from COBOL

       WORKING-STORAGE SECTION.
       01  WS-ACCT-NUM             PIC X(10).
       01  WS-TXN-TYPE             PIC X(2).
       01  WS-TXN-AMOUNT           PIC S9(11)V9(2) COMP-3.
       01  WS-NEW-BALANCE          PIC S9(11)V9(2) COMP-3.
       01  WS-RETURN-CODE          PIC S9(9) COMP.
       01  WS-RETURN-MSG           PIC X(80).

       PROCEDURE DIVISION.
           MOVE 'A001234567' TO WS-ACCT-NUM
           MOVE 'CR' TO WS-TXN-TYPE
           MOVE 1500.00 TO WS-TXN-AMOUNT

           EXEC SQL
               CALL GLOBALBANK.PROCESS_TRANSACTION
                   (:WS-ACCT-NUM,
                    :WS-TXN-TYPE,
                    :WS-TXN-AMOUNT,
                    :WS-NEW-BALANCE,
                    :WS-RETURN-CODE,
                    :WS-RETURN-MSG)
           END-EXEC

           EVALUATE SQLCODE
               WHEN 0
                   EVALUATE WS-RETURN-CODE
                       WHEN 0
                           DISPLAY 'Transaction successful'
                           DISPLAY 'New balance: ' WS-NEW-BALANCE
                       WHEN 1
                           DISPLAY 'Insufficient funds'
                       WHEN 2
                           DISPLAY 'Account frozen'
                       WHEN OTHER
                           DISPLAY 'Business error: '
                               WS-RETURN-MSG
                   END-EVALUATE
               WHEN +466
                   DISPLAY 'Stored procedure returned result set'
               WHEN OTHER
                   DISPLAY 'DB2 error calling procedure: '
                       SQLCODE
                   PERFORM 9000-DB2-ERROR-HANDLER
           END-EVALUATE

Parameter Modes: IN, OUT, INOUT

Stored procedure parameters have three modes:

| Mode | Direction | Example Use |
|---|---|---|
| IN | Caller → Procedure | Account number to look up |
| OUT | Procedure → Caller | Computed balance to return |
| INOUT | Both directions | Counter incremented by procedure |

The CREATE PROCEDURE DDL defines these modes:

CREATE PROCEDURE GLOBALBANK.PROCESS_TRANSACTION
    (IN  P_ACCT_NUM     CHAR(10),
     IN  P_TXN_TYPE     CHAR(2),
     IN  P_TXN_AMOUNT   DECIMAL(13,2),
     OUT P_NEW_BALANCE  DECIMAL(13,2),
     OUT P_RETURN_CODE  INTEGER,
     OUT P_RETURN_MSG   VARCHAR(80))
    LANGUAGE COBOL
    EXTERNAL NAME TXNPROC
    PARAMETER STYLE GENERAL
    MODIFIES SQL DATA
    COMMIT ON RETURN NO
    WLM ENVIRONMENT WLMENV1

Stored Procedures That Return Result Sets

A stored procedure can return one or more result sets by declaring and opening cursors without closing them:

      *--------------------------------------------------------------
      * Inside the stored procedure (server-side COBOL program)
      *--------------------------------------------------------------
           EXEC SQL
               DECLARE TXN-CURSOR CURSOR WITH RETURN FOR
                   SELECT TXN_ID, TXN_DATE, TXN_AMOUNT,
                          TXN_TYPE, TXN_DESC
                   FROM TXN_HISTORY
                   WHERE ACCT_NUMBER = :P-ACCT-NUM
                   ORDER BY TXN_DATE DESC
                   FETCH FIRST 50 ROWS ONLY
           END-EXEC

           EXEC SQL OPEN TXN-CURSOR END-EXEC
      *    Do NOT close the cursor — it becomes the result set

The calling program processes the result set:

      *--------------------------------------------------------------
      * In the calling program — process result set
      * WS-RESULT-LOC must be declared in WORKING-STORAGE as:
      *   01  WS-RESULT-LOC USAGE SQL TYPE IS
      *       RESULT-SET-LOCATOR VARYING.
      *--------------------------------------------------------------
           EXEC SQL
               CALL GLOBALBANK.GET_TXN_HISTORY
                   (:WS-ACCT-NUM)
           END-EXEC

           IF SQLCODE = +466
      *        Procedure returned a result set
               EXEC SQL
                   ASSOCIATE LOCATOR (:WS-RESULT-LOC)
                       WITH PROCEDURE GLOBALBANK.GET_TXN_HISTORY
               END-EXEC

               EXEC SQL
                   ALLOCATE TXN-RS CURSOR FOR RESULT SET
                       :WS-RESULT-LOC
               END-EXEC

                PERFORM UNTIL SQLCODE = +100 OR SQLCODE < 0
                   EXEC SQL
                       FETCH TXN-RS
                       INTO :WS-TXN-ID, :WS-TXN-DATE,
                            :WS-TXN-AMOUNT, :WS-TXN-TYPE,
                            :WS-TXN-DESC
                   END-EXEC
                   IF SQLCODE = 0
                       PERFORM 5000-DISPLAY-TXN-ROW
                   END-IF
               END-PERFORM
           END-IF

🔗 Connection to Chapter 29: Stored procedures are particularly powerful in CICS environments. A CICS transaction can CALL a stored procedure to perform complex database operations in a single round trip, keeping the transaction response time low. We will see this pattern in action when we build CICS transactions in the next chapter.

28.8 Performance Tuning: Making DB2 Work with You

In production, a poorly tuned SQL statement can bring an entire DB2 subsystem to its knees. Tomás Rivera, MedClaim's DBA, has a saying: "Every SQL statement is innocent until it hits production volume."

Understanding Access Paths with EXPLAIN

The EXPLAIN statement populates the PLAN_TABLE with DB2's chosen access path for a query. This tells you how DB2 will execute your SQL.

EXPLAIN PLAN SET QUERYNO = 1001 FOR
    SELECT C.CLAIM_NUMBER, C.CLAIM_AMOUNT,
           M.MEMBER_NAME, P.PROVIDER_NAME
    FROM CLAIM C
    JOIN MEMBER M ON C.MEMBER_ID = M.MEMBER_ID
    JOIN PROVIDER P ON C.PROVIDER_ID = P.PROVIDER_ID
    WHERE C.CLAIM_STATUS = 'PENDING'
    AND C.SUBMIT_DATE >= CURRENT DATE - 30 DAYS
    ORDER BY C.CLAIM_AMOUNT DESC

Key columns in the PLAN_TABLE output:

| Column | What It Tells You |
|---|---|
| ACCESSTYPE | I (index), R (tablespace scan), N (index access via IN-list) |
| MATCHCOLS | Number of index columns matched in predicate |
| ACCESSNAME | Name of the index used |
| INDEXONLY | Y if query can be satisfied from index alone |
| SORTC_JOIN | Y if a sort is needed for the join |
| TSLOCKMODE | Lock mode: IS, IX, S, X, U |
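
After the EXPLAIN runs, you read the chosen access path back with an ordinary SELECT against your PLAN_TABLE (a sketch; in practice the table is qualified with your own authorization ID):

SELECT QBLOCKNO, PLANNO, TNAME, ACCESSTYPE, MATCHCOLS,
       ACCESSNAME, INDEXONLY, TSLOCKMODE
FROM PLAN_TABLE
WHERE QUERYNO = 1001
ORDER BY QBLOCKNO, PLANNO

One row appears per table accessed in each query block, so a three-table join like the claim query above produces three rows to inspect.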

Access Path Red Flags

Tomás teaches his team to watch for these warning signs:

  1. Tablespace scan (ACCESSTYPE = 'R') on a large table: Usually means a missing or unusable index.
  2. MATCHCOLS = 0: The index is being scanned entirely, not used efficiently.
  3. Multiple sorts: Each sort requires temporary work space and CPU.
  4. Sequential prefetch (PREFETCH = 'S') on a query expected to return few rows: DB2 is set up to read far more data than needed.
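
These checks can be automated. A sketch of a PLAN_TABLE query that surfaces the first two red flags across recently explained statements:

SELECT QUERYNO, TNAME, ACCESSTYPE, MATCHCOLS, ACCESSNAME
FROM PLAN_TABLE
WHERE ACCESSTYPE = 'R'
   OR (ACCESSTYPE = 'I' AND MATCHCOLS = 0)
ORDER BY QUERYNO, TNAME

A nightly job that runs this against the team's PLAN_TABLE catches regressions before they reach production.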

Index Usage Patterns

The order of columns in a composite index matters enormously:

-- Index: IX_CLAIM_STATUS_DATE (CLAIM_STATUS, SUBMIT_DATE)

-- GOOD: Both index columns match (MATCHCOLS = 2)
SELECT * FROM CLAIM
WHERE CLAIM_STATUS = 'PENDING'
AND SUBMIT_DATE >= '2024-01-01'

-- OK: First column matches (MATCHCOLS = 1)
SELECT * FROM CLAIM
WHERE CLAIM_STATUS = 'PENDING'

-- BAD: Cannot use index efficiently (MATCHCOLS = 0)
SELECT * FROM CLAIM
WHERE SUBMIT_DATE >= '2024-01-01'
-- The index starts with CLAIM_STATUS; skipping it
-- means DB2 cannot use the index efficiently

📊 The Column Order Rule: In a composite index on columns (A, B, C), predicates must match from left to right without gaps. A predicate on A and C skips B, so MATCHCOLS = 1 (only A matches). Always design indexes to match your most common query patterns.

COBOL-Specific Performance Patterns

      *--------------------------------------------------------------
      * ANTI-PATTERN: Fetching inside a loop with individual lookups
      *--------------------------------------------------------------
           PERFORM VARYING WS-IDX FROM 1 BY 1
               UNTIL WS-IDX > WS-CLAIM-COUNT
               EXEC SQL
                   SELECT PROVIDER_NAME
                   INTO :WS-PROV-NAME
                   FROM PROVIDER
                   WHERE PROVIDER_ID = :WS-PROV-ID(WS-IDX)
               END-EXEC
           END-PERFORM
      *    This makes WS-CLAIM-COUNT separate DB2 calls!

      *--------------------------------------------------------------
      * BETTER: Use a JOIN to get everything in one pass
      *--------------------------------------------------------------
           EXEC SQL
               DECLARE CLAIM-PROV-CURSOR CURSOR FOR
                   SELECT C.CLAIM_NUMBER, C.CLAIM_AMOUNT,
                          P.PROVIDER_NAME
                   FROM CLAIM C
                   JOIN PROVIDER P
                       ON C.PROVIDER_ID = P.PROVIDER_ID
                   WHERE C.BATCH_ID = :WS-BATCH-ID
                   ORDER BY C.CLAIM_NUMBER
           END-EXEC

🔵 Derek's Learning Moment: When Derek first saw the JOIN version, he asked: "But the JOIN query is more complex — isn't it slower?" Tomás explained: "The optimizer is very good at JOINs. What it cannot optimize is 500,000 separate SQL calls. Each call crosses the boundary between the COBOL runtime and the DB2 engine. That boundary crossing — the context switch — costs about 0.1 milliseconds. Multiply by 500,000 and you get 50 seconds of pure overhead, before DB2 even starts reading data."

Predicate Pushdown and Filter Factor

Understanding how DB2 evaluates predicates helps you write more efficient WHERE clauses. DB2 classifies predicates into categories:

Stage 1 predicates are evaluated by the Data Manager during index or data access. They are the most efficient:

  • Column = value
  • Column > value, Column < value
  • Column BETWEEN value AND value
  • Column IN (list)
  • Column LIKE 'prefix%' (with a leading literal)

Stage 2 predicates are evaluated by the Relational Data System after data retrieval. They are less efficient:

  • Column LIKE '%substring%' (leading wildcard)
  • SUBSTR(column, n, m) = value
  • YEAR(date_column) = 2024
  • Column1 = Column2 (different tables)
  • Arithmetic expressions on columns

      *--------------------------------------------------------------
      * ANTI-PATTERN: Stage 2 predicate (function on column)
      *--------------------------------------------------------------
           EXEC SQL
               SELECT COUNT(*)
               INTO :WS-COUNT
               FROM CLAIM
               WHERE YEAR(SUBMIT_DATE) = 2024
                 AND MONTH(SUBMIT_DATE) = 11
           END-EXEC
      *    DB2 must read every row and apply YEAR/MONTH functions

      *--------------------------------------------------------------
      * BETTER: Stage 1 predicate (range on raw column)
      *--------------------------------------------------------------
           EXEC SQL
               SELECT COUNT(*)
               INTO :WS-COUNT
               FROM CLAIM
               WHERE SUBMIT_DATE >= '2024-11-01'
                 AND SUBMIT_DATE < '2024-12-01'
           END-EXEC
      *    DB2 can use an index on SUBMIT_DATE directly

The filter factor of a predicate estimates what fraction of rows it will match. A predicate with filter factor 0.01 (matches 1% of rows) is more selective than one with 0.50 (50%). DB2 uses filter factors to choose access paths — more selective predicates are applied first.
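
Filter factors multiply for ANDed predicates. A back-of-the-envelope estimate in the style of the rowset memory calculation earlier (the catalog statistics shown are hypothetical):

      *--------------------------------------------------------------
      * Filter factor estimate for the pending-claims query
      * CLAIM_STATUS: 5 distinct values (COLCARDF = 5)
      *    FF(CLAIM_STATUS = 'PENDING')  ~ 1/5     = 0.20
      * SUBMIT_DATE >= CURRENT DATE - 30 DAYS
      *    30 days of a ~3-year table    ~ 30/1095 = 0.027
      * Combined (AND): 0.20 x 0.027              = 0.0054
      * On 40,000,000 rows: ~216,000 qualifying rows expected
      *--------------------------------------------------------------

Estimates like this tell you whether the optimizer should prefer the index on CLAIM_STATUS, the one on SUBMIT_DATE, or a composite of both.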

📊 Performance Principle: Always write predicates that can be evaluated at Stage 1. Avoid applying functions to columns in WHERE clauses. If you need to filter by a function result (like YEAR), consider adding a computed column or adjusting your predicate to use a range comparison on the original column.

The BIND Process and Package Versioning

Before a COBOL-DB2 program can run, its SQL must be bound into a plan or package. The BIND step associates the program's SQL with the DB2 catalog and generates access plans.

DBRM (from precompile) → BIND PACKAGE → Package (stored in DB2)
                          or BIND PLAN

Key BIND options that affect performance:

| Option | Values | Effect |
|---|---|---|
| ISOLATION | UR, CS, RS, RR | Default isolation level for all SQL |
| CURRENTDATA | YES, NO | Whether to refetch data for currency |
| DEGREE | 1, ANY | Parallel processing for queries |
| REOPT | NONE, ALWAYS, ONCE | When to re-optimize dynamic SQL |
| VALIDATE | BIND, RUN | When to validate table/column existence |

A typical BIND job for the report builder package:

//BIND     EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  DSN SYSTEM(DB2P)
  BIND PACKAGE(GLBANKPK) -
       MEMBER(RPTBUILD) -
       ISOLATION(CS) -
       VALIDATE(BIND) -
       REOPT(ALWAYS) -
       ACTION(REPLACE)
  END
/*

For programs with dynamic SQL, REOPT(ALWAYS) tells DB2 to regenerate the access plan at each PREPARE, using current parameter values and table statistics. This produces better plans for skewed data distributions but adds optimizer overhead. For programs where the data distribution is uniform, REOPT(NONE) is more efficient.

COMMIT Frequency in Batch Programs

Long-running batch programs must issue periodic COMMITs to release locks and allow restart in case of failure:

       01  WS-COMMIT-FREQ          PIC S9(9) COMP VALUE 500.
       01  WS-ROWS-PROCESSED       PIC S9(9) COMP VALUE 0.
       01  WS-LAST-KEY-COMMITTED   PIC X(10).

       PROCEDURE DIVISION.
      *    ... cursor processing loop ...

           ADD 1 TO WS-ROWS-PROCESSED

           IF FUNCTION MOD(WS-ROWS-PROCESSED, WS-COMMIT-FREQ) = 0
               MOVE WS-CURRENT-KEY TO WS-LAST-KEY-COMMITTED
               EXEC SQL COMMIT END-EXEC
               IF SQLCODE NOT = 0
                   DISPLAY 'COMMIT failed: ' SQLCODE
                   PERFORM 9000-DB2-ERROR-HANDLER
               END-IF
               DISPLAY 'Committed at row: '
                   WS-ROWS-PROCESSED
                   ' Key: ' WS-LAST-KEY-COMMITTED
           END-IF

⚠️ Critical Warning: After a COMMIT, all cursors declared WITHOUT the WITH HOLD option are closed. If your batch program commits within a cursor loop, declare the cursor WITH HOLD or be prepared to reposition after each COMMIT.

           EXEC SQL
               DECLARE BATCH-CURSOR CURSOR WITH HOLD FOR
                   SELECT CLAIM_NUMBER, CLAIM_AMOUNT
                   FROM CLAIM
                   WHERE CLAIM_STATUS = 'PENDING'
                   ORDER BY CLAIM_NUMBER
           END-EXEC
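
When WITH HOLD is not an option, the alternative is to reposition after each COMMIT using the last committed key. A sketch that reuses WS-LAST-KEY-COMMITTED from the commit example above (the cursor name RESTART-CURSOR is illustrative):

      *--------------------------------------------------------------
      * Repositioning pattern: resume reading past the saved key
      *--------------------------------------------------------------
           EXEC SQL
               DECLARE RESTART-CURSOR CURSOR FOR
                   SELECT CLAIM_NUMBER, CLAIM_AMOUNT
                   FROM CLAIM
                   WHERE CLAIM_STATUS = 'PENDING'
                   AND CLAIM_NUMBER > :WS-LAST-KEY-COMMITTED
                   ORDER BY CLAIM_NUMBER
           END-EXEC

      *    After each COMMIT closes the cursor, simply reopen it;
      *    the predicate on CLAIM_NUMBER skips rows already done
           EXEC SQL OPEN RESTART-CURSOR END-EXEC

This pattern only works when the cursor is ordered on a unique key, which is one more reason batch cursors should ORDER BY the primary key.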

28.9 Concurrency: Isolation Levels and Locking

When multiple programs access the same data simultaneously, DB2 uses isolation levels to control what each program can see and how long locks are held.

DB2 Isolation Levels

| Level | Abbreviation | Lock Duration | Phantom Reads | Non-Repeatable Reads | Dirty Reads |
|---|---|---|---|---|---|
| Uncommitted Read | UR | No locks acquired | Yes | Yes | Yes |
| Cursor Stability | CS | Lock released when cursor moves | Yes | Yes | No |
| Read Stability | RS | Locks held on accessed rows until COMMIT | Yes | No | No |
| Repeatable Read | RR | Locks held on all qualifying rows until COMMIT | No | No | No |

      *    Set isolation level at cursor level
           EXEC SQL
               DECLARE ONLINE-CURSOR CURSOR FOR
                   SELECT ACCT_NUMBER, ACCT_BALANCE
                   FROM ACCOUNT
                   WHERE BRANCH_CODE = :WS-BRANCH
                   WITH CS
           END-EXEC

      *    Or for read-only reporting with no locking
           EXEC SQL
               DECLARE REPORT-CURSOR CURSOR FOR
                   SELECT ACCT_NUMBER, ACCT_BALANCE
                   FROM ACCOUNT
                   WHERE BRANCH_CODE = :WS-BRANCH
                   WITH UR
           END-EXEC

Choosing the Right Isolation Level

The choice of isolation level has profound implications for both correctness and performance. Maria Chen uses this decision framework at GlobalBank:

Uncommitted Read (UR) — Use for read-only queries where approximate data is acceptable: balance inquiries (the user will verify before transacting), report generation, record counting, and dashboard displays. UR acquires no locks at all, meaning it never blocks and is never blocked. The trade-off: you might read data that another transaction has changed but not yet committed. If that transaction rolls back, you have seen "phantom" data that never really existed.

Cursor Stability (CS) — The default, and the right choice for most online transactions. It locks the current row only while the cursor is positioned on it. When you FETCH the next row, the previous lock is released. This provides a good balance between concurrency and consistency for typical CICS transactions.

Read Stability (RS) — Use when you need all rows you read to remain stable for the duration of your unit of work — for example, when you read data to make a decision and then update based on that decision. RS holds locks on all accessed rows until COMMIT. Be careful: if you read 10,000 rows, you hold 10,000 locks.

Repeatable Read (RR) — The strongest isolation level. Not only are accessed rows locked, but DB2 also locks qualifying rows that could have been accessed (to prevent phantom inserts). Use sparingly — it significantly reduces concurrency. Appropriate for financial closing operations where absolute consistency is required.

      *--------------------------------------------------------------
      * Example: Monthly interest calculation requires RS
      * We read the balance, calculate interest, then update.
      * RS ensures the balance doesn't change between read and update.
      *--------------------------------------------------------------
           EXEC SQL
               DECLARE INTEREST-CURSOR CURSOR WITH HOLD FOR
                   SELECT ACCT_NUMBER, ACCT_BALANCE
                   FROM ACCOUNT
                   WHERE ACCT_STATUS = 'A'
                   AND ACCT_TYPE IN ('SA', 'MM', 'CD')
                   FOR UPDATE OF ACCT_BALANCE
                   WITH RS
           END-EXEC

Lock Escalation and Timeout

When a program acquires too many page or row locks, DB2 escalates to a table lock — which blocks all other programs trying to access that table. This is why COMMIT frequency matters:

      *--------------------------------------------------------------
      * Handle lock timeout gracefully
      *--------------------------------------------------------------
           EXEC SQL
               UPDATE ACCOUNT
               SET ACCT_BALANCE = ACCT_BALANCE + :WS-AMOUNT
               WHERE ACCT_NUMBER = :WS-ACCT-NUM
           END-EXEC

           EVALUATE SQLCODE
               WHEN 0
                   CONTINUE
               WHEN -911
      *            Deadlock — DB2 already rolled back
                   DISPLAY 'Deadlock detected, retrying...'
                   ADD 1 TO WS-RETRY-COUNT
                   IF WS-RETRY-COUNT <= 3
                       PERFORM 2500-RETRY-UPDATE
                   ELSE
                       PERFORM 9000-DB2-ERROR-HANDLER
                   END-IF
                WHEN -913
       *            Lock timeout — statement failed, retry
                    DISPLAY 'Lock timeout, retrying...'
                   ADD 1 TO WS-RETRY-COUNT
                   IF WS-RETRY-COUNT <= 3
                       PERFORM 2500-RETRY-UPDATE
                   ELSE
                       PERFORM 9000-DB2-ERROR-HANDLER
                   END-IF
               WHEN OTHER
                   PERFORM 9000-DB2-ERROR-HANDLER
           END-EVALUATE

💡 Deadlock Strategy: When DB2 detects a deadlock (two programs waiting for each other's locks), it chooses one as the "victim" and rolls back its work (SQLCODE -911). Your program must handle this by retrying the operation. Always implement a retry limit — infinite retries can cause CPU spins.
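
The 2500-RETRY-UPDATE paragraph referenced above is not shown; a minimal sketch simply re-drives the UPDATE and lets the caller re-check SQLCODE. Note that after a -911 the entire unit of work was rolled back, so for the single-statement transaction shown this is sufficient, but a multi-statement transaction must redo everything since the last COMMIT (a production version would also pause briefly between attempts):

        2500-RETRY-UPDATE.
       *    Re-drive the failed UPDATE; the caller re-checks
       *    SQLCODE and counts the attempt against the retry limit
            EXEC SQL
                UPDATE ACCOUNT
                SET ACCT_BALANCE = ACCT_BALANCE + :WS-AMOUNT
                WHERE ACCT_NUMBER = :WS-ACCT-NUM
            END-EXEC.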

28.10 GlobalBank Scenario: Dynamic Reporting and Stored Procedures

Maria Chen needs to build a flexible reporting system for GlobalBank's branch managers. Each manager wants different columns, filters, and sort orders. Static SQL would require dozens of separate programs; dynamic SQL makes one program serve all needs.

Meanwhile, Derek Washington is tasked with creating a stored procedure that encapsulates the core account transfer logic — debit one account, credit another, log the transaction, and enforce business rules — all in a single atomic operation.

The Dynamic Report Builder

       IDENTIFICATION DIVISION.
       PROGRAM-ID. RPTBUILD.
      *--------------------------------------------------------------
      * Dynamic Report Builder for Branch Managers
      * Uses dynamic SQL to construct queries from user parameters
      *--------------------------------------------------------------

       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-REPORT-PARAMS.
           05  WS-RPT-BRANCH      PIC X(4).
           05  WS-RPT-ACCT-TYPE   PIC X(2).
           05  WS-RPT-MIN-BAL     PIC S9(11)V9(2) COMP-3.
           05  WS-RPT-SORT-COL    PIC X(20).
           05  WS-RPT-SORT-DIR    PIC X(4).
           05  WS-RPT-MAX-ROWS    PIC S9(4) COMP.

       01  WS-SQL-BUFFER           PIC X(2000).
       01  WS-PARAM-COUNT          PIC S9(4) COMP VALUE 0.
       01  WS-PARAM-VALUES.
           05  WS-PARAM OCCURS 10 TIMES.
               10  WS-PARAM-CHAR  PIC X(50).
               10  WS-PARAM-NUM   PIC S9(11)V9(2) COMP-3.
               10  WS-PARAM-TYPE  PIC X(1).

       01  WS-RESULT-LINE.
           05  WS-RES-ACCT-NUM    PIC X(10).
           05  WS-RES-ACCT-NAME   PIC X(50).
           05  WS-RES-BALANCE     PIC S9(11)V9(2) COMP-3.
           05  WS-RES-STATUS      PIC X(1).
           05  WS-RES-OPEN-DATE   PIC X(10).

       01  WS-ROW-COUNT            PIC S9(9) COMP VALUE 0.

       PROCEDURE DIVISION.
       0000-MAIN.
           PERFORM 1000-GET-PARAMETERS
           PERFORM 2000-BUILD-QUERY
           PERFORM 3000-EXECUTE-REPORT
           PERFORM 9999-CLEANUP
           STOP RUN.

       2000-BUILD-QUERY.
           INITIALIZE WS-SQL-BUFFER
           MOVE 0 TO WS-PARAM-COUNT

           STRING
               'SELECT ACCT_NUMBER, ACCT_NAME, '
               'ACCT_BALANCE, ACCT_STATUS, OPEN_DATE '
               'FROM ACCOUNT WHERE 1=1'
               DELIMITED BY SIZE
               INTO WS-SQL-BUFFER
           END-STRING

           IF WS-RPT-BRANCH NOT = SPACES
               STRING WS-SQL-BUFFER
                      ' AND BRANCH_CODE = ?'
                   DELIMITED BY '  '
                   INTO WS-SQL-BUFFER
               END-STRING
               ADD 1 TO WS-PARAM-COUNT
               MOVE WS-RPT-BRANCH
                   TO WS-PARAM-CHAR(WS-PARAM-COUNT)
               MOVE 'C' TO WS-PARAM-TYPE(WS-PARAM-COUNT)
           END-IF

           IF WS-RPT-ACCT-TYPE NOT = SPACES
               STRING WS-SQL-BUFFER
                      ' AND ACCT_TYPE = ?'
                   DELIMITED BY '  '
                   INTO WS-SQL-BUFFER
               END-STRING
               ADD 1 TO WS-PARAM-COUNT
               MOVE WS-RPT-ACCT-TYPE
                   TO WS-PARAM-CHAR(WS-PARAM-COUNT)
               MOVE 'C' TO WS-PARAM-TYPE(WS-PARAM-COUNT)
           END-IF

           IF WS-RPT-MIN-BAL > ZEROS
               STRING WS-SQL-BUFFER
                      ' AND ACCT_BALANCE >= ?'
                   DELIMITED BY '  '
                   INTO WS-SQL-BUFFER
               END-STRING
               ADD 1 TO WS-PARAM-COUNT
               MOVE WS-RPT-MIN-BAL
                   TO WS-PARAM-NUM(WS-PARAM-COUNT)
               MOVE 'N' TO WS-PARAM-TYPE(WS-PARAM-COUNT)
           END-IF

      *    Validate and append ORDER BY
           EVALUATE WS-RPT-SORT-COL
               WHEN 'BALANCE'
                   STRING WS-SQL-BUFFER
                          ' ORDER BY ACCT_BALANCE'
                       DELIMITED BY '  '
                       INTO WS-SQL-BUFFER
                   END-STRING
               WHEN 'NAME'
                   STRING WS-SQL-BUFFER
                          ' ORDER BY ACCT_NAME'
                       DELIMITED BY '  '
                       INTO WS-SQL-BUFFER
                   END-STRING
               WHEN 'ACCOUNT'
                   STRING WS-SQL-BUFFER
                          ' ORDER BY ACCT_NUMBER'
                       DELIMITED BY '  '
                       INTO WS-SQL-BUFFER
                   END-STRING
               WHEN OTHER
                   STRING WS-SQL-BUFFER
                          ' ORDER BY ACCT_NUMBER'
                       DELIMITED BY '  '
                       INTO WS-SQL-BUFFER
                   END-STRING
           END-EVALUATE

           IF WS-RPT-SORT-DIR = 'DESC'
               STRING WS-SQL-BUFFER ' DESC'
                   DELIMITED BY '  '
                   INTO WS-SQL-BUFFER
               END-STRING
           END-IF

      *    Limit result set
           IF WS-RPT-MAX-ROWS > 0
               STRING WS-SQL-BUFFER
                      ' FETCH FIRST ? ROWS ONLY'
                   DELIMITED BY '  '
                   INTO WS-SQL-BUFFER
               END-STRING
               ADD 1 TO WS-PARAM-COUNT
               MOVE WS-RPT-MAX-ROWS
                   TO WS-PARAM-NUM(WS-PARAM-COUNT)
               MOVE 'N' TO WS-PARAM-TYPE(WS-PARAM-COUNT)
           END-IF

           EXEC SQL
               PREPARE DYN-RPT FROM :WS-SQL-BUFFER
           END-EXEC
           IF SQLCODE NOT = 0
               DISPLAY 'PREPARE failed: ' SQLCODE
               DISPLAY 'SQL: ' WS-SQL-BUFFER
               PERFORM 9000-DB2-ERROR-HANDLER
           END-IF.
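
The 3000-EXECUTE-REPORT paragraph (not shown above) opens a cursor for the prepared statement. Because the number of parameter markers varies from run to run, values are supplied through an SQLDA rather than a fixed USING list. A much-simplified sketch (the cursor name and the WS-SQLDA setup are illustrative; building the SQLDA from WS-PARAM-VALUES is not shown):

        3000-EXECUTE-REPORT.
       *    Requires: EXEC SQL DECLARE DYN-CURSOR CURSOR FOR
       *    DYN-RPT END-EXEC, and WS-SQLDA populated from
       *    WS-PARAM-VALUES before the OPEN
            EXEC SQL
                OPEN DYN-CURSOR USING DESCRIPTOR :WS-SQLDA
            END-EXEC
            PERFORM UNTIL SQLCODE NOT = 0
                EXEC SQL
                    FETCH DYN-CURSOR
                    INTO :WS-RES-ACCT-NUM, :WS-RES-ACCT-NAME,
                         :WS-RES-BALANCE, :WS-RES-STATUS,
                         :WS-RES-OPEN-DATE
                END-EXEC
                IF SQLCODE = 0
                    ADD 1 TO WS-ROW-COUNT
       *            ... format and write the report line ...
                END-IF
            END-PERFORM
            EXEC SQL CLOSE DYN-CURSOR END-EXEC.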

🔵 Derek's Observation: "Why not just let the user type raw SQL?" Maria's answer: "Three reasons. First, SQL injection. Second, users do not know SQL. Third, even if they did, we cannot let arbitrary queries run against production — a bad query could lock the ACCOUNT table and stop all transactions." This exchange captures a core principle of defensive programming: never expose raw database access to end users.

The Account Transfer Stored Procedure

       IDENTIFICATION DIVISION.
       PROGRAM-ID. TXNXFER.
      *--------------------------------------------------------------
      * Stored Procedure: GLOBALBANK.TRANSFER_FUNDS
      * Performs atomic fund transfer between two accounts
      *--------------------------------------------------------------
       DATA DIVISION.
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.

       01  WS-FROM-BALANCE     PIC S9(11)V9(2) COMP-3.
       01  WS-TO-BALANCE       PIC S9(11)V9(2) COMP-3.
       01  WS-FROM-STATUS      PIC X(1).
       01  WS-TO-STATUS        PIC X(1).
       01  WS-TXN-TIMESTAMP    PIC X(26).

       LINKAGE SECTION.
       01  LS-FROM-ACCT        PIC X(10).
       01  LS-TO-ACCT          PIC X(10).
       01  LS-AMOUNT           PIC S9(11)V9(2) COMP-3.
       01  LS-NEW-FROM-BAL     PIC S9(11)V9(2) COMP-3.
       01  LS-NEW-TO-BAL       PIC S9(11)V9(2) COMP-3.
       01  LS-RETURN-CODE      PIC S9(9) COMP.
       01  LS-RETURN-MSG       PIC X(80).

       PROCEDURE DIVISION USING LS-FROM-ACCT LS-TO-ACCT
                                LS-AMOUNT LS-NEW-FROM-BAL
                                LS-NEW-TO-BAL LS-RETURN-CODE
                                LS-RETURN-MSG.
       0000-MAIN.
           INITIALIZE LS-RETURN-CODE LS-RETURN-MSG
           MOVE 0 TO LS-NEW-FROM-BAL LS-NEW-TO-BAL

      *    Validate amount
           IF LS-AMOUNT <= 0
               MOVE 10 TO LS-RETURN-CODE
               MOVE 'Transfer amount must be positive'
                   TO LS-RETURN-MSG
               GOBACK
           END-IF

      *    Lock and read source account (FOR UPDATE)
           EXEC SQL
               SELECT ACCT_BALANCE, ACCT_STATUS
               INTO :WS-FROM-BALANCE, :WS-FROM-STATUS
               FROM ACCOUNT
               WHERE ACCT_NUMBER = :LS-FROM-ACCT
               FOR UPDATE OF ACCT_BALANCE
           END-EXEC

           EVALUATE SQLCODE
               WHEN 0    CONTINUE
               WHEN +100
                   MOVE 20 TO LS-RETURN-CODE
                   MOVE 'Source account not found'
                       TO LS-RETURN-MSG
                   GOBACK
               WHEN -911
                   MOVE 30 TO LS-RETURN-CODE
                   MOVE 'Deadlock on source account'
                       TO LS-RETURN-MSG
                   GOBACK
               WHEN OTHER
                   MOVE 99 TO LS-RETURN-CODE
                   STRING 'DB2 error on source: SQLCODE='
                          SQLCODE
                       DELIMITED BY SIZE
                       INTO LS-RETURN-MSG
                   END-STRING
                   GOBACK
           END-EVALUATE

      *    Business rule: account must be active
           IF WS-FROM-STATUS NOT = 'A'
               MOVE 21 TO LS-RETURN-CODE
               MOVE 'Source account is not active'
                   TO LS-RETURN-MSG
               EXEC SQL ROLLBACK END-EXEC
               GOBACK
           END-IF

      *    Business rule: sufficient funds
           IF WS-FROM-BALANCE < LS-AMOUNT
               MOVE 22 TO LS-RETURN-CODE
               MOVE 'Insufficient funds in source account'
                   TO LS-RETURN-MSG
               EXEC SQL ROLLBACK END-EXEC
               GOBACK
           END-IF

      *    Lock and read destination account
           EXEC SQL
               SELECT ACCT_BALANCE, ACCT_STATUS
               INTO :WS-TO-BALANCE, :WS-TO-STATUS
               FROM ACCOUNT
               WHERE ACCT_NUMBER = :LS-TO-ACCT
               FOR UPDATE OF ACCT_BALANCE
           END-EXEC

           EVALUATE SQLCODE
               WHEN 0    CONTINUE
               WHEN +100
                   MOVE 23 TO LS-RETURN-CODE
                   MOVE 'Destination account not found'
                       TO LS-RETURN-MSG
                   EXEC SQL ROLLBACK END-EXEC
                   GOBACK
               WHEN OTHER
                   MOVE 99 TO LS-RETURN-CODE
                   STRING 'DB2 error on destination: SQLCODE='
                          SQLCODE
                       DELIMITED BY SIZE
                       INTO LS-RETURN-MSG
                   END-STRING
                   EXEC SQL ROLLBACK END-EXEC
                   GOBACK
           END-EVALUATE

           IF WS-TO-STATUS NOT = 'A'
               MOVE 24 TO LS-RETURN-CODE
               MOVE 'Destination account is not active'
                   TO LS-RETURN-MSG
               EXEC SQL ROLLBACK END-EXEC
               GOBACK
           END-IF

      *    Perform the transfer
           COMPUTE WS-FROM-BALANCE =
               WS-FROM-BALANCE - LS-AMOUNT
           COMPUTE WS-TO-BALANCE =
               WS-TO-BALANCE + LS-AMOUNT

           EXEC SQL
               UPDATE ACCOUNT
               SET ACCT_BALANCE = :WS-FROM-BALANCE,
                   LAST_ACTIVITY = CURRENT TIMESTAMP
               WHERE ACCT_NUMBER = :LS-FROM-ACCT
           END-EXEC
           IF SQLCODE NOT = 0
               MOVE 99 TO LS-RETURN-CODE
               MOVE 'Failed to update source account'
                   TO LS-RETURN-MSG
               EXEC SQL ROLLBACK END-EXEC
               GOBACK
           END-IF

           EXEC SQL
               UPDATE ACCOUNT
               SET ACCT_BALANCE = :WS-TO-BALANCE,
                   LAST_ACTIVITY = CURRENT TIMESTAMP
               WHERE ACCT_NUMBER = :LS-TO-ACCT
           END-EXEC
           IF SQLCODE NOT = 0
               MOVE 99 TO LS-RETURN-CODE
               MOVE 'Failed to update dest account'
                   TO LS-RETURN-MSG
               EXEC SQL ROLLBACK END-EXEC
               GOBACK
           END-IF

      *    Log the transaction
           EXEC SQL
               SELECT CURRENT TIMESTAMP
               INTO :WS-TXN-TIMESTAMP
               FROM SYSIBM.SYSDUMMY1
           END-EXEC

           EXEC SQL
               INSERT INTO TXN_HISTORY
                   (TXN_ID, ACCT_NUMBER, TXN_AMOUNT,
                    TXN_TYPE, TXN_DATE, RELATED_ACCT)
               VALUES
                   (NEXT VALUE FOR GLOBALBANK.TXN_SEQ,
                    :LS-FROM-ACCT, :LS-AMOUNT,
                    'DR', :WS-TXN-TIMESTAMP, :LS-TO-ACCT)
           END-EXEC

           EXEC SQL
               INSERT INTO TXN_HISTORY
                   (TXN_ID, ACCT_NUMBER, TXN_AMOUNT,
                    TXN_TYPE, TXN_DATE, RELATED_ACCT)
               VALUES
                   (NEXT VALUE FOR GLOBALBANK.TXN_SEQ,
                    :LS-TO-ACCT, :LS-AMOUNT,
                    'CR', :WS-TXN-TIMESTAMP, :LS-FROM-ACCT)
           END-EXEC

      *    Success
           MOVE WS-FROM-BALANCE TO LS-NEW-FROM-BAL
           MOVE WS-TO-BALANCE TO LS-NEW-TO-BAL
           MOVE 0 TO LS-RETURN-CODE
           MOVE 'Transfer completed successfully'
               TO LS-RETURN-MSG
           GOBACK.

⚖️ Design Decision — Lock Ordering: Notice that this procedure always locks the source account before the destination account. If two simultaneous transfers run in opposite directions (A→B and B→A), each can hold its first lock while waiting for the other's, a classic deadlock. A production-quality version would lock accounts in a canonical order (e.g., lower account number first) regardless of transfer direction. This is a standard concurrency pattern called lock ordering.
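The canonical-order fix is language-independent. Here is a minimal Python sketch of the idea, using SQLite purely as a stand-in for DB2; the `account` table and `transfer` helper are hypothetical illustrations, not the book's stored procedure:

```python
def transfer(conn, from_acct, to_acct, amount):
    # Build (account, delta) pairs, then sort by account number so every
    # transfer touches rows in the same canonical order. Concurrent
    # A->B and B->A transfers can then no longer deadlock on each other.
    updates = sorted([(from_acct, -amount), (to_acct, +amount)])
    for acct, delta in updates:
        conn.execute(
            "UPDATE account SET balance = balance + ? WHERE acct_number = ?",
            (delta, acct))
    conn.commit()
```

The sort is the whole trick: the direction of the transfer no longer determines the order in which row locks are acquired.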

28.11 MedClaim Scenario: Batch Claim Processing with Optimized DB2

James Okafor's team processes 500,000 insurance claims per month. Their batch adjudication program must read claims, look up member and provider information, apply business rules, update claim status, and generate payment records — all within a four-hour batch window.

Tomás Rivera profiled the original program and found:

  • 500,000 single-row FETCHes for claims
  • 500,000 individual SELECTs for member lookup
  • 500,000 individual SELECTs for provider lookup
  • 500,000 individual UPDATEs for claim status
  • Total: 2,000,000 SQL calls = 4.5 hours elapsed

After optimization:

  • Multi-row FETCH with rowset of 100 for claims: 5,000 SQL calls
  • JOIN member and provider data into the cursor: eliminates 1,000,000 calls
  • Multi-row UPDATE using positioned updates: 5,000 SQL calls
  • Periodic COMMIT every 500 rows with WITH HOLD cursor
  • Total: ~10,000 SQL calls = 45 minutes elapsed — an 83% reduction
      *--------------------------------------------------------------
      * Optimized batch claim adjudication cursor
      * Joins three tables to eliminate row-by-row lookups
      *--------------------------------------------------------------
           EXEC SQL
               DECLARE ADJUD-CURSOR CURSOR WITH HOLD
                   WITH ROWSET POSITIONING FOR
                   SELECT C.CLAIM_NUMBER, C.CLAIM_AMOUNT,
                          C.SERVICE_CODE, C.SUBMIT_DATE,
                          M.MEMBER_NAME, M.PLAN_CODE,
                          M.COVERAGE_LEVEL,
                          P.PROVIDER_NAME, P.PROVIDER_TYPE,
                          P.NETWORK_STATUS
                   FROM CLAIM C
                   JOIN MEMBER M
                       ON C.MEMBER_ID = M.MEMBER_ID
                   JOIN PROVIDER P
                       ON C.PROVIDER_ID = P.PROVIDER_ID
                   WHERE C.CLAIM_STATUS = 'SUBMITTED'
                   ORDER BY C.CLAIM_NUMBER
           END-EXEC
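The rowset arithmetic is easy to demonstrate in miniature. This Python sketch uses SQLite's `fetchmany` purely as a stand-in for DB2 multi-row FETCH; the `claim` table here is a 1,000-row toy, not the MedClaim schema:

```python
import sqlite3

def count_fetch_calls(cursor, rowset_size):
    """Drain a result set, counting FETCH calls: rowset_size=1 mimics
    single-row FETCH; rowset_size=100 mimics a 100-row rowset."""
    calls = 0
    while True:
        rows = cursor.fetchmany(rowset_size)
        calls += 1            # one FETCH per rowset, as in DB2
        if not rows:
            break
    return calls

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claim (claim_number INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO claim VALUES (?)",
                 [(n,) for n in range(1, 1001)])  # 1,000 claims

single = count_fetch_calls(conn.execute("SELECT * FROM claim"), 1)
rowset = count_fetch_calls(conn.execute("SELECT * FROM claim"), 100)
print(single, rowset)  # 1001 vs 11 -- the final call returns no rows
```

Scaled to 500,000 claims, the same ratio yields the 5,000-call figure in the profile above.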

🧪 Lab Exercise: Create a simplified version of this scenario in your Student Mainframe Lab. Create three tables (CLAIM, MEMBER, PROVIDER) with 1,000 rows each. Write two programs: one that processes claims with individual lookups, and one that uses a JOIN cursor. Compare the SQL call counts using the DB2 trace facility or by counting your FETCH operations.

The Restart Strategy

Every production batch program must support restart — the ability to resume processing from where it left off after a failure. Tomás Rivera's restart strategy for the MedClaim adjudication program has three components:

Component 1: Checkpoint Table. Before each COMMIT, the program writes the last-processed CLAIM_NUMBER to a checkpoint table:

       01  WS-CHECKPOINT-KEY       PIC X(15).

      *    After processing each claim batch
           MOVE WS-CURRENT-CLAIM TO WS-CHECKPOINT-KEY

           EXEC SQL
               UPDATE BATCH_CHECKPOINT
               SET LAST_KEY = :WS-CHECKPOINT-KEY,
                   LAST_TIMESTAMP = CURRENT TIMESTAMP,
                   ROWS_PROCESSED = :WS-TOTAL-PROCESSED
               WHERE JOB_NAME = 'CLMADJUD'
           END-EXEC

           EXEC SQL COMMIT END-EXEC

Component 2: Restart Logic. When the program starts, it checks the checkpoint table and positions the cursor accordingly:

       1000-CHECK-RESTART.
           EXEC SQL
               SELECT LAST_KEY, ROWS_PROCESSED
               INTO :WS-CHECKPOINT-KEY, :WS-RESTART-COUNT
               FROM BATCH_CHECKPOINT
               WHERE JOB_NAME = 'CLMADJUD'
           END-EXEC

           IF SQLCODE = 0 AND WS-CHECKPOINT-KEY NOT = SPACES
               DISPLAY 'RESTART detected'
               DISPLAY 'Last key: ' WS-CHECKPOINT-KEY
               DISPLAY 'Rows already processed: '
                   WS-RESTART-COUNT
               SET WS-IS-RESTART TO TRUE
      *        Cursor WHERE clause uses > to skip processed rows:
      *        WHERE CLAIM_STATUS = 'SUBMITTED'
      *          AND CLAIM_NUMBER > :WS-CHECKPOINT-KEY
           ELSE
               SET WS-IS-RESTART TO FALSE
           END-IF.

Component 3: Idempotent Processing. Each claim update must be idempotent — safe to execute more than once. If the program fails between updating a claim and committing, the claim may already be updated when the restart processes it again. The program checks the current status before updating:

      *    Skip already-adjudicated claims (idempotent processing)
           IF WS-CLAIM-STATUS NOT = 'SUBMITTED'
               DISPLAY 'Skipping ' WS-CLAIM-NUMBER
                   ' — already ' WS-CLAIM-STATUS
               EXIT PARAGRAPH
           END-IF
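Putting the three components together, here is a hedged Python sketch of the overall loop, again with SQLite standing in for DB2; the table and column names echo the COBOL fragments above, but the schema is simplified and hypothetical:

```python
def run_adjudication(conn, commit_interval=500):
    """Checkpoint-restartable batch loop: read the last committed key,
    skip past it, process idempotently, checkpoint before each COMMIT."""
    row = conn.execute(
        "SELECT last_key FROM batch_checkpoint WHERE job_name='CLMADJUD'"
    ).fetchone()
    last_key = row[0] if row else ''          # '' means a fresh run

    claims = conn.execute(
        "SELECT claim_number, claim_status FROM claim "
        "WHERE claim_status='SUBMITTED' AND claim_number > ? "
        "ORDER BY claim_number", (last_key,)).fetchall()

    processed = 0
    for claim_number, status in claims:
        if status != 'SUBMITTED':             # idempotent: skip reworked rows
            continue
        conn.execute("UPDATE claim SET claim_status='ADJUDICATED' "
                     "WHERE claim_number = ?", (claim_number,))
        processed += 1
        if processed % commit_interval == 0:
            conn.execute("UPDATE batch_checkpoint SET last_key = ? "
                         "WHERE job_name='CLMADJUD'", (claim_number,))
            conn.commit()                     # checkpoint travels with the data
    conn.commit()
    return processed
```

The essential property is that the checkpoint update and the claim updates commit in the same unit of work, so after any failure the checkpoint key is never ahead of the committed data.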

⚖️ Design Decision — Restartability vs. Simplicity: Making a program restartable adds complexity (checkpoint logic, restart detection, idempotent processing). James Okafor's rule: "If the batch window is generous and the volume is small, restartability is optional — just rerun from the beginning. If the batch window is tight or the volume is large, restartability is mandatory. Our adjudication program falls squarely in the mandatory category."

Monitoring DB2 Resource Consumption

Tomás tracks several key metrics for every production COBOL-DB2 program. Here is a summary of what he monitors and what healthy ranges look like:

| Metric | How to Measure | Healthy Range | Red Flag |
| --- | --- | --- | --- |
| SQL calls per unit of work | DB2 trace / accounting | 1-10 for online, varies for batch | > 100 per transaction |
| Elapsed time per SQL call | DB2 performance monitor | < 1 ms for indexed reads | > 10 ms average |
| Getpages (buffer pool reads) | DB2 accounting trace | Low relative to rows returned | Getpages >> rows returned |
| Lock escalations | DB2 statistics | 0 | Any occurrence |
| Timeouts / deadlocks | SQLCODE -911, -913 | < 0.1% of transactions | > 1% |
| Log records per COMMIT | DB2 accounting | Proportional to updates | Extremely high (COMMITs too infrequent) |

28.12 Putting It All Together: A Defensive DB2 Programming Checklist

Before we move on to CICS programming in the next chapter, let us consolidate the defensive programming practices for DB2:

The Production DB2 Checklist

| Practice | Why It Matters |
| --- | --- |
| Check SQLCODE after every SQL statement | Unchecked errors cause silent data corruption |
| Use indicator variables for all nullable columns | Prevents -305 errors on NULL values |
| PREPARE once, EXECUTE many | Avoids optimizer overhead in loops |
| Validate dynamic SQL inputs | Prevents SQL injection |
| Use parameter markers instead of string concatenation | Improves security and performance |
| Declare cursors WITH HOLD in commit-loop programs | Prevents cursor closure on COMMIT |
| Implement deadlock retry logic (with a retry limit) | Handles -911 gracefully |
| Use appropriate isolation levels | Balances concurrency and consistency |
| COMMIT periodically in batch programs | Releases locks, enables restart |
| Use multi-row FETCH for batch processing | Reduces SQL call overhead |
| Use JOINs instead of row-by-row lookups | Eliminates the N+1 query problem |
| Review EXPLAIN output before production deployment | Catches tablespace scans early |
| Save the last-committed key for restart | Enables batch restart after failure |
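One checklist item deserves a concrete shape: deadlock retry with a limit. A minimal Python sketch of the pattern, where `DeadlockError` is a hypothetical stand-in for SQLCODE -911 (DB2 has already rolled back the victim's work, so re-driving the unit of work is safe):

```python
import time

class DeadlockError(Exception):
    """Stand-in for SQLCODE -911: deadlock/timeout victim, work rolled back."""

def with_deadlock_retry(execute_transaction, max_retries=3, backoff=0.1):
    """Re-drive a unit of work after a deadlock, up to max_retries times."""
    for attempt in range(1, max_retries + 1):
        try:
            return execute_transaction()
        except DeadlockError:
            if attempt == max_retries:
                raise                      # give up: surface the -911
            time.sleep(backoff * attempt)  # brief, growing pause before retry
```

The retry limit matters as much as the retry itself: an unbounded loop can hide a systematic locking problem behind endless re-execution.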

Maria's Rule: "Every COBOL-DB2 program should be reviewed for three things before it goes to production: error handling, commit strategy, and access paths. Get those three right, and you have a program that will run reliably for years."

28.13 Advanced Topics: Brief Survey

Several advanced DB2 topics deserve mention even though full coverage would require chapters of their own:

Temporal Tables

DB2 temporal tables maintain a history of all changes automatically. Queries can include FOR SYSTEM_TIME AS OF to read data as it existed at a specific point in time:

           EXEC SQL
               SELECT ACCT_BALANCE
               INTO :WS-HIST-BALANCE
               FROM ACCOUNT
               FOR SYSTEM_TIME AS OF '2024-06-30-23.59.59'
               WHERE ACCT_NUMBER = :WS-ACCT-NUM
           END-EXEC

Global Temporary Tables

For complex processing that requires intermediate result storage, DB2 provides Declared Global Temporary Tables (DGTTs). These tables exist only for the duration of your session and are visible only to your task — no locking conflicts with other users.

           EXEC SQL
               DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP_CALC
                   (ACCT_NUMBER    CHAR(10),
                    CALC_BALANCE   DECIMAL(13,2),
                    CALC_FLAG      CHAR(1))
                   ON COMMIT PRESERVE ROWS
           END-EXEC

The ON COMMIT PRESERVE ROWS option keeps the data across COMMITs within the same session. Without it, a COMMIT would empty the table. DGTTs are invaluable for multi-pass processing — first populate the temporary table with intermediate results, then join it with other tables for final processing:

      *    Step 1: Populate temp table with 30 days of interest
      *    at a 4.25% annual rate
           EXEC SQL
               INSERT INTO SESSION.TEMP_CALC
                   SELECT ACCT_NUMBER,
                          ACCT_BALANCE * 0.0425 / 365 * 30,
                          CASE WHEN ACCT_BALANCE > 100000
                               THEN 'H' ELSE 'L' END
                   FROM ACCOUNT
                   WHERE ACCT_STATUS = 'A'
                   AND ACCT_TYPE = 'SA'
           END-EXEC

      *    Step 2: Use temp table in a join for the final report
           EXEC SQL
               DECLARE INTEREST-REPORT CURSOR FOR
                   SELECT A.ACCT_NUMBER, A.ACCT_NAME,
                          A.ACCT_BALANCE, T.CALC_BALANCE,
                          T.CALC_FLAG
                   FROM ACCOUNT A
                   JOIN SESSION.TEMP_CALC T
                       ON A.ACCT_NUMBER = T.ACCT_NUMBER
                   ORDER BY T.CALC_FLAG, T.CALC_BALANCE DESC
           END-EXEC

💡 When to Use DGTTs vs. Working-Storage Tables: If the intermediate result set fits comfortably in memory (a few hundred rows), use a COBOL working-storage table with SEARCH or binary lookup. If the result set is large (thousands of rows) or you need to join it with other DB2 tables, use a DGTT. The DGTT approach lets DB2's optimizer handle the join, which is almost always more efficient than row-by-row lookup in COBOL.

MERGE Statement

The MERGE statement combines INSERT and UPDATE logic, eliminating the common pattern of "try UPDATE, if no rows affected then INSERT":

           EXEC SQL
               MERGE INTO ACCOUNT_SUMMARY AS T
               USING (SELECT ACCT_NUMBER,
                             SUM(TXN_AMOUNT) AS TOTAL
                      FROM TXN_HISTORY
                      WHERE TXN_DATE = CURRENT DATE
                      GROUP BY ACCT_NUMBER) AS S
               ON T.ACCT_NUMBER = S.ACCT_NUMBER
               WHEN MATCHED THEN
                   UPDATE SET T.DAILY_TOTAL = S.TOTAL
               WHEN NOT MATCHED THEN
                   INSERT (ACCT_NUMBER, DAILY_TOTAL)
                   VALUES (S.ACCT_NUMBER, S.TOTAL)
           END-EXEC

Common Table Expressions (CTEs)

Common Table Expressions — the WITH clause — let you define named subqueries that make complex SQL more readable and maintainable:

           EXEC SQL
               DECLARE CTE-CURSOR CURSOR FOR
               WITH HIGH_VALUE_ACCTS AS (
                   SELECT ACCT_NUMBER, ACCT_BALANCE,
                          BRANCH_CODE
                   FROM ACCOUNT
                   WHERE ACCT_BALANCE > 100000
                   AND ACCT_STATUS = 'A'
               ),
               BRANCH_TOTALS AS (
                   SELECT BRANCH_CODE,
                          COUNT(*) AS ACCT_COUNT,
                          SUM(ACCT_BALANCE) AS TOTAL_BAL
                   FROM HIGH_VALUE_ACCTS
                   GROUP BY BRANCH_CODE
               )
               SELECT B.BRANCH_CODE, B.ACCT_COUNT,
                      B.TOTAL_BAL, BR.BRANCH_NAME
               FROM BRANCH_TOTALS B
               JOIN BRANCH BR
                   ON B.BRANCH_CODE = BR.BRANCH_CODE
               ORDER BY B.TOTAL_BAL DESC
           END-EXEC

CTEs are particularly useful for recursive queries (traversing hierarchical data like organizational structures) and for breaking complex queries into logical, testable steps. The optimizer treats CTEs as inline views and can often merge them for better performance than equivalent subqueries.
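To make the recursive case concrete, here is a small sketch run against SQLite rather than DB2 (SQLite spells the clause WITH RECURSIVE, while DB2 for z/OS omits the RECURSIVE keyword); the ORG table and its columns are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE org (emp_id TEXT PRIMARY KEY, mgr_id TEXT, emp_name TEXT)")
conn.executemany("INSERT INTO org VALUES (?, ?, ?)", [
    ('E1', None, 'CEO'),
    ('E2', 'E1', 'VP Ops'),
    ('E3', 'E2', 'Manager'),
    ('E4', 'E3', 'Analyst'),
])

# Walk the reporting chain downward from the CEO, carrying a depth counter.
rows = conn.execute("""
    WITH RECURSIVE chain (emp_id, emp_name, depth) AS (
        SELECT emp_id, emp_name, 0 FROM org WHERE mgr_id IS NULL
        UNION ALL
        SELECT o.emp_id, o.emp_name, c.depth + 1
        FROM org o JOIN chain c ON o.mgr_id = c.emp_id
    )
    SELECT emp_name, depth FROM chain ORDER BY depth
""").fetchall()
print(rows)  # [('CEO', 0), ('VP Ops', 1), ('Manager', 2), ('Analyst', 3)]
```

The seed SELECT supplies the starting rows and the UNION ALL branch joins each level to the next; the same shape, minus the RECURSIVE keyword, works in an EXEC SQL cursor declaration.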

Summary

This chapter has taken you well beyond basic embedded SQL into the techniques that production COBOL-DB2 programmers rely on daily. You have learned to construct dynamic SQL safely with parameter markers, navigate result sets in any direction with scrollable cursors, encapsulate business logic in stored procedures, process data efficiently with multi-row operations, and tune performance by understanding access paths and isolation levels.

The defensive programming theme runs throughout: check every SQLCODE, use indicator variables, validate dynamic SQL inputs, implement deadlock retry logic, and commit periodically in batch programs. These practices separate a program that works in testing from one that runs reliably in production for decades.

The performance tuning skills you developed in this chapter — understanding EXPLAIN output, designing indexes for your query patterns, choosing appropriate isolation levels, and implementing efficient COMMIT strategies — are among the most valued skills in production mainframe shops. A well-tuned COBOL-DB2 program can process millions of rows per hour while holding minimal locks and consuming modest CPU. A poorly tuned program doing the same work can run for hours, lock out other users, and consume more CPU than the rest of the batch window combined.

In the next chapter, we turn from DB2 to CICS — the transaction processing system that brings COBOL applications online. Many of the DB2 techniques you learned here will appear in a new context as we build interactive transactions that respond to user input in real time. The singleton SELECT, the indicator variable pattern, and the deadlock retry logic will all reappear — adapted for the unique constraints of an online, pseudo-conversational environment.


"I've seen COBOL-DB2 programs that have run in production for 30 years without modification. That doesn't happen by accident — it happens because someone took the time to handle every error, tune every access path, and think through every concurrent access scenario." — Tomás Rivera, DBA, MedClaim Health Services