Chapter 32: Embedded SQL for COBOL and C — Static SQL, Cursors, and the Bind Process

In This Chapter
- 32.1 Why Embedded SQL Still Matters
- 32.2 The Embedded SQL Lifecycle
- 32.3 Host Variables
- 32.4 Basic Embedded SQL Statements
- 32.5 The SQLCA — Communicating with DB2
- 32.6 Cursors — Processing Multiple Rows
- 32.7 Multi-Row FETCH and INSERT
- 32.8 Static vs Dynamic SQL
- 32.9 The BIND Process
- 32.10 Embedded SQL in C
- 32.11 Meridian Bank COBOL Batch Programs
- Spaced Review: Connecting to Earlier Chapters
- Debugging Embedded SQL Programs
- Summary
"Every night, when the trading floors go dark and the branches close their doors, COBOL programs wake up. They post millions of transactions, calculate interest on every account, generate statements, and reconcile ledgers. They have done this reliably for decades — and embedded SQL is how they talk to DB2." — Senior Systems Programmer, large commercial bank
32.1 Why Embedded SQL Still Matters
Walk into any large bank, insurance company, or government agency that runs IBM z/OS, and you will find COBOL programs with embedded SQL at the heart of their batch processing infrastructure. This is not legacy stubbornness — it is engineering pragmatism. Let us understand why.
The Scale of Mainframe Batch Processing
Consider the nightly batch cycle at a bank the size of Meridian National Bank. Between 10 PM and 6 AM, the system must:
- Post every debit and credit transaction from the day (potentially millions of rows)
- Calculate daily interest accrual on every savings, checking, and loan account
- Generate regulatory reports (Call Reports, HMDA, CRA data)
- Run anti-money-laundering screening
- Produce customer statements at month-end
- Reconcile general ledger balances
These jobs do not run one at a time. They run in orchestrated parallel streams coordinated by a job scheduler like CA-7 or TWS (Tivoli Workload Scheduler). Each individual program might process anywhere from thousands to hundreds of millions of rows. When you multiply this across an entire financial institution, you begin to understand why performance is not optional — it is existential. If the batch window overruns, the bank cannot open in the morning.
The Static SQL Advantage
Embedded SQL programs that use static SQL have a profound performance advantage over dynamic alternatives in many batch scenarios. Here is why:
- Access paths are determined at BIND time. The DB2 optimizer evaluates the SQL statements during the BIND process, not at runtime. The resulting access plan is stored in the DB2 catalog. When the program runs, DB2 does not need to parse, validate, or optimize the SQL — it simply executes the pre-determined plan.
- Authorization is checked at BIND time. The user who runs the program does not need direct table privileges. They need only EXECUTE authority on the plan or package. This simplifies security management enormously.
- CPU cost is lower. Eliminating runtime SQL preparation saves CPU cycles on every execution. On z/OS, where CPU is measured in MSUs (Millions of Service Units) and directly affects your monthly software bill, this matters financially.
- Predictable performance. Because the access path is fixed, you know exactly how DB2 will execute each statement. There are no surprises from the optimizer choosing a different plan based on runtime statistics.
- Reduced catalog contention. Static SQL does not require runtime access to the DB2 catalog for statement preparation, reducing contention on catalog tables.
Who Uses Embedded SQL Today?
Every major bank, insurance company, airline, and government agency running DB2 on z/OS uses embedded SQL. According to industry surveys, there are over 200 billion lines of COBOL code in active production globally. A significant percentage of that code contains embedded SQL statements.
At Meridian National Bank, embedded SQL COBOL programs handle:
- End-of-day transaction posting — PROG-TXN-POST
- Interest calculation — PROG-INT-CALC
- Statement generation — PROG-STMT-GEN
- Loan amortization — PROG-LOAN-AMORT
- Regulatory reporting — PROG-REG-RPT
These programs have been refined over years. They are well-understood, well-tested, and they perform. Replacing them with a Java microservice would not improve anything — it would introduce risk and likely degrade throughput.
32.2 The Embedded SQL Lifecycle
Understanding how an embedded SQL program goes from source code to executing against DB2 is fundamental. The process involves several distinct steps, each producing specific artifacts.
Step 1: Write the Source Code
You write a COBOL (or C) program with embedded SQL statements delimited by EXEC SQL and END-EXEC. The source file typically has a .sqb extension for COBOL (some shops use .cbl or .cob) and .sqc for C.
IDENTIFICATION DIVISION.
PROGRAM-ID. ACCTINQ.
DATA DIVISION.
WORKING-STORAGE SECTION.
EXEC SQL INCLUDE SQLCA END-EXEC.
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 WS-ACCT-NUM PIC X(10).
01 WS-ACCT-NAME PIC X(40).
01 WS-BALANCE PIC S9(13)V99 COMP-3.
01 WS-BALANCE-IND PIC S9(4) COMP.
EXEC SQL END DECLARE SECTION END-EXEC.
PROCEDURE DIVISION.
EXEC SQL
SELECT ACCOUNT_NAME, BALANCE
INTO :WS-ACCT-NAME, :WS-BALANCE :WS-BALANCE-IND
FROM MERIDIAN.ACCOUNTS
WHERE ACCOUNT_NUM = :WS-ACCT-NUM
END-EXEC.
Step 2: Precompile (DB2 Precompiler)
The DB2 precompiler (DSNHPC on z/OS) reads your source file and does two things:
1. Extracts all SQL statements and places them into a DBRM (Database Request Module). The DBRM is a dataset that contains the SQL text, host variable information, and statement numbers.
2. Replaces each SQL statement in the source code with COBOL CALL statements that invoke DB2's runtime interface (the language interface module, or attachment facility). The resulting file is a pure COBOL program with no SQL — just calls to DB2 runtime routines.
The precompiler also validates SQL syntax and host variable declarations at this stage. If your SQL is malformed or your host variables do not match, you get errors here — before the COBOL compiler ever sees the code.
Precompiler JCL (simplified):
//PRECOMP EXEC PGM=DSNHPC,PARM='HOST(COB2),SOURCE,XREF'
//STEPLIB DD DSN=DSNC10.SDSNLOAD,DISP=SHR
//DBRMLIB DD DSN=USER.DBRMLIB(ACCTINQ),DISP=SHR
//SYSCIN DD DSN=USER.COBOL.SRC(ACCTINQ),DISP=SHR
//SYSLIB DD DSN=USER.COPYLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN DD DSN=USER.SQBSRC(ACCTINQ),DISP=SHR
Step 3: Compile the Modified Source
The modified COBOL source (with SQL replaced by CALLs) is compiled with the standard COBOL compiler (Enterprise COBOL on z/OS). This produces an object module — standard COBOL compilation, nothing DB2-specific.
Step 4: Link-Edit
The object module is link-edited (linked) with the DB2 language interface module (DSNELI for batch, DSNALI for CICS, DSNRLI for IMS). This produces the load module — the executable program.
Step 5: BIND
The DBRM produced in Step 2 is bound into a plan or package using the BIND command. During BIND, DB2:
- Validates all SQL statements against the current catalog (tables exist, columns match, data types are compatible)
- Checks authorization (the binder or plan owner must have appropriate privileges)
- Optimizes every SQL statement — choosing access paths, index usage, join methods
- Stores the resulting access plan in the DB2 catalog (SYSPLAN/SYSPACKAGE tables)
BIND PLAN(ACCTINQ) -
MEMBER(ACCTINQ) -
LIBRARY('USER.DBRMLIB') -
ACTION(REPLACE) -
ISOLATION(CS) -
VALIDATE(BIND) -
EXPLAIN(YES)
The Complete Pipeline
Source Code (.sqb)
│
▼
┌──────────┐
│Precompile│──────► DBRM
└──────────┘ │
│ │
▼ ▼
Modified Source ┌──────┐
(pure COBOL) │ BIND │──► Plan/Package
│ └──────┘ (in DB2 catalog)
▼
┌─────────┐
│ Compile │
└─────────┘
│
▼
Object Module
│
▼
┌───────────┐
│ Link-Edit │──► Load Module
└───────────┘ (executable)
At runtime, the load module and the plan/package come together. The load module contains the program logic and DB2 calls; the plan/package contains the optimized access paths. DB2 matches them by statement number.
Plans vs Packages
Plans were the original binding mechanism. A plan could contain SQL from one or more DBRMs directly. The entire plan had to be rebound if any DBRM changed.
Packages (introduced in DB2 V2.3) are more granular. Each DBRM is bound into its own package. A plan then references a collection of packages. If one DBRM changes, you rebind only that package — not the entire plan.
Modern practice strongly favors packages. A plan typically contains a package list rather than directly bound DBRMs:
BIND PACKAGE(COLLECTION1) -
MEMBER(ACCTINQ) -
LIBRARY('USER.DBRMLIB') -
ACTION(REPLACE)
BIND PLAN(MERIDIAN_BATCH) -
PKLIST(COLLECTION1.*) -
ACTION(REPLACE)
This approach allows independent maintenance of individual programs without rebinding the entire plan.
32.3 Host Variables
Host variables are the bridge between your COBOL (or C) program and DB2. They carry data in both directions: input values for WHERE clauses and INSERT/UPDATE statements, and output values from SELECT and FETCH operations.
Declaring Host Variables
Host variables must be declared between EXEC SQL BEGIN DECLARE SECTION and EXEC SQL END DECLARE SECTION:
WORKING-STORAGE SECTION.
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 HV-ACCT-NUM PIC X(10).
01 HV-ACCT-NAME PIC X(40).
01 HV-BALANCE PIC S9(13)V99 COMP-3.
01 HV-OPEN-DATE PIC X(10).
01 HV-STATUS-CODE PIC X(1).
01 HV-INTEREST-RATE PIC S9(3)V9(6) COMP-3.
01 HV-BRANCH-ID PIC S9(9) COMP.
* Indicator variables
01 IND-ACCT-NAME PIC S9(4) COMP.
01 IND-BALANCE PIC S9(4) COMP.
01 IND-OPEN-DATE PIC S9(4) COMP.
01 IND-INTEREST-RATE PIC S9(4) COMP.
EXEC SQL END DECLARE SECTION END-EXEC.
COBOL-to-DB2 Data Type Mapping
Understanding the correspondence between COBOL data types and DB2 column types is critical. Mismatches cause data truncation, conversion errors, or incorrect results.
| DB2 Column Type | COBOL Host Variable | Notes |
|---|---|---|
| SMALLINT | PIC S9(4) COMP | 2-byte binary |
| INTEGER | PIC S9(9) COMP | 4-byte binary |
| BIGINT | PIC S9(18) COMP | 8-byte binary |
| DECIMAL(p,s) | PIC S9(p-s)V9(s) COMP-3 | Packed decimal |
| CHAR(n) | PIC X(n) | Fixed-length character |
| VARCHAR(n) | See below | Requires 49-level structure |
| DATE | PIC X(10) | Format: YYYY-MM-DD |
| TIMESTAMP | PIC X(26) | Format: YYYY-MM-DD-HH.MM.SS.NNNNNN |
| FLOAT | COMP-1 (single) / COMP-2 (double) | Floating point |
VARCHAR Host Variables
VARCHAR columns require a special two-part structure in COBOL:
01 HV-DESCRIPTION.
49 HV-DESC-LEN PIC S9(4) COMP.
49 HV-DESC-TEXT PIC X(200).
The 49-level items are required; the precompiler recognizes this two-part structure as a VARCHAR host variable. The first field holds the actual length of the data; the second holds the data itself. DB2 uses the length field to know how many bytes are significant. When fetching, DB2 sets the length field to the actual length of the retrieved data. When inserting or updating, you must set the length field before executing the SQL statement.
Indicator Variables
Indicator variables handle NULL values — arguably the trickiest aspect of embedded SQL programming. Every nullable column should have an associated indicator variable:
EXEC SQL
SELECT ACCOUNT_NAME, BALANCE, INTEREST_RATE
INTO :HV-ACCT-NAME :IND-ACCT-NAME,
:HV-BALANCE :IND-BALANCE,
:HV-INTEREST-RATE :IND-INTEREST-RATE
FROM MERIDIAN.ACCOUNTS
WHERE ACCOUNT_NUM = :HV-ACCT-NUM
END-EXEC.
Indicator variable values on FETCH/SELECT:
| Value | Meaning |
|---|---|
| 0 | Column is not NULL; host variable contains valid data |
| -1 | Column is NULL; host variable content is undefined |
| -2 | Column is NULL due to numeric conversion error |
| > 0 | Column was truncated; indicator value = original length |
Using indicators for INSERT/UPDATE with NULLs:
* Set the interest rate to NULL
MOVE -1 TO IND-INTEREST-RATE.
EXEC SQL
UPDATE MERIDIAN.ACCOUNTS
SET INTEREST_RATE = :HV-INTEREST-RATE :IND-INTEREST-RATE
WHERE ACCOUNT_NUM = :HV-ACCT-NUM
END-EXEC.
When the indicator is -1, DB2 ignores the host variable value and stores NULL.
Null Indicator Arrays
When working with multiple columns, you can declare an indicator array:
01 IND-ARRAY.
02 IND-VALUES PIC S9(4) COMP OCCURS 10 TIMES.
This is commonly used with multi-row FETCH operations.
32.4 Basic Embedded SQL Statements
SELECT INTO — Single Row Retrieval
SELECT INTO retrieves exactly one row into host variables. If the query returns zero rows, SQLCODE is +100. If it returns more than one row, SQLCODE is -811.
EXEC SQL
SELECT ACCT_NAME, BALANCE, ACCT_TYPE, OPEN_DATE
INTO :HV-ACCT-NAME :IND-ACCT-NAME,
:HV-BALANCE :IND-BALANCE,
:HV-ACCT-TYPE,
:HV-OPEN-DATE :IND-OPEN-DATE
FROM MERIDIAN.ACCOUNTS
WHERE ACCOUNT_NUM = :HV-ACCT-NUM
END-EXEC.
EVALUATE SQLCODE
WHEN 0
PERFORM PROCESS-ACCOUNT
WHEN +100
DISPLAY 'ACCOUNT NOT FOUND: ' HV-ACCT-NUM
WHEN OTHER
PERFORM SQL-ERROR-HANDLER
END-EVALUATE.
INSERT
MOVE 'ACC0001234' TO HV-ACCT-NUM.
MOVE 'SMITH, JOHN Q' TO HV-ACCT-NAME.
MOVE 10000.00 TO HV-BALANCE.
MOVE 'CHK' TO HV-ACCT-TYPE.
EXEC SQL
INSERT INTO MERIDIAN.ACCOUNTS
(ACCOUNT_NUM, ACCT_NAME, BALANCE, ACCT_TYPE,
OPEN_DATE)
VALUES
(:HV-ACCT-NUM, :HV-ACCT-NAME, :HV-BALANCE,
:HV-ACCT-TYPE, CURRENT DATE)
END-EXEC.
IF SQLCODE = 0
DISPLAY 'ACCOUNT CREATED SUCCESSFULLY'
ELSE IF SQLCODE = -803
DISPLAY 'DUPLICATE ACCOUNT NUMBER'
ELSE
PERFORM SQL-ERROR-HANDLER
END-IF.
UPDATE
EXEC SQL
UPDATE MERIDIAN.ACCOUNTS
SET BALANCE = BALANCE + :HV-TXN-AMOUNT
WHERE ACCOUNT_NUM = :HV-ACCT-NUM
END-EXEC.
IF SQLCODE = 0
DISPLAY 'ROWS UPDATED: ' SQLERRD(3)
ELSE
PERFORM SQL-ERROR-HANDLER
END-IF.
Note: SQLERRD(3) contains the number of rows affected by INSERT, UPDATE, or DELETE.
DELETE
EXEC SQL
DELETE FROM MERIDIAN.TRANSACTION_LOG
WHERE TXN_DATE < :HV-CUTOFF-DATE
AND ARCHIVED = 'Y'
END-EXEC.
DISPLAY 'ROWS DELETED: ' SQLERRD(3).
WHENEVER — Declarative Error Handling
The EXEC SQL WHENEVER statement sets up automatic branching on specific conditions. It is a precompiler directive that affects all subsequent SQL statements until overridden:
* Branch to error handler on any SQL error
EXEC SQL WHENEVER SQLERROR GO TO SQL-ERROR-PARA END-EXEC.
* Branch when no more rows found
EXEC SQL WHENEVER NOT FOUND GO TO END-OF-DATA END-EXEC.
* Display a warning but continue
EXEC SQL WHENEVER SQLWARNING CONTINUE END-EXEC.
Conditions:
- SQLERROR — SQLCODE < 0
- NOT FOUND — SQLCODE = +100
- SQLWARNING — SQLWARN0 = 'W'
Actions:
- GO TO label — Branch to the specified paragraph
- CONTINUE — Take no automatic action; let the program continue
Caution: WHENEVER is positional in the source code, not scoped by paragraph or section. Once you code WHENEVER SQLERROR GO TO ERROR-PARA, every subsequent SQL statement in the source will branch there on error — until you code another WHENEVER SQLERROR directive. This can cause unexpected branching if you are not careful about placement.
A common pattern is to set up WHENEVER at the top of the program and override it locally when you need different behavior:
* Default error handling
EXEC SQL WHENEVER SQLERROR GO TO SQL-ERROR-PARA END-EXEC.
EXEC SQL WHENEVER NOT FOUND CONTINUE END-EXEC.
* ... normal processing ...
* Override for a specific section where NOT FOUND is expected
EXEC SQL WHENEVER NOT FOUND GO TO NO-MORE-ROWS END-EXEC.
* ... cursor fetch loop ...
* Restore default
EXEC SQL WHENEVER NOT FOUND CONTINUE END-EXEC.
32.5 The SQLCA — Communicating with DB2
The SQL Communications Area (SQLCA) is a data structure that DB2 populates after every SQL statement execution. It is your program's window into what happened — success, failure, warnings, and diagnostics.
Including the SQLCA
WORKING-STORAGE SECTION.
EXEC SQL INCLUDE SQLCA END-EXEC.
This expands to a COBOL data structure with these key fields:
01 SQLCA.
05 SQLCAID PIC X(8).
05 SQLCABC PIC S9(9) COMP.
05 SQLCODE PIC S9(9) COMP.
05 SQLERRM.
49 SQLERRML PIC S9(4) COMP.
49 SQLERRMC PIC X(70).
05 SQLERRP PIC X(8).
05 SQLERRD PIC S9(9) COMP OCCURS 6 TIMES.
05 SQLWARN.
10 SQLWARN0 PIC X.
10 SQLWARN1 PIC X.
10 SQLWARN2 PIC X.
10 SQLWARN3 PIC X.
10 SQLWARN4 PIC X.
10 SQLWARN5 PIC X.
10 SQLWARN6 PIC X.
10 SQLWARN7 PIC X.
10 SQLWARN8 PIC X.
10 SQLWARN9 PIC X.
10 SQLWARNA PIC X.
05 SQLSTATE PIC X(5).
SQLCODE — The Primary Return Code
SQLCODE is the most important field. Every embedded SQL programmer memorizes the critical values:
| SQLCODE | Meaning |
|---|---|
| 0 | Successful execution |
| +100 | No row found (SELECT INTO, FETCH) or no more rows |
| -803 | Duplicate key on INSERT (unique index violation) |
| -805 | DBRM/package not found in plan |
| -811 | SELECT INTO returned more than one row |
| -818 | Timestamp mismatch between DBRM and plan (need to rebind) |
| -904 | Resource unavailable (tablespace, index in restricted state) |
| -911 | Deadlock or timeout; current unit of work rolled back |
| -913 | Deadlock or timeout; only current statement rolled back |
| -922 | Authorization failure |
| -927 | Language interface not available (DB2 connection issue) |
| -180 | Invalid date/time/timestamp value |
| -181 | Invalid date/time/timestamp string |
| -305 | Indicator variable required but not supplied (NULL fetched) |
| -530 | Referential integrity violation (parent key not found) |
| -532 | Referential integrity violation (dependent rows exist) |
SQLERRD Array
The SQLERRD array provides six integer values with diagnostic information:
| Element | Meaning |
|---|---|
| SQLERRD(1) | Reserved |
| SQLERRD(2) | Reserved |
| SQLERRD(3) | Number of rows affected by INSERT, UPDATE, DELETE; or number of rows fetched |
| SQLERRD(4) | Estimated cost (set during PREPARE for dynamic SQL) |
| SQLERRD(5) | Position of error in SQL statement text |
| SQLERRD(6) | Internal DB2 diagnostic code |
SQLWARN Flags
| Flag | Meaning |
|---|---|
| SQLWARN0 | 'W' if any other SQLWARN flag is set |
| SQLWARN1 | 'W' if a string column was truncated on retrieval |
| SQLWARN2 | 'W' if NULL values were eliminated from a function |
| SQLWARN3 | 'W' if number of host variables differs from result columns |
| SQLWARN4 | 'W' if a prepared UPDATE or DELETE has no WHERE clause |
| SQLWARN5 | 'W' if SQL statement is not valid for the SQL dialect |
| SQLWARN6 | 'W' if date arithmetic results in end-of-month adjustment |
| SQLWARN7 | Reserved |
SQLSTATE — The Portable Alternative
SQLSTATE is a 5-character code defined by the SQL standard. The first two characters indicate the class; the last three indicate the subclass. SQLSTATE is more portable across databases than SQLCODE.
Common SQLSTATE values:
| SQLSTATE | Class | Meaning |
|---|---|---|
| 00000 | Success | Successful completion |
| 01xxx | Warning | Various warnings |
| 02000 | No Data | No row found (equivalent to SQLCODE +100) |
| 23xxx | Constraint | Integrity constraint violation |
| 40001 | Rollback | Deadlock-related rollback |
| 42xxx | Syntax/Access | Syntax error or access rule violation |
A Robust Error Handler
SQL-ERROR-HANDLER.
DISPLAY '*** SQL ERROR ***'.
DISPLAY 'SQLCODE : ' SQLCODE.
DISPLAY 'SQLSTATE : ' SQLSTATE.
DISPLAY 'SQLERRMC : ' SQLERRMC.
DISPLAY 'SQLERRD(3): ' SQLERRD(3).
DISPLAY 'SQLERRD(5): ' SQLERRD(5).
EVALUATE TRUE
WHEN SQLCODE = -911 OR SQLCODE = -913
DISPLAY 'DEADLOCK/TIMEOUT - ROLLING BACK'
EXEC SQL ROLLBACK END-EXEC
PERFORM RETRY-LOGIC
WHEN SQLCODE = -904
DISPLAY 'RESOURCE UNAVAILABLE - RETRYING'
PERFORM WAIT-AND-RETRY
WHEN SQLCODE = -818
DISPLAY 'TIMESTAMP MISMATCH - REBIND NEEDED'
MOVE 16 TO RETURN-CODE
STOP RUN
WHEN OTHER
DISPLAY 'UNRECOVERABLE SQL ERROR'
EXEC SQL ROLLBACK END-EXEC
MOVE 16 TO RETURN-CODE
STOP RUN
END-EVALUATE.
32.6 Cursors — Processing Multiple Rows
A SELECT INTO can only return one row. When your query returns multiple rows — which is the common case in batch processing — you need a cursor. A cursor is essentially a pointer into a result set that you advance one row (or a block of rows) at a time.
The Cursor Lifecycle
DECLARE → OPEN → FETCH (loop) → CLOSE
DECLARE CURSOR
The DECLARE statement defines the cursor's SQL query. It does not execute anything — it is a compile-time declaration:
EXEC SQL
DECLARE CSR-ACTIVE-ACCTS CURSOR FOR
SELECT ACCOUNT_NUM, ACCT_NAME, BALANCE,
ACCT_TYPE, INTEREST_RATE
FROM MERIDIAN.ACCOUNTS
WHERE STATUS = 'ACTIVE'
AND BRANCH_ID = :HV-BRANCH-ID
ORDER BY ACCOUNT_NUM
END-EXEC.
OPEN
The OPEN statement executes the query and establishes the result set:
EXEC SQL OPEN CSR-ACTIVE-ACCTS END-EXEC.
IF SQLCODE NOT = 0
DISPLAY 'ERROR OPENING CURSOR: ' SQLCODE
PERFORM SQL-ERROR-HANDLER
END-IF.
At OPEN time, DB2 evaluates any host variable values used in the WHERE clause. If you change the host variables after OPEN, the cursor still uses the values from OPEN time.
FETCH
The FETCH statement retrieves one row at a time and advances the cursor position:
PERFORM UNTIL SQLCODE = +100
EXEC SQL
FETCH CSR-ACTIVE-ACCTS
INTO :HV-ACCT-NUM,
:HV-ACCT-NAME :IND-ACCT-NAME,
:HV-BALANCE :IND-BALANCE,
:HV-ACCT-TYPE,
:HV-INTEREST-RATE :IND-INTEREST-RATE
END-EXEC
IF SQLCODE = 0
PERFORM PROCESS-ACCOUNT-ROW
ELSE IF SQLCODE NOT = +100
PERFORM SQL-ERROR-HANDLER
END-IF
END-PERFORM.
CLOSE
Always close your cursors when done. This releases DB2 resources (locks, memory, thread storage):
EXEC SQL CLOSE CSR-ACTIVE-ACCTS END-EXEC.
If you do not explicitly close a cursor, DB2 closes it at COMMIT (unless declared WITH HOLD).
Complete Cursor Processing Pattern
Here is the standard pattern used in production COBOL programs:
PROCESS-ACCOUNTS.
* Initialize counters
MOVE 0 TO WS-ROWS-PROCESSED.
MOVE 0 TO WS-ERROR-COUNT.
* Open cursor
EXEC SQL OPEN CSR-ACTIVE-ACCTS END-EXEC.
IF SQLCODE NOT = 0
PERFORM SQL-ERROR-HANDLER
GO TO PROCESS-ACCOUNTS-EXIT
END-IF.
* Fetch loop
PERFORM FETCH-AND-PROCESS
UNTIL WS-END-OF-DATA = 'Y'.
* Close cursor
EXEC SQL CLOSE CSR-ACTIVE-ACCTS END-EXEC.
* Report results
DISPLAY 'ACCOUNTS PROCESSED: ' WS-ROWS-PROCESSED.
DISPLAY 'ERRORS: ' WS-ERROR-COUNT.
PROCESS-ACCOUNTS-EXIT.
EXIT.
FETCH-AND-PROCESS.
EXEC SQL
FETCH CSR-ACTIVE-ACCTS
INTO :HV-ACCT-NUM,
:HV-ACCT-NAME :IND-ACCT-NAME,
:HV-BALANCE :IND-BALANCE,
:HV-ACCT-TYPE,
:HV-INTEREST-RATE :IND-INTEREST-RATE
END-EXEC.
EVALUATE SQLCODE
WHEN 0
PERFORM PROCESS-SINGLE-ACCOUNT
ADD 1 TO WS-ROWS-PROCESSED
WHEN +100
MOVE 'Y' TO WS-END-OF-DATA
WHEN OTHER
ADD 1 TO WS-ERROR-COUNT
PERFORM SQL-ERROR-HANDLER
END-EVALUATE.
WITH HOLD Cursors
Normally, all cursors are closed when you issue a COMMIT. In batch processing, you often want to commit periodically (say every 1,000 rows) to release locks, but continue processing from where you left off. A WITH HOLD cursor survives a COMMIT:
EXEC SQL
DECLARE CSR-BATCH-PROC CURSOR WITH HOLD FOR
SELECT ACCOUNT_NUM, BALANCE
FROM MERIDIAN.ACCOUNTS
WHERE STATUS = 'ACTIVE'
ORDER BY ACCOUNT_NUM
END-EXEC.
With WITH HOLD, after a COMMIT, the cursor remains open and positioned on the last row fetched. The next FETCH retrieves the next row. Without WITH HOLD, a COMMIT closes the cursor and you lose your position.
Best practice for batch programs: Use WITH HOLD and commit every N rows (typically 500-5,000 depending on the operation). This limits lock escalation and allows other work to proceed.
PERFORM FETCH-AND-PROCESS
UNTIL WS-END-OF-DATA = 'Y'.
FETCH-AND-PROCESS.
EXEC SQL
FETCH CSR-BATCH-PROC
INTO :HV-ACCT-NUM, :HV-BALANCE
END-EXEC.
IF SQLCODE = +100
MOVE 'Y' TO WS-END-OF-DATA
ELSE IF SQLCODE = 0
PERFORM UPDATE-ACCOUNT
ADD 1 TO WS-COMMIT-COUNT
IF WS-COMMIT-COUNT >= 1000
EXEC SQL COMMIT END-EXEC
MOVE 0 TO WS-COMMIT-COUNT
END-IF
ELSE
PERFORM SQL-ERROR-HANDLER
END-IF.
FOR UPDATE OF — Positioned Updates and Deletes
When you need to update or delete the row currently under the cursor, declare the cursor FOR UPDATE OF:
EXEC SQL
DECLARE CSR-UPD-ACCTS CURSOR FOR
SELECT ACCOUNT_NUM, BALANCE, INTEREST_RATE
FROM MERIDIAN.ACCOUNTS
WHERE STATUS = 'ACTIVE'
AND ACCT_TYPE = 'SAV'
FOR UPDATE OF BALANCE
END-EXEC.
Then use a positioned UPDATE (or DELETE) with WHERE CURRENT OF:
EXEC SQL
FETCH CSR-UPD-ACCTS
INTO :HV-ACCT-NUM, :HV-BALANCE, :HV-INTEREST-RATE
END-EXEC.
* Calculate new balance with interest
COMPUTE HV-NEW-BALANCE =
HV-BALANCE + (HV-BALANCE * HV-INTEREST-RATE / 365).
EXEC SQL
UPDATE MERIDIAN.ACCOUNTS
SET BALANCE = :HV-NEW-BALANCE
WHERE CURRENT OF CSR-UPD-ACCTS
END-EXEC.
Positioned updates avoid the need to re-specify the WHERE clause and guarantee you are updating exactly the row you just fetched. DB2 holds an update lock on the current row, which is promoted to an exclusive lock when the UPDATE executes.
Scrollable Cursors
Standard cursors move forward only. DB2 also supports scrollable cursors that can move in any direction:
EXEC SQL
DECLARE CSR-SCROLL SCROLL CURSOR FOR
SELECT ACCOUNT_NUM, ACCT_NAME, BALANCE
FROM MERIDIAN.ACCOUNTS
ORDER BY BALANCE DESC
END-EXEC.
With a scrollable cursor, you can use:
EXEC SQL FETCH NEXT FROM CSR-SCROLL INTO ... END-EXEC.
EXEC SQL FETCH PRIOR FROM CSR-SCROLL INTO ... END-EXEC.
EXEC SQL FETCH FIRST FROM CSR-SCROLL INTO ... END-EXEC.
EXEC SQL FETCH LAST FROM CSR-SCROLL INTO ... END-EXEC.
EXEC SQL FETCH ABSOLUTE +5 FROM CSR-SCROLL INTO ... END-EXEC.
EXEC SQL FETCH RELATIVE -3 FROM CSR-SCROLL INTO ... END-EXEC.
Scrollable cursors have overhead because DB2 must materialize the result set. Use them only when you genuinely need non-sequential access.
Sensitivity Options
Cursors can be declared with sensitivity options that control whether changes made by other transactions (or the same transaction) are visible:
- INSENSITIVE — The result set is a snapshot; changes are not visible
- SENSITIVE STATIC — The result set size is fixed, but updates to existing rows are visible
- SENSITIVE DYNAMIC — Inserts, updates, and deletes by others are visible
- ASENSITIVE (default) — DB2 decides; behavior may vary
32.7 Multi-Row FETCH and INSERT
Single-row FETCH is straightforward but can be inefficient when processing large result sets. Each FETCH is a call to DB2 — and each call has overhead. Multi-row FETCH (also called rowset FETCH) retrieves multiple rows in a single call, dramatically reducing overhead.
Host Variable Arrays
To use multi-row FETCH, declare host variable arrays:
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 HV-ACCT-ARRAY.
05 HV-ACCT-NUM-ARR PIC X(10) OCCURS 100 TIMES.
01 HV-NAME-ARRAY.
05 HV-ACCT-NAME-ARR PIC X(40) OCCURS 100 TIMES.
01 HV-BAL-ARRAY.
05 HV-BALANCE-ARR PIC S9(13)V99 COMP-3
OCCURS 100 TIMES.
01 IND-BAL-ARRAY.
05 IND-BAL-ARR PIC S9(4) COMP OCCURS 100 TIMES.
01 HV-ROW-COUNT PIC S9(9) COMP.
EXEC SQL END DECLARE SECTION END-EXEC.
Multi-Row FETCH
EXEC SQL
DECLARE CSR-BULK CURSOR WITH ROWSET POSITIONING FOR
SELECT ACCOUNT_NUM, ACCT_NAME, BALANCE
FROM MERIDIAN.ACCOUNTS
WHERE BRANCH_ID = :HV-BRANCH-ID
ORDER BY ACCOUNT_NUM
END-EXEC.
EXEC SQL OPEN CSR-BULK END-EXEC.
PERFORM UNTIL SQLCODE = +100
EXEC SQL
FETCH NEXT ROWSET FROM CSR-BULK
FOR 100 ROWS
INTO :HV-ACCT-NUM-ARR,
:HV-ACCT-NAME-ARR,
:HV-BALANCE-ARR :IND-BAL-ARR
END-EXEC
IF SQLCODE = 0 OR SQLCODE = +100
MOVE SQLERRD(3) TO HV-ROW-COUNT
PERFORM PROCESS-ROWSET
ELSE
PERFORM SQL-ERROR-HANDLER
END-IF
END-PERFORM.
When SQLCODE = +100 on a multi-row FETCH, the end of the result set has been reached — but the final rowset may still contain rows (anywhere from zero up to one less than the requested count). SQLERRD(3) tells you how many rows were actually returned, so always process that count before leaving the loop.
Multi-Row INSERT
Similarly, you can insert multiple rows in a single statement:
EXEC SQL
INSERT INTO MERIDIAN.DAILY_INTEREST
(ACCOUNT_NUM, INTEREST_DATE, INTEREST_AMT)
VALUES (:HV-ACCT-NUM-ARR, :HV-INT-DATE-ARR,
:HV-INT-AMT-ARR)
FOR :HV-INSERT-COUNT ROWS
END-EXEC.
Performance Impact
Multi-row operations can improve performance by 30-70% compared to single-row operations. The improvement comes from:
- Fewer calls to DB2 — Each call has fixed overhead for thread management, authorization checking, and result formatting
- Better buffer pool utilization — DB2 can prefetch more efficiently when it knows you want multiple rows
- Reduced lock management overhead — Lock requests can be batched
- Less context switching — Fewer transitions between the application address space and the DB2 address space
The optimal rowset size depends on row width and available memory. Common values range from 100 to 1,000 rows. Testing with your specific workload is the only way to determine the best value.
32.8 Static vs Dynamic SQL
So far, we have focused on static SQL — statements that are fully known at precompile time. But there are situations where the SQL text must be constructed at runtime. This is dynamic SQL.
Static SQL Characteristics
- SQL text is fixed in the source code
- Host variables can change values, but the statement structure is constant
- Access paths determined at BIND time
- No runtime SQL parsing or optimization
- Authorization checked at BIND time
Dynamic SQL: PREPARE and EXECUTE
Dynamic SQL is built as a character string at runtime and then prepared and executed:
WORKING-STORAGE SECTION.
01 WS-SQL-TEXT PIC X(500).
PROCEDURE DIVISION.
* Build SQL dynamically
STRING 'UPDATE MERIDIAN.ACCOUNTS '
'SET STATUS = ? '
'WHERE BRANCH_ID = ?'
DELIMITED BY SIZE
INTO WS-SQL-TEXT.
* Prepare the statement
EXEC SQL
PREPARE STMT1 FROM :WS-SQL-TEXT
END-EXEC.
* Execute, supplying a value for each parameter marker
EXEC SQL
EXECUTE STMT1
USING :HV-STATUS, :HV-BRANCH-ID
END-EXEC.
Note that EXECUTE handles only statements that return no result set (INSERT, UPDATE, DELETE, and so on). A dynamic SELECT must be prepared and then processed through a cursor (see Dynamic Cursors below).
EXECUTE IMMEDIATE
For SQL that needs no output and runs once, EXECUTE IMMEDIATE combines prepare and execute:
STRING 'DELETE FROM MERIDIAN.TEMP_WORK_TABLE '
'WHERE PROCESS_ID = '''
WS-PROCESS-ID
''''
DELIMITED BY SIZE
INTO WS-SQL-TEXT.
EXEC SQL
EXECUTE IMMEDIATE :WS-SQL-TEXT
END-EXEC.
Warning: EXECUTE IMMEDIATE cannot be used with SELECT statements. Use PREPARE + DECLARE CURSOR + OPEN + FETCH for dynamic queries.
Dynamic Cursors
When the SELECT column list or table is not known until runtime:
EXEC SQL
PREPARE DYNSTMT FROM :WS-SQL-TEXT
END-EXEC.
EXEC SQL
DECLARE CSR-DYNAMIC CURSOR FOR DYNSTMT
END-EXEC.
EXEC SQL OPEN CSR-DYNAMIC
USING :HV-PARAM1, :HV-PARAM2
END-EXEC.
When to Use Each
| Scenario | Use Static | Use Dynamic |
|---|---|---|
| Fixed batch processing queries | Yes | |
| Queries with variable WHERE columns | | Yes |
| Ad hoc query tools | | Yes |
| High-volume transaction processing | Yes | |
| DDL statements (CREATE, ALTER) | | Yes (EXECUTE IMMEDIATE) |
| Known queries with changing parameters | Yes (host variables) | |
| Queries against variable tables | | Yes |
Performance Comparison
In a batch program processing millions of rows, the performance difference is significant:
- Static SQL: No PREPARE cost. The access path is pre-determined. Each execution simply runs the stored plan. CPU time per statement execution is minimized.
- Dynamic SQL with PREPARE once, EXECUTE many: The PREPARE cost is paid once. If you prepare the statement once and execute it thousands of times (with different host variable values), the overhead is amortized. This can be nearly as efficient as static SQL.
- Dynamic SQL with EXECUTE IMMEDIATE: Full prepare cost on every execution. This is the most expensive option and should be avoided in loops.
Dynamic SQL Caching
DB2 has a dynamic statement cache (controlled by MAXKEEPD and EDMSTMTC parameters). If two programs execute identical SQL text, DB2 can reuse the cached access plan. This is one reason using parameter markers (?) instead of literal values in dynamic SQL is critical — it maximizes cache hits.
32.9 The BIND Process
The BIND process is the mechanism by which DB2 converts SQL statements into executable access plans. Understanding BIND options is essential for performance tuning and operational management.
BIND PACKAGE
A package is bound from a single DBRM:
BIND PACKAGE(MERIDIAN_COLL) -
MEMBER(ACCTINQ) -
LIBRARY('MERIDIAN.DBRMLIB') -
ACTION(REPLACE) -
ISOLATION(CS) -
VALIDATE(BIND) -
EXPLAIN(YES) -
CURRENTDATA(NO) -
DEGREE(ANY) -
DYNAMICRULES(BIND) -
QUALIFIER(MERIDIAN) -
ENCODING(EBCDIC) -
RELEASE(COMMIT) -
SQLERROR(NOPACKAGE)
BIND PLAN
A plan references packages through a package list:
BIND PLAN(MERIDIAN_BATCH) -
PKLIST(MERIDIAN_COLL.*) -
ACTION(REPLACE) -
ISOLATION(CS) -
ACQUIRE(USE) -
RELEASE(COMMIT) -
CACHESIZE(4096)
Key BIND Options
ISOLATION — Controls the locking behavior:
| Level | Description | Lock Duration |
|---|---|---|
| UR | Uncommitted Read | No row locks acquired for read |
| CS | Cursor Stability (default) | Row lock released when cursor moves |
| RS | Read Stability | Row locks held until COMMIT |
| RR | Repeatable Read | Row and range locks held until COMMIT |
For batch programs that only read data, UR can dramatically improve throughput by eliminating lock contention. For programs that update data, CS is the most common choice. Use RS or RR only when the application requires it.
VALIDATE — When to check authorization and table existence:
- BIND — Check at BIND time; bind fails if objects do not exist
- RUN — Defer checking to runtime; useful when tables might not exist yet
EXPLAIN — Whether to populate the PLAN_TABLE with access path information:
- YES — Populate PLAN_TABLE (essential for performance tuning)
- NO — Do not populate
CURRENTDATA — Controls data currency for ambiguous cursors:
- YES — Always return current data (more locking overhead)
- NO — Allow block fetch optimization (better performance)
DEGREE — Parallel query processing:
- 1 — No parallelism
- ANY — Let DB2 decide based on available resources
DYNAMICRULES — Controls how dynamic SQL is processed:
- RUN — Dynamic SQL uses the authorization ID of the runtime user
- BIND — Dynamic SQL uses the authorization ID of the package owner
- DEFINEBIND / DEFINERUN / INVOKEBIND / INVOKERUN — Fine-grained control
RELEASE — When to release locks:
- COMMIT — Release at commit (default, reduces lock duration)
- DEALLOCATE — Release when the plan is deallocated (reduces repeated lock acquisition)
ACQUIRE (plan-level only):
- USE — Acquire locks when objects are first used
- ALLOCATE — Acquire all locks when the plan is allocated (reduces deadlock risk but holds locks longer)
QUALIFIER — Default qualifier for unqualified table names in the SQL. Critical for moving programs between environments (dev, test, production) without code changes.
REBIND
When statistics change, new indexes are created, or DB2 is upgraded, you should REBIND to get new access paths:
REBIND PACKAGE(MERIDIAN_COLL.ACCTINQ)
REBIND reoptimizes all SQL statements in the package using current catalog statistics but does not require the original DBRM. The existing package is replaced in place.
Free and Rebind strategies:
- REBIND — Optimizes using current statistics. If an error occurs (e.g., a table was dropped), the original package remains.
- FREE PACKAGE / BIND PACKAGE — Removes the old package and creates a new one from the DBRM. If BIND fails, the old package is gone.
Most shops schedule regular REBIND jobs after RUNSTATS to ensure access paths reflect current data distributions.
Bind in the Development Lifecycle
A typical workflow:
- Developer modifies COBOL source
- Precompile produces new DBRM
- New DBRM is bound into a package in the DEV collection
- Testing in DEV environment
- DBRM promoted to TEST collection; BIND PACKAGE in TEST
- Testing in TEST environment
- DBRM promoted to PROD collection; BIND PACKAGE in PROD
- BIND PLAN (or update package list) in PROD
The QUALIFIER option is what allows the same SQL code to reference different schemas in each environment.
32.10 Embedded SQL in C
While COBOL dominates z/OS batch processing, C is also supported for embedded SQL and is more common in distributed DB2 environments (Linux, UNIX, Windows). The concepts are identical, but the syntax differs.
Host Variable Declaration in C
EXEC SQL BEGIN DECLARE SECTION;
char hv_acct_num[11]; /* CHAR(10) + null terminator */
char hv_acct_name[41]; /* CHAR(40) + null terminator */
double hv_balance; /* FLOAT or DECIMAL */
long hv_branch_id; /* INTEGER */
short ind_balance; /* Indicator variable */
short ind_acct_name;
/* VARCHAR in C */
struct {
short len;
char data[201];
} hv_description;
EXEC SQL END DECLARE SECTION;
Key differences from COBOL:
- Null terminators: C strings need an extra byte for the null terminator
- DECIMAL precision: C does not have packed decimal. You typically use double for DECIMAL columns, which can introduce floating-point precision issues. For financial calculations, consider using char arrays and converting manually, or use the DECFLOAT data type.
- Pointer handling: You can use pointers as host variables, but they must point to valid allocated memory
- Structure mapping: C structures can map to entire rows
C Cursor Processing
EXEC SQL DECLARE csr_accounts CURSOR FOR
SELECT ACCOUNT_NUM, ACCT_NAME, BALANCE
FROM MERIDIAN.ACCOUNTS
WHERE BRANCH_ID = :hv_branch_id;
EXEC SQL OPEN csr_accounts;
while (1) {
EXEC SQL FETCH csr_accounts
INTO :hv_acct_num,
:hv_acct_name :ind_acct_name,
:hv_balance :ind_balance;
if (SQLCODE == +100) break;
if (SQLCODE < 0) {
printf("SQL Error: %d\n", SQLCODE);
break;
}
/* Process row */
process_account(hv_acct_num, hv_acct_name, hv_balance);
}
EXEC SQL CLOSE csr_accounts;
Precompile for C
On distributed platforms, the db2 prep command handles precompilation:
db2 prep acctinq.sqc bindfile target c
db2 bind acctinq.bnd
cc -c acctinq.c -I$DB2PATH/include
cc -o acctinq acctinq.o -L$DB2PATH/lib -ldb2
On z/OS, the precompiler is the same DSNHPC but with HOST(C) instead of HOST(COB2).
32.11 Meridian Bank COBOL Batch Programs
Let us bring everything together with complete COBOL programs that Meridian National Bank runs nightly.
Program 1: End-of-Day Interest Calculation (INTCALC)
This program calculates daily interest accrual for all savings and money market accounts:
IDENTIFICATION DIVISION.
PROGRAM-ID. INTCALC.
*=============================================================*
* PROGRAM: INTCALC *
* PURPOSE: CALCULATE DAILY INTEREST ACCRUAL FOR ALL *
* SAVINGS AND MONEY MARKET ACCOUNTS *
* AUTHOR: MERIDIAN NATIONAL BANK - BATCH OPERATIONS *
* DATE: 2024-01-15 *
*=============================================================*
ENVIRONMENT DIVISION.
DATA DIVISION.
WORKING-STORAGE SECTION.
EXEC SQL INCLUDE SQLCA END-EXEC.
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 HV-ACCT-NUM PIC X(10).
01 HV-BALANCE PIC S9(13)V99 COMP-3.
01 HV-INTEREST-RATE PIC S9(3)V9(6) COMP-3.
01 HV-ACCRUED-INT PIC S9(13)V99 COMP-3.
01 HV-DAILY-INT PIC S9(13)V9(6) COMP-3.
01 HV-PROCESS-DATE PIC X(10).
01 HV-ACCT-TYPE PIC X(3).
01 IND-INTEREST-RATE PIC S9(4) COMP.
01 IND-ACCRUED-INT PIC S9(4) COMP.
EXEC SQL END DECLARE SECTION END-EXEC.
01 WS-END-OF-DATA PIC X(1) VALUE 'N'.
01 WS-ROWS-PROCESSED PIC 9(9) VALUE 0.
01 WS-ROWS-UPDATED PIC 9(9) VALUE 0.
01 WS-ERROR-COUNT PIC 9(9) VALUE 0.
01 WS-COMMIT-COUNT PIC 9(9) VALUE 0.
01 WS-COMMIT-FREQ PIC 9(9) VALUE 1000.
01 WS-TOTAL-INTEREST PIC S9(15)V99 COMP-3 VALUE 0.
01 WS-RETURN-CODE PIC S9(4) COMP VALUE 0.
PROCEDURE DIVISION.
0000-MAIN.
PERFORM 1000-INITIALIZE.
PERFORM 2000-PROCESS-ACCOUNTS.
PERFORM 3000-FINALIZE.
MOVE WS-RETURN-CODE TO RETURN-CODE.
STOP RUN.
1000-INITIALIZE.
MOVE FUNCTION CURRENT-DATE(1:4) TO
HV-PROCESS-DATE(1:4).
MOVE '-' TO HV-PROCESS-DATE(5:1).
MOVE FUNCTION CURRENT-DATE(5:2) TO
HV-PROCESS-DATE(6:2).
MOVE '-' TO HV-PROCESS-DATE(8:1).
MOVE FUNCTION CURRENT-DATE(7:2) TO
HV-PROCESS-DATE(9:2).
DISPLAY 'INTCALC - DAILY INTEREST CALCULATION'.
DISPLAY 'PROCESS DATE: ' HV-PROCESS-DATE.
DISPLAY '================================'.
2000-PROCESS-ACCOUNTS.
EXEC SQL
DECLARE CSR-INTEREST CURSOR WITH HOLD FOR
SELECT A.ACCOUNT_NUM, A.BALANCE,
A.INTEREST_RATE, A.ACCRUED_INTEREST,
A.ACCT_TYPE
FROM MERIDIAN.ACCOUNTS A
WHERE A.STATUS = 'ACTIVE'
AND A.ACCT_TYPE IN ('SAV', 'MMA')
AND A.BALANCE > 0
ORDER BY A.ACCOUNT_NUM
FOR UPDATE OF ACCRUED_INTEREST
END-EXEC.
EXEC SQL OPEN CSR-INTEREST END-EXEC.
IF SQLCODE NOT = 0
DISPLAY 'ERROR OPENING CURSOR: ' SQLCODE
MOVE 16 TO WS-RETURN-CODE
GO TO 2000-EXIT
END-IF.
PERFORM 2100-FETCH-AND-CALC
UNTIL WS-END-OF-DATA = 'Y'.
EXEC SQL CLOSE CSR-INTEREST END-EXEC.
EXEC SQL COMMIT END-EXEC.
2000-EXIT.
EXIT.
2100-FETCH-AND-CALC.
EXEC SQL
FETCH CSR-INTEREST
INTO :HV-ACCT-NUM, :HV-BALANCE,
:HV-INTEREST-RATE :IND-INTEREST-RATE,
:HV-ACCRUED-INT :IND-ACCRUED-INT,
:HV-ACCT-TYPE
END-EXEC.
EVALUATE SQLCODE
WHEN 0
ADD 1 TO WS-ROWS-PROCESSED
PERFORM 2200-CALCULATE-INTEREST
WHEN +100
MOVE 'Y' TO WS-END-OF-DATA
WHEN OTHER
ADD 1 TO WS-ERROR-COUNT
DISPLAY 'FETCH ERROR: ' SQLCODE
' ACCT: ' HV-ACCT-NUM
END-EVALUATE.
2200-CALCULATE-INTEREST.
* Skip if interest rate is NULL
IF IND-INTEREST-RATE = -1
GO TO 2200-EXIT
END-IF.
* Calculate daily interest: balance * rate / 365
COMPUTE HV-DAILY-INT =
HV-BALANCE * HV-INTEREST-RATE / 365
ON SIZE ERROR
DISPLAY 'OVERFLOW: ' HV-ACCT-NUM
ADD 1 TO WS-ERROR-COUNT
GO TO 2200-EXIT
END-COMPUTE.
* Accumulate interest
IF IND-ACCRUED-INT = -1
MOVE 0 TO HV-ACCRUED-INT
END-IF.
ADD HV-DAILY-INT TO HV-ACCRUED-INT.
* Update the account
MOVE 0 TO IND-ACCRUED-INT.
EXEC SQL
UPDATE MERIDIAN.ACCOUNTS
SET ACCRUED_INTEREST =
:HV-ACCRUED-INT :IND-ACCRUED-INT
WHERE CURRENT OF CSR-INTEREST
END-EXEC.
IF SQLCODE = 0
ADD 1 TO WS-ROWS-UPDATED
ADD HV-DAILY-INT TO WS-TOTAL-INTEREST
ELSE
ADD 1 TO WS-ERROR-COUNT
DISPLAY 'UPDATE ERROR: ' SQLCODE
' ACCT: ' HV-ACCT-NUM
END-IF.
* Periodic commit
ADD 1 TO WS-COMMIT-COUNT.
IF WS-COMMIT-COUNT >= WS-COMMIT-FREQ
EXEC SQL COMMIT END-EXEC
MOVE 0 TO WS-COMMIT-COUNT
END-IF.
2200-EXIT.
EXIT.
3000-FINALIZE.
DISPLAY '================================'.
DISPLAY 'INTCALC - PROCESSING COMPLETE'.
DISPLAY 'ROWS FETCHED : ' WS-ROWS-PROCESSED.
DISPLAY 'ROWS UPDATED : ' WS-ROWS-UPDATED.
DISPLAY 'ERRORS : ' WS-ERROR-COUNT.
DISPLAY 'TOTAL INT : ' WS-TOTAL-INTEREST.
DISPLAY '================================'.
IF WS-ERROR-COUNT > 0
MOVE 4 TO WS-RETURN-CODE
END-IF.
Program 2: Transaction Posting (TXNPOST)
This program reads a sequential file of transactions and posts them to accounts:
IDENTIFICATION DIVISION.
PROGRAM-ID. TXNPOST.
*=============================================================*
* PROGRAM: TXNPOST *
* PURPOSE: POST DAILY TRANSACTIONS FROM INPUT FILE TO *
* ACCOUNT BALANCES *
*=============================================================*
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
SELECT TXN-FILE ASSIGN TO TXNINPUT
FILE STATUS IS WS-FILE-STATUS.
DATA DIVISION.
FILE SECTION.
FD TXN-FILE RECORDING MODE F.
01 TXN-RECORD.
05 TXN-ACCT-NUM PIC X(10).
05 TXN-TYPE PIC X(2).
05 TXN-AMOUNT PIC S9(11)V99.
05 TXN-DESC PIC X(40).
05 TXN-BRANCH PIC X(4).
05 FILLER PIC X(21).
WORKING-STORAGE SECTION.
EXEC SQL INCLUDE SQLCA END-EXEC.
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 HV-ACCT-NUM PIC X(10).
01 HV-TXN-AMOUNT PIC S9(11)V99 COMP-3.
01 HV-TXN-TYPE PIC X(2).
01 HV-TXN-DESC PIC X(40).
01 HV-TXN-BRANCH PIC X(4).
01 HV-CURRENT-BAL PIC S9(13)V99 COMP-3.
01 HV-NEW-BAL PIC S9(13)V99 COMP-3.
01 HV-TXN-SEQ PIC S9(9) COMP.
EXEC SQL END DECLARE SECTION END-EXEC.
01 WS-FILE-STATUS PIC X(2).
01 WS-EOF PIC X(1) VALUE 'N'.
01 WS-TXN-READ PIC 9(9) VALUE 0.
01 WS-TXN-POSTED PIC 9(9) VALUE 0.
01 WS-TXN-REJECTED PIC 9(9) VALUE 0.
01 WS-COMMIT-CTR PIC 9(9) VALUE 0.
PROCEDURE DIVISION.
0000-MAIN.
PERFORM 1000-INIT.
PERFORM 2000-PROCESS UNTIL WS-EOF = 'Y'.
PERFORM 3000-CLEANUP.
STOP RUN.
1000-INIT.
OPEN INPUT TXN-FILE.
IF WS-FILE-STATUS NOT = '00'
DISPLAY 'FILE OPEN ERROR: ' WS-FILE-STATUS
MOVE 16 TO RETURN-CODE
STOP RUN
END-IF.
DISPLAY 'TXNPOST - TRANSACTION POSTING STARTED'.
2000-PROCESS.
READ TXN-FILE
AT END MOVE 'Y' TO WS-EOF
NOT AT END PERFORM 2100-POST-TXN
END-READ.
2100-POST-TXN.
ADD 1 TO WS-TXN-READ.
MOVE TXN-ACCT-NUM TO HV-ACCT-NUM.
MOVE TXN-AMOUNT TO HV-TXN-AMOUNT.
MOVE TXN-TYPE TO HV-TXN-TYPE.
MOVE TXN-DESC TO HV-TXN-DESC.
MOVE TXN-BRANCH TO HV-TXN-BRANCH.
* Get current balance
EXEC SQL
SELECT BALANCE INTO :HV-CURRENT-BAL
FROM MERIDIAN.ACCOUNTS
WHERE ACCOUNT_NUM = :HV-ACCT-NUM
END-EXEC.
IF SQLCODE = +100
DISPLAY 'ACCOUNT NOT FOUND: ' HV-ACCT-NUM
ADD 1 TO WS-TXN-REJECTED
GO TO 2100-EXIT
END-IF.
IF SQLCODE NOT = 0
DISPLAY 'SELECT ERROR: ' SQLCODE
ADD 1 TO WS-TXN-REJECTED
GO TO 2100-EXIT
END-IF.
* Calculate new balance
IF HV-TXN-TYPE = 'CR'
ADD HV-TXN-AMOUNT TO HV-CURRENT-BAL
GIVING HV-NEW-BAL
ELSE IF HV-TXN-TYPE = 'DR'
SUBTRACT HV-TXN-AMOUNT FROM HV-CURRENT-BAL
GIVING HV-NEW-BAL
ELSE
DISPLAY 'INVALID TXN TYPE: ' HV-TXN-TYPE
ADD 1 TO WS-TXN-REJECTED
GO TO 2100-EXIT
END-IF.
* Update account balance
EXEC SQL
UPDATE MERIDIAN.ACCOUNTS
SET BALANCE = :HV-NEW-BAL
WHERE ACCOUNT_NUM = :HV-ACCT-NUM
END-EXEC.
IF SQLCODE = 0
* Insert transaction record
EXEC SQL
INSERT INTO MERIDIAN.TRANSACTIONS
(ACCOUNT_NUM, TXN_TYPE, TXN_AMOUNT,
TXN_DESC, BRANCH_ID, TXN_DATE,
NEW_BALANCE)
VALUES
(:HV-ACCT-NUM, :HV-TXN-TYPE,
:HV-TXN-AMOUNT, :HV-TXN-DESC,
:HV-TXN-BRANCH, CURRENT DATE,
:HV-NEW-BAL)
END-EXEC
IF SQLCODE = 0
ADD 1 TO WS-TXN-POSTED
ELSE
DISPLAY 'INSERT ERROR: ' SQLCODE
ADD 1 TO WS-TXN-REJECTED
EXEC SQL ROLLBACK END-EXEC
GO TO 2100-EXIT
END-IF
ELSE
DISPLAY 'UPDATE ERROR: ' SQLCODE
ADD 1 TO WS-TXN-REJECTED
EXEC SQL ROLLBACK END-EXEC
GO TO 2100-EXIT
END-IF.
* Periodic commit
ADD 1 TO WS-COMMIT-CTR.
IF WS-COMMIT-CTR >= 500
EXEC SQL COMMIT END-EXEC
MOVE 0 TO WS-COMMIT-CTR
END-IF.
2100-EXIT.
EXIT.
3000-CLEANUP.
EXEC SQL COMMIT END-EXEC.
CLOSE TXN-FILE.
DISPLAY 'TXNPOST - COMPLETE'.
DISPLAY 'READ : ' WS-TXN-READ.
DISPLAY 'POSTED : ' WS-TXN-POSTED.
DISPLAY 'REJECTED: ' WS-TXN-REJECTED.
These programs illustrate the core patterns: cursor processing with periodic commits, error handling, indicator variables, and the interplay between file I/O and database operations that characterizes mainframe batch processing.
Program 3: Monthly Statement Generation (STMTGEN)
The statement generation program is more complex because it involves nested cursors — an outer cursor over accounts and an inner cursor over transactions for each account:
IDENTIFICATION DIVISION.
PROGRAM-ID. STMTGEN.
*=============================================================*
* PROGRAM: STMTGEN *
* PURPOSE: GENERATE MONTHLY STATEMENTS FOR ALL ACTIVE *
* CHECKING, SAVINGS, AND MONEY MARKET ACCOUNTS *
*=============================================================*
DATA DIVISION.
WORKING-STORAGE SECTION.
EXEC SQL INCLUDE SQLCA END-EXEC.
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
01 HV-ACCT-NUM PIC X(10).
01 HV-ACCT-NAME PIC X(40).
01 HV-BALANCE PIC S9(13)V99 COMP-3.
01 HV-BRANCH-ID PIC S9(9) COMP.
01 HV-STMT-START PIC X(10).
01 HV-STMT-END PIC X(10).
01 HV-TXN-DATE PIC X(10).
01 HV-TXN-TYPE PIC X(2).
01 HV-TXN-AMT PIC S9(11)V99 COMP-3.
01 HV-TXN-DESC PIC X(40).
01 HV-OPEN-BAL PIC S9(13)V99 COMP-3.
01 IND-ACCT-NAME PIC S9(4) COMP.
01 IND-TXN-DESC PIC S9(4) COMP.
EXEC SQL END DECLARE SECTION END-EXEC.
01 WS-ACCTS-PROCESSED PIC 9(9) VALUE 0.
01 WS-TXNS-WRITTEN PIC 9(9) VALUE 0.
01 WS-ACCT-TXN-CTR PIC 9(6) VALUE 0.
01 WS-END-ACCTS PIC X(1) VALUE 'N'.
01 WS-END-TXNS PIC X(1) VALUE 'N'.
01 WS-COMMIT-CTR PIC 9(9) VALUE 0.
The key architectural point is the nested cursor pattern. The outer cursor iterates over accounts; for each account, the inner cursor retrieves that account's transactions for the statement period:
* Outer cursor: all accounts needing statements
EXEC SQL
DECLARE CSR-STMT-ACCTS CURSOR WITH HOLD FOR
SELECT ACCOUNT_NUM, ACCT_NAME, BALANCE,
BRANCH_ID
FROM MERIDIAN.ACCOUNTS
WHERE STATUS = 'A'
AND ACCT_TYPE IN ('CHK', 'SAV', 'MMA')
ORDER BY BRANCH_ID, ACCOUNT_NUM
END-EXEC.
* Inner cursor: transactions for one account
EXEC SQL
DECLARE CSR-ACCT-TXNS CURSOR FOR
SELECT TXN_DATE, TXN_TYPE, TXN_AMOUNT,
TXN_DESC
FROM MERIDIAN.TRANSACTIONS
WHERE ACCOUNT_NUM = :HV-ACCT-NUM
AND TXN_DATE BETWEEN :HV-STMT-START
AND :HV-STMT-END
ORDER BY TXN_DATE, TXN_TIMESTAMP
END-EXEC.
Notice that CSR-STMT-ACCTS is declared WITH HOLD (it must survive periodic commits), but CSR-ACCT-TXNS is not — it is opened and closed for each account within a single commit scope. The inner cursor uses the current value of :HV-ACCT-NUM from the outer cursor, which is set before each OPEN of the inner cursor.
This nested cursor pattern is one of the most common patterns in mainframe batch processing. It efficiently handles the one-to-many relationship between accounts and transactions without loading all data into memory at once.
Operational Considerations for Batch Programs
Production batch programs at Meridian Bank follow several operational standards beyond the SQL itself:
Return codes. Every program sets a return code before STOP RUN:
- RC=0: Successful completion, no issues
- RC=4: Completed with warnings (e.g., some records skipped)
- RC=8: Completed with errors (e.g., some records failed)
- RC=12: Severe error, but some work was committed
- RC=16: Fatal error, program could not continue
The job scheduler uses these return codes to decide whether downstream jobs should execute.
Restart and recovery. If a batch program fails midway (RC=16), it must be restartable. The WITH HOLD cursor and periodic commit pattern supports this — the program committed work at regular intervals, so you only need to reprocess the uncommitted tail. Some programs write a checkpoint record (the last committed account number) to a restart file. On restart, they skip forward to that account.
Statistics and logging. Every program displays its processing statistics (rows read, rows updated, errors, totals) to the JES spool output. This information is critical for operations monitoring and debugging. At Meridian Bank, these statistics are also written to a DB2 control table for trend analysis.
Concurrency with online systems. Batch programs run during the batch window, but some online systems remain active (ATMs, mobile banking). The BIND options ISOLATION(CS) and RELEASE(COMMIT) ensure that batch programs release locks frequently, minimizing impact on online transactions. The periodic commit pattern (every 500-1,000 rows) is calibrated to prevent lock escalation while maintaining reasonable throughput.
Spaced Review: Connecting to Earlier Chapters
From Chapter 5: Data Types and Column Definitions
The host variable declarations in Section 32.3 directly map to the DB2 data types you learned in Chapter 5. When you defined a column as DECIMAL(15,2), you now know that the corresponding COBOL host variable is PIC S9(13)V99 COMP-3. The precision and scale must match exactly, or you risk truncation or conversion errors.
Quick check: What COBOL picture clause corresponds to a DB2 CHAR(10) column? Answer: PIC X(10).
From Chapter 9: Joins and Subqueries in Cursors
The cursor declarations in Section 32.6 can contain any valid SELECT statement — including the complex joins you mastered in Chapter 9. A production cursor for Meridian Bank might join ACCOUNTS with CUSTOMERS and BRANCHES:
EXEC SQL
DECLARE CSR-CUST-ACCTS CURSOR FOR
SELECT C.CUST_NAME, A.ACCOUNT_NUM,
A.BALANCE, B.BRANCH_NAME
FROM MERIDIAN.CUSTOMERS C
INNER JOIN MERIDIAN.ACCOUNTS A
ON C.CUST_ID = A.CUST_ID
INNER JOIN MERIDIAN.BRANCHES B
ON A.BRANCH_ID = B.BRANCH_ID
WHERE C.STATUS = 'ACTIVE'
ORDER BY C.CUST_NAME
END-EXEC.
Quick check: If this cursor returns 50,000 rows and you are using single-row FETCH, how many calls to DB2 are made for FETCH alone? Answer: 50,001 — one per row, plus the final FETCH that returns SQLCODE +100 (the OPEN and CLOSE are additional calls). Multi-row FETCH with a rowset size of 100 would reduce this to approximately 500 FETCH calls.
From Chapter 22: Authorization and the BIND Process
In Chapter 22, you learned about DB2 authorization. The BIND process (Section 32.9) is where authorization intersects with embedded SQL. The person who binds the plan/package needs the appropriate SQL privileges on the tables. At runtime, the user who executes the program needs only EXECUTE authority on the plan — they do not need direct table access. This is a form of encapsulated authorization that simplifies security management.
Quick check: If developer JSMITH binds a plan and the SQL includes SELECT FROM MERIDIAN.ACCOUNTS, who needs SELECT authority on the table — JSMITH or the runtime user? Answer: JSMITH (the binder/plan owner), not the runtime user.
Debugging Embedded SQL Programs
Before we close this chapter, let us address a practical concern: what do you do when an embedded SQL program fails in production?
Common Diagnostic Steps
Step 1: Check SQLCODE and SQLSTATE. The program's spool output should display the SQLCODE and SQLSTATE from the error handler. Cross-reference these with the DB2 Codes manual (GC19-4770) or the SQLCODE tables in Section 32.5.
Step 2: Check SQLERRMC. The SQLERRMC field often contains token values that identify the specific object or constraint involved. For example, SQLCODE -803 includes the index name that detected the duplicate key. SQLCODE -530 includes the constraint name for the referential integrity violation.
Step 3: Check the DB2 diagnostic log. On z/OS, the DSNJ004I messages in the DB2 diagnostic log (SDSNDIAG) provide additional context for errors that the SQLCA cannot fully explain.
Step 4: Verify the BIND. If you get SQLCODE -805 (package not found) or -818 (timestamp mismatch), the problem is in the BIND, not the SQL. Verify that the DBRM used for the load module matches the DBRM used for the package. The consistency token (a timestamp embedded during precompile) must match.
Step 5: Review PLAN_TABLE. If the program runs but performs poorly, check the access paths. BIND with EXPLAIN(YES) populates the PLAN_TABLE, showing which indexes DB2 chose and what join methods it selected. Compare the actual access path with what you expected.
The Consistency Token Issue
The most frustrating embedded SQL error is SQLCODE -818 — timestamp mismatch. This happens when the precompile/compile and bind steps use different versions of the DBRM. The typical cause is a build process that precompiles the source to produce a new DBRM but then accidentally binds the old DBRM from a previous build.
The fix: ensure your JCL or build script uses the same DBRM for both the compile input and the BIND input. Never have a separate copy of the DBRM that might be stale. The JCL template in Section 32.2 addresses this by passing the DBRM through temporary datasets within a single job.
Debugging Dynamic SQL
Dynamic SQL errors are harder to debug because the SQL text is not visible in the precompiler listing. When a PREPARE fails, check SQLERRMC and SQLERRD(5) — SQLERRD(5) contains the position within the SQL string where the error was detected. Display the SQL string in your error handler so you can see exactly what was prepared.
Summary
Embedded SQL is the time-tested interface between host-language programs and DB2. In this chapter, you learned the complete lifecycle — from writing SQL embedded in COBOL source code, through precompilation and binding, to runtime execution. You now understand host variables and indicator variables, the SQLCA for error handling, cursors for multi-row processing, and the critical distinction between static and dynamic SQL. The Meridian Bank batch programs demonstrated how these concepts come together in production systems that process millions of transactions every night.
The BIND process is not just an operational step — it is where DB2's optimizer makes its access path decisions for static SQL. Understanding bind options like ISOLATION, CURRENTDATA, and QUALIFIER gives you fine-grained control over performance and portability. In the next chapter, we will cross into the modern application access world — JDBC, ODBC, and Python — where dynamic SQL is the default and connection management becomes the primary concern.