Case Study 27.1: GlobalBank Account Inquiry System
Background
GlobalBank's account inquiry system is one of the most frequently used functions in the core banking application. Customer service representatives access it hundreds of times per hour to answer customer questions: "What is my balance?" "Did my deposit go through?" "Why was I charged a fee?"
The original system read VSAM files directly. When GlobalBank migrated account data to DB2 in 2015, Maria Chen's team rewrote the inquiry system using embedded SQL. The project provided hard lessons about cursor management, error handling, and the difference between VSAM thinking and relational thinking.
The Requirements
Sarah Kim (who consults across both GlobalBank and MedClaim projects) defined the requirements:
- Account lookup: Given an account number, display account details
- Transaction history: Display the 20 most recent transactions for the account
- Balance calculation: Show current balance, available balance (accounting for holds), and pending transactions
- Response time: Under 500 milliseconds for the full inquiry
Design Decisions
Decision 1: Separate queries vs. JOINs
Derek's first design used a single JOIN across ACCOUNT_MASTER and TRANSACTION_HISTORY:
SELECT A.*, T.*
FROM ACCOUNT_MASTER A
JOIN TRANSACTION_HISTORY T
ON A.ACCT_NUMBER = T.ACCT_NUMBER
WHERE A.ACCT_NUMBER = :HV-ACCT
ORDER BY T.TXN_DATE DESC
FETCH FIRST 20 ROWS ONLY
Maria rejected this: "If the account has no transactions, the JOIN returns zero rows — and you lose the account details. Use a LEFT JOIN or separate queries."
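The LEFT JOIN alternative Maria mentioned keeps the account row even when no transactions exist; it differs from Derek's query only in the join type. One caveat, assuming standard DB2 behavior: when there are no transactions, the T columns come back as nulls, so the host program would need null indicators on every transaction host variable:

```sql
SELECT A.*, T.*
FROM ACCOUNT_MASTER A
LEFT JOIN TRANSACTION_HISTORY T
  ON A.ACCT_NUMBER = T.ACCT_NUMBER
WHERE A.ACCT_NUMBER = :HV-ACCT
ORDER BY T.TXN_DATE DESC
FETCH FIRST 20 ROWS ONLY
```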
Derek switched to separate queries: one SELECT INTO for the account, one cursor for the transactions. This was more COBOL code but more robust.
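The two-query design might be sketched as follows. Column and host-variable names other than ACCT_NUMBER and :HV-ACCT are illustrative, not taken from the actual GlobalBank copybooks:

```cobol
      *> Query 1: account details via SELECT INTO (no cursor needed
      *> for a single-row lookup). Field names are illustrative.
           EXEC SQL
               SELECT ACCT_STATUS, CURR_BALANCE
                 INTO :HV-STATUS, :HV-BALANCE
                 FROM ACCOUNT_MASTER
                WHERE ACCT_NUMBER = :HV-ACCT
           END-EXEC

      *> Query 2: transaction history via a cursor. DECLARE CURSOR
      *> is a precompiler declaration, not an executable statement.
           EXEC SQL
               DECLARE TXN-HIST-CURSOR CURSOR FOR
               SELECT TXN_DATE, TXN_AMOUNT, TXN_DESC
                 FROM TRANSACTION_HISTORY
                WHERE ACCT_NUMBER = :HV-ACCT
                ORDER BY TXN_DATE DESC
           END-EXEC
```

An account with zero transactions now returns its details from the first query, and the cursor simply fetches +100 on the first FETCH.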
Decision 2: Cursor fetch limit
The requirement specified 20 transactions. Derek used FETCH FIRST 20 ROWS ONLY in the cursor's SELECT. Maria suggested an alternative: fetch up to 20 rows using a counter in the COBOL fetch loop, without the FETCH FIRST clause. Her reasoning: "FETCH FIRST is efficient, but if we ever need to change the limit, I'd rather change a WORKING-STORAGE value than rebind the DB2 package."
They compromised: FETCH FIRST 25 ROWS ONLY in the SQL (slightly over-fetching as a buffer), and a COBOL counter that stopped at 20. This gave DB2 the optimization hint while keeping the exact display count flexible.
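A sketch of the compromise fetch loop, assuming the cursor's SELECT ends with FETCH FIRST 25 ROWS ONLY and WORKING-STORAGE holds the display limit (the names WS-TXN-LIMIT and WS-TXN-COUNT are illustrative):

```cobol
      *> WS-TXN-LIMIT (VALUE 20) can be changed without rebinding
      *> the DB2 package; the SQL over-fetches to 25 as a buffer.
           MOVE 0 TO WS-TXN-COUNT
           PERFORM UNTIL SQLCODE NOT = 0
                      OR WS-TXN-COUNT >= WS-TXN-LIMIT
               EXEC SQL
                   FETCH TXN-HIST-CURSOR
                    INTO :HV-TXN-DATE, :HV-TXN-AMOUNT, :HV-TXN-DESC
               END-EXEC
               IF SQLCODE = 0
                   ADD 1 TO WS-TXN-COUNT
                   PERFORM DISPLAY-TRANSACTION
               END-IF
           END-PERFORM
```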
Decision 3: Error handling granularity
The first version had a single SQL-ERROR-HANDLER paragraph. Maria insisted on specific error handling for each SQL statement:
- Account lookup: +100 = "Account not found" (user-facing message)
- Cursor open: non-zero = "System error, please try again" (log details, don't expose to user)
- Cursor fetch: -904 = "System temporarily unavailable" (resource contention)
- Any other negative: Log SQLCODE, SQLSTATE, and SQLERRMC for diagnostics
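The granular handling for one statement, the account lookup, might be sketched with an EVALUATE. The message field and the exact DISPLAY format are illustrative; SQLCODE, SQLSTATE, and SQLERRMC come from the SQLCA:

```cobol
      *> Per-statement error handling after the account SELECT INTO.
           EVALUATE TRUE
               WHEN SQLCODE = 0
                   CONTINUE
               WHEN SQLCODE = +100
      *>           User-facing: the account simply does not exist
                   MOVE "Account not found" TO WS-USER-MSG
               WHEN SQLCODE = -904
                   MOVE "System temporarily unavailable"
                     TO WS-USER-MSG
               WHEN OTHER
      *>           Log full diagnostics; show only a generic message
                   DISPLAY "SQLCODE="  SQLCODE
                           " SQLSTATE=" SQLSTATE
                           " SQLERRMC=" SQLERRMC
                   MOVE "System error, please try again"
                     TO WS-USER-MSG
           END-EVALUATE
```

Each SQL statement gets its own EVALUATE (or equivalent), so a +100 on the cursor FETCH (end of data) is never confused with a +100 on the account lookup (account not found).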
The Cursor Leak Incident
Three weeks after deployment, the system began experiencing intermittent -904 (resource unavailable) errors during peak hours. Tomás Rivera (who consults across projects) diagnosed the problem: a cursor leak.
Tomás first suspected the account lookup's error path: when the lookup failed with an unexpected error, the program branched to SQL-ERROR-HANDLER without closing the transaction history cursor. But at that point the cursor had not yet been opened, so this was not the leak.
The real leak was subtler: when the transaction history cursor fetch encountered an error (not +100, but a rare -911 deadlock), the program branched to the error handler, which issued ROLLBACK but did not CLOSE the cursor. The ROLLBACK implicitly closed the cursor on most paths — but under CICS, the task's cursor resources were not fully released until CLOSE was explicitly issued.
The fix was simple: always CLOSE the cursor in the error handler, regardless of whether it was open. Closing an already-closed cursor returns SQLCODE -501, which the error handler ignored:
SQL-ERROR-HANDLER.
    *> Log diagnostics first: the CLOSE below overwrites SQLCODE
    DISPLAY "SQL Error: " SQLCODE
    *> Unconditional CLOSE; -501 (cursor not open) is harmless here
    EXEC SQL CLOSE TXN-HIST-CURSOR END-EXEC
    EXEC SQL ROLLBACK END-EXEC
    MOVE 99 TO LS-RETURN-CODE
    GOBACK.
Performance Tuning
Initial response times were 800ms — too slow. Tomás identified two issues:
- Missing index: TRANSACTION_HISTORY was indexed by ACCT_NUMBER but not by TXN_DATE within the account. Adding a composite index (ACCT_NUMBER, TXN_DATE DESC) reduced the history query from 400ms to 50ms.
- Unnecessary COMMITs: The inquiry program issued COMMIT after its read-only queries, forcing DB2 to write a log record for every inquiry. Removing the COMMIT for read-only transactions saved 100ms per inquiry.
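The composite index from the first fix could be created with DDL along these lines; the index name is illustrative. DB2 honors the DESC specification, which lets the optimizer satisfy ORDER BY TXN_DATE DESC without a sort:

```sql
-- Hypothetical index name; column order matches the query's
-- WHERE (ACCT_NUMBER =) and ORDER BY (TXN_DATE DESC) clauses.
CREATE INDEX TXNHIST_IX2
    ON TRANSACTION_HISTORY
       (ACCT_NUMBER ASC, TXN_DATE DESC);
```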
Final response time: 120ms average.
Discussion Questions
- Maria preferred separate queries over JOINs for this use case. Under what circumstances would a JOIN be preferable? Consider the trade-off between fewer network round-trips and more flexible error handling.
- The cursor leak incident was caused by an error path that was rarely exercised. How would you test error paths in an embedded SQL program? What test techniques can simulate SQL errors?
- The performance tuning removed COMMITs after read-only queries. Is this always safe? What about lock duration and resource release?
- Derek's response time budget was 500ms. The final system achieved 120ms. Was the extra optimization effort worthwhile, or should they have stopped at "under 500ms"?