Answers to Selected Exercises
This appendix provides worked solutions for selected exercises from each chapter. The intent is to show the reasoning process, not just the final answer. Where multiple solutions exist, we present the most straightforward approach and note alternatives.
Part I: Foundations
Chapter 1: Introduction to DB2
Exercise 1.2. List three differences between DB2 for z/OS and DB2 for LUW.
Answer. (1) DB2 for z/OS uses EBCDIC encoding by default; LUW uses ASCII or UTF-8. (2) z/OS administers table spaces and index spaces as named objects with explicit data set allocation; LUW uses automatic storage by default. (3) z/OS uses JCL-based batch utilities (REORG, RUNSTATS, COPY); LUW uses CLP commands. Other valid differences include bind semantics, lock avoidance techniques, and data sharing vs. HADR/pureScale.
Exercise 1.4. Why would an organization run DB2 on z/OS instead of LUW?
Answer. z/OS provides hardware-level reliability (ECC memory, redundant I/O paths, Parallel Sysplex), mature workload management, and decades of backward compatibility for COBOL/CICS/IMS applications. Organizations with existing mainframe investments, regulatory requirements for proven availability, or transaction volumes exceeding millions of transactions per hour often remain on z/OS. The total cost of ownership argument favors z/OS when the workload density justifies the hardware.
Exercise 1.5. Install Db2 Community Edition using Docker and verify connectivity.
Answer. See Appendix C, Section C.1.1 for the step-by-step procedure. Verification:
SELECT CURRENT TIMESTAMP, CURRENT SCHEMA FROM SYSIBM.SYSDUMMY1;
A successful result confirms connectivity and basic catalog access.
Chapter 2: Relational Model and Database Design
Exercise 2.3. Normalize the following flat record to 3NF: ORDER(OrderID, CustName, CustAddr, ItemName, ItemPrice, Qty, OrderDate).
Answer.
1NF: The record has a repeating group (ItemName, ItemPrice, Qty can repeat per order). Split into:
- ORDER(OrderID, CustName, CustAddr, OrderDate)
- ORDER_ITEM(OrderID, ItemName, ItemPrice, Qty)
2NF: In ORDER_ITEM, ItemPrice depends on ItemName alone (partial dependency on the composite key). Split:
- ORDER_ITEM(OrderID, ItemName, Qty)
- ITEM(ItemName, ItemPrice)
3NF: In ORDER, CustAddr depends on CustName (transitive dependency through a non-key attribute). Split:
- ORDER(OrderID, CustName, OrderDate)
- CUSTOMER(CustName, CustAddr)
Better design with surrogate keys:
- CUSTOMER(CustID PK, CustName, CustAddr)
- ITEM(ItemID PK, ItemName, ItemPrice)
- ORDER(OrderID PK, CustID FK, OrderDate)
- ORDER_ITEM(OrderID FK, ItemID FK, Qty, PRIMARY KEY(OrderID, ItemID))
Exercise 2.5. What is the difference between 3NF and BCNF?
Answer. In 3NF, every non-key attribute must depend on the key, the whole key, and nothing but the key. BCNF strengthens this: every determinant must be a candidate key. The difference arises when a table has overlapping composite candidate keys. In practice, most 3NF designs are also in BCNF; the distinction matters mainly in academic exercises or tables with multiple overlapping natural keys.
Chapter 3: Creating Tables and Defining Data
Exercise 3.4. Write CREATE TABLE for an EMPLOYEE table with: EMPNO (6-char, PK), FIRSTNAME, LASTNAME (VARCHAR 30), HIREDATE (DATE, default today), SALARY (DECIMAL 9,2, must be >= 0), DEPTNO (FK to DEPARTMENT).
Answer.
CREATE TABLE EMPLOYEE (
EMPNO CHAR(6) NOT NULL,
FIRSTNAME VARCHAR(30) NOT NULL,
LASTNAME VARCHAR(30) NOT NULL,
HIREDATE DATE NOT NULL WITH DEFAULT CURRENT DATE,
SALARY DECIMAL(9,2) NOT NULL,
DEPTNO CHAR(3),
CONSTRAINT PK_EMP PRIMARY KEY (EMPNO),
CONSTRAINT FK_EMP_DEPT FOREIGN KEY (DEPTNO)
REFERENCES DEPARTMENT (DEPTNO),
CONSTRAINT CHK_SAL CHECK (SALARY >= 0)
);
Note: DEPTNO is nullable to allow employees not yet assigned to a department. If business rules require a department, add NOT NULL.
Exercise 3.6. What happens if you INSERT a row with a DEPTNO that does not exist in DEPARTMENT?
Answer. DB2 raises SQLCODE -530 (SQLSTATE 23503): "The insert or update value of the FOREIGN KEY is not equal to any value of the parent key." The INSERT is rejected. The referential integrity constraint FK_EMP_DEPT prevents orphan rows.
Chapter 5: Modifying Data
Exercise 5.3. Write a MERGE statement that synchronizes EMPLOYEE_STAGING into EMPLOYEE: update salary and department for existing employees; insert new employees.
Answer.
MERGE INTO EMPLOYEE T
USING EMPLOYEE_STAGING S
ON T.EMPNO = S.EMPNO
WHEN MATCHED THEN
UPDATE SET T.SALARY = S.SALARY,
T.DEPTNO = S.DEPTNO
WHEN NOT MATCHED THEN
INSERT (EMPNO, FIRSTNAME, LASTNAME, HIREDATE, SALARY, DEPTNO)
VALUES (S.EMPNO, S.FIRSTNAME, S.LASTNAME, S.HIREDATE, S.SALARY, S.DEPTNO);
Exercise 5.5. Delete all employees who are not assigned to any department (DEPTNO IS NULL) and have a hire date before 2000.
Answer.
DELETE FROM EMPLOYEE
WHERE DEPTNO IS NULL
AND HIREDATE < '2000-01-01';
Before running this in production, verify the count first:
SELECT COUNT(*) FROM EMPLOYEE
WHERE DEPTNO IS NULL AND HIREDATE < '2000-01-01';
Chapter 6: Querying Data
Exercise 6.4. Find the top 3 highest-paid employees in each department.
Answer.
WITH RANKED AS (
SELECT EMPNO, FIRSTNAME, LASTNAME, DEPTNO, SALARY,
ROW_NUMBER() OVER (PARTITION BY DEPTNO ORDER BY SALARY DESC) AS RN
FROM EMPLOYEE
)
SELECT EMPNO, FIRSTNAME, LASTNAME, DEPTNO, SALARY
FROM RANKED
WHERE RN <= 3
ORDER BY DEPTNO, RN;
Alternative: use RANK() instead of ROW_NUMBER() if you want ties to share the same rank (which could return more than 3 per department). DENSE_RANK() is another option that handles ties differently.
Exercise 6.7. Write a recursive CTE to display an organizational hierarchy starting from the CEO (MGRNO IS NULL), showing employee name and level.
Answer.
WITH ORG (EMPNO, LASTNAME, MGRNO, LVL) AS (
-- Anchor: the CEO
SELECT EMPNO, LASTNAME, MGRNO, 0
FROM EMPLOYEE
WHERE MGRNO IS NULL
UNION ALL
-- Recursive: direct reports
SELECT E.EMPNO, E.LASTNAME, E.MGRNO, O.LVL + 1
FROM EMPLOYEE E
JOIN ORG O ON E.MGRNO = O.EMPNO
)
SELECT REPEAT(' ', LVL) || LASTNAME AS ORG_CHART, LVL
FROM ORG
ORDER BY LVL, LASTNAME;
The REPEAT(' ', LVL) creates visual indentation. The ORDER BY produces a level-by-level listing; for a true tree traversal, you would need additional path tracking.
Part II: Physical Design and Storage
Chapter 8: Indexes
Exercise 8.3. Given the query SELECT EMPNO, LASTNAME FROM EMPLOYEE WHERE DEPTNO = 'D01' AND SALARY > 50000, design an optimal index.
Answer. The optimal index is:
CREATE INDEX IX_EMP_OPT ON EMPLOYEE (DEPTNO, SALARY, LASTNAME, EMPNO);
Reasoning: DEPTNO carries an equality predicate, so it leads the key; SALARY carries a range predicate, so it follows. LASTNAME and EMPNO appear only in the SELECT list and are appended as trailing key columns purely to enable index-only access, avoiding any data page reads. Note that DB2 permits INCLUDE columns only on unique indexes, so on a non-unique index the covering columns must go into the key itself. Also, unlike some other database systems, a DB2 index leaf entry holds only the key and the RID, so the table's primary key is not implicitly available in a secondary index.
Exercise 8.5. What is the height of a B+ tree index on a 500-million-row table with an 8-byte key, 4 KB pages?
Answer. Using the formula from Appendix A:
Fanout = floor(4096 / (8 + 4 + 10)) = floor(4096 / 22) = 186
Leaf pages = ceil(500,000,000 / 186) = 2,688,173
Height = ceil(log_186(2,688,173)) + 1
log_186(2,688,173) = ln(2,688,173) / ln(186) = 14.80 / 5.23 = 2.83
Height = ceil(2.83) + 1 = 3 + 1 = 4
The index has 4 levels. With the root and first non-leaf level cached, only 2 physical I/Os are needed per lookup.
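The calculation generalizes to other key sizes and row counts. A short script (assuming, as above, an 8-byte key, a 4-byte RID, and roughly 10 bytes of per-entry overhead from the Appendix A formula) reproduces the numbers:

```python
import math

def btree_levels(rows, key_bytes=8, rid_bytes=4, overhead_bytes=10,
                 page_bytes=4096):
    """Estimate B+ tree fanout, leaf page count, and total levels.

    Simplified model: every page (leaf and non-leaf) holds `fanout`
    entries of key + RID/pointer + overhead.
    """
    fanout = page_bytes // (key_bytes + rid_bytes + overhead_bytes)
    leaf_pages = math.ceil(rows / fanout)
    # Non-leaf levels needed to fan out from a single root to all leaves.
    nonleaf = math.ceil(math.log(leaf_pages, fanout))
    return fanout, leaf_pages, nonleaf + 1  # +1 counts the leaf level

print(btree_levels(500_000_000))  # (186, 2688173, 4)
```

Doubling the key size to 16 bytes drops the fanout to 136 but still yields a 4-level tree, which is why index height grows so slowly with table size.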
Chapter 9: Buffer Pools and Memory
Exercise 9.2. A buffer pool has 100,000 GETPAGE requests and 5,000 synchronous reads. What is the hit ratio?
Answer.
Hit Ratio = (100,000 - 5,000) / 100,000 * 100 = 95%
This is a good hit ratio for a data buffer pool. If it drops below 80%, consider increasing the buffer pool size.
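As a sanity check, the ratio can be computed directly (a trivial helper for the formula above, not a DB2 API):

```python
def hit_ratio_pct(getpages, sync_reads):
    """Buffer pool hit ratio: share of GETPAGE requests satisfied
    from the pool without a synchronous physical read."""
    return (getpages - sync_reads) * 100 / getpages

print(hit_ratio_pct(100_000, 5_000))  # 95.0
```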
Exercise 9.4. An OLTP system processes 3,000 TPS, each transaction doing 15 logical reads. With a 92% buffer pool hit ratio, how many physical IOPS does the storage system need to support?
Answer.
IOPS = 3,000 * 15 * (1 - 0.92) = 3,000 * 15 * 0.08 = 3,600 IOPS
This is within the capability of a single modern SSD (which typically provides 50,000-100,000+ random IOPS). For spinning disk, you would need approximately 36 drives at 100 IOPS each.
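The same sizing can be scripted, which makes it easy to test sensitivity to the hit ratio (a 92% vs. 99% pool makes a 12x difference in required IOPS):

```python
def required_read_iops(tps, logical_reads_per_txn, hit_ratio):
    """Physical read IOPS = logical read rate times the buffer pool miss ratio."""
    return tps * logical_reads_per_txn * (1 - hit_ratio)

print(round(required_read_iops(3_000, 15, 0.92)))  # 3600
print(round(required_read_iops(3_000, 15, 0.99)))  # 450
```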
Part III: Administration
Chapter 11: Statistics and Maintenance
Exercise 11.3. Write a RUNSTATS command for the ORDERS table that collects distribution statistics, column group statistics for (CUSTOMER_ID, ORDER_DATE), and index statistics for all indexes.
Answer (LUW):
RUNSTATS ON TABLE MYSCHEMA.ORDERS
ON ALL COLUMNS AND COLUMNS ((CUSTOMER_ID, ORDER_DATE))
WITH DISTRIBUTION
AND DETAILED INDEXES ALL
ALLOW WRITE ACCESS;
Answer (z/OS, via utility JCL):
RUNSTATS TABLESPACE ORDERSDB.ORDERTS
TABLE(MYSCHEMA.ORDERS) SAMPLE 25
COLGROUP(CUSTOMER_ID, ORDER_DATE)
HISTOGRAM NUMQUANTILES 100
INDEX(ALL)
SHRLEVEL CHANGE
REPORT YES
The COLGROUP statistics help the optimizer when these two columns are used together in predicates, avoiding the independence assumption.
Exercise 11.5. A table has 10 million rows. After a bulk delete of 4 million rows, performance degrades. What maintenance steps should you take and in what order?
Answer. The recommended order is:
- REORG the table (and its indexes) to reclaim the space from the deleted rows and restore physical clustering:
  - LUW: REORG TABLE schema.tablename INPLACE ALLOW WRITE ACCESS
  - z/OS: REORG TABLESPACE utility with SHRLEVEL CHANGE
- RUNSTATS to update catalog statistics to reflect the new row count and distribution:
  - LUW: RUNSTATS ON TABLE schema.tablename WITH DISTRIBUTION AND DETAILED INDEXES ALL
  - z/OS: RUNSTATS utility with TABLE(ALL) INDEX(ALL)
- REBIND affected packages (z/OS) or flush the dynamic statement cache (LUW) so that query plans reflect the new statistics.
The order matters: REORG first (so the physical layout is clean), then RUNSTATS (so statistics reflect the reorganized state), then REBIND (so plans use the new statistics).
Chapter 13: Concurrency and Locking Strategies
Exercise 13.2. Two transactions frequently deadlock: Transaction A updates ACCOUNTS then TRANSACTIONS; Transaction B updates TRANSACTIONS then ACCOUNTS. How do you fix this?
Answer. Establish a consistent lock ordering. Both transactions should access tables in the same order---for example, always ACCOUNTS before TRANSACTIONS. Rewrite Transaction B to:
-- Transaction B (fixed)
UPDATE ACCOUNTS SET BALANCE = BALANCE - 100 WHERE ACCT_ID = 5001;
UPDATE TRANSACTIONS SET STATUS = 'POSTED' WHERE TXN_ID = 9001;
COMMIT;
This eliminates the circular wait condition that causes deadlocks. If you cannot control the access order (e.g., in packaged applications), consider reducing lock duration by committing more frequently or lowering the isolation level from RR to CS where business rules permit.
Exercise 13.5. Explain the difference between SQLCODE -911 and SQLCODE -913 on z/OS.
Answer. Both indicate a lock conflict (deadlock or timeout), but they differ in rollback behavior:
- SQLCODE -911: DB2 has already rolled back the current unit of work. The application should not issue ROLLBACK (it is already done) and should simply retry the transaction.
- SQLCODE -913: DB2 has NOT rolled back. The application must decide whether to ROLLBACK and retry, or COMMIT what it has done so far. On z/OS this code is returned for a deadlock or timeout when DB2 does not perform automatic rollback, as in some IMS and CICS attachment environments.
On LUW, -911 is the normal result (the unit of work is rolled back); -913 appears only when the registry variable DB2LOCK_TO_RB=STATEMENT limits the rollback to the failing statement.
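The -911 handling rule (do not issue ROLLBACK, just re-run the unit of work) can be sketched as a generic retry loop. DeadlockRetryError here is a hypothetical stand-in for however your driver surfaces SQLCODE -911; it is not part of any real DB2 client API:

```python
class DeadlockRetryError(Exception):
    """Hypothetical marker for an SQLCODE -911 condition (already rolled back)."""

def run_with_retry(unit_of_work, max_attempts=3):
    """Re-run a unit of work after -911. DB2 has already rolled it back,
    so the only correct responses are retry or give up."""
    for attempt in range(1, max_attempts + 1):
        try:
            return unit_of_work()
        except DeadlockRetryError:
            if attempt == max_attempts:
                raise  # give up after the final attempt

# Example: a unit of work that deadlocks once, then succeeds on retry.
calls = {"n": 0}
def transfer():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockRetryError()
    return "committed"

print(run_with_retry(transfer))  # committed
```

A -913 handler would differ only in that it must issue an explicit ROLLBACK before retrying.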
Chapter 14: Logging and Recovery Concepts
Exercise 14.2. A system generates 2,500 TPS with an average of 400 bytes of log per transaction. How much archive log space is needed per day?
Answer.
Log rate = 2,500 * 400 = 1,000,000 bytes/sec = ~0.95 MB/sec
Daily volume = 0.95 MB/sec * 86,400 sec = ~82 GB/day
Allow 20% overhead for log record headers and partial-page padding: ~100 GB/day. For a week of retention, budget ~700 GB of archive log storage.
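The sizing above can be scripted; the 20% overhead factor is this appendix's planning assumption, not a DB2 constant, and decimal gigabytes are used here, which lands near the ~100 GB/day and ~700 GB/week budget in the text:

```python
def archive_log_gb_per_day(tps, log_bytes_per_txn, overhead=0.20):
    """Daily archive log volume in decimal GB, padded for log record
    headers and partial-page wastage."""
    daily_bytes = tps * log_bytes_per_txn * 86_400  # seconds per day
    return daily_bytes * (1 + overhead) / 1e9

daily = archive_log_gb_per_day(2_500, 400)
print(round(daily), round(daily * 7))  # ~104 GB/day, ~726 GB/week
```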
Exercise 14.5. Explain the difference between LOGPRIMARY and LOGSECOND on LUW.
Answer. LOGPRIMARY specifies the number of log files pre-allocated at database activation. These are always available and their disk space is reserved. LOGSECOND specifies additional log files that DB2 allocates on demand when primary logs are exhausted. If both primary and secondary logs fill, you get SQLCODE -964 (log full). Best practice: set LOGPRIMARY large enough for normal workload; use LOGSECOND as a safety margin for batch operations or long-running transactions. Setting LOGSECOND to -1 enables infinite secondary logs (DB2 will allocate as many as needed), which prevents log-full conditions but can fill the disk if a transaction runs away.
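Maximum active log capacity follows directly from these parameters, since on LUW each log file holds LOGFILSIZ pages of 4 KB. A quick sizing sketch (the parameter values are illustrative, not recommendations):

```python
def max_active_log_bytes(logprimary, logsecond, logfilsiz_pages):
    """Upper bound on active log space: (LOGPRIMARY + LOGSECOND) files,
    each LOGFILSIZ pages of 4 KB. Not meaningful when LOGSECOND = -1
    (infinite logging)."""
    return (logprimary + logsecond) * logfilsiz_pages * 4096

gb = max_active_log_bytes(20, 40, 10_000) / 1024**3
print(round(gb, 2))  # 2.29
```

A long-running transaction that would exceed this bound is what triggers SQLCODE -964.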
Part IV: Performance
Chapter 12: Query Optimization and Access Plans
Exercise 12.3. Explain why the optimizer might choose a table scan over an index scan even when an index exists on the filtered column.
Answer. The optimizer estimates total cost (I/O + CPU). An index scan is cheaper only when it eliminates a large fraction of rows. If the filter factor is high (e.g., the predicate matches 30% of rows), the index scan must random-read 30% of data pages---which can be more expensive than a sequential table scan that reads all pages via prefetch. The crossover point is typically 5-15% of rows for non-clustered indexes and higher (20-40%) for clustered indexes. The optimizer also considers buffer pool caching: if the table is small enough to fit in the buffer pool, a table scan may be nearly free.
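The crossover can be illustrated with a toy cost model; the millisecond costs below are made-up illustrative constants, not DB2's actual optimizer formulas, and the model ignores CPU cost and caching:

```python
def cheaper_access(pages, filter_factor, random_ms=5.0, sequential_ms=0.5):
    """Toy comparison: an index scan pays one random read per qualifying
    data page; a table scan prefetches every page sequentially."""
    index_cost = pages * filter_factor * random_ms
    scan_cost = pages * sequential_ms
    return "index" if index_cost < scan_cost else "tablescan"

print(cheaper_access(10_000, 0.01))  # index
print(cheaper_access(10_000, 0.30))  # tablescan
```

With these constants the break-even filter factor is sequential_ms / random_ms = 10%, inside the 5-15% range cited above for non-clustered indexes.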
Exercise 12.6. Given EXPLAIN output showing a TBSCAN with estimated cardinality of 1,000,000 rows but actual rows of only 50, what is likely wrong?
Answer. The statistics are stale or missing. The optimizer estimated 1,000,000 rows because CARD(table) is high and there are no statistics to accurately estimate the filter factor. The actual result of 50 rows means the predicate is highly selective, and an index scan would have been far more efficient. Solution: run RUNSTATS on the table, including distribution statistics if the data is skewed.
Part V: High Availability
Chapter 15: Backup and Recovery
Exercise 15.2. Write the commands to perform an online backup and a point-in-time recovery to 2026-03-15 12:00:00 on LUW.
Answer.
Backup:
db2 "BACKUP DATABASE PRODDB ONLINE TO /backup/db2 COMPRESS"
Recovery (assume the database needs to be restored first):
db2 "RESTORE DATABASE PRODDB FROM /backup/db2 TAKEN AT 20260314200000"
db2 "ROLLFORWARD DATABASE PRODDB TO 2026-03-15-12.00.00 USING LOCAL TIME AND STOP OVERFLOW LOG PATH (/archlog)"
The OVERFLOW LOG PATH is needed if archived logs are stored separately from the default log path.
Chapter 17: HADR
Exercise 17.3. What is the difference between SYNC, NEARSYNC, and ASYNC HADR modes?
Answer.
- SYNC: The primary waits for the standby to write the log to disk before returning COMMIT to the application. Zero data loss guaranteed. Highest latency impact.
- NEARSYNC: The primary waits for the standby to receive the log in memory (but not write it to disk). Near-zero data loss (only data in standby memory is at risk). Lower latency than SYNC.
- ASYNC: The primary does not wait for the standby at all. Lowest latency impact. Potential data loss equals the amount of log not yet shipped at the time of failure.
Choose based on RPO requirements and network latency between primary and standby sites.
Exercise 17.5. Your HADR standby is in ASYNC mode and the primary fails. How much data might be lost?
Answer. Data loss equals the volume of log records generated on the primary but not yet received by the standby at the moment of failure. This is typically measured in seconds of transactions. To estimate:
Potential_Data_Loss = Log_Rate * Replication_Lag
If the primary generates 2 MB/sec of log and the replication lag is 3 seconds, you could lose up to 6 MB of log data (several hundred to several thousand transactions, depending on transaction size). Monitor replication lag using db2pd -db DBNAME -hadr on the primary, checking HADR_LOG_GAP and comparing PRIMARY_LOG_TIME with STANDBY_REPLAY_LOG_TIME.
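The worst-case exposure is a simple product; the sketch below also converts it to a rough transaction count (the 400-byte average log record per transaction is an illustrative assumption, borrowed from Exercise 14.2, not a measured value):

```python
def potential_loss(log_rate_mb_per_sec, replication_lag_sec,
                   avg_txn_log_bytes=400):
    """Worst-case unshipped log at failure = generation rate x lag,
    plus a rough count of transactions that log volume represents."""
    lost_mb = log_rate_mb_per_sec * replication_lag_sec
    approx_txns = int(lost_mb * 1024 * 1024 / avg_txn_log_bytes)
    return lost_mb, approx_txns

print(potential_loss(2, 3))  # (6, 15728)
```

At 2 MB/sec and a 3-second lag, roughly 6 MB of log (on the order of fifteen thousand small transactions) is at risk, consistent with the estimate above.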
Chapter 18: Data Sharing and Clustering
Exercise 18.3. In a z/OS data sharing group with 4 members, member DB2B fails. What happens to in-flight transactions on DB2B?
Answer. When DB2B fails, the remaining members detect the failure through the Coupling Facility (XCF signaling). The z/OS Automatic Restart Manager (ARM) then restarts DB2B, possibly on another LPAR in restart-light mode, which: (1) reads DB2B's log to identify in-flight transactions, (2) rolls back the uncommitted work, and (3) releases the retained locks DB2B was holding. Until that restart completes, data that DB2B was modifying remains protected by retained locks and is temporarily unavailable; applications connected to other members that access the same data may experience brief waits. Applications connected to DB2B are disconnected and must reconnect to another member (via Sysplex workload balancing or WLM). A full group restart is required only if all members fail simultaneously.
Part VI: Application Development
Chapter 19: Application Programming
Exercise 19.4. Write a JDBC method that inserts an order and returns the generated order ID.
Answer.
public int insertOrder(Connection conn, int customerId, Date orderDate)
throws SQLException {
String sql = "SELECT ORDER_ID FROM FINAL TABLE ("
+ "INSERT INTO ORDERS (CUSTOMER_ID, ORDER_DATE) "
+ "VALUES (?, ?))";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
ps.setInt(1, customerId);
ps.setDate(2, orderDate);
try (ResultSet rs = ps.executeQuery()) {
if (rs.next()) {
return rs.getInt(1);
}
throw new SQLException("No generated key returned");
}
}
}
This uses DB2's SELECT FROM FINAL TABLE syntax to retrieve the generated identity value in a single round trip, which is more efficient and less error-prone than calling IDENTITY_VAL_LOCAL() in a separate statement.
Chapter 21: Stored Procedures and UDFs
Exercise 21.3. Write a stored procedure that takes a department number and returns the count of employees and the average salary.
Answer.
CREATE OR REPLACE PROCEDURE GET_DEPT_STATS(
IN p_deptno CHAR(3),
OUT p_count INTEGER,
OUT p_avg_sal DECIMAL(9,2)
)
LANGUAGE SQL
READS SQL DATA
BEGIN
SELECT COUNT(*), AVG(SALARY)
INTO p_count, p_avg_sal
FROM EMPLOYEE
WHERE DEPTNO = p_deptno;
-- Handle case where department has no employees
IF p_count = 0 THEN
SET p_avg_sal = 0;
END IF;
END;
Call it:
CALL GET_DEPT_STATS('D01', ?, ?);
Exercise 21.5. Write a table function that returns all employees hired in a given year.
Answer.
CREATE OR REPLACE FUNCTION EMPS_HIRED_IN_YEAR(p_year INTEGER)
RETURNS TABLE (
EMPNO CHAR(6),
LASTNAME VARCHAR(30),
HIREDATE DATE,
SALARY DECIMAL(9,2)
)
LANGUAGE SQL
READS SQL DATA
RETURN
SELECT EMPNO, LASTNAME, HIREDATE, SALARY
FROM EMPLOYEE
WHERE YEAR(HIREDATE) = p_year;
Usage:
SELECT * FROM TABLE(EMPS_HIRED_IN_YEAR(2025)) AS T;
Part VII: Advanced Topics
Chapter 27: Columnar Processing (BLU Acceleration)
Exercise 27.2. When should you choose column-organized over row-organized tables?
Answer. Column-organized tables excel at analytical queries that scan large numbers of rows but access only a few columns (SELECT SUM(AMOUNT) FROM SALES WHERE REGION = 'EAST' GROUP BY PRODUCT). They benefit from: (1) I/O reduction---only accessed columns are read from disk. (2) Actionable compression---columnar layout compresses much better. (3) SIMD processing---CPU vectorization operates on compressed data. Choose row-organized for OLTP workloads with single-row lookups, frequent updates, and wide-row fetches. Many DB2 shops use both: row-organized for OLTP tables, column-organized for reporting/warehouse tables.
Chapter 30: Temporal Data
Exercise 30.3. Write a query against a system-time temporal table POLICY to find all policy holder names as they existed on January 1, 2025.
Answer.
SELECT POLICY_ID, HOLDER_NAME
FROM POLICY
FOR SYSTEM_TIME AS OF '2025-01-01'
ORDER BY POLICY_ID;
DB2 automatically queries both the base table and the history table, returning the version of each row that was current on the specified date. Rows inserted after January 1, 2025 are excluded; rows deleted before that date are included with their values at deletion time.
Exercise 30.5. Write a query that shows all changes to policy 1001 over time, including when each version was effective.
Answer.
SELECT POLICY_ID, HOLDER_NAME, SYS_START, SYS_END
FROM POLICY
FOR SYSTEM_TIME FROM '0001-01-01' TO '9999-12-30'
WHERE POLICY_ID = 1001
ORDER BY SYS_START;
This returns all versions of policy 1001, from the earliest to the latest. Each row shows when that version became effective (SYS_START) and when it was superseded (SYS_END). The current version has SYS_END equal to the maximum timestamp value. Alternatively, FOR SYSTEM_TIME BETWEEN '0001-01-01' AND '9999-12-30' uses inclusive endpoints.
Chapter 22: Security
Exercise 22.3. Write an RCAC row permission that allows users with the TELLER role to see only accounts in their own branch.
Answer.
CREATE PERMISSION TELLER_BRANCH_FILTER ON ACCOUNTS
FOR ROWS WHERE
(VERIFY_ROLE_FOR_USER(SESSION_USER, 'TELLER') = 1
AND BRANCH_ID = (SELECT BRANCH_ID FROM TELLER_ASSIGNMENTS
WHERE TELLER_ID = SESSION_USER))
OR VERIFY_ROLE_FOR_USER(SESSION_USER, 'MANAGER') = 1
ENFORCED FOR ALL ACCESS
ENABLE;
This permission allows tellers to see only accounts in their assigned branch, while managers (with the MANAGER role) can see all accounts. The VERIFY_ROLE_FOR_USER function checks whether the current user has the specified role. The ENFORCED FOR ALL ACCESS clause means the filter applies to SELECT, UPDATE, and DELETE.
Exercise 22.5. Write a column mask that shows full account numbers only to authorized users and masks them for everyone else.
Answer.
CREATE MASK ACCOUNT_NUM_MASK ON ACCOUNTS
FOR COLUMN ACCOUNT_NUMBER
RETURN
CASE
WHEN VERIFY_ROLE_FOR_USER(SESSION_USER, 'ACCOUNT_ADMIN') = 1
THEN ACCOUNT_NUMBER
ELSE 'XXXX-XXXX-' || RIGHT(ACCOUNT_NUMBER, 4)
END
ENABLE;
-- Activate RCAC on the table
ALTER TABLE ACCOUNTS ACTIVATE ROW ACCESS CONTROL ACTIVATE COLUMN ACCESS CONTROL;
Non-admin users see XXXX-XXXX-1234 (only last 4 digits visible). Admin users see the full account number.
Part VIII: Ecosystem and Migration
Chapter 35: Migration from Oracle to DB2
Exercise 35.2. Translate the following Oracle SQL to DB2: SELECT NVL(COMMISSION, 0) FROM EMPLOYEES WHERE ROWNUM <= 10.
Answer.
SELECT COALESCE(COMMISSION, 0)
FROM EMPLOYEES
FETCH FIRST 10 ROWS ONLY;
NVL maps to COALESCE (standard SQL; also works in Oracle). ROWNUM <= 10 maps to FETCH FIRST 10 ROWS ONLY. If DB2 compatibility mode is enabled (DB2_COMPATIBILITY_VECTOR=ORA), NVL and ROWNUM work directly, but standard SQL syntax is preferred for new code.
Exercise 35.4. What are the three most common incompatibilities when migrating PL/SQL to SQL PL?
Answer. (1) Package state: Oracle PL/SQL packages maintain session state via package variables. SQL PL has no equivalent; use global temporary tables or session variables. (2) Exception handling: Oracle uses EXCEPTION WHEN...THEN blocks; SQL PL uses DECLARE HANDLER FOR SQLSTATE/SQLEXCEPTION. (3) Cursor FOR loops and attributes: Oracle's FOR rec IN (SELECT ...) LOOP maps to SQL PL's FOR statement (FOR v AS SELECT ... DO ... END FOR), but cursor attributes such as %FOUND and %ROWCOUNT have no direct equivalent; use GET DIAGNOSTICS instead. The DB2 Oracle compatibility features (DB2_COMPATIBILITY_VECTOR) address many syntactic differences but do not cover semantic differences in transaction scoping and package state.
Chapter 37: Future Directions
Exercise 37.2. Identify three areas where AI/ML integration is likely to change DB2 administration in the next five years.
Answer. (1) Automatic index management: ML models will predict workload changes and proactively create, modify, or drop indexes without DBA intervention, going beyond the current index advisor. (2) Anomaly detection: Continuous monitoring with ML-based baselines will detect performance regressions, security anomalies, and data quality issues before they become incidents. (3) Query optimization: ML models trained on actual execution statistics will supplement the cost-based optimizer, improving cardinality estimation for complex queries where traditional histograms and column-group statistics fall short. IBM has already moved in this direction with Db2's machine-learning-based query optimization features.
These solutions demonstrate the reasoning and approach expected. Many exercises have multiple valid answers; full credit should be given for any logically correct and well-explained solution, even if it differs from the one shown here.