Case Study 2: Oracle to DB2 Migration — A 6-Month Journey
"We Knew It Would Be Hard. We Didn't Know Which Parts Would Be Hard."
The Company
Coastal Insurance Group, a property and casualty insurer based in Tampa, Florida, had been running their core policy management system on Oracle since 2008, most recently on Oracle 12c. The system managed 1.2 million active policies, 3.4 million claims across 15 years of history, and processed 12,000 policy transactions per day. The Oracle database was 2.8 TB across 420 tables, with 180 PL/SQL stored procedures, 95 views, and 42 triggers.
In 2023, Coastal's CTO, Andrea Ruiz, made the decision to migrate to DB2 11.5 on Linux. The reasons were primarily economic — Oracle licensing costs had climbed to $1.8 million annually, and the next renewal included a 22% increase. DB2's licensing model, combined with Coastal's IBM middleware stack (MQ, WebSphere), offered projected savings of $900,000 per year. But Andrea was candid with her board: "The migration will cost us $1.2 million and six months of focused effort. The ROI breaks even in 16 months. After that, it's pure savings."
The Team
Coastal assembled a dedicated migration team:
- Tom Reeves, Database Architect — 12 years on Oracle, 3 years of DB2 experience from a previous employer.
- Priya Sharma, Senior Developer — owned the PL/SQL codebase and knew every procedure by name.
- James Okafor, QA Lead — designed the testing strategy.
- Wei Lin, IBM Services Consultant — brought DB2 migration expertise from 8 previous Oracle-to-DB2 projects.
Wei's first statement to the team set the tone: "The schema conversion will be the easy part. The stored procedures will be the medium part. Testing will be the hard part. And testing is where projects like this succeed or fail."
Month 1: Assessment and Planning
Running the Database Conversion Workbench
Wei ran IBM's Database Conversion Workbench (DCW) against the Oracle schema. The results:
| Category | Total Objects | Auto-Converted | Manual Work |
|---|---|---|---|
| Tables | 420 | 398 (94.8%) | 22 |
| Indexes | 680 | 672 (98.8%) | 8 |
| Views | 95 | 81 (85.3%) | 14 |
| Stored Procedures | 180 | 118 (65.6%) | 62 |
| Triggers | 42 | 34 (81.0%) | 8 |
| Sequences | 55 | 55 (100%) | 0 |
The 94.8% table conversion rate looked encouraging, but the 65.6% stored procedure rate was concerning. Priya reviewed the 62 manual-intervention procedures and categorized the issues:
- 28 procedures: Oracle-specific exception handling (WHEN OTHERS, custom named exceptions, RAISE_APPLICATION_ERROR patterns).
- 15 procedures: Dynamic SQL using Oracle's EXECUTE IMMEDIATE with INTO clause and DBMS_SQL package.
- 11 procedures: Cursor variables (REF CURSOR) returned to the application layer.
- 5 procedures: AUTONOMOUS_TRANSACTION pragma for audit logging.
- 3 procedures: Calls to Oracle-specific packages (DBMS_OUTPUT, UTL_FILE, DBMS_LOB).
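Priya's triage can be approximated with a simple pattern scan over the PL/SQL sources. The sketch below (Python, with illustrative regexes rather than DCW's actual rules) buckets a procedure into the same five categories:

```python
import re

# Patterns for Oracle-specific constructs that typically need manual
# conversion. The regexes are illustrative heuristics, not DCW's rules.
ORACLE_PATTERNS = {
    "exception_handling": r"\bWHEN\s+OTHERS\b|\bRAISE_APPLICATION_ERROR\b",
    "dynamic_sql":        r"\bEXECUTE\s+IMMEDIATE\b.*\bINTO\b|\bDBMS_SQL\b",
    "ref_cursor":         r"\bSYS_REFCURSOR\b|\bREF\s+CURSOR\b",
    "autonomous_txn":     r"\bPRAGMA\s+AUTONOMOUS_TRANSACTION\b",
    "oracle_packages":    r"\bDBMS_OUTPUT\b|\bUTL_FILE\b|\bDBMS_LOB\b",
}

def categorize(plsql_source: str) -> set[str]:
    """Return the set of manual-conversion categories a procedure hits."""
    src = plsql_source.upper()
    return {name for name, pat in ORACLE_PATTERNS.items()
            if re.search(pat, src, re.DOTALL)}
```

Running a scan like this across all 180 procedures gives a first-cut work breakdown before anyone opens a single file.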
The Oracle DATE Problem
Tom identified what would become the migration's most insidious issue during the assessment phase. The Oracle schema used the DATE data type in 287 columns across 156 tables. Oracle's DATE stores date and time (to the second). A naive mapping to DB2 DATE would lose all time information.
"We have 287 columns to convert from Oracle DATE to DB2 TIMESTAMP," Tom told the team. "And every application query that references these columns needs to be checked. Some queries use DATE arithmetic that works differently in DB2."
Specific examples:
-- Oracle: DATE arithmetic returns a number (days)
SELECT SYSDATE - claim_date AS days_since_claim FROM claims;
-- DB2: TIMESTAMP arithmetic requires explicit function
SELECT DAYS(CURRENT DATE) - DAYS(claim_date) AS days_since_claim FROM claims;
-- Oracle: Adding days to a date
SELECT claim_date + 30 FROM claims;
-- DB2: Adding days to a timestamp
SELECT claim_date + 30 DAYS FROM claims;
Tom cataloged 142 distinct SQL statements that used Oracle DATE arithmetic. Every one needed manual review and conversion.
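A first-pass inventory like Tom's 142-statement catalog can be built with a pattern scan. The sketch below is a hedged Python heuristic; the patterns and the `_date` column-naming convention are assumptions, not the team's actual tooling:

```python
import re

# Heuristic patterns for Oracle DATE arithmetic that will not survive a
# DATE -> TIMESTAMP conversion unchanged. Column naming is illustrative.
DATE_ARITH = [
    (re.compile(r"\bSYSDATE\s*-\s*\w+", re.I),
     "date difference: rewrite as DAYS(x) - DAYS(y)"),
    (re.compile(r"\b\w+_date\s*[+-]\s*\d+\b", re.I),
     "day offset: rewrite as col + n DAYS"),
]

def flag_date_arithmetic(sql: str) -> list[str]:
    """Return conversion notes for Oracle DATE arithmetic in one statement."""
    return [note for pat, note in DATE_ARITH if pat.search(sql)]
```

Each flagged statement still needs a human review, as Tom did; the scan only narrows the search space.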
Month 2: Schema Conversion
The DCW-generated DDL was the starting point, but Tom made several manual adjustments:
Tablespace strategy: Oracle's tablespace model differs from DB2's. Tom designed a new tablespace layout optimized for DB2's buffer pool architecture, grouping tables by access pattern rather than mirroring the Oracle layout.
Data type decisions: Beyond the DATE-to-TIMESTAMP mapping, Tom made these choices:
| Oracle Column | Oracle Type | DB2 Type | Rationale |
|---|---|---|---|
| POLICY_AMOUNT | NUMBER(12,2) | DECIMAL(12,2) | Direct mapping |
| COVERAGE_LIMIT | NUMBER | DECFLOAT | NUMBER with no precision/scale is variable; DECFLOAT preserves the range |
| STATE_CODE | CHAR(2) | CHAR(2) | Direct mapping |
| DESCRIPTION | VARCHAR2(4000) | VARCHAR(4000) | Direct mapping |
| POLICY_DOCUMENT | BLOB | BLOB | Direct mapping |
| CLAIM_STATUS | VARCHAR2(20) | VARCHAR(20) | Direct mapping |
Index conversion: 672 of 680 indexes converted automatically. The 8 that needed manual work were function-based indexes, which DB2 LUW supports (as expression-based indexes) only from 11.1 onward, so each one was reviewed by hand:
-- Oracle: Function-based index
CREATE INDEX idx_upper_name ON customer(UPPER(last_name));
-- DB2: Expression-based index (LUW 11.1+)
CREATE INDEX idx_upper_name ON customer(UPPER(last_name));
-- Supported on DB2 LUW — but Tom verified each expression was compatible
Month 3: Stored Procedure Conversion
This was the most labor-intensive month. Priya and Wei worked side by side, converting the 62 manual-intervention procedures.
Exception handling conversion (28 procedures):
The most common pattern was Oracle's WHEN OTHERS clause:
-- Oracle
EXCEPTION
WHEN NO_DATA_FOUND THEN
v_result := 'NOT_FOUND';
WHEN OTHERS THEN
v_error_msg := SQLERRM;
INSERT INTO error_log VALUES (SYSDATE, 'proc_name', v_error_msg);
RAISE;
-- DB2
DECLARE CONTINUE HANDLER FOR NOT FOUND
SET v_result = 'NOT_FOUND';
DECLARE EXIT HANDLER FOR SQLEXCEPTION
BEGIN
GET DIAGNOSTICS EXCEPTION 1 v_error_msg = MESSAGE_TEXT;
INSERT INTO error_log VALUES (CURRENT TIMESTAMP, 'proc_name', v_error_msg);
RESIGNAL;
END;
The semantic difference was subtle but important: Oracle's EXCEPTION block is evaluated top-to-bottom at the point of the exception. DB2's DECLARE HANDLER is declared at the beginning of the block and executes based on the handler type (CONTINUE vs. EXIT). The execution flow is different, and several procedures needed structural reorganization, not just syntax changes.
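The CONTINUE-versus-EXIT flow difference can be modeled outside SQL. The Python sketch below is an analogy only: Oracle's EXCEPTION block behaves like one try/except wrapped around the whole body, while a DB2 CONTINUE handler resumes at the statement after the failure:

```python
def run_oracle_style(statements, handler):
    """Oracle model: the first exception jumps to the EXCEPTION block;
    the remainder of the body is skipped."""
    executed = []
    try:
        for stmt in statements:
            executed.append(stmt())
    except Exception as exc:
        handler(exc)
    return executed

def run_db2_continue_style(statements, handler):
    """DB2 CONTINUE handler model: the handler runs, then execution
    resumes at the statement AFTER the one that failed."""
    executed = []
    for stmt in statements:
        try:
            executed.append(stmt())
        except Exception as exc:
            handler(exc)
    return executed
```

With a body of three statements where the second fails, the Oracle model executes only the first, while the DB2 CONTINUE model executes the first and third. That gap is exactly why several procedures needed structural reorganization.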
The AUTONOMOUS_TRANSACTION problem (5 procedures):
Five Oracle procedures used PRAGMA AUTONOMOUS_TRANSACTION to write audit log entries in a separate transaction that would commit even if the parent transaction rolled back. DB2 has no direct equivalent.
Wei's solution: rewrite the audit logging to use a separate connection through a loopback alias. The audit procedure connects back to the same database through a TCP/IP alias, creating an independent transaction scope:
-- DB2: Simulate autonomous transaction via loopback
-- In the db2 catalog:
-- CATALOG TCPIP NODE loopback REMOTE localhost SERVER 50000
-- CATALOG DATABASE COASTDB AS COASTLB AT NODE loopback
-- In the procedure:
CONNECT TO COASTLB USER audit_user USING audit_pwd;
INSERT INTO AUDIT_LOG VALUES (...);
-- No database qualifier needed: the CONNECT above already targets COASTLB
COMMIT;
DISCONNECT COASTLB;
This was functional but inelegant, and it introduced a dependency on the loopback connection being available. Wei added error handling to fall back to a log file if the loopback connection failed.
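The key property of the loopback trick, a second connection whose COMMIT is independent of the parent transaction, can be demonstrated with any database. The sketch below uses SQLite files as a stand-in (none of this is DB2 syntax, and the table names are illustrative):

```python
import os
import sqlite3
import tempfile

# Two databases stand in for the two connection scopes: the parent
# transaction and the "autonomous" audit writer.
workdir = tempfile.mkdtemp()
main = sqlite3.connect(os.path.join(workdir, "coastdb.sqlite"))
audit = sqlite3.connect(os.path.join(workdir, "audit.sqlite"))

main.execute("CREATE TABLE payments (id INTEGER, amount REAL)")
main.commit()
audit.execute("CREATE TABLE audit_log (note TEXT)")
audit.commit()

# Parent transaction: insert a payment, but do NOT commit yet.
main.execute("INSERT INTO payments VALUES (1, 250.0)")

# Audit write on the independent connection, committed immediately.
audit.execute("INSERT INTO audit_log VALUES ('payment 1 attempted')")
audit.commit()

# Parent rolls back -- the audit entry survives regardless.
main.rollback()

payments = main.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
audit_rows = audit.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0]
```

The audit row persists even though the payment was rolled back, which is the behavior the five Oracle procedures relied on.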
REF CURSOR conversion (11 procedures):
Oracle REF CURSORs were converted to DB2 result set cursors:
-- Oracle
CREATE PROCEDURE get_claims(p_policy_id IN NUMBER, p_cursor OUT SYS_REFCURSOR)
AS BEGIN
OPEN p_cursor FOR SELECT * FROM claims WHERE policy_id = p_policy_id;
END;
-- DB2
CREATE PROCEDURE get_claims(IN p_policy_id BIGINT)
DYNAMIC RESULT SETS 1
LANGUAGE SQL
BEGIN
DECLARE c1 CURSOR WITH RETURN TO CALLER FOR
SELECT * FROM claims WHERE policy_id = p_policy_id;
OPEN c1;
END;
The DB2 approach is semantically different — the cursor is declared inside the procedure and remains open when the procedure returns, rather than being passed as a parameter. The Java application layer used JDBC's CallableStatement, which required changes to how it retrieved result sets from stored procedure calls.
Month 4: Data Migration
Tom designed a phased data migration:
Phase 1: Reference data (50 small tables, < 1 million rows total). Exported from Oracle using SQL Developer, imported into DB2 using LOAD. Completed in 2 hours.
Phase 2: Master data (customer, policy, agent tables — 80 tables, 200 GB). Tom used Oracle Data Pump to export, converted the dump to delimited files using a custom script, then used DB2 LOAD with NONRECOVERABLE option. Completed in 8 hours with 4-way parallelism.
Phase 3: Transaction data (claims, payments, audit logs — 290 tables, 2.6 TB). This required 3 days of continuous LOAD operations, running 8 streams in parallel.
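One way to plan the 8 parallel streams is to assign tables, largest first, to whichever stream currently holds the least data: the classic longest-processing-time scheduling heuristic. A sketch, with made-up table names and sizes:

```python
import heapq

def plan_streams(table_sizes: dict[str, float], streams: int) -> list[list[str]]:
    """Greedy largest-first assignment of tables to parallel LOAD streams,
    balancing total GB per stream (LPT scheduling heuristic)."""
    # Min-heap of (current_total_gb, stream_index): always load the lightest.
    heap = [(0.0, i) for i in range(streams)]
    heapq.heapify(heap)
    plan = [[] for _ in range(streams)]
    for table, size in sorted(table_sizes.items(), key=lambda kv: -kv[1]):
        total, idx = heapq.heappop(heap)
        plan[idx].append(table)
        heapq.heappush(heap, (total + size, idx))
    return plan
```

The real schedule also has to respect referential-integrity ordering and LOAD's tablespace locking, so a size-balanced plan is only the starting point.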
Data validation: After each phase, James ran validation queries comparing Oracle and DB2:
-- Row count comparison (run per table on both databases, compare results)
SELECT COUNT(*) FROM policies; -- repeated for each of the 420 tables
-- Checksum comparison for financial columns
SELECT SUM(CAST(policy_amount AS DECIMAL(31,2))) FROM policies; -- Compare between Oracle and DB2
-- Spot-check: Random sample of 1000 rows by primary key
-- Compare all column values between systems
Phase 2 revealed the first data issue: 3,400 rows in the CUSTOMER table carried timestamps the Oracle application had written with SYSTIMESTAMP into TIMESTAMP columns, but the migration scripts treated those columns as DATE, truncating the fractional seconds on the way into DB2 TIMESTAMP. The fix was to re-extract those rows with explicit TIMESTAMP conversion.
Month 5: Testing
James designed a four-level testing strategy:
Level 1 — Unit tests (2 weeks): Every stored procedure tested individually. 180 procedures, approximately 8 test cases per procedure = 1,440 test cases. Pass rate after first run: 91%. The 129 failures fell into three groups: date arithmetic issues (52), exception handling behavior differences (38), and numeric precision differences (39).
Level 2 — Integration tests (2 weeks): End-to-end policy lifecycle tests — create policy, endorse, file claim, process payment, renew. 120 integration scenarios. Pass rate: 88%. Failures were concentrated in the claims processing workflow, where a complex PL/SQL package with 12 procedures had subtle interaction effects.
Level 3 — Performance tests (1 week): The top 50 queries by execution frequency were benchmarked on both Oracle and DB2. Results:
- 38 queries: DB2 faster (average 15% improvement)
- 9 queries: Comparable (within 5%)
- 3 queries: DB2 slower (2 queries 20% slower, 1 query 45% slower)
The 45%-slower query was a complex join across 7 tables that Oracle's optimizer handled differently. Tom added a DB2 optimization hint and a supporting index, bringing it within 5% of Oracle's time.
Level 4 — Parallel run (2 weeks): Both Oracle and DB2 ran simultaneously, processing the same transactions. A comparison job ran nightly, verifying that both databases contained identical data. This was the most expensive phase but the most confidence-building.
The parallel run uncovered one critical issue: a trigger on the PAYMENT table used Oracle's :OLD and :NEW pseudo-records with slightly different null-handling behavior than DB2's REFERENCING OLD and NEW. The trigger produced different AUDIT_LOG entries for payments where the old value was NULL. Fix: add explicit NULL checks in the DB2 trigger.
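The underlying bug class is reproducible in any SQL dialect: OLD.x <> NEW.x evaluates to UNKNOWN when OLD.x is NULL, so a NULL-to-value change writes no audit row. A SQLite stand-in (not the DB2 trigger itself; table and trigger names are illustrative) shows the buggy and fixed predicates side by side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payment (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE audit_log (payment_id INTEGER, note TEXT);

-- Buggy form: OLD.amount <> NEW.amount is UNKNOWN when OLD.amount IS NULL,
-- so NULL -> value changes are silently skipped.
CREATE TRIGGER trg_buggy AFTER UPDATE ON payment
WHEN OLD.amount <> NEW.amount
BEGIN
  INSERT INTO audit_log VALUES (NEW.id, 'buggy');
END;

-- Fixed form: explicit NULL checks catch NULL transitions in both directions.
CREATE TRIGGER trg_fixed AFTER UPDATE ON payment
WHEN (OLD.amount <> NEW.amount)
  OR (OLD.amount IS NULL AND NEW.amount IS NOT NULL)
  OR (OLD.amount IS NOT NULL AND NEW.amount IS NULL)
BEGIN
  INSERT INTO audit_log VALUES (NEW.id, 'fixed');
END;

INSERT INTO payment VALUES (1, NULL);
UPDATE payment SET amount = 99.0 WHERE id = 1;
""")
notes = [row[0] for row in conn.execute("SELECT note FROM audit_log")]
```

After the update, only the fixed trigger has written an audit row; the buggy predicate never fired. Unit tests with non-NULL fixtures cannot distinguish the two.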
Month 6: Cutover
Coastal chose a Friday-to-Sunday cutover window:
- Friday 6:00 PM: Stop all application writes to Oracle.
- Friday 6:30 PM: Final incremental data sync (changes since last parallel-run sync).
- Friday 8:00 PM: Validation queries confirm Oracle and DB2 are identical.
- Friday 9:00 PM: Application configuration switched to DB2 connection strings.
- Friday 9:15 PM: Smoke tests (10 critical transactions) run against DB2.
- Friday 9:30 PM: Applications opened to internal users only.
- Saturday 8:00 AM: Internal users confirm normal operations.
- Saturday 12:00 PM: Applications opened to all users.
- Sunday: Monitoring, with Oracle kept on standby.
- Monday: Oracle moved to read-only archival mode.
The cutover surprise: At Friday 9:15 PM, the first smoke test — creating a new policy — failed. The error was in a sequence value: the DB2 identity column for POLICY_ID started at 1, but the migrated data had POLICY_IDs up to 4,800,000. The identity column's internal counter had not been adjusted after the LOAD.
Tom fixed it in 3 minutes:
ALTER TABLE POLICY ALTER COLUMN POLICY_ID
RESTART WITH 4800001;
This was a known issue in the migration checklist — but it had been missed during the cutover execution. Post-mortem action: add a validation step that checks every identity column's current value against the MAX value in the table.
The Numbers
| Metric | Value |
|---|---|
| Migration duration (total project) | 6 months |
| Migration team size | 4 dedicated + 6 part-time |
| Total project cost | $1.15 million |
| Objects migrated | 1,472 (tables, indexes, views, procedures, triggers, sequences) |
| Auto-converted by DCW | 1,358 (92.3%) |
| Manual conversion items | 114 |
| Data migrated | 2.8 TB |
| Test cases executed | 3,280 |
| Defects found during testing | 147 |
| Defects found after cutover | 3 (all minor, fixed within 24 hours) |
| Cutover downtime | 3 hours 30 minutes (Friday 6:00 PM to 9:30 PM) |
| Annual licensing savings | $920,000 |
| Break-even point | 15 months post-cutover |
Lessons Learned
- Oracle DATE is the migration's hidden landmine. The 287 DATE-to-TIMESTAMP conversions touched every part of the system. Budget 20% of your stored procedure conversion time for date-handling issues alone.
- DCW is a starting point, not a finishing line. The 92.3% auto-conversion rate sounds high, but the remaining 7.7% (114 items) consumed 60% of the manual effort. The hard items are disproportionately hard.
- Exception handling is a semantic conversion, not a syntactic one. You cannot mechanically translate EXCEPTION blocks to DECLARE HANDLER. The execution flow is fundamentally different. Each procedure with complex exception handling needs to be re-thought, not just re-typed.
- Parallel run catches issues that unit tests miss. The trigger null-handling difference was invisible to unit tests (which tested with non-NULL values) and only appeared during the parallel run when real data included NULLs.
- Identity column restart is a trivial fix but a catastrophic miss. Always validate that identity columns and sequences are positioned correctly after data migration. Add this to your cutover checklist in bold, underlined, and highlighted.
- Budget for the unexpected 10%. Coastal's original estimate was $1.0 million. The actual cost was $1.15 million — 15% over budget. The overruns were in stored procedure conversion (more complex than estimated) and extended testing (the parallel run was originally planned for 1 week but extended to 2). In Wei's experience, 10-20% overrun is typical for Oracle-to-DB2 migrations.
This case study illustrates Section 16.9 (Oracle to DB2 migration), including SQL dialect differences, data type mapping, stored procedure translation, and testing strategies.