Chapter 16: Schema Evolution and Data Migration — ALTER TABLE, Online Schema Changes, and Zero-Downtime Migrations

Your schema will change. Business requirements evolve, regulations shift, and that column you were sure you'd never need becomes critical. The question is not whether your schema will change, but whether you can change it without waking up the on-call team. This chapter is about the discipline of change — how to modify live database schemas safely, how to migrate data between systems without losing a single row, and how to do all of it while the business keeps running. If Chapter 13 taught you to design a schema, and Chapters 14 and 15 taught you to implement and index it, this chapter teaches you to evolve it through years of production life.


Learning Objectives

After completing this chapter, you will be able to:

  1. Plan and execute schema changes in production DB2 systems with confidence and repeatability.
  2. Use ALTER TABLE safely for common modifications including column additions, data type widening, and constraint changes.
  3. Understand REORG PENDING and advisory states on z/OS, and respond to them appropriately.
  4. Implement online schema changes with minimal or zero downtime using proven strategies.
  5. Plan and execute data migrations between DB2 systems using platform-appropriate tools.
  6. Manage cross-platform migrations (Oracle to DB2) and version-to-version upgrades with fallback planning.

16.1 The Reality of Schema Change

If you have spent any time administering a production database, you already know: the schema you deploy on day one is not the schema you will be running five years later. It is not even the schema you will be running five months later.

Schema change is not a failure of design. It is a natural consequence of living systems. The business discovers new requirements. Regulators mandate new data retention fields. A vendor integration requires columns that nobody anticipated. Performance analysis reveals that a table partitioning strategy needs to evolve. A column that was VARCHAR(50) in 2018 needs to be VARCHAR(200) in 2025 because customer names from certain markets exceed the original length.

How Often Do Schemas Change?

In a typical enterprise environment running DB2, you can expect:

  • Monthly: Minor changes — adding nullable columns, creating new indexes, adjusting VARCHAR lengths.
  • Quarterly: Moderate changes — adding new tables, modifying constraint relationships, introducing new reference data.
  • Annually: Major changes — restructuring table hierarchies, introducing partitioning where none existed, splitting or merging tables, migrating to new DB2 versions.

At a large financial institution like our Meridian National Bank, the DDL change log might show 200-400 individual ALTER statements per year across development, test, and production environments. That is not chaos — that is a healthy, evolving system.

The Cost of Schema Rigidity

The opposite of schema evolution is schema rigidity, and it is far more dangerous than change. When teams are afraid to modify the schema, they resort to workarounds:

  • Column overloading: Repurposing an existing VARCHAR column to hold data it was never designed for. "We can just put the new regulatory code in the NOTES field."
  • Shadow tables: Creating unofficial parallel tables that duplicate data from the real tables, with no referential integrity between them.
  • Application-layer encoding: Storing structured data as delimited strings or JSON blobs in VARCHAR columns, bypassing the relational model entirely.
  • EAV (Entity-Attribute-Value) patterns: Creating a generic "properties" table with columns like ENTITY_ID, ATTRIBUTE_NAME, ATTRIBUTE_VALUE — destroying type safety, query performance, and referential integrity in one stroke.

Every one of these workarounds trades short-term convenience for long-term technical debt. The NorthStar case study in Chapter 2 showed how JSON blobs in a TEXT column led to a $2.3 million remediation project. Schema rigidity does not prevent change; it prevents managed change.

The Discipline of Schema Evolution

The goal of this chapter is not to make schema changes easy — some of them are inherently complex. The goal is to make them repeatable, reversible, and safe. That means:

  1. Every schema change is scripted. No one ever types ALTER TABLE directly into a production console. Changes are written as SQL scripts, reviewed, tested in lower environments, and promoted through a release pipeline.
  2. Every schema change has a rollback plan. Before you run the forward script, you write (and test) the rollback script. What will you do if the change causes problems?
  3. Every schema change is assessed for impact. Will it require a REORG? Will it invalidate packages? Will it lock the table? How long will it take?
  4. Every schema change is coordinated with application releases. Schema changes and application code changes must be deployed in the correct order, which sometimes means the schema change goes first (adding a new column) and sometimes means the application change goes first (stopping use of a column before dropping it).

This is the practice of schema evolution, and it is what separates a DBA who sleeps through the night from one who gets paged at 3 AM.
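A concrete illustration: a scripted change ships as a forward/rollback pair (the file names and numbering here are hypothetical, to show the shape of the practice):

-- V0412__add_regulatory_flag.sql (forward script — hypothetical name)
ALTER TABLE MERIDIAN.ACCOUNT
    ADD COLUMN REGULATORY_FLAG CHAR(1) DEFAULT NULL;

-- V0412__rollback.sql (written and tested BEFORE the forward script runs)
ALTER TABLE MERIDIAN.ACCOUNT
    DROP COLUMN REGULATORY_FLAG;
-- Note: on z/OS, DROP COLUMN itself places the tablespace in
-- REORG PENDING, so this rollback plan must also schedule a REORG.

Both scripts go through the same review and promotion pipeline; neither is ever typed directly into a production console.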


16.2 ALTER TABLE — Safe Changes

The ALTER TABLE statement is the primary tool for schema evolution in DB2. It can do many things, but not all of them are equally safe or equally online. Understanding which changes are "instant" and which require reorganization is essential knowledge for any DB2 practitioner.

16.2.1 Adding a Nullable Column

The most common and safest schema change in DB2 is adding a new nullable column to an existing table:

-- z/OS and LUW
ALTER TABLE MERIDIAN.ACCOUNT
    ADD COLUMN REGULATORY_FLAG CHAR(1) DEFAULT NULL;

On both platforms, this operation is essentially instantaneous regardless of table size. DB2 records the new column definition in the catalog, but it does not touch any existing data pages. Existing rows are treated as having NULL in the new column until they are updated. This is sometimes called a "logical add" — the column exists logically but does not consume physical space in existing rows until those rows are rewritten.

Key rules for safe column addition:

  • The column should allow NULLs (no NOT NULL constraint) or have a DEFAULT value.
  • Adding a column with NOT NULL and no DEFAULT will fail on LUW if the table contains data. On z/OS, the behavior depends on the DB2 version and compatibility settings.
  • The column is added at the end of the row — you cannot specify column position in DB2.
  • On z/OS, the tablespace enters ADVISORY REORG (AREOR) status, meaning DB2 recommends but does not require a REORG before the new column is physically materialized in existing rows.

-- Adding a column with a default value (safe on both platforms)
ALTER TABLE MERIDIAN.CUSTOMER
    ADD COLUMN PREFERRED_LANGUAGE VARCHAR(10) DEFAULT 'EN';

-- Adding a column with NOT NULL and DEFAULT (DB2 12 for z/OS, LUW 11.1+)
ALTER TABLE MERIDIAN.CUSTOMER
    ADD COLUMN COUNTRY_CODE CHAR(3) NOT NULL DEFAULT 'USA';

The second form — NOT NULL with DEFAULT — is safe on modern DB2 versions because DB2 stores the default value in the catalog and applies it to existing rows without rewriting them. This is a significant improvement over older versions where NOT NULL columns required immediate materialization.
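You can confirm that the default was recorded in the catalog rather than materialized into rows. A sketch using the LUW catalog (the z/OS equivalent is SYSIBM.SYSCOLUMNS):

-- LUW: Inspect the column definition just added
SELECT COLNAME, TYPENAME, LENGTH, NULLS, DEFAULT
FROM SYSCAT.COLUMNS
WHERE TABSCHEMA = 'MERIDIAN' AND TABNAME = 'CUSTOMER'
  AND COLNAME = 'COUNTRY_CODE';

The DEFAULT column shows the catalog-stored value ('USA') that DB2 applies at read time to rows that predate the ALTER.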

16.2.2 Widening a Column

You can increase the length of a VARCHAR column without reorganizing data:

-- z/OS and LUW: Widen a VARCHAR column
ALTER TABLE MERIDIAN.CUSTOMER
    ALTER COLUMN LAST_NAME SET DATA TYPE VARCHAR(200);

What is safe to widen:

  • VARCHAR(n) to VARCHAR(m) where m > n — always safe, instant.
  • CHAR(n) to CHAR(m) where m > n — safe, but triggers REORG PENDING on z/OS because existing rows must be padded to the new length.
  • DECIMAL(p,s) to DECIMAL(p2,s) where p2 > p (same scale, wider precision) — safe on both platforms.
  • SMALLINT to INTEGER to BIGINT — safe, but on z/OS this places the tablespace in REORG PENDING because the physical width of the column changes.

What is NOT safe (or not allowed):

  • Narrowing a VARCHAR from VARCHAR(200) to VARCHAR(100) — not allowed if existing data exceeds the new length.
  • Changing the scale of a DECIMAL (e.g., DECIMAL(10,2) to DECIMAL(10,4)) — this changes data interpretation and is not a simple widen.
  • Changing data type family (e.g., INTEGER to VARCHAR) — not allowed via simple ALTER.

16.2.3 Adding and Dropping Constraints

Constraints can be added and dropped on live tables, but the implications vary:

-- Add a CHECK constraint
ALTER TABLE MERIDIAN.ACCOUNT
    ADD CONSTRAINT CK_ACCT_STATUS
    CHECK (STATUS IN ('ACTIVE', 'CLOSED', 'SUSPENDED', 'FROZEN'));

-- Add a FOREIGN KEY constraint
ALTER TABLE MERIDIAN.ACCOUNT
    ADD CONSTRAINT FK_ACCT_CUST
    FOREIGN KEY (CUSTOMER_ID) REFERENCES MERIDIAN.CUSTOMER(CUSTOMER_ID);

-- Drop a constraint
ALTER TABLE MERIDIAN.ACCOUNT
    DROP CONSTRAINT CK_ACCT_STATUS;

Adding a CHECK constraint: DB2 validates existing data against the new constraint. If any row violates it, the ALTER fails. On large tables, this validation can take time and acquire locks. On LUW, you can add the constraint as NOT ENFORCED — recorded in the catalog without validating existing rows — and later alter it to ENFORCED once the data is known to be clean. (On z/OS, NOT ENFORCED applies to referential constraints, as informational RI; check constraints are always enforced.)
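On LUW, the two-step approach looks like this (the constraint name and the CURRENCY_CODE column are hypothetical):

-- LUW: Add the constraint without validating existing rows
ALTER TABLE MERIDIAN.ACCOUNT
    ADD CONSTRAINT CK_ACCT_CCY
    CHECK (CURRENCY_CODE IN ('USD', 'EUR', 'GBP'))
    NOT ENFORCED;

-- Later, after cleaning up any violating rows, turn enforcement on
-- (DB2 validates existing data at this point)
ALTER TABLE MERIDIAN.ACCOUNT
    ALTER CHECK CK_ACCT_CCY ENFORCED;

This lets you take the validation cost at a time of your choosing rather than inside the original ALTER.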

Adding a FOREIGN KEY: DB2 checks that all existing values in the child column exist in the parent table. This requires a scan of the child table and index lookups into the parent table. On very large tables, this can be resource-intensive.

Dropping a constraint: Always instant. DB2 removes the constraint definition from the catalog. No data validation is needed.

16.2.4 Adding and Dropping a Primary Key

-- Drop existing primary key
ALTER TABLE MERIDIAN.ACCOUNT
    DROP PRIMARY KEY;

-- Add a new primary key
ALTER TABLE MERIDIAN.ACCOUNT
    ADD CONSTRAINT PK_ACCOUNT
    PRIMARY KEY (ACCOUNT_ID);

Dropping a primary key will also drop any foreign keys that reference it — a cascading effect that can be devastating if you are not careful. Always check for dependent foreign keys before dropping a primary key:

-- z/OS: Find foreign keys referencing a table
SELECT CREATOR, TBNAME, RELNAME, REFTBNAME
FROM SYSIBM.SYSRELS
WHERE REFTBCREATOR = 'MERIDIAN' AND REFTBNAME = 'ACCOUNT';

-- LUW: Find foreign keys referencing a table
SELECT TABSCHEMA, TABNAME, CONSTNAME, REFTABNAME
FROM SYSCAT.REFERENCES
WHERE REFTABSCHEMA = 'MERIDIAN' AND REFTABNAME = 'ACCOUNT';

16.2.5 What Requires a REORG?

Here is a summary table that every DBA should keep bookmarked:

Operation                        | z/OS Impact                 | LUW Impact
ADD COLUMN (nullable)            | ADVISORY REORG              | Instant, no REORG needed
ADD COLUMN (NOT NULL DEFAULT)    | ADVISORY REORG (DB2 12)     | Instant (DB2 11.1+)
ALTER COLUMN (widen VARCHAR)     | ADVISORY REORG              | Instant
ALTER COLUMN (widen CHAR)        | REORG PENDING               | REORG recommended
ALTER COLUMN (SMALLINT to INT)   | REORG PENDING               | REORG PENDING
ADD CHECK CONSTRAINT             | No REORG                    | No REORG
ADD FOREIGN KEY                  | No REORG                    | No REORG
DROP COLUMN                      | REORG PENDING (z/OS 12)     | Offline REORG required
ALTER COLUMN SET NOT NULL        | No REORG (validation only)  | No REORG (validation only)
ROTATE PARTITION                 | Instant                     | N/A (LUW uses DETACH)

Understanding this table saves you from the most common mistake in production schema management: running an ALTER that places a tablespace into REORG PENDING during business hours, effectively making the table unavailable until you run a REORG.


16.3 [z/OS] Pending States and REORG

On DB2 for z/OS, certain ALTER TABLE operations do not take immediate physical effect. Instead, they place the tablespace (or partition) into a pending state — a condition that must be resolved before certain operations can proceed. Understanding pending states is one of the most critical z/OS DBA skills.

16.3.1 The Pending States

There are several pending states, but the most important are:

REORG PENDING (REORP): The tablespace must be reorganized before data can be accessed. This is the most restrictive state. SQL statements against the object will receive SQLCODE -904 (resource unavailable). This state is triggered by changes that alter the physical row format — for example, changing a SMALLINT column to INTEGER, which changes the column from 2 bytes to 4 bytes in every row.

REBUILD PENDING (RBDP): An index must be rebuilt. The index is not usable until REBUILD INDEX is run. This commonly occurs after a LOAD REPLACE operation or certain partition-level operations.

ADVISORY REORG (AREOR): DB2 recommends a REORG but does not require one. Data is still fully accessible. This is the state you enter after adding a nullable column. DB2 can handle the discrepancy between old rows (without the new column) and new rows (with the new column) at read time, but a REORG would physically add the column to all rows and improve efficiency.

ADVISORY REBUILD (ARBDP): Similar to ADVISORY REORG but for indexes. DB2 recommends rebuilding an index, typically after statistics suggest significant structural degradation.

16.3.2 Checking Pending States

You check pending states with the DISPLAY DATABASE command:

-- z/OS: Show objects in restrictive states
-DISPLAY DATABASE(MERIDDB) SPACENAM(ACCTTS) RESTRICT

-- z/OS: Show objects in advisory states
-DISPLAY DATABASE(MERIDDB) SPACENAM(*) ADVISORY

The Real-Time Statistics tables (SYSIBM.SYSTABLESPACESTATS and SYSIBM.SYSINDEXSPACESTATS) complement this: they track the activity counters that drive REORG recommendations, and the DSNACCOX stored procedure evaluates them to report which objects would benefit from a REORG, REBUILD INDEX, or RUNSTATS.

The DISPLAY DATABASE command with the RESTRICT option shows which objects are in restrictive pending states. In a well-run production environment, you should have monitoring that alerts you whenever an object enters REORG PENDING or REBUILD PENDING status.

16.3.3 Strategies to Minimize Pending States

The best pending state is one that never happens. Here are strategies to minimize disruptive pending states:

Plan column types generously from the start. If you think a column might need to hold larger values in the future, start with the wider type. The storage cost of declaring VARCHAR(200) instead of VARCHAR(50) is zero until you actually store longer values — VARCHAR only consumes space for the data actually stored plus a 2-byte length prefix. INTEGER versus SMALLINT costs only 2 extra bytes per row, but changing from SMALLINT to INTEGER later triggers REORG PENDING.

Use ADD COLUMN with NULL default whenever possible. This triggers only ADVISORY REORG, not REORG PENDING. The table remains fully accessible.

Batch multiple ALTERs before a single REORG. If you have five ALTER TABLE statements that each trigger ADVISORY REORG, run all five, then run one REORG. There is no benefit to reorganizing between each ALTER.
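For example, three advisory-only column additions can share a single REORG (the column names here are hypothetical):

ALTER TABLE MERIDIAN.ACCOUNT ADD COLUMN RISK_SCORE SMALLINT;
ALTER TABLE MERIDIAN.ACCOUNT ADD COLUMN REVIEW_DATE DATE;
ALTER TABLE MERIDIAN.ACCOUNT ADD COLUMN CHANNEL_CODE CHAR(2);

-- One REORG materializes all three columns in existing rows
REORG TABLESPACE MERIDDB.ACCTTS SHRLEVEL REFERENCE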

Schedule REORGs in maintenance windows. When you know an ALTER will trigger REORG PENDING, plan the ALTER and the subsequent REORG as a pair. Run them together in a maintenance window, not separately.

Use online REORG where possible. DB2 12 for z/OS supports SHRLEVEL(CHANGE) REORG for many tablespace types, allowing read and write access during reorganization. This dramatically reduces the impact of REORG PENDING states.

-- z/OS: Online REORG with SHRLEVEL CHANGE
REORG TABLESPACE MERIDDB.ACCTTS
    SHRLEVEL CHANGE
    MAPPINGTABLE MERIDIAN.ACCT_MAPPING
    LOG YES
    DRAIN_WAIT 60
    RETRY 3
    RETRY_DELAY 30;

The SHRLEVEL(CHANGE) option requires a mapping table (which you must create in advance) and allows applications to continue reading and writing data during the REORG. At the end of the REORG, there is a brief drain period where DB2 acquires exclusive access to apply the final changes. The DRAIN_WAIT, RETRY, and RETRY_DELAY parameters control how DB2 handles contention during this drain phase.

16.3.4 Creating the Mapping Table

Before you can run an online REORG with SHRLEVEL(CHANGE), you need a mapping table:

-- z/OS: Create mapping table for online REORG
CREATE TABLE MERIDIAN.ACCT_MAPPING (
    TYPE       CHAR(1)     NOT NULL,
    SOURCE_RID CHAR(5)     NOT NULL,
    TARGET_RID CHAR(5)     NOT NULL,
    SOURCE_LEN SMALLINT    NOT NULL,
    TARGET_LEN SMALLINT    NOT NULL
) IN MERIDDB.MAPPINGTS;

The mapping table tracks where rows move during reorganization, allowing DB2 to redirect queries to the correct location even as data is being rearranged. Note that the required column layout is release-dependent — take the exact DDL from the REORG TABLESPACE documentation for your DB2 version. After the REORG completes, the mapping table can be dropped or retained for future use.


16.4 [LUW] Online Table Reorganization

On DB2 for LUW, the REORG TABLE command serves a similar purpose to z/OS REORG TABLESPACE, but the mechanics and options differ significantly.

16.4.1 Classic vs. Inplace REORG

LUW offers two REORG modes:

Classic (offline) REORG: Rebuilds the table from scratch. The table is unavailable for write operations during the process (reads may be allowed depending on the ALLOW option). This is the most thorough reorganization.

-- LUW: Classic REORG
REORG TABLE MERIDIAN.ACCOUNT
    ALLOW READ ACCESS;

Inplace (online) REORG: Reorganizes the table while it remains fully accessible for reads and writes. This is the LUW equivalent of z/OS SHRLEVEL(CHANGE) REORG.

-- LUW: Inplace REORG
REORG TABLE MERIDIAN.ACCOUNT INPLACE
    ALLOW WRITE ACCESS
    START;

The inplace REORG runs in the background and has three phases:

  1. Forward phase: Scans the table and reorganizes pages. Applications continue reading and writing normally. Changes made by applications are logged.
  2. Replay phase: Applies logged changes that occurred during the forward phase. This phase may repeat if the volume of changes is high.
  3. Truncate phase: Reclaims empty pages at the end of the table. Requires a brief exclusive lock.

16.4.2 Monitoring REORG Progress

On LUW, you can monitor an inplace REORG in real time:

-- LUW: Check REORG status via the snapshot administrative view
SELECT TABSCHEMA, TABNAME,
       REORG_PHASE, REORG_STATUS,
       REORG_CURRENT_COUNTER, REORG_MAX_COUNTER,
       REORG_COMPLETION
FROM SYSIBMADM.SNAPTAB_REORG
WHERE TABSCHEMA = 'MERIDIAN' AND TABNAME = 'ACCOUNT';

You can also pause and resume an inplace REORG:

-- Pause an inplace REORG
REORG TABLE MERIDIAN.ACCOUNT INPLACE PAUSE;

-- Resume a paused REORG
REORG TABLE MERIDIAN.ACCOUNT INPLACE RESUME;

-- Stop (cancel) an inplace REORG
REORG TABLE MERIDIAN.ACCOUNT INPLACE STOP;

The ability to pause a REORG is invaluable during unexpected peak load. If your monitoring shows that the REORG is consuming too many resources during a spike in transaction volume, you can pause it, let the spike pass, and resume.

16.4.3 When to REORG on LUW

LUW provides the REORGCHK command to assess whether a table needs reorganization:

-- LUW: Check if tables need REORG
REORGCHK UPDATE STATISTICS ON TABLE MERIDIAN.ACCOUNT;

This command runs RUNSTATS to refresh statistics, then evaluates the table against formula-based thresholds. The output shows an asterisk (*) next to any formula whose threshold is exceeded; an asterisk in any formula column means DB2 recommends a REORG on that dimension.

The table formulas evaluate:

  • F1: Overflow rows — rows that were updated and no longer fit on their original page.
  • F2: Effective space utilization — how much of the allocated space actually holds data.
  • F3: Empty pages — pages that contain no rows but still occupy space and inflate sequential scans.
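REORGCHK can also evaluate an entire schema in one pass, or reuse existing statistics when a fresh RUNSTATS is too expensive:

-- LUW: Check every table in a schema
REORGCHK UPDATE STATISTICS ON SCHEMA MERIDIAN;

-- LUW: Reuse existing catalog statistics (no RUNSTATS)
REORGCHK CURRENT STATISTICS ON TABLE MERIDIAN.ACCOUNT;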

Do not REORG on a schedule. REORG when the data tells you it is needed.


16.5 Online Schema Change Strategies

Sometimes ALTER TABLE alone is not sufficient. You need to make a change that ALTER TABLE does not support, or you need to transform data as part of the schema change, or the change is so fundamental that it requires a coordinated migration. For these situations, you need a strategy.

16.5.1 Expand-Then-Contract

The expand-then-contract pattern is the safest approach for complex schema changes. It works in three phases:

Phase 1 — Expand: Add the new structure alongside the old one. Both coexist.

-- Step 1: Add new column alongside old one
ALTER TABLE MERIDIAN.CUSTOMER
    ADD COLUMN EMAIL_ADDRESS_NEW VARCHAR(320);

-- Step 2: Populate the new column from the old one
UPDATE MERIDIAN.CUSTOMER
    SET EMAIL_ADDRESS_NEW = LOWER(TRIM(EMAIL_ADDRESS));
-- Run this in batches for large tables

Phase 2 — Migrate: Update the application to use the new column. During this phase, the application writes to both the old and new columns (dual-write). Reads shift to the new column.

Phase 3 — Contract: Once all applications have been updated and the old column is no longer referenced, drop it.

-- Step 3: After all applications are migrated
ALTER TABLE MERIDIAN.CUSTOMER
    DROP COLUMN EMAIL_ADDRESS;

-- Step 4: Rename the new column to the old name
ALTER TABLE MERIDIAN.CUSTOMER
    RENAME COLUMN EMAIL_ADDRESS_NEW TO EMAIL_ADDRESS;

RENAME COLUMN is available on LUW (9.7 and later) and on z/OS (DB2 10 and later), but z/OS rejects the rename if the column is referenced by a view, trigger, check constraint, or index expression. Where the rename is blocked, you can use a view to present the new column under the old name, or coordinate an application-side rename.
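A minimal sketch of the view workaround (the view name is hypothetical, and a real view would list every column applications expect):

-- Present EMAIL_ADDRESS_NEW to applications under the old name
CREATE VIEW MERIDIAN.V_CUSTOMER AS
SELECT CUSTOMER_ID,
       EMAIL_ADDRESS_NEW AS EMAIL_ADDRESS
FROM MERIDIAN.CUSTOMER;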

The expand-then-contract pattern is deliberately slow. It can take days or weeks, with each phase deployed independently. That slowness is its strength — at every point, you can stop, assess, and roll back if needed.

16.5.2 Dual-Write Pattern

The dual-write pattern is a variation that works well when you are moving data from one table to a completely different table with a new structure:

  1. Create the new table with the desired schema.
  2. Modify the application to write to both old and new tables on every INSERT, UPDATE, and DELETE.
  3. Backfill: Copy historical data from the old table to the new table.
  4. Verify: Confirm that the new table is consistent with the old table.
  5. Switch reads: Point read queries at the new table.
  6. Stop dual-write: Remove writes to the old table.
  7. Drop the old table.

This pattern requires application changes and is more complex than expand-then-contract, but it handles cases where the new schema is so different from the old one that ALTER TABLE cannot bridge the gap.
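Steps 3 and 4 might look like this, assuming a hypothetical MERIDIAN.ACCOUNT_V2 target keyed on ACCOUNT_ID and populated for only a few columns:

-- Step 3: Backfill rows not yet copied (run repeatedly in batches)
INSERT INTO MERIDIAN.ACCOUNT_V2 (ACCOUNT_ID, CUSTOMER_ID, BALANCE)
SELECT A.ACCOUNT_ID, A.CUSTOMER_ID, A.BALANCE
FROM MERIDIAN.ACCOUNT A
WHERE NOT EXISTS (SELECT 1 FROM MERIDIAN.ACCOUNT_V2 V
                  WHERE V.ACCOUNT_ID = A.ACCOUNT_ID)
FETCH FIRST 10000 ROWS ONLY;

-- Step 4: Verify — row counts must match before switching reads
SELECT (SELECT COUNT(*) FROM MERIDIAN.ACCOUNT)    AS OLD_ROWS,
       (SELECT COUNT(*) FROM MERIDIAN.ACCOUNT_V2) AS NEW_ROWS
FROM SYSIBM.SYSDUMMY1;

In practice, verification should go beyond counts — spot-check checksums or per-column aggregates on a sample of keys.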

16.5.3 Shadow Table Approach

The shadow table approach uses DB2's native replication or trigger-based mechanisms to keep a new table synchronized with the old one:

-- Create the new table with the desired structure
CREATE TABLE MERIDIAN.ACCOUNT_V2 (
    ACCOUNT_ID       BIGINT NOT NULL,
    CUSTOMER_ID      BIGINT NOT NULL,
    ACCOUNT_TYPE     SMALLINT NOT NULL,  -- Was CHAR(2), now normalized
    ACCOUNT_NUMBER   VARCHAR(34) NOT NULL, -- Widened for IBAN
    BALANCE          DECIMAL(17,2) NOT NULL,
    CURRENCY_CODE    CHAR(3) NOT NULL DEFAULT 'USD',
    OPENED_DATE      DATE NOT NULL,
    CLOSED_DATE      DATE,
    STATUS           SMALLINT NOT NULL DEFAULT 1,
    LAST_ACTIVITY_TS TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
    PRIMARY KEY (ACCOUNT_ID)
);

-- Create a trigger to keep the shadow table in sync
CREATE TRIGGER MERIDIAN.TRG_ACCT_SHADOW_INS
AFTER INSERT ON MERIDIAN.ACCOUNT
REFERENCING NEW AS N
FOR EACH ROW
INSERT INTO MERIDIAN.ACCOUNT_V2
VALUES (N.ACCOUNT_ID, N.CUSTOMER_ID,
        CASE N.ACCOUNT_TYPE_CODE
            WHEN 'CK' THEN 1 WHEN 'SV' THEN 2
            WHEN 'MM' THEN 3 WHEN 'CD' THEN 4
            ELSE 99
        END,
        N.ACCOUNT_NUMBER, N.BALANCE, 'USD',
        N.OPENED_DATE, N.CLOSED_DATE,
        CASE N.STATUS WHEN 'ACTIVE' THEN 1 WHEN 'CLOSED' THEN 2
            ELSE 0 END,
        CURRENT TIMESTAMP);

The shadow table approach adds overhead (every write to the original table triggers a write to the shadow table), but it keeps the two tables automatically synchronized without application changes.
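Note that the INSERT trigger above is only one third of the synchronization — UPDATE and DELETE triggers are needed as well. A sketch of the DELETE side, under the same assumed table names:

-- Keep the shadow table in sync on deletes
CREATE TRIGGER MERIDIAN.TRG_ACCT_SHADOW_DEL
AFTER DELETE ON MERIDIAN.ACCOUNT
REFERENCING OLD AS O
FOR EACH ROW
DELETE FROM MERIDIAN.ACCOUNT_V2
WHERE ACCOUNT_ID = O.ACCOUNT_ID;

The UPDATE trigger follows the same shape, re-applying the column mappings from the INSERT trigger to the changed row.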

16.5.4 View-Based Abstraction

A more elegant but longer-term approach is to introduce a view layer between applications and tables:

-- Create a view that abstracts the physical table
CREATE VIEW MERIDIAN.V_ACCOUNT AS
SELECT ACCOUNT_ID, CUSTOMER_ID, ACCOUNT_TYPE_CODE,
       ACCOUNT_NUMBER, BALANCE, OPENED_DATE, STATUS
FROM MERIDIAN.ACCOUNT;

If all applications access data through views, you can change the underlying table structure and update the view definition without changing application code. This is the ultimate expression of physical data independence.

In practice, view-based abstraction works best when planned from the beginning. Retrofitting it onto an existing system where applications query tables directly requires a coordinated application migration to the view layer.


16.6 Column Type Changes

Changing a column's data type is one of the most delicate schema evolution operations. Some changes are straightforward; others require careful multi-step migrations.

16.6.1 Widening vs. Narrowing

Widening (making a column hold more data) is generally safe:

-- Safe widenings
ALTER TABLE MERIDIAN.CUSTOMER
    ALTER COLUMN MIDDLE_NAME SET DATA TYPE VARCHAR(100);  -- Was VARCHAR(50)

ALTER TABLE MERIDIAN.TRANSACTION
    ALTER COLUMN AMOUNT SET DATA TYPE DECIMAL(15,2);  -- Was DECIMAL(10,2)

Narrowing (making a column hold less data) is dangerous because existing data might not fit:

-- DANGEROUS: Will fail if any LAST_NAME exceeds 50 characters
ALTER TABLE MERIDIAN.CUSTOMER
    ALTER COLUMN LAST_NAME SET DATA TYPE VARCHAR(50);  -- Was VARCHAR(200)

Before narrowing, you must verify that no existing data exceeds the new limit:

-- Check before narrowing
SELECT COUNT(*) AS VIOLATIONS, MAX(LENGTH(LAST_NAME)) AS MAX_LEN
FROM MERIDIAN.CUSTOMER
WHERE LENGTH(LAST_NAME) > 50;

16.6.2 Adding NOT NULL to an Existing Column

Adding a NOT NULL constraint to a column that currently allows NULLs requires that no existing rows contain NULL in that column:

-- Step 1: Check for NULLs
SELECT COUNT(*) AS NULL_COUNT
FROM MERIDIAN.CUSTOMER
WHERE PHONE_NUMBER IS NULL;

-- Step 2: Fix NULL values (if any exist)
UPDATE MERIDIAN.CUSTOMER
SET PHONE_NUMBER = 'UNKNOWN'
WHERE PHONE_NUMBER IS NULL;

-- Step 3: Add the NOT NULL constraint
ALTER TABLE MERIDIAN.CUSTOMER
    ALTER COLUMN PHONE_NUMBER SET NOT NULL;

On large tables, Step 2 can be a massive UPDATE that locks millions of rows. Run it in batches:

-- Batch update on LUW (anonymous block; run from the CLP with an
-- alternate statement terminator, e.g. db2 -td@)
BEGIN
    DECLARE v_count INTEGER DEFAULT 1;
    WHILE v_count > 0 DO
        -- Update at most 10,000 rows per iteration via a fullselect
        UPDATE (SELECT PHONE_NUMBER
                FROM MERIDIAN.CUSTOMER
                WHERE PHONE_NUMBER IS NULL
                FETCH FIRST 10000 ROWS ONLY)
            SET PHONE_NUMBER = 'UNKNOWN';
        GET DIAGNOSTICS v_count = ROW_COUNT;
        COMMIT;
    END WHILE;
END

On z/OS, use a cursor-driven UPDATE with commit frequency control, or use an application program that commits every N rows.

16.6.3 Changing CHAR to VARCHAR

Converting a fixed-length CHAR column to variable-length VARCHAR is a common evolution. DB2 does not support this as a direct ALTER. You must use the expand-then-contract pattern:

-- Step 1: Add new VARCHAR column
ALTER TABLE MERIDIAN.BRANCH
    ADD COLUMN BRANCH_CODE_NEW VARCHAR(10);

-- Step 2: Populate from old column (TRIM removes CHAR padding)
UPDATE MERIDIAN.BRANCH
SET BRANCH_CODE_NEW = TRIM(TRAILING ' ' FROM BRANCH_CODE);

-- Step 3: Migrate applications to use BRANCH_CODE_NEW

-- Step 4: Drop old column (after application migration is complete)
ALTER TABLE MERIDIAN.BRANCH
    DROP COLUMN BRANCH_CODE;

The reverse — VARCHAR to CHAR — requires the same pattern, with an additional check that all values fit within the CHAR length (including padding).

16.6.4 INTEGER to BIGINT Migration

Converting INTEGER to BIGINT is necessary when a key approaches the 2,147,483,647 ceiling of a 4-byte signed integer. This is a change you want to make proactively, not when your sequence generator is about to overflow.
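A simple headroom check tells you how urgent the migration is:

-- How close is the key to the 4-byte INTEGER ceiling?
SELECT MAX(TRANSACTION_ID) AS CURRENT_MAX,
       2147483647 - MAX(TRANSACTION_ID) AS REMAINING
FROM MERIDIAN.TRANSACTION;

Track REMAINING against your daily insert rate; when the projected exhaustion date is inside your next planning horizon, schedule the migration.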

On LUW (DB2 11.1+), you can do this directly:

-- LUW: Direct widening (triggers REORG)
ALTER TABLE MERIDIAN.TRANSACTION
    ALTER COLUMN TRANSACTION_ID SET DATA TYPE BIGINT;
REORG TABLE MERIDIAN.TRANSACTION;

On z/OS, this places the tablespace in REORG PENDING because the physical column width changes from 4 bytes to 8 bytes:

-- z/OS: Same ALTER, but REORG is mandatory
ALTER TABLE MERIDIAN.TRANSACTION
    ALTER COLUMN TRANSACTION_ID SET DATA TYPE BIGINT;
-- Tablespace enters REORG PENDING — must REORG before table is accessible
REORG TABLESPACE MERIDDB.TRANSTS;

For very large tables (hundreds of millions of rows), the REORG after an INTEGER to BIGINT change can take hours. Plan accordingly.


16.7 Adding and Dropping Tables in Production

Adding a new table to a production database seems simple — just run CREATE TABLE. But in an enterprise environment, even this "simple" operation requires planning.

16.7.1 Adding Tables — Dependency Analysis

Before creating a new table, consider:

  • Tablespace placement: On z/OS, which database and tablespace will hold the table? On LUW, which tablespace and buffer pool? Does the tablespace exist, or does it need to be created?
  • Storage estimation: How much data will this table hold initially? How fast will it grow? Is the allocated storage sufficient?
  • Index requirements: What indexes will the table need? Create them at the same time as the table, not as an afterthought.
  • Authority: Who needs SELECT, INSERT, UPDATE, DELETE access? Grant privileges explicitly; do not rely on PUBLIC access in production.
  • Referential integrity: Does the new table have foreign keys to existing tables? Do existing tables need new foreign keys pointing to the new table?
  • Application impact: Which applications will use this table? Are they ready?

-- Complete table creation script (production-quality)

-- Step 1: Create tablespace (z/OS)
CREATE TABLESPACE PRODLINETS
    IN MERIDDB
    USING STOGROUP MERIDSGL
    PRIQTY 720000
    SECQTY 72000
    SEGSIZE 64
    BUFFERPOOL BP0
    LOCKSIZE ROW
    CLOSE NO
    COMPRESS YES;

-- Step 2: Create the table
CREATE TABLE MERIDIAN.PRODUCT_LINE (
    PRODUCT_LINE_ID   INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY,
    PRODUCT_CODE      VARCHAR(20) NOT NULL,
    PRODUCT_NAME      VARCHAR(100) NOT NULL,
    CATEGORY          VARCHAR(50) NOT NULL,
    EFFECTIVE_DATE    DATE NOT NULL DEFAULT CURRENT DATE,
    EXPIRATION_DATE   DATE,
    STATUS            CHAR(1) NOT NULL DEFAULT 'A'
        CONSTRAINT CK_PL_STATUS CHECK (STATUS IN ('A', 'I', 'D')),
    CREATED_TS        TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
    MODIFIED_TS       TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
    CONSTRAINT PK_PRODUCT_LINE PRIMARY KEY (PRODUCT_LINE_ID),
    CONSTRAINT UQ_PRODUCT_CODE UNIQUE (PRODUCT_CODE)
) IN MERIDDB.PRODLINETS;

-- Step 3: Create indexes
CREATE INDEX MERIDIAN.IX_PL_CATEGORY
    ON MERIDIAN.PRODUCT_LINE (CATEGORY);

CREATE INDEX MERIDIAN.IX_PL_STATUS_EFF
    ON MERIDIAN.PRODUCT_LINE (STATUS, EFFECTIVE_DATE);

-- Step 4: Grant access
GRANT SELECT ON MERIDIAN.PRODUCT_LINE TO ROLE MERIDIAN_READ;
GRANT SELECT, INSERT, UPDATE ON MERIDIAN.PRODUCT_LINE TO ROLE MERIDIAN_APP;
GRANT ALL ON MERIDIAN.PRODUCT_LINE TO ROLE MERIDIAN_DBA;

-- Step 5: Collect statistics
RUNSTATS ON TABLE MERIDIAN.PRODUCT_LINE WITH DISTRIBUTION AND DETAILED INDEXES ALL;

16.7.2 Dropping Tables — The Most Dangerous DDL

DROP TABLE is irreversible. Once you drop a table, the data is gone. There is no UNDO, no recycle bin (unless you are using LUW's dropped table recovery feature), no second chance.

Before dropping a table, always:

  1. Verify that no application, job, view, trigger, or stored procedure references the table.
  2. Take a backup or export the data.
  3. Drop dependent objects (foreign keys, views, triggers, MQTs) first.
  4. Have the rollback script ready (CREATE TABLE + data reload).
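Step 2 on LUW can be done with EXPORT in IXF format, which carries the column definitions along with the data (the file path here is hypothetical):

-- LUW: Export the data before dropping
EXPORT TO /backup/old_audit_log.ixf OF IXF
    SELECT * FROM MERIDIAN.OLD_AUDIT_LOG;
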
-- Pre-drop dependency check (LUW): views, MQTs, and other table
-- dependents (view dependencies appear here with DTYPE = 'V')
SELECT TABSCHEMA, TABNAME, DTYPE
FROM SYSCAT.TABDEP
WHERE BSCHEMA = 'MERIDIAN' AND BNAME = 'OLD_AUDIT_LOG';

-- Routines (procedures, functions) that depend on the table
SELECT ROUTINESCHEMA, SPECIFICNAME, BTYPE
FROM SYSCAT.ROUTINEDEP
WHERE BSCHEMA = 'MERIDIAN' AND BNAME = 'OLD_AUDIT_LOG';

-- Triggers that reference the table
SELECT TRIGSCHEMA, TRIGNAME
FROM SYSCAT.TRIGDEP
WHERE BSCHEMA = 'MERIDIAN' AND BNAME = 'OLD_AUDIT_LOG';

-- z/OS: Check for dependent packages
SELECT DNAME, BQUALIFIER, BNAME, BTYPE
FROM SYSIBM.SYSPACKDEP
WHERE BQUALIFIER = 'MERIDIAN' AND BNAME = 'OLD_AUDIT_LOG';
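
These catalog queries gather the facts; the go/no-go decision they feed can be scripted. A minimal Python sketch (the table name and dependency lists are hypothetical stand-ins for the query results):

```python
def safe_to_drop(table, dependents):
    """Decide whether a DROP TABLE may proceed.  'dependents' maps a
    dependency kind (views, routines, packages, ...) to the list of
    object names returned by the catalog queries above."""
    blockers = {kind: objs for kind, objs in dependents.items() if objs}
    if not blockers:
        return True, f"{table}: no dependents found, drop may proceed"
    count = sum(len(objs) for objs in blockers.values())
    lines = [f"{table}: drop BLOCKED by {count} dependent object(s)"]
    for kind in sorted(blockers):
        for name in blockers[kind]:
            lines.append(f"  {kind}: {name}")
    return False, "\n".join(lines)

# Hypothetical results of the dependency queries
deps = {"views": ["MERIDIAN.V_AUDIT_SUMMARY"], "routines": [], "packages": []}
ok, report = safe_to_drop("MERIDIAN.OLD_AUDIT_LOG", deps)
```

A wrapper like this is most useful in change-control pipelines, where the drop script refuses to run unless the check returns clean.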

16.7.3 Rollback Planning

Every table drop should have a corresponding restoration script. At minimum:

-- Rollback script: Restore OLD_AUDIT_LOG
-- Step 1: Recreate the table (save the DDL BEFORE dropping)
-- [Full CREATE TABLE statement here]

-- Step 2: Reload data
-- LUW: IMPORT FROM old_audit_log.ixf OF IXF INSERT INTO MERIDIAN.OLD_AUDIT_LOG
-- z/OS: LOAD DATA ... INTO TABLE MERIDIAN.OLD_AUDIT_LOG

-- Step 3: Recreate indexes
-- Step 4: Recreate foreign keys
-- Step 5: RUNSTATS
-- Step 6: Verify row count matches pre-drop count

On LUW, you can enable dropped table recovery at the tablespace level:

-- LUW: Enable dropped table recovery
CREATE TABLESPACE RECOVERABLE_TS
    MANAGED BY DATABASE USING (FILE '/data/recov_ts' 500M)
    DROPPED TABLE RECOVERY ON;

With this feature enabled, dropped table data pages are retained and can be recovered using the RECOVER DROPPED TABLE option of the ROLLFORWARD command, as long as you have the necessary log files.


16.8 Data Migration Between DB2 Systems

Data migration — moving data from one DB2 system to another — is a regular activity in enterprise environments. You migrate data when refreshing test environments from production, when consolidating systems, when moving to new hardware, or when upgrading DB2 versions.

16.8.1 [z/OS] UNLOAD and LOAD

On z/OS, the primary tools for high-volume data movement are the UNLOAD and LOAD utilities:

//UNLOAD   EXEC DSNUPROC,SYSTEM=DB2P,UID='UNLOAD01'
//SYSIN    DD *
  UNLOAD TABLESPACE MERIDDB.ACCTTS
    FROM TABLE MERIDIAN.ACCOUNT
    HEADER OBID
    PUNCHDDN SYSPUNCH
//*
//SYSREC   DD DSN=MERIDIAN.ACCOUNT.UNLOAD.DATA,
//            DISP=(NEW,CATLG),SPACE=(CYL,(500,50),RLSE),
//            UNIT=SYSDA
//SYSPUNCH DD DSN=MERIDIAN.ACCOUNT.UNLOAD.SYSPUNCH,
//            DISP=(NEW,CATLG),SPACE=(TRK,(5,1),RLSE),
//            UNIT=SYSDA

The UNLOAD utility produces a flat file in DB2's internal format, which can then be loaded into another DB2 z/OS system using the LOAD utility. UNLOAD is the fastest way to extract data from DB2 z/OS — it bypasses the SQL engine and reads directly from the tablespace.

For loading into the target system:

//LOAD     EXEC DSNUPROC,SYSTEM=DB2T,UID='LOAD01'
//SYSIN    DD *
  LOAD DATA INDDN SYSREC
    RESUME NO REPLACE
    LOG NO
    ENFORCE CONSTRAINTS
    INTO TABLE MERIDIAN.ACCOUNT
    WHEN (1:1) = X'00'
//*
//SYSREC   DD DSN=MERIDIAN.ACCOUNT.UNLOAD.DATA,
//            DISP=SHR

Key LOAD options:

  • RESUME NO REPLACE: Delete existing data and load new data.
  • LOG NO: Do not log individual row inserts — dramatically faster for bulk loads, but requires a backup afterward for recoverability.
  • ENFORCE CONSTRAINTS: Check referential and check constraints after loading.

16.8.2 [LUW] EXPORT, IMPORT, and LOAD

On LUW, the equivalent tools are EXPORT, IMPORT, and LOAD:

-- LUW: Export data to IXF format (preserves metadata)
EXPORT TO /data/migration/account.ixf OF IXF
    SELECT * FROM MERIDIAN.ACCOUNT;

-- LUW: Import data from IXF format
IMPORT FROM /data/migration/account.ixf OF IXF
    INSERT INTO MERIDIAN.ACCOUNT;

-- LUW: Load data (faster than import, but with restrictions)
LOAD FROM /data/migration/account.ixf OF IXF
    INSERT INTO MERIDIAN.ACCOUNT
    NONRECOVERABLE;

EXPORT/IMPORT vs. LOAD:

| Feature | IMPORT | LOAD |
| --- | --- | --- |
| Speed | Slower (uses SQL INSERT) | Faster (writes directly to pages) |
| Logging | Full logging | Minimal or no logging |
| Triggers | Fired | Not fired |
| Constraints | Checked per row | Checked after load (SET INTEGRITY) |
| Table availability | Available during import | May be in LOAD PENDING state |
| Recovery | Table is recoverable | May need backup (NONRECOVERABLE) |

For large data migrations, LOAD is dramatically faster — often 5-10x faster than IMPORT for tables with millions of rows.

16.8.3 db2move — Bulk Migration Tool

The db2move utility on LUW can export and import all tables in a database (or a subset) in a single command:

# Export all tables from a database
db2move MERIDIANDB EXPORT

# Export specific tables
db2move MERIDIANDB EXPORT -tn MERIDIAN.ACCOUNT,MERIDIAN.CUSTOMER,MERIDIAN.BRANCH

# Import into target database
db2move MERIDIANDB_TARGET IMPORT

# Load into target database (faster)
db2move MERIDIANDB_TARGET LOAD

db2move generates individual IXF files for each table and a control file (db2move.lst) that tracks the mapping. It is the most convenient way to move an entire schema's data between LUW systems; pair it with db2look to extract the DDL, since db2move moves data, not object definitions.

16.8.4 High-Speed Data Movement

For very large migrations (terabytes of data), consider these approaches:

Parallel UNLOAD/LOAD (z/OS): UNLOAD and LOAD individual partitions in parallel using separate jobs. If a table has 12 partitions, you can run 12 UNLOAD jobs simultaneously, then 12 LOAD jobs on the target.
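
The fan-out pattern is the same regardless of how the jobs are submitted. A Python sketch of the orchestration, with unload_partition as a hypothetical stand-in for submitting and awaiting one UNLOAD job:

```python
from concurrent.futures import ThreadPoolExecutor

def unload_partition(part_no):
    # Stand-in for one per-partition UNLOAD job: a real driver would
    # submit JCL (or call an admin API) here and wait for the job to end.
    return (part_no, "OK")

def parallel_unload(partitions, max_jobs=4):
    """Run one unload job per partition, at most max_jobs at a time;
    fail fast if any job does not come back clean."""
    with ThreadPoolExecutor(max_workers=max_jobs) as pool:
        results = dict(pool.map(unload_partition, partitions))
    failed = [p for p, status in results.items() if status != "OK"]
    if failed:
        raise RuntimeError(f"unload failed for partitions: {failed}")
    return results

results = parallel_unload(range(1, 13))  # 12 partitions, at most 4 concurrent jobs
```

Capping max_jobs matters: running all partitions at once can saturate the I/O subsystem on the source and slow everything down.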

Pipe between systems: On LUW, you can pipe the output of db2 export directly to db2 import on another system using named pipes or SSH tunnels, avoiding the need to write intermediate files to disk.

IBM InfoSphere Data Replication (IIDR): For ongoing replication or large-scale migration with minimal downtime, IIDR captures changes from the DB2 log and applies them to the target system. This is the preferred approach when you need to keep source and target in sync during a migration cutover.

Db2 HADR (LUW): High Availability Disaster Recovery can be used as a migration mechanism. Set up HADR between the old and new systems, let them synchronize, then perform a takeover on the new system.


16.9 Cross-Platform Migration — Oracle to DB2

Migrating from Oracle to DB2 is one of the most complex database projects an organization can undertake. It involves not just moving data, but translating SQL dialects, converting stored procedures, remapping data types, and thoroughly testing every application.

16.9.1 SQL Dialect Differences

Oracle and DB2 SQL are both based on the SQL standard, but they diverge in many practical areas:

| Feature | Oracle | DB2 |
| --- | --- | --- |
| String concatenation | \|\| or CONCAT (2 args) | \|\| or CONCAT (2+ args in LUW) |
| Current date/time | SYSDATE | CURRENT DATE, CURRENT TIMESTAMP |
| Null handling | NVL(x, default) | COALESCE(x, default) or VALUE(x, default) |
| Row limiting | ROWNUM, FETCH FIRST (12c+) | FETCH FIRST n ROWS ONLY |
| Outer join (old syntax) | (+) operator | Standard ANSI JOIN syntax |
| Sequences | sequence.NEXTVAL | NEXT VALUE FOR sequence |
| Dual table | SELECT ... FROM DUAL | SELECT ... FROM SYSIBM.SYSDUMMY1 (z/OS), VALUES(...) (LUW) |
| Auto-increment | IDENTITY or GENERATED | GENERATED ALWAYS/BY DEFAULT AS IDENTITY |
| Decode | DECODE(col, v1, r1, v2, r2, def) | CASE WHEN col = v1 THEN r1 ... END |
| NVL2 | NVL2(x, val_if_not_null, val_if_null) | CASE WHEN x IS NOT NULL THEN ... ELSE ... END |
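
Most of these rewrites are mechanical. As an illustration, here is a simplified Python sketch that rewrites a DECODE argument list into the equivalent CASE expression (it assumes already-split scalar arguments and no nesting, which real migration tooling must of course handle):

```python
def decode_to_case(args):
    """Translate the argument list of Oracle's DECODE(col, v1, r1, ..., def)
    into an equivalent DB2 CASE expression.  'args' is the already-split
    argument list; nested expressions are out of scope for this sketch."""
    col, rest = args[0], args[1:]
    default = None
    if len(rest) % 2 == 1:            # a trailing unpaired argument is the default
        rest, default = rest[:-1], rest[-1]
    pairs = [f"WHEN {col} = {rest[i]} THEN {rest[i + 1]}"
             for i in range(0, len(rest), 2)]
    clauses = " ".join(pairs)
    if default is not None:
        clauses += f" ELSE {default}"
    return f"CASE {clauses} END"

sql = decode_to_case(["STATUS", "'A'", "'Active'", "'I'", "'Inactive'", "'Unknown'"])
# CASE WHEN STATUS = 'A' THEN 'Active' WHEN STATUS = 'I' THEN 'Inactive' ELSE 'Unknown' END
```

One subtlety the sketch glosses over: DECODE treats NULL = NULL as a match, while CASE's equality predicate does not, so a faithful translation of NULL-matching DECODEs needs an extra IS NULL branch.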

16.9.2 Data Type Mapping

| Oracle Type | DB2 Equivalent | Notes |
| --- | --- | --- |
| NUMBER(p,s) | DECIMAL(p,s) | Direct mapping |
| NUMBER (no precision) | DECFLOAT or DOUBLE | NUMBER without precision is a floating-point type |
| VARCHAR2(n) | VARCHAR(n) | DB2 max is 32672 (LUW) or 32704 (z/OS) |
| CLOB | CLOB | Direct mapping |
| BLOB | BLOB | Direct mapping |
| DATE | TIMESTAMP | Oracle DATE includes time; DB2 DATE does not |
| TIMESTAMP | TIMESTAMP | Direct mapping |
| RAW(n) | VARCHAR(n) FOR BIT DATA | Or BINARY/VARBINARY on newer DB2 |
| LONG RAW | BLOB | LONG RAW is deprecated in Oracle |
| CHAR(n) | CHAR(n) | Direct mapping |
| NVARCHAR2(n) | NVARCHAR(n) or VARGRAPHIC(n) | Unicode handling differs |

The Oracle DATE-to-DB2-TIMESTAMP mapping is the single most common source of migration bugs. Oracle's DATE includes hours, minutes, and seconds. DB2's DATE does not. If you map Oracle DATE to DB2 DATE, you lose time information. Always map Oracle DATE to DB2 TIMESTAMP.
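
The mapping table, including the DATE rule, can be captured in a small lookup. A Python sketch (simplified; a real converter also carries nullability, defaults, and character-set details):

```python
import re

# Oracle -> DB2 mappings from the table above (a subset, for illustration)
TYPE_MAP = {
    "NUMBER": "DECIMAL", "VARCHAR2": "VARCHAR", "CHAR": "CHAR",
    "CLOB": "CLOB", "BLOB": "BLOB", "TIMESTAMP": "TIMESTAMP",
    "DATE": "TIMESTAMP",   # Oracle DATE carries a time component
    "LONG RAW": "BLOB",
}

def map_type(oracle_type):
    """Map an Oracle column type such as 'VARCHAR2(50)' to its DB2 equivalent."""
    m = re.fullmatch(r"([A-Z0-9 ]+?)\s*(\([\d,\s]+\))?", oracle_type.strip().upper())
    base, params = m.group(1).strip(), m.group(2) or ""
    if base == "NUMBER" and not params:
        return "DECFLOAT"  # NUMBER without precision is floating point
    target = TYPE_MAP[base]
    # Only parameterized targets keep the (n) / (p,s) suffix
    return target + (params if target in ("DECIMAL", "VARCHAR", "CHAR") else "")
```

With this rule table, map_type("DATE") returns "TIMESTAMP", never "DATE", which hard-codes the lesson above into the conversion pipeline.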

16.9.3 Stored Procedure Translation

Oracle PL/SQL and DB2 SQL PL are superficially similar but differ in many ways:

-- Oracle PL/SQL
CREATE OR REPLACE PROCEDURE update_balance(
    p_account_id IN NUMBER,
    p_amount     IN NUMBER
) AS
    v_current_balance NUMBER;
BEGIN
    SELECT balance INTO v_current_balance
    FROM accounts WHERE account_id = p_account_id
    FOR UPDATE;

    UPDATE accounts
    SET balance = balance + p_amount,
        last_modified = SYSDATE
    WHERE account_id = p_account_id;

    COMMIT;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        RAISE_APPLICATION_ERROR(-20001, 'Account not found');
END;
/
-- DB2 SQL PL equivalent
CREATE OR REPLACE PROCEDURE MERIDIAN.UPDATE_BALANCE(
    IN P_ACCOUNT_ID BIGINT,
    IN P_AMOUNT     DECIMAL(15,2)
)
LANGUAGE SQL
BEGIN
    DECLARE V_CURRENT_BALANCE DECIMAL(15,2);
    DECLARE SQLSTATE CHAR(5) DEFAULT '00000';

    SELECT BALANCE INTO V_CURRENT_BALANCE
    FROM MERIDIAN.ACCOUNT WHERE ACCOUNT_ID = P_ACCOUNT_ID
    WITH RS USE AND KEEP UPDATE LOCKS;

    IF SQLSTATE = '02000' THEN
        SIGNAL SQLSTATE '75001'
            SET MESSAGE_TEXT = 'Account not found';
    END IF;

    UPDATE MERIDIAN.ACCOUNT
    SET BALANCE = BALANCE + P_AMOUNT,
        MODIFIED_TS = CURRENT TIMESTAMP
    WHERE ACCOUNT_ID = P_ACCOUNT_ID;

    COMMIT;
END;

Key differences to watch:

  • Exception handling: Oracle uses EXCEPTION blocks with named exceptions (NO_DATA_FOUND, TOO_MANY_ROWS). DB2 uses DECLARE ... HANDLER or checks SQLSTATE directly.
  • COMMIT in procedures: Both support it, but DB2 procedures that issue COMMIT cannot be called from within a larger transaction that expects atomic behavior.
  • Package-level variables: Oracle packages can hold session-level state. DB2 has no direct equivalent; use global variables or session tables.
  • Cursors: Oracle uses REF CURSORs extensively. DB2 uses result set cursors or global temporary tables.
  • Autonomous transactions: Oracle supports PRAGMA AUTONOMOUS_TRANSACTION. DB2 does not have a direct equivalent; use a separate connection or a loopback connection.

16.9.4 IBM Database Conversion Workbench

IBM provides the Database Conversion Workbench (DCW), a free tool that automates much of the Oracle-to-DB2 conversion:

  1. Schema analysis: DCW scans Oracle schemas and reports compatibility with DB2, flagging constructs that require manual intervention.
  2. DDL conversion: Converts Oracle CREATE TABLE, CREATE INDEX, CREATE VIEW, and other DDL statements to DB2 syntax.
  3. SQL conversion: Translates Oracle SQL in stored procedures, triggers, and views to DB2 SQL PL.
  4. Data type mapping: Applies configurable data type mapping rules.
  5. Report generation: Produces detailed reports showing what was converted automatically, what needs manual review, and estimated effort.

DCW typically converts 70-85% of schema objects automatically. The remaining 15-30% require manual work, usually concentrated in complex PL/SQL packages, dynamic SQL, and Oracle-specific features like materialized view refresh groups, database links, and Oracle-specific optimizer hints.

16.9.5 Testing Strategies for Cross-Platform Migration

Testing is where most migration projects succeed or fail. A thorough testing strategy includes:

  1. Unit tests: Every stored procedure, function, and trigger must be individually tested with the same inputs that produce known outputs on Oracle.
  2. SQL comparison testing: Run the same queries on both Oracle (source) and DB2 (target) and compare result sets row by row. Tools like IBM Data Studio can help automate this.
  3. Performance baseline: Establish performance baselines on Oracle, then verify that DB2 meets or exceeds them for the critical queries.
  4. Data validation: After migration, run row counts, checksums, and spot-check comparisons on every table.
  5. Application integration testing: Run the full application test suite against DB2. This catches issues that unit testing misses — connection handling, transaction isolation behavior, error code interpretation.
  6. Parallel run: Run both systems simultaneously with the same inputs for a defined period. Compare outputs daily.
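
Item 4 can be made mechanical. A Python sketch of the comparison step, operating on already-fetched rows; a production version would page through large tables and hash per partition:

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint of a result set: hash each row's
    repr, sort the digests, then hash the concatenation.  Equal row
    multisets on source and target yield equal fingerprints."""
    digests = sorted(hashlib.sha256(repr(r).encode()).hexdigest() for r in rows)
    combined = hashlib.sha256("".join(digests).encode()).hexdigest()
    return len(rows), combined

source_rows = [(1, "Alice", "A"), (2, "Bob", "I")]
target_rows = [(2, "Bob", "I"), (1, "Alice", "A")]  # same rows, different fetch order
assert table_fingerprint(source_rows) == table_fingerprint(target_rows)
```

Sorting the per-row digests makes the fingerprint independent of fetch order, which matters because the two systems will almost never return rows in the same sequence.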

16.10 Version-to-Version Migration

Upgrading DB2 from one version to another is a major undertaking, but IBM designs the upgrade path to be as predictable as possible. On z/OS the process is phased, with built-in fallback points; on LUW it is an instance-based upgrade whose safety net is the backup you take beforehand.

16.10.1 [z/OS] DB2 11 to DB2 12 Migration

DB2 for z/OS uses a multi-phase migration process:

Compatibility Mode (CM): After installing DB2 12 code and running the catalog migration jobs, the system starts in Compatibility Mode. In this mode, DB2 12 is running, but it behaves like DB2 11 in most respects. New DB2 12 features are not yet available. This mode allows you to verify that existing applications work correctly with the new code level.

Enabling New Function Mode (ENFM): After validating CM, you run the enabling-new-function jobs, which convert the DB2 catalog structures to the new-release format. While these jobs are running, the subsystem is in ENFM. Star-suffixed modes that you may see in displays (CM*, ENFM*) mark a subsystem that has returned to that mode from a later one; they are not additional forward stages.

New Function Mode (NFM): Once all ENFM stages are complete, DB2 is fully in New Function Mode and all DB2 12 features are available.

Fallback: From CM you can fall back to DB2 11. Once ENFM begins converting the catalog, you can return to CM (shown as CM*) but no longer to DB2 11. Once you reach NFM, fallback to DB2 11 is no longer possible without a full restore from a DB2 11 backup. This is why the phased approach exists: it gives you multiple checkpoints to verify before committing.

Migration Timeline:
DB2 11 --> Install DB2 12 --> CM (validate) --> ENFM (convert catalog) --> NFM (full DB2 12)

Fallback to DB2 11 possible: from CM only
Fallback to DB2 11 NOT possible: once ENFM has begun converting the catalog

Key migration tasks:

  1. Review DB2 12 incompatibilities documentation (every release has them).
  2. Run DB2 12 premigration jobs to check for issues.
  3. Install DB2 12 code on the LPAR.
  4. Stop DB2 11, start DB2 12 in CM.
  5. Run CATMAINT to update the catalog.
  6. Validate applications in CM for a defined soak period (typically 2-4 weeks).
  7. Proceed through ENFM, validating the catalog conversion.
  8. Enter NFM.
  9. REBIND packages to take advantage of DB2 12 optimizer improvements.

16.10.2 [LUW] DB2 10.5 to DB2 11.5 Migration

On LUW, version-to-version migration is instance-based:

# Step 1: Install DB2 11.5 (does not affect running DB2 10.5)
# The installation creates a new DB2 copy

# Step 2: Stop the instance
db2stop force

# Step 3: Upgrade the instance
db2iupgrade -u db2fenc1 db2inst1

# Step 4: Upgrade the databases
db2 UPGRADE DATABASE MERIDIANDB

# Step 5: Post-upgrade tasks
# Rebind all packages against the new release
db2rbind MERIDIANDB -l rebind.log all
# Refresh statistics on the catalog tables
db2 "RUNSTATS ON TABLE SYSIBM.SYSTABLES WITH DISTRIBUTION AND DETAILED INDEXES ALL"

The db2iupgrade command converts the instance configuration to the new version format. The UPGRADE DATABASE command converts the database catalog and system objects.

Fallback on LUW: Unlike z/OS, LUW does not have a built-in multi-phase fallback. If the upgrade fails or causes problems, you must:

  1. Stop the DB2 11.5 instance.
  2. Restore the instance from a backup taken before the upgrade.
  3. Restore databases from backups taken before the upgrade.

This is why a complete, tested backup before any upgrade is absolutely non-negotiable.

16.10.3 Post-Upgrade Validation

After any version upgrade, perform these validations:

-- Check database manager and database configuration
-- Look for deprecated parameters or changed defaults
db2 GET DBM CFG;
db2 GET DB CFG FOR MERIDIANDB;

-- Verify catalog integrity
-- LUW:
db2 INSPECT CHECK DATABASE RESULTS KEEP meridian_inspect.log;

-- z/OS: Run CHECK DATA utility
-- CHECK DATA TABLESPACE DSNDB06.SYSUSER

-- Rebind all packages (critical for optimizer improvements)
-- LUW (from the command line):
-- db2rbind MERIDIANDB -l rebind.log all

-- z/OS:
-- REBIND PACKAGE (MERIDIAN.*)

-- Run application smoke tests
-- Execute the 20-30 most critical queries and verify results
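
The smoke-test step lends itself to a small harness that replays critical queries and compares results against recorded baselines. A Python sketch in which run_query is a stub rather than a live database call:

```python
def run_smoke_tests(queries, baselines, run_query):
    """Run each named query and compare its result to the recorded
    pre-upgrade baseline; return the names of queries that diverged."""
    failures = []
    for name, sql in queries.items():
        if run_query(sql) != baselines[name]:
            failures.append(name)
    return failures

# Hypothetical critical query and its recorded pre-upgrade result;
# run_query is stubbed here instead of executing against the database.
queries = {"acct_count": "SELECT COUNT(*) FROM MERIDIAN.ACCOUNT"}
baselines = {"acct_count": 1250000}
failures = run_smoke_tests(queries, baselines, run_query=lambda sql: 1250000)
```

Record the baselines on the old version immediately before the upgrade, so the comparison measures the upgrade itself and not ordinary data drift.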

16.11 Zero-Downtime Migration Patterns

In modern enterprises, planned downtime is increasingly unacceptable. A bank cannot tell its customers that online banking will be unavailable for six hours while the database is upgraded. Zero-downtime migration patterns address this requirement.

16.11.1 Blue-Green Database Deployment

The blue-green pattern maintains two identical database environments:

              ┌──────────────┐
              │  Load        │
              │  Balancer    │
              └──────┬───────┘
                     │
           ┌─────────┴─────────┐
           │                   │
    ┌──────┴──────┐     ┌─────┴───────┐
    │   BLUE      │     │   GREEN     │
    │   (Active)  │     │   (Standby) │
    │   DB2 v11   │     │   DB2 v12   │
    │   Schema v1 │     │   Schema v2 │
    └──────┬──────┘     └─────┬───────┘
           │                   │
           └─────────┬─────────┘
                     │
              ┌──────┴──────┐
              │  Replication │
              │  (IIDR/HADR)│
              └─────────────┘

Process:

  1. Green environment is set up with the new DB2 version and new schema.
  2. Replication keeps Green synchronized with Blue (the active system).
  3. Applications are tested against Green.
  4. At cutover, the load balancer switches traffic from Blue to Green.
  5. If problems arise, switch back to Blue (which is still running and current).

The challenge is maintaining bidirectional replication during the cutover window. If the schema has changed between Blue and Green, replication from Green back to Blue may not work. This means you have a finite window to validate Green before Blue becomes stale.

16.11.2 [z/OS] Rolling Migration in Data Sharing Groups

DB2 for z/OS data sharing groups provide a unique advantage for zero-downtime migration. In a data sharing group, multiple DB2 members access the same data simultaneously through the Coupling Facility.

Rolling migration process:

  1. The data sharing group has members DB2A, DB2B, DB2C, DB2D.
  2. Quiesce DB2D (drain workload to other members).
  3. Upgrade DB2D to the new version (in CM).
  4. Bring DB2D back into the group.
  5. Repeat for DB2C, DB2B, DB2A.

At every step, the data sharing group is operational with at least three members handling workload. The Coupling Facility ensures data consistency across members running different code levels (during CM). This is the closest thing to true zero-downtime database upgrade available in the industry.

Data Sharing Group: DSNDB2G
Step 1: DB2A(v11) DB2B(v11) DB2C(v11) DB2D(v11)  <- All active
Step 2: DB2A(v11) DB2B(v11) DB2C(v11) DB2D(down)  <- D quiesced
Step 3: DB2A(v11) DB2B(v11) DB2C(v11) DB2D(v12-CM) <- D upgraded
Step 4: DB2A(v11) DB2B(v11) DB2C(down) DB2D(v12-CM) <- C quiesced
Step 5: DB2A(v11) DB2B(v11) DB2C(v12-CM) DB2D(v12-CM) <- C upgraded
... continue until all members upgraded ...
Final:  DB2A(v12) DB2B(v12) DB2C(v12) DB2D(v12)  <- All upgraded

16.11.3 HADR-Based Migration (LUW)

On LUW, HADR (High Availability Disaster Recovery) can serve as a migration vehicle:

-- Step 1: Set up HADR between old system (primary) and new system (standby)
-- On the primary (old system):
UPDATE DB CFG FOR MERIDIANDB USING
    HADR_LOCAL_HOST old_host
    HADR_LOCAL_SVC hadr_port
    HADR_REMOTE_HOST new_host
    HADR_REMOTE_SVC hadr_port
    HADR_REMOTE_INST db2inst1
    HADR_SYNCMODE NEARSYNC
    HADR_PEER_WINDOW 30;

-- On the standby (new system):
UPDATE DB CFG FOR MERIDIANDB USING
    HADR_LOCAL_HOST new_host
    HADR_LOCAL_SVC hadr_port
    HADR_REMOTE_HOST old_host
    HADR_REMOTE_SVC hadr_port
    HADR_REMOTE_INST db2inst1
    HADR_SYNCMODE NEARSYNC
    HADR_PEER_WINDOW 30;

-- Start HADR
-- On standby: db2 START HADR ON DB MERIDIANDB AS STANDBY
-- On primary: db2 START HADR ON DB MERIDIANDB AS PRIMARY

-- Step 2: Let HADR synchronize (monitor with db2pd -hadr)

-- Step 3: Takeover
-- On standby: db2 TAKEOVER HADR ON DB MERIDIANDB

After takeover, the new system becomes the primary. The old system can be kept as standby (providing fallback) or decommissioned.

HADR for version migration: DB2 supports HADR between certain version combinations, allowing the standby to run a newer version than the primary. Check IBM documentation for supported version pairs — not all combinations are supported.


16.12 The Meridian Bank Schema Evolution

Let us bring everything in this chapter together with a realistic scenario. Meridian National Bank is launching a new product line — international wire transfers — which requires changes to the existing schema. Simultaneously, new regulatory requirements demand additional customer data fields. And the transaction table, which started as INTEGER primary keys, is approaching 1.5 billion rows and needs to move to BIGINT before the sequence exhausts.

16.12.1 The Change Requirements

  1. New table: WIRE_TRANSFER — stores international wire transfer details.
  2. New columns on ACCOUNT: IBAN_NUMBER (VARCHAR(34)), SWIFT_CODE (VARCHAR(11)), CURRENCY_CODE (CHAR(3)).
  3. New column on CUSTOMER: TAX_RESIDENCY_COUNTRY (CHAR(3)) — regulatory requirement.
  4. Column widening: CUSTOMER.EMAIL_ADDRESS from VARCHAR(100) to VARCHAR(320) — to accommodate new RFC-compliant email standards.
  5. Type change: TRANSACTION.TRANSACTION_ID from INTEGER to BIGINT.

16.12.2 The Migration Plan

Week 1: Non-disruptive changes (no REORG required)

-- All of these are instant on both platforms

-- Add nullable columns to ACCOUNT
ALTER TABLE MERIDIAN.ACCOUNT
    ADD COLUMN IBAN_NUMBER VARCHAR(34);
ALTER TABLE MERIDIAN.ACCOUNT
    ADD COLUMN SWIFT_CODE VARCHAR(11);
ALTER TABLE MERIDIAN.ACCOUNT
    ADD COLUMN CURRENCY_CODE CHAR(3) DEFAULT 'USD';

-- Add nullable column to CUSTOMER
ALTER TABLE MERIDIAN.CUSTOMER
    ADD COLUMN TAX_RESIDENCY_COUNTRY CHAR(3);

-- Widen VARCHAR column
ALTER TABLE MERIDIAN.CUSTOMER
    ALTER COLUMN EMAIL_ADDRESS SET DATA TYPE VARCHAR(320);

-- Create the new WIRE_TRANSFER table
CREATE TABLE MERIDIAN.WIRE_TRANSFER (
    WIRE_ID              BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY
                             (START WITH 1, INCREMENT BY 1),
    ACCOUNT_ID           BIGINT NOT NULL,
    BENEFICIARY_NAME     VARCHAR(140) NOT NULL,
    BENEFICIARY_IBAN     VARCHAR(34) NOT NULL,
    BENEFICIARY_SWIFT    VARCHAR(11) NOT NULL,
    BENEFICIARY_BANK     VARCHAR(200),
    AMOUNT               DECIMAL(17,2) NOT NULL,
    CURRENCY_CODE        CHAR(3) NOT NULL,
    EXCHANGE_RATE        DECIMAL(15,8),
    USD_EQUIVALENT       DECIMAL(17,2),
    PURPOSE_CODE         VARCHAR(10),
    REGULATORY_REPORT_ID VARCHAR(50),
    STATUS               CHAR(1) NOT NULL DEFAULT 'P'
        CHECK (STATUS IN ('P','A','R','C','X')),
    INITIATED_TS         TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
    COMPLETED_TS         TIMESTAMP,
    CREATED_BY           VARCHAR(50) NOT NULL,
    CONSTRAINT PK_WIRE_TRANSFER PRIMARY KEY (WIRE_ID),
    CONSTRAINT FK_WIRE_ACCOUNT
        FOREIGN KEY (ACCOUNT_ID) REFERENCES MERIDIAN.ACCOUNT(ACCOUNT_ID)
);

CREATE INDEX MERIDIAN.IX_WIRE_ACCT ON MERIDIAN.WIRE_TRANSFER(ACCOUNT_ID);
CREATE INDEX MERIDIAN.IX_WIRE_STATUS ON MERIDIAN.WIRE_TRANSFER(STATUS, INITIATED_TS);

-- Grant access
GRANT SELECT ON MERIDIAN.WIRE_TRANSFER TO ROLE MERIDIAN_READ;
GRANT SELECT, INSERT, UPDATE ON MERIDIAN.WIRE_TRANSFER TO ROLE MERIDIAN_APP;

Week 2: Application deployment (Phase 1 — dual-write)

Deploy application changes that write to both old and new columns. The new columns begin receiving data. Old columns continue to be maintained for backward compatibility with any systems not yet updated.

Week 3: Backfill historical data

-- Backfill IBAN for domestic accounts (generate from routing + account number)
UPDATE MERIDIAN.ACCOUNT
SET IBAN_NUMBER = 'US' || '00' || RIGHT('000000000' || CAST(ACCOUNT_ID AS VARCHAR(20)), 9)
                  || ACCOUNT_NUMBER
WHERE IBAN_NUMBER IS NULL
  AND STATUS = 'ACTIVE';
-- Run in batches of 10,000 with commits between batches

-- Backfill TAX_RESIDENCY_COUNTRY from address data
UPDATE MERIDIAN.CUSTOMER
SET TAX_RESIDENCY_COUNTRY = 'USA'
WHERE TAX_RESIDENCY_COUNTRY IS NULL
  AND COUNTRY = 'United States';
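
The batch-and-commit advice in the comments above generalizes to a driver loop. A Python sketch in which execute_batch stands in for running one bounded UPDATE and returning the number of rows it touched:

```python
def backfill_in_batches(execute_batch, batch_size=10000, max_batches=1000000):
    """Repeat a bounded UPDATE until it touches no more rows, committing
    after each batch so locks are held only briefly.  Returns the total
    number of rows updated."""
    total = 0
    for _ in range(max_batches):  # hard ceiling guards against a runaway loop
        updated = execute_batch(batch_size)
        total += updated
        # a real driver would COMMIT here, and could pause if lock waits spike
        if updated < batch_size:
            break
    return total

# Stand-in for the bounded UPDATE: 25,000 candidate rows drained 10,000 at a time
remaining = [25000]
def fake_batch(limit):
    done = min(limit, remaining[0])
    remaining[0] -= done
    return done

total = backfill_in_batches(fake_batch)  # 25000
```

The loop terminates when a batch comes back short, which is exactly the condition "no more rows satisfy the WHERE clause" in the SQL above.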

Week 4: BIGINT migration (scheduled maintenance window)

This is the one change that requires a maintenance window: the ALTER leaves the table in a REORG-required state on both platforms (REORG-pending on LUW, a pending or advisory REORG state on z/OS) until the REORG below materializes the new row format. Schedule it for the lowest-traffic window available.

-- Change TRANSACTION_ID from INTEGER to BIGINT
ALTER TABLE MERIDIAN.TRANSACTION
    ALTER COLUMN TRANSACTION_ID SET DATA TYPE BIGINT;

-- z/OS: REORG immediately (online with SHRLEVEL CHANGE if possible)
-- REORG TABLESPACE MERIDDB.TRANSTS SHRLEVEL CHANGE ...

-- LUW: REORG
REORG TABLE MERIDIAN.TRANSACTION;

-- Rebuild indexes
-- z/OS: REBUILD INDEX (ALL) TABLESPACE MERIDDB.TRANSTS
-- LUW: REORG INDEXES ALL FOR TABLE MERIDIAN.TRANSACTION;

-- Update statistics
RUNSTATS ON TABLE MERIDIAN.TRANSACTION WITH DISTRIBUTION AND DETAILED INDEXES ALL;

-- Verify
SELECT MAX(TRANSACTION_ID), COUNT(*) FROM MERIDIAN.TRANSACTION;

Week 5: Advisory REORG cleanup

Run REORGs on the tables that entered ADVISORY REORG status during Week 1. These are not urgent — the tables have been fully functional — but cleaning up advisory states improves long-term performance.

-- z/OS: Online REORG for tables with advisory states
-- REORG TABLESPACE MERIDDB.ACCTTS SHRLEVEL CHANGE ...
-- REORG TABLESPACE MERIDDB.CUSTTS SHRLEVEL CHANGE ...

-- LUW: Inplace REORG
REORG TABLE MERIDIAN.ACCOUNT INPLACE ALLOW WRITE ACCESS START;
REORG TABLE MERIDIAN.CUSTOMER INPLACE ALLOW WRITE ACCESS START;

16.12.3 Rollback Contingency

For each week, the rollback plan is documented:

  • Week 1 rollback: DROP the new columns, DROP the new table. Instant, no data loss.
  • Week 2 rollback: Redeploy the previous application version. The new columns exist but are ignored.
  • Week 3 rollback: No action needed — backfill data can be re-NULLed if necessary.
  • Week 4 rollback: This is the hardest to reverse. If the REORG fails midway, restore from the pre-change backup. This is why we take a full backup before Week 4.
  • Week 5 rollback: Not applicable — REORG is an optimization, not a schema change.

16.12.4 Lessons from Meridian's Evolution

This five-week plan illustrates the core principles of schema evolution:

  1. Front-load the safe changes. Nullable column additions and VARCHAR widenings go first because they are instant and fully reversible.
  2. Separate schema changes from application changes. The schema is modified in Week 1; the application is deployed in Week 2. This allows independent validation.
  3. Isolate the risky change. The INTEGER-to-BIGINT conversion — the only change requiring downtime — is scheduled independently in a maintenance window.
  4. Defer optimization. Advisory REORGs are cleaned up last, after all functional changes are validated.
  5. Every step has a rollback plan. At no point are we in a state where "we can't go back."

Spaced Review: Chapters 11, 13, and 14

These questions revisit concepts from earlier chapters to strengthen long-term retention.

From Chapter 11 (SQL Joins and Subqueries)

SR-1. What is the difference between a correlated subquery and a non-correlated subquery? Which one is re-executed for every row of the outer query?

Answer: A non-correlated subquery is independent of the outer query — it executes once and its result is reused. A correlated subquery references columns from the outer query, so it is (logically) re-executed for every row of the outer query. In practice, the DB2 optimizer may transform a correlated subquery into a join for better performance, but conceptually, the correlated subquery runs once per outer row.

SR-2. When would you use a LEFT OUTER JOIN instead of an INNER JOIN?

Answer: Use a LEFT OUTER JOIN when you want all rows from the left (driving) table, even if there is no matching row in the right table. For example, listing all customers including those with no accounts: `SELECT c.*, a.ACCOUNT_ID FROM CUSTOMER c LEFT JOIN ACCOUNT a ON c.CUSTOMER_ID = a.CUSTOMER_ID`. An INNER JOIN would exclude customers with no accounts.

From Chapter 13 (Logical Design)

SR-3. What is Third Normal Form (3NF), and why is it the standard target for enterprise database design?

Answer: A table is in 3NF when: (1) it is in 2NF (no partial dependencies on a composite key), and (2) every non-key column depends only on the primary key, not on other non-key columns (no transitive dependencies). 3NF is the standard target because it eliminates the most common forms of data redundancy and the update, insert, and delete anomalies that come with it, while remaining practical for query performance. Higher normal forms (BCNF, 4NF, 5NF) address rarer anomalies and are not always worth the additional complexity.

SR-4. What is the difference between a surrogate key and a natural key? Give an example of when you would use each.

Answer: A surrogate key is a system-generated identifier (like an auto-incrementing integer) with no business meaning. A natural key is a value from the business domain (like Social Security Number or account number) that uniquely identifies a row. Use a surrogate key as the primary key for stability (it never changes) and performance (narrow, sequential integers are ideal for joins and indexing). Use natural keys as UNIQUE constraints to enforce business rules. For example, CUSTOMER_ID (surrogate, INTEGER GENERATED ALWAYS AS IDENTITY) is the primary key, while SSN (natural) has a UNIQUE constraint.

From Chapter 14 (Physical Design)

SR-5. What is the purpose of a buffer pool, and why is it called "the single most important performance lever" in DB2?

Answer: A buffer pool is an area of memory that caches data and index pages read from disk. When DB2 needs a page, it first checks the buffer pool; if the page is there (a "hit"), DB2 avoids a physical I/O — which is orders of magnitude slower than a memory access. It is called the most important performance lever because the buffer pool hit ratio directly determines how much physical I/O DB2 must perform. A well-sized buffer pool can keep the most frequently accessed pages in memory, reducing I/O waits dramatically. Tuning buffer pools (sizing, separating data and index pools, isolating high-traffic tablespaces) often produces larger performance gains than any other single change.

Summary

Schema evolution is not an exception — it is the normal state of a production database. The tools and strategies in this chapter give you the ability to manage change confidently:

  • ALTER TABLE handles the majority of schema changes, but you must understand which operations are instant and which require REORG.
  • Pending states on z/OS (REORG PENDING, ADVISORY REORG) are signals, not emergencies — if you understand what triggered them and have a plan to resolve them.
  • Online REORG (SHRLEVEL CHANGE on z/OS, INPLACE on LUW) allows you to reorganize tables without denying access to applications.
  • Expand-then-contract, dual-write, and shadow table patterns handle schema changes that ALTER TABLE alone cannot accomplish.
  • Data migration between DB2 systems uses UNLOAD/LOAD (z/OS) or EXPORT/IMPORT/LOAD (LUW), with db2move for bulk operations.
  • Cross-platform migration from Oracle to DB2 requires careful attention to SQL dialect differences, data type mapping (especially Oracle DATE), and stored procedure translation.
  • Version-to-version migration follows a phased approach — CM/ENFM/NFM on z/OS, db2iupgrade/UPGRADE DATABASE on LUW — with built-in fallback points.
  • Zero-downtime patterns — blue-green deployment, rolling data sharing migration, HADR-based migration — make it possible to evolve without downtime.

The Meridian Bank scenario demonstrated how to sequence these tools: safe changes first, risky changes isolated in maintenance windows, every step with a rollback plan. That is the discipline of schema evolution, and it is what this chapter — and this part of the book — has been building toward.


This concludes Part III: Database Design. In Part IV, we turn from the structure of data to the programs that manipulate it — SQL in depth, stored procedures, triggers, and the application programming interfaces that connect DB2 to the outside world.