Chapter 6 Key Takeaways: DB2 Optimizer Internals


The Threshold Concept

Cost-based optimization transforms your understanding of SQL performance. DB2 does not execute your SQL the way you wrote it. It evaluates access paths, estimates costs using catalog statistics, and picks the cheapest plan. When statistics change, the plan can change — even if your code hasn't been touched. Once you internalize this, you stop blaming DB2 and start managing the information the optimizer depends on.


Ten Things You Must Remember

1. The Optimizer Uses Statistics, Not Your Intentions

DB2 doesn't know what you expect the data to look like. It knows what RUNSTATS told it. If RUNSTATS hasn't run (COST_CATEGORY = 'B'), the optimizer is guessing. If RUNSTATS captured a transient state, the optimizer is confidently wrong.
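A quick way to see what RUNSTATS told the optimizer — and when — is to read the catalog directly. A sketch, with MYSCHEMA.ORDERS as a placeholder name; a STATSTIME of year 0001 (the catalog default) generally means statistics were never collected:

```sql
-- CARDF = -1 and a default STATSTIME indicate RUNSTATS has never run
-- for this table, so the optimizer is working from defaults.
SELECT NAME, CARDF, NPAGESF, STATSTIME
FROM SYSIBM.SYSTABLES
WHERE CREATOR = 'MYSCHEMA'
  AND NAME    = 'ORDERS';
```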

2. Filter Factors Drive Everything

Filter factor = estimated fraction of rows satisfying a predicate. Equality: 1/COLCARDF. Range: interpolation from HIGH2KEY/LOW2KEY. Compound AND: FF1 * FF2 (independence assumption). Every access path decision flows from these estimates.
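These estimates can be reproduced by hand straight from the catalog. A sketch (table and schema names are placeholders) that computes the equality filter factor 1/COLCARDF for each column and shows the range endpoints the optimizer interpolates between:

```sql
-- Equality filter factor = 1 / COLCARDF.  COLCARDF = -1 means no
-- statistics have been collected for the column.
SELECT NAME,
       COLCARDF,
       CASE WHEN COLCARDF > 0 THEN 1.0 / COLCARDF END AS EQ_FILTER_FACTOR,
       HIGH2KEY, LOW2KEY
FROM SYSIBM.SYSCOLUMNS
WHERE TBCREATOR = 'MYSCHEMA'
  AND TBNAME    = 'ORDERS'
ORDER BY NAME;
```

If your hand-computed filter factor differs sharply from the row count the query actually returns, the statistics — not the optimizer — are the problem.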

3. The Independence Assumption Is the #1 Source of Misestimates

DB2 multiplies filter factors for AND predicates, assuming columns are statistically independent. Correlated columns (STATE/CITY, DEPARTMENT/JOB_TITLE, DATE/STATUS) produce wildly wrong estimates. Fix it with COLGROUP statistics on RUNSTATS.
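A RUNSTATS syntax sketch for DB2 for z/OS showing the fix; database, tablespace, and table names are placeholders:

```sql
-- Collect multi-column cardinality for the correlated pair so the
-- optimizer can replace FF(STATE) * FF(CITY) with the true combined
-- filter factor, plus the 10 most frequent value combinations.
RUNSTATS TABLESPACE MYDB.CUSTTS
  TABLE(MYSCHEMA.CUSTOMER)
    COLGROUP(STATE, CITY) FREQVAL COUNT 10
  SHRLEVEL CHANGE
```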

4. Matching Columns Stop at the First Range Predicate

In a composite index, matching columns are consecutive leading columns with sargable predicates. After the first range predicate (>, <, BETWEEN), all subsequent columns can only be screening columns. Design indexes with equality columns first, then the most selective range column.
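As an illustration with a hypothetical table and index:

```sql
-- REGION (equality) and ORDER_DATE (first range predicate) match,
-- so MATCHCOLS = 2; STATUS can only be applied as index screening.
CREATE INDEX MYSCHEMA.XORD1
  ON MYSCHEMA.ORDERS (REGION, ORDER_DATE, STATUS);

SELECT ORDER_ID
FROM MYSCHEMA.ORDERS
WHERE REGION     = 'EAST'
  AND ORDER_DATE > '2024-01-01'
  AND STATUS     = 'OPEN';
```

Reordering the index to (REGION, STATUS, ORDER_DATE) would make all three predicates matching, because both equality columns then precede the range column.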

5. CLUSTERRATIO Determines Random vs. Sequential I/O

A high CLUSTERRATIO (>95%) means index access produces near-sequential I/O — fast. A low CLUSTERRATIO (<50%) means mostly random I/O — the optimizer may prefer a tablespace scan. REORG restores clustering. Monitor CLUSTERRATIO as a leading indicator.
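A monitoring sketch (placeholder schema) that flags degraded indexes; CLUSTERRATIOF is the floating-point value (0 to 1) the optimizer actually uses:

```sql
-- Indexes whose clustering has fallen below 95% -- REORG candidates.
SELECT NAME, CLUSTERRATIOF, NLEAF, NLEVELS
FROM SYSIBM.SYSINDEXES
WHERE CREATOR = 'MYSCHEMA'
  AND CLUSTERRATIOF < 0.95
ORDER BY CLUSTERRATIOF;
```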

6. Index-Only Access Is the Goal for High-Volume Narrow Queries

When all columns in the SELECT, WHERE, ORDER BY, and GROUP BY exist in a single index (key + INCLUDE columns), DB2 never touches the base table. For CICS transactions executing 200,000 times daily, the difference between index-only and index-plus-base-table is enormous.
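In DB2 for z/OS, INCLUDE columns can be added to a unique index. A sketch with hypothetical names:

```sql
-- Unique key on ACCOUNT_ID; BALANCE is carried in the leaf pages only,
-- so the SELECT below is satisfied index-only (INDEXONLY = 'Y')
-- without touching the base table.
CREATE UNIQUE INDEX MYSCHEMA.XACCT1
  ON MYSCHEMA.ACCOUNT (ACCOUNT_ID)
  INCLUDE (BALANCE);

SELECT ACCOUNT_ID, BALANCE
FROM MYSCHEMA.ACCOUNT
WHERE ACCOUNT_ID = ?;
```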

7. Join Method Matters as Much as Access Path

Nested loop (METHOD=1): best for a small outer table driving an indexed inner table. Merge scan (METHOD=2): best for large-large joins without a useful index on the inner table. Hybrid join (METHOD=4): a nested loop variant that sorts intermediate RIDs to drive list prefetch, useful when the inner table's index is poorly clustered. A join method change can be more impactful than an access type change.

8. EXPLAIN Is Your Primary Diagnostic Tool

Every performance investigation starts with EXPLAIN. Read PLAN_TABLE (ACCESSTYPE, MATCHCOLS, METHOD, PREFETCH, INDEXONLY) and DSN_STATEMNT_TABLE (COST_CATEGORY, PROCSU, PROCMS). Compare against your baseline. The gap between expected and actual plan reveals the problem.
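A minimal EXPLAIN workflow sketch (query text and QUERYNO are placeholders):

```sql
-- Capture the access path under a known QUERYNO, then read it back.
EXPLAIN PLAN SET QUERYNO = 4101 FOR
  SELECT ORDER_ID
  FROM MYSCHEMA.ORDERS
  WHERE REGION = 'EAST' AND ORDER_DATE > '2024-01-01';

SELECT QBLOCKNO, PLANNO, ACCESSTYPE, ACCESSNAME,
       MATCHCOLS, METHOD, PREFETCH, INDEXONLY
FROM PLAN_TABLE
WHERE QUERYNO = 4101
ORDER BY QBLOCKNO, PLANNO;
```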

9. RUNSTATS Is Preventive Medicine

Run RUNSTATS after significant data changes. Always collect frequency statistics (FREQVAL) and column group statistics (COLGROUP) for predicate columns. Use statistics profiles for consistency. Use inline RUNSTATS with REORG. Schedule RUNSTATS when data is in its steady state, not during transient operations.
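Inline statistics with REORG can be sketched as follows (names are placeholders); one pass does both jobs, and the statistics reflect the freshly clustered state rather than a transient one:

```sql
-- Reorganize and collect statistics in a single utility execution.
REORG TABLESPACE MYDB.CUSTTS
  SHRLEVEL CHANGE
  STATISTICS TABLE(ALL) INDEX(ALL)
```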

10. Plan Stability Is an Operational Discipline

Use PLANMGMT(EXTENDED) on all critical packages — it's cheap insurance. Use APREUSE(WARN) during maintenance windows. Build automated EXPLAIN baseline comparison to catch regressions before production. Resort to OPTHINT only when everything else has failed, and document it.
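The discipline above maps onto three REBIND subcommands (a sketch; collection and package names are placeholders):

```sql
-- Keep original + previous + current copies of the access path:
REBIND PACKAGE(MYCOLL.MYPKG) PLANMGMT(EXTENDED)

-- During maintenance, try to reuse the existing access path,
-- warning (not failing) if reuse is impossible:
REBIND PACKAGE(MYCOLL.MYPKG) APREUSE(WARN) EXPLAIN(YES)

-- Production regression: fall back instantly to the previous copy,
-- then investigate the root cause at leisure:
REBIND PACKAGE(MYCOLL.MYPKG) SWITCH(PREVIOUS)
```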


The Quick-Reference Diagnostic Checklist

When a query performs poorly:

  • [ ] Run EXPLAIN. What are ACCESSTYPE, MATCHCOLS, METHOD, PREFETCH?
  • [ ] Check COST_CATEGORY. If 'B', run RUNSTATS immediately.
  • [ ] Compare current PLAN_TABLE to your baseline. What changed?
  • [ ] Query SYSIBM.SYSCOLUMNS for COLCARDF, HIGH2KEY, LOW2KEY.
  • [ ] Query SYSIBM.SYSINDEXES for CLUSTERRATIO, NLEAF, NLEVELS.
  • [ ] Calculate filter factors manually. Do they match expectations?
  • [ ] Check for non-sargable predicates (functions on columns, type mismatches).
  • [ ] Check for correlated predicates missing COLGROUP statistics.
  • [ ] Evaluate index design: column order, INCLUDE columns, covering potential.
  • [ ] If regression: PLANMGMT SWITCH for immediate fix, then investigate root cause.
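The baseline comparison step can be automated with an EXCEPT query, assuming you periodically snapshot PLAN_TABLE rows into a baseline table (PLAN_TABLE_BASELINE is a hypothetical name):

```sql
-- Rows where the current plan differs from the saved baseline;
-- an empty result means the access path is unchanged.
SELECT QUERYNO, QBLOCKNO, PLANNO, ACCESSTYPE, ACCESSNAME,
       MATCHCOLS, METHOD, PREFETCH, INDEXONLY
FROM PLAN_TABLE
WHERE QUERYNO = 4101
EXCEPT
SELECT QUERYNO, QBLOCKNO, PLANNO, ACCESSTYPE, ACCESSNAME,
       MATCHCOLS, METHOD, PREFETCH, INDEXONLY
FROM PLAN_TABLE_BASELINE
WHERE QUERYNO = 4101;
```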

What Changes After This Chapter

Before this chapter, you wrote SQL and hoped for good performance. After this chapter, you:

  • Design indexes for the optimizer, not for yourself. Column order, INCLUDE columns, and clustering are deliberate decisions based on predicate patterns and the cost model.
  • Read EXPLAIN output fluently, understanding what each PLAN_TABLE column means and what it reveals about the optimizer's reasoning.
  • Manage statistics as infrastructure, not as an afterthought. RUNSTATS profiles, frequency statistics, column groups, and scheduling are part of your operational discipline.
  • Prevent plan regressions proactively through PLANMGMT, APREUSE, automated baselines, and data management protocols.
  • Diagnose performance problems from the optimizer's perspective, asking "What did the cost model see?" instead of "Why is DB2 slow?"

This is the foundation for everything in Part II. Chapters 7–12 build on optimizer concepts: locking (Chapter 7) interacts with access path choices, stored procedures (Chapter 8) create new optimization contexts, and partitioning (Chapter 9) introduces partition-level access paths. The optimizer is always present.