Part V: Performance
Performance is where you find out whether you actually understand DB2 or whether you have just been going through the motions. Everything in Parts I through IV — the relational model, the SQL, the design, the administration — converges here. A performance problem does not care which chapter you skipped. It will exploit every gap in your knowledge, and it will do so at the worst possible time.
This is the most technical part of the book. It is quantitative. It is metric-driven. There is no hand-waving. When I say a particular access path is better than another, I will show you the cost numbers. When I say a buffer pool configuration change will help, I will explain the math behind the hit ratio improvement. When I say a locking problem exists, I will show you the trace output that proves it. If you are not comfortable with numbers, get comfortable. Performance tuning without metrics is just superstition.
What This Part Covers
Six chapters, each targeting a specific dimension of DB2 performance.
Chapter 22 dissects the DB2 optimizer. This is not a surface-level overview. We go inside the optimizer's decision-making process: how it estimates cardinality, how it calculates cost, how it evaluates access paths, how it chooses join methods (nested loop, merge scan, hybrid join on z/OS; nested loop, merge join, hash join on LUW), and how it determines join order. We cover filter factor estimation, host variable effects, literal replacement, and the circumstances under which the optimizer gets it wrong. You will learn to think like the optimizer, which is the single most valuable performance skill a DB2 professional can develop.
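As a preview of the kind of catalog arithmetic Chapter 22 relies on: for a simple equality predicate with no distribution statistics, the optimizer's default filter factor is roughly 1/COLCARDF, the inverse of the column's distinct-value count. On z/OS you can compute that estimate yourself from SYSIBM.SYSCOLUMNS. This is a sketch; MERIDIAN.ACCOUNTS and the column BRANCH_ID are assumed example names, not objects from this book's text.

```sql
-- Default equality filter factor = 1 / COLCARDF (distinct values).
-- MERIDIAN.ACCOUNTS and BRANCH_ID are hypothetical example names.
SELECT NAME,
       COLCARDF,
       CASE WHEN COLCARDF > 0
            THEN 1.0 / COLCARDF
            ELSE NULL
       END AS EQ_FILTER_FACTOR
FROM   SYSIBM.SYSCOLUMNS
WHERE  TBCREATOR = 'MERIDIAN'
AND    TBNAME    = 'ACCOUNTS'
AND    NAME      = 'BRANCH_ID';
```

If COLCARDF is 500, the optimizer assumes an equality predicate on that column qualifies about 0.2% of the rows. When that assumption is badly wrong, distribution statistics (or a rewrite) are the fix.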
Chapter 23 is the EXPLAIN chapter. EXPLAIN is how you see what the optimizer decided and why. On z/OS, we cover the PLAN_TABLE, DSN_STATEMNT_TABLE, DSN_FUNCTION_TABLE, and the full set of EXPLAIN tables that DB2 12 and later populate. On LUW, we cover db2expln, db2exfmt, and the explain tables. We work through dozens of EXPLAIN output examples, teaching you to read access paths the way a radiologist reads an X-ray — quickly, accurately, and with immediate recognition of problems. By the end of this chapter, you will be able to look at an EXPLAIN and know within seconds whether the access path is reasonable or whether something has gone wrong.
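The basic z/OS workflow looks like this: run EXPLAIN with a QUERYNO tag, then read the tagged rows back from PLAN_TABLE. A minimal sketch, again using the hypothetical MERIDIAN.ACCOUNTS table:

```sql
-- Capture the access path; QUERYNO tags the resulting PLAN_TABLE rows.
EXPLAIN PLAN SET QUERYNO = 101 FOR
SELECT ACCT_ID, BALANCE
FROM   MERIDIAN.ACCOUNTS
WHERE  BRANCH_ID = ?;

-- Read it back: access type, index used, matching columns, prefetch.
SELECT QBLOCKNO, PLANNO, METHOD, ACCESSTYPE,
       ACCESSNAME, MATCHCOLS, PREFETCH, INDEXONLY
FROM   PLAN_TABLE
WHERE  QUERYNO = 101
ORDER BY QBLOCKNO, PLANNO;
```

ACCESSTYPE and MATCHCOLS alone answer the first diagnostic question: is DB2 using an index, and on how many leading columns is it matching?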
Chapter 24 addresses SQL tuning — the art and science of rewriting SQL to guide the optimizer toward better access paths. We cover predicate analysis: sargable versus non-sargable predicates, Stage 1 versus Stage 2 filtering on z/OS, index-matching predicates, and the way data type mismatches can demote an indexable Stage 1 predicate to Stage 2. We cover the rewrite techniques: converting correlated subqueries to joins, replacing OR with UNION, materializing CTEs when beneficial, and using optimizer directives such as OPTIMIZE FOR n ROWS and FETCH FIRST n ROWS ONLY when the optimizer needs guidance. Every technique is demonstrated with before-and-after EXPLAIN output showing the concrete improvement.
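The OR-to-UNION rewrite gives the flavor of what Chapter 24 covers. A sketch, using the same hypothetical MERIDIAN.ACCOUNTS table and assumed indexes on BRANCH_ID and OFFICER_ID:

```sql
-- Before: an OR across two different columns often forces a scan,
-- because no single index satisfies both predicates.
SELECT ACCT_ID, BALANCE
FROM   MERIDIAN.ACCOUNTS
WHERE  BRANCH_ID = 42 OR OFFICER_ID = 7;

-- After: each leg can use its own index. UNION (not UNION ALL)
-- deduplicates rows that satisfy both predicates.
SELECT ACCT_ID, BALANCE
FROM   MERIDIAN.ACCOUNTS
WHERE  BRANCH_ID = 42
UNION
SELECT ACCT_ID, BALANCE
FROM   MERIDIAN.ACCOUNTS
WHERE  OFFICER_ID = 7;
```

Whether the rewrite actually wins depends on the indexes and the filter factors involved, which is why the chapter insists on before-and-after EXPLAIN output rather than rules of thumb.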
Chapter 25 covers buffer pool tuning and memory management. Buffer pools are the primary performance lever in DB2 — the difference between reading from memory and reading from disk is orders of magnitude, and buffer pool configuration determines which pages stay in memory and which do not. We cover sizing methodology, pool separation strategies (separating index pages from data pages, separating hot tables from cold ones), prefetch configuration, and the monitoring metrics that tell you whether your buffer pool configuration is working. On z/OS, we cover the ALTER BUFFERPOOL command, group buffer pools in data sharing, and virtual pool sizing in the context of LPAR memory constraints. On LUW, we cover ALTER BUFFERPOOL, self-tuning memory manager (STMM), and the interaction between buffer pools and the operating system file cache.
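The central metric here is the hit ratio: 1 minus physical reads divided by logical reads. On LUW, the MON_GET_BUFFERPOOL table function exposes the raw counters; a sketch of the calculation for data and index pages combined:

```sql
-- LUW: overall hit ratio per buffer pool, data + index pages.
-- DECIMAL() avoids integer division; -2 means all members.
SELECT BP_NAME,
       POOL_DATA_L_READS + POOL_INDEX_L_READS AS LOGICAL_READS,
       POOL_DATA_P_READS + POOL_INDEX_P_READS AS PHYSICAL_READS,
       CASE WHEN POOL_DATA_L_READS + POOL_INDEX_L_READS > 0
            THEN 100.0 * (1.0 -
                 (POOL_DATA_P_READS + POOL_INDEX_P_READS) /
                 DECIMAL(POOL_DATA_L_READS + POOL_INDEX_L_READS))
            ELSE NULL
       END AS HIT_RATIO_PCT
FROM   TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS T
ORDER BY BP_NAME;
```

On z/OS, the same arithmetic applies to the GETPAGE and synchronous-read counters reported by DISPLAY BUFFERPOOL.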
Chapter 26 tackles locking and concurrency. DB2's locking mechanism ensures data integrity under concurrent access, but it also creates contention that limits throughput. We cover the lock hierarchy (tablespace, table, page, row on z/OS; table, row on LUW), lock modes (S, U, X, IS, IX, SIX), lock escalation, lock timeout, deadlock detection, and the isolation levels (RR, RS, CS, UR) that control the trade-off between consistency and concurrency. We diagnose real lock contention scenarios using Meridian Bank workloads: the month-end batch job that escalates to a tablespace lock and blocks online transactions, the hot-row update pattern that causes repeated timeouts, the poorly written cursor that holds locks longer than necessary. Each scenario comes with a diagnosis methodology and a concrete resolution.
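Diagnosing contention starts with seeing who is waiting on whom. On LUW, the MON_LOCKWAITS administrative view gives a live picture; a sketch of the first query you would run:

```sql
-- LUW: current lock waits -- holder, waiter, object, and elapsed wait.
SELECT TABSCHEMA, TABNAME,
       LOCK_OBJECT_TYPE, LOCK_MODE, LOCK_MODE_REQUESTED,
       HLD_APPLICATION_HANDLE AS HOLDER,
       REQ_APPLICATION_HANDLE AS WAITER,
       LOCK_WAIT_ELAPSED_TIME
FROM   SYSIBMADM.MON_LOCKWAITS
ORDER BY LOCK_WAIT_ELAPSED_TIME DESC;
```

The z/OS equivalents (DISPLAY DATABASE LOCKS and the IFCID lock traces) are covered in the chapter itself.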
Chapter 27 pulls everything together into a performance diagnosis methodology. When someone reports that "the database is slow," what do you do? This chapter provides a systematic approach: identify the scope (system-wide versus single query), gather baseline metrics, form hypotheses, test them with the tools from Chapters 22-26, implement changes, and verify improvement. We work through end-to-end case studies using the Meridian Bank environment, starting from a symptom and following the diagnostic trail to root cause and resolution. This is the chapter that transforms isolated tuning techniques into a coherent professional practice.
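The "identify the scope" step usually begins with a triage query. On LUW, the package cache tells you where the CPU is actually going; a sketch:

```sql
-- LUW: top 10 statements by accumulated CPU time since cache entry.
SELECT SUBSTR(STMT_TEXT, 1, 60) AS STMT,
       NUM_EXECUTIONS,
       TOTAL_CPU_TIME,
       TOTAL_ACT_TIME
FROM   TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) AS T
ORDER BY TOTAL_CPU_TIME DESC
FETCH FIRST 10 ROWS ONLY;
```

If one statement dominates, the problem is a query-tuning problem; if the cost is spread evenly, you are more likely looking at a system-level issue such as buffer pool pressure or lock contention.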
Why It Matters
Performance is the most visible aspect of database quality. Users do not see your beautiful schema design or your robust backup strategy. They see response time. When queries are fast, nobody thanks the DBA. When queries are slow, everybody notices.
More importantly, performance problems are almost never self-contained. A slow query consumes CPU cycles that other queries need. A lock contention issue blocks transactions, which backs up application threads, which exhausts connection pools, which causes application errors that cascade to the end user. A buffer pool that is too small causes excessive I/O, which saturates the storage subsystem, which slows down every database on the shared infrastructure. Performance problems propagate, and they propagate fast.
The skills in this part are what separate a DBA who can manage DB2 from a DBA who can troubleshoot DB2 under pressure. When the VP of operations is standing behind you asking why the trading system is slow, you need a methodology. You need to know which metrics to check first, how to interpret what you see, and how to take action that will improve the situation without creating a new problem. That is what this part teaches.
The Quantitative Expectation
I want to set an explicit expectation: this part uses numbers extensively. You will calculate filter factors from catalog statistics. You will estimate I/O counts from EXPLAIN output. You will compute buffer pool hit ratios and project the impact of adding memory. You will measure lock wait times and determine whether they represent a real problem or acceptable overhead.
This is not optional. Performance tuning that is not grounded in measurement is guesswork, and guesswork in production is negligence. If an approach works, you should be able to prove it with numbers. If it does not work, the numbers will tell you why.
Every claim in this part is backed by measurable evidence. When we tune a query and the access path changes, you will see the before-and-after cost estimates. When we adjust a buffer pool, you will see the hit ratio change. When we resolve a locking problem, you will see the reduction in lock wait time and timeout counts. This discipline is not just pedagogical — it is how professionals work.
Building on the Foundation
If you skimmed the architecture chapter in Part I, go back and read it now. Buffer pool tuning makes no sense without understanding DB2's memory architecture. If you rushed through the optimizer basics in Part III's index design chapter, revisit them. SQL tuning depends on understanding how the optimizer uses indexes. If you did not practice running RUNSTATS in Part IV, do it now. Half of all optimizer problems trace back to missing or stale statistics, and understanding RUNSTATS is prerequisite knowledge for Chapter 22.
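For reference, the RUNSTATS invocations look like this on the two platforms; the database, tablespace, and table names are assumed examples:

```sql
-- LUW: table and index statistics with distribution detail.
RUNSTATS ON TABLE MERIDIAN.ACCOUNTS
    WITH DISTRIBUTION AND DETAILED INDEXES ALL;

-- z/OS utility control statement: equivalent collection for a
-- tablespace (MERIDDB.ACCTTS is a hypothetical name).
RUNSTATS TABLESPACE MERIDDB.ACCTTS TABLE(ALL) INDEX(ALL)
```

If you cannot run these in your lab environment and explain what they collect, stop and review Part IV before starting Chapter 22.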
This part rewards thorough preparation. It punishes gaps.
How to Approach This Part
Read Chapters 22 and 23 together. The optimizer and EXPLAIN are two sides of the same coin — you cannot tune what you cannot see, and you cannot interpret what you see without understanding how the optimizer works.
Chapter 24 (SQL tuning) should come next, followed by Chapter 25 (buffer pools) and Chapter 26 (locking). These three chapters address the three primary performance levers: access path selection, memory management, and concurrency control.
Chapter 27 is the capstone for this part. Read it after everything else. It assumes familiarity with all the tools and techniques from the preceding chapters and synthesizes them into a diagnostic workflow.
Work the exercises with your lab environment's monitoring tools active. Watch the metrics change as you apply the techniques. Performance tuning is an empirical discipline, and the only way to develop intuition is to observe cause and effect repeatedly.
Chapters in This Part
- Chapter 22: How the DB2 Optimizer Works — Cost-Based Optimization, Access Path Selection, and Why Your Query Plan Changed
- Chapter 23: Reading and Interpreting EXPLAIN — The Single Most Important DBA Skill
- Chapter 24: SQL Tuning — Rewriting Queries for Performance Without Changing Results
- Chapter 25: Buffer Pool and Memory Tuning — Giving DB2 the Right Resources
- Chapter 26: Locking, Concurrency, and Isolation Levels — Balancing Throughput and Consistency
- Chapter 27: Performance Diagnosis Methodology — A Systematic Approach to Finding and Fixing the Bottleneck