Chapter 21: Key Takeaways

These are the essential points from Chapter 21 that you will need throughout the rest of this book. If you can explain each of these clearly and from memory, you have a solid foundation for catalog-driven database administration.


Core Concepts to Remember

  1. The DB2 catalog is the definitive source of truth about your database. It stores metadata about every object: tables, columns, indexes, relationships, tablespaces, privileges, packages, triggers, and more. DB2 maintains it automatically and transactionally as part of DDL operations. You never insert into or delete from the catalog under normal circumstances.

  2. On z/OS, the catalog lives in SYSIBM tables within DSNDB06. On LUW, you access the catalog through SYSCAT views. The underlying structures differ, but the concept is the same. The key tables/views you must know: SYSTABLES/SYSCAT.TABLES, SYSCOLUMNS/SYSCAT.COLUMNS, SYSINDEXES/SYSCAT.INDEXES, SYSKEYS/SYSCAT.INDEXCOLUSE, SYSRELS/SYSCAT.REFERENCES, SYSTABLESPACE/SYSCAT.TABLESPACES, and SYSTABAUTH/SYSCAT.TABAUTH.

  3. The z/OS directory (DSNDB01) is separate from the catalog and more critical for DB2 startup. The directory stores runtime structures: DBDs (database descriptors), SPT01 (skeleton packages), SCT02 (skeleton plans), SYSLGRNX (log ranges), and SYSUTILX (utility tracking). Losing the directory prevents DB2 from starting; losing the catalog prevents only DDL operations and new binds. DB2 for LUW has no separate directory — runtime structures are managed through other mechanisms.

  4. The catalog is queryable with standard SQL. This is one of the most powerful features of the relational model. You can SELECT from catalog tables, JOIN them together, filter with WHERE clauses, and aggregate with GROUP BY. Every DBA task — finding objects, checking structures, auditing privileges, performing impact analysis — starts with a catalog query.

  5. Statistics columns in the catalog are set by RUNSTATS, not maintained in real time. CARD, NPAGES, COLCARDF, HIGH2KEY, LOW2KEY, CLUSTERRATIOF, and similar columns are point-in-time snapshots. A NULL STATSTIME means RUNSTATS has never been run. Stale statistics lead to poor optimizer decisions — monitoring statistics freshness through the catalog is a core DBA responsibility.

  6. Real-Time Statistics (z/OS) provide continuous change tracking without RUNSTATS overhead. SYSIBM.SYSTABLESPACESTATS and SYSIBM.SYSINDEXSPACESTATS track inserts, deletes, updates, space usage, and change counts since the last REORG, COPY, and RUNSTATS. These enable intelligent utility scheduling based on actual need rather than fixed calendars.

  7. Impact analysis is a catalog problem. Before any schema change, query the catalog for foreign key dependencies (SYSRELS), view dependencies (SYSVIEWDEP), package dependencies (SYSPACKDEP), trigger dependencies (SYSTRIGGERS), and index dependencies (SYSKEYS). Missing a dependency leads to cascading failures after deployment.

  8. Catalog-based automation transforms DBA productivity. Generating DDL from catalog metadata, building automated impact analysis scripts, creating change detection reports, scheduling RUNSTATS based on volatility, and building daily health dashboards — these are the practices that separate reactive firefighting from proactive management.

  9. Security auditing is a catalog query. SYSTABAUTH/SYSCAT.TABAUTH records every table-level privilege. SYSCOLAUTH/SYSCAT.COLAUTH records column-level privileges. SYSDBAUTH/SYSCAT.DBAUTH records database-level privileges. Every security audit question — "who can access what?" — has a catalog answer.

  10. Build a personal catalog toolkit. A working DBA needs a curated set of catalog queries for daily health checks, index analysis, space monitoring, privilege reviews, and impact analysis. The Meridian Bank Catalog Toolkit in Section 21.12 is a starting point; customize and extend it for your own environment.
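Takeaway 4 can be made concrete with a minimal example: describing a table's columns directly from the catalog. The schema and table names (MERIDIAN, ACCOUNTS) are illustrative stand-ins drawn from the Meridian Bank running example, not objects this chapter defines.

```sql
-- z/OS: list a table's columns straight from the catalog.
SELECT C.COLNO, C.NAME, C.COLTYPE, C.LENGTH, C.NULLS
FROM   SYSIBM.SYSCOLUMNS C
WHERE  C.TBCREATOR = 'MERIDIAN'
AND    C.TBNAME    = 'ACCOUNTS'
ORDER BY C.COLNO;

-- LUW equivalent against the SYSCAT views:
SELECT COLNO, COLNAME, TYPENAME, LENGTH, NULLS
FROM   SYSCAT.COLUMNS
WHERE  TABSCHEMA = 'MERIDIAN'
AND    TABNAME   = 'ACCOUNTS'
ORDER BY COLNO;
```

The same pattern — filter on creator/schema and name, order by a catalog-maintained column — underlies nearly every query in the rest of this list.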
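A sketch of the statistics-freshness check from takeaway 5, for z/OS. The 30-day threshold is an arbitrary illustration, not a DB2 default; choose a window that matches your environment's rate of change.

```sql
-- Tables whose statistics are missing or more than 30 days old.
SELECT CREATOR, NAME, STATSTIME
FROM   SYSIBM.SYSTABLES
WHERE  TYPE = 'T'
AND   (STATSTIME IS NULL
       OR STATSTIME < CURRENT TIMESTAMP - 30 DAYS)
ORDER BY STATSTIME;
```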
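The Real-Time Statistics tables in takeaway 6 support need-based utility scheduling. A hedged sketch: the 20% change threshold below is an illustrative site policy, not an IBM recommendation.

```sql
-- z/OS: tablespaces where changes since the last REORG exceed
-- 20% of current rows -- candidates for the next REORG window.
SELECT DBNAME, NAME, TOTALROWS,
       REORGINSERTS + REORGDELETES + REORGUPDATES AS CHANGES
FROM   SYSIBM.SYSTABLESPACESTATS
WHERE  TOTALROWS > 0
AND    (REORGINSERTS + REORGDELETES + REORGUPDATES) > TOTALROWS * 0.2
ORDER BY CHANGES DESC;
```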
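Two of the dependency checks from takeaway 7, sketched for z/OS; MERIDIAN.ACCOUNTS again stands in for the table being changed.

```sql
-- Child tables whose foreign keys reference the table being changed.
SELECT CREATOR, TBNAME, RELNAME
FROM   SYSIBM.SYSRELS
WHERE  REFTBCREATOR = 'MERIDIAN'
AND    REFTBNAME    = 'ACCOUNTS';

-- Packages that depend on the table and may need a rebind.
SELECT DCOLLID, DNAME
FROM   SYSIBM.SYSPACKDEP
WHERE  BQUALIFIER = 'MERIDIAN'
AND    BNAME      = 'ACCOUNTS'
AND    BTYPE      = 'T';
```

Run the full set of dependency queries — SYSRELS, SYSVIEWDEP, SYSPACKDEP, SYSTRIGGERS, SYSKEYS — before any schema change, not just the two shown here.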
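Takeaway 9's "who can access what?" question, as a z/OS query against SYSTABAUTH; the table name is the same illustrative example.

```sql
-- Who holds which table-level privileges on MERIDIAN.ACCOUNTS.
SELECT GRANTEE, GRANTOR,
       SELECTAUTH, INSERTAUTH, UPDATEAUTH, DELETEAUTH
FROM   SYSIBM.SYSTABAUTH
WHERE  TCREATOR = 'MERIDIAN'
AND    TTNAME   = 'ACCOUNTS'
ORDER BY GRANTEE;
```

On LUW the equivalent view is SYSCAT.TABAUTH, with TABSCHEMA/TABNAME in place of TCREATOR/TTNAME.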


What to Carry Forward

As you proceed through the remaining chapters, keep these catalog connections in mind:

  • Chapter 22 (Optimizer): The optimizer reads catalog statistics to choose access paths. Understanding what statistics the catalog stores and when they become stale is prerequisite knowledge for optimizer tuning.

  • Chapter 23 (EXPLAIN): EXPLAIN output references catalog objects (tables, indexes, tablespaces). Your ability to interpret EXPLAIN depends on knowing the catalog structures in which those objects are defined.

  • Chapter 24 (SQL Tuning): Many SQL tuning techniques involve checking catalog statistics — column cardinality, index clustering ratio, table size — to understand why the optimizer chose a particular plan.

  • Chapter 27 (Performance Diagnosis): Systematic performance diagnosis begins with catalog queries to understand the objects involved, their statistics, their indexes, and their physical layout.


Preview of Chapter 22

In the next chapter, you will learn how the DB2 optimizer works — how it reads catalog statistics to estimate costs, compares alternative access paths, and selects the plan it believes will be most efficient. You will see the direct connection between the catalog statistics you learned about in this chapter and the optimizer decisions they influence. Every CARD value, every COLCARDF value, every CLUSTERRATIOF value matters — the optimizer uses them all.

