Case Study 1: GlobalBank's IMS Customer Database Migration

Background

GlobalBank's customer master database has been running in IMS since 1989. Originally designed to support teller terminals, it now serves as the authoritative source of customer data for the entire organization. The hierarchy:

CUSTOMER (root) — 3.2 million records
├── ACCOUNT — avg 2.7 per customer
│   ├── TRANSACTION — avg 847 per account/year
│   └── LOAN — avg 0.4 per account
│       └── PAYMENT — avg 24 per loan/year
├── ADDRESS — avg 1.3 per customer
├── PHONE — avg 2.1 per customer
└── DOCUMENT — avg 4.2 per customer
    └── DOC-IMAGE — avg 1.0 per document

The database processes 2.3 million transactions daily with sub-second response times for online inquiries and completes the nightly batch cycle in under 90 minutes.

The Challenge

In 2023, GlobalBank launched a mobile banking app and a customer analytics platform. Both needed access to customer data, but neither could (or should) make DL/I calls directly:

  • The mobile app team uses React Native and Node.js
  • The analytics platform uses Python and Apache Spark
  • Neither team has mainframe skills
  • Both need real-time or near-real-time data

Priya Kapoor, the systems architect, was tasked with enabling modern access without disrupting the production IMS system.

The Solution

Priya designed a three-tier strategy:

Tier 1: API Wrapper via IMS Connect + z/OS Connect EE

For real-time customer lookups (mobile app), Priya exposed existing IMS inquiry transactions as RESTful APIs:

  1. The existing IMS transaction CINQ (Customer Inquiry) already retrieves customer data via DL/I calls
  2. z/OS Connect EE maps the COBOL copybook to JSON
  3. An API gateway routes mobile app requests to z/OS Connect EE
  4. Response time: 15-25ms (compared to 3-5ms from a 3270 terminal)

The COBOL inquiry program did not change at all. The modernization was purely in the infrastructure layer.
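To make step 2 concrete: z/OS Connect EE performs the copybook-to-JSON mapping declaratively, from the copybook itself. As a rough illustration of what that transformation does, here is a hypothetical Python sketch. The field layout (customer ID, name, status) is invented for illustration; the case study does not show GlobalBank's actual CINQ copybook.

```python
# Hypothetical sketch of the copybook-to-JSON mapping that z/OS Connect EE
# performs declaratively. The field layout below is invented for illustration,
# mirroring a COBOL record like:
#   05 CUST-ID      PIC 9(10).
#   05 CUST-NAME    PIC X(30).
#   05 CUST-STATUS  PIC X(1).

# Each tuple: (JSON field name, offset, length) in the fixed-width record
COPYBOOK_LAYOUT = [
    ("customerId", 0, 10),
    ("customerName", 10, 30),
    ("status", 40, 1),
]

def copybook_record_to_json(record: str) -> dict:
    """Map one fixed-width COBOL record to a JSON-ready dict."""
    return {
        name: record[offset:offset + length].strip()
        for name, offset, length in COPYBOOK_LAYOUT
    }

# Example: a 41-byte record as CINQ might return it
record = "0000123456" + "JANE DOE".ljust(30) + "A"
print(copybook_record_to_json(record))
# → {'customerId': '0000123456', 'customerName': 'JANE DOE', 'status': 'A'}
```

The point of the sketch is that the mapping is pure data transformation, which is why the COBOL program itself could stay untouched.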

Tier 2: Change Data Capture to DB2

For analytics (Python/Spark), Priya implemented IMS-to-DB2 replication:

  1. IMS log records are captured by IBM InfoSphere CDC
  2. Changes are replicated to a DB2 shadow database in near real time (< 30-second lag)
  3. The analytics platform queries DB2 via JDBC
  4. The DB2 schema denormalizes the hierarchy into relational tables
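The denormalization in step 4 flattens each IMS segment type into its own relational table, propagating the parent's key down to every child row. In production this mapping is done by the replication tooling, not hand-written code; the Python sketch below (with invented record shapes, using segment names from the hierarchy above) just shows the idea:

```python
# Sketch of the hierarchy-to-relational denormalization performed in Tier 2.
# The record shapes here are invented for illustration; the CDC tooling does
# this mapping in production, not application code.

def flatten_customer(customer: dict) -> dict:
    """Flatten one CUSTOMER hierarchy into per-table row lists.

    Each child segment becomes a row in its own table, with the parent
    key (customerId / accountId) carried down as a foreign key.
    """
    rows = {"CUSTOMER": [], "ACCOUNT": [], "TRANSACTION": []}
    cust_id = customer["customerId"]
    rows["CUSTOMER"].append({"customerId": cust_id, "name": customer["name"]})
    for account in customer.get("accounts", []):
        acct_id = account["accountId"]
        rows["ACCOUNT"].append({"accountId": acct_id, "customerId": cust_id})
        for txn in account.get("transactions", []):
            rows["TRANSACTION"].append(
                {"txnId": txn["txnId"], "accountId": acct_id,
                 "amount": txn["amount"]}
            )
    return rows

# Example: one customer, one account, two transactions
sample = {
    "customerId": "C1",
    "name": "JANE DOE",
    "accounts": [
        {"accountId": "A1",
         "transactions": [{"txnId": "T1", "amount": 100.0},
                          {"txnId": "T2", "amount": -40.0}]}
    ],
}
tables = flatten_customer(sample)
print(len(tables["TRANSACTION"]))  # → 2
```

Once flattened this way, the Spark side sees ordinary relational tables and can join on the propagated keys, with no knowledge of the original hierarchy.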

Tier 3: Event Streaming to Kafka

For real-time event processing (fraud detection, notifications):

  1. IMS transactions publish events to IBM MQ
  2. A Kafka Connect bridge moves events to Kafka topics
  3. Stream processing applications consume the events
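A stream processing consumer in step 3 might apply rules like the velocity check sketched below. The 60-second window and the three-transactions-per-window threshold are invented for illustration; the case study does not specify GlobalBank's fraud rules.

```python
# Sketch of the kind of rule a Tier 3 stream processor might apply to
# transaction events consumed from Kafka. The window size and threshold
# are invented for illustration.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_TXNS_PER_WINDOW = 3

class VelocityChecker:
    """Flag an account that produces too many transactions in a short window."""

    def __init__(self):
        self._recent = defaultdict(deque)  # accountId -> recent timestamps

    def check(self, event: dict) -> bool:
        """Return True if this event should be flagged for review."""
        ts, acct = event["timestamp"], event["accountId"]
        window = self._recent[acct]
        window.append(ts)
        # Drop timestamps that have aged out of the window
        while window and ts - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_TXNS_PER_WINDOW

checker = VelocityChecker()
events = [{"accountId": "A1", "timestamp": t} for t in (0, 5, 10, 15)]
flags = [checker.check(e) for e in events]
print(flags)  # → [False, False, False, True]
```

Because the rule only needs the event stream, not the IMS database, it can run entirely off-mainframe, which is precisely what the event-streaming tier enables.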

Discussion Questions

  1. Why did Priya choose to keep the IMS database rather than migrate to DB2?
  2. What are the risks of the data replication approach (Tier 2)? What happens if replication falls behind?
  3. The mobile app team wanted direct DB2 access for writes. Why did Priya insist that all writes go through IMS?
  4. How does this architecture exemplify the "Modernization Spectrum" theme?
  5. If GlobalBank were starting from scratch today, would they choose IMS? What factors would influence that decision?

Lessons Learned

  • The existing COBOL/IMS programs were the most reliable components in the architecture
  • The replication lag (< 30 seconds) was acceptable for analytics but would not be acceptable for account balance queries
  • Training costs for maintaining IMS expertise were a growing concern — only 3 of GlobalBank's 42 developers could work on IMS programs
  • The API wrapper approach delivered 80% of the business value with 20% of the effort compared to a full migration