Chapter 35: RESTful APIs and Modern Access Patterns: z/OS Connect, GraphQL, and Event Streaming

"The mainframe doesn't need to be replaced; it needs a modern front door. z/OS Connect is that front door — it lets your fifty-year-old COBOL talk to a twenty-day-old mobile app without either of them knowing the difference." — An IBM Distinguished Engineer, at a z/OS modernization summit

The world has changed around DB2. When DB2 was born, applications connected to databases through embedded SQL, CLI calls, or ODBC drivers — all synchronous, tightly coupled, and purpose-built. Today, a single database might serve a React-based web application, an iOS mobile app, a Python machine learning pipeline, a Kafka-based event stream, a GraphQL aggregation layer, and a batch COBOL program — all simultaneously, all with different latency expectations, data shape requirements, and authentication mechanisms.

This chapter bridges the gap between DB2's powerful data management capabilities and the modern application architectures that need to consume that data. We will explore four major patterns: native REST services built into DB2, the z/OS Connect EE gateway for mainframe modernization, application-layer REST and GraphQL APIs, and event streaming with Kafka. For Meridian National Bank, this means transforming the stored procedures and data models we built in previous chapters into services that mobile banking apps, analytics dashboards, and partner integrations can consume.

This is not a chapter about abandoning DB2's strengths. It is about amplifying them — making the decades of investment in data integrity, security, and performance accessible to every modern client, without compromising what makes DB2 enterprise-grade.


35.1 The Modern Data Access Challenge

35.1.1 The Landscape Has Shifted

Consider what Meridian National Bank's IT organization faces today:

Mobile banking. Customers expect real-time balance checks, instant transfers, mobile check deposit, and push notifications — all from a smartphone app that communicates over HTTPS with JSON payloads. The backend is DB2 on z/OS, surrounded by decades of COBOL business logic.

Partner integrations. Fintech partners need to query account status, initiate payments, and receive transaction notifications through standardized APIs. They expect OpenAPI (Swagger) documentation, OAuth2 authentication, and rate limiting.

Real-time analytics. The fraud detection team needs a continuous stream of transaction events — not a batch file generated overnight, but a real-time event feed that can trigger alerts within seconds of a suspicious transaction.

Microservices migration. New features are being built as microservices in Java and Node.js, deployed on Kubernetes. These services need to access DB2 data without embedding JDBC connection strings and SQL statements into every service.

Internal tools. Developers building internal dashboards want a flexible query interface — not pre-built reports, but the ability to request exactly the data they need. GraphQL fits this pattern perfectly.

Each of these use cases requires a different data access pattern. The common thread: they all need DB2 data, and they all need it through modern protocols.

35.1.2 The Access Pattern Spectrum

Modern DB2 data access exists on a spectrum:

Pattern               Latency   Coupling  Complexity  Best For
--------------------  --------  --------  ----------  --------------------------
Direct JDBC/ODBC      Lowest    Highest   Low         Internal services, batch
DB2 REST Services     Low       Medium    Low         Simple CRUD APIs
z/OS Connect EE       Low       Low       Medium      Mainframe modernization
Application REST API  Medium    Low       Medium      General-purpose APIs
GraphQL               Medium    Low       High        Complex, flexible queries
Kafka/Event Stream    Variable  Lowest    High        Real-time event processing

No single pattern serves all needs. Enterprise architectures typically employ three or four of these simultaneously, each for its appropriate use case.

35.1.3 Architectural Principles

Before diving into implementation, establish these principles that guide every pattern in this chapter:

  1. DB2 remains the source of truth. APIs and events are windows into the database, not replacements for it.
  2. Security is non-negotiable. Every access path must authenticate, authorize, encrypt, and audit.
  3. Encapsulate, don't expose. APIs should expose business operations (transfer funds, get statement), not database operations (UPDATE ACCOUNTS, SELECT FROM TRANSACTIONS).
  4. Design for failure. Network calls fail. APIs must handle timeouts, retries, and partial failures gracefully.
  5. Monitor everything. Every API call should be traceable from the mobile app, through the API gateway, to the specific DB2 SQL statement it executed.
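Principle 4 deserves a concrete shape. A minimal retry-with-backoff wrapper might look like the following Python sketch (the function names and delay values are illustrative, not part of any DB2 client API):

```python
import time

def call_with_retries(fn, max_attempts=3, base_delay=0.05, retriable=(TimeoutError,)):
    """Invoke fn(), retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retriable:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.05s, 0.1s, 0.2s, ...

# A flaky call that succeeds on its third attempt:
attempts = {"n": 0}
def flaky_api_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated network timeout")
    return {"status": "ok"}

result = call_with_retries(flaky_api_call)
```

The key design point is bounding the retries: an API layer that retries forever turns a DB2 slowdown into a thundering herd.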

35.2 [z/OS] z/OS Connect EE

z/OS Connect Enterprise Edition is IBM's strategic solution for exposing z/OS assets — including DB2 stored procedures, CICS transactions, and IMS programs — as RESTful APIs. If Meridian Bank runs DB2 on z/OS (and most banks do), z/OS Connect is the primary path to modernization.

35.2.1 Architecture

z/OS Connect EE runs as a Liberty server on z/OS. Its architecture has three layers:

Mobile App / Web Client
        |
        v
  [API Gateway / Load Balancer]
        |
        v
  [z/OS Connect EE - Liberty Server]
        |
    +---+---+
    |       |
    v       v
[Service    [Service
 Provider:   Provider:
 DB2 SP]     CICS TXN]
    |           |
    v           v
  DB2         CICS

API Requester — the client (mobile app, web service, partner system) that makes REST calls.

z/OS Connect EE — receives REST requests, transforms JSON to the format expected by the backend service, invokes the service, transforms the response back to JSON, and returns it.

Service Provider — the backend system. For DB2, this is typically a stored procedure invoked through a JDBC connection.

35.2.2 Creating an API with the API Toolkit

The IBM API Toolkit (an Eclipse-based tool) generates the artifacts needed to map a REST API to a DB2 stored procedure. Here is the workflow for exposing Meridian's PROC_TRANSFER_FUNDS:

Step 1: Define the Service

Create a service archive (.sar) that describes how z/OS Connect calls the DB2 stored procedure:

{
  "service": {
    "name": "transferFunds",
    "version": "1.0.0",
    "description": "Transfer funds between Meridian accounts",
    "serviceProvider": "db2",
    "configuration": {
      "dataSourceName": "jdbc/meridianDB",
      "schema": "MERIDIAN",
      "procedureName": "PROC_TRANSFER_FUNDS",
      "parameters": [
        {"name": "fromAccount", "type": "INTEGER", "direction": "IN"},
        {"name": "toAccount", "type": "INTEGER", "direction": "IN"},
        {"name": "amount", "type": "DECIMAL", "direction": "IN"},
        {"name": "description", "type": "VARCHAR", "direction": "IN"},
        {"name": "transactionId", "type": "BIGINT", "direction": "OUT"},
        {"name": "statusCode", "type": "INTEGER", "direction": "OUT"},
        {"name": "statusMessage", "type": "VARCHAR", "direction": "OUT"}
      ]
    }
  }
}

Step 2: Define the API

Create an API archive (.aar) that maps REST endpoints to the service:

{
  "api": {
    "name": "accountsAPI",
    "version": "1.0.0",
    "basePath": "/api/v1/accounts",
    "operations": [
      {
        "path": "/transfers",
        "method": "POST",
        "service": "transferFunds",
        "requestMapping": {
          "fromAccount": "$.body.fromAccount",
          "toAccount": "$.body.toAccount",
          "amount": "$.body.amount",
          "description": "$.body.description"
        },
        "responseMapping": {
          "transactionId": "$.transactionId",
          "status": "$.statusCode",
          "message": "$.statusMessage"
        }
      }
    ]
  }
}

Step 3: Deploy

Deploy the .sar and .aar files to the z/OS Connect server. The API is now accessible:

curl -X POST https://zosconnect.meridian.com/api/v1/accounts/transfers \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer eyJhbGciOiJSUzI1NiI..." \
  -d '{
    "fromAccount": 1001,
    "toAccount": 1002,
    "amount": 500.00,
    "description": "Monthly rent payment"
  }'

Response:

{
  "transactionId": 987654321,
  "status": 0,
  "message": "Transfer completed successfully"
}

35.2.3 OpenAPI Generation

z/OS Connect automatically generates an OpenAPI (Swagger) specification for every deployed API:

https://zosconnect.meridian.com/api/v1/accounts/transfers?swagger

This specification can be imported into API management tools, client SDK generators, and developer portals — giving partners and internal developers self-service access to API documentation.
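What a consumer does with that specification can be sketched in a few lines of Python. The spec dictionary below is a hand-written, abbreviated stand-in for the generated document, not actual z/OS Connect output:

```python
# Abbreviated, hand-written OpenAPI spec standing in for the generated one
spec = {
    "swagger": "2.0",
    "basePath": "/api/v1/accounts",
    "paths": {
        "/transfers": {
            "post": {
                "operationId": "transferFunds",
                "summary": "Transfer funds between Meridian accounts",
            }
        }
    },
}

def list_operations(spec):
    """Yield (method, full_path, operationId) for every operation in the spec."""
    base = spec.get("basePath", "")
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            yield method.upper(), base + path, op.get("operationId")

ops = list(list_operations(spec))
# ops == [("POST", "/api/v1/accounts/transfers", "transferFunds")]
```

This is exactly the walk that SDK generators and developer portals perform when they ingest the spec.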

35.2.4 Security Configuration

z/OS Connect leverages z/OS security infrastructure:

<!-- server.xml configuration for z/OS Connect -->
<server>
    <!-- SSL/TLS configuration -->
    <ssl id="defaultSSLConfig"
         keyStoreRef="defaultKeyStore"
         trustStoreRef="defaultTrustStore"
         sslProtocol="TLSv1.2"/>

    <!-- SAF (RACF) authentication -->
    <safAuthorization id="safAuth"
                      racRouteLog="ASIS"/>

    <!-- API key validation -->
    <zosConnectApiKeyAuth>
        <apiKey name="partnerKey"
                value="encrypted:xor:..." />
    </zosConnectApiKeyAuth>

    <!-- Rate limiting -->
    <zosConnectRateLimit maxRequests="1000"
                         timePeriod="60s"
                         scope="api"/>

    <!-- Audit logging -->
    <zosConnectAudit auditType="all"
                     smfRecordType="123"/>
</server>

Key security features:

  • SAF/RACF integration — API access is controlled by RACF profiles. A mobile app user authenticated via OAuth2 is mapped to a RACF user ID, and that user ID's DB2 privileges govern what data they can access.
  • TLS 1.2/1.3 — all API traffic is encrypted in transit.
  • API keys — for partner identification and basic throttling.
  • SMF records — every API call generates an SMF record for audit and compliance.

35.2.5 Performance Tuning

z/OS Connect performance depends on:

  1. Connection pooling. Configure the Liberty data source with appropriate pool sizes:

         <dataSource id="meridianDB" jndiName="jdbc/meridianDB">
             <connectionManager maxPoolSize="50"
                                minPoolSize="10"
                                connectionTimeout="30s"
                                maxIdleTime="10m"/>
         </dataSource>

  2. Thread pool. Size the Liberty thread pool for expected concurrency:

         <executor maxThreads="100" coreThreads="20"/>

  3. JSON transformation. z/OS Connect's JSON-to-COBOL/SQL transformation adds latency. For high-throughput APIs, minimize the size and complexity of request/response payloads.

  4. zIIP offload. z/OS Connect Java workloads are eligible for zIIP specialty processor offload, reducing general-purpose processor consumption — a significant cost consideration on z/OS.
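A quick sanity check for the pool and thread sizes above is Little's law: concurrent connections roughly equal the request rate times the mean time each request holds a connection. A Python sketch (the numbers are illustrative):

```python
import math

def required_connections(requests_per_sec, mean_db_ms, headroom=1.5):
    """Little's law estimate of concurrent DB connections, with headroom."""
    in_flight = requests_per_sec * mean_db_ms / 1000.0  # connections in use
    return math.ceil(in_flight * headroom)

# 400 req/s, each holding a connection for about 80 ms on average:
size = required_connections(400, 80)
# size == 48, consistent with a maxPoolSize of 50
```

If the estimate exceeds the pool size, requests queue at the connection manager before they ever reach DB2, and the latency shows up in the API tier rather than in DB2 monitoring.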


35.3 [LUW] Db2 REST Services

On Db2 for Linux, UNIX, and Windows, IBM ships a lightweight REST endpoint alongside the database engine, so SQL statements and stored procedures can be exposed as services without standing up a full gateway product.

35.3.1 Enabling REST Services

Unlike z/OS Connect, the Db2 REST endpoint (available with Db2 11.5 and later) is not switched on through database manager configuration. It is deployed as a separate service, typically a container shipped by IBM, that connects to the database and listens on its own port (50050 by default). Once it is running, clients authenticate against it to obtain a bearer token. The payload fields below are representative of the Db2 11.5 REST endpoint API; verify them against your release's documentation:

# Obtain a token from the Db2 REST endpoint
curl -k -X POST https://db2server.meridian.com:50050/v1/auth \
  -H "Content-Type: application/json" \
  -d '{
    "dbParms": {
      "dbHost": "db2server.meridian.com",
      "dbName": "MERIDIAN",
      "dbPort": 50000,
      "isSSLConnection": false,
      "username": "user",
      "password": "pass"
    },
    "expiryTime": "60m"
  }'
# Response: {"token": "..."}

35.3.2 Creating REST Services from SQL

You register SQL statements or stored procedures as REST services by POSTing a service definition to the endpoint:

# Register a query as a REST service
curl -k -X POST https://db2server.meridian.com:50050/v1/services \
  -H "Content-Type: application/json" \
  -H "authorization: <token>" \
  -d '{
    "serviceName": "meridian_accounts",
    "version": "1.0",
    "serviceDescription": "Get accounts for a customer",
    "isQuery": true,
    "schema": "MERIDIAN",
    "sqlStatement": "SELECT ACCOUNT_ID, ACCOUNT_TYPE, BALANCE, STATUS FROM MERIDIAN.ACCOUNTS WHERE CUSTOMER_ID = @custId",
    "parameters": [{"name": "@custId", "datatype": "INTEGER"}]
  }'

# Register a stored procedure call as a REST service
# (OUT-parameter handling varies by release; test against yours)
curl -k -X POST https://db2server.meridian.com:50050/v1/services \
  -H "Content-Type: application/json" \
  -H "authorization: <token>" \
  -d '{
    "serviceName": "meridian_transfer",
    "version": "1.0",
    "serviceDescription": "Transfer funds between accounts",
    "isQuery": false,
    "schema": "MERIDIAN",
    "sqlStatement": "CALL MERIDIAN.PROC_TRANSFER_FUNDS(@fromAcct, @toAcct, @amount, @descr, @txnId, @statusCode, @statusMsg)",
    "parameters": [
      {"name": "@fromAcct", "datatype": "INTEGER"},
      {"name": "@toAcct", "datatype": "INTEGER"},
      {"name": "@amount", "datatype": "DECIMAL(15,2)"},
      {"name": "@descr", "datatype": "VARCHAR(200)"}
    ]
  }'

35.3.3 Consuming REST Services

All services are invoked with POST, even read-only queries, with parameters passed in the JSON body:

# Query accounts
curl -k -X POST https://db2server.meridian.com:50050/v1/services/meridian_accounts/1.0 \
  -H "Content-Type: application/json" \
  -H "authorization: <token>" \
  -d '{"parameters": {"@custId": 1001}, "sync": true}'

# Execute transfer
curl -k -X POST https://db2server.meridian.com:50050/v1/services/meridian_transfer/1.0 \
  -H "Content-Type: application/json" \
  -H "authorization: <token>" \
  -d '{
    "parameters": {
      "@fromAcct": 1001,
      "@toAcct": 1002,
      "@amount": 500.00,
      "@descr": "Monthly rent"
    },
    "sync": true
  }'

35.3.4 Limitations and When to Use

Db2 REST services are convenient for quick API creation but have limitations:

  • Limited request/response transformation — the JSON format is dictated by Db2, not by API design best practices.
  • Authentication is tied to database credentials (no OAuth2 without an external gateway).
  • No built-in rate limiting.
  • No versioning beyond the service version number.

Use Db2 REST services for internal tools, prototyping, and simple integrations. For production-grade external APIs, use z/OS Connect or an application-layer API.


35.4 Building REST APIs Over DB2

For most modern applications, a dedicated application layer sits between the API consumer and DB2. This layer provides full control over API design, security, and data transformation.

35.4.1 Spring Boot + JDBC

Java with Spring Boot is the most common choice for enterprise DB2 APIs:

@RestController
@RequestMapping("/api/v1/accounts")
public class AccountController {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @GetMapping("/{accountId}")
    public ResponseEntity<AccountDTO> getAccount(@PathVariable int accountId) {
        String sql = "SELECT ACCOUNT_ID, CUSTOMER_ID, ACCOUNT_TYPE, BALANCE, STATUS " +
                     "FROM MERIDIAN.ACCOUNTS WHERE ACCOUNT_ID = ?";
        try {
            AccountDTO account = jdbcTemplate.queryForObject(sql,
                (rs, rowNum) -> new AccountDTO(
                    rs.getInt("ACCOUNT_ID"),
                    rs.getInt("CUSTOMER_ID"),
                    rs.getString("ACCOUNT_TYPE"),
                    rs.getBigDecimal("BALANCE"),
                    rs.getString("STATUS")
                ),
                accountId);
            return ResponseEntity.ok(account);
        } catch (EmptyResultDataAccessException e) {
            // queryForObject throws when no row matches; map that to 404
            return ResponseEntity.notFound().build();
        }
    }

    @PostMapping("/transfers")
    public ResponseEntity<TransferResult> transfer(@RequestBody TransferRequest req) {
        SimpleJdbcCall jdbcCall = new SimpleJdbcCall(jdbcTemplate)
            .withSchemaName("MERIDIAN")
            .withProcedureName("PROC_TRANSFER_FUNDS")
            .declareParameters(
                new SqlParameter("P_FROM_ACCOUNT", Types.INTEGER),
                new SqlParameter("P_TO_ACCOUNT", Types.INTEGER),
                new SqlParameter("P_AMOUNT", Types.DECIMAL),
                new SqlParameter("P_DESCRIPTION", Types.VARCHAR),
                new SqlOutParameter("P_TXN_ID", Types.BIGINT),
                new SqlOutParameter("P_STATUS_CODE", Types.INTEGER),
                new SqlOutParameter("P_STATUS_MSG", Types.VARCHAR)
            );

        Map<String, Object> result = jdbcCall.execute(
            req.getFromAccount(),
            req.getToAccount(),
            req.getAmount(),
            req.getDescription()
        );

        TransferResult response = new TransferResult(
            (Long) result.get("P_TXN_ID"),
            (Integer) result.get("P_STATUS_CODE"),
            (String) result.get("P_STATUS_MSG")
        );

        if (response.getStatusCode() == 0) {
            return ResponseEntity.ok(response);
        } else {
            return ResponseEntity.badRequest().body(response);
        }
    }
}

Connection configuration (application.yml):

spring:
  datasource:
    url: jdbc:db2://db2server.meridian.com:50000/MERIDIAN
    username: ${DB2_USER}
    password: ${DB2_PASSWORD}
    driver-class-name: com.ibm.db2.jcc.DB2Driver
    hikari:
      maximum-pool-size: 20
      minimum-idle: 5
      connection-timeout: 30000
      idle-timeout: 600000
      max-lifetime: 1800000
      connection-test-query: SELECT 1 FROM SYSIBM.SYSDUMMY1

35.4.2 Node.js + ibm_db

For teams using Node.js:

const express = require('express');
const ibmdb = require('ibm_db');
const router = express.Router();

const connStr = `DATABASE=MERIDIAN;HOSTNAME=${process.env.DB2_HOST};PORT=50000;` +
                `PROTOCOL=TCPIP;UID=${process.env.DB2_USER};PWD=${process.env.DB2_PASSWORD};`;

// Connection pool
const pool = new ibmdb.Pool();
pool.setMaxPoolSize(20);

router.get('/accounts/:id', async (req, res) => {
    let conn;
    try {
        conn = await pool.open(connStr);
        const result = await conn.query(
            'SELECT ACCOUNT_ID, CUSTOMER_ID, ACCOUNT_TYPE, BALANCE, STATUS ' +
            'FROM MERIDIAN.ACCOUNTS WHERE ACCOUNT_ID = ?',
            [parseInt(req.params.id)]
        );

        if (result.length === 0) {
            return res.status(404).json({ error: 'Account not found' });
        }

        res.json({
            accountId: result[0].ACCOUNT_ID,
            customerId: result[0].CUSTOMER_ID,
            accountType: result[0].ACCOUNT_TYPE,
            balance: parseFloat(result[0].BALANCE),
            status: result[0].STATUS
        });
    } catch (err) {
        console.error('DB2 error:', err);
        res.status(500).json({ error: 'Internal server error' });
    } finally {
        if (conn) conn.close();
    }
});

router.post('/transfers', async (req, res) => {
    let conn;
    try {
        conn = await pool.open(connStr);
        const sql = 'CALL MERIDIAN.PROC_TRANSFER_FUNDS(?, ?, ?, ?, ?, ?, ?)';
        const params = [
            req.body.fromAccount,
            req.body.toAccount,
            req.body.amount,
            req.body.description || 'API Transfer',
            { ParamType: 'OUTPUT', DataType: ibmdb.BIGINT },    // txn_id
            { ParamType: 'OUTPUT', DataType: ibmdb.INTEGER },   // status_code
            { ParamType: 'OUTPUT', DataType: ibmdb.VARCHAR, Length: 500 } // status_msg
        ];

        const result = await conn.query(sql, params);
        // The shape of OUT-parameter results varies by ibm_db version;
        // use ?? (not ||) so a success statusCode of 0 is not discarded.
        const statusCode = result[0].P_STATUS_CODE ?? result[0][5];
        const statusMsg = result[0].P_STATUS_MSG ?? result[0][6];
        const txnId = result[0].P_TXN_ID ?? result[0][4];

        if (statusCode === 0) {
            res.status(201).json({
                transactionId: txnId,
                message: statusMsg
            });
        } else {
            res.status(400).json({
                errorCode: statusCode,
                message: statusMsg
            });
        }
    } catch (err) {
        console.error('Transfer error:', err);
        res.status(500).json({ error: 'Internal server error' });
    } finally {
        if (conn) conn.close();
    }
});

module.exports = router;

35.4.3 Python Flask + ibm_db_sa

from flask import Flask, request, jsonify
from sqlalchemy import create_engine, text
from sqlalchemy.pool import QueuePool

app = Flask(__name__)

engine = create_engine(
    'ibm_db_sa://user:password@db2server.meridian.com:50000/MERIDIAN',
    pool_size=10,
    max_overflow=20,
    pool_recycle=1800,
    pool_pre_ping=True
)

@app.route('/api/v1/accounts/<int:account_id>', methods=['GET'])
def get_account(account_id):
    with engine.connect() as conn:
        result = conn.execute(
            text("""
                SELECT ACCOUNT_ID, CUSTOMER_ID, ACCOUNT_TYPE, BALANCE, STATUS
                FROM MERIDIAN.ACCOUNTS
                WHERE ACCOUNT_ID = :acct_id
            """),
            {"acct_id": account_id}
        )
        row = result.fetchone()

        if row is None:
            return jsonify({"error": "Account not found"}), 404

        return jsonify({
            "accountId": row[0],
            "customerId": row[1],
            "accountType": row[2],
            "balance": float(row[3]),
            "status": row[4]
        })

@app.route('/api/v1/accounts/transfers', methods=['POST'])
def transfer():
    data = request.get_json()
    # OUT parameters cannot be bound through SQLAlchemy's text() for a
    # CALL statement; drop to the raw ibm_db_dbi connection and use
    # callproc, which returns the parameter tuple with OUT values filled in.
    raw_conn = engine.raw_connection()
    try:
        cursor = raw_conn.cursor()
        params = cursor.callproc('MERIDIAN.PROC_TRANSFER_FUNDS', (
            data["fromAccount"],
            data["toAccount"],
            data["amount"],
            data.get("description", "API Transfer"),
            0,    # OUT: transaction id
            0,    # OUT: status code
            ""    # OUT: status message
        ))
        raw_conn.commit()
        txn_id, status_code, status_msg = params[4], params[5], params[6]
    finally:
        raw_conn.close()

    if status_code == 0:
        return jsonify({"transactionId": txn_id, "message": status_msg}), 201
    return jsonify({"errorCode": status_code, "message": status_msg}), 400

35.4.4 Choosing Your Application Framework

The choice between Spring Boot, Node.js, and Python depends on your team's skills and your performance requirements:

Spring Boot + JDBC
    Best for: enterprise Java shops, existing Spring infrastructure
    DB2 driver: db2jcc4.jar (Type 4 JDBC)
    Connection pool: HikariCP
    Throughput profile: high throughput, mature ecosystem

Node.js + ibm_db
    Best for: teams with JavaScript expertise, I/O-bound workloads
    DB2 driver: ibm_db (native addon)
    Connection pool: built-in pool
    Throughput profile: good for I/O-bound APIs, single-threaded

Python Flask + ibm_db_sa
    Best for: data science teams, rapid prototyping
    DB2 driver: ibm_db + ibm_db_sa (SQLAlchemy)
    Connection pool: SQLAlchemy QueuePool
    Throughput profile: lower throughput, excellent for prototyping

For Meridian Bank's production mobile banking API, Spring Boot is the primary framework. Node.js serves the partner API gateway (which is primarily routing and transformation, not heavy computation). Python Flask powers the internal analytics API where development speed matters more than raw throughput.

35.4.5 API Design Best Practices for Database-Backed Services

  1. Use nouns for resources, verbs for actions. /accounts/1001 (GET), /accounts/1001/transfers (POST).
  2. Return consistent error structures:

         {
           "error": {
             "code": "INSUFFICIENT_FUNDS",
             "message": "Account balance of $800.00 is insufficient for transfer of $1,000.00",
             "details": {
               "availableBalance": 800.00,
               "requestedAmount": 1000.00
             }
           }
         }
  3. Never expose DB2 error codes to external clients. Map SQLSTATE values to application-level error codes.
  4. Implement pagination for list endpoints:

         SELECT * FROM MERIDIAN.TRANSACTIONS
         WHERE ACCOUNT_ID = ?
         ORDER BY TRANSACTION_DATE DESC
         OFFSET ? ROWS FETCH NEXT ? ROWS ONLY
  5. Use ETags for caching — hash the row's timestamp or version column.
  6. Version your API in the URL path: /api/v1/accounts, /api/v2/accounts.
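Practice 5 can be sketched in Python: derive the ETag from the row's version column and honor If-None-Match (the function names and versioning scheme are illustrative):

```python
import hashlib

def make_etag(account_id, row_version):
    """Derive a stable ETag from the row's primary key and version column."""
    digest = hashlib.sha256(f"{account_id}:{row_version}".encode()).hexdigest()
    return '"' + digest[:16] + '"'

def respond(account_id, row_version, if_none_match=None):
    """Return (status, etag): 304 when the client's cached version is current."""
    etag = make_etag(account_id, row_version)
    if if_none_match == etag:
        return 304, etag  # client cache is fresh; skip the body
    return 200, etag

status1, etag = respond(1001, 42)      # first fetch: 200 plus ETag
status2, _ = respond(1001, 42, etag)   # revalidation: 304, no body sent
status3, _ = respond(1001, 43, etag)   # row changed: full 200 again
```

Because the ETag is derived from data DB2 already maintains, the check costs one indexed read rather than serializing the full response.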

35.5 GraphQL Over DB2

GraphQL addresses a common frustration with REST APIs: the client either gets too much data (over-fetching) or must make multiple requests to assemble what it needs (under-fetching). For complex data models like banking, where accounts have transactions, loans have payments, and customers have addresses, GraphQL can dramatically simplify client-side development.

35.5.1 GraphQL Concepts for DB2 Developers

GraphQL is a query language for APIs. Clients specify exactly the data shape they want:

query {
  account(id: 1001) {
    accountId
    accountType
    balance
    recentTransactions(limit: 5) {
      transactionDate
      amount
      description
    }
    customer {
      firstName
      lastName
      email
    }
  }
}

Response (only the requested fields):

{
  "data": {
    "account": {
      "accountId": 1001,
      "accountType": "CHECKING",
      "balance": 5432.10,
      "recentTransactions": [
        {"transactionDate": "2025-06-15", "amount": -200.00, "description": "ATM Withdrawal"},
        {"transactionDate": "2025-06-14", "amount": 3500.00, "description": "Direct Deposit"}
      ],
      "customer": {
        "firstName": "Jane",
        "lastName": "Chen",
        "email": "jane.chen@email.com"
      }
    }
  }
}

35.5.2 Schema Design

The GraphQL schema maps to DB2 tables and relationships:

type Query {
    account(id: Int!): Account
    customer(id: Int!): Customer
    searchTransactions(
        accountId: Int!
        startDate: String
        endDate: String
        minAmount: Float
        maxAmount: Float
        limit: Int = 20
        offset: Int = 0
    ): TransactionConnection!
}

type Mutation {
    transferFunds(input: TransferInput!): TransferResult!
    makePayment(input: PaymentInput!): PaymentResult!
}

type Account {
    accountId: Int!
    customerId: Int!
    accountType: String!
    balance: Float!
    status: String!
    customer: Customer!
    transactions(limit: Int = 10, offset: Int = 0): [Transaction!]!
    loans: [Loan!]!
}

type Customer {
    customerId: Int!
    firstName: String!
    lastName: String!
    email: String!
    accounts: [Account!]!
}

type Transaction {
    transactionId: Int!
    transactionDate: String!
    transactionType: String!
    amount: Float!
    description: String
    runningBalance: Float
}

type TransactionConnection {
    edges: [Transaction!]!
    totalCount: Int!
    hasNextPage: Boolean!
}

type Loan {
    loanId: Int!
    loanType: String!
    principal: Float!
    currentBalance: Float!
    interestRate: Float!
    maturityDate: String!
    payments(limit: Int = 10): [LoanPayment!]!
}

type LoanPayment {
    paymentDate: String!
    totalAmount: Float!
    interestPortion: Float!
    principalPortion: Float!
    remainingBalance: Float!
}

input TransferInput {
    fromAccount: Int!
    toAccount: Int!
    amount: Float!
    description: String
}

type TransferResult {
    transactionId: Int
    statusCode: Int!
    message: String!
}

input PaymentInput {
    loanId: Int!
    amount: Float!
}

type PaymentResult {
    interestPaid: Float!
    principalPaid: Float!
    newBalance: Float!
    statusCode: Int!
    message: String!
}

35.5.3 The N+1 Query Problem

The most significant performance challenge with GraphQL over DB2 is the N+1 problem. Consider this query:

query {
  customer(id: 500) {
    accounts {
      accountId
      balance
      transactions(limit: 5) {
        amount
        description
      }
    }
  }
}

A naive implementation executes:

  1. One query for the customer.
  2. One query for the customer's accounts (returns N accounts).
  3. N queries, one per account, for that account's transactions.

If the customer has 10 accounts, that is 1 + 1 + 10 = 12 database round-trips.

Solution: DataLoader pattern. Batch related queries:

// Instead of N individual queries:
// SELECT * FROM TRANSACTIONS WHERE ACCOUNT_ID = ? (repeated N times)

// Use a single batched query:
// SELECT * FROM TRANSACTIONS WHERE ACCOUNT_ID IN (?, ?, ?, ..., ?)
// Then distribute results to the appropriate resolvers

// java-dataloader's BatchLoader interface (you implement it rather than
// extending DataLoader) receives all the keys collected in one tick:
public class TransactionBatchLoader implements BatchLoader<Integer, List<Transaction>> {
    @Override
    public CompletionStage<List<List<Transaction>>> load(List<Integer> accountIds) {
        return CompletableFuture.supplyAsync(() -> {
            String placeholders = accountIds.stream()
                .map(id -> "?")
                .collect(Collectors.joining(","));

            String sql = "SELECT * FROM MERIDIAN.TRANSACTIONS " +
                        "WHERE ACCOUNT_ID IN (" + placeholders + ") " +
                        "ORDER BY ACCOUNT_ID, TRANSACTION_DATE DESC";

            // Execute the single query, partition the rows by ACCOUNT_ID,
            // and return one List<Transaction> per requested id, in the
            // same order as accountIds (the DataLoader contract)
        });
    }
}
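The partition-and-reorder step that the comments above elide can be sketched in Python (the row tuples are hypothetical stand-ins for JDBC results):

```python
from collections import defaultdict

def batch_load_transactions(account_ids, rows):
    """Group rows by account id and return one list per requested id,
    in the same order as account_ids (the DataLoader contract)."""
    by_account = defaultdict(list)
    for account_id, amount in rows:
        by_account[account_id].append(amount)
    # Missing keys yield [], so accounts with no transactions still
    # get a slot in the right position.
    return [by_account[aid] for aid in account_ids]

# Result of the single IN-list query, in arbitrary database order:
rows = [(1002, -200.0), (1001, 3500.0), (1001, -50.0), (1003, 10.0)]
result = batch_load_transactions([1001, 1002, 1003, 1004], rows)
# result == [[3500.0, -50.0], [-200.0], [10.0], []]
```

The ordering guarantee matters: DataLoader matches results to keys positionally, so a shuffled return silently attaches transactions to the wrong accounts.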

Solution: Query optimization with lookahead. Inspect the GraphQL query before execution and generate a single optimized SQL:

-- For the customer->accounts->transactions query, generate:
-- (DB2 has no QUALIFY clause, so the ROW_NUMBER filter goes in an outer query)
SELECT *
FROM (
    SELECT
        c.CUSTOMER_ID, c.FIRST_NAME, c.LAST_NAME,
        a.ACCOUNT_ID, a.BALANCE,
        t.AMOUNT, t.DESCRIPTION, t.TRANSACTION_DATE,
        ROW_NUMBER() OVER (PARTITION BY a.ACCOUNT_ID
                           ORDER BY t.TRANSACTION_DATE DESC) AS txn_rank
    FROM MERIDIAN.CUSTOMERS c
    JOIN MERIDIAN.ACCOUNTS a ON c.CUSTOMER_ID = a.CUSTOMER_ID
    LEFT JOIN MERIDIAN.TRANSACTIONS t ON a.ACCOUNT_ID = t.ACCOUNT_ID
    WHERE c.CUSTOMER_ID = ?
) ranked
WHERE txn_rank <= 5
ORDER BY ACCOUNT_ID, txn_rank;

One query, one round-trip, all data.


35.6 Kafka Connect for DB2

Event streaming transforms DB2 from a request-response data store into an event source — every data change becomes an event that downstream systems can consume in real time.

35.6.1 CDC-Based Connectors

Change Data Capture (CDC) is the foundation. DB2 records changes in its transaction log. A Kafka Connect source connector reads these log entries and publishes them as Kafka messages.

IBM InfoSphere Data Replication provides the CDC engine for DB2 on z/OS and LUW. Debezium provides an open-source alternative for Db2 on LUW.

35.6.2 Debezium DB2 Connector

The Debezium DB2 connector captures row-level changes from Db2 LUW:

Setup requirements:

  1. Db2 must have CDC enabled for the target tables.
  2. The ASN capture agent must be running.

-- Enable CDC on a table
ALTER TABLE MERIDIAN.TRANSACTIONS DATA CAPTURE CHANGES;
ALTER TABLE MERIDIAN.ACCOUNTS DATA CAPTURE CHANGES;

-- Create the ASN capture schema objects
-- (typically done via the asnclp utility)

Connector configuration:

{
  "name": "meridian-db2-connector",
  "config": {
    "connector.class": "io.debezium.connector.db2.Db2Connector",
    "database.hostname": "db2server.meridian.com",
    "database.port": "50000",
    "database.user": "${env:DB2_CDC_USER}",
    "database.password": "${env:DB2_CDC_PASSWORD}",
    "database.dbname": "MERIDIAN",
    "database.server.name": "meridian-db2",
    "schema.include.list": "MERIDIAN",
    "table.include.list": "MERIDIAN.ACCOUNTS,MERIDIAN.TRANSACTIONS,MERIDIAN.LOANS",
    "topic.prefix": "meridian",
    "snapshot.mode": "initial",
    "transforms": "route",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
    "transforms.route.replacement": "meridian.cdc.$3",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": false,
    "value.converter.schemas.enable": true,
    "poll.interval.ms": 1000,
    "tasks.max": 1
  }
}

35.6.3 Change Event Format

A Debezium change event for an account balance update:

{
  "schema": { "..." },
  "payload": {
    "before": {
      "ACCOUNT_ID": 1001,
      "CUSTOMER_ID": 500,
      "ACCOUNT_TYPE": "CHECKING",
      "BALANCE": 5432.10,
      "STATUS": "ACTIVE"
    },
    "after": {
      "ACCOUNT_ID": 1001,
      "CUSTOMER_ID": 500,
      "ACCOUNT_TYPE": "CHECKING",
      "BALANCE": 4932.10,
      "STATUS": "ACTIVE"
    },
    "source": {
      "version": "2.5.0.Final",
      "connector": "db2",
      "name": "meridian-db2",
      "ts_ms": 1718462400000,
      "db": "MERIDIAN",
      "schema": "MERIDIAN",
      "table": "ACCOUNTS",
      "change_lsn": "00000025:00000f38:0003"
    },
    "op": "u",
    "ts_ms": 1718462400123
  }
}

The op field indicates the operation: c (create), u (update), d (delete), r (read/snapshot).
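A consumer typically switches on op and diffs before against after. A minimal Python sketch, using an abbreviated version of the event above:

```python
def summarize_change(event):
    """Turn a Debezium-style change event into a human-readable summary."""
    payload = event["payload"]
    op = payload["op"]
    if op == "c":
        return f"INSERT into {payload['source']['table']}"
    if op == "d":
        return f"DELETE from {payload['source']['table']}"
    if op == "u":
        before, after = payload["before"], payload["after"]
        # Keep only the columns whose values actually changed
        changed = {k: (before[k], after[k]) for k in after if before.get(k) != after[k]}
        return f"UPDATE {payload['source']['table']}: {changed}"
    return "SNAPSHOT read"

event = {"payload": {
    "op": "u",
    "source": {"table": "ACCOUNTS"},
    "before": {"ACCOUNT_ID": 1001, "BALANCE": 5432.10},
    "after":  {"ACCOUNT_ID": 1001, "BALANCE": 4932.10},
}}
summary = summarize_change(event)
# summary == "UPDATE ACCOUNTS: {'BALANCE': (5432.1, 4932.1)}"
```

The before/after pair is what makes CDC events richer than application-emitted events: the consumer sees the delta without querying DB2 again.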

35.6.4 Topic Design

Design Kafka topics to match your consumption patterns:

meridian.cdc.ACCOUNTS          -- all account changes
meridian.cdc.TRANSACTIONS      -- all transaction events
meridian.cdc.LOANS             -- all loan changes
meridian.events.transfers      -- derived transfer events (processed)
meridian.events.fraud-alerts   -- fraud detection events
meridian.events.notifications  -- customer notification triggers

Partitioning strategy: Partition by account ID to ensure all events for a single account are processed in order:

{
  "transforms": "extractKey",
  "transforms.extractKey.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
  "transforms.extractKey.field": "ACCOUNT_ID"
}
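Why keying by account ID preserves ordering: the producer hashes the key and takes it modulo the partition count, so every event for a given account lands on the same partition. A Python illustration (Kafka's default partitioner actually uses murmur2; CRC32 stands in here):

```python
import zlib

NUM_PARTITIONS = 6

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Stable hash-mod partitioning (illustrative; Kafka uses murmur2)."""
    return zlib.crc32(key.encode()) % num_partitions

p1 = partition_for("1001")
p2 = partition_for("1001")
assert p1 == p2  # same account -> same partition -> events stay ordered
```

Ordering is guaranteed only within a partition, which is exactly why the key choice is an architectural decision, not a tuning detail.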

35.6.5 Exactly-Once Semantics

Achieving exactly-once delivery requires coordination between DB2 and Kafka:

  1. Debezium's offset tracking records the DB2 log sequence number (LSN) in Kafka Connect's offset store. On restart, it resumes from the last committed LSN.
  2. Idempotent producers ensure duplicate messages are not written to Kafka.
  3. Consumer-side deduplication — consumers should be designed to handle duplicate events gracefully (idempotent processing).

For Meridian Bank's fraud detection, which must not generate duplicate alerts, the consumer maintains a processed-event table:

CREATE TABLE MERIDIAN.PROCESSED_EVENTS (
    EVENT_ID    VARCHAR(100) NOT NULL PRIMARY KEY,
    PROCESSED_TS TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP
);

-- Consumer logic (pseudocode):
-- 1. Check if EVENT_ID exists in PROCESSED_EVENTS
-- 2. If not, process the event and INSERT the EVENT_ID
-- 3. COMMIT
-- Each event is processed at most once; combined with Kafka's
-- at-least-once delivery, the net effect is exactly-once processing
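The consumer logic above can be sketched in Python. Here an in-memory SQLite table stands in for MERIDIAN.PROCESSED_EVENTS; function names are illustrative:

```python
import sqlite3

def make_store():
    """In-memory stand-in for MERIDIAN.PROCESSED_EVENTS (sqlite3 for the demo)."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE processed_events (event_id TEXT PRIMARY KEY)")
    return db

def process_once(db, event_id, handler):
    """Process an event at most once: skip if already recorded, otherwise
    handle it and record the event ID before committing."""
    already = db.execute(
        "SELECT 1 FROM processed_events WHERE event_id = ?", (event_id,)).fetchone()
    if already:
        return False                      # duplicate delivery: ignore
    handler(event_id)                     # e.g. raise the fraud alert
    db.execute("INSERT INTO processed_events (event_id) VALUES (?)", (event_id,))
    db.commit()
    return True
```

In the real system the handler's side effects and the INSERT into PROCESSED_EVENTS should share one DB2 unit of work, so a crash between the two cannot produce a processed-but-unrecorded event.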

35.7 Event-Driven Architecture Patterns

35.7.1 Event Sourcing with DB2

In event sourcing, DB2 stores a sequence of events rather than current state. The current state is derived by replaying events:

-- Event store table
CREATE TABLE MERIDIAN.ACCOUNT_EVENTS (
    EVENT_ID        BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    ACCOUNT_ID      INTEGER NOT NULL,
    EVENT_TYPE      VARCHAR(50) NOT NULL,
    EVENT_DATA      CLOB NOT NULL,   -- JSON payload
    EVENT_TIMESTAMP TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
    EVENT_VERSION   INTEGER NOT NULL
);

CREATE INDEX IDX_ACCT_EVENTS ON MERIDIAN.ACCOUNT_EVENTS (ACCOUNT_ID, EVENT_VERSION);

-- Insert events
INSERT INTO MERIDIAN.ACCOUNT_EVENTS (ACCOUNT_ID, EVENT_TYPE, EVENT_DATA, EVENT_VERSION)
VALUES (1001, 'ACCOUNT_OPENED',
    '{"initialBalance": 1000.00, "accountType": "CHECKING"}', 1);

INSERT INTO MERIDIAN.ACCOUNT_EVENTS (ACCOUNT_ID, EVENT_TYPE, EVENT_DATA, EVENT_VERSION)
VALUES (1001, 'DEPOSIT_MADE',
    '{"amount": 500.00, "source": "ATM"}', 2);

INSERT INTO MERIDIAN.ACCOUNT_EVENTS (ACCOUNT_ID, EVENT_TYPE, EVENT_DATA, EVENT_VERSION)
VALUES (1001, 'WITHDRAWAL_MADE',
    '{"amount": 200.00, "destination": "POS"}', 3);

Materializing current state from events:

-- Materialized view of current account state
CREATE VIEW MERIDIAN.V_ACCOUNT_CURRENT_STATE AS
WITH event_aggregation AS (
    SELECT
        ACCOUNT_ID,
        SUM(CASE
            WHEN EVENT_TYPE = 'ACCOUNT_OPENED'
            THEN JSON_VALUE(EVENT_DATA, '$.initialBalance' RETURNING DECIMAL(15,2))
            WHEN EVENT_TYPE = 'DEPOSIT_MADE'
            THEN JSON_VALUE(EVENT_DATA, '$.amount' RETURNING DECIMAL(15,2))
            WHEN EVENT_TYPE IN ('WITHDRAWAL_MADE', 'FEE_CHARGED')
            THEN -JSON_VALUE(EVENT_DATA, '$.amount' RETURNING DECIMAL(15,2))
            ELSE 0
        END) AS CURRENT_BALANCE,
        MAX(EVENT_TIMESTAMP) AS LAST_EVENT_TIME,
        MAX(EVENT_VERSION) AS CURRENT_VERSION
    FROM MERIDIAN.ACCOUNT_EVENTS
    GROUP BY ACCOUNT_ID
)
SELECT * FROM event_aggregation;
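The same fold can be performed in application code when rebuilding an aggregate from its event stream. A minimal Python sketch, replaying the three sample events inserted above:

```python
import json

def replay(events):
    """Fold a stream of (event_type, event_data_json) pairs into the
    current balance: the application-side equivalent of the view above."""
    balance = 0.0
    for event_type, data_json in events:
        data = json.loads(data_json)
        if event_type == "ACCOUNT_OPENED":
            balance += data["initialBalance"]
        elif event_type == "DEPOSIT_MADE":
            balance += data["amount"]
        elif event_type in ("WITHDRAWAL_MADE", "FEE_CHARGED"):
            balance -= data["amount"]
    return balance
```

For account 1001's three events, the replay yields 1000.00 + 500.00 - 200.00 = 1300.00. In practice, a monetary fold would use a fixed-point decimal type rather than floats, and long streams are replayed from a periodic snapshot rather than from event one.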

35.7.2 CQRS with DB2

Command Query Responsibility Segregation (CQRS) separates the write model (commands) from the read model (queries). DB2 can serve as the command store:

   [API Layer]
      |         \
      v          v
  [Command      [Query
   Handler]      Handler]
      |              |
      v              v
   DB2 z/OS       DB2 LUW
   (Write)        (Read Replica)
      |              ^
      v              |
   [Kafka]  -------->|
   (CDC Event Stream)

Commands (transfers, payments, account creation) write to the primary DB2 instance. A CDC stream publishes changes to Kafka. A consumer reads from Kafka and updates a read-optimized DB2 instance (or another data store) with denormalized views optimized for query patterns.

35.7.3 Saga Pattern for Distributed Transactions

When a business operation spans multiple services and databases, the saga pattern coordinates through events:

Customer opens account:
1. API Service: Validate request → publish "AccountRequested" event
2. KYC Service: Check identity → publish "KYCApproved" or "KYCRejected"
3. Account Service: If approved, create account in DB2 → publish "AccountCreated"
4. Notification Service: Send welcome email
5. If any step fails: compensating transactions undo previous steps

The saga coordinator (or choreography through events) maintains state in DB2:

CREATE TABLE MERIDIAN.SAGA_STATE (
    SAGA_ID        VARCHAR(50) NOT NULL PRIMARY KEY,
    SAGA_TYPE      VARCHAR(50) NOT NULL,
    CURRENT_STEP   VARCHAR(50) NOT NULL,
    STATUS         VARCHAR(20) NOT NULL,   -- IN_PROGRESS, COMPLETED, COMPENSATING, FAILED
    PAYLOAD        CLOB NOT NULL,
    CREATED_TS     TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP,
    UPDATED_TS     TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP
);

CREATE TABLE MERIDIAN.SAGA_STEPS (
    SAGA_ID        VARCHAR(50) NOT NULL,
    STEP_NAME      VARCHAR(50) NOT NULL,
    STEP_ORDER     INTEGER NOT NULL,
    STATUS         VARCHAR(20) NOT NULL,   -- PENDING, COMPLETED, COMPENSATED, FAILED
    EXECUTED_TS    TIMESTAMP,
    COMPENSATED_TS TIMESTAMP,
    ERROR_MESSAGE  VARCHAR(500),
    PRIMARY KEY (SAGA_ID, STEP_NAME),
    FOREIGN KEY (SAGA_ID) REFERENCES MERIDIAN.SAGA_STATE(SAGA_ID)
);

Orchestration vs. Choreography: In the orchestration model, a central saga coordinator manages the sequence of steps and decides when to compensate. In the choreography model, each service publishes events and other services react to them. For banking operations, orchestration is generally preferred because it provides a single point of visibility into the saga's state — critical for debugging and auditing.

At Meridian Bank, the account opening saga uses orchestration. The saga coordinator is a Java service that calls each downstream service (KYC, DB2, notification) in sequence and tracks progress in the SAGA_STATE table. If the notification service fails (non-critical), the saga still completes successfully — the notification is retried asynchronously. If the DB2 account creation fails (critical), the saga compensates by calling the KYC service to cancel the verification.
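A minimal orchestration loop captures the control flow just described: execute steps in order, tolerate non-critical failures, and run compensating actions in reverse when a critical step fails. This Python sketch is illustrative only; a real coordinator would persist every transition to SAGA_STATE and SAGA_STEPS rather than hold it in memory:

```python
def run_saga(steps):
    """Each step is (name, action, compensate, critical).
    Non-critical failures are skipped (retried asynchronously elsewhere);
    a critical failure triggers compensation of completed steps in reverse."""
    completed = []
    for name, action, compensate, critical in steps:
        try:
            action()
            completed.append((name, compensate))
        except Exception:
            if not critical:
                continue                      # e.g. notification failure
            for done_name, undo in reversed(completed):
                undo()                        # compensating transactions
            return "COMPENSATED"
    return "COMPLETED"
```

With this shape, the Meridian account-opening saga's "DB2 creation fails" case maps to a critical step raising an exception, which compensates the earlier KYC step, while a notification failure is simply skipped.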

35.7.4 Choosing Between Event Patterns

| Pattern | Use When | DB2 Role | Complexity |
|---------|----------|----------|------------|
| Simple CDC | You need to replicate DB2 changes to another system | Source of events | Low |
| Event Sourcing | Audit trail is critical; you need to replay history | Event store | High |
| CQRS | Read and write patterns have very different requirements | Command store, query store, or both | High |
| Saga | A business operation spans multiple services | Stores saga state and participates as one service | Medium-High |

For most DB2 applications, simple CDC with Kafka is the right starting point. It provides real-time event streaming with minimal changes to the existing DB2 application. Move to event sourcing or CQRS only when the simpler patterns cannot meet your requirements — the operational complexity of these advanced patterns is significant.


35.8 Microservices Data Access Anti-Patterns

35.8.1 The Shared Database Anti-Pattern

The problem: Multiple microservices connect directly to the same DB2 database and share tables.

Service A ----\
               +--> DB2 (MERIDIAN schema)
Service B ----/

This creates tight coupling — a schema change for Service A can break Service B. Services cannot be deployed independently.

The fix: Each service owns its data. If Service B needs data from Service A, it calls Service A's API or subscribes to its events.

35.8.2 Direct Database Access from Frontend

The problem: A web or mobile frontend connects directly to DB2 (through embedded credentials, directly exposed database REST services, or a thin pass-through proxy) with no real application layer in between.

The risks: SQL injection, privilege escalation, no business logic enforcement, connection pool exhaustion.

The fix: Always place an application layer between the frontend and DB2. The application layer validates inputs, enforces business rules, manages authentication, and abstracts the database schema.

35.8.3 Chatty API Over Network

The problem: An API that requires multiple round-trips to complete a single user action.

Mobile App → GET /customer/500
Mobile App → GET /customer/500/accounts
Mobile App → GET /accounts/1001/balance
Mobile App → GET /accounts/1001/transactions?limit=5
Mobile App → GET /accounts/1002/balance
Mobile App → GET /accounts/1002/transactions?limit=5

Six HTTP requests to render one screen.

The fix: Design APIs around use cases, not database tables. Create aggregate endpoints:

Mobile App → GET /customer/500/dashboard

Returns all data needed for the dashboard in one response. The server-side handler calls MERIDIAN.PROC_CUSTOMER_DASHBOARD (one stored procedure call, one round-trip to DB2).

Or use GraphQL, where the client specifies exactly what it needs in a single request.

35.8.4 N+1 in Microservices

The problem: Service A needs to display data that requires calling Service B for each item. Similar to the database N+1 problem, but across network boundaries.

Account Service: GET /accounts?customerId=500
  → returns 10 accounts

For each account:
  Transaction Service: GET /transactions?accountId=X&limit=5
  → 10 additional HTTP calls

The fix: Batch APIs (GET /transactions/batch?accountIds=1001,1002,...,1010), event-driven data denormalization (each service maintains its own view of data it needs), or GraphQL federation.
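A toy Python sketch makes the round-trip difference concrete. The fake transaction service counts HTTP calls; the batched variant issues one call for all accounts. Service and function names here are hypothetical:

```python
def make_transaction_service():
    """Fake transaction service that counts how many calls it receives."""
    calls = {"count": 0}
    def get_transactions(account_ids):
        calls["count"] += 1                    # one HTTP round-trip
        return {a: [f"txn-{a}-1"] for a in account_ids}
    return get_transactions, calls

def dashboard_naive(accounts, get_transactions):
    """N+1: one transaction-service call per account."""
    result = {}
    for a in accounts:
        result.update(get_transactions([a]))
    return result

def dashboard_batched(accounts, get_transactions):
    """One batch call: GET /transactions/batch?accountIds=1001,1002,..."""
    return get_transactions(accounts)
```

For ten accounts, the naive version makes ten calls and the batched version makes one, returning identical data. The batch endpoint can, in turn, satisfy the whole request with a single DB2 query using an IN list or a join.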

35.8.5 Ignoring DB2 Connection Costs

The problem: Developers treat DB2 connections as free resources, opening new connections for every request or holding connections open during external API calls.

Each DB2 connection consumes memory on both the application server (JDBC driver buffers) and the DB2 server (thread, address space, working storage). On z/OS, each connection can consume 1-2 MB of working storage. With 1,000 idle connections, that is 1-2 GB of z/OS memory wasted.

The fix: Use connection pools with aggressive idle timeouts. Never hold a DB2 connection while waiting for an external API call. Borrow the connection, execute the DB2 operation, return the connection, then make the external call, then borrow another connection if needed.

// BAD: Holds connection during external call
Connection conn = pool.getConnection();
PreparedStatement ps = conn.prepareStatement(
    "SELECT BALANCE FROM MERIDIAN.ACCOUNTS WHERE ACCOUNT_ID = ?");
ps.setInt(1, accountId);
ResultSet rs = ps.executeQuery();
rs.next();
BigDecimal balance = rs.getBigDecimal(1);

ExternalApiResult result = externalService.validateTransaction(balance);  // 200ms call — connection still held!

PreparedStatement upd = conn.prepareStatement("UPDATE MERIDIAN.ACCOUNTS SET ...");
upd.executeUpdate();
conn.close();  // Connection was held for the entire method, including the external call

// GOOD: Release connection between operations
BigDecimal balance;
try (Connection conn = pool.getConnection();
     PreparedStatement ps = conn.prepareStatement(
         "SELECT BALANCE FROM MERIDIAN.ACCOUNTS WHERE ACCOUNT_ID = ?")) {
    ps.setInt(1, accountId);
    try (ResultSet rs = ps.executeQuery()) {
        rs.next();
        balance = rs.getBigDecimal(1);
    }
}  // Connection returned to pool immediately

ExternalApiResult result = externalService.validateTransaction(balance);  // 200ms — no connection held

try (Connection conn = pool.getConnection();
     PreparedStatement upd = conn.prepareStatement("UPDATE MERIDIAN.ACCOUNTS SET ...")) {
    upd.executeUpdate();
}  // Connection returned immediately

35.8.6 No Circuit Breaker

The problem: When DB2 becomes slow or unavailable, API requests pile up waiting for connections or query responses. The API server's thread pool is exhausted. Upstream services that depend on this API also slow down. The failure cascades through the entire system.

The fix: Implement circuit breakers (using libraries like Resilience4j or Hystrix) that detect when DB2 is unhealthy and fail fast, returning an error to the client immediately instead of waiting for a timeout:

@CircuitBreaker(name = "db2", fallbackMethod = "getBalanceFallback")
public BigDecimal getBalance(int accountId) {
    return jdbcTemplate.queryForObject(
        "SELECT BALANCE FROM MERIDIAN.ACCOUNTS WHERE ACCOUNT_ID = ?",
        BigDecimal.class, accountId);
}

public BigDecimal getBalanceFallback(int accountId, Exception e) {
    // Return cached balance or meaningful error
    return cacheService.getCachedBalance(accountId)
        .orElseThrow(() -> new ServiceUnavailableException(
            "Account service temporarily unavailable"));
}

35.9 API Security for DB2

35.9.1 OAuth2/JWT Integration

Modern APIs use OAuth2 for authentication and JWT (JSON Web Tokens) for authorization:

Mobile App → Authorization Server (get JWT token)
Mobile App → API Gateway (present JWT)
API Gateway → Validate JWT signature
API Gateway → Extract claims (user_id, roles, scopes)
API Gateway → Forward to backend service
Backend Service → Map JWT claims to DB2 authorization
Backend Service → Execute stored procedure as mapped user

JWT token structure for Meridian Bank:

{
  "sub": "customer-500",
  "iss": "auth.meridian.com",
  "aud": "api.meridian.com",
  "exp": 1718466000,
  "iat": 1718462400,
  "scope": "accounts:read accounts:transfer",
  "roles": ["customer"],
  "customerId": 500,
  "accountIds": [1001, 1002]
}

The backend service enforces that customer 500 can only access accounts 1001 and 1002 — even though the DB2 connection might use a service account with broader privileges.
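A sketch of that enforcement in Python. Note the deliberate assumption stated in the flow above: the API gateway has already verified the JWT signature, so this code only reads claims. Production code must never skip verification:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the claims (payload) segment of a JWT.
    ASSUMES the gateway already verified the signature upstream."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def authorize_account_access(token: str, account_id: int) -> bool:
    """Allow access only to accounts listed in the token's accountIds claim,
    regardless of what the DB2 service account could technically reach."""
    return account_id in jwt_claims(token).get("accountIds", [])
```

With the sample token above, a request for account 1001 passes, while a request for any other account is rejected with HTTP 403 before a single DB2 call is made.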

35.9.2 API Key Management

For partner integrations, API keys provide identification and basic rate limiting:

# API Gateway configuration (Kong example)
consumers:
  - username: fintech-partner-1
    credentials:
      - key: "fp1-prod-k3y-2025-abc123"
    plugins:
      - name: rate-limiting
        config:
          minute: 100
          hour: 5000
          policy: local
      - name: acl
        config:
          allow: ["partner-read-only"]

35.9.3 SQL Injection Prevention

The API layer is the last line of defense against SQL injection. Even though stored procedures already use parameterized queries, the API layer must validate inputs:

// Input validation in the API layer
@PostMapping("/transfers")
public ResponseEntity<?> transfer(@Valid @RequestBody TransferRequest req) {
    // @Valid triggers javax.validation annotations on TransferRequest

    // Additional business validation
    if (req.getAmount().compareTo(BigDecimal.ZERO) <= 0) {
        return ResponseEntity.badRequest()
            .body(new ErrorResponse("INVALID_AMOUNT", "Amount must be positive"));
    }

    if (req.getAmount().compareTo(new BigDecimal("50000")) > 0) {
        return ResponseEntity.badRequest()
            .body(new ErrorResponse("AMOUNT_EXCEEDED", "Maximum transfer is $50,000"));
    }

    // Never concatenate user input into SQL
    // Always use parameterized calls
    // The stored procedure handles the database interaction safely
}
// TransferRequest with validation annotations
public class TransferRequest {
    @NotNull
    @Min(1)
    private Integer fromAccount;

    @NotNull
    @Min(1)
    private Integer toAccount;

    @NotNull
    @DecimalMin("0.01")
    @DecimalMax("50000.00")
    private BigDecimal amount;

    @Size(max = 200)
    @Pattern(regexp = "^[a-zA-Z0-9\\s.,\\-]+$")  // no SQL special chars
    private String description;
}

35.9.4 Rate Limiting

Protect DB2 from API-driven overload:

| Consumer Type | Rate Limit | Rationale |
|---------------|------------|-----------|
| Mobile app (per user) | 60 req/min | Normal usage pattern |
| Partner API | 1,000 req/min | Bulk operations |
| Internal service | 10,000 req/min | Service-to-service |
| Anonymous | 10 req/min | Public endpoints only |

Implement at the API gateway level (Kong, AWS API Gateway, Apigee) — not in the DB2 layer.
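Most gateways implement these limits with some variant of the token-bucket algorithm: tokens accrue at the allowed rate, each request spends one, and the bucket capacity bounds bursts. A minimal Python sketch of the algorithm (an illustration, not any particular gateway's implementation):

```python
import time

class TokenBucket:
    """Token-bucket limiter: rate is tokens per second, capacity bounds bursts.
    A 60 req/min mobile tier is TokenBucket(rate=1.0, capacity=60)."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill tokens for the elapsed interval, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In a distributed gateway the bucket state lives in a shared store (the Kong configurations later in this chapter use Redis) so that all gateway nodes enforce one combined limit per consumer.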


35.10 Performance Patterns for APIs

35.10.1 Connection Pooling

Every API request needs a DB2 connection. Opening a new connection for each request is prohibitively expensive (TCP handshake, authentication, resource allocation). Connection pools maintain a set of open connections:

# HikariCP configuration (optimal for DB2)
spring:
  datasource:
    hikari:
      maximum-pool-size: 20       # max concurrent DB2 connections
      minimum-idle: 5              # keep 5 ready
      connection-timeout: 30000    # wait up to 30s for a connection
      idle-timeout: 600000         # close idle connections after 10min
      max-lifetime: 1800000        # recreate connections every 30min
      leak-detection-threshold: 60000  # alert if connection held >60s

Sizing the pool: Start with pool_size = 2 * CPU_cores + effective_spindle_count as a baseline. Monitor wait times and adjust. A pool that is too large wastes DB2 resources; too small creates application-side queuing.

A common mistake is setting the pool size equal to the number of expected concurrent users. If you expect 5,000 concurrent users but each DB2 query takes only 5ms, a pool of 20 connections can handle 4,000 queries per second — far more than 5,000 users can generate. The formula to remember: required_pool_size = peak_queries_per_second * average_query_time_in_seconds. For 1,000 queries/sec with 5ms average: 1000 * 0.005 = 5 connections. Add headroom for bursts and slow queries, and 10-20 connections is typically sufficient.
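The formula is Little's law applied to connections: concurrent connections equal arrival rate times service time. A small Python helper (ours, not HikariCP's) makes the arithmetic explicit, with a headroom multiplier for bursts and slow queries:

```python
import math

def required_pool_size(peak_qps, avg_query_seconds, headroom=2.0):
    """Little's law: concurrency = arrival rate x service time,
    scaled by a headroom factor for bursts and occasional slow queries."""
    return math.ceil(peak_qps * avg_query_seconds * headroom)
```

For the example in the text, 1,000 queries/sec at 5ms needs 5 connections at steady state, and roughly 10 with 2x headroom, consistent with the 10-20 range suggested above.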

On z/OS, each DB2 connection corresponds to a thread (allied or database access thread). z/OS system programmers monitor thread consumption carefully because each thread consumes LPAR resources. Over-provisioning connection pools across multiple API servers can exhaust DB2's thread limit (CTHREAD/DTHREAD parameters). Coordinate pool sizes with your z/OS DBA team.

35.10.2 Prepared Statement Caching

DB2 compiles SQL statements into access plans. Prepared statement caching avoids recompilation:

// DB2 JDBC driver caches prepared statements per connection
// Configure in the connection URL:
String url = "jdbc:db2://server:50000/MERIDIAN" +
             ":enableSysplexWLB=true;" +
             "maxStatements=200;";  // cache up to 200 prepared statements

On z/OS, DB2's dynamic statement cache serves a similar purpose at the database level — frequently executed SQL is found in the cache without recompilation.

35.10.3 Pagination

Never return unbounded result sets through an API:

-- Offset-based pagination (simple but slow for deep pages)
SELECT * FROM MERIDIAN.TRANSACTIONS
WHERE ACCOUNT_ID = ?
ORDER BY TRANSACTION_DATE DESC
OFFSET ? ROWS
FETCH NEXT ? ROWS ONLY;

-- Keyset pagination (fast for any page depth)
SELECT * FROM MERIDIAN.TRANSACTIONS
WHERE ACCOUNT_ID = ?
  AND (TRANSACTION_DATE < ?
       OR (TRANSACTION_DATE = ? AND TRANSACTION_ID < ?))
ORDER BY TRANSACTION_DATE DESC, TRANSACTION_ID DESC
FETCH FIRST ? ROWS ONLY;

Keyset pagination uses an index efficiently regardless of how deep into the result set you are. The API returns a cursor that the client passes on the next request:

{
  "data": [ "..." ],
  "pagination": {
    "nextCursor": "eyJkYXRlIjoiMjAyNS0wNi0xMCIsImlkIjo5ODc2NX0=",
    "hasMore": true,
    "totalCount": 1523
  }
}
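Cursors like this are typically just the last row's sort keys, base64-encoded so clients treat them as an opaque token. A Python sketch of the convention (an illustration, not a specific library's API):

```python
import base64
import json

def encode_cursor(last_row: dict) -> str:
    """Opaque keyset cursor: base64-encoded JSON of the last row's sort keys."""
    return base64.urlsafe_b64encode(json.dumps(last_row).encode()).decode()

def decode_cursor(cursor: str) -> dict:
    """Recover the sort-key values to bind into the keyset WHERE clause."""
    return json.loads(base64.urlsafe_b64decode(cursor))
```

On the next request, the server decodes the cursor and binds its date and ID values into the keyset predicate shown above. Because clients can tamper with opaque tokens, production code should validate the decoded fields (or sign the cursor) before binding them.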

35.10.4 Response Caching

Cache API responses to reduce DB2 load:

@GetMapping("/accounts/{id}")
@Cacheable(value = "accounts", key = "#accountId", unless = "#result == null")
public AccountDTO getAccount(@PathVariable int accountId) {
    // Only hits DB2 if not in cache
}

// Cache invalidation on write operations
@PostMapping("/transfers")
@CacheEvict(value = "accounts", allEntries = true)
public TransferResult transfer(@RequestBody TransferRequest req) {
    // Evicts cached accounts after any transfer
}

For frequently-read, rarely-changed data (interest rates, fee schedules, branch info), aggressive caching with TTL-based expiry reduces DB2 load dramatically.

35.10.5 Async Processing

For long-running operations, use asynchronous processing:

POST /api/v1/reports/monthly-statement
→ 202 Accepted
→ {"reportId": "rpt-12345", "statusUrl": "/api/v1/reports/rpt-12345/status"}

GET /api/v1/reports/rpt-12345/status
→ 200 OK
→ {"status": "PROCESSING", "percentComplete": 45}

GET /api/v1/reports/rpt-12345/status  (later)
→ 200 OK
→ {"status": "COMPLETE", "downloadUrl": "/api/v1/reports/rpt-12345/download"}

The report generation runs as a background job, executing heavy DB2 queries without blocking the API thread or hitting HTTP timeouts.

35.10.6 Monitoring and Observability

Performance optimization is meaningless without observability. Every API serving DB2 data should expose these metrics:

  • Request rate: Total API calls per second, broken down by endpoint and HTTP status code.
  • Latency distribution: p50, p95, and p99 response times per endpoint. The p99 is the most important — it represents the worst experience your users regularly encounter.
  • DB2 query time: The portion of response time spent in DB2, separate from API layer processing time. If your p95 response time is 200ms but DB2 query time is only 5ms, the remaining 195ms is in the API layer (serialization, network, middleware).
  • Connection pool utilization: Active connections, idle connections, and wait time for connections. A pool that is consistently 90%+ utilized needs to be enlarged.
  • Error rate: Percentage of requests that return 4xx or 5xx status codes, broken down by error type.
  • Cache hit ratio: For endpoints with caching enabled, the ratio of cache hits to total requests. A ratio below 50% suggests the cache TTL is too short or the data changes too frequently for caching to help.

Meridian Bank exports these metrics to Prometheus and visualizes them in Grafana. Alerts fire when: p99 latency exceeds 500ms for more than 2 minutes, error rate exceeds 1% for more than 5 minutes, or connection pool wait time exceeds 5 seconds. These thresholds were calibrated during the first month of production operation and are reviewed quarterly.


35.11 Meridian Bank API Architecture

Bringing it all together, here is Meridian National Bank's complete API and event architecture:

35.11.1 Mobile Banking REST API via z/OS Connect

[iOS/Android App]
       |
       v
[CDN / WAF]  (Cloudflare / Akamai)
       |
       v
[API Gateway]  (Kong / AWS API Gateway)
   - OAuth2 token validation
   - Rate limiting
   - Request logging
       |
       v
[z/OS Connect EE]  (on z/OS LPAR)
   - JSON ↔ DB2 mapping
   - SAF/RACF security
   - SMF audit logging
       |
       v
[DB2 for z/OS]
   - PROC_TRANSFER_FUNDS
   - PROC_GET_ACCOUNT_HISTORY
   - PROC_CUSTOMER_DASHBOARD

Key API endpoints:

| Method | Path | DB2 Backend | Description |
|--------|------|-------------|-------------|
| GET | /api/v1/customers/{id}/dashboard | PROC_CUSTOMER_DASHBOARD | Full customer overview |
| GET | /api/v1/accounts/{id} | SELECT query | Account details |
| GET | /api/v1/accounts/{id}/transactions | PROC_GET_ACCOUNT_HISTORY | Transaction history |
| POST | /api/v1/accounts/transfers | PROC_TRANSFER_FUNDS | Fund transfer |
| POST | /api/v1/loans/{id}/payments | PROC_PROCESS_LOAN_PAYMENT | Loan payment |
| GET | /api/v1/accounts/{id}/statements/{month} | FN_ACCOUNT_STATEMENTS | Monthly statement |

35.11.2 Real-Time Transaction Events via Kafka

[DB2 for z/OS]
   - TRANSACTIONS table
   - ACCOUNTS table
       |
       v (CDC via InfoSphere Data Replication)
[Kafka Cluster]
   - meridian.cdc.TRANSACTIONS
   - meridian.cdc.ACCOUNTS
       |
  +----+----+----+
  |    |    |    |
  v    v    v    v
[Fraud    [Notify  [Analytics  [Audit
 Det.]     Svc]     Pipeline]   Log]

Fraud detection consumer processes transaction events in real time, applying ML models to detect suspicious patterns. When fraud is detected, it publishes to meridian.events.fraud-alerts and calls PROC_FREEZE_ACCOUNT on DB2.

Notification service listens for large transactions, low balance events, and loan payment confirmations to send push notifications and emails.

Analytics pipeline streams events to a data lake for batch analytics, reporting, and regulatory compliance (SOX, PCI-DSS).

35.11.3 Analytics GraphQL Endpoint

For internal tools and dashboards:

[Internal Dashboard (React)]
       |
       v
[GraphQL Server (Apollo)]
   - Schema: accounts, transactions, loans, customers
   - DataLoader for batch DB2 queries
   - Caching layer (Redis)
       |
       v
[Db2 for LUW - Analytics Replica]
   - Read-optimized indexes
   - Materialized query tables
   - No write contention with production

The GraphQL endpoint connects to a read replica, not the production z/OS system. This ensures that ad-hoc analytical queries do not impact transaction processing.

35.11.4 API Gateway Configuration

# Kong API Gateway - Meridian Bank Configuration
services:
  - name: mobile-banking-api
    url: https://zosconnect.meridian.internal:9443/api/v1
    routes:
      - name: accounts-route
        paths: ["/api/v1/accounts"]
        methods: ["GET", "POST"]
    plugins:
      - name: jwt
        config:
          uri_param_names: []
          claims_to_verify: ["exp"]
          key_claim_name: "iss"
      - name: rate-limiting
        config:
          minute: 60
          policy: redis
          redis_host: redis.meridian.internal
      - name: cors
        config:
          origins: ["https://mobile.meridian.com", "https://app.meridian.com"]
          methods: ["GET", "POST", "PUT", "DELETE"]
          headers: ["Authorization", "Content-Type"]
      - name: request-transformer
        config:
          add:
            headers: ["X-Request-ID:$(uuid)"]
      - name: tcp-log
        config:
          host: splunk.meridian.internal
          port: 8089

  - name: partner-api
    url: https://partner-gateway.meridian.internal:8443/api/partner/v1
    routes:
      - name: partner-route
        paths: ["/partner/v1"]
    plugins:
      - name: key-auth
        config:
          key_names: ["X-API-Key"]
      - name: rate-limiting
        config:
          minute: 1000
      - name: ip-restriction
        config:
          allow: ["10.0.0.0/8", "172.16.0.0/12"]

  - name: graphql-analytics
    url: http://graphql.meridian.internal:4000/graphql
    routes:
      - name: graphql-route
        paths: ["/graphql"]
    plugins:
      - name: jwt
      - name: rate-limiting
        config:
          minute: 300
      - name: request-size-limiting
        config:
          allowed_payload_size: 10  # KB - prevent massive GraphQL queries

Spaced Review: Connecting to Previous Chapters

From Chapter 19 (Security and Authorization): This chapter's API security patterns build directly on Chapter 19's DB2 authorization model. z/OS Connect maps OAuth2 tokens to RACF user IDs, which then use the role-based access control we configured in Chapter 19. The principle remains: grant EXECUTE on procedures, not direct table access. API keys and JWT tokens add authentication layers, but DB2's authorization is the final enforcement.

Consider the full authentication chain for a mobile banking transfer: (1) The user authenticates with fingerprint on their phone. (2) The mobile app sends the JWT to the API gateway. (3) The gateway validates the JWT signature and checks expiration. (4) z/OS Connect receives the request with the validated JWT. (5) z/OS Connect maps the JWT subject claim to RACF user ID "CUST0500". (6) DB2 checks that CUST0500 has EXECUTE privilege on PROC_TRANSFER_FUNDS. (7) Inside the procedure, the daily limit check queries against CUST0500's limits. Seven layers of security, each enforcing a different constraint. If any layer fails, the transfer is denied.

Review question: A partner API uses an API key for authentication. The API key maps to a DB2 user ID with EXECUTE privilege on PROC_GET_BALANCE but not PROC_TRANSFER_FUNDS. What happens when the partner's application tries to call the transfer endpoint? Answer: The API gateway authenticates the request (API key is valid), but when z/OS Connect or the application layer attempts to call PROC_TRANSFER_FUNDS as the mapped DB2 user, DB2 returns SQLCODE -551 (insufficient authorization). The API layer should catch this and return HTTP 403 Forbidden.

From Chapter 33 (Application Programming with JDBC, ODBC, and Embedded SQL): The JDBC connection pools and prepared statements in this chapter's REST APIs are the same patterns from Chapter 33, wrapped in a REST interface. HikariCP configuration mirrors Chapter 33's connection management guidance. The difference is that API frameworks manage the request lifecycle — the connection pool is a shared resource across many concurrent API requests.

The connection pool is the most critical shared resource in an API server. Chapter 33 covered the basics: open a connection, execute a statement, close the connection. But in an API context, "open" means "borrow from the pool" and "close" means "return to the pool." If a request handler borrows a connection and fails to return it (due to an exception or coding error), the pool leaks. After enough leaks, the pool is exhausted and all subsequent requests block, waiting for connections that will never be returned. This is why HikariCP's leak-detection-threshold is set to 60 seconds in our configuration — it logs a warning if any connection is held longer than a minute, making leaks visible before they cause an outage.

Review question: Your API server has a HikariCP pool with maxPoolSize=20 and connectionTimeout=30s. Under peak load, you see this log message: "Connection is not available, request timed out after 30000ms." What should you investigate? Answer: Either the pool is too small for the load (increase maxPoolSize), queries are taking too long (optimize the stored procedures or add indexes), or connections are leaking (check for database calls in code paths that do not properly close connections in finally blocks).

From Chapter 34 (Stored Procedures, UDFs, and Application Logic in the Database): Every API endpoint in this chapter ultimately calls a stored procedure from Chapter 34. PROC_TRANSFER_FUNDS, PROC_PROCESS_LOAN_PAYMENT, PROC_CUSTOMER_DASHBOARD — these are the database-tier implementations that the API tier exposes to the world. The API layer adds authentication, rate limiting, and JSON transformation, but the business logic and data integrity enforcement remain in DB2.

This separation of concerns is the architectural keystone. The stored procedures know nothing about HTTP, JSON, or OAuth2. The API layer knows nothing about SQL, transaction isolation, or lock ordering. Each layer does what it does best. When the mobile team wants to add a new field to the transfer response, they modify the API layer. When the risk team wants to add a new validation rule to transfers, they modify the stored procedure. Neither change affects the other layer.

Review question: The product team asks you to add a "scheduled transfer" feature to the mobile app. The transfer should execute at a future date specified by the user. Where should this logic live — in the API layer, in a DB2 stored procedure, or both? Answer: The scheduling logic (storing the scheduled date, triggering execution at the right time) belongs in the application layer (or a job scheduler). The actual transfer execution should still call PROC_TRANSFER_FUNDS when the scheduled time arrives. The procedure does not need to change — it transfers funds regardless of whether the request came from a user tapping "Transfer Now" or from a scheduler executing a future-dated transfer.


Summary

This chapter has mapped the full landscape of modern data access patterns for DB2. You learned to expose mainframe DB2 data through z/OS Connect EE, creating RESTful APIs without rewriting COBOL business logic. You built application-layer APIs in Spring Boot, Node.js, and Python, applying best practices for connection management, error handling, and API design. You explored GraphQL as a flexible query interface that eliminates the over-fetching and under-fetching problems of REST. And you implemented event streaming with Kafka, transforming DB2 from a request-response store into an event source that feeds real-time fraud detection, notifications, and analytics.

The access pattern spectrum presented in Section 35.1 should guide your architectural decisions. Direct JDBC remains appropriate for internal services with low latency requirements. z/OS Connect (Section 35.2) is the fastest path to APIs over mainframe DB2, with native security integration and zIIP offload for cost efficiency. Db2 REST services (Section 35.3) serve well for prototyping and internal tooling. Application-layer APIs (Section 35.4) provide maximum control for production external APIs. GraphQL (Section 35.5) shines for complex, flexible data retrieval. And Kafka CDC (Section 35.6) enables event-driven architectures that process data changes in real time.

The anti-patterns in Section 35.8 are as important as the patterns. Shared databases, direct frontend access, chatty APIs, and microservices N+1 problems are the most common mistakes in DB2 API design. Recognizing and avoiding them will save you more performance and maintenance headaches than any optimization technique.

Meridian National Bank's architecture demonstrates that these patterns are not mutually exclusive. The mobile banking app uses z/OS Connect for synchronous operations. The fraud detection system consumes a Kafka event stream. The internal analytics team uses GraphQL. And legacy batch programs still connect directly through JDBC. Each pattern serves its purpose, and DB2 anchors them all as the trusted source of truth.

The key insight: modernizing DB2 access is not about replacing DB2. It is about surrounding it with the interfaces that modern applications expect — REST for mobile and partner integrations, GraphQL for flexible analytics queries, event streams for real-time processing — while preserving the decades of investment in data integrity, security, and performance that make DB2 the system of record for the world's most critical data.