In This Chapter
- 40.1 COBOL in Containers
- 40.2 Cloud COBOL
- 40.3 DevOps for COBOL
- 40.4 CI/CD Pipelines for COBOL
- 40.5 COBOL and Microservices Architecture
- 40.6 Modern IDEs for COBOL
- 40.7 Automated Testing in CI/CD
- 40.8 Infrastructure as Code for Mainframe
- 40.9 Zowe: The Open Source Mainframe Interface
- 40.10 GlobalBank: Setting Up CI/CD for GLOBALBANK-CORE
- 40.11 MedClaim: Containerized Claim Processing Service
- 40.12 Try It Yourself: Building a COBOL CI/CD Pipeline
- 40.13 Security Considerations
- 40.14 Docker Multi-Stage Builds in Depth
- 40.15 Kubernetes Deployment in Depth
- 40.16 GitHub Actions Pipelines for COBOL: Advanced Patterns
- 40.17 Zowe CLI: Working with z/OS from Your Workstation
- 40.18 VS Code Debugging for COBOL
- 40.19 Modern Testing in CI/CD
- 40.20 Try It Yourself: Complete Containerized COBOL Service
- 40.21 Monitoring Containerized COBOL in Production
- 40.22 Galasa: Modern Testing Framework for Mainframe
- 40.23 Infrastructure as Code: Terraform for the Mainframe Perimeter
- 40.24 The Developer Experience: From ISPF to VS Code
- 40.25 COBOL in GitOps Workflows
- 40.26 Secret Management for COBOL in Modern Stacks
- 40.27 Chapter Summary
Chapter 40: COBOL and the Modern Stack
"The question is not whether COBOL belongs in the modern stack. The question is how to make the rest of the stack work with COBOL." — Priya Kapoor, presenting GlobalBank's modernization strategy to the CTO
Walk into any technology conference and you will hear about containers, Kubernetes, CI/CD pipelines, microservices, and cloud-native architecture. Walk into any large bank, insurance company, or government agency and you will find COBOL running the core business logic. For too long, these two worlds have been treated as separate — the "modern" stack on one side, the mainframe on the other, with an uncomfortable gap in between.
That gap is closing. COBOL programs can be containerized. COBOL code can flow through CI/CD pipelines. COBOL services can participate in microservice architectures. COBOL developers can use VS Code. And mainframe infrastructure can be managed with the same Infrastructure as Code tools that manage cloud resources.
This chapter bridges the gap. You will learn the tools, techniques, and patterns that bring COBOL into the modern development and deployment ecosystem. Not by replacing COBOL, but by meeting it where it is and connecting it to where the rest of the industry has gone.
40.1 COBOL in Containers
Containers package an application and its dependencies into a portable, reproducible unit. Docker is the most widely used container runtime, and Kubernetes (often shortened to K8s) orchestrates containers at scale.
Why Containerize COBOL?
The value proposition is not about making COBOL "modern" — it is about operational consistency:
- Same deployment pipeline for COBOL and non-COBOL services
- Environment reproducibility — the same container runs identically in dev, test, and production
- Scaling — spin up more instances when load increases
- Isolation — each service runs in its own container with its own dependencies
📊 Where Containerized COBOL Makes Sense
- COBOL programs compiled with GnuCOBOL running on Linux
- Batch processing jobs that need elastic scaling
- COBOL microservices behind an API gateway
- Development and testing environments
- Cloud migration targets (when moving off mainframe)
Dockerfile for COBOL
Here is a practical Dockerfile that builds and runs a COBOL program:
# Stage 1: Build the COBOL program
FROM ubuntu:22.04 AS builder
# Install GnuCOBOL
RUN apt-get update && \
apt-get install -y gnucobol4 && \
rm -rf /var/lib/apt/lists/*
# Copy source code
WORKDIR /app/src
COPY src/*.cbl ./
COPY copybooks/*.cpy ./
# Compile COBOL programs
RUN mkdir -p /app/bin \
&& cobc -x -o /app/bin/acctinq acctinq.cbl \
&& cobc -x -o /app/bin/txnproc txnproc.cbl \
&& cobc -x -o /app/bin/rptgen rptgen.cbl
# Stage 2: Runtime image (smaller)
FROM ubuntu:22.04 AS runtime
# Install only the GnuCOBOL runtime (not compiler)
RUN apt-get update && \
apt-get install -y libcob4 && \
rm -rf /var/lib/apt/lists/*
# Create non-root user
RUN useradd -m cobolapp
USER cobolapp
# Copy compiled binaries from builder
COPY --from=builder /app/bin/ /app/bin/
COPY config/ /app/config/
COPY data/ /app/data/
WORKDIR /app
EXPOSE 8080
# Default: run the account inquiry service
ENTRYPOINT ["/app/bin/acctinq"]
💡 Key Insight: The multi-stage build is important. The first stage installs the full GnuCOBOL compiler (which is large). The second stage copies only the compiled binaries and the minimal runtime library, producing a much smaller container image.
Building and Running
# Build the container
docker build -t globalbank/acctinq:1.0 .
# Run interactively
docker run -it globalbank/acctinq:1.0
# Run with data volume mounted
docker run -v /data/accounts:/app/data \
globalbank/acctinq:1.0
# Run as a batch job
docker run --rm \
-v /data/input:/app/input \
-v /data/output:/app/output \
globalbank/txnproc:1.0
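For batch use, `docker run --rm` returns the COBOL program's RETURN-CODE as the container's exit status, so a scheduler wrapper can apply the same condition-code conventions a JCL step would. A minimal sketch, with the image name taken from the examples above; the 0/4/8 severity policy is a common site convention, not anything Docker or GnuCOBOL enforces:

```shell
#!/bin/sh
# Hypothetical batch wrapper: run a containerized COBOL step and apply
# JCL-style condition-code handling to its exit status.

classify_rc() {
  # COBOL convention: RETURN-CODE 0 = success, 4 = warning, 8+ = error
  rc=$1
  if [ "$rc" -eq 0 ]; then
    echo "OK"
  elif [ "$rc" -le 4 ]; then
    echo "WARNING"
  else
    echo "ERROR"
  fi
}

run_step() {
  # `docker run --rm` propagates the program's RETURN-CODE as its own
  # exit status, so the classification applies directly.
  docker run --rm \
    -v /data/input:/app/input \
    -v /data/output:/app/output \
    globalbank/txnproc:1.0
  classify_rc $?
}
```

A scheduler can then decide, much as JCL COND processing would, whether downstream steps should run after a WARNING.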
Kubernetes Deployment
For production workloads, Kubernetes manages container orchestration:
# kubernetes/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: acctinq-service
labels:
app: globalbank
component: account-inquiry
spec:
replicas: 3
selector:
matchLabels:
app: acctinq
template:
metadata:
labels:
app: acctinq
spec:
containers:
- name: acctinq
image: globalbank/acctinq:1.0
ports:
- containerPort: 8080
resources:
requests:
memory: "128Mi"
cpu: "250m"
limits:
memory: "256Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 10
periodSeconds: 30
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: acctinq-service
spec:
selector:
app: acctinq
ports:
- port: 80
targetPort: 8080
type: ClusterIP
⚠️ Important Distinction: Containerized COBOL running on Linux (with GnuCOBOL) is fundamentally different from COBOL running on z/OS. The language is the same, but the runtime environment, file system, and available facilities differ significantly. Mainframe COBOL programs that use CICS, DB2, VSAM, or JCL cannot be containerized without modification. Containerization is primarily for: (a) new COBOL development targeting Linux, (b) COBOL programs being migrated off the mainframe, or (c) development/testing environments.
40.2 Cloud COBOL
Major cloud providers now offer services for running COBOL workloads:
AWS Mainframe Modernization
AWS provides two approaches:
- Replatforming with the Micro Focus (now OpenText) runtime — runs COBOL programs on AWS EC2 instances with a mainframe-compatible runtime
- Refactoring with Blu Age — converts COBOL to Java automatically
# AWS CloudFormation for Micro Focus runtime
AWSTemplateFormatVersion: '2010-09-09'
Resources:
MainframeRuntime:
Type: AWS::M2::Environment
Properties:
Name: globalbank-runtime
EngineType: microfocus
InstanceType: M2.m5.large
SubnetIds:
- !Ref PrivateSubnet
SecurityGroupIds:
- !Ref MainframeSecurityGroup
Azure
Microsoft partners with vendors like Astadia and Micro Focus to provide:
- Azure Logic Apps for mainframe integration
- Azure API Management as an API gateway for COBOL services
- Virtual machines with COBOL runtimes
Google Cloud
GCP offers:
- Google Cloud Mainframe Modernization (in partnership with various vendors)
- Dual-write patterns that replicate mainframe data to BigQuery for analytics
IBM Cloud (z/OS in the Cloud)
IBM offers z/OS as a cloud service through:
- IBM Z as a Service (ZaaS) — full z/OS environments in the cloud
- Wazi as a Service — development and test z/OS instances
- Hyper Protect Virtual Servers — secure workloads on IBM Cloud
💡 Key Insight: Cloud COBOL is not a single approach — it is a spectrum. At one end, you run actual z/OS in IBM's cloud (minimal change). At the other end, you convert COBOL to Java and run on any cloud (maximum change). Most organizations choose something in between.
40.3 DevOps for COBOL
DevOps — the combination of development and operations practices — has transformed how software is built and deployed. Applying these practices to COBOL requires some adaptation, but the core principles translate directly.
Git for COBOL Source Code
Mainframe source code has traditionally lived in PDS (Partitioned Data Set) members managed by tools like Endevor or ChangeMan. Modern shops are migrating to Git:
globalbank-core/
├── src/
│ ├── cobol/
│ │ ├── ACCTINQ.cbl
│ │ ├── TXNPROC.cbl
│ │ ├── BALCALC.cbl
│ │ └── RPTDAILY.cbl
│ ├── copybooks/
│ │ ├── ACCTCPY.cpy
│ │ ├── TXNCPY.cpy
│ │ └── ERRCPY.cpy
│ ├── bms/
│ │ ├── ACCTSCR.bms
│ │ └── TXNSCR.bms
│ └── jcl/
│ ├── COMPILE.jcl
│ ├── NIGHTLY.jcl
│ └── TESTS.jcl
├── test/
│ ├── unit/
│ │ ├── ACCTINQ-TEST.cbl
│ │ └── BALCALC-TEST.cbl
│ └── data/
│ ├── test-accounts.dat
│ └── test-transactions.dat
├── config/
│ └── compile-options.conf
├── .github/
│ └── workflows/
│ └── cobol-ci.yml
└── README.md
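Standing up this layout for a new repository is easily scripted. A small sketch, with directory and file names taken from the tree above:

```shell
#!/bin/sh
# Scaffold the repository layout shown above for a new COBOL module.
scaffold() {
  root=$1
  mkdir -p "$root/src/cobol" "$root/src/copybooks" "$root/src/bms" \
           "$root/src/jcl" "$root/test/unit" "$root/test/data" \
           "$root/config" "$root/.github/workflows"
  touch "$root/README.md" \
        "$root/config/compile-options.conf" \
        "$root/.github/workflows/cobol-ci.yml"
}
```

Running `scaffold globalbank-core` creates the skeleton; sources, copybooks, and tests are then added under the appropriate directories.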
Branching Strategy
The same branching strategies used for Java or Python work for COBOL:
main (production code)
└── develop (integration branch)
├── feature/JIRA-1234-new-fee-calc
├── feature/JIRA-1235-stmt-format
└── hotfix/JIRA-1236-interest-rounding
✅ Best Practice: Use feature branches for all changes, no matter how small. This enables code review before merging, which is especially important for COBOL where a subtle bug can affect millions of financial records.
Code Review for COBOL
Pull request reviews for COBOL should check:
- Correct use of scope terminators (END-IF, END-PERFORM, END-EVALUATE)
- Proper file status checking after every I/O operation
- Numeric field overflow potential (PIC 9(5) receiving a 6-digit value)
- COMP-3 vs. DISPLAY usage for financial calculations
- Copybook version consistency
- Impact on downstream programs and job streams
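Some of these checks can run automatically before a human ever looks at the pull request. As one minimal example, assuming fixed-format source (where the compiler ignores columns 73-80), a pre-commit hook can flag lines whose code spills past column 72:

```shell
#!/bin/sh
# Flag fixed-format COBOL lines with non-blank text beyond column 72.
# Columns 73-80 are historically a sequence-number area that the
# compiler ignores, so code there silently vanishes at compile time.
check_cols() {
  awk 'length($0) > 72 && substr($0, 73) !~ /^[[:space:]]*$/ {
         printf "%s:%d: text beyond column 72\n", FILENAME, NR
         bad = 1
       }
       END { exit bad }' "$1"
}
```

Wired into .git/hooks/pre-commit, a nonzero exit blocks the commit; the judgment-based items on the checklist (COMP-3 usage, copybook versions, downstream impact) still need a human reviewer.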
40.4 CI/CD Pipelines for COBOL
Continuous Integration (CI) automatically builds and tests code when changes are pushed. Continuous Deployment (CD) automatically deploys tested code to production.
GitHub Actions Pipeline
# .github/workflows/cobol-ci.yml
name: COBOL CI Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
build:
runs-on: ubuntu-latest
container:
image: ubuntu:22.04
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Install GnuCOBOL
run: |
apt-get update
apt-get install -y gnucobol4
- name: Compile COBOL programs
run: |
  mkdir -p bin
  cd src/cobol
  for f in *.cbl; do
    echo "Compiling $f..."
    cobc -x -I ../copybooks -o ../../bin/${f%.cbl} $f
    if [ $? -ne 0 ]; then
      echo "COMPILATION FAILED: $f"
      exit 1
    fi
  done
- name: Run unit tests
run: |
cd test/unit
for f in *-TEST.cbl; do
echo "Running test $f..."
cobc -x -I ../../src/copybooks $f
./${f%.cbl}
if [ $? -ne 0 ]; then
echo "TEST FAILED: $f"
exit 1
fi
done
- name: Run COBOL-Check tests
run: |
# Install COBOL-Check
wget https://github.com/openmainframeproject/cobol-check/releases/latest/download/cobol-check.jar
java -jar cobol-check.jar \
--programs ACCTINQ TXNPROC BALCALC
- name: Archive build artifacts
uses: actions/upload-artifact@v4
with:
name: cobol-binaries
path: bin/
deploy-test:
needs: build
if: github.ref == 'refs/heads/develop'
runs-on: ubuntu-latest
steps:
- name: Deploy to test environment
run: |
echo "Deploying to test..."
# Deploy logic here
deploy-prod:
needs: build
if: github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
environment: production
steps:
- name: Deploy to production
run: |
echo "Deploying to production..."
# Production deployment with approvals
Jenkins Pipeline
Many mainframe shops use Jenkins because of its rich plugin ecosystem for z/OS:
// Jenkinsfile
pipeline {
agent any
stages {
stage('Checkout') {
steps {
git branch: 'develop',
url: 'https://github.com/globalbank/core.git'
}
}
stage('Compile on z/OS') {
steps {
script {
// Submit compile JCL to z/OS via Zowe
sh '''
zowe jobs submit local-file \
src/jcl/COMPILE.jcl \
--wait-for-output
'''
}
}
}
stage('Unit Test') {
steps {
script {
sh '''
zowe jobs submit local-file \
test/jcl/UNITTEST.jcl \
--wait-for-output
'''
}
}
}
stage('Integration Test') {
steps {
script {
sh '''
zowe jobs submit local-file \
test/jcl/INTTEST.jcl \
--wait-for-output
'''
}
}
}
stage('Deploy to Test') {
when {
branch 'develop'
}
steps {
sh '''
zowe files upload file-to-data-set \
bin/ACCTINQ \
GBANK.TEST.LOADLIB(ACCTINQ)
'''
}
}
}
post {
failure {
mail to: 'cobol-team@globalbank.com',
subject: "Build Failed: ${currentBuild.fullDisplayName}",
body: "Check: ${env.BUILD_URL}"
}
}
}
📊 IBM Dependency Based Build (DBB)
IBM's DBB tool understands COBOL program dependencies — it knows which programs need recompiling when a copybook changes:
// DBB build script (Groovy)
@groovy.transform.BaseScript
com.ibm.dbb.groovy.ScriptLoader baseScript
def buildList = argMap.buildList
def copybooks = argMap.copybookChanges
// Find all programs that COPY the changed copybooks
def impactedPrograms = analyzeCopybookImpact(copybooks)
// Compile only impacted programs
impactedPrograms.each { program ->
compile(program)
linkedit(program)
}
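Where DBB is not available, for example in a GnuCOBOL project living entirely in Git, the same impact question can be approximated with grep. A rough sketch only: a real dependency tool parses the source properly, while this one misses edge cases such as COPY statements inside comment lines or COPY ... REPLACING split across lines:

```shell
#!/bin/sh
# Poor man's copybook impact analysis: list the programs in a source
# directory that COPY a given copybook, so only those are rebuilt.
impacted_programs() {
  copybook=$1   # member name, e.g. ACCTCPY
  srcdir=$2
  grep -l -i -E "COPY[[:space:]]+['\"]?${copybook}['\"]?" "$srcdir"/*.cbl 2>/dev/null
}
```

A CI job could feed the changed .cpy names from `git diff --name-only` into this function and recompile only the programs it prints.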
40.5 COBOL and Microservices Architecture
Microservices decompose a large application into small, independently deployable services. COBOL programs can participate in microservice architectures as individual services.
The COBOL Microservice Pattern
┌───────────────────────────────────────────┐
│ API Gateway │
└──┬─────────┬──────────┬──────────┬────────┘
│ │ │ │
┌──▼──┐ ┌───▼──┐ ┌───▼──┐ ┌───▼──────┐
│Acct │ │ Txn │ │ Stmt │ │ Fraud │
│Svc │ │ Svc │ │ Svc │ │ Svc │
│COBOL│ │COBOL │ │COBOL │ │ Python │
└──┬──┘ └───┬──┘ └───┬──┘ └───┬──────┘
│ │ │ │
┌──▼──┐ ┌───▼──┐ ┌───▼──┐ ┌───▼──────┐
│Acct │ │ Txn │ │ Stmt │ │ ML │
│ DB │ │ DB │ │Store │ │ Model │
└─────┘ └──────┘ └──────┘ └──────────┘
Each COBOL program becomes a microservice when wrapped with:
1. An HTTP listener (e.g., a thin Python/Node.js wrapper, or CICS + z/OS Connect)
2. A health check endpoint
3. A well-defined API contract (OpenAPI specification)
4. Independent data storage
API-First Design with COBOL Backends
Start with the API contract, then implement in COBOL:
# openapi.yaml - Account Service API
openapi: 3.0.3
info:
title: GlobalBank Account Service
version: 1.0.0
description: Account inquiry and management backed by COBOL
paths:
/accounts/{accountNumber}:
get:
summary: Get account details
operationId: getAccount
parameters:
- name: accountNumber
in: path
required: true
schema:
type: string
pattern: '^[0-9]{10}$'
responses:
'200':
description: Account found
content:
application/json:
schema:
$ref: '#/components/schemas/Account'
'404':
description: Account not found
  /accounts/{accountNumber}/balance:
    get:
      summary: Get account balance
      operationId: getBalance
      parameters:
        - name: accountNumber
          in: path
          required: true
          schema:
            type: string
            pattern: '^[0-9]{10}$'
responses:
'200':
description: Balance retrieved
content:
application/json:
schema:
$ref: '#/components/schemas/Balance'
components:
schemas:
Account:
type: object
properties:
accountNumber:
type: string
accountName:
type: string
accountType:
type: string
enum: [CHK, SAV, MMA, CD]
balance:
type: number
format: decimal
status:
type: string
enum: [ACTIVE, CLOSED, FROZEN]
openDate:
type: string
format: date
Balance:
type: object
properties:
currentBalance:
type: number
availableBalance:
type: number
asOfTimestamp:
type: string
format: date-time
The COBOL program implements the business logic behind this API. The HTTP/JSON layer is handled by infrastructure.
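That division of labor hinges on the wrapper marshalling JSON field values into the fixed-position record a COBOL linkage section expects. A sketch, assuming a hypothetical layout of COMM-ACCT-NO PIC X(10), COMM-FUNCTION PIC X(8), and COMM-AMOUNT PIC 9(9)V99; the real copybook must be mirrored byte for byte:

```shell
#!/bin/sh
# Sketch of the marshalling the HTTP/JSON layer performs: individual
# field values in, fixed-position COBOL record out. The layout below is
# hypothetical and stands in for the actual copybook definition.
build_commarea() {
  acct=$1; func=$2; amount_cents=$3
  # PIC X fields: left-justified, space-padded. PIC 9 fields:
  # right-justified, zero-padded; V99 is an implied decimal point,
  # so the amount is passed in cents.
  printf '%-10s%-8s%011d' "$acct" "$func" "$amount_cents"
}
```

The reverse direction (fixed-width response record back to JSON) is the same idea, slicing the record at the copybook's field offsets.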
Service Communication Patterns
COBOL microservices communicate with other services through:
- Synchronous REST calls (for real-time queries)
- Asynchronous MQ messages (for events and notifications)
- Database sharing (for legacy integration — not recommended for new designs)
⚠️ Microservice Anti-Pattern: Do not create COBOL microservices that share a database. The whole point of microservices is independent data ownership. If two services need the same data, they should communicate through APIs or events.
40.6 Modern IDEs for COBOL
The days of editing COBOL in a 3270 terminal with ISPF are numbered (though ISPF remains remarkably efficient for experienced users). Modern IDEs bring features that COBOL developers have long envied in other languages.
VS Code with COBOL Extensions
Microsoft's Visual Studio Code has become the most popular IDE for modern COBOL development, thanks to extensions from IBM, Broadcom, and the open-source community:
IBM Z Open Editor provides:
- COBOL syntax highlighting with Enterprise COBOL support
- COPYBOOK resolution and navigation (Ctrl+click on a COPY statement)
- Outline view showing all paragraphs, sections, and data items
- Hover documentation for COBOL verbs
- Problem detection (undeclared variables, unreachable code)
Zowe Explorer provides:
- Browse and edit mainframe datasets from VS Code
- Submit JCL and view output
- Manage z/OS Unix files
- Interactive debug sessions
// .vscode/settings.json for COBOL project
{
"cobol.copybookDirectories": [
"src/copybooks",
"src/copybooks/db2"
],
"cobol.dialect": "enterprise",
"cobol.linterEnabled": true,
"cobol.lineFormat": "fixed",
"files.associations": {
"*.cbl": "cobol",
"*.cpy": "cobol",
"*.cob": "cobol"
},
"editor.rulers": [6, 7, 11, 72, 80],
"editor.wordWrap": "off"
}
Eclipse with COBOL Plugins
IBM's IDz (IBM Developer for z/OS) and Broadcom's Explorer for Mainframe are Eclipse-based IDEs that provide deep mainframe integration including interactive debugging of COBOL programs running on z/OS.
IntelliJ with COBOL Support
JetBrains offers COBOL support through plugins, providing the same refactoring and navigation features that Java developers enjoy.
💡 Key Insight: The shift from ISPF to VS Code is not just about aesthetics. Modern IDEs enable practices that were difficult or impossible in traditional mainframe editors: real-time linting, multi-file search-and-replace, Git integration, automated testing, and collaborative code review.
40.7 Automated Testing in CI/CD
COBOL-Check
COBOL-Check (from the Open Mainframe Project) is a unit testing framework specifically designed for COBOL:
* ACCTINQ-TEST.cut (COBOL-Check test file)
* Tests for the ACCTINQ program
TESTSUITE 'Account Inquiry Tests'
TESTCASE 'Valid account returns balance'
MOVE '0012345678' TO COMM-ACCT-NO
MOVE 'INQUIRY' TO COMM-FUNCTION
PERFORM 1000-BALANCE-INQUIRY
EXPECT COMM-RETURN-CODE TO BE '00'
EXPECT COMM-BALANCE TO BE NUMERIC
EXPECT COMM-ACCT-NAME NOT TO BE SPACES
TESTCASE 'Invalid account returns not-found'
MOVE '9999999999' TO COMM-ACCT-NO
MOVE 'INQUIRY' TO COMM-FUNCTION
PERFORM 1000-BALANCE-INQUIRY
EXPECT COMM-RETURN-CODE TO BE '04'
EXPECT COMM-ERROR-MSG NOT TO BE SPACES
TESTCASE 'Invalid function returns error'
MOVE '0012345678' TO COMM-ACCT-NO
MOVE 'BADCALL' TO COMM-FUNCTION
PERFORM 0000-MAIN
EXPECT COMM-RETURN-CODE TO BE '99'
Integration Testing
Integration tests verify that COBOL programs work correctly with their dependencies (files, databases, MQ):
#!/bin/bash
# integration-test.sh
echo "=== COBOL Integration Test Suite ==="
# Step 1: Create test data
./create-test-data.sh
# Step 2: Run the COBOL program
echo "Running TXNPROC..."
./bin/txnproc < test/data/test-transactions.dat \
> test/output/actual-output.dat 2>&1
RC=$?
# Step 3: Check return code
if [ $RC -ne 0 ]; then
echo "FAIL: TXNPROC returned RC=$RC (expected 0)"
exit 1
fi
# Step 4: Compare output with expected
diff test/output/actual-output.dat \
test/expected/expected-output.dat > /dev/null 2>&1
if [ $? -ne 0 ]; then
echo "FAIL: Output does not match expected"
diff test/output/actual-output.dat \
test/expected/expected-output.dat
exit 1
fi
echo "PASS: All integration tests passed"
Test Coverage
Tools like IBM Debug Tool and Compuware Xpediter can measure COBOL test coverage — which paragraphs, branches, and conditions are exercised by your tests. Aim for:
- 80%+ paragraph coverage for critical programs
- 100% coverage of error handling paths (these are the paths that matter most)
- Branch coverage for all EVALUATE and nested IF statements
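Those tools instrument the load module on z/OS. Purely as an illustration of the paragraph-coverage metric itself (not a substitute for the real tools), paragraph names declared in fixed-format source can be compared against a list of paragraphs recorded during a test run:

```shell
#!/bin/sh
# Back-of-envelope paragraph coverage: compare paragraphs declared in a
# fixed-format source file with paragraphs listed in a trace file (one
# name per line), and print the percentage covered.
coverage_pct() {
  src=$1; trace=$2
  # A paragraph name starts in area A (column 8) and its line ends with
  # a period. (Crude: this also matches division/section headers.)
  declared=$(awk '/^.{7}[A-Za-z0-9]/ && /\.[[:space:]]*$/ {
                    name = substr($0, 8); sub(/\..*$/, "", name); print name
                  }' "$src" | sort -u)
  total=$(echo "$declared" | grep -c .)
  [ "$total" -gt 0 ] || { echo 0; return; }
  hit=$(echo "$declared" | grep -c -x -F -f "$trace")
  echo $(( hit * 100 / total ))
}
```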
40.8 Infrastructure as Code for Mainframe
Infrastructure as Code (IaC) manages infrastructure through configuration files rather than manual processes. On the mainframe, this means managing:
- Dataset allocations
- CICS region definitions
- DB2 objects
- MQ queue definitions
- Security profiles (RACF)
Ansible for z/OS
IBM's Red Hat Ansible Certified Content for IBM Z brings IaC to the mainframe:
# ansible/playbooks/deploy-cobol.yml
---
- name: Deploy COBOL programs to z/OS
hosts: mainframe_lpar
collections:
- ibm.ibm_zos_core
tasks:
- name: Upload COBOL source to PDS
zos_copy:
src: "src/cobol/ACCTINQ.cbl"
dest: "GBANK.SOURCE.COBOL(ACCTINQ)"
encoding:
from: UTF-8
to: IBM-1047
- name: Submit compile JCL
zos_job_submit:
src: "jcl/COMPILE.jcl"
location: LOCAL
wait_time_s: 120
register: compile_result
- name: Check compile result
assert:
that:
- compile_result.jobs[0].ret_code.code <= 4
fail_msg: "Compile failed with RC={{ compile_result.jobs[0].ret_code.code }}"
- name: Copy load module to test library
zos_copy:
src: "GBANK.BUILD.LOADLIB(ACCTINQ)"
dest: "GBANK.TEST.LOADLIB(ACCTINQ)"
remote_src: true
    - name: Define CICS program resource
      # CEDA/CEMT are CICS transactions, not MVS console commands, so they
      # are routed through the region's MVS MODIFY command (the region name
      # CICSPROD is illustrative)
      zos_operator:
        cmd: "F CICSPROD,CEDA DEFINE PROGRAM(ACCTINQ) GROUP(GBANKGRP) LANGUAGE(COBOL)"
    - name: Install CICS program
      zos_operator:
        cmd: "F CICSPROD,CEMT SET PROGRAM(ACCTINQ) NEWCOPY"
Terraform for Mainframe-Adjacent Resources
While Terraform does not directly manage z/OS resources, it manages the cloud infrastructure that surrounds the mainframe:
# terraform/main.tf
# API Gateway for COBOL services
resource "aws_api_gateway_rest_api" "globalbank" {
name = "globalbank-api"
description = "API Gateway for GlobalBank COBOL services"
}
resource "aws_api_gateway_resource" "accounts" {
rest_api_id = aws_api_gateway_rest_api.globalbank.id
parent_id = aws_api_gateway_rest_api.globalbank.root_resource_id
path_part = "accounts"
}
# Integration with z/OS Connect
resource "aws_api_gateway_integration" "account_inquiry" {
rest_api_id = aws_api_gateway_rest_api.globalbank.id
resource_id = aws_api_gateway_resource.accounts.id
http_method = "GET"
type = "HTTP_PROXY"
uri = "https://zosconnect.globalbank.com/api/v1/accounts"
integration_http_method = "GET"
}
40.9 Zowe: The Open Source Mainframe Interface
Zowe is an open-source framework that provides modern interfaces to z/OS. It is to the mainframe what AWS CLI is to AWS.
Zowe CLI
# List datasets
zowe files list data-set "GBANK.SOURCE.*"
# Download a COBOL source member
zowe files download data-set \
"GBANK.SOURCE.COBOL(ACCTINQ)" \
--file src/cobol/ACCTINQ.cbl
# Upload modified source
zowe files upload file-to-data-set \
src/cobol/ACCTINQ.cbl \
"GBANK.SOURCE.COBOL(ACCTINQ)"
# Submit JCL and wait for completion
zowe jobs submit data-set \
"GBANK.JCL(COMPILE)" \
--wait-for-output
# View job output
zowe jobs view spool-file-by-id JOB12345 2
# Issue console commands
zowe console issue command "D A,L"
Zowe in CI/CD
Zowe CLI integrates naturally into CI/CD pipelines, bridging the gap between Git-based workflows and z/OS:
# GitHub Actions with Zowe
- name: Configure Zowe CLI
run: |
zowe profiles create zosmf-profile mainframe \
--host ${{ secrets.ZOS_HOST }} \
--port ${{ secrets.ZOS_PORT }} \
--user ${{ secrets.ZOS_USER }} \
--password ${{ secrets.ZOS_PASS }} \
--reject-unauthorized false   # disables TLS certificate verification; use only for lab/test systems
- name: Upload and compile
run: |
zowe files upload dir-to-pds \
src/cobol/ \
"GBANK.SOURCE.COBOL"
zowe jobs submit data-set \
"GBANK.JCL(BUILDALL)" \
--wait-for-output
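The workflow above waits for the job's output but does not yet gate on its completion code. A sketch of that gate: Zowe CLI reports the retcode as a string such as "CC 0000", "CC 0008", or "ABEND S0C7" (with --rfj the value arrives inside a JSON response; the exact shape varies by CLI version, so verify against your installation):

```shell
#!/bin/sh
# Turn a z/OS job retcode string into a pipeline pass/fail decision.
gate_on_retcode() {
  retcode=$1   # e.g. "CC 0004"
  maxcc=$2     # highest acceptable condition code, e.g. 4
  case "$retcode" in
    "CC "*)
      # strip the "CC " prefix and leading zeros for numeric comparison
      cc=$(printf '%s' "${retcode#CC }" | sed 's/^0*//')
      [ -n "$cc" ] || cc=0
      if [ "$cc" -le "$maxcc" ]; then echo PASS; else echo FAIL; fi
      ;;
    *)
      echo FAIL   # ABENDs, JCL errors, security failures
      ;;
  esac
}
```

A build step would extract the retcode from the Zowe response and exit nonzero whenever the gate prints FAIL.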
40.10 GlobalBank: Setting Up CI/CD for GLOBALBANK-CORE
When Priya Kapoor proposed CI/CD for GlobalBank's core banking modules, the initial reaction from senior management was skepticism. "We have been deploying COBOL changes for 30 years with Endevor and change management forms," said the VP of Technology. "Why fix what is not broken?"
Priya's answer was data-driven:
📊 Deployment Metrics — Before CI/CD
- Average time from code complete to production: 14 business days
- Deployments per month: 2 (bi-weekly release windows)
- Deployment success rate: 87% (13% required rollback)
- Time to detect compile errors: 4-6 hours (overnight compile batch)
- Time to detect integration issues: 2-3 days (manual testing)
The Implementation
Priya's team implemented CI/CD in phases:
Phase 1: Source Control Migration (Months 1-2)
- Migrated 1,247 COBOL programs from Endevor to Git
- Maintained bi-directional sync with Endevor during the transition
- Established a branching strategy (main/develop/feature)
Phase 2: Automated Build (Month 3)
- Implemented IBM DBB for dependency-aware builds
- Automated compile, link-edit, and CICS NEWCOPY
- Builds triggered on every push to the develop branch
Phase 3: Automated Testing (Months 4-5)
- Added COBOL-Check unit tests for 47 critical programs
- Implemented an integration test suite using Galasa
- Added regression tests for control total verification
Phase 4: Deployment Pipeline (Month 6)
- GitHub Actions pipeline: build, test, deploy to test, deploy to staging
- Production deployments gated by manual approval
- Automated rollback on test failure
The Results
📊 Deployment Metrics — After CI/CD
- Average time from code complete to production: 3 business days
- Deployments per month: 12 (on-demand)
- Deployment success rate: 98%
- Time to detect compile errors: 3 minutes
- Time to detect integration issues: 15 minutes
Derek Washington, who had come from a Java shop, was instrumental in the implementation: "The COBOL code was already well-structured. We just gave it the same development infrastructure that every other language takes for granted."
Maria Chen was more cautious: "I like it. But I still want a human to approve every production deployment. These programs move real money."
⚖️ Theme — The Modernization Spectrum: GlobalBank's CI/CD implementation changed nothing about the COBOL programs themselves. The same code, the same copybooks, the same JCL. What changed was the velocity and reliability of getting changes from a developer's workstation to production. This is modernization at the process level, not the code level — and it delivers immediate, measurable value.
40.11 MedClaim: Containerized Claim Processing Service
At MedClaim, James Okafor faced a different challenge. A new business unit needed to process dental claims using the same adjudication rules as medical claims, but on a separate infrastructure for regulatory reasons.
The Solution: Containerized COBOL
James extracted the core adjudication logic (ADJCORE subprogram) and wrapped it in a containerized service:
# Dockerfile for MedClaim Dental Adjudication
FROM gnucobol/gnucobol:3.2 AS builder
COPY src/cobol/ADJCORE.cbl /app/src/
COPY src/cobol/DENTALWRAP.cbl /app/src/
COPY src/copybooks/*.cpy /app/src/
WORKDIR /app/src
RUN cobc -x -o /app/bin/dental-adjudicate \
DENTALWRAP.cbl ADJCORE.cbl
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y libcob4
COPY --from=builder /app/bin/ /app/bin/
COPY config/ /app/config/
EXPOSE 8080
CMD ["/app/bin/dental-adjudicate"]
The DENTALWRAP program added an HTTP listener (using GnuCOBOL's system call capabilities) that received JSON claim data, parsed it into the ADJCORE linkage section format, called ADJCORE, and returned the result as JSON.
The Architecture
┌──────────────┐
│ Dental Portal│
│ (React App) │
└──────┬───────┘
│ HTTPS/JSON
┌──────▼───────┐
│ API Gateway │
│ (Kong) │
└──────┬───────┘
│
┌──────▼────────────────┐
│ Kubernetes Cluster │
│ ┌────────┐ ┌────────┐ │
│ │ADJUDIC │ │ADJUDIC │ │
│ │Pod #1 │ │Pod #2 │ │ (auto-scales 2-10 pods)
│ │(COBOL) │ │(COBOL) │ │
│ └────┬───┘ └───┬────┘ │
│ │ │ │
│ ┌────▼─────────▼────┐ │
│ │ PostgreSQL │ │
│ │ (claim data) │ │
│ └───────────────────┘ │
└───────────────────────┘
Results
The containerized COBOL adjudication service processed dental claims with the exact same business rules as the mainframe medical claims system. When adjudication rules changed, the ADJCORE subprogram was updated once and deployed to both environments.
🔵 MedClaim Insight: "The most surprising thing," James said, "was how fast COBOL ran in a container. The adjudication of a single claim took 2 milliseconds on the mainframe and 4 milliseconds in the container. The difference was negligible for our volumes."
40.12 Try It Yourself: Building a COBOL CI/CD Pipeline
Student Mainframe Lab Exercise
Build a CI/CD pipeline for a simple COBOL project:
- Create a GitHub repository with the standard directory structure (src/cobol, src/copybooks, test/)
- Write two COBOL programs: a simple calculator (CALC.cbl) and a test program (CALC-TEST.cbl)
- Create a GitHub Actions workflow that:
  - Installs GnuCOBOL
  - Compiles both programs
  - Runs the test program
  - Reports pass/fail status
- Verify that pushing a breaking change (e.g., a syntax error) causes the pipeline to fail
- Add a Dockerfile that builds and runs your COBOL program in a container
This exercise does not require mainframe access — GnuCOBOL on Linux (via GitHub Actions runners) is sufficient.
40.13 Security Considerations
Bringing COBOL into the modern stack introduces new security surfaces:
- Container security: Scan container images for vulnerabilities, run as non-root, use read-only file systems
- API security: Authenticate all API calls, use TLS, implement rate limiting
- Secret management: Never hardcode credentials in COBOL source or JCL; use vault services (HashiCorp Vault, AWS Secrets Manager)
- Network security: Encrypt data in transit between containers and the mainframe
- Audit: Log all deployments, who deployed what and when
# Kubernetes security context
securityContext:
runAsNonRoot: true
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
40.14 Docker Multi-Stage Builds in Depth
The Dockerfile in section 40.1 introduced multi-stage builds. Let us explore this pattern more thoroughly, because it is the foundation for containerized COBOL in production.
Why Multi-Stage Matters for COBOL
The GnuCOBOL compiler and its development dependencies consume approximately 400MB of disk space. The compiled COBOL binary and runtime library need approximately 30MB. Without multi-stage builds, your production container carries 370MB of unnecessary weight — the compiler, header files, and build tools that are never used at runtime.
# ADVANCED multi-stage Dockerfile for COBOL microservice
# Stage 1: Build environment
FROM ubuntu:22.04 AS builder
# Install build dependencies
RUN apt-get update && \
apt-get install -y \
gnucobol4 \
libcjson-dev \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
# Copy source code and copybooks
WORKDIR /build
COPY src/cobol/*.cbl ./src/
COPY src/copybooks/*.cpy ./copybooks/
# Compile all programs with optimization
RUN mkdir -p /build/bin && cd src && \
for f in *.cbl; do \
echo "Compiling ${f}..." && \
cobc -x -O2 -I ../copybooks \
-o /build/bin/${f%.cbl} ${f} || exit 1; \
done
# Run compile-time tests
COPY test/unit/*.cbl ./test/
RUN cd test && \
for f in *-TEST.cbl; do \
echo "Compiling test ${f}..." && \
cobc -x -I ../copybooks ${f} -o ${f%.cbl} && \
./${f%.cbl} || exit 1; \
done
# Stage 2: Minimal runtime image
FROM ubuntu:22.04 AS runtime
# Install ONLY the runtime library (not compiler)
RUN apt-get update && \
apt-get install -y --no-install-recommends \
libcob4 \
libcjson1 \
ca-certificates \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
# Security: non-root user, read-only filesystem
RUN groupadd -r cobolapp && \
useradd -r -g cobolapp -d /app -s /sbin/nologin cobolapp && \
mkdir -p /app/bin /app/config /app/data /app/logs && \
chown -R cobolapp:cobolapp /app
# Copy ONLY compiled binaries from builder
COPY --from=builder --chown=cobolapp:cobolapp \
/build/bin/ /app/bin/
COPY --chown=cobolapp:cobolapp config/ /app/config/
USER cobolapp
WORKDIR /app
# Health check endpoint (if program supports it)
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
CMD /app/bin/healthcheck || exit 1
EXPOSE 8080
ENTRYPOINT ["/app/bin/acctinq"]
📊 Image Size Comparison
| Approach | Image Size | Security Surface |
|---|---|---|
| Single-stage (compiler included) | ~480MB | Large (compiler, build tools) |
| Multi-stage (runtime only) | ~85MB | Small (minimal dependencies) |
| Multi-stage + distroless base | ~45MB | Minimal (no shell, no package manager) |
docker-compose for Multi-Service COBOL
When multiple COBOL microservices work together, docker-compose orchestrates them:
# docker-compose.yml
version: '3.8'

services:
  account-service:
    build:
      context: ./services/account
      dockerfile: Dockerfile
    image: globalbank/account-svc:1.0
    ports:
      - "8081:8080"
    volumes:
      - account-data:/app/data
    environment:
      - DB_HOST=postgres
      - LOG_LEVEL=INFO
    depends_on:
      - postgres
    restart: unless-stopped

  transaction-service:
    build:
      context: ./services/transaction
      dockerfile: Dockerfile
    image: globalbank/txn-svc:1.0
    ports:
      - "8082:8080"
    volumes:
      - txn-data:/app/data
    environment:
      - DB_HOST=postgres
      - MQ_HOST=rabbitmq
    depends_on:
      - postgres
      - rabbitmq
    restart: unless-stopped

  report-service:
    build:
      context: ./services/report
      dockerfile: Dockerfile
    image: globalbank/rpt-svc:1.0
    volumes:
      - report-output:/app/output
    environment:
      - DB_HOST=postgres

  postgres:
    image: postgres:15
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=globalbank
      - POSTGRES_USER=gbuser
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"

volumes:
  account-data:
  txn-data:
  report-output:
  pgdata:

secrets:
  db_password:
    file: ./secrets/db_password.txt
40.15 Kubernetes Deployment in Depth
The Kubernetes YAML in section 40.1 showed a basic deployment. Production COBOL workloads on Kubernetes require additional configuration.
Horizontal Pod Autoscaler
Scale COBOL services based on demand:
# kubernetes/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: acctinq-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: acctinq-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Pods
          value: 2
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1
          periodSeconds: 120
ConfigMap for COBOL Parameters
Externalize COBOL program parameters using Kubernetes ConfigMaps:
# kubernetes/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: acctinq-config
data:
  checkpoint-interval: "5000"
  max-errors: "100"
  log-level: "INFO"
  db-connection-pool-size: "10"
---
# Reference in deployment
spec:
  containers:
    - name: acctinq
      image: globalbank/acctinq:1.0
      envFrom:
        - configMapRef:
            name: acctinq-config
      volumeMounts:
        - name: config-volume
          mountPath: /app/config
  volumes:
    - name: config-volume
      configMap:
        name: acctinq-config
Kubernetes CronJob for COBOL Batch
COBOL batch programs can run as Kubernetes CronJobs:
# kubernetes/cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-report-gen
spec:
  schedule: "0 4 * * *"   # 4:00 AM daily
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 7
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 3600
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: rptgen
              image: globalbank/rptgen:1.0
              volumeMounts:
                - name: input-data
                  mountPath: /app/input
                  readOnly: true
                - name: output-data
                  mountPath: /app/output
              resources:
                requests:
                  memory: "512Mi"
                  cpu: "500m"
                limits:
                  memory: "1Gi"
                  cpu: "1000m"
          volumes:
            - name: input-data
              persistentVolumeClaim:
                claimName: daily-input-pvc
            - name: output-data
              persistentVolumeClaim:
                claimName: report-output-pvc
40.16 GitHub Actions Pipelines for COBOL: Advanced Patterns
The basic pipeline in section 40.4 compiles and tests. Production pipelines need more.
Complete CI/CD Pipeline with Quality Gates
# .github/workflows/cobol-full-pipeline.yml
name: COBOL Full Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  COBOL_STANDARD: cobol2014

jobs:
  # Stage 1: Static Analysis
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install GnuCOBOL
        run: |
          sudo apt-get update
          sudo apt-get install -y gnucobol4
      - name: Check COBOL source formatting
        run: |
          echo "Checking column boundaries..."
          for f in src/cobol/*.cbl; do
            # Check for lines exceeding column 72
            awk 'length > 72 && !/^\*/ {
              print FILENAME ":" NR ": line exceeds col 72"
              err=1
            } END { exit err }' "$f"
          done
      - name: Check for common anti-patterns
        run: |
          echo "Scanning for GO TO statements..."
          for f in src/cobol/*.cbl; do
            if grep -n "GO TO" "$f"; then
              echo "WARNING: GO TO found in $f"
            fi
          done
          echo "Checking for missing scope terminators..."
          for f in src/cobol/*.cbl; do
            IF_COUNT=$(grep -c "^.......\s*IF " "$f" || true)
            ENDIF_COUNT=$(grep -c "END-IF" "$f" || true)
            if [ "$IF_COUNT" -ne "$ENDIF_COUNT" ]; then
              echo "WARNING: IF/END-IF mismatch in $f"
              echo "  IF count: $IF_COUNT, END-IF count: $ENDIF_COUNT"
            fi
          done
      - name: Check file status handling
        run: |
          for f in src/cobol/*.cbl; do
            READ_COUNT=$(grep -c "READ " "$f" || true)
            STATUS_CHECK=$(grep -c "FILE.STATUS\|WS-.*-STATUS" "$f" \
              || true)
            if [ "$READ_COUNT" -gt "$STATUS_CHECK" ]; then
              echo "WARNING: $f has $READ_COUNT READs but only"
              echo "  $STATUS_CHECK status references"
            fi
          done

  # Stage 2: Build and Unit Test
  build-test:
    needs: analyze
    runs-on: ubuntu-latest
    container:
      image: ubuntu:22.04
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: |
          apt-get update
          apt-get install -y gnucobol4 default-jre wget
      - name: Compile all programs
        run: |
          mkdir -p bin
          cd src/cobol
          for f in *.cbl; do
            echo "::group::Compiling $f"
            # Capture the return code with || so the step's -e option
            # does not abort before we can emit the error annotation
            RC=0
            cobc -x -W -I ../copybooks \
              -o ../../bin/${f%.cbl} "$f" 2>&1 || RC=$?
            echo "::endgroup::"
            if [ $RC -ne 0 ]; then
              echo "::error file=src/cobol/$f::Compilation failed"
              exit 1
            fi
          done
      - name: Run COBOL-Check unit tests
        run: |
          wget -q https://github.com/openmainframeproject/\
          cobol-check/releases/latest/download/cobol-check.jar
          java -jar cobol-check.jar \
            --programs ACCTINQ TXNPROC BALCALC \
            --config-file test/cobol-check.properties \
            2>&1 | tee test-results.txt
          if grep -q "FAILED" test-results.txt; then
            echo "::error::Unit tests failed"
            exit 1
          fi
      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: test-results.txt
      - name: Upload binaries
        uses: actions/upload-artifact@v4
        with:
          name: cobol-binaries
          path: bin/

  # Stage 3: Integration Test
  integration-test:
    needs: build-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Download binaries
        uses: actions/download-artifact@v4
        with:
          name: cobol-binaries
          path: bin/
      - name: Run integration tests
        run: |
          chmod +x bin/*
          chmod +x test/integration/*.sh
          cd test/integration
          ./run-all-tests.sh

  # Stage 4: Build Container
  containerize:
    needs: integration-test
    if: github.ref == 'refs/heads/main' ||
        github.ref == 'refs/heads/develop'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: |
          docker build \
            -t globalbank/acctinq:${{ github.sha }} \
            -t globalbank/acctinq:latest \
            -f Dockerfile .
      - name: Scan image for vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: globalbank/acctinq:${{ github.sha }}
          severity: HIGH,CRITICAL
          exit-code: 1
      - name: Push to container registry
        if: github.ref == 'refs/heads/main'
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | \
            docker login -u ${{ secrets.REGISTRY_USER }} \
              --password-stdin registry.globalbank.com
          docker tag globalbank/acctinq:${{ github.sha }} \
            registry.globalbank.com/globalbank/acctinq:${{ github.sha }}
          docker push \
            registry.globalbank.com/globalbank/acctinq:${{ github.sha }}
💡 Key Insight: The pipeline includes a container vulnerability scan (Trivy). This catches security issues in the base image and dependencies before they reach production. For financial institutions, this scan is often a compliance requirement.
40.17 Zowe CLI: Working with z/OS from Your Workstation
Zowe CLI transforms the developer experience for z/OS. Instead of logging into TSO/ISPF through a 3270 emulator, you interact with z/OS through a modern command line.
Setting Up Zowe Profiles
# Create a z/OSMF profile (connection to z/OS)
zowe profiles create zosmf-profile myMainframe \
--host lpar1.globalbank.com \
--port 10443 \
--user MCHEN \
--password "********" \
--reject-unauthorized false \
--overwrite
# Create a TSO profile
zowe profiles create tso-profile myTso \
--account GBANK \
--character-set 697 \
--region-size 4096 \
--overwrite
# Create an SSH profile for USS
zowe profiles create ssh-profile mySsh \
--host lpar1.globalbank.com \
--port 22 \
--user MCHEN \
--password "********" \
--overwrite
# Verify connectivity
zowe zosmf check status
Common Developer Workflows with Zowe
# Download a COBOL source member, edit locally, upload
zowe files download data-set \
"GBANK.SOURCE.COBOL(ACCTINQ)" \
--file ./ACCTINQ.cbl \
--encoding IBM-1047
# Edit in VS Code...
code ./ACCTINQ.cbl
# Upload modified source
zowe files upload file-to-data-set \
./ACCTINQ.cbl \
"GBANK.SOURCE.COBOL(ACCTINQ)" \
--encoding IBM-1047
# Submit compile JCL and wait for result
JOB_ID=$(zowe jobs submit data-set \
"GBANK.JCL(COMPILE)" \
--wait-for-active \
--rff jobid --rft string)
echo "Job ID: $JOB_ID"
# Check job status
zowe jobs view job-status-by-jobid "$JOB_ID"
# View compiler output (DD SYSPRINT)
zowe jobs view spool-file-by-id "$JOB_ID" 4
# List all spool files for the job
zowe jobs list spool-files-by-jobid "$JOB_ID"
Automating with Zowe in Scripts
#!/bin/bash
# deploy.sh — Automated COBOL deployment via Zowe
set -e
PROGRAM=$1
ENVIRONMENT=$2 # TEST or PROD
echo "Deploying $PROGRAM to $ENVIRONMENT..."
# Upload source
zowe files upload file-to-data-set \
"src/cobol/${PROGRAM}.cbl" \
"GBANK.${ENVIRONMENT}.COBOL(${PROGRAM})"
# Upload copybooks
for cpy in src/copybooks/*.cpy; do
MEMBER=$(basename "$cpy" .cpy)
zowe files upload file-to-data-set \
"$cpy" \
"GBANK.${ENVIRONMENT}.COPYLIB(${MEMBER})"
done
# Submit compile
JOB_ID=$(zowe jobs submit data-set \
"GBANK.JCL(COMPILE)" \
--wait-for-output \
--rff jobid --rft string)
# Check return code
RC=$(zowe jobs view job-status-by-jobid "$JOB_ID" \
--rff retcode --rft string)
if [ "$RC" != "CC 0000" ] && [ "$RC" != "CC 0004" ]; then
echo "COMPILE FAILED: RC=$RC"
zowe jobs view spool-file-by-id "$JOB_ID" 4
exit 1
fi
echo "Compile successful. $PROGRAM deployed to $ENVIRONMENT."
40.18 VS Code Debugging for COBOL
Modern COBOL debugging in VS Code provides the same experience Java and Python developers expect.
Setting Up the Debug Configuration
// .vscode/launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug COBOL (GnuCOBOL)",
      "type": "gdb",
      "request": "launch",
      "program": "${workspaceFolder}/bin/acctinq",
      "args": [],
      "cwd": "${workspaceFolder}",
      "preLaunchTask": "compile-debug",
      "environment": [
        { "name": "COB_SET_TRACE", "value": "Y" },
        { "name": "COB_TRACE_FILE", "value": "/tmp/cobtrace.log" }
      ]
    },
    {
      "name": "Debug COBOL (z/OS Remote)",
      "type": "ibm-debug",
      "request": "launch",
      "program": "GBANK.TEST.LOADLIB(ACCTINQ)",
      "host": "lpar1.globalbank.com",
      "port": 8001,
      "cicsRegion": "GBTEST1",
      "transactionId": "AINQ"
    }
  ]
}
// .vscode/tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "compile-debug",
      "type": "shell",
      "command": "cobc",
      "args": [
        "-x", "-g", "-debug",
        "-I", "src/copybooks",
        "-o", "bin/acctinq",
        "src/cobol/ACCTINQ.cbl"
      ],
      "group": "build",
      "problemMatcher": "$gnucobol"
    }
  ]
}
Debugging Features Available
With the IBM Z Open Debug extension, you can:
- Set breakpoints on COBOL paragraphs and specific lines
- Inspect WORKING-STORAGE variables in real time (including COMP-3 fields displayed as decimal)
- Step through PERFORM statements into called paragraphs
- Watch specific variables for changes
- Evaluate conditions in the debug console
- Debug CICS programs with transaction-level breakpoints
💡 Key Insight: The -g -debug flags on the GnuCOBOL compiler include debug symbols in the binary. These flags should be used for development and test builds but removed for production builds (use -O2 optimization instead). Derek Washington learned this the hard way when a debug-compiled program ran 40% slower than expected in the performance test environment.
40.19 Modern Testing in CI/CD
Test Pyramid for COBOL
The testing strategy for COBOL in CI/CD follows the same pyramid as any other language:
            /\
           /  \    End-to-End Tests
          /    \   (full job stream, 2-3 tests)
         /------\
        /        \    Integration Tests
       /          \   (program + files/DB2, 10-20 tests)
      /------------\
     /              \    Unit Tests
    /                \   (paragraph-level, 50+ tests per program)
   /------------------\
COBOL-Check Best Practices for CI/CD
* Example: comprehensive test suite for BALCALC
TESTSUITE 'Balance Calculation Tests'
* Happy path tests
TESTCASE 'Simple interest on positive balance'
MOVE 10000.00 TO WS-BALANCE
MOVE 0.05 TO WS-ANNUAL-RATE
MOVE 30 TO WS-DAYS
PERFORM 3000-CALC-INTEREST
EXPECT WS-INTEREST TO BE 41.10
TESTCASE 'Zero balance yields zero interest'
MOVE 0.00 TO WS-BALANCE
MOVE 0.05 TO WS-ANNUAL-RATE
MOVE 30 TO WS-DAYS
PERFORM 3000-CALC-INTEREST
EXPECT WS-INTEREST TO BE 0.00
* Boundary tests
TESTCASE 'Maximum balance does not overflow'
MOVE 99999999999.99 TO WS-BALANCE
MOVE 0.25 TO WS-ANNUAL-RATE
MOVE 1 TO WS-DAYS
PERFORM 3000-CALC-INTEREST
EXPECT WS-INTEREST TO BE NUMERIC
TESTCASE 'Negative balance (overdraft) calculates correctly'
MOVE -500.00 TO WS-BALANCE
MOVE 0.18 TO WS-ANNUAL-RATE
MOVE 30 TO WS-DAYS
PERFORM 3000-CALC-INTEREST
EXPECT WS-INTEREST TO BE -7.40
* Error handling tests
TESTCASE 'Zero rate yields zero interest'
MOVE 10000.00 TO WS-BALANCE
MOVE 0.00 TO WS-ANNUAL-RATE
MOVE 30 TO WS-DAYS
PERFORM 3000-CALC-INTEREST
EXPECT WS-INTEREST TO BE 0.00
TESTCASE 'Invalid days triggers error flag'
MOVE 10000.00 TO WS-BALANCE
MOVE 0.05 TO WS-ANNUAL-RATE
MOVE -1 TO WS-DAYS
PERFORM 3000-CALC-INTEREST
EXPECT WS-CALC-ERROR TO BE 'Y'
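The expected values in these test cases are consistent with simple interest on an actual/365 basis. A quick script can cross-check the figures, assuming that is indeed the convention 3000-CALC-INTEREST implements:

```python
# Cross-check the EXPECT values above, assuming an actual/365
# simple-interest convention (balance * rate * days / 365).
def simple_interest(balance: float, annual_rate: float, days: int) -> float:
    """Interest accrued over `days` at `annual_rate`, actual/365 basis."""
    return round(balance * annual_rate * days / 365, 2)

assert simple_interest(10000.00, 0.05, 30) == 41.10   # matches the first EXPECT
assert simple_interest(-500.00, 0.18, 30) == -7.40    # overdraft case
assert simple_interest(0.00, 0.05, 30) == 0.00        # zero balance
```

Keeping a plain-language reference like this next to the COBOL test suite makes it easy to regenerate expected values when rates or day-count conventions change.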
Test Data Management in CI/CD
# test/data/create-test-data.sh
#!/bin/bash
# Generate test data files for integration tests
echo "Creating test account master..."
python3 test/generators/gen_accounts.py \
--count 1000 \
--seed 42 \
--output test/data/test-accounts.dat
echo "Creating test transactions..."
python3 test/generators/gen_transactions.py \
--accounts test/data/test-accounts.dat \
--count 5000 \
--seed 42 \
--output test/data/test-transactions.dat
echo "Creating expected output..."
python3 test/generators/gen_expected.py \
--accounts test/data/test-accounts.dat \
--transactions test/data/test-transactions.dat \
--output test/expected/expected-output.dat
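The generator scripts referenced above are hypothetical, but the design point they illustrate is real: a fixed `--seed` makes the test data deterministic, so pre-computed expected outputs stay valid run after run. A minimal sketch of the idea:

```python
# Sketch of a deterministic test-data generator in the spirit of the
# (hypothetical) gen_accounts.py above. Record layout is illustrative.
import random

def gen_accounts(count: int, seed: int) -> list[str]:
    rng = random.Random(seed)                  # isolated, reproducible RNG
    records = []
    for i in range(1, count + 1):
        acct_no = f"{i:010d}"                  # 10-digit account number
        balance = rng.randint(0, 10_000_000) / 100       # dollars and cents
        records.append(f"{acct_no}{balance:015.2f}")     # fixed-width record
    return records

# The property the CI pipeline relies on: same seed, byte-identical data
assert gen_accounts(5, 42) == gen_accounts(5, 42)
assert all(len(r) == 25 for r in gen_accounts(5, 42))
```

Using `random.Random(seed)` rather than the module-level functions keeps the generator immune to other code touching the global RNG state.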
⚖️ Theme — The Modernization Spectrum: Testing in CI/CD represents perhaps the highest-value modernization investment. The COBOL code does not change. The test infrastructure around it ensures that every change is validated before deployment. At GlobalBank, the introduction of automated testing reduced production defects by 73% in the first year.
40.20 Try It Yourself: Complete Containerized COBOL Service
Student Lab Exercise
Build a complete containerized COBOL microservice from scratch:
Step 1: Write the COBOL program
Create a program called CALCAPI.cbl that implements a simple loan payment calculator. It reads a JSON request containing loan amount, annual interest rate, and term in months. It calculates the monthly payment using the standard amortization formula and returns a JSON response.
* Input JSON:
* {"loanAmount": 250000.00, "annualRate": 0.065,
* "termMonths": 360}
*
* Output JSON:
* {"monthlyPayment": 1580.17,
* "totalPayment": 568861.20,
* "totalInterest": 318861.20}
Step 2: Write the Dockerfile
Create a multi-stage Dockerfile that:
- Stage 1: Compiles CALCAPI.cbl with GnuCOBOL
- Stage 2: Creates a minimal runtime image with only the compiled binary
Step 3: Create the Kubernetes manifests
Write the deployment, service, and horizontal pod autoscaler YAML files.
Step 4: Create the GitHub Actions pipeline
Write a CI/CD workflow that compiles, tests, and builds the container image.
Step 5: Test locally
# Build the container
docker build -t calcapi:1.0 .
# Run it
docker run -p 8080:8080 calcapi:1.0
# Test with curl
curl -X POST http://localhost:8080/calculate \
-H "Content-Type: application/json" \
-d '{"loanAmount":250000,"annualRate":0.065,
"termMonths":360}'
This exercise brings together Docker, Kubernetes, GitHub Actions, and COBOL programming into a single project that demonstrates the complete modern COBOL stack.
40.21 Monitoring Containerized COBOL in Production
Health Checks and Readiness Probes
Containerized COBOL services need health checking. The simplest approach is a dedicated health check program:
IDENTIFICATION DIVISION.
PROGRAM-ID. HEALTHCHK.
*
* Health check for containerized COBOL services.
* Exits with RETURN-CODE 0 (healthy) or 1 (unhealthy).
* (The config file path below is illustrative.)
*
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT CONFIG-FILE ASSIGN TO 'config/service.cfg'
        ORGANIZATION IS LINE SEQUENTIAL
        FILE STATUS IS WS-CONFIG-STATUS.
DATA DIVISION.
FILE SECTION.
FD  CONFIG-FILE.
01  CONFIG-RECORD            PIC X(80).
WORKING-STORAGE SECTION.
01  WS-CONFIG-STATUS         PIC XX.
PROCEDURE DIVISION.
0000-MAIN.
*    Check that the service can access its configuration
     OPEN INPUT CONFIG-FILE
     IF WS-CONFIG-STATUS = '00'
         CLOSE CONFIG-FILE
         DISPLAY 'HEALTHY'
         MOVE 0 TO RETURN-CODE
     ELSE
         DISPLAY 'UNHEALTHY: cannot access config'
         MOVE 1 TO RETURN-CODE
     END-IF
     STOP RUN.
Prometheus Metrics for COBOL
For production monitoring, COBOL services can expose Prometheus-compatible metrics through a sidecar container or through a metrics file:
* Write metrics in Prometheus format
9500-WRITE-METRICS.
OPEN OUTPUT METRICS-FILE
STRING
'# HELP cobol_requests_total Total requests'
X'0A'
'# TYPE cobol_requests_total counter'
X'0A'
'cobol_requests_total '
WS-TOTAL-REQUESTS
X'0A'
'# HELP cobol_errors_total Total errors'
X'0A'
'# TYPE cobol_errors_total counter'
X'0A'
'cobol_errors_total '
WS-TOTAL-ERRORS
X'0A'
'# HELP cobol_processing_seconds '
'Processing time in seconds'
X'0A'
'# TYPE cobol_processing_seconds histogram'
X'0A'
'cobol_processing_seconds_sum '
WS-TOTAL-PROC-TIME
X'0A'
DELIMITED SIZE
INTO METRICS-RECORD
WRITE METRICS-RECORD
CLOSE METRICS-FILE.
A Prometheus sidecar or node exporter scrapes this file and exposes the metrics to the monitoring stack.
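To see what the scraper consumes, here is a minimal parser for the text format the paragraph above writes. In practice you would use node_exporter's textfile collector rather than hand-rolled parsing, but the sketch shows why the format is trivially machine-readable:

```python
# Parse the Prometheus text exposition format written by 9500-WRITE-METRICS
# into name/value pairs (sample content below is illustrative).
def parse_metrics(text: str) -> dict[str, float]:
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):    # skip HELP/TYPE comment lines
            continue
        name, value = line.rsplit(maxsplit=1)   # metric name, then the value
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP cobol_requests_total Total requests
# TYPE cobol_requests_total counter
cobol_requests_total 1247
cobol_errors_total 3
"""
assert parse_metrics(sample)["cobol_requests_total"] == 1247.0
assert parse_metrics(sample)["cobol_errors_total"] == 3.0
```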
Logging Best Practices for Containerized COBOL
In containers, all output goes to stdout/stderr, which container orchestrators capture automatically. Structure your COBOL DISPLAY output for machine parsing:
* Structured logging for container environments
DISPLAY '{"timestamp":"' WS-ISO-TIMESTAMP '",'
'"level":"INFO",'
'"program":"ACCTINQ",'
'"event":"REQUEST_PROCESSED",'
'"accountNo":"' WS-ACCT-NO '",'
'"responseTime":' WS-RESP-TIME-MS ','
'"status":"SUCCESS"}'.
This JSON-formatted log output can be ingested by ELK (Elasticsearch, Logstash, Kibana), Splunk, or cloud-native logging services without custom parsers.
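The payoff of the JSON format is that downstream tooling needs no custom grammar. This sketch (with illustrative field values) performs the same parse-and-filter step a Logstash `json` filter or a Splunk field extraction would:

```python
# Parse a structured COBOL log line exactly as a log pipeline would.
# The timestamp and field values here are illustrative samples.
import json

log_line = ('{"timestamp":"2024-03-15T02:14:07Z","level":"INFO",'
            '"program":"ACCTINQ","event":"REQUEST_PROCESSED",'
            '"accountNo":"0012345678","responseTime":42,'
            '"status":"SUCCESS"}')

event = json.loads(log_line)           # no custom parser required

# Fields are immediately filterable and aggregatable
assert event["program"] == "ACCTINQ"
assert event["responseTime"] == 42
slow = event["responseTime"] > 40      # e.g. drive an alert threshold
```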
💡 Key Insight: Structured logging is one of the simplest modernization wins. Changing DISPLAY statements from free-form text to JSON format enables the same observability tooling used for Java, Python, and Node.js services. Derek Washington implemented structured logging for 12 COBOL programs in two days — and immediately gained dashboards, alerts, and search capabilities that had previously been unavailable.
40.22 Galasa: Modern Testing Framework for Mainframe
Galasa is an open-source testing framework from the Open Mainframe Project specifically designed for mainframe application testing. It bridges the gap between modern testing practices and mainframe infrastructure.
What Galasa Provides
Unlike COBOL-Check (which tests individual paragraphs), Galasa orchestrates end-to-end tests that span entire job streams, CICS transactions, and cross-program workflows:
// Galasa test class for COBOL account inquiry
@Test
public class AccountInquiryTest {

    @CicsRegion
    public ICicsRegion cicsRegion;

    @ZosBatch
    public IZosBatch batch;

    @Test
    public void testBalanceInquiry() throws Exception {
        // Set up test data
        ITerminal terminal = cicsRegion.getTerminal();
        terminal.type("AINQ").enter();   // Start ACCTINQ transaction

        // Enter account number
        terminal.waitForKeyboard();
        terminal.type("0012345678").enter();

        // Verify response
        terminal.waitForTextInBody("BALANCE");
        String balance = terminal.retrieveField(10, 30, 15);
        assertThat(balance).contains("15,847.93");

        // Clean up
        terminal.pf3();   // Exit transaction
    }

    @Test
    public void testBatchPosting() throws Exception {
        // Submit batch job and verify
        IZosBatchJob job = batch.submitJob("GBANK.TEST.JCL(GBPOST)");
        job.waitForJob();
        assertThat(job.getRetcode()).isEqualTo("CC 0000");

        // Verify control totals in output
        String sysout = job.getSpoolFile("RPTOUT");
        assertThat(sysout).contains("BALANCED");
    }
}
Galasa in CI/CD
Galasa integrates into CI/CD pipelines, providing mainframe test automation:
# GitHub Actions with Galasa
- name: Run Galasa Integration Tests
  run: |
    galasactl runs submit \
      --bundle dev.galasa.banking.tests \
      --test AccountInquiryTest \
      --test BatchPostingTest \
      --stream globalbank \
      --portfolio test-portfolio \
      --wait
At GlobalBank, Priya Kapoor's team uses Galasa for regression testing of the entire nightly batch cycle. A complete batch cycle test runs in the test environment every time a COBOL program is modified, catching regressions before they reach production.
Test Environment Management
Galasa manages test environments through provisioning — automatically allocating and cleaning up test resources:
Before test:
- Allocate test VSAM dataset (copy from production snapshot)
- Load test reference tables
- Initialize checkpoint file
- Create test CICS terminal session
During test:
- Execute COBOL programs through CICS or batch
- Capture all output and control totals
- Assert expected results
After test:
- Delete test datasets
- Release CICS sessions
- Archive test results and logs
This provisioning ensures that tests are isolated from each other and from the production environment — a critical requirement for financial systems.
✅ Best Practice: Every COBOL program that processes financial data should have at least three Galasa tests: one for the happy path, one for the primary error path, and one for the control total verification. These three tests catch the vast majority of regression issues.
40.23 Infrastructure as Code: Terraform for the Mainframe Perimeter
While Ansible manages resources on z/OS directly (section 40.8), Terraform manages the cloud infrastructure that surrounds the mainframe. Together, they provide end-to-end Infrastructure as Code.
Complete Terraform Configuration for COBOL API Gateway
# terraform/main.tf — Complete API infrastructure for COBOL services
terraform {
  required_providers {
    aws = { source = "hashicorp/aws", version = "~> 5.0" }
  }
}

# VPC for mainframe connectivity
resource "aws_vpc" "mainframe_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  tags                 = { Name = "globalbank-mainframe-vpc" }
}

# Direct Connect to on-premises mainframe
resource "aws_dx_connection" "mainframe_link" {
  name      = "globalbank-dx"
  bandwidth = "1Gbps"
  location  = "EqDC2"
  tags      = { Name = "mainframe-direct-connect" }
}

# API Gateway
resource "aws_apigatewayv2_api" "cobol_api" {
  name          = "globalbank-cobol-api"
  protocol_type = "HTTP"

  cors_configuration {
    allow_origins = ["https://portal.globalbank.com"]
    allow_methods = ["GET", "POST"]
    allow_headers = ["Content-Type", "Authorization"]
  }
}

# Route: Account Inquiry
resource "aws_apigatewayv2_route" "account_inquiry" {
  api_id    = aws_apigatewayv2_api.cobol_api.id
  route_key = "GET /api/v1/accounts/{accountNumber}"
  target    = "integrations/${aws_apigatewayv2_integration.zos_connect.id}"
}

# Integration with z/OS Connect
resource "aws_apigatewayv2_integration" "zos_connect" {
  api_id             = aws_apigatewayv2_api.cobol_api.id
  integration_type   = "HTTP_PROXY"
  integration_uri    = "https://zosconnect.globalbank.com:9443"
  integration_method = "ANY"
  connection_type    = "VPC_LINK"
  connection_id      = aws_apigatewayv2_vpc_link.mainframe.id
}

# WAF for API protection
resource "aws_wafv2_web_acl" "api_waf" {
  name  = "cobol-api-waf"
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "rate-limit"
    priority = 1

    action {
      block {}
    }

    statement {
      rate_based_statement {
        limit              = 2000
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      sampled_requests_enabled   = true
      cloudwatch_metrics_enabled = true
      metric_name                = "rate-limit"
    }
  }
}

# CloudWatch monitoring
resource "aws_cloudwatch_metric_alarm" "api_errors" {
  alarm_name          = "cobol-api-5xx-errors"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "5XXError"
  namespace           = "AWS/ApiGateway"
  period              = 300
  statistic           = "Sum"
  threshold           = 10
  alarm_description   = "COBOL API returning server errors"
  alarm_actions       = [aws_sns_topic.alerts.arn]
}
This Terraform configuration creates the complete cloud infrastructure for routing API traffic to COBOL programs on the mainframe — including networking, API gateway, security (WAF), and monitoring. The entire setup is version-controlled, reviewable, and reproducible.
40.24 The Developer Experience: From ISPF to VS Code
The transformation of the COBOL developer experience deserves emphasis because it is one of the most impactful modernizations — even though it changes nothing about the code itself.
The Traditional Workflow (ISPF)
In the traditional mainframe development workflow, a developer:
1. Logs into TSO via a 3270 terminal emulator
2. Opens ISPF (Interactive System Productivity Facility)
3. Navigates to the PDS (Partitioned Data Set) containing the source
4. Edits the source member in the ISPF editor (green screen, fixed columns)
5. Submits compile JCL manually
6. Checks the compile output in SDSF
7. Submits test JCL manually
8. Reviews test output
9. Promotes the change through Endevor or ChangeMan
Each step requires knowledge of TSO commands, ISPF navigation, JCL syntax, and the source management tool. The learning curve is steep, and the environment feels alien to developers who grew up with graphical IDEs.
The Modern Workflow (VS Code + Zowe)
In the modern workflow, the developer:
1. Opens VS Code on their laptop
2. Sees mainframe datasets in the Zowe Explorer side panel
3. Clicks on the COBOL source member — it downloads and opens in VS Code
4. Edits with syntax highlighting, auto-completion, COPYBOOK resolution, and real-time error detection
5. Saves — Zowe uploads the member to the mainframe automatically
6. Clicks "Run Task" — a pre-configured task submits compile JCL via Zowe CLI
7. Reads the compile output in the VS Code terminal
8. Clicks "Debug" — VS Code connects to the debug session on z/OS
9. Sets breakpoints, inspects variables, steps through code — just like debugging Java
10. Commits to Git, creates a pull request, and lets the CI/CD pipeline handle the rest
The Impact on Productivity and Recruitment
| Task | ISPF Workflow | VS Code Workflow |
|---|---|---|
| Compile cycle | 15-20 minutes | 3-5 minutes |
| Debug turnaround | 30-60 minutes | 5-10 minutes |
| New developer ramp-up (tools) | 3-6 months | 2-4 weeks |
| Code review | Print and compare | Pull request with diff view |
| Search codebase | ISPF FIND (one member) | Ctrl+Shift+F (all files) |
Derek Washington's experience is illustrative: "When I joined GlobalBank, they handed me a 3270 emulator and an ISPF cheat sheet. I spent two weeks just learning how to navigate the editor. When we switched to VS Code with Zowe, I was productive on day one. Same COBOL code, same mainframe, completely different experience."
The recruitment impact is equally significant. Job postings that mention "VS Code" and "Git" alongside "COBOL" attract far more applicants than postings that mention only "ISPF" and "Endevor." Modern tooling signals to potential hires that the organization values developer experience.
💡 Key Insight: Modernizing the development environment costs relatively little (VS Code is free, Zowe is open source, Git hosting is inexpensive) and delivers outsized returns in developer productivity and recruitment. It is consistently the highest-ROI modernization investment.
40.25 COBOL in GitOps Workflows
GitOps extends the Git-based workflow to infrastructure and deployment. The principle is simple: the Git repository is the single source of truth for everything — source code, configuration, deployment manifests, and infrastructure definitions.
The GitOps Model for COBOL
Git Repository (source of truth)
├── src/cobol/           ← COBOL source code
├── src/copybooks/       ← COBOL copybooks
├── src/jcl/             ← JCL for compile and execution
├── test/                ← Test programs and data
├── config/              ← Runtime configuration
├── kubernetes/          ← K8s manifests (for containerized COBOL)
├── ansible/             ← Ansible playbooks (for z/OS deployment)
├── terraform/           ← Cloud infrastructure definitions
└── .github/workflows/   ← CI/CD pipeline definitions
When a developer pushes a change to any file in this repository, the CI/CD pipeline automatically:
1. Compiles the COBOL programs
2. Runs unit and integration tests
3. Builds container images (if applicable)
4. Deploys to the test environment
5. Runs acceptance tests
6. Waits for manual approval (for production)
7. Deploys to production
No manual steps. No SSH into servers. No JCL submissions from TSO. Everything flows from Git.
Reconciliation Loop
A GitOps reconciliation agent continuously compares the desired state (in Git) with the actual state (on the mainframe or in Kubernetes) and corrects any drift:
# ArgoCD application for COBOL Kubernetes services
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: globalbank-cobol-services
spec:
  project: default
  source:
    repoURL: https://github.com/globalbank/core.git
    path: kubernetes/
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: cobol-services
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
This ArgoCD configuration ensures that the Kubernetes cluster always matches what is defined in Git. If someone manually changes a deployment (adding an extra replica, changing a container image tag), ArgoCD detects the drift and reverts the change to match Git.
For z/OS deployments, the reconciliation is handled by Ansible or IBM DBB rather than ArgoCD, but the principle is the same: Git defines the desired state, and automation enforces it.
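The reconciliation idea can be shown in miniature: diff the desired state against the actual state and emit corrective actions. This toy version (with hypothetical service names) mirrors the comparison ArgoCD's sync loop runs continuously against the live cluster:

```python
# Toy reconciliation loop: compare desired state (from Git) with actual
# state (from the cluster) and produce the corrective actions.
def reconcile(desired: dict, actual: dict) -> list[str]:
    actions = []
    for svc, spec in desired.items():
        if svc not in actual:
            actions.append(f"create {svc}")
        elif actual[svc] != spec:
            actions.append(f"update {svc}")      # selfHeal: revert drift
    for svc in actual:
        if svc not in desired:
            actions.append(f"delete {svc}")      # prune: remove strays
    return actions

desired = {"acctinq": {"replicas": 2, "image": "acctinq:1.0"}}
actual = {"acctinq": {"replicas": 3, "image": "acctinq:1.0"},   # manual drift
          "debug-pod": {"replicas": 1}}                         # not in Git

assert reconcile(desired, actual) == ["update acctinq", "delete debug-pod"]
```

The key property is idempotence: running the loop again after the actions are applied produces an empty action list, which is what "actual matches Git" means operationally.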
🔗 Cross-Reference: GitOps builds on the CI/CD concepts from section 40.4, the Kubernetes deployment patterns from sections 40.1 and 40.15, and the Infrastructure as Code patterns from section 40.8. Together, they form a complete, automated, auditable deployment pipeline for COBOL systems.
40.26 Secret Management for COBOL in Modern Stacks
A persistent security anti-pattern in mainframe shops is embedding credentials in JCL or COBOL source code. Modern stacks demand proper secret management.
The Problem
//* NEVER DO THIS — credentials in JCL
//DBCONN DD DSN=GBANK.DB2.CONFIG,DISP=SHR
//* Contains DB2 userid/password in clear text
* NEVER DO THIS — credentials in source code
01 WS-DB-PASSWORD PIC X(08) VALUE 'Pa55w0rd'.
These patterns persist in legacy environments because they "work" — but they create security vulnerabilities, make password rotation difficult, and violate virtually every compliance standard.
The Solution: External Secret Management
HashiCorp Vault Integration:
For containerized COBOL, Vault injects secrets as environment variables or mounted files:
# Kubernetes pod with Vault sidecar
spec:
  serviceAccountName: cobol-service
  containers:
    - name: acctinq
      image: globalbank/acctinq:1.0
      volumeMounts:
        - name: vault-secrets
          mountPath: /app/secrets
          readOnly: true
  initContainers:
    - name: vault-agent
      image: hashicorp/vault:latest
      args:
        - agent
        - -config=/etc/vault/agent-config.hcl
The COBOL program reads the secret from a file at runtime:
1050-READ-DB-CREDENTIALS.
OPEN INPUT SECRET-FILE
READ SECRET-FILE INTO WS-DB-CREDENTIALS
AT END
DISPLAY 'FATAL: Cannot read credentials'
MOVE 16 TO RETURN-CODE
STOP RUN
END-READ
CLOSE SECRET-FILE.
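The same read-at-runtime pattern applies to any non-COBOL glue service in the stack: prefer a mounted secret file, fall back to an injected environment variable, and fail loudly if neither exists. A minimal sketch, with illustrative paths and variable names:

```python
# Read-at-runtime credential lookup: secret file first, env var second.
# The file path and variable name here are illustrative conventions.
import os

def get_db_password(secret_file: str = "/run/secrets/db_password") -> str:
    if os.path.exists(secret_file):             # Docker/Kubernetes secret mount
        with open(secret_file) as f:
            return f.read().strip()
    password = os.environ.get("DB_PASSWORD")    # injected environment variable
    if password is None:
        raise RuntimeError("no credential source available")
    return password
```

Because the credential is resolved at startup, rotating it is a restart (or a re-mount), not a recompile. That is exactly the property hard-coded `VALUE 'Pa55w0rd'` clauses can never have.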
z/OS RACF Integration:
On the mainframe, RACF (Resource Access Control Facility) manages credentials. COBOL programs never need to know passwords — RACF authenticates the batch job's identity through the JCL JOB card, and DB2 connects using that authenticated identity:
//GBNITELY JOB (ACCT),'NIGHTLY BATCH',
// CLASS=A,MSGCLASS=H,
// USER=GBSVC, ← RACF-authenticated service ID
// NOTIFY=&SYSUID
The DB2 connection uses the job's RACF identity — no password in the code or JCL.
⚠️ Compliance Note: PCI DSS, SOX, HIPAA, and GDPR all require that credentials are not stored in source code or configuration files. Modern secret management is not just a best practice — for regulated industries, it is a legal requirement.
40.27 Chapter Summary
COBOL is no longer confined to the mainframe or to the workflows of the 1990s. Containers, cloud platforms, CI/CD pipelines, microservices, modern IDEs, and Infrastructure as Code have all been adapted to work with COBOL. The language itself has not changed — what has changed is the ecosystem around it.
GlobalBank's CI/CD pipeline reduced deployment time from 14 days to 3 days. MedClaim's containerized adjudication service brought mainframe business logic to a cloud-native dental platform. In both cases, the COBOL code was the constant — the business logic that had been tested and refined over years. The modernization happened around the code, not to it.
This is the central message of the modern COBOL ecosystem: you do not have to choose between reliability and agility. You can have both. You just need the right plumbing.
Derek Washington's summary after six months of modernization work: "I used to think COBOL was stuck in the past. Now I think the past was stuck around COBOL. We just moved the walls."
🔗 Looking Ahead: Chapter 41 takes a different turn. Instead of looking forward at modern tools, we look backward — at the millions of lines of undocumented legacy COBOL code that someone, someday, has to understand. Legacy code archaeology is the art of reading code that nobody else can explain.