Chapter 34: Unit Testing COBOL

"The first rule of testing is that untested code is broken code. The second rule is that tested code is probably broken code — you just haven't found the bug yet." — Adapted from a common software engineering adage

When Derek Washington joined GlobalBank's mainframe team, he brought with him a habit from his computer science coursework: writing tests before writing code. His first week, he asked Maria Chen where the unit test suite was for GLOBALBANK-CORE's 1.2 million lines of COBOL. Maria laughed — not unkindly, but with the weary recognition of someone who had asked that same question fifteen years earlier. "We have test JCL," she said. "We run the program, eyeball the output, and compare it to what we expected. That's been good enough for thirty years."

Derek's face must have shown his disbelief, because Maria added, "I didn't say it was ideal. I said it was good enough. But you're right — we should do better. The question is how."

That question — how do you bring modern testing practices to a language and ecosystem that predates the very concept of unit testing? — is the subject of this chapter. The answer, as you will discover, is both more achievable and more nuanced than you might expect.

34.1 The Testing Gap in COBOL

To understand why COBOL programs are so often untested in any formal sense, we need to understand the historical context. When most of the world's COBOL code was written — from the 1960s through the 1990s — the concept of automated unit testing simply did not exist. Kent Beck's Test-Driven Development was published in 2002. JUnit, the framework that launched the automated testing revolution, appeared in 1997. By that time, the average COBOL program in production had already been running for a decade or more.

But the problem goes deeper than historical timing. Several characteristics of the COBOL ecosystem make testing structurally challenging:

Tight Coupling to External Resources

Most COBOL programs read files, write files, access databases, and interact with transaction monitors. A typical batch COBOL program cannot run without its input files, its output datasets, and its JCL environment. Unlike a Java method that takes parameters and returns values, a COBOL paragraph reads from WORKING-STORAGE fields that were populated by a FILE READ or an EXEC SQL FETCH. Isolating a unit of logic from its data dependencies requires deliberate effort.

The Paragraph Problem

COBOL's fundamental unit of logic — the paragraph — is not a function in the modern sense. It takes no parameters and returns no values. It operates on shared state in WORKING-STORAGE. This means you cannot call a paragraph with test inputs and examine its return value. Instead, you must set up the state, PERFORM the paragraph, and then inspect the changed state.
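
Concretely, "calling" a paragraph for test purposes means wrapping it in state setup and state inspection. The sketch below is illustrative only; the 2100-APPLY-FEE paragraph and its WORKING-STORAGE fields are invented for the example:

```cobol
      * Hypothetical shared state (in WORKING-STORAGE)
       01  WS-BALANCE            PIC S9(7)V99.
       01  WS-FEE                PIC 9(3)V99.

      * Set up the state the paragraph reads
           MOVE 1000.00 TO WS-BALANCE
           MOVE 25.00   TO WS-FEE
      * PERFORM the paragraph
           PERFORM 2100-APPLY-FEE
      * Inspect the state it changed
           IF WS-BALANCE NOT = 975.00
               DISPLAY "FAIL: EXPECTED 975.00 GOT " WS-BALANCE
           END-IF

       2100-APPLY-FEE.
           SUBTRACT WS-FEE FROM WS-BALANCE.
```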

Monolithic Program Design

Legacy COBOL programs were often written as monoliths: a single program performing input, processing, validation, calculation, error handling, and output. Breaking these into testable units after the fact is a refactoring challenge of the first order.

Cultural Factors

Perhaps most significantly, many COBOL shops developed a culture where testing meant "run the job and check the output." Integration testing — running the full program with production-like data — was considered sufficient. This approach works surprisingly well when programs are stable and changes are infrequent. But it breaks down catastrophically when you need to modify a program that hasn't been touched in fifteen years and nobody remembers exactly what it does.

💡 The Human Factor: The testing gap in COBOL is not a technical limitation of the language itself. COBOL is perfectly capable of being tested. The gap exists because of when the code was written, how the ecosystem evolved, and the cultural norms that developed around mainframe programming. Changing those norms is as much a people challenge as a technical one.

The Cost of Not Testing

GlobalBank learned this lesson the hard way in 2019. A developer modified the BAL-CALC program to handle a new type of savings account. The change was straightforward — adding a new condition to an EVALUATE statement. But the modification inadvertently altered the rounding behavior for an existing account type. The error wasn't caught for three weeks, by which time 47,000 accounts had been credited with slightly incorrect interest. The remediation cost was $2.3 million — not because any individual error was large, but because identifying every affected account, calculating the correct amount, and posting adjustments required a team of six working for two months.

A unit test for the interest calculation logic would have caught the rounding error in seconds. The cost of writing that test? Perhaps two hours of developer time.
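
The failure mode is worth seeing in miniature. The fragment below is illustrative only, not GlobalBank's code; it shows how the presence or absence of ROUNDED shifts an interest figure by one cent, exactly the kind of difference a unit test with known values pins down:

```cobol
       01  WS-BALANCE         PIC 9(7)V99  VALUE 1234.56.
       01  WS-RATE            PIC V9(4)    VALUE 0.0150.
       01  WS-INT-ROUNDED     PIC 9(7)V99.
       01  WS-INT-TRUNCATED   PIC 9(7)V99.

      * 1234.56 * 0.0150 = 18.5184
           COMPUTE WS-INT-ROUNDED ROUNDED = WS-BALANCE * WS-RATE
      * WS-INT-ROUNDED now holds 18.52
           COMPUTE WS-INT-TRUNCATED = WS-BALANCE * WS-RATE
      * WS-INT-TRUNCATED now holds 18.51 -- off by one cent
```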

34.2 What "Unit Testing" Means for COBOL

Before we dive into frameworks and tools, let's clarify what we mean by unit testing in the COBOL context. In modern software development, a unit test has these characteristics:

  1. It tests a small unit of logic — typically a single function or method.
  2. It is automated — it runs without human intervention and reports pass/fail.
  3. It is isolated — it does not depend on files, databases, networks, or other external resources.
  4. It is repeatable — it produces the same result every time.
  5. It is fast — it runs in milliseconds, not minutes.

In COBOL, we adapt these principles:

Modern Concept    COBOL Adaptation
---------------   ----------------------------------------------
Function/method   Paragraph, section, or subprogram
Parameters        WORKING-STORAGE fields set before PERFORM
Return value      WORKING-STORAGE fields inspected after PERFORM
Mock object       Stubbed subprogram or replaced file I/O
Test runner       COBOL-Check, or a custom test harness program
Assertion         IF statement comparing actual to expected

The fundamental pattern for a COBOL unit test is:

  1. Arrange: Set up WORKING-STORAGE fields with known test values.
  2. Act: PERFORM the paragraph or section under test.
  3. Assert: Check that the resulting field values match expectations.

This is identical to the Arrange-Act-Assert pattern used in every modern testing framework. The only difference is the mechanism.
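
Even without a framework, the mechanism is ordinary COBOL. A hand-rolled assertion is just an IF that compares actual to expected and reports the outcome; the field names and pass/fail counters below are hypothetical:

```cobol
      * Arrange: set up known inputs
           MOVE 100.00 TO WS-SALE-AMOUNT
           MOVE 0.0825 TO WS-TAX-RATE
      * Act: run the unit under test
           PERFORM 1000-CALCULATE-TAX
      * Assert: compare actual to expected
           IF WS-TAX-AMOUNT = 8.25
               ADD 1 TO WS-TESTS-PASSED
           ELSE
               ADD 1 TO WS-TESTS-FAILED
               DISPLAY "FAIL: EXPECTED 8.25 GOT " WS-TAX-AMOUNT
           END-IF
```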

34.3 COBOL-Check: A Modern Testing Framework

COBOL-Check is an open-source testing framework specifically designed for COBOL. It was created by Dave Nicolette and is freely available on GitHub. COBOL-Check takes a unique approach: rather than running tests as separate programs, it injects test code directly into the COBOL program under test. This solves the isolation problem elegantly — the test code has direct access to all of the program's WORKING-STORAGE, paragraphs, and sections.

How COBOL-Check Works

The COBOL-Check workflow has four steps:

  1. Write test cases in a .cut file (COBOL Unit Test) using COBOL-Check's test specification language.
  2. Run the COBOL-Check generator, which reads your source program and your test cases, and produces a merged COBOL program that contains both the original code and the test instrumentation.
  3. Compile the merged program using your normal COBOL compiler.
  4. Execute the merged program, which runs the tests and reports results.

┌─────────────┐    ┌─────────────┐
│  Source     │    │  Test Cases │
│ program.cbl │    │ program.cut │
└──────┬──────┘    └──────┬──────┘
       │                  │
       └────────┬─────────┘
                │
       ┌────────▼────────┐
       │  COBOL-Check    │
       │  Generator      │
       └────────┬────────┘
                │
       ┌────────▼────────┐
       │  Merged program │
       │  (with tests)   │
       └────────┬────────┘
                │
       ┌────────▼────────┐
       │  COBOL Compiler │
       └────────┬────────┘
                │
       ┌────────▼────────┐
       │  Test Execution │
       │  & Results      │
       └─────────────────┘

Installation and Setup

COBOL-Check requires Java 1.8 or later and a COBOL compiler (Enterprise COBOL, GnuCOBOL, or Micro Focus). For our Student Mainframe Lab, GnuCOBOL is the easiest path:

# Install GnuCOBOL (Ubuntu/Debian)
sudo apt-get install gnucobol

# Download COBOL-Check
git clone https://github.com/openmainframeproject/cobol-check.git
cd cobol-check

# Build (requires Java and Gradle)
./gradlew build

# Verify installation
java -jar build/libs/cobol-check-0.2.8.jar --help

The configuration file config.properties tells COBOL-Check where to find your source programs and where to write output:

# config.properties for Student Mainframe Lab
source.directory = src/main/cobol
test.directory = src/test/cobol
output.directory = build/test
cobolcheck.prefix = UT-

Writing Your First COBOL-Check Test

Let's start with a simple example. Suppose we have a program that calculates sales tax:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. CALC-TAX.

       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-SALE-AMOUNT        PIC 9(7)V99.
       01  WS-TAX-RATE           PIC V9(4).
       01  WS-TAX-AMOUNT         PIC 9(7)V99.
       01  WS-TOTAL-AMOUNT       PIC 9(7)V99.

       PROCEDURE DIVISION.
       0000-MAIN.
           PERFORM 1000-CALCULATE-TAX
           STOP RUN.

       1000-CALCULATE-TAX.
           COMPUTE WS-TAX-AMOUNT ROUNDED =
               WS-SALE-AMOUNT * WS-TAX-RATE
           COMPUTE WS-TOTAL-AMOUNT =
               WS-SALE-AMOUNT + WS-TAX-AMOUNT.

The corresponding test file (CALC-TAX.cut) looks like this:

       TESTSUITE "Sales Tax Calculation Tests"

       TESTCASE "Standard tax calculation"
           MOVE 100.00 TO WS-SALE-AMOUNT
           MOVE 0.0825 TO WS-TAX-RATE
           PERFORM 1000-CALCULATE-TAX
           EXPECT WS-TAX-AMOUNT TO BE 8.25
           EXPECT WS-TOTAL-AMOUNT TO BE 108.25

       TESTCASE "Zero sale amount"
           MOVE 0 TO WS-SALE-AMOUNT
           MOVE 0.0825 TO WS-TAX-RATE
           PERFORM 1000-CALCULATE-TAX
           EXPECT WS-TAX-AMOUNT TO BE 0
           EXPECT WS-TOTAL-AMOUNT TO BE 0

       TESTCASE "Large sale amount"
           MOVE 8999999.99 TO WS-SALE-AMOUNT
           MOVE 0.0825 TO WS-TAX-RATE
           PERFORM 1000-CALCULATE-TAX
           EXPECT WS-TAX-AMOUNT TO BE 742500.00
           EXPECT WS-TOTAL-AMOUNT TO BE 9742499.99

       TESTCASE "Zero tax rate"
           MOVE 500.00 TO WS-SALE-AMOUNT
           MOVE 0 TO WS-TAX-RATE
           PERFORM 1000-CALCULATE-TAX
           EXPECT WS-TAX-AMOUNT TO BE 0
           EXPECT WS-TOTAL-AMOUNT TO BE 500.00

       TESTCASE "Rounding behavior"
           MOVE 99.99 TO WS-SALE-AMOUNT
           MOVE 0.0825 TO WS-TAX-RATE
           PERFORM 1000-CALCULATE-TAX
           EXPECT WS-TAX-AMOUNT TO BE 8.25

Notice several things about this test file:

  • TESTSUITE groups related tests and provides a descriptive name.
  • TESTCASE defines an individual test with a description.
  • The test body uses standard COBOL statements (MOVE, PERFORM) plus COBOL-Check's EXPECT assertion.
  • Each test follows the Arrange-Act-Assert pattern: set up fields, PERFORM the paragraph, check results.

Running the Tests

# Generate merged test program
java -jar cobol-check.jar -p CALC-TAX

# Compile (GnuCOBOL)
cobc -x build/test/CALC-TAX.cbl -o build/test/CALC-TAX

# Execute
./build/test/CALC-TAX

Output:

TESTSUITE: Sales Tax Calculation Tests
  PASS: Standard tax calculation
  PASS: Zero sale amount
  PASS: Large sale amount
  PASS: Zero tax rate
  PASS: Rounding behavior

5 tests, 5 passed, 0 failed

Try It Yourself: Install GnuCOBOL and COBOL-Check in your Student Mainframe Lab. Create the CALC-TAX program above and write the test suite. Run the tests. Then add a test case that fails — for example, intentionally expecting the wrong value — and observe the failure output. Understanding what failure looks like is essential to trusting your tests.

COBOL-Check Assertion Types

COBOL-Check provides several types of assertions:

       * Equality
       EXPECT WS-FIELD TO BE 42
       EXPECT WS-FIELD TO EQUAL 42

       * Inequality
       EXPECT WS-FIELD NOT TO BE 0

       * Comparison
       EXPECT WS-FIELD TO BE GREATER THAN 100
       EXPECT WS-FIELD TO BE LESS THAN 999
       EXPECT WS-FIELD >= 50

       * Alphanumeric
       EXPECT WS-NAME TO BE "SMITH"
       EXPECT WS-STATUS TO EQUAL "A"

       * Numeric sign
       EXPECT WS-BALANCE TO BE NUMERIC
       EXPECT WS-BALANCE TO BE POSITIVE
       EXPECT WS-BALANCE TO BE NEGATIVE
       EXPECT WS-BALANCE TO BE ZERO

34.4 Test Data Generation

Real COBOL programs process complex records. A claim record at MedClaim might have fifty fields across multiple levels. Creating test data for these records by hand is tedious and error-prone. We need systematic approaches to test data generation.

Approach 1: Copybook-Driven Test Data

Since COBOL programs define their record layouts in copybooks, we can write a test data generator program that COPYs the same copybook and writes one record per test scenario:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. GEN-TEST-CLM.
      *---------------------------------------------------------------
      * Generate test claim records for CLM-ADJUD testing.
      * Creates known test scenarios for boundary conditions.
      *---------------------------------------------------------------

       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT TEST-CLAIM-FILE ASSIGN TO TESTCLM
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-FILE-STATUS.

       DATA DIVISION.
       FILE SECTION.
       FD  TEST-CLAIM-FILE
           RECORDING MODE IS F
           RECORD CONTAINS 500 CHARACTERS.
       01  TEST-CLAIM-REC.
           COPY CLAIM-REC.

       WORKING-STORAGE SECTION.
       01  WS-FILE-STATUS         PIC XX.
       01  WS-RECORD-COUNT        PIC 9(5)    VALUE 0.
       01  WS-SCENARIO-ID         PIC 9(3)    VALUE 0.

       PROCEDURE DIVISION.
       0000-MAIN.
           OPEN OUTPUT TEST-CLAIM-FILE
           PERFORM 1000-GEN-NORMAL-CLAIM
           PERFORM 2000-GEN-ZERO-AMOUNT
           PERFORM 3000-GEN-MAX-AMOUNT
           PERFORM 4000-GEN-INVALID-PROVIDER
           PERFORM 5000-GEN-EXPIRED-POLICY
           PERFORM 6000-GEN-DUPLICATE-CLAIM
           PERFORM 7000-GEN-MULTI-PROCEDURE
           CLOSE TEST-CLAIM-FILE
           DISPLAY "Generated " WS-RECORD-COUNT " test claims"
           STOP RUN.

       1000-GEN-NORMAL-CLAIM.
           INITIALIZE TEST-CLAIM-REC
           ADD 1 TO WS-SCENARIO-ID
           MOVE WS-SCENARIO-ID    TO CLM-SCENARIO-ID
           MOVE "CLM00001"        TO CLM-CLAIM-ID
           MOVE "MEM12345"        TO CLM-MEMBER-ID
           MOVE "PRV00100"        TO CLM-PROVIDER-ID
           MOVE "99213"           TO CLM-PROCEDURE-CODE
           MOVE 150.00            TO CLM-BILLED-AMOUNT
           MOVE "2025-10-15"      TO CLM-SERVICE-DATE
           MOVE "A"               TO CLM-STATUS
           WRITE TEST-CLAIM-REC
           ADD 1 TO WS-RECORD-COUNT.

       2000-GEN-ZERO-AMOUNT.
           INITIALIZE TEST-CLAIM-REC
           ADD 1 TO WS-SCENARIO-ID
           MOVE WS-SCENARIO-ID    TO CLM-SCENARIO-ID
           MOVE "CLM00002"        TO CLM-CLAIM-ID
           MOVE "MEM12345"        TO CLM-MEMBER-ID
           MOVE "PRV00100"        TO CLM-PROVIDER-ID
           MOVE "99213"           TO CLM-PROCEDURE-CODE
           MOVE 0                 TO CLM-BILLED-AMOUNT
           MOVE "2025-10-15"      TO CLM-SERVICE-DATE
           MOVE "A"               TO CLM-STATUS
           WRITE TEST-CLAIM-REC
           ADD 1 TO WS-RECORD-COUNT.

       3000-GEN-MAX-AMOUNT.
           INITIALIZE TEST-CLAIM-REC
           ADD 1 TO WS-SCENARIO-ID
           MOVE WS-SCENARIO-ID    TO CLM-SCENARIO-ID
           MOVE "CLM00003"        TO CLM-CLAIM-ID
           MOVE "MEM12345"        TO CLM-MEMBER-ID
           MOVE "PRV00100"        TO CLM-PROVIDER-ID
           MOVE "99215"           TO CLM-PROCEDURE-CODE
           MOVE 999999.99         TO CLM-BILLED-AMOUNT
           MOVE "2025-10-15"      TO CLM-SERVICE-DATE
           MOVE "A"               TO CLM-STATUS
           WRITE TEST-CLAIM-REC
           ADD 1 TO WS-RECORD-COUNT.

       4000-GEN-INVALID-PROVIDER.
           INITIALIZE TEST-CLAIM-REC
           ADD 1 TO WS-SCENARIO-ID
           MOVE WS-SCENARIO-ID    TO CLM-SCENARIO-ID
           MOVE "CLM00004"        TO CLM-CLAIM-ID
           MOVE "MEM12345"        TO CLM-MEMBER-ID
           MOVE "INVALID99"       TO CLM-PROVIDER-ID
           MOVE "99213"           TO CLM-PROCEDURE-CODE
           MOVE 150.00            TO CLM-BILLED-AMOUNT
           MOVE "2025-10-15"      TO CLM-SERVICE-DATE
           MOVE "A"               TO CLM-STATUS
           WRITE TEST-CLAIM-REC
           ADD 1 TO WS-RECORD-COUNT.

       5000-GEN-EXPIRED-POLICY.
           INITIALIZE TEST-CLAIM-REC
           ADD 1 TO WS-SCENARIO-ID
           MOVE WS-SCENARIO-ID    TO CLM-SCENARIO-ID
           MOVE "CLM00005"        TO CLM-CLAIM-ID
           MOVE "MEM99999"        TO CLM-MEMBER-ID
           MOVE "PRV00100"        TO CLM-PROVIDER-ID
           MOVE "99213"           TO CLM-PROCEDURE-CODE
           MOVE 150.00            TO CLM-BILLED-AMOUNT
           MOVE "2020-01-01"      TO CLM-SERVICE-DATE
           MOVE "A"               TO CLM-STATUS
           WRITE TEST-CLAIM-REC
           ADD 1 TO WS-RECORD-COUNT.

       6000-GEN-DUPLICATE-CLAIM.
           INITIALIZE TEST-CLAIM-REC
           ADD 1 TO WS-SCENARIO-ID
           MOVE WS-SCENARIO-ID    TO CLM-SCENARIO-ID
           MOVE "CLM00001"        TO CLM-CLAIM-ID
           MOVE "MEM12345"        TO CLM-MEMBER-ID
           MOVE "PRV00100"        TO CLM-PROVIDER-ID
           MOVE "99213"           TO CLM-PROCEDURE-CODE
           MOVE 150.00            TO CLM-BILLED-AMOUNT
           MOVE "2025-10-15"      TO CLM-SERVICE-DATE
           MOVE "A"               TO CLM-STATUS
           WRITE TEST-CLAIM-REC
           ADD 1 TO WS-RECORD-COUNT.

       7000-GEN-MULTI-PROCEDURE.
           INITIALIZE TEST-CLAIM-REC
           ADD 1 TO WS-SCENARIO-ID
           MOVE WS-SCENARIO-ID    TO CLM-SCENARIO-ID
           MOVE "CLM00007"        TO CLM-CLAIM-ID
           MOVE "MEM12345"        TO CLM-MEMBER-ID
           MOVE "PRV00100"        TO CLM-PROVIDER-ID
           MOVE "99215"           TO CLM-PROCEDURE-CODE
           MOVE 450.00            TO CLM-BILLED-AMOUNT
           MOVE "2025-10-15"      TO CLM-SERVICE-DATE
           MOVE "A"               TO CLM-STATUS
           MOVE 3                 TO CLM-LINE-COUNT
           WRITE TEST-CLAIM-REC
           ADD 1 TO WS-RECORD-COUNT.

Approach 2: Equivalence Partitioning

Equivalence partitioning divides input data into classes where all values in a class should produce similar behavior. For a numeric field like CLM-BILLED-AMOUNT (PIC 9(7)V99), the partitions are:

Partition              Example Values      Expected Behavior
--------------------   -----------------   ---------------------------
Zero                   0.00                Reject or flag
Typical low            25.00, 150.00       Normal processing
Typical high           5000.00, 25000.00   Normal, possible review flag
High-value threshold   50000.00+           Trigger manual review
Maximum                9999999.99          Must not overflow
Negative (if signed)   -100.00             Reject

Approach 3: Boundary Value Analysis

For each partition boundary, we test the value at, just below, and just above the boundary:

       TESTSUITE "Claim Amount Boundary Tests"

       TESTCASE "Amount just below review threshold"
           MOVE 49999.99 TO CLM-BILLED-AMOUNT
           PERFORM 3000-ADJUDICATE-CLAIM
           EXPECT CLM-REVIEW-FLAG TO BE "N"

       TESTCASE "Amount at review threshold"
           MOVE 50000.00 TO CLM-BILLED-AMOUNT
           PERFORM 3000-ADJUDICATE-CLAIM
           EXPECT CLM-REVIEW-FLAG TO BE "Y"

       TESTCASE "Amount just above review threshold"
           MOVE 50000.01 TO CLM-BILLED-AMOUNT
           PERFORM 3000-ADJUDICATE-CLAIM
           EXPECT CLM-REVIEW-FLAG TO BE "Y"

📊 By the Numbers: Studies of COBOL production defects show that approximately 60% of bugs involve boundary conditions — off-by-one errors, maximum field lengths, and edge cases around zero values. Boundary value analysis, while simple, catches the majority of defects that slip through code review.

34.5 Stubbing and Mocking in COBOL

The greatest challenge in unit testing COBOL is isolating the code under test from its dependencies. A paragraph that reads a VSAM file, performs a DB2 query, and writes an audit log cannot be meaningfully unit-tested without replacing those external dependencies with test doubles.

Stubbing File I/O

COBOL-Check provides a mechanism for intercepting file operations. Consider a program that reads account records:

       2000-READ-ACCOUNT.
           READ ACCT-MASTER-FILE INTO WS-ACCT-REC
               KEY IS WS-ACCT-KEY
               INVALID KEY
                   SET ACCT-NOT-FOUND TO TRUE
               NOT INVALID KEY
                   SET ACCT-FOUND TO TRUE
           END-READ.

In our test, we bypass the file read entirely:

       TESTSUITE "Account Processing Tests"

       TESTCASE "Process active account"
           MOVE "1234567890" TO WS-ACCT-KEY
           MOVE "1234567890" TO ACCT-NUMBER
           MOVE "CHECKING"   TO ACCT-TYPE
           MOVE 5000.00      TO ACCT-BALANCE
           MOVE "A"          TO ACCT-STATUS
           SET ACCT-FOUND TO TRUE
           PERFORM 3000-PROCESS-ACCOUNT
           EXPECT WS-PROCESS-STATUS TO BE "SUCCESS"

       TESTCASE "Handle missing account"
           MOVE "9999999999" TO WS-ACCT-KEY
           SET ACCT-NOT-FOUND TO TRUE
           PERFORM 3000-PROCESS-ACCOUNT
           EXPECT WS-PROCESS-STATUS TO BE "NOT-FOUND"

By setting up the WORKING-STORAGE fields as if the READ had already occurred, we skip the file operation and test only the processing logic. This works when the paragraph under test is separate from the I/O paragraph.

Stubbing Subprogram Calls

When your program calls external subprograms via CALL, you can create stub versions for testing:

      *---------------------------------------------------------------
      * STUB-VALIDATE-MEMBER: Test stub replacing VALIDATE-MEMBER
      *---------------------------------------------------------------
       IDENTIFICATION DIVISION.
       PROGRAM-ID. VALIDATE-MEMBER.

       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-STUB-CONTROL.
           05 WS-STUB-RETURN-CODE   PIC 9    VALUE 0.

       LINKAGE SECTION.
       01  LS-MEMBER-ID            PIC X(10).
       01  LS-VALID-FLAG           PIC X.
           88 LS-MEMBER-VALID      VALUE "Y".
           88 LS-MEMBER-INVALID    VALUE "N".
       01  LS-RETURN-CODE          PIC 9.

       PROCEDURE DIVISION USING LS-MEMBER-ID
                                   LS-VALID-FLAG
                                   LS-RETURN-CODE.
       0000-MAIN.
      * Stub always returns valid unless control flag set
           IF WS-STUB-RETURN-CODE = 0
               SET LS-MEMBER-VALID TO TRUE
               MOVE 0 TO LS-RETURN-CODE
           ELSE
               SET LS-MEMBER-INVALID TO TRUE
               MOVE WS-STUB-RETURN-CODE TO LS-RETURN-CODE
           END-IF
           GOBACK.

At link time, you link the program under test with the stub instead of the real VALIDATE-MEMBER module. This is the COBOL equivalent of dependency injection.
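
In the Student Mainframe Lab with GnuCOBOL, the substitution amounts to choosing which source files you hand the compiler; the program and path names below are illustrative:

```shell
# Production build: link with the real validation module
cobc -x CLM-ADJUD.cbl VALIDATE-MEMBER.cbl -o clm-adjud

# Test build: link the same caller with the stub instead
cobc -x CLM-ADJUD.cbl stubs/VALIDATE-MEMBER.cbl -o clm-adjud-test
```

On z/OS, the equivalent move is made in the link-edit step: concatenate a test library containing the stub ahead of the production library, and the binder resolves the CALL to the stub.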

Mock Pattern with Verification

A more sophisticated pattern records the calls made to the stub, allowing you to verify not just the outcome but the interactions:

      *---------------------------------------------------------------
      * MOCK-AUDIT-LOG: Records audit calls for verification
      *---------------------------------------------------------------
       IDENTIFICATION DIVISION.
       PROGRAM-ID. WRITE-AUDIT.

       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-CALL-COUNT           PIC 9(3)  VALUE 0.
       01  WS-CALL-LOG.
           05 WS-CALL-ENTRY OCCURS 100 TIMES.
              10 WS-CALL-ACTION    PIC X(10).
              10 WS-CALL-DETAIL    PIC X(80).
       01  WS-CALL-IDX             PIC 9(3).

       LINKAGE SECTION.
       01  LS-ACTION               PIC X(10).
       01  LS-DETAIL               PIC X(80).

       PROCEDURE DIVISION USING LS-ACTION LS-DETAIL.
        0000-MAIN.
      * Guard the 100-entry call log against overflow
            IF WS-CALL-COUNT < 100
                ADD 1 TO WS-CALL-COUNT
                MOVE WS-CALL-COUNT TO WS-CALL-IDX
                MOVE LS-ACTION TO WS-CALL-ACTION(WS-CALL-IDX)
                MOVE LS-DETAIL TO WS-CALL-DETAIL(WS-CALL-IDX)
            END-IF
            GOBACK.

After running the test, you can inspect WS-CALL-COUNT and WS-CALL-LOG to verify that the program under test wrote the expected audit entries.
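
Because the mock's WORKING-STORAGE lives in a separately linked module, the test needs a way to reach it. One option, sketched here with an invented entry-point name and linkage field (LS-QUERY-COUNT would be declared in the mock's LINKAGE SECTION), is to give the mock a second entry point that reports its call count:

```cobol
      * Added to the mock's PROCEDURE DIVISION:
        ENTRY "AUDIT-QUERY" USING LS-QUERY-COUNT.
            MOVE WS-CALL-COUNT TO LS-QUERY-COUNT
            GOBACK.

      * In the test, after performing the code under test:
           CALL "AUDIT-QUERY" USING WS-AUDIT-CALLS
           IF WS-AUDIT-CALLS NOT = 1
               DISPLAY "FAIL: EXPECTED 1 AUDIT CALL, GOT "
                   WS-AUDIT-CALLS
           END-IF
```

This works because a subprogram's WORKING-STORAGE persists in its last-used state between CALLs (absent the INITIAL attribute or a CANCEL), so the log accumulated during the test is still there when the query entry point runs.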

⚠️ Caution: Be careful about the granularity of stubbing. If you stub out everything, your test proves only that your test setup is correct — it tells you nothing about the actual program. The goal is to stub the boundaries (file I/O, database access, external calls) while testing the logic (calculations, validations, decision trees) with real code.

34.6 Test-Driven Development (TDD) in COBOL

Test-driven development — writing the test before writing the code — is not just possible in COBOL; it can be transformative. TDD forces you to think about your paragraph's inputs and outputs before writing the logic, which naturally produces cleaner, more testable code.

The TDD Cycle for COBOL

The classic TDD cycle is Red-Green-Refactor:

  1. Red: Write a test that fails (because the code doesn't exist yet).
  2. Green: Write the minimum code to make the test pass.
  3. Refactor: Clean up the code while keeping the test green.

Let's walk through a TDD example. Sarah Kim has specified a new business rule for MedClaim: claims for preventive care procedures (procedure codes 99381-99397) should be approved at 100% of the allowed amount with no copay.

Step 1: Write the failing test

       TESTSUITE "Preventive Care Adjudication"

       TESTCASE "Preventive care claim approved at 100%"
           MOVE "99385"  TO CLM-PROCEDURE-CODE
           MOVE 200.00   TO CLM-BILLED-AMOUNT
           MOVE 175.00   TO CLM-ALLOWED-AMOUNT
           PERFORM 3500-APPLY-BENEFIT-RULES
           EXPECT CLM-APPROVED-AMOUNT TO BE 175.00
           EXPECT CLM-COPAY-AMOUNT TO BE 0
           EXPECT CLM-MEMBER-RESP TO BE 0

       TESTCASE "Non-preventive claim applies normal rules"
           MOVE "99213"  TO CLM-PROCEDURE-CODE
           MOVE 200.00   TO CLM-BILLED-AMOUNT
           MOVE 175.00   TO CLM-ALLOWED-AMOUNT
           MOVE 30.00    TO CLM-COPAY-SCHEDULE
           PERFORM 3500-APPLY-BENEFIT-RULES
           EXPECT CLM-COPAY-AMOUNT TO BE 30.00
           EXPECT CLM-APPROVED-AMOUNT TO BE 145.00

       TESTCASE "Boundary: first preventive code"
           MOVE "99381"  TO CLM-PROCEDURE-CODE
           MOVE 150.00   TO CLM-ALLOWED-AMOUNT
           PERFORM 3500-APPLY-BENEFIT-RULES
           EXPECT CLM-COPAY-AMOUNT TO BE 0

       TESTCASE "Boundary: last preventive code"
           MOVE "99397"  TO CLM-PROCEDURE-CODE
           MOVE 150.00   TO CLM-ALLOWED-AMOUNT
           PERFORM 3500-APPLY-BENEFIT-RULES
           EXPECT CLM-COPAY-AMOUNT TO BE 0

       TESTCASE "Just outside preventive range"
           MOVE "99398"  TO CLM-PROCEDURE-CODE
           MOVE 150.00   TO CLM-ALLOWED-AMOUNT
           MOVE 30.00    TO CLM-COPAY-SCHEDULE
           PERFORM 3500-APPLY-BENEFIT-RULES
           EXPECT CLM-COPAY-AMOUNT TO BE 30.00

Step 2: Write the code to make it pass

       3500-APPLY-BENEFIT-RULES.
           EVALUATE TRUE
               WHEN CLM-PROCEDURE-CODE >= "99381"
                AND CLM-PROCEDURE-CODE <= "99397"
                   MOVE CLM-ALLOWED-AMOUNT
                       TO CLM-APPROVED-AMOUNT
                   MOVE 0 TO CLM-COPAY-AMOUNT
                   MOVE 0 TO CLM-MEMBER-RESP
               WHEN OTHER
                   MOVE CLM-COPAY-SCHEDULE
                       TO CLM-COPAY-AMOUNT
                   SUBTRACT CLM-COPAY-AMOUNT
                       FROM CLM-ALLOWED-AMOUNT
                       GIVING CLM-APPROVED-AMOUNT
                   MOVE CLM-COPAY-AMOUNT
                       TO CLM-MEMBER-RESP
           END-EVALUATE.

Step 3: Refactor if needed

The code is clean enough, but we might extract the preventive care check into its own paragraph for reuse:

        3400-CHECK-PREVENTIVE.
      * Note: SET ... TO FALSE requires the 88-level to carry
      * a FALSE phrase, e.g.:
      *   88 CLM-IS-PREVENTIVE  VALUE "Y" FALSE "N".
            IF CLM-PROCEDURE-CODE >= "99381"
               AND CLM-PROCEDURE-CODE <= "99397"
                SET CLM-IS-PREVENTIVE TO TRUE
            ELSE
                SET CLM-IS-PREVENTIVE TO FALSE
            END-IF.

       3500-APPLY-BENEFIT-RULES.
           PERFORM 3400-CHECK-PREVENTIVE
           EVALUATE TRUE
               WHEN CLM-IS-PREVENTIVE
                   MOVE CLM-ALLOWED-AMOUNT
                       TO CLM-APPROVED-AMOUNT
                   MOVE 0 TO CLM-COPAY-AMOUNT
                   MOVE 0 TO CLM-MEMBER-RESP
               WHEN OTHER
                   MOVE CLM-COPAY-SCHEDULE
                       TO CLM-COPAY-AMOUNT
                   SUBTRACT CLM-COPAY-AMOUNT
                       FROM CLM-ALLOWED-AMOUNT
                       GIVING CLM-APPROVED-AMOUNT
                   MOVE CLM-COPAY-AMOUNT
                       TO CLM-MEMBER-RESP
           END-EVALUATE.

Run the tests again — they should still pass after refactoring.

🔗 Cross-Reference: The TDD approach here works especially well when combined with the modular design patterns we explored in Chapter 22 (CALL and Subprogram Linkage). Programs designed with clear subprogram interfaces are inherently more testable than monoliths.

34.7 JCL for Test Execution

On a mainframe, tests run within the JCL framework. A well-designed test JCL stream automates the build-test-report cycle:

//UNITTEST  JOB (ACCT),'UNIT TESTS',CLASS=A,
//          MSGCLASS=X,MSGLEVEL=(1,1)
//*
//*----------------------------------------------------------
//* Step 1: Generate merged test program
//*----------------------------------------------------------
//MERGE    EXEC PGM=BPXBATCH
//STDPARM  DD *
SH java -jar /usr/lpp/cobol-check/cobol-check.jar
   -p BAL-CALC -c /etc/cobol-check/config.properties
/*
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
//*
//*----------------------------------------------------------
//* Step 2: Compile merged program
//*----------------------------------------------------------
//COMPILE  EXEC IGYWCL,
//         PARM.COBOL='RENT,APOST,MAP,XREF'
//COBOL.SYSIN DD DSN=TEST.MERGED.SOURCE(BALCALC),DISP=SHR
//LKED.SYSLMOD DD DSN=TEST.LOAD(BALCALC),DISP=SHR
//*
//*----------------------------------------------------------
//* Step 3: Execute tests
//*----------------------------------------------------------
//RUN      EXEC PGM=BALCALC
//STEPLIB  DD DSN=TEST.LOAD,DISP=SHR
//SYSOUT   DD SYSOUT=*
//TESTOUT  DD DSN=TEST.RESULTS(BALCALC),DISP=SHR
//*
//*----------------------------------------------------------
//* Step 4: Success marker -- COND=(0,NE,RUN) bypasses this
//* step unless the RUN step ended with RC = 0, so a skipped
//* CHKRC step signals a test failure
//*----------------------------------------------------------
//CHKRC    EXEC PGM=IEFBR14,COND=(0,NE,RUN)

Comparison Utilities for Regression Testing

For integration-level tests where you compare actual output files to expected baselines, mainframe comparison utilities are essential:

//*----------------------------------------------------------
//* Compare actual output to expected baseline
//*----------------------------------------------------------
//COMPARE  EXEC PGM=ISRSUPC,
//         PARM='DELTAL,LINECMP'
//NEWDD    DD DSN=TEST.ACTUAL.OUTPUT,DISP=SHR
//OLDDD    DD DSN=TEST.EXPECTED.BASELINE,DISP=SHR
//OUTDD    DD SYSOUT=*

ISRSUPC (the ISPF SuperC compare utility) will report any differences between actual and expected output. A clean comparison with no differences confirms the test passes.

34.8 Regression Testing: Building the Safety Net

Regression testing ensures that new changes don't break existing functionality. In COBOL shops, regression testing is arguably more important than in other environments, because the cost of production errors in financial and healthcare systems is enormous.

The Regression Test Suite Architecture

A well-organized regression test suite follows this structure:

TEST/
├── UNIT/
│   ├── BAL-CALC.cut        (balance calculation tests)
│   ├── ACCT-VALID.cut       (account validation tests)
│   ├── TXN-PROC.cut         (transaction processing tests)
│   └── RPT-FMT.cut          (report formatting tests)
├── INTEGRATION/
│   ├── NIGHTLY-BATCH/
│   │   ├── TEST-INPUT/       (test input files)
│   │   ├── EXPECTED-OUTPUT/  (baseline output files)
│   │   └── RUN-NIGHTLY.jcl  (test execution JCL)
│   └── ONLINE/
│       ├── CICS-TESTS/
│       └── RUN-ONLINE.jcl
├── DATA/
│   ├── GEN-TEST-ACCTS.cbl   (test data generator)
│   ├── GEN-TEST-TXNS.cbl    (test data generator)
│   └── REFRESH-TESTDB.jcl   (database refresh)
└── SCRIPTS/
    ├── RUN-ALL-TESTS.jcl    (master test JCL)
    └── REPORT-RESULTS.cbl   (test result aggregator)

Automating the Suite

The master test JCL runs all test suites in sequence and produces a consolidated report:

//REGRESS  JOB (ACCT),'REGRESSION SUITE',CLASS=A,
//         MSGCLASS=X,MSGLEVEL=(1,1)
//*
//* Step 1: Refresh test data
//REFRESH  EXEC PGM=GENTESTD
//STEPLIB  DD DSN=TEST.LOAD,DISP=SHR
//TESTDATA DD DSN=TEST.DATA,DISP=OLD
//*
//* Step 2: Run unit tests
//UNIT01   EXEC UNITTEST,PROG=BALCALC
//UNIT02   EXEC UNITTEST,PROG=ACCTVAL
//UNIT03   EXEC UNITTEST,PROG=TXNPROC
//*
//* Step 3: Run integration tests
//INTEG    EXEC INTGTEST,SUITE=NIGHTLY
//*
//* Step 4: Aggregate results
//REPORT   EXEC PGM=RPTRESLT
//RESULTS  DD DSN=TEST.RESULTS,DISP=SHR
//SYSOUT   DD SYSOUT=*

💡 The Modernization Spectrum: You don't have to implement a full CI/CD pipeline on day one. Start with a single unit test for a single paragraph. Then add tests as you modify code. Over time, the safety net grows organically. This incremental approach is especially effective in COBOL shops where wholesale process changes face institutional resistance.

34.9 Code Coverage in COBOL

Code coverage measures how much of your program's code is actually exercised by your tests. While 100% coverage is neither achievable nor necessary, understanding coverage helps you identify untested code paths.

Types of Coverage

Coverage Type       What It Measures                  COBOL Relevance
------------------  --------------------------------  ------------------------------------------------------
Statement coverage  % of statements executed          Basic metric — "did we run this line?"
Branch coverage     % of IF/EVALUATE branches taken   Critical — EVALUATE statements often have many branches
Paragraph coverage  % of paragraphs PERFORMed         COBOL-specific — are all paragraphs reachable?
Path coverage       % of unique execution paths       Most thorough but exponentially expensive

Measuring Coverage

Enterprise COBOL provides the TEST(SEPARATE) compiler option, which generates instrumentation data. When combined with IBM Debug Tool or IBM Application Discovery, you can produce coverage reports:

Coverage Report: BAL-CALC
================================
Paragraph                  Covered?  Hit Count
---------                  --------  ---------
0000-MAIN                  YES       1
1000-INIT                  YES       1
2000-READ-ACCOUNTS         YES       1
2100-PROCESS-CHECKING      YES       3
2200-PROCESS-SAVINGS       YES       2
2300-PROCESS-CD            NO        0    <<<
2400-PROCESS-MONEY-MARKET  NO        0    <<<
3000-CALC-INTEREST         YES       5
3100-APPLY-RATE            YES       5
3200-COMPOUND-DAILY        YES       3
3300-COMPOUND-MONTHLY      YES       2
3400-SIMPLE-INTEREST       NO        0    <<<
4000-WRITE-OUTPUT          YES       5
9000-CLEANUP               YES       1

Statement Coverage: 78%
Branch Coverage: 62%
Paragraph Coverage: 11/14 (79%)

The report immediately reveals that we have no tests for CD accounts, money market accounts, or simple interest calculations. These are the areas where a new defect is most likely to hide undetected.
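The coverage arithmetic is easy to reproduce off-platform. The following Python sketch (illustrative only, not part of the mainframe toolchain) recomputes the paragraph-coverage figure from the hit counts in the report:

```python
# Recompute the paragraph-coverage figure from the hit counts
# transcribed from the coverage report above.

hit_counts = {
    "0000-MAIN": 1,
    "1000-INIT": 1,
    "2000-READ-ACCOUNTS": 1,
    "2100-PROCESS-CHECKING": 3,
    "2200-PROCESS-SAVINGS": 2,
    "2300-PROCESS-CD": 0,
    "2400-PROCESS-MONEY-MARKET": 0,
    "3000-CALC-INTEREST": 5,
    "3100-APPLY-RATE": 5,
    "3200-COMPOUND-DAILY": 3,
    "3300-COMPOUND-MONTHLY": 2,
    "3400-SIMPLE-INTEREST": 0,
    "4000-WRITE-OUTPUT": 5,
    "9000-CLEANUP": 1,
}

covered = [p for p, n in hit_counts.items() if n > 0]
gaps = [p for p, n in hit_counts.items() if n == 0]
print(f"Paragraph coverage: {len(covered)}/{len(hit_counts)} "
      f"({len(covered) / len(hit_counts):.0%})")
print("Untested paragraphs:", ", ".join(gaps))
```

Running this prints the same 11/14 (79%) figure as the report and lists the three untested paragraphs.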

Using Coverage to Guide Test Writing

Coverage analysis should drive test creation. For each uncovered paragraph or branch, ask:

  1. Is this code reachable? If no test can reach it, it may be dead code (see Chapter 35).
  2. What inputs would trigger this path? Create test data for those inputs.
  3. Is this a high-risk path? Financial calculations and validation logic deserve higher coverage than logging and formatting.

Applying these questions to the BAL-CALC coverage report produces targeted tests for the three untested paragraphs:

       TESTSUITE "BAL-CALC Coverage Gap Tests"

       TESTCASE "CD account interest calculation"
           MOVE "CD"      TO ACCT-TYPE
           MOVE 10000.00  TO ACCT-BALANCE
           MOVE 0.0450    TO ACCT-RATE
           MOVE 12        TO ACCT-TERM-MONTHS
           PERFORM 2300-PROCESS-CD
           EXPECT WS-INTEREST-AMT TO BE 450.00

       TESTCASE "Money market tiered rate"
           MOVE "MM"      TO ACCT-TYPE
           MOVE 50000.00  TO ACCT-BALANCE
           PERFORM 2400-PROCESS-MONEY-MARKET
           EXPECT WS-TIER-LEVEL TO BE 2

       TESTCASE "Simple interest calculation"
           MOVE "S"       TO ACCT-INT-METHOD
           MOVE 1000.00   TO ACCT-BALANCE
           MOVE 0.0500    TO ACCT-RATE
           MOVE 30        TO WS-DAYS-IN-PERIOD
           PERFORM 3400-SIMPLE-INTEREST
           EXPECT WS-INTEREST-AMT TO BE 4.11

34.10 GlobalBank Case Study: Unit Testing BAL-CALC

Maria Chen decided to build a comprehensive test suite for BAL-CALC, the interest calculation program, after the 2019 incident. She started by identifying the program's core calculation paragraphs and the business rules they implement.

The Program Under Test

BAL-CALC processes the ACCT-MASTER file nightly, calculating interest for every account. The key logic resides in these paragraphs:

       3000-CALC-INTEREST.
      *    Determine calculation method based on account type
           EVALUATE ACCT-TYPE
               WHEN "CHK"
                   IF ACCT-BALANCE > WS-CHK-INT-THRESHOLD
                       PERFORM 3100-APPLY-RATE
                   ELSE
                       MOVE 0 TO WS-INTEREST-AMT
                   END-IF
               WHEN "SAV"
                   PERFORM 3100-APPLY-RATE
               WHEN "CD"
                   PERFORM 3100-APPLY-RATE
               WHEN "MMA"
                   PERFORM 3200-CALC-TIERED-RATE
                   PERFORM 3100-APPLY-RATE
               WHEN OTHER
                   MOVE "UNKNOWN-TYPE" TO WS-ERROR-MSG
                   PERFORM 9100-LOG-ERROR
           END-EVALUATE.

       3100-APPLY-RATE.
           EVALUATE ACCT-COMPOUND-METHOD
               WHEN "D"
                   PERFORM 3110-COMPOUND-DAILY
               WHEN "M"
                   PERFORM 3120-COMPOUND-MONTHLY
               WHEN "S"
                   PERFORM 3130-SIMPLE-INTEREST
               WHEN OTHER
                   MOVE "BAD-COMPOUND" TO WS-ERROR-MSG
                   PERFORM 9100-LOG-ERROR
           END-EVALUATE.

       3110-COMPOUND-DAILY.
      *    Daily compound: A = P * (1 + r/365)^n
           COMPUTE WS-DAILY-RATE =
               ACCT-ANNUAL-RATE / 365
           COMPUTE WS-GROWTH-FACTOR =
               (1 + WS-DAILY-RATE) ** WS-DAYS-IN-PERIOD
           COMPUTE WS-INTEREST-AMT ROUNDED =
               ACCT-BALANCE * (WS-GROWTH-FACTOR - 1).

       3120-COMPOUND-MONTHLY.
      *    Monthly compound: A = P * (1 + r/12)^n
           COMPUTE WS-MONTHLY-RATE =
               ACCT-ANNUAL-RATE / 12
           COMPUTE WS-GROWTH-FACTOR =
               (1 + WS-MONTHLY-RATE) ** WS-MONTHS-IN-PERIOD
           COMPUTE WS-INTEREST-AMT ROUNDED =
               ACCT-BALANCE * (WS-GROWTH-FACTOR - 1).

       3130-SIMPLE-INTEREST.
      *    Simple: I = P * r * t/365
           COMPUTE WS-INTEREST-AMT ROUNDED =
               ACCT-BALANCE * ACCT-ANNUAL-RATE
               * WS-DAYS-IN-PERIOD / 365.
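
Before hard-coding an expected value into a test, it pays to derive it independently. The following Python sketch (illustrative only, not part of the mainframe toolchain) mirrors the three COBOL formulas, with round(..., 2) standing in for COMPUTE ... ROUNDED:

```python
# Off-platform check of the three interest formulas in paragraphs
# 3110-COMPOUND-DAILY, 3120-COMPOUND-MONTHLY, and 3130-SIMPLE-INTEREST.

def compound_daily(balance, annual_rate, days):
    # A = P * (1 + r/365)^n; interest = A - P
    growth = (1 + annual_rate / 365) ** days
    return round(balance * (growth - 1), 2)

def compound_monthly(balance, annual_rate, months):
    # A = P * (1 + r/12)^n; interest = A - P
    growth = (1 + annual_rate / 12) ** months
    return round(balance * (growth - 1), 2)

def simple_interest(balance, annual_rate, days):
    # I = P * r * t/365
    return round(balance * annual_rate * days / 365, 2)

print(compound_daily(10000.00, 0.045, 30))   # 37.05
print(compound_monthly(10000.00, 0.045, 1))  # 37.5
print(simple_interest(1000.00, 0.05, 30))    # 4.11
```

Deriving expected values this way catches transcription errors before they are frozen into EXPECT statements.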

Maria's Test Suite

       TESTSUITE "BAL-CALC Interest Calculations"

      *============================================================
      * Checking account tests
      *============================================================
       TESTCASE "Checking below threshold earns no interest"
           MOVE "CHK"    TO ACCT-TYPE
           MOVE 500.00   TO ACCT-BALANCE
           MOVE 1000.00  TO WS-CHK-INT-THRESHOLD
           PERFORM 3000-CALC-INTEREST
           EXPECT WS-INTEREST-AMT TO BE 0

       TESTCASE "Checking above threshold earns interest"
           MOVE "CHK"    TO ACCT-TYPE
           MOVE 5000.00  TO ACCT-BALANCE
           MOVE 1000.00  TO WS-CHK-INT-THRESHOLD
           MOVE 0.0100   TO ACCT-ANNUAL-RATE
           MOVE "D"      TO ACCT-COMPOUND-METHOD
           MOVE 30       TO WS-DAYS-IN-PERIOD
           PERFORM 3000-CALC-INTEREST
           EXPECT WS-INTEREST-AMT TO BE GREATER THAN 0

       TESTCASE "Checking at exact threshold"
           MOVE "CHK"    TO ACCT-TYPE
           MOVE 1000.00  TO ACCT-BALANCE
           MOVE 1000.00  TO WS-CHK-INT-THRESHOLD
           MOVE 0.0100   TO ACCT-ANNUAL-RATE
           MOVE "D"      TO ACCT-COMPOUND-METHOD
           MOVE 30       TO WS-DAYS-IN-PERIOD
           PERFORM 3000-CALC-INTEREST
      *    Business rule: balances of $1,000 or more earn interest
           EXPECT WS-INTEREST-AMT TO BE GREATER THAN 0

      *============================================================
      * Savings account tests
      *============================================================
       TESTCASE "Savings daily compound 30 days"
           MOVE "SAV"    TO ACCT-TYPE
           MOVE 10000.00 TO ACCT-BALANCE
           MOVE 0.0450   TO ACCT-ANNUAL-RATE
           MOVE "D"      TO ACCT-COMPOUND-METHOD
           MOVE 30       TO WS-DAYS-IN-PERIOD
           PERFORM 3000-CALC-INTEREST
      *    Expected: 10000 * ((1 + 0.045/365)^30 - 1) = 37.05
           EXPECT WS-INTEREST-AMT TO BE 37.05

       TESTCASE "Savings zero balance"
           MOVE "SAV"    TO ACCT-TYPE
           MOVE 0        TO ACCT-BALANCE
           MOVE 0.0450   TO ACCT-ANNUAL-RATE
           MOVE "D"      TO ACCT-COMPOUND-METHOD
           MOVE 30       TO WS-DAYS-IN-PERIOD
           PERFORM 3000-CALC-INTEREST
           EXPECT WS-INTEREST-AMT TO BE 0

      *============================================================
      * Error handling tests
      *============================================================
       TESTCASE "Unknown account type logs error"
           MOVE "XXX"    TO ACCT-TYPE
           PERFORM 3000-CALC-INTEREST
           EXPECT WS-ERROR-MSG TO BE "UNKNOWN-TYPE"

       TESTCASE "Bad compound method logs error"
           MOVE "SAV"    TO ACCT-TYPE
           MOVE 10000.00 TO ACCT-BALANCE
           MOVE "X"      TO ACCT-COMPOUND-METHOD
           PERFORM 3000-CALC-INTEREST
           EXPECT WS-ERROR-MSG TO BE "BAD-COMPOUND"

The Bug That Tests Caught

After writing these tests, Maria discovered that the "checking at exact threshold" test revealed a bug. The original code used > (greater than), but the business rule specified "1,000 dollars or more." The threshold check should have been >=. Without the test, this discrepancy would have continued, denying interest to customers with exactly the threshold balance.

Derek noted that this was the same class of bug that caused the 2019 incident — a boundary condition error. "If we'd had these tests three years ago," he said, "we'd have saved $2.3 million."

34.11 MedClaim Case Study: Testing Adjudication Logic

James Okafor's team faced a different testing challenge with CLM-ADJUD. The adjudication program implements dozens of business rules, each with multiple conditions. A single claim passes through eligibility checking, benefit application, coordination of benefits, and payment calculation. The combinatorial explosion of possible paths makes exhaustive testing impractical.

Decision Table Testing

James used decision tables to systematically identify test cases. For the eligibility check alone:

Condition                 Test 1  Test 2  Test 3   Test 4  Test 5  Test 6
------------------------  ------  ------  -------  ------  ------  ------
Member active?            Y       N       Y        Y       Y       Y
Provider in network?      Y       Y       N        Y       Y       Y
Service date in policy?   Y       Y       Y        N       Y       Y
Procedure covered?        Y       Y       Y        Y       N       Y
Pre-auth required?        N       N       N        N       N       Y
Pre-auth obtained?        N/A     N/A     N/A      N/A     N/A     N
Expected Result           PASS    REJECT  OUT-NET  REJECT  DENY    PEND

Each row of the decision table becomes a test case:

       TESTSUITE "Claim Eligibility Tests"

       TESTCASE "All conditions met - claim eligible"
           PERFORM 9000-SETUP-BASE-CLAIM
           SET MEMBER-ACTIVE       TO TRUE
           SET PROVIDER-IN-NETWORK TO TRUE
           MOVE "2025-10-15" TO CLM-SERVICE-DATE
           MOVE "2025-01-01" TO MBR-POLICY-START
           MOVE "2025-12-31" TO MBR-POLICY-END
           SET PROCEDURE-COVERED   TO TRUE
           SET PREAUTH-NOT-REQUIRED TO TRUE
           PERFORM 2000-CHECK-ELIGIBILITY
           EXPECT CLM-ELIG-STATUS TO BE "PASS"

       TESTCASE "Inactive member - claim rejected"
           PERFORM 9000-SETUP-BASE-CLAIM
           SET MEMBER-INACTIVE     TO TRUE
           PERFORM 2000-CHECK-ELIGIBILITY
           EXPECT CLM-ELIG-STATUS TO BE "REJECT"
           EXPECT CLM-REJECT-CODE TO BE "R001"
           EXPECT CLM-REJECT-MSG TO BE
               "MEMBER NOT ACTIVE ON DATE OF SERVICE"

       TESTCASE "Out of network provider"
           PERFORM 9000-SETUP-BASE-CLAIM
           SET MEMBER-ACTIVE       TO TRUE
           SET PROVIDER-OUT-NETWORK TO TRUE
           PERFORM 2000-CHECK-ELIGIBILITY
           EXPECT CLM-ELIG-STATUS TO BE "OUT-NET"

       TESTCASE "Service date outside policy"
           PERFORM 9000-SETUP-BASE-CLAIM
           SET MEMBER-ACTIVE       TO TRUE
           SET PROVIDER-IN-NETWORK TO TRUE
           MOVE "2024-06-15" TO CLM-SERVICE-DATE
           MOVE "2025-01-01" TO MBR-POLICY-START
           MOVE "2025-12-31" TO MBR-POLICY-END
           PERFORM 2000-CHECK-ELIGIBILITY
           EXPECT CLM-ELIG-STATUS TO BE "REJECT"
           EXPECT CLM-REJECT-CODE TO BE "R003"

       TESTCASE "Pre-auth required but not obtained"
           PERFORM 9000-SETUP-BASE-CLAIM
           SET MEMBER-ACTIVE       TO TRUE
           SET PROVIDER-IN-NETWORK TO TRUE
           MOVE "2025-10-15" TO CLM-SERVICE-DATE
           MOVE "2025-01-01" TO MBR-POLICY-START
           MOVE "2025-12-31" TO MBR-POLICY-END
           SET PROCEDURE-COVERED   TO TRUE
           SET PREAUTH-REQUIRED    TO TRUE
           SET PREAUTH-NOT-OBTAINED TO TRUE
           PERFORM 2000-CHECK-ELIGIBILITY
           EXPECT CLM-ELIG-STATUS TO BE "PEND"
           EXPECT CLM-PEND-CODE TO BE "P001"

      *============================================================
      * Shared setup paragraph for base claim data
      *============================================================
        9000-SETUP-BASE-CLAIM.
           INITIALIZE CLM-WORK-AREA
           MOVE "CLM99999"  TO CLM-CLAIM-ID
           MOVE "MEM12345"  TO CLM-MEMBER-ID
           MOVE "PRV00100"  TO CLM-PROVIDER-ID
           MOVE "99213"     TO CLM-PROCEDURE-CODE
           MOVE 150.00      TO CLM-BILLED-AMOUNT
           MOVE "2025-10-15" TO CLM-SERVICE-DATE.
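
Keeping the decision table itself as machine-readable data makes it cheap to confirm that a reference model of the rules agrees with every row. The following Python sketch does that check; the field names and the adjudication order are illustrative, not the actual CLM-ADJUD copybook names:

```python
# Decision-table rows from the eligibility table above, one tuple per
# test column:
# (active, in_network, date_ok, covered, preauth_req, preauth_ok, expected)
ELIGIBILITY_TABLE = [
    (True,  True,  True,  True,  False, None,  "PASS"),
    (False, True,  True,  True,  False, None,  "REJECT"),
    (True,  False, True,  True,  False, None,  "OUT-NET"),
    (True,  True,  False, True,  False, None,  "REJECT"),
    (True,  True,  True,  False, False, None,  "DENY"),
    (True,  True,  True,  True,  True,  False, "PEND"),
]

def check_eligibility(active, in_network, date_ok, covered,
                      preauth_req, preauth_ok):
    # Reference model of the rules, evaluated in table order
    if not active:
        return "REJECT"
    if not in_network:
        return "OUT-NET"
    if not date_ok:
        return "REJECT"
    if not covered:
        return "DENY"
    if preauth_req and not preauth_ok:
        return "PEND"
    return "PASS"

for *inputs, expected in ELIGIBILITY_TABLE:
    assert check_eligibility(*inputs) == expected
print("all 6 decision-table rows verified")
```

If the business analysts revise the table, rerunning the check immediately shows which rows the model no longer satisfies.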

Edge Cases Sarah Kim Identified

Sarah Kim, the business analyst, identified several edge cases from real claims that had caused processing errors:

       TESTSUITE "Adjudication Edge Cases"

       TESTCASE "Claim on policy start date"
           MOVE "2025-01-01" TO CLM-SERVICE-DATE
           MOVE "2025-01-01" TO MBR-POLICY-START
           PERFORM 2100-CHECK-DATE-RANGE
           EXPECT CLM-DATE-VALID TO BE "Y"

       TESTCASE "Claim on policy end date"
           MOVE "2025-12-31" TO CLM-SERVICE-DATE
           MOVE "2025-12-31" TO MBR-POLICY-END
           PERFORM 2100-CHECK-DATE-RANGE
           EXPECT CLM-DATE-VALID TO BE "Y"

       TESTCASE "Leap year date Feb 29"
           MOVE "2024-02-29" TO CLM-SERVICE-DATE
           MOVE "2024-01-01" TO MBR-POLICY-START
           MOVE "2024-12-31" TO MBR-POLICY-END
           PERFORM 2100-CHECK-DATE-RANGE
           EXPECT CLM-DATE-VALID TO BE "Y"

       TESTCASE "Billed amount exceeds allowed by >500%"
           MOVE 150.00  TO CLM-ALLOWED-AMOUNT
           MOVE 1000.00 TO CLM-BILLED-AMOUNT
           PERFORM 3100-CHECK-EXCESSIVE-CHARGE
           EXPECT CLM-EXCESSIVE-FLAG TO BE "Y"

       TESTCASE "Coordination of benefits primary pays"
           MOVE "P" TO CLM-COB-STATUS
           MOVE 500.00 TO CLM-BILLED-AMOUNT
           MOVE 400.00 TO CLM-ALLOWED-AMOUNT
           MOVE 30.00  TO CLM-COPAY-AMOUNT
           PERFORM 4000-APPLY-COB
           EXPECT CLM-PAY-AMOUNT TO BE 370.00

       TESTCASE "Coordination of benefits secondary pays"
           MOVE "S" TO CLM-COB-STATUS
           MOVE 500.00 TO CLM-BILLED-AMOUNT
           MOVE 400.00 TO CLM-ALLOWED-AMOUNT
           MOVE 350.00 TO CLM-PRIMARY-PAID
           PERFORM 4000-APPLY-COB
           EXPECT CLM-PAY-AMOUNT TO BE 50.00
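
The two coordination-of-benefits expectations follow from a simple payment model. The sketch below captures the rule the tests imply (a primary payer owes the allowed amount minus the copay; a secondary payer owes what the primary left unpaid); the function and parameter names are illustrative:

```python
# Payment rule implied by the two COB test cases above.

def cob_payment(status, allowed, copay=0.0, primary_paid=0.0):
    if status == "P":                       # primary payer
        return round(allowed - copay, 2)
    if status == "S":                       # secondary payer
        return round(allowed - primary_paid, 2)
    raise ValueError(f"unknown COB status: {status!r}")

print(cob_payment("P", allowed=400.00, copay=30.00))          # 370.0
print(cob_payment("S", allowed=400.00, primary_paid=350.00))  # 50.0
```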

34.12 Building a Custom Test Harness

While COBOL-Check is the recommended modern approach, many shops build custom test harnesses when COBOL-Check cannot be used (due to compiler restrictions, security policies, or organizational reasons). A custom harness follows this pattern:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. TEST-HARNESS.
      *---------------------------------------------------------------
      * Custom test harness for BAL-CALC paragraphs.
      * Tests core calculation logic in isolation.
      *---------------------------------------------------------------

       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * Test framework fields
       01  WS-TEST-COUNTERS.
           05 WS-TESTS-RUN        PIC 9(4)  VALUE 0.
           05 WS-TESTS-PASSED     PIC 9(4)  VALUE 0.
           05 WS-TESTS-FAILED     PIC 9(4)  VALUE 0.
       01  WS-TEST-NAME           PIC X(60).
       01  WS-EXPECTED-VALUE      PIC S9(9)V99.
       01  WS-ACTUAL-VALUE        PIC S9(9)V99.
       01  WS-TOLERANCE           PIC 9V9(6) VALUE 0.005.

      * Copy in the fields from BAL-CALC
           COPY ACCT-WS-FIELDS.

       PROCEDURE DIVISION.
       0000-MAIN.
           DISPLAY "=================================="
           DISPLAY " BAL-CALC Unit Test Suite"
           DISPLAY "=================================="
           PERFORM 1000-TEST-DAILY-COMPOUND
           PERFORM 2000-TEST-MONTHLY-COMPOUND
           PERFORM 3000-TEST-SIMPLE-INTEREST
           PERFORM 4000-TEST-CHECKING-THRESHOLD
           PERFORM 8000-REPORT-RESULTS
           IF WS-TESTS-FAILED > 0
               MOVE 8 TO RETURN-CODE
           END-IF
           STOP RUN.

       1000-TEST-DAILY-COMPOUND.
           MOVE "Daily compound: $10,000 at 4.5% for 30 days"
               TO WS-TEST-NAME
           MOVE 10000.00  TO ACCT-BALANCE
           MOVE 0.0450    TO ACCT-ANNUAL-RATE
           MOVE 30        TO WS-DAYS-IN-PERIOD
           PERFORM 3110-COMPOUND-DAILY
           MOVE WS-INTEREST-AMT TO WS-ACTUAL-VALUE
           MOVE 37.05           TO WS-EXPECTED-VALUE
           PERFORM 9000-ASSERT-EQUAL.

       2000-TEST-MONTHLY-COMPOUND.
           MOVE "Monthly compound: $10,000 at 4.5% for 1 month"
               TO WS-TEST-NAME
           MOVE 10000.00  TO ACCT-BALANCE
           MOVE 0.0450    TO ACCT-ANNUAL-RATE
           MOVE 1         TO WS-MONTHS-IN-PERIOD
           PERFORM 3120-COMPOUND-MONTHLY
           MOVE WS-INTEREST-AMT TO WS-ACTUAL-VALUE
           MOVE 37.50           TO WS-EXPECTED-VALUE
           PERFORM 9000-ASSERT-EQUAL.

       3000-TEST-SIMPLE-INTEREST.
           MOVE "Simple interest: $1,000 at 5.0% for 30 days"
               TO WS-TEST-NAME
           MOVE 1000.00   TO ACCT-BALANCE
           MOVE 0.0500    TO ACCT-ANNUAL-RATE
           MOVE 30        TO WS-DAYS-IN-PERIOD
           PERFORM 3130-SIMPLE-INTEREST
           MOVE WS-INTEREST-AMT TO WS-ACTUAL-VALUE
           MOVE 4.11            TO WS-EXPECTED-VALUE
           PERFORM 9000-ASSERT-EQUAL.

       4000-TEST-CHECKING-THRESHOLD.
           MOVE "Checking below threshold earns $0"
               TO WS-TEST-NAME
           MOVE "CHK"     TO ACCT-TYPE
           MOVE 500.00    TO ACCT-BALANCE
           MOVE 1000.00   TO WS-CHK-INT-THRESHOLD
           PERFORM 3000-CALC-INTEREST
           MOVE WS-INTEREST-AMT TO WS-ACTUAL-VALUE
           MOVE 0               TO WS-EXPECTED-VALUE
           PERFORM 9000-ASSERT-EQUAL.

       8000-REPORT-RESULTS.
           DISPLAY " "
           DISPLAY "=================================="
           DISPLAY " Test Results"
           DISPLAY "=================================="
           DISPLAY " Tests Run:    " WS-TESTS-RUN
           DISPLAY " Tests Passed: " WS-TESTS-PASSED
           DISPLAY " Tests Failed: " WS-TESTS-FAILED
           DISPLAY "=================================="
           IF WS-TESTS-FAILED = 0
               DISPLAY " ALL TESTS PASSED"
           ELSE
               DISPLAY " *** FAILURES DETECTED ***"
           END-IF.

        9000-ASSERT-EQUAL.
            ADD 1 TO WS-TESTS-RUN
      *    Pass when values agree within the half-cent tolerance
            IF FUNCTION ABS(WS-ACTUAL-VALUE - WS-EXPECTED-VALUE)
                    < WS-TOLERANCE
                ADD 1 TO WS-TESTS-PASSED
                DISPLAY "  PASS: " WS-TEST-NAME
            ELSE
                ADD 1 TO WS-TESTS-FAILED
                DISPLAY "  FAIL: " WS-TEST-NAME
                DISPLAY "    Expected: " WS-EXPECTED-VALUE
                DISPLAY "    Actual:   " WS-ACTUAL-VALUE
            END-IF.

      * Copy in the calculation paragraphs from BAL-CALC
           COPY BAL-CALC-PARAS.

This custom harness pattern has an important design choice: it copies in the paragraphs under test using COPY statements. This means the test harness always tests the current version of the code. When the production code changes, the test harness automatically picks up the changes on recompilation.
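
The half-cent tolerance in 9000-ASSERT-EQUAL guards against rounding differences in the last digit. The same comparison can be sketched in Python using the decimal module, which (when constructed from strings) mirrors COBOL's packed-decimal arithmetic more faithfully than binary floats; this is an illustrative analogue, not part of the harness:

```python
from decimal import Decimal

# Tolerance-based equality as in 9000-ASSERT-EQUAL: pass when the
# absolute difference is under half a cent.

TOLERANCE = Decimal("0.005")

def within_tolerance(actual, expected):
    return abs(Decimal(actual) - Decimal(expected)) < TOLERANCE

print(within_tolerance("37.05", "37.05"))  # True
print(within_tolerance("37.05", "37.01"))  # False
```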

🧪 Lab Exercise: Build a custom test harness for a simple COBOL program of your own design. Include at least three test paragraphs and the assert-equal utility paragraph. Run it in your Student Mainframe Lab and observe the PASS/FAIL output. Then intentionally break the code under test and verify that the appropriate test fails.

34.13 Best Practices for COBOL Unit Testing

Drawing from the experiences at GlobalBank and MedClaim, here are the practices that make COBOL unit testing sustainable:

1. Test the Right Things

Not every paragraph needs a unit test. Focus on:

  • Calculation logic — financial formulas, date calculations, rate lookups
  • Validation rules — field edits, range checks, cross-field validations
  • Decision logic — EVALUATE statements with many branches, complex IF nesting
  • Error handling — what happens when things go wrong?

Skip:

  • Simple MOVE statements
  • File OPEN/CLOSE/READ/WRITE (these are integration concerns)
  • Display formatting (unless it contains logic)

2. Name Tests Descriptively

A test name should describe the scenario, not the paragraph:

       * Bad:
       TESTCASE "Test 3100-APPLY-RATE"

       * Good:
       TESTCASE "Savings account daily compound 30 days at 4.5%"

When a test fails, the name should tell you what broke without looking at the test code.

3. One Assertion Per Concept

Each test should verify one logical concept. Multiple EXPECT statements are fine if they all verify aspects of the same outcome:

       * Good: multiple assertions about one outcome
       TESTCASE "Rejected claim has correct status fields"
           PERFORM 2000-CHECK-ELIGIBILITY
           EXPECT CLM-ELIG-STATUS TO BE "REJECT"
           EXPECT CLM-REJECT-CODE TO BE "R001"
           EXPECT CLM-REJECT-MSG NOT TO BE SPACES

4. Initialize Before Each Test

Always INITIALIZE your work areas at the start of each test to prevent state leakage between tests:

       TESTCASE "Clean state test"
           INITIALIZE WS-CLAIM-WORK-AREA
           INITIALIZE WS-CALC-FIELDS
           * Now set up specific values for this test
           MOVE 150.00 TO CLM-BILLED-AMOUNT
           ...

5. Test Boundary Values Explicitly

For every numeric field, test: zero, one, maximum, and boundary conditions. For every alphanumeric field, test: spaces, a valid value, and an invalid value.

6. Keep Tests Fast

Unit tests should compile and run in seconds, not minutes. If your tests require file I/O or database access, they are integration tests, not unit tests. Both are valuable, but keep them separate.

7. Run Tests Before Every Commit

Integrate testing into your change management process. No code change should be promoted to production without the regression suite passing. This is the single most impactful practice you can adopt.

⚖️ Debate: How Much Testing Is Enough? Some teams argue that 80% code coverage is a reasonable target. Others argue that coverage metrics create perverse incentives — developers write tests to hit coverage numbers rather than to catch bugs. The pragmatic answer: test the code that matters most (calculations, validations, business rules) and accept that some code (initialization, formatting, file I/O) is better verified through integration tests.

34.14 Integrating Tests into CI/CD Pipelines

Modern mainframe shops are increasingly integrating COBOL testing into continuous integration pipelines. Tools like Jenkins, GitLab CI, and IBM Dependency Based Build (DBB) can automate the build-test cycle:

// Jenkinsfile for COBOL CI/CD
pipeline {
    agent { label 'zos-agent' }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Generate Tests') {
            steps {
                sh 'java -jar cobol-check.jar -p BAL-CALC'
                sh 'java -jar cobol-check.jar -p TXN-PROC'
                sh 'java -jar cobol-check.jar -p CLM-ADJUD'
            }
        }
        stage('Compile') {
            steps {
                sh 'submit JCL/COMPILE-ALL.jcl'
                sh 'wait-for-job COMPILE-ALL'
            }
        }
        stage('Unit Tests') {
            steps {
                sh 'submit JCL/RUN-UNIT-TESTS.jcl'
                sh 'wait-for-job UNIT-TESTS'
            }
        }
        stage('Integration Tests') {
            steps {
                sh 'submit JCL/RUN-INTEG-TESTS.jcl'
                sh 'wait-for-job INTEG-TESTS'
            }
        }
    }
    post {
        failure {
            emailext subject: 'COBOL Build Failed',
                     body: 'Check Jenkins for details',
                     to: 'mainframe-team@globalbank.com'
        }
    }
}

This pipeline runs automatically when code is committed to the repository, catching defects before they reach production. Note that the submit and wait-for-job commands stand in for shop-specific wrapper scripts; tools such as Zowe CLI and IBM DBB provide equivalent job-submission and monitoring capabilities.

34.15 Test Data Management

Test data is the fuel that drives your test suite. Poorly managed test data leads to flaky tests, false positives, and false negatives. In COBOL environments, where test data often resides in datasets, VSAM files, and DB2 tables, disciplined test data management is critical.

The Test Data Lifecycle

Test data has a lifecycle that mirrors the test itself:

  1. Creation: Generate or extract data that exercises the scenario under test
  2. Loading: Place data into the files or tables the program reads
  3. Execution: Run the test against the loaded data
  4. Validation: Compare actual output to expected results
  5. Cleanup: Remove or reset test data to prevent contamination of subsequent tests

The lifecycle maps step-for-step onto a single JCL job:

//*----------------------------------------------------------
//* Complete test data lifecycle in JCL
//*----------------------------------------------------------
//* Step 1: Delete any leftover test data
//CLEANUP  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE TEST.ACCT.MASTER PURGE
  SET MAXCC=0
/*
//*
//* Step 2: Define fresh test VSAM dataset
//DEFINE   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (                         -
      NAME(TEST.ACCT.MASTER)               -
      RECORDSIZE(200 200)                  -
      KEYS(10 0)                           -
      CYLINDERS(1 1)                       -
      FREESPACE(20 10)                     -
      SHAREOPTIONS(2 3)                    -
  )                                        -
  DATA (NAME(TEST.ACCT.MASTER.DATA))       -
  INDEX (NAME(TEST.ACCT.MASTER.INDEX))
/*
//*
//* Step 3: Load test records
//LOADDATA EXEC PGM=GENTESTD
//STEPLIB  DD DSN=TEST.LOAD,DISP=SHR
//TESTFILE DD DSN=TEST.ACCT.MASTER,DISP=SHR
//*
//* Step 4: Run program under test
//RUNTEST  EXEC PGM=BALCALC
//STEPLIB  DD DSN=TEST.LOAD,DISP=SHR
//ACCTMSTR DD DSN=TEST.ACCT.MASTER,DISP=SHR
//OUTPUT   DD DSN=TEST.ACTUAL.OUTPUT,DISP=(NEW,CATLG),
//            SPACE=(TRK,(5,1)),
//            DCB=(RECFM=FB,LRECL=200,BLKSIZE=32000)
//*
//* Step 5: Compare to expected baseline
//COMPARE  EXEC PGM=ISRSUPC,PARM='DELTAL,LINECMP'
//NEWDD    DD DSN=TEST.ACTUAL.OUTPUT,DISP=SHR
//OLDDD    DD DSN=TEST.EXPECTED.OUTPUT,DISP=SHR
//OUTDD    DD SYSOUT=*

Test Data Isolation Strategies

One of the biggest challenges in COBOL testing is preventing test data from contaminating production data — and preventing tests from interfering with each other. Three strategies address this:

Strategy 1: Separate Datasets. Use dedicated test datasets that mirror the structure of production files but contain only test data. This is the safest approach, but requires maintaining parallel file definitions.

Strategy 2: Test Data Prefixes. Use a naming convention (e.g., account numbers starting with "99") to identify test records within shared datasets. This is riskier but sometimes necessary when separate datasets are impractical.

Strategy 3: Snapshot and Restore. Take a snapshot of the test environment before each test run, then restore after. IDCAMS REPRO can copy and restore VSAM datasets:

//* Snapshot before test
//SNAPSHOT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INFILE(PROD) OUTFILE(SNAP)
/*
//PROD     DD DSN=TEST.ACCT.MASTER,DISP=SHR
//SNAP     DD DSN=TEST.ACCT.SNAPSHOT,DISP=(NEW,CATLG),
//            SPACE=(CYL,(5,2))
//*
//* ... run tests ...
//*
//* Restore after test
//RESTORE  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INFILE(SNAP) OUTFILE(PROD) REPLACE
/*

Mock File Creation Patterns

When your program reads from files, you need to create mock files that contain precisely the data your test requires. A dedicated test data generator program produces deterministic, repeatable mock files:

      *================================================================*
      * GEN-MOCK-ACCT: Generate mock account master file for testing.
      * Creates accounts covering all account types, balance ranges,
      * and edge cases needed by the BAL-CALC test suite.
      *================================================================*
       IDENTIFICATION DIVISION.
       PROGRAM-ID. GEN-MOCK-ACCT.

       DATA DIVISION.
       FILE SECTION.
       FD  MOCK-ACCT-FILE
           RECORDING MODE IS F
           RECORD CONTAINS 200 CHARACTERS.
       01  MOCK-ACCT-REC.
           COPY CPY-ACCT-REC.

       WORKING-STORAGE SECTION.
       01  WS-REC-COUNT           PIC 9(5)   VALUE 0.
       01  WS-BASE-KEY            PIC 9(10)  VALUE 9900000000.

       PROCEDURE DIVISION.
       0000-MAIN.
           OPEN OUTPUT MOCK-ACCT-FILE
      *    Normal checking accounts
           PERFORM 1000-GEN-CHECKING-NORMAL
      *    Checking at exact threshold
           PERFORM 1100-GEN-CHECKING-THRESHOLD
      *    Savings with various compounding
           PERFORM 2000-GEN-SAVINGS-DAILY
           PERFORM 2100-GEN-SAVINGS-MONTHLY
           PERFORM 2200-GEN-SAVINGS-SIMPLE
      *    Zero balance savings
           PERFORM 2300-GEN-SAVINGS-ZERO
      *    CD accounts
           PERFORM 3000-GEN-CD-STANDARD
      *    Money market tiered rates
           PERFORM 4000-GEN-MMA-TIER1
           PERFORM 4100-GEN-MMA-TIER2
           PERFORM 4200-GEN-MMA-TIER3
      *    Maximum balance edge case
           PERFORM 5000-GEN-MAX-BALANCE
      *    Unknown account type (error path)
           PERFORM 6000-GEN-INVALID-TYPE
           CLOSE MOCK-ACCT-FILE
           DISPLAY "Generated " WS-REC-COUNT " mock accounts"
           STOP RUN.

       1000-GEN-CHECKING-NORMAL.
           INITIALIZE MOCK-ACCT-REC
           ADD 1 TO WS-BASE-KEY
           MOVE WS-BASE-KEY     TO ACCT-NUMBER
           MOVE "CHK"           TO ACCT-TYPE
           MOVE 5000.00         TO ACCT-BALANCE
           MOVE 0.0100          TO ACCT-ANNUAL-RATE
           MOVE "D"             TO ACCT-COMPOUND-METHOD
           MOVE "A"             TO ACCT-STATUS
           WRITE MOCK-ACCT-REC
           ADD 1 TO WS-REC-COUNT.

       1100-GEN-CHECKING-THRESHOLD.
           INITIALIZE MOCK-ACCT-REC
           ADD 1 TO WS-BASE-KEY
           MOVE WS-BASE-KEY     TO ACCT-NUMBER
           MOVE "CHK"           TO ACCT-TYPE
           MOVE 1000.00         TO ACCT-BALANCE
           MOVE 0.0100          TO ACCT-ANNUAL-RATE
           MOVE "D"             TO ACCT-COMPOUND-METHOD
           MOVE "A"             TO ACCT-STATUS
           WRITE MOCK-ACCT-REC
           ADD 1 TO WS-REC-COUNT.

      * ... remaining paragraphs follow same pattern ...

💡 Deterministic Data: A good mock file generator uses fixed values, not random ones. Random test data makes test failures unreproducible. If you need variation, use a predictable algorithm (e.g., account balance = account sequence number * 1000) so that any developer can recreate the exact same test data.
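The same rule translates directly to any language. A minimal Python sketch (all names here are hypothetical, not part of GEN-MOCK-ACCT) of sequence-driven, fully deterministic mock data:

```python
def make_mock_account(seq, acct_type="CHK", base_key=9_900_000_000):
    """Derive every field from the sequence number -- no randomness,
    so any developer can regenerate the identical file."""
    return {
        "number": base_key + seq,
        "type": acct_type,
        "balance": seq * 1000.00,   # predictable variation, per the tip above
        "status": "A",
    }

accounts = [make_mock_account(i) for i in range(1, 4)]
```

Running the generator twice produces byte-identical data, which is exactly the property that makes a failing test reproducible.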

34.16 Boundary Value Analysis in Practice

Boundary value analysis (BVA) is a systematic technique for identifying the input values most likely to reveal defects. The principle is simple: bugs cluster at boundaries. A field that works for the value 50 might fail at 0, at the maximum, or at the transition point between two processing paths.

The BVA Framework for COBOL

For every numeric field in your program, identify:

  1. Minimum valid value (and one below it)
  2. Maximum valid value (and one above it)
  3. Zero (if applicable)
  4. Transition values — the exact point where behavior changes

For every alphanumeric field:

  1. Spaces (all blank)
  2. Maximum length (filled to PIC size)
  3. Special characters (hyphens, apostrophes in names)
  4. Invalid values (values not in the expected set)
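The numeric checklist above can be mechanized. A Python sketch (the `boundary_values` helper is hypothetical) that enumerates BVA candidates for a field, assuming a two-decimal currency step:

```python
def boundary_values(min_valid, max_valid, transitions=(), step=0.01):
    """Enumerate BVA candidates for a numeric field: each bound, one
    step outside each bound, zero, and each transition point +/- one step."""
    values = {min_valid, min_valid - step, max_valid, max_valid + step, 0.0}
    for t in transitions:
        values.update({t - step, t, t + step})
    return sorted(round(v, 2) for v in values)

# Candidates for a PIC 9(9)V99 balance with one tier transition at $10,000
cases = boundary_values(0.00, 999_999_999.99, transitions=(10_000.00,))
```

Feeding the resulting list into a test generator guarantees that no boundary is skipped by accident.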

Worked Example: Interest Rate Boundaries

GlobalBank's money market accounts use tiered interest rates based on balance:

   Balance Range                Annual Rate
   -----------------------------------------
   $0.00 - $9,999.99            1.50%
   $10,000.00 - $49,999.99      2.25%
   $50,000.00 - $99,999.99      3.00%
   $100,000.00+                 3.75%

The boundary values to test are:

       TESTSUITE "Money Market Tier Boundaries"

       TESTCASE "Balance $0.00 — minimum, Tier 1"
           MOVE "MMA"     TO ACCT-TYPE
           MOVE 0.00      TO ACCT-BALANCE
           PERFORM 3200-CALC-TIERED-RATE
           EXPECT ACCT-ANNUAL-RATE TO BE 0.0150

       TESTCASE "Balance $9,999.99 — top of Tier 1"
           MOVE "MMA"     TO ACCT-TYPE
           MOVE 9999.99   TO ACCT-BALANCE
           PERFORM 3200-CALC-TIERED-RATE
           EXPECT ACCT-ANNUAL-RATE TO BE 0.0150

       TESTCASE "Balance $10,000.00 — bottom of Tier 2"
           MOVE "MMA"     TO ACCT-TYPE
           MOVE 10000.00  TO ACCT-BALANCE
           PERFORM 3200-CALC-TIERED-RATE
           EXPECT ACCT-ANNUAL-RATE TO BE 0.0225

       TESTCASE "Balance $49,999.99 — top of Tier 2"
           MOVE "MMA"     TO ACCT-TYPE
           MOVE 49999.99  TO ACCT-BALANCE
           PERFORM 3200-CALC-TIERED-RATE
           EXPECT ACCT-ANNUAL-RATE TO BE 0.0225

       TESTCASE "Balance $50,000.00 — bottom of Tier 3"
           MOVE "MMA"     TO ACCT-TYPE
           MOVE 50000.00  TO ACCT-BALANCE
           PERFORM 3200-CALC-TIERED-RATE
           EXPECT ACCT-ANNUAL-RATE TO BE 0.0300

       TESTCASE "Balance $99,999.99 — top of Tier 3"
           MOVE "MMA"     TO ACCT-TYPE
           MOVE 99999.99  TO ACCT-BALANCE
           PERFORM 3200-CALC-TIERED-RATE
           EXPECT ACCT-ANNUAL-RATE TO BE 0.0300

       TESTCASE "Balance $100,000.00 — bottom of Tier 4"
           MOVE "MMA"     TO ACCT-TYPE
           MOVE 100000.00 TO ACCT-BALANCE
           PERFORM 3200-CALC-TIERED-RATE
           EXPECT ACCT-ANNUAL-RATE TO BE 0.0375

       TESTCASE "Balance $999,999,999.99 — PIC maximum"
           MOVE "MMA"     TO ACCT-TYPE
           MOVE 999999999.99 TO ACCT-BALANCE
           PERFORM 3200-CALC-TIERED-RATE
           EXPECT ACCT-ANNUAL-RATE TO BE 0.0375

Notice the pattern: at each tier boundary we test the value just below it (the top of the lower tier) and the value exactly at it (the bottom of the higher tier). This is where off-by-one errors and incorrect comparison operators (< vs. <=, > vs. >=) reveal themselves.
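The comparison-operator point is easy to see in executable form. Here is the tier logic sketched in Python — a stand-in for 3200-CALC-TIERED-RATE, whose actual source we have not shown — using Decimal so the exact boundary values compare cleanly:

```python
from decimal import Decimal

def tiered_rate(balance):
    """Money market tiered rate. Each tier is bounded with a strict <
    against the next tier's lower bound; writing <= here would misprice
    every balance that lands exactly on a boundary."""
    b = Decimal(balance)
    if b < Decimal("10000.00"):
        return Decimal("0.0150")
    if b < Decimal("50000.00"):
        return Decimal("0.0225")
    if b < Decimal("100000.00"):
        return Decimal("0.0300")
    return Decimal("0.0375")
```

Swap any `<` for `<=` and the corresponding "top of tier" test case fails immediately — which is precisely what the boundary tests are designed to catch.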

⚠️ Caution: When testing PIC maximum values, be aware of field overflow. If your balance field is PIC 9(9)V99, the maximum value is 999,999,999.99. A balance of 1,000,000,000.00 would silently truncate to 000,000,000.00 — a devastating bug in a financial system. Always test at and near the PIC maximum.
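The truncation behavior is easy to demonstrate off the mainframe. A Python sketch simulating a MOVE of a positive amount into PIC 9(9)V99 (the helper name is hypothetical):

```python
from decimal import Decimal

def move_to_pic_9_9_v99(value):
    """Simulate MOVE into PIC 9(9)V99: keep two decimal digits and at
    most nine integer digits; high-order digits are silently dropped."""
    cents = int(Decimal(value) * 100)   # fixed-point, in cents
    kept = cents % 10**11               # 9 integer + 2 decimal digits survive
    return Decimal(kept) / 100

# 1,000,000,000.00 loses its leading 1 and becomes 0.00
move_to_pic_9_9_v99("1000000000.00")
```

This mirrors the behavior described in the caution: the field maximum passes through untouched, while one cent more than the maximum wraps to a tiny value and one dollar unit more collapses to zero, with no runtime error.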

BVA for Date Fields

Date boundaries are a particularly rich source of COBOL defects:

       TESTSUITE "Date Boundary Tests"

       TESTCASE "January 1 — year start"
           MOVE "2025-01-01" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "Y"

       TESTCASE "December 31 — year end"
           MOVE "2025-12-31" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "Y"

       TESTCASE "February 28 non-leap year"
           MOVE "2025-02-28" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "Y"

       TESTCASE "February 29 leap year"
           MOVE "2024-02-29" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "Y"

       TESTCASE "February 29 non-leap year — invalid"
           MOVE "2025-02-29" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "N"

       TESTCASE "Month 00 — invalid"
           MOVE "2025-00-15" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "N"

       TESTCASE "Month 13 — invalid"
           MOVE "2025-13-01" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "N"

       TESTCASE "Day 00 — invalid"
           MOVE "2025-06-00" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "N"

       TESTCASE "Day 32 — invalid"
           MOVE "2025-01-32" TO CLM-SERVICE-DATE
           PERFORM 2100-VALIDATE-DATE
           EXPECT CLM-DATE-VALID TO BE "N"
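The expected results above follow ordinary Gregorian calendar rules, which can be cross-checked with Python's datetime module — a reference oracle for the test data, not a claim about how 2100-VALIDATE-DATE is implemented:

```python
from datetime import date

def validate_service_date(yyyy_mm_dd):
    """Return 'Y' if the string is a real YYYY-MM-DD calendar date,
    else 'N'. date() rejects month 00/13, day 00/32, and Feb 29 in
    non-leap years, matching the expectations in the test suite."""
    try:
        y, m, d = (int(part) for part in yyyy_mm_dd.split("-"))
        date(y, m, d)
        return "Y"
    except ValueError:
        return "N"
```

Running every CLM-SERVICE-DATE value from the suite through this oracle is a quick way to confirm the EXPECT values themselves are right before blaming the COBOL.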

34.17 COBOL-Check Advanced Assertion Patterns

COBOL-Check provides a range of assertion operators beyond simple equality. Mastering these patterns makes your tests more expressive and your intent clearer.

Comparison Assertions

      * Exact equality
       EXPECT WS-BALANCE TO BE 1000.00

      * Not equal
       EXPECT WS-STATUS NOT TO BE "X"

      * Greater / less than
       EXPECT WS-INTEREST-AMT TO BE GREATER THAN 0
       EXPECT WS-ERROR-COUNT TO BE LESS THAN 5

      * Class check and numeric zero
       EXPECT WS-COPAY TO BE NUMERIC
       EXPECT WS-COPAY TO BE ZERO

String Assertions

      * Alphabetic check
       EXPECT WS-NAME TO BE ALPHABETIC

      * Spaces check
       EXPECT WS-ERROR-MSG NOT TO BE SPACES

      * Starts-with pattern (use reference modification)
       EXPECT WS-ACCOUNT-KEY(1:3) TO BE "CHK"
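For readers mapping this to other languages: COBOL reference modification is one-based and takes a length, unlike zero-based slicing. A small Python sketch of the correspondence (`refmod` is a hypothetical helper, not a COBOL-Check feature):

```python
def refmod(s, start, length):
    """COBOL-style reference modification WS-FIELD(start:length):
    1-based starting position plus a count of characters."""
    return s[start - 1 : start - 1 + length]

# WS-ACCOUNT-KEY(1:3) on "CHK0000123" yields the "CHK" prefix
refmod("CHK0000123", 1, 3)
```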

Combining Assertions for Complex Validations

When testing a paragraph that produces multiple outputs, chain assertions to verify the complete state:

       TESTSUITE "Complete Adjudication Validation"

       TESTCASE "Standard in-network claim fully adjudicated"
           PERFORM 9000-SETUP-BASE-CLAIM
           SET MEMBER-ACTIVE       TO TRUE
           SET PROVIDER-IN-NETWORK TO TRUE
           MOVE 200.00  TO CLM-BILLED-AMOUNT
           MOVE 175.00  TO CLM-ALLOWED-AMOUNT
           MOVE 30.00   TO CLM-COPAY-SCHEDULE
           MOVE 0.80    TO CLM-COINSURANCE-RATE
           PERFORM 2000-FULL-ADJUDICATION

      *    Verify eligibility passed
           EXPECT CLM-ELIG-STATUS TO BE "PASS"

      *    Verify benefit calculation
           EXPECT CLM-COPAY-AMOUNT TO BE 30.00
           EXPECT CLM-COINSURANCE-AMT TO BE 29.00
      *    member share: (175.00 - 30.00) * (1 - 0.80) = 29.00

      *    Verify payment amount
           EXPECT CLM-PAY-AMOUNT TO BE 116.00
      *    175.00 - 30.00 - 29.00 = 116.00

      *    Verify member responsibility
           EXPECT CLM-MEMBER-RESP TO BE 59.00
      *    30.00 + 29.00 = 59.00

      *    Verify no error flags
           EXPECT CLM-ERROR-FLAG TO BE SPACES
           EXPECT CLM-PEND-CODE TO BE SPACES
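The commented arithmetic can be double-checked mechanically. A Python sketch of just the arithmetic — not of 2000-FULL-ADJUDICATION itself — noting that with a plan coinsurance rate of 0.80, the member's share is 1 - 0.80 = 0.20 of the post-copay allowed amount:

```python
from decimal import Decimal

def adjudicate(allowed, copay, plan_rate):
    """Derive the expected assertion values for the chained EXPECTs.
    plan_rate is the plan's coinsurance share; the member pays the rest."""
    allowed, copay, plan_rate = (Decimal(v) for v in (allowed, copay, plan_rate))
    coinsurance = (allowed - copay) * (1 - plan_rate)  # member coinsurance
    payment = allowed - copay - coinsurance            # plan pays this
    member_resp = copay + coinsurance                  # member's total
    return coinsurance, payment, member_resp

adjudicate("175.00", "30.00", "0.80")
```

Deriving expected values with a second, independent implementation like this is a cheap guard against encoding the same arithmetic mistake in both the program and its tests.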

Try It Yourself: Take the BAL-CALC test suite from Section 34.10 and add boundary value tests for each account type. For checking accounts, test at balance = threshold - 0.01, balance = threshold, and balance = threshold + 0.01. For CD accounts, test at term = 0 months, term = 1 month, and term = 120 months (maximum). Run your expanded test suite and note any failures — these likely indicate real boundary bugs in the code.

34.18 Regression Test Automation with JCL Procedures

In a production mainframe environment, regression testing must be automated to the point where any developer can trigger the full suite with a single JCL submission. The key is building reusable JCL procedures (PROCs) that encapsulate the compile-test cycle.

The Unit Test PROC

//*==============================================================
//* PROC: UNITTEST - Compile and run COBOL-Check unit tests
//*   Parameters:
//*     PROG  - Program name (e.g., BALCALC)
//*     SRCLIB - Source library (default: PROD.SOURCE)
//*     TSTLIB - Test source library (default: TEST.SOURCE)
//*     LODLIB - Load library (default: TEST.LOAD)
//*==============================================================
//UNITTEST PROC PROG=,
//         SRCLIB='PROD.SOURCE',
//         TSTLIB='TEST.SOURCE',
//         LODLIB='TEST.LOAD'
//*
//* Step 1: Merge test code into program source
//MERGE    EXEC PGM=BPXBATCH
//STDPARM  DD *,SYMBOLS=JCLONLY
SH java -jar /usr/lpp/cobol-check/cobol-check.jar
   -p &PROG -c /etc/cobol-check/config.properties
/*
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
//*
//* Step 2: Compile merged source
//COMPILE  EXEC IGYWCL,
//         PARM.COBOL='RENT,APOST,MAP,XREF,SSRANGE'
//COBOL.SYSIN  DD DSN=&TSTLIB(&PROG),DISP=SHR
//COBOL.SYSLIB DD DSN=&SRCLIB,DISP=SHR
//             DD DSN=PROD.COPYLIB,DISP=SHR
//LKED.SYSLMOD DD DSN=&LODLIB(&PROG),DISP=SHR
//*
//* Step 3: Execute tests
//EXECUTE  EXEC PGM=&PROG
//STEPLIB  DD DSN=&LODLIB,DISP=SHR
//SYSOUT   DD SYSOUT=*
//TESTOUT  DD DSN=TEST.RESULTS(&PROG),DISP=SHR
//         PEND

The Master Regression JCL

//REGRESS  JOB (ACCT),'NIGHTLY REGRESSION',CLASS=A,
//         MSGCLASS=X,MSGLEVEL=(1,1),NOTIFY=&SYSUID
//*
//* ============================================================
//* Full regression suite - submit nightly or before promotion
//* ============================================================
//*
//* Phase 1: Refresh all test data
//REFRESH  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DSN=TEST.JCL(REFRESH),DISP=SHR
//*
//* Phase 2: Unit tests for all programs
//UT01     EXEC UNITTEST,PROG=BALCALC
//UT02     EXEC UNITTEST,PROG=ACCTVAL
//UT03     EXEC UNITTEST,PROG=TXNPROC
//UT04     EXEC UNITTEST,PROG=RPTFMT
//UT05     EXEC UNITTEST,PROG=CLMADJUD
//UT06     EXEC UNITTEST,PROG=CLMSTAT
//*
//* Phase 3: Integration tests
//INTEG01  EXEC PGM=BATCHRUN
//STEPLIB  DD DSN=TEST.LOAD,DISP=SHR
//ACCTMSTR DD DSN=TEST.ACCT.MASTER,DISP=SHR
//TXNFILE  DD DSN=TEST.TXN.INPUT,DISP=SHR
//OUTPUT   DD DSN=TEST.INTEG.ACTUAL,DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(5,2)),
//            DCB=(RECFM=FB,LRECL=200,BLKSIZE=32000)
//*
//* Phase 4: Compare integration output to baseline
//COMPARE  EXEC PGM=ISRSUPC,PARM='DELTAL,LINECMP'
//NEWDD    DD DSN=TEST.INTEG.ACTUAL,DISP=SHR
//OLDDD    DD DSN=TEST.INTEG.BASELINE,DISP=SHR
//OUTDD    DD SYSOUT=*
//*
//* Phase 5: Aggregate and report
//REPORT   EXEC PGM=RPTRESLT
//RESULTS  DD DSN=TEST.RESULTS,DISP=SHR
//SUMMARY  DD SYSOUT=*,DCB=(RECFM=FBA,LRECL=133)
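RPTRESLT's record layout is not shown in this chapter. As an illustration of the Phase 5 aggregation idea only, here is a Python sketch assuming a hypothetical one-line-per-testcase format ending in PASS or FAIL:

```python
def summarize(result_lines):
    """Tally PASS/FAIL markers across all suites; returns
    (passed, failed, all_green) for the promotion gate."""
    passed = sum(1 for line in result_lines if line.rstrip().endswith("PASS"))
    failed = sum(1 for line in result_lines if line.rstrip().endswith("FAIL"))
    return passed, failed, failed == 0

report = summarize([
    "BALCALC  tier-boundary-01   PASS",
    "BALCALC  tier-boundary-02   PASS",
    "CLMADJUD member-inactive    FAIL",
])
```

Whatever the real record format, the essential output is the same: a single green/red verdict the promotion process can enforce.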

Scheduling Regression Runs

Most mainframe shops schedule regression tests to run automatically:

  • Nightly: Full regression suite runs after the production batch cycle completes. Results are reviewed the next morning.
  • On-demand: Developers submit the regression JCL before promoting changes to the test or production environment.
  • Weekly: Extended regression with larger test data volumes runs over the weekend.

The key principle is that regression tests should never be optional. Derek Washington instituted a rule at GlobalBank: "No code moves to production without a green regression run. No exceptions."

📊 By the Numbers: After implementing automated regression testing, GlobalBank's production defect rate dropped from an average of 3.2 defects per release to 0.4 defects per release — an 87% reduction. The regression suite caught an average of 2.8 defects per run that would otherwise have reached production. At an estimated cost of $15,000 per production defect (including diagnosis, emergency fix, testing, and deployment), the test suite saved approximately $42,000 per release cycle.

34.19 Summary

Unit testing COBOL is not only possible — it is increasingly essential. As the developers who originally wrote these systems retire, the safety net provided by automated tests becomes the primary mechanism for ensuring that modifications don't break working code.

The key concepts from this chapter:

  • The testing gap in COBOL exists for historical and cultural reasons, not technical limitations.
  • COBOL-Check provides a modern framework for writing and running unit tests against COBOL programs.
  • Test data generation using equivalence partitioning and boundary value analysis ensures thorough coverage.
  • Stubbing and mocking isolate the code under test from file I/O and external dependencies.
  • TDD produces cleaner, more testable COBOL code when applied to new development.
  • Regression suites prevent the reintroduction of defects during maintenance.
  • Code coverage identifies untested paths and guides test creation.

Maria Chen summarized it best: "Writing tests for COBOL isn't a luxury — it's an insurance policy. And given what we're insuring, the premiums are remarkably cheap."

In the next chapter, we will explore code review and static analysis — complementary practices that catch the defects that even good tests can miss.