
Chapter 11: Sequential File Processing

"Ninety percent of what a mainframe does is read a file, do something with each record, and write another file. Master sequential I/O, and you've mastered the heart of batch COBOL." — Maria Chen, Senior Developer, GlobalBank

If COBOL is the language of business data processing, then sequential files are its native habitat. Every day, GlobalBank's TXN-DAILY file accumulates 2.3 million transaction records. Each night, batch programs read that file from beginning to end, process each record, and write output files — updated masters, reports, audit trails, exception files. This read-process-write cycle, repeated billions of times on mainframes worldwide, is the fundamental pattern of batch computing.

In your introductory course, you probably wrote a simple READ loop that processed records until AT END. This chapter takes you far deeper. We will cover the full lifecycle of sequential file I/O: how SELECT and ASSIGN connect logical files to physical datasets, how FD entries describe record structures, how FILE STATUS codes defend against every possible failure, how variable-length records work, and how to process files containing multiple record types. By the end, you will be able to write robust sequential file programs that handle real-world complexity — the kind of programs that run in production without operator intervention.

Every technique in this chapter builds on Chapter 10's defensive programming principles. You will check FILE STATUS after every operation, handle edge cases like empty files, and produce audit information for operational support.

11.1 Sequential File Organization — Concepts and Use Cases

A sequential file is the simplest file organization: records are stored one after another, in the order they were written, and can only be read in that same order. There is no index, no key, no random access. You start at the beginning and read through to the end.

Why Sequential Files Still Matter

In an era of databases, web services, and cloud storage, you might wonder why anyone still uses sequential files. The answer is performance and simplicity:

Throughput. A sequential READ is the fastest I/O operation on a mainframe. There is no index lookup, no hashing, no tree traversal — just the next record on the disk. For batch processing where you need every record anyway, sequential files are unbeatable.

Simplicity. A sequential file has no internal structure to manage, no fragmentation to worry about, no reorganization to schedule. It is a flat stream of records.

Universality. Every system can produce and consume sequential files. They serve as the universal interchange format between programs, systems, and even organizations.

Auditability. Sequential files provide a natural audit trail. The TXN-DAILY file at GlobalBank is a complete, chronological record of every transaction that occurred during the business day.

Common Use Cases

| Use Case | Example |
|----------|---------|
| Transaction log | GlobalBank's TXN-DAILY — all daily transactions |
| Batch input | MedClaim's claim batch files from providers |
| Report output | Daily balance reports, exception reports |
| Data extract | Extract files for downstream systems or regulators |
| Backup/archive | Periodic snapshots of master file data |
| Sort input/output | COBOL SORT operates on sequential files |

📊 Scale Context

A medium-sized mainframe shop might process 50-200 sequential files per night in its batch window. A large enterprise like a major bank or insurance company might process thousands. GlobalBank's nightly batch window processes 47 sequential files across 23 job steps.

11.2 SELECT...ASSIGN — File Control Entries

The SELECT statement in the FILE-CONTROL paragraph establishes the connection between a logical file name used in your program and a physical dataset on the system.

Basic Syntax

       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT file-name
               ASSIGN TO assignment-name
               [ORGANIZATION IS SEQUENTIAL]
               [ACCESS MODE IS SEQUENTIAL]
               [FILE STATUS IS status-variable].

Breaking Down Each Clause

SELECT file-name — Declares a logical file name that you will use throughout the program. This name appears in FD entries, OPEN, READ, WRITE, and CLOSE statements.

           SELECT TXN-INPUT-FILE

ASSIGN TO assignment-name — Links the logical file to a physical entity. On z/OS, this is a DD name from the JCL. In GnuCOBOL, it is typically a filename or environment variable.

      * z/OS — refers to DD name TXNIN in the JCL
               ASSIGN TO TXNIN

      * GnuCOBOL — refers to a file path
               ASSIGN TO "transactions.dat"

      * GnuCOBOL — using environment variable
               ASSIGN TO WS-TXN-FILENAME

ORGANIZATION IS SEQUENTIAL — Specifies sequential file organization. This is the default, so it is often omitted:

               ORGANIZATION IS SEQUENTIAL

ACCESS MODE IS SEQUENTIAL — Specifies sequential access. For sequential files, this is the only option and is usually omitted.

FILE STATUS IS status-variable — Specifies a two-character field that receives the result of every I/O operation on this file. As Chapter 10 established, this is mandatory for defensive programming:

               FILE STATUS IS WS-TXN-STATUS

Complete Example

       FILE-CONTROL.
           SELECT TXN-INPUT-FILE
               ASSIGN TO TXNIN
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-TXN-STATUS.

           SELECT VALID-OUTPUT-FILE
               ASSIGN TO TXNOUT
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-OUT-STATUS.

           SELECT REJECT-FILE
               ASSIGN TO REJFILE
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-REJ-STATUS.

           SELECT REPORT-FILE
               ASSIGN TO RPTOUT
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-RPT-STATUS.

Platform Differences in ASSIGN

The ASSIGN clause is one of the areas where COBOL implementations diverge most:

📊 ASSIGN Clause Across Platforms

| Platform | ASSIGN TO | Physical Mapping |
|----------|-----------|------------------|
| z/OS Enterprise COBOL | DD name (e.g., TXNIN) | JCL DD statement defines dataset |
| GnuCOBOL | Literal filename or data name | OS file path |
| Micro Focus | Literal or data name | OS file path or environment variable |
| ACUCOBOL | Literal or data name | OS file path |

On z/OS, the JCL connects the DD name to a physical dataset:

//TXNIN    DD  DSN=GLOBALBANK.TXN.DAILY,DISP=SHR
//TXNOUT   DD  DSN=GLOBALBANK.TXN.VALID,DISP=(NEW,CATLG),
//             SPACE=(CYL,(10,5),RLSE),
//             DCB=(RECFM=FB,LRECL=150,BLKSIZE=27000)

In GnuCOBOL, you can use a data name to make the filename configurable:

       WORKING-STORAGE SECTION.
       01  WS-TXN-FILENAME          PIC X(256).

       ...

       FILE-CONTROL.
           SELECT TXN-INPUT-FILE
               ASSIGN TO WS-TXN-FILENAME
               FILE STATUS IS WS-TXN-STATUS.

       ...

       PROCEDURE DIVISION.
           ACCEPT WS-TXN-FILENAME FROM ENVIRONMENT "TXN_FILE"

OPTIONAL Files

The OPTIONAL keyword allows a file to not exist when the program runs:

           SELECT OPTIONAL OVERRIDE-FILE
               ASSIGN TO OVRRIDE
               FILE STATUS IS WS-OVR-STATUS.

When you OPEN an OPTIONAL file that does not exist:

- For INPUT: FILE STATUS returns '05', and the first READ returns '10' (AT END)
- For OUTPUT: The file is created
- For I-O: FILE STATUS returns '05', and the file is treated as empty

This is useful for configuration or override files that may or may not be present in a given job run.
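Putting the OPTIONAL behavior together with defensive status checking, the OPEN might be handled like this. This is a sketch: WS-OVR-STATUS comes from the SELECT above, while the paragraph name 1050-OPEN-OVERRIDE-FILE is illustrative and 9900-ABEND-PROGRAM is assumed from this chapter's other examples.

       1050-OPEN-OVERRIDE-FILE.
           OPEN INPUT OVERRIDE-FILE
           EVALUATE WS-OVR-STATUS
               WHEN '00'
                   CONTINUE
               WHEN '05'
      *            Optional file absent; the first READ returns '10'
                   DISPLAY 'INFO: No override file this run'
               WHEN OTHER
                   DISPLAY 'FATAL: OPEN OVERRIDE FAILED, STATUS='
                           WS-OVR-STATUS
                   PERFORM 9900-ABEND-PROGRAM
           END-EVALUATE.

Status '05' is deliberately treated as informational, not as an error: an absent override file is a normal condition for this kind of file.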

11.3 FD Entries — Record Descriptions

The FD (File Description) entry in the FILE SECTION describes the physical characteristics of the file and defines the record layout.

Basic Syntax

       FILE SECTION.
       FD  file-name
           [RECORDING MODE IS mode]
           [RECORD CONTAINS size CHARACTERS]
           [BLOCK CONTAINS size {RECORDS|CHARACTERS}]
           [LABEL RECORDS ARE {STANDARD|OMITTED}]
           [DATA RECORD IS record-name].
       01  record-name.
           05  ...

Key Clauses

RECORDING MODE — Specifies the record format. This is an IBM extension, but it is widely used:

| Mode | Meaning | Description |
|------|---------|-------------|
| F | Fixed | All records same length |
| V | Variable | Records can vary in length |
| U | Undefined | No record structure — block is one unit |
| S | Spanned | Variable records that can span blocks |

       FD  TXN-DAILY-FILE
           RECORDING MODE IS F
           RECORD CONTAINS 150 CHARACTERS.

RECORD CONTAINS — Specifies the record length. For fixed-length records, a single value. For variable-length records, a range:

      * Fixed-length records
           RECORD CONTAINS 150 CHARACTERS

      * Variable-length records
           RECORD CONTAINS 50 TO 500 CHARACTERS

BLOCK CONTAINS — Specifies the blocking factor. Blocking combines multiple logical records into a single physical block, dramatically improving I/O performance:

      * 20 records per block
           BLOCK CONTAINS 20 RECORDS

      * Block size in characters
           BLOCK CONTAINS 27000 CHARACTERS

      * Let the system determine blocking
           BLOCK CONTAINS 0 RECORDS

💡 Key Insight — Blocking and Performance

Without blocking, each READ or WRITE causes a physical I/O operation to disk. With blocking (say, 20 records per block), the system reads 20 records in one physical I/O and serves subsequent READs from the memory buffer. On z/OS, BLOCK CONTAINS 0 RECORDS tells the system to use optimal blocking based on the device type — this is almost always the best choice for new programs. On GnuCOBOL, blocking is handled at the OS level and the BLOCK CONTAINS clause is typically ignored.
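The payoff is easy to quantify. Using the TXN-DAILY numbers from this chapter (2.3 million records, LRECL 150, BLKSIZE 27000 from the JCL shown earlier), the arithmetic works out roughly as follows:

           27000 / 150       = 180 records per block
           2,300,000 / 180   = ~12,778 physical reads
                               (vs. 2,300,000 reads unblocked)

Two orders of magnitude fewer physical I/Os, for one clause in the FD.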

LABEL RECORDS — Historically significant but largely obsolete. Standard labels are the default on all modern systems:

           LABEL RECORDS ARE STANDARD

A Complete FD Example

       FILE SECTION.
       FD  TXN-DAILY-FILE
           RECORDING MODE IS F
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 150 CHARACTERS.
       01  TXN-RECORD.
           05  TXN-ID                PIC X(12).
           05  TXN-TYPE              PIC X(02).
               88  TXN-DEPOSIT       VALUE 'DP'.
               88  TXN-WITHDRAW      VALUE 'WD'.
               88  TXN-TRANSFER      VALUE 'XF'.
               88  TXN-PAYMENT       VALUE 'PM'.
           05  TXN-DATE              PIC 9(08).
           05  TXN-TIME              PIC 9(06).
           05  TXN-ACCT-FROM         PIC X(10).
           05  TXN-ACCT-TO           PIC X(10).
           05  TXN-AMOUNT            PIC S9(11)V99 COMP-3.
           05  TXN-DESCRIPTION       PIC X(30).
           05  TXN-TELLER-ID         PIC X(08).
           05  TXN-BRANCH-CODE       PIC X(04).
           05  TXN-STATUS            PIC X(02).
               88  TXN-POSTED        VALUE 'PS'.
               88  TXN-PENDING       VALUE 'PN'.
               88  TXN-REVERSED      VALUE 'RV'.
           05  TXN-AUTH-CODE          PIC X(10).
           05  FILLER                PIC X(41).

Using Copybooks for Record Layouts

In production, record layouts are almost always in copybooks (Chapter 9):

       FD  TXN-DAILY-FILE
           RECORDING MODE IS F
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 150 CHARACTERS.
       COPY TXN-REC.

11.4 The I/O Lifecycle: OPEN, READ, WRITE, CLOSE

Every sequential file goes through a fixed lifecycle: OPEN, then repeated READ or WRITE operations, then CLOSE. The order matters, and violations produce FILE STATUS errors (category 4x — logic errors).

OPEN

OPEN prepares a file for processing and establishes the mode of access:

      * Open for reading
           OPEN INPUT TXN-INPUT-FILE

      * Open for writing (creates new file / replaces existing)
           OPEN OUTPUT REPORT-FILE

      * Open for appending (adds to end of existing file)
           OPEN EXTEND AUDIT-LOG-FILE

      * Open for both reading and writing (sequential rewrite)
           OPEN I-O MASTER-FILE

You can OPEN multiple files in a single statement:

           OPEN INPUT  TXN-INPUT-FILE
                        ACCT-EXTRACT-FILE
                OUTPUT VALID-OUTPUT-FILE
                        REJECT-FILE
                        REPORT-FILE

⚠️ Critical — Always Check FILE STATUS After OPEN

A failed OPEN is the most common production issue for sequential files. Common causes:

- FILE STATUS '35': Dataset not found (DD name misspelled in JCL, or dataset deleted)
- FILE STATUS '37': Open mode not valid for the file's organization or device type
- FILE STATUS '38': File previously closed with LOCK
- FILE STATUS '39': File attributes in the FD (RECFM, LRECL, organization) conflict with the actual file

The OPEN Check Pattern

       1000-INITIALIZE.
           OPEN INPUT TXN-INPUT-FILE
           IF NOT TXN-INPUT-SUCCESS
               DISPLAY 'FATAL: Cannot open TXN-INPUT-FILE'
               DISPLAY '       Status: ' WS-TXN-STATUS
               MOVE 16 TO RETURN-CODE
               STOP RUN
           END-IF

           OPEN OUTPUT VALID-OUTPUT-FILE
           IF NOT VALID-OUTPUT-SUCCESS
               DISPLAY 'FATAL: Cannot open VALID-OUTPUT-FILE'
               DISPLAY '       Status: ' WS-OUT-STATUS
               CLOSE TXN-INPUT-FILE
               MOVE 16 TO RETURN-CODE
               STOP RUN
           END-IF

           OPEN OUTPUT REPORT-FILE
           IF NOT RPT-SUCCESS
               DISPLAY 'FATAL: Cannot open REPORT-FILE'
               DISPLAY '       Status: ' WS-RPT-STATUS
               CLOSE TXN-INPUT-FILE
               CLOSE VALID-OUTPUT-FILE
               MOVE 16 TO RETURN-CODE
               STOP RUN
           END-IF

      * Priming read
           PERFORM 1100-READ-FIRST-RECORD.

       1100-READ-FIRST-RECORD.
           READ TXN-INPUT-FILE
           EVALUATE WS-TXN-STATUS
               WHEN '00'
                   ADD 1 TO WS-READ-COUNT
               WHEN '10'
                   SET WS-END-OF-FILE TO TRUE
                   DISPLAY 'WARNING: Input file is empty'
                   MOVE 4 TO RETURN-CODE
               WHEN OTHER
                   DISPLAY 'FATAL: Error on first READ'
                   DISPLAY '       Status: ' WS-TXN-STATUS
                   PERFORM 9900-ABEND-PROGRAM
           END-EVALUATE.
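The condition names used in the pattern above (TXN-INPUT-SUCCESS, VALID-OUTPUT-SUCCESS, RPT-SUCCESS) presume status fields with level-88 entries defined in WORKING-STORAGE. A sketch consistent with Chapter 10's conventions — the exact definitions are an assumption:

       WORKING-STORAGE SECTION.
       01  WS-TXN-STATUS            PIC X(02).
           88  TXN-INPUT-SUCCESS    VALUE '00'.
           88  TXN-INPUT-EOF        VALUE '10'.
       01  WS-OUT-STATUS            PIC X(02).
           88  VALID-OUTPUT-SUCCESS VALUE '00'.
       01  WS-RPT-STATUS            PIC X(02).
           88  RPT-SUCCESS          VALUE '00'.

Defining a condition name per status field keeps the OPEN checks readable: IF NOT TXN-INPUT-SUCCESS says exactly what it tests.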

READ

The READ statement retrieves the next record from a sequential file:

      * Basic READ
           READ TXN-INPUT-FILE

      * READ with AT END
           READ TXN-INPUT-FILE
               AT END
                   SET WS-END-OF-FILE TO TRUE
               NOT AT END
                   ADD 1 TO WS-READ-COUNT
           END-READ

      * READ INTO (copies record to working storage)
           READ TXN-INPUT-FILE INTO WS-TXN-WORK-RECORD
               AT END
                   SET WS-END-OF-FILE TO TRUE
               NOT AT END
                   ADD 1 TO WS-READ-COUNT
           END-READ

READ vs. READ INTO

READ file-name makes the record available in the FD's record area (the 01-level under the FD). READ file-name INTO ws-record additionally copies the record to a working-storage area. The READ INTO form is preferred for most production programs because:

  1. The FD record area is technically undefined between READs (the compiler may reuse the buffer).
  2. Working-storage fields are stable — the record remains available until you overwrite it.
  3. It cleanly separates the I/O buffer from the processing area.

       WORKING-STORAGE SECTION.
       01  WS-TXN-WORK-RECORD.
           COPY TXN-REC.

       ...

           READ TXN-INPUT-FILE INTO WS-TXN-WORK-RECORD

💡 Pro Tip — Priming Read Pattern

The standard sequential processing pattern uses a priming read — an initial READ before the processing loop, with additional READs at the bottom of the loop:

           PERFORM 1100-READ-FIRST-RECORD
           PERFORM 2000-PROCESS UNTIL WS-END-OF-FILE
           ...

       2000-PROCESS.
           PERFORM 2100-PROCESS-RECORD
           READ TXN-INPUT-FILE
               AT END SET WS-END-OF-FILE TO TRUE
               NOT AT END ADD 1 TO WS-READ-COUNT
           END-READ.

This ensures the processing logic always has a record available and that the AT END condition is detected before attempting to process a non-existent record.

WRITE

The WRITE statement adds a record to a sequential output file:

      * Basic WRITE (from FD record area)
           WRITE OUTPUT-RECORD

      * WRITE FROM (copies working-storage to FD, then writes)
           WRITE OUTPUT-RECORD FROM WS-PROCESSED-RECORD

      * WRITE with ADVANCING (for reports)
           WRITE REPORT-LINE FROM WS-DETAIL-LINE
               AFTER ADVANCING 1 LINE

           WRITE REPORT-LINE FROM WS-HEADER-LINE
               AFTER ADVANCING PAGE

⚠️ Common Mistake — WRITE Uses Record Name, Not File Name

Unlike READ (which uses the file name), WRITE uses the record name — the 01-level under the FD. This catches many beginners:

      * CORRECT:
           WRITE OUTPUT-RECORD FROM WS-DATA

      * WRONG — will not compile:
           WRITE OUTPUT-FILE FROM WS-DATA

WRITE FROM vs. MOVE + WRITE

WRITE record FROM ws-data is equivalent to MOVE ws-data TO record followed by WRITE record, but it is cleaner and preferred:

      * Preferred:
           WRITE VALID-TXN-RECORD FROM WS-PROCESSED-TXN

      * Equivalent but verbose:
           MOVE WS-PROCESSED-TXN TO VALID-TXN-RECORD
           WRITE VALID-TXN-RECORD

Checking Status After WRITE

           WRITE OUTPUT-RECORD FROM WS-PROCESSED-RECORD
           EVALUATE WS-OUT-STATUS
               WHEN '00'
                   ADD 1 TO WS-WRITE-COUNT
               WHEN '34'
                   DISPLAY 'OUTPUT FILE FULL'
                   PERFORM 9900-ABEND-PROGRAM
               WHEN OTHER
                   MOVE 'WRITE ERROR ON OUTPUT'
                       TO WS-ERR-MSG
                   PERFORM 9800-LOG-ERROR
                   PERFORM 9900-ABEND-PROGRAM
           END-EVALUATE

CLOSE

CLOSE releases the file and flushes any buffered data:

      * Close individual files
           CLOSE TXN-INPUT-FILE
           CLOSE VALID-OUTPUT-FILE
           CLOSE REPORT-FILE

      * Close multiple files in one statement
           CLOSE TXN-INPUT-FILE
                 VALID-OUTPUT-FILE
                 REPORT-FILE

Always check status after CLOSE — a failed CLOSE on an output file may mean data was lost:

           CLOSE VALID-OUTPUT-FILE
           IF NOT VALID-OUTPUT-SUCCESS
               DISPLAY 'WARNING: Error closing output file'
               DISPLAY '         Status: ' WS-OUT-STATUS
               DISPLAY '         Data may be incomplete'
               IF RETURN-CODE < 8
                   MOVE 8 TO RETURN-CODE
               END-IF
           END-IF

11.5 Line Sequential vs. Record Sequential

COBOL recognizes two flavors of sequential files, and the distinction matters:

Record Sequential (Traditional Mainframe)

Records are stored as fixed-length or variable-length blocks. There are no line delimiters — the record length is implicit (fixed) or stored in a header (variable). This is the standard on z/OS.

           SELECT TXN-FILE
               ASSIGN TO TXNIN
               ORGANIZATION IS SEQUENTIAL.

Line Sequential (PC/Unix)

Records are delimited by line-ending characters (CR/LF on Windows, LF on Unix). Trailing spaces are typically trimmed. This is the default for text files on non-mainframe systems.

      * GnuCOBOL / Micro Focus
           SELECT TXN-FILE
               ASSIGN TO TXNIN
               ORGANIZATION IS LINE SEQUENTIAL.

⚠️ Platform Portability Warning

LINE SEQUENTIAL is not in the COBOL standard — it is an extension supported by GnuCOBOL, Micro Focus, and most PC-based COBOL compilers. Programs that use LINE SEQUENTIAL are not portable to z/OS Enterprise COBOL without modification. If you are writing programs in the Student Lab using GnuCOBOL but targeting z/OS concepts, use ORGANIZATION IS SEQUENTIAL with fixed-length records.

Practical Differences

| Feature | Record Sequential | Line Sequential |
|---------|-------------------|-----------------|
| Record delimiter | None (length-based) | CR/LF or LF |
| Trailing spaces | Preserved | Typically trimmed |
| Fixed-length | Native support | Simulated |
| Variable-length | RDW (Record Descriptor Word) | Newline-delimited |
| Binary data | Supported | Problematic (newlines) |
| Platform | z/OS, all platforms | PC/Unix/GnuCOBOL |

11.6 FILE STATUS — Checking After Every Operation

We covered FILE STATUS in depth in Chapter 10, but it is worth revisiting in the specific context of sequential files. Here are the status codes most relevant to sequential I/O:

Sequential File Status Codes — Quick Reference

| Code | Operation | Meaning |
|------|-----------|---------|
| '00' | Any | Success |
| '04' | READ | Record length shorter than expected |
| '05' | OPEN | OPTIONAL file not present (created for output) |
| '10' | READ | End of file |
| '30' | Any | Permanent I/O error (disk, hardware) |
| '34' | WRITE | Disk space exhausted |
| '35' | OPEN | File not found |
| '37' | OPEN | Open mode not valid for file type |
| '38' | OPEN | File previously closed with LOCK |
| '39' | OPEN | FD attributes don't match file |
| '41' | OPEN | File already open |
| '42' | CLOSE | File already closed |
| '44' | REWRITE | Record length changed on fixed-length file |
| '47' | READ | File not opened for input |
| '48' | WRITE | File not opened for output |

A Robust Status-Checking Utility

Rather than writing EVALUATE blocks after every I/O, some shops use a centralized status-checking paragraph:

       01  WS-IO-CONTEXT.
           05  WS-IO-FILE-NAME       PIC X(30).
           05  WS-IO-OPERATION       PIC X(10).
           05  WS-IO-STATUS          PIC XX.
           05  WS-IO-SEVERITY        PIC X(01).
               88  IO-OK             VALUE 'O'.
               88  IO-EOF            VALUE 'E'.
               88  IO-WARNING        VALUE 'W'.
               88  IO-FATAL          VALUE 'F'.

       ...

       9700-CHECK-FILE-STATUS.
           EVALUATE WS-IO-STATUS
               WHEN '00'
                   SET IO-OK TO TRUE
               WHEN '02'
                   SET IO-OK TO TRUE
               WHEN '10'
                   SET IO-EOF TO TRUE
               WHEN '04' WHEN '05'
                   SET IO-WARNING TO TRUE
                   DISPLAY 'WARNING: ' WS-IO-FILE-NAME
                           ' ' WS-IO-OPERATION
                           ' STATUS=' WS-IO-STATUS
               WHEN OTHER
                   SET IO-FATAL TO TRUE
                   DISPLAY 'FATAL: ' WS-IO-FILE-NAME
                           ' ' WS-IO-OPERATION
                           ' STATUS=' WS-IO-STATUS
           END-EVALUATE.

Usage:

           READ TXN-INPUT-FILE
           MOVE 'TXN-INPUT' TO WS-IO-FILE-NAME
           MOVE 'READ' TO WS-IO-OPERATION
           MOVE WS-TXN-STATUS TO WS-IO-STATUS
           PERFORM 9700-CHECK-FILE-STATUS
           IF IO-FATAL
               PERFORM 9900-ABEND-PROGRAM
           END-IF
           IF IO-EOF
               SET WS-END-OF-FILE TO TRUE
           END-IF

11.7 AT END / NOT AT END Processing

The AT END and NOT AT END phrases provide inline handling for the end-of-file condition:

           READ TXN-INPUT-FILE
               AT END
                   SET WS-END-OF-FILE TO TRUE
               NOT AT END
                   ADD 1 TO WS-READ-COUNT
                   PERFORM 3000-PROCESS-RECORD
           END-READ

AT END vs. FILE STATUS

AT END fires when the FILE STATUS is '10'. It provides a convenient inline syntax, but FILE STATUS gives you more information (it also tells you about I/O errors, which AT END does not). Best practice is to use both:

           READ TXN-INPUT-FILE
               AT END
                   SET WS-END-OF-FILE TO TRUE
               NOT AT END
                   IF NOT TXN-INPUT-SUCCESS
                       MOVE 'READ ERROR' TO WS-ERR-MSG
                       PERFORM 9800-LOG-ERROR
                       PERFORM 9900-ABEND-PROGRAM
                   END-IF
                   ADD 1 TO WS-READ-COUNT
                   PERFORM 3000-PROCESS-RECORD
           END-READ

Or, use FILE STATUS as the primary check and ignore AT END:

           READ TXN-INPUT-FILE
           EVALUATE WS-TXN-STATUS
               WHEN '00'
                   ADD 1 TO WS-READ-COUNT
                   PERFORM 3000-PROCESS-RECORD
               WHEN '10'
                   SET WS-END-OF-FILE TO TRUE
               WHEN OTHER
                   PERFORM 9800-LOG-ERROR
                   PERFORM 9900-ABEND-PROGRAM
           END-EVALUATE

11.8 Variable-Length Records

Not all records are the same size. Sequential files can contain variable-length records, which are essential when record sizes vary significantly (avoiding wasted space in short records).

Defining Variable-Length Records

In the FD, specify a range:

       FD  CLAIM-INPUT-FILE
           RECORDING MODE IS V
           RECORD CONTAINS 100 TO 500 CHARACTERS
           BLOCK CONTAINS 0 RECORDS.

RECORD CONTAINS ... DEPENDING ON

The DEPENDING ON clause ties the record length to a data item, allowing your program to know the actual length of each record:

       FD  CLAIM-INPUT-FILE
           RECORDING MODE IS V
           RECORD IS VARYING IN SIZE
               FROM 100 TO 500 CHARACTERS
               DEPENDING ON WS-CLAIM-REC-LENGTH.
       01  CLAIM-RECORD.
           05  CLM-HEADER            PIC X(100).
           05  CLM-DETAIL-AREA       PIC X(400).

       WORKING-STORAGE SECTION.
       01  WS-CLAIM-REC-LENGTH      PIC 9(04).

When you READ a variable-length record, the system sets WS-CLAIM-REC-LENGTH to the actual length of the record just read. When you WRITE, the system uses the current value of WS-CLAIM-REC-LENGTH to determine how many bytes to write.

How Variable-Length Records Work on z/OS

On z/OS, variable-length records are stored with a Record Descriptor Word (RDW) — a 4-byte header at the beginning of each record that contains the record length. The RDW is transparent to your COBOL program; the system handles it automatically.

Physical layout:
[RDW][Record Data][RDW][Record Data][RDW][Record Data]...
  4     100-500     4     100-500     4     100-500

Processing Variable-Length Records

       2000-PROCESS-CLAIMS.
           READ CLAIM-INPUT-FILE
               INTO WS-CLAIM-WORK
           EVALUATE WS-CLM-STATUS
               WHEN '00'
                   ADD 1 TO WS-READ-COUNT
                   DISPLAY 'Record length: '
                           WS-CLAIM-REC-LENGTH
                   PERFORM 3000-PROCESS-CLAIM
               WHEN '10'
                   SET WS-END-OF-FILE TO TRUE
               WHEN OTHER
                   PERFORM 9800-LOG-ERROR
           END-EVALUATE.
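Once the DEPENDING ON field holds the actual length, reference modification lets you work with just the bytes that were actually read. A sketch, assuming the 100-byte CLM-HEADER layout above; WS-DETAIL-LEN and WS-DETAIL-WORK are illustrative fields:

       3100-EXTRACT-DETAILS.
      *    Bytes beyond WS-CLAIM-REC-LENGTH are residue from the
      *    previous, longer record - never process them
           IF WS-CLAIM-REC-LENGTH > 100
               COMPUTE WS-DETAIL-LEN =
                       WS-CLAIM-REC-LENGTH - 100
               MOVE CLM-DETAIL-AREA (1:WS-DETAIL-LEN)
                   TO WS-DETAIL-WORK
           ELSE
               MOVE SPACES TO WS-DETAIL-WORK
           END-IF.

The ELSE branch matters: a short record leaves the tail of the 400-byte detail area untouched, so clearing the work field prevents stale data from one record leaking into the next.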

Writing Variable-Length Records

Set the length before writing:

       4000-WRITE-OUTPUT-CLAIM.
      * Calculate actual data length
           IF CLM-HAS-DETAILS
               MOVE 500 TO WS-OUT-REC-LENGTH
           ELSE
               MOVE 100 TO WS-OUT-REC-LENGTH
           END-IF

           WRITE OUTPUT-CLAIM-RECORD
               FROM WS-PROCESSED-CLAIM
           IF NOT OUTPUT-SUCCESS
               PERFORM 9800-LOG-ERROR
           END-IF.

⚠️ Caution — DEPENDING ON Variable Must Be Set Correctly

If you set the DEPENDING ON variable to a value larger than the actual data, you will write garbage bytes. If you set it smaller, you will truncate data. Always calculate the correct length before writing.
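In practice, RECORD IS VARYING usually pairs with OCCURS ... DEPENDING ON, so the record length tracks a repeating group and can be computed rather than hard-coded. A sketch with a 40-byte fixed portion and up to 8 course entries; the file and field names are illustrative:

       FD  STUDENT-FILE
           RECORDING MODE IS V
           RECORD IS VARYING IN SIZE
               FROM 40 TO 160 CHARACTERS
               DEPENDING ON WS-STU-REC-LENGTH.
       01  STUDENT-RECORD.
           05  STU-ID                PIC X(08).
           05  STU-NAME              PIC X(30).
           05  STU-COURSE-COUNT      PIC 9(02).
           05  STU-COURSE            PIC X(15)
               OCCURS 0 TO 8 TIMES
               DEPENDING ON STU-COURSE-COUNT.

Before each WRITE, the length follows directly from the count:

           COMPUTE WS-STU-REC-LENGTH =
                   40 + (15 * STU-COURSE-COUNT)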

Try It Yourself — Variable-Length Records

Create a file of student records where the variable portion is a list of enrolled courses (0 to 8 courses, each 15 bytes). Write a program that:

1. Reads the variable-length file
2. Displays each student's name and the number of courses they're enrolled in
3. Writes a fixed-length summary record for each student

11.9 Multiple Record Types in One File

Many real-world sequential files contain more than one type of record. The first byte (or first few bytes) typically indicate the record type.

Defining Multiple Record Types

Under a single FD, you can define multiple 01-level records:

       FD  TXN-DAILY-FILE
           RECORDING MODE IS F
           RECORD CONTAINS 150 CHARACTERS.
       01  TXN-HEADER-RECORD.
      *    Set the type byte with MOVE before WRITE - a VALUE
      *    clause on a data item has no effect in the FILE SECTION
           05  TXN-HDR-TYPE          PIC X(01).
           05  TXN-HDR-DATE          PIC 9(08).
           05  TXN-HDR-BRANCH        PIC X(04).
           05  TXN-HDR-BATCH-NUM     PIC 9(06).
           05  FILLER                PIC X(131).

       01  TXN-DETAIL-RECORD.
           05  TXN-DTL-TYPE          PIC X(01).
               88  TXN-IS-DETAIL     VALUE 'D'.
           05  TXN-DTL-ID            PIC X(12).
           05  TXN-DTL-ACCT          PIC X(10).
           05  TXN-DTL-AMOUNT        PIC S9(11)V99 COMP-3.
           05  TXN-DTL-DESC          PIC X(30).
           05  FILLER                PIC X(90).

       01  TXN-TRAILER-RECORD.
           05  TXN-TRL-TYPE          PIC X(01).
           05  TXN-TRL-RECORD-COUNT  PIC 9(07).
           05  TXN-TRL-TOTAL-AMT     PIC S9(13)V99 COMP-3.
           05  TXN-TRL-HASH-TOTAL    PIC 9(15).
           05  FILLER                PIC X(119).

      * Generic overlay for identifying record type
       01  TXN-GENERIC-RECORD.
           05  TXN-RECORD-TYPE       PIC X(01).
               88  TXN-IS-HEADER     VALUE 'H'.
               88  TXN-IS-DETAIL-REC VALUE 'D'.
               88  TXN-IS-TRAILER    VALUE 'T'.
           05  FILLER                PIC X(149).

All four 01-level records share the same physical buffer. When you READ the file, the data populates all four views simultaneously — they are implicit REDEFINES of each other.

Processing Multiple Record Types

       2000-PROCESS-LOOP.
           READ TXN-DAILY-FILE
               AT END
                   SET WS-END-OF-FILE TO TRUE
               NOT AT END
                   ADD 1 TO WS-READ-COUNT
                   PERFORM 2100-ROUTE-RECORD
           END-READ.

       2100-ROUTE-RECORD.
           EVALUATE TRUE
               WHEN TXN-IS-HEADER
                   PERFORM 3000-PROCESS-HEADER
               WHEN TXN-IS-DETAIL-REC
                   PERFORM 4000-PROCESS-DETAIL
               WHEN TXN-IS-TRAILER
                   PERFORM 5000-PROCESS-TRAILER
               WHEN OTHER
                    MOVE SPACES TO WS-ERR-MSG
                    STRING 'UNKNOWN RECORD TYPE, TYPE='
                           TXN-RECORD-TYPE
                           DELIMITED BY SIZE
                      INTO WS-ERR-MSG
                    END-STRING
                   PERFORM 9800-LOG-ERROR
                   ADD 1 TO WS-REJECT-COUNT
           END-EVALUATE.

Header-Detail-Trailer Validation

A robust program validates the file structure:

       3000-PROCESS-HEADER.
           IF WS-HEADER-FOUND
               MOVE 'DUPLICATE HEADER RECORD' TO WS-ERR-MSG
               PERFORM 9800-LOG-ERROR
           END-IF
           SET WS-HEADER-FOUND TO TRUE
           MOVE TXN-HDR-DATE TO WS-FILE-DATE
           MOVE TXN-HDR-BATCH-NUM TO WS-BATCH-NUM
           DISPLAY 'BATCH: ' WS-BATCH-NUM
                   ' DATE: ' WS-FILE-DATE.

       4000-PROCESS-DETAIL.
           IF NOT WS-HEADER-FOUND
               MOVE 'DETAIL BEFORE HEADER' TO WS-ERR-MSG
               PERFORM 9800-LOG-ERROR
               ADD 1 TO WS-REJECT-COUNT
           ELSE
               ADD 1 TO WS-DETAIL-COUNT
               ADD TXN-DTL-AMOUNT TO WS-RUNNING-TOTAL
      *        TXN-DTL-ACCT is alphanumeric; assuming the IDs are
      *        all digits, NUMVAL converts it for the hash total
               ADD FUNCTION NUMVAL (TXN-DTL-ACCT) TO WS-HASH-TOTAL
               PERFORM 4100-VALIDATE-DETAIL
           END-IF.

       5000-PROCESS-TRAILER.
           IF NOT WS-HEADER-FOUND
               MOVE 'TRAILER WITHOUT HEADER' TO WS-ERR-MSG
               PERFORM 9800-LOG-ERROR
           END-IF
           SET WS-TRAILER-FOUND TO TRUE

      * Validate control totals
           IF WS-DETAIL-COUNT NOT = TXN-TRL-RECORD-COUNT
               DISPLAY 'RECORD COUNT MISMATCH'
               DISPLAY '  EXPECTED: ' TXN-TRL-RECORD-COUNT
               DISPLAY '  ACTUAL:   ' WS-DETAIL-COUNT
               MOVE 8 TO RETURN-CODE
           END-IF

           IF WS-RUNNING-TOTAL NOT = TXN-TRL-TOTAL-AMT
               DISPLAY 'TOTAL AMOUNT MISMATCH'
               DISPLAY '  EXPECTED: ' TXN-TRL-TOTAL-AMT
               DISPLAY '  ACTUAL:   ' WS-RUNNING-TOTAL
               MOVE 8 TO RETURN-CODE
           END-IF.

💡 Key Insight — Control Totals Header-detail-trailer files often include control totals in the trailer: a record count, a total amount, and sometimes a hash total (a sum of a field such as account numbers; the sum itself has no business meaning, but it detects missing, added, or substituted records). Validating these totals is a critical integrity check. If the totals do not match, something went wrong during transmission or processing.
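As an illustration, the hash-total check fits the same mold as the count and amount checks above. The detail-side accumulation (ADD ... TO WS-HASH-TOTAL) already appears in 4000-PROCESS-DETAIL; this sketch assumes a trailer field named TXN-TRL-HASH-TOTAL and that WS-HASH-TOTAL is declared with the same number of digits, so any overflow truncates identically on both sides:

      * Hash-total check in trailer processing.
      * TXN-TRL-HASH-TOTAL is an assumed trailer field name;
      * overflow during accumulation is harmless as long as
      * the sender's accumulator truncates the same way.
           IF WS-HASH-TOTAL NOT = TXN-TRL-HASH-TOTAL
               DISPLAY 'HASH TOTAL MISMATCH'
               DISPLAY '  EXPECTED: ' TXN-TRL-HASH-TOTAL
               DISPLAY '  ACTUAL:   ' WS-HASH-TOTAL
               MOVE 8 TO RETURN-CODE
           END-IF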

11.10 Writing Reports to Sequential Files

Reports are a special case of sequential output where formatting matters. COBOL's WRITE ADVANCING clause controls vertical spacing:

ADVANCING Clause

      * Advance 1 line before writing (single spacing)
           WRITE REPORT-LINE FROM WS-DETAIL-LINE
               AFTER ADVANCING 1 LINE

      * Advance 2 lines (double spacing)
           WRITE REPORT-LINE FROM WS-DETAIL-LINE
               AFTER ADVANCING 2 LINES

      * Start a new page
           WRITE REPORT-LINE FROM WS-PAGE-HEADER
               AFTER ADVANCING PAGE

      * Write, then advance (BEFORE vs AFTER)
           WRITE REPORT-LINE FROM WS-TOTAL-LINE
               BEFORE ADVANCING 2 LINES

A Report Writing Pattern

       WORKING-STORAGE SECTION.
       01  WS-PAGE-CONTROL.
      * Line count starts at 99 to force a page header
      * before the first detail line is written
           05  WS-LINE-COUNT         PIC 9(02) VALUE 99.
           05  WS-PAGE-COUNT         PIC 9(04) VALUE ZERO.
           05  WS-MAX-LINES          PIC 9(02) VALUE 55.

       01  WS-REPORT-HEADER-1.
           05  FILLER                PIC X(40)
               VALUE 'GLOBALBANK DAILY TRANSACTION REPORT     '.
           05  FILLER                PIC X(20) VALUE SPACES.
           05  FILLER                PIC X(06) VALUE 'PAGE: '.
           05  WS-HDR-PAGE           PIC Z,ZZ9.
           05  FILLER                PIC X(61) VALUE SPACES.

       01  WS-REPORT-HEADER-2.
           05  FILLER                PIC X(06) VALUE 'DATE: '.
           05  WS-HDR-DATE           PIC X(10).
           05  FILLER                PIC X(116) VALUE SPACES.

       01  WS-COLUMN-HEADER.
           05  FILLER  PIC X(14) VALUE 'TRANSACTION ID'.
           05  FILLER  PIC X(02) VALUE SPACES.
           05  FILLER  PIC X(12) VALUE 'ACCOUNT     '.
           05  FILLER  PIC X(02) VALUE SPACES.
           05  FILLER  PIC X(06) VALUE 'TYPE  '.
           05  FILLER  PIC X(02) VALUE SPACES.
           05  FILLER  PIC X(16) VALUE '          AMOUNT'.
           05  FILLER  PIC X(02) VALUE SPACES.
           05  FILLER  PIC X(30) VALUE 'DESCRIPTION                   '.
           05  FILLER  PIC X(46) VALUE SPACES.

       01  WS-DETAIL-LINE.
           05  WS-DTL-TXN-ID        PIC X(12).
           05  FILLER                PIC X(04) VALUE SPACES.
           05  WS-DTL-ACCOUNT       PIC X(10).
           05  FILLER                PIC X(04) VALUE SPACES.
           05  WS-DTL-TYPE          PIC X(04).
           05  FILLER                PIC X(04) VALUE SPACES.
           05  WS-DTL-AMOUNT        PIC Z,ZZZ,ZZZ,ZZ9.99-.
           05  FILLER                PIC X(02) VALUE SPACES.
           05  WS-DTL-DESC          PIC X(30).
           05  FILLER                PIC X(45) VALUE SPACES.

       ...

       6000-WRITE-DETAIL-LINE.
           IF WS-LINE-COUNT >= WS-MAX-LINES
               PERFORM 6100-WRITE-PAGE-HEADER
           END-IF

           WRITE REPORT-RECORD FROM WS-DETAIL-LINE
               AFTER ADVANCING 1 LINE
           IF NOT RPT-SUCCESS
               PERFORM 9800-LOG-ERROR
           END-IF
           ADD 1 TO WS-LINE-COUNT.

       6100-WRITE-PAGE-HEADER.
           ADD 1 TO WS-PAGE-COUNT
           MOVE WS-PAGE-COUNT TO WS-HDR-PAGE
           MOVE WS-REPORT-DATE TO WS-HDR-DATE

           WRITE REPORT-RECORD FROM WS-REPORT-HEADER-1
               AFTER ADVANCING PAGE
           WRITE REPORT-RECORD FROM WS-REPORT-HEADER-2
               AFTER ADVANCING 1 LINE
           WRITE REPORT-RECORD FROM WS-COLUMN-HEADER
               AFTER ADVANCING 2 LINES
           WRITE REPORT-RECORD FROM WS-SEPARATOR-LINE
               AFTER ADVANCING 1 LINE

      * Headings occupy physical lines 1 through 5
           MOVE 5 TO WS-LINE-COUNT.

🔗 Cross-Reference We will cover the COBOL Report Writer facility in Chapter 16. Report Writer automates much of the page-break and heading logic shown above, though many shops prefer the manual approach for greater control.

11.11 GlobalBank Case Study: Processing TXN-DAILY

Let us trace through a complete real-world example: GlobalBank's nightly processing of the TXN-DAILY sequential file.

The Business Context

Every day at 6:00 PM, the online transaction system closes for the day and produces TXN-DAILY — a sequential file containing every transaction processed that day. The nightly batch sequence then:

  1. TXN-VALID — Validates each transaction and splits into valid/reject files
  2. TXN-POST — Posts valid transactions to the account master (VSAM)
  3. TXN-REPORT — Produces the daily transaction report
  4. TXN-ARCHIVE — Copies posted transactions to the history file

We will focus on step 1, TXN-VALID, as it demonstrates the richest set of sequential file techniques.

TXN-DAILY File Characteristics

Organization: Sequential
Record format: Fixed-block (FB)
Record length: 150 bytes
Block size: 27,000 bytes (180 records per block)
Average daily volume: 2.3 million records
File structure: Header + Details + Trailer

The Complete TXN-VALID Program (Simplified)

       IDENTIFICATION DIVISION.
       PROGRAM-ID. TXN-VALID.
      *================================================================*
      * TXN-VALID - Daily Transaction Validation
      * Reads TXN-DAILY, validates each transaction,
      * writes valid records to TXN-VALID-OUT,
      * writes invalid records to TXN-REJECT.
      * Produces validation summary report.
      *================================================================*
      * Author: Maria Chen    Date: 2024-01-15
      * Modified: Derek Washington  2024-06-01 (added cross-field)
      *================================================================*

       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT TXN-DAILY-FILE
               ASSIGN TO TXNDAILY
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-DAILY-STATUS.
           SELECT TXN-VALID-FILE
               ASSIGN TO TXNVALID
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-VALID-STATUS.
           SELECT TXN-REJECT-FILE
               ASSIGN TO TXNREJ
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-REJ-STATUS.
           SELECT VALIDATION-REPORT
               ASSIGN TO VALRPT
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-RPT-STATUS.

       DATA DIVISION.
       FILE SECTION.
       FD  TXN-DAILY-FILE
           RECORDING MODE IS F
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 150 CHARACTERS.
       01  TXN-DAILY-RECORD.
           05  TXN-RECORD-TYPE       PIC X(01).
               88  TXN-IS-HEADER     VALUE 'H'.
               88  TXN-IS-DETAIL     VALUE 'D'.
               88  TXN-IS-TRAILER    VALUE 'T'.
           05  TXN-RECORD-DATA       PIC X(149).

       01  TXN-DETAIL-VIEW REDEFINES TXN-DAILY-RECORD.
           05  FILLER                PIC X(01).
           05  TXN-DTL-ID           PIC X(12).
           05  TXN-DTL-TYPE         PIC X(02).
               88  TXN-DTL-DEPOSIT   VALUE 'DP'.
               88  TXN-DTL-WITHDRAW  VALUE 'WD'.
               88  TXN-DTL-TRANSFER  VALUE 'XF'.
           05  TXN-DTL-DATE         PIC 9(08).
           05  TXN-DTL-TIME         PIC 9(06).
           05  TXN-DTL-ACCT-FROM    PIC X(10).
           05  TXN-DTL-ACCT-TO      PIC X(10).
           05  TXN-DTL-AMOUNT       PIC S9(11)V99 COMP-3.
           05  TXN-DTL-DESC         PIC X(30).
           05  TXN-DTL-TELLER       PIC X(08).
           05  TXN-DTL-BRANCH       PIC X(04).
           05  TXN-DTL-AUTH         PIC X(10).
           05  FILLER               PIC X(42).

       01  TXN-HEADER-VIEW REDEFINES TXN-DAILY-RECORD.
           05  FILLER               PIC X(01).
           05  TXN-HDR-DATE         PIC 9(08).
           05  TXN-HDR-BRANCH       PIC X(04).
           05  TXN-HDR-BATCH-ID     PIC X(10).
           05  FILLER               PIC X(127).

       01  TXN-TRAILER-VIEW REDEFINES TXN-DAILY-RECORD.
           05  FILLER               PIC X(01).
           05  TXN-TRL-COUNT        PIC 9(09).
           05  TXN-TRL-TOTAL        PIC S9(13)V99 COMP-3.
           05  TXN-TRL-HASH         PIC 9(15).
           05  FILLER               PIC X(117).

       FD  TXN-VALID-FILE
           RECORDING MODE IS F
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 150 CHARACTERS.
       01  TXN-VALID-RECORD         PIC X(150).

       FD  TXN-REJECT-FILE
           RECORDING MODE IS F
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 230 CHARACTERS.
       01  TXN-REJECT-RECORD.
           05  REJ-ORIGINAL          PIC X(150).
           05  REJ-REASON-CODE       PIC X(04).
           05  REJ-REASON-MSG        PIC X(50).
           05  REJ-TIMESTAMP         PIC X(26).

       FD  VALIDATION-REPORT
           RECORDING MODE IS F
           RECORD CONTAINS 132 CHARACTERS.
       01  RPT-RECORD               PIC X(132).

       WORKING-STORAGE SECTION.
      * --- File Status ---
       01  WS-DAILY-STATUS          PIC XX.
           88  DAILY-SUCCESS        VALUE '00'.
           88  DAILY-EOF            VALUE '10'.
       01  WS-VALID-STATUS          PIC XX.
           88  VALID-SUCCESS        VALUE '00'.
       01  WS-REJ-STATUS            PIC XX.
           88  REJ-SUCCESS          VALUE '00'.
       01  WS-RPT-STATUS            PIC XX.
           88  RPT-SUCCESS          VALUE '00'.

      * --- Program Control ---
       01  WS-PROGRAM-NAME          PIC X(08) VALUE 'TXNVALID'.
       01  WS-FLAGS.
           05  WS-EOF-FLAG          PIC X VALUE 'N'.
               88  END-OF-FILE      VALUE 'Y'.
               88  NOT-EOF          VALUE 'N'.
           05  WS-HEADER-FOUND      PIC X VALUE 'N'.
               88  HEADER-FOUND     VALUE 'Y'.
           05  WS-TRAILER-FOUND     PIC X VALUE 'N'.
               88  TRAILER-FOUND    VALUE 'Y'.

      * --- Counters ---
       01  WS-COUNTERS.
           05  WS-READ-COUNT        PIC 9(09) VALUE ZERO.
           05  WS-DETAIL-COUNT      PIC 9(09) VALUE ZERO.
           05  WS-VALID-COUNT       PIC 9(09) VALUE ZERO.
           05  WS-REJECT-COUNT      PIC 9(09) VALUE ZERO.
           05  WS-RUNNING-TOTAL     PIC S9(15)V99 COMP-3
                                    VALUE ZERO.

      * --- Date/Time ---
       01  WS-CURRENT-DATETIME.
           05  WS-CURR-DATE.
               10  WS-CURR-YEAR    PIC 9(04).
               10  WS-CURR-MONTH   PIC 9(02).
               10  WS-CURR-DAY     PIC 9(02).
           05  WS-CURR-TIME.
               10  WS-CURR-HOUR    PIC 9(02).
               10  WS-CURR-MIN     PIC 9(02).
               10  WS-CURR-SEC     PIC 9(02).
               10  WS-CURR-HUND    PIC 9(02).
           05  WS-GMT-OFFSET       PIC X(05).

       01  WS-ERR-MSG               PIC X(80).

       PROCEDURE DIVISION.
       0000-MAIN.
           PERFORM 1000-INITIALIZE
           PERFORM 2000-PROCESS UNTIL END-OF-FILE
           PERFORM 9000-TERMINATE
           STOP RUN.

       1000-INITIALIZE.
           MOVE FUNCTION CURRENT-DATE
               TO WS-CURRENT-DATETIME

           OPEN INPUT  TXN-DAILY-FILE
           IF NOT DAILY-SUCCESS
               DISPLAY 'FATAL: Cannot open TXN-DAILY'
               DISPLAY '       Status: ' WS-DAILY-STATUS
               MOVE 16 TO RETURN-CODE
               STOP RUN
           END-IF

           OPEN OUTPUT TXN-VALID-FILE
           IF NOT VALID-SUCCESS
               DISPLAY 'FATAL: Cannot open TXN-VALID'
               CLOSE TXN-DAILY-FILE
               MOVE 16 TO RETURN-CODE
               STOP RUN
           END-IF

           OPEN OUTPUT TXN-REJECT-FILE
           IF NOT REJ-SUCCESS
               DISPLAY 'FATAL: Cannot open TXN-REJECT'
               CLOSE TXN-DAILY-FILE
               CLOSE TXN-VALID-FILE
               MOVE 16 TO RETURN-CODE
               STOP RUN
           END-IF

           OPEN OUTPUT VALIDATION-REPORT
           IF NOT RPT-SUCCESS
               DISPLAY 'FATAL: Cannot open REPORT'
               CLOSE TXN-DAILY-FILE
               CLOSE TXN-VALID-FILE
               CLOSE TXN-REJECT-FILE
               MOVE 16 TO RETURN-CODE
               STOP RUN
           END-IF

      * Priming read
           PERFORM 2100-READ-NEXT.

       2000-PROCESS.
           EVALUATE TRUE
               WHEN TXN-IS-HEADER
                   PERFORM 3000-PROCESS-HEADER
               WHEN TXN-IS-DETAIL
                   PERFORM 4000-PROCESS-DETAIL
               WHEN TXN-IS-TRAILER
                   PERFORM 5000-PROCESS-TRAILER
               WHEN OTHER
                    MOVE SPACES TO WS-ERR-MSG
                    STRING 'UNKNOWN TYPE: '
                           DELIMITED BY SIZE
                           TXN-RECORD-TYPE
                           DELIMITED BY SIZE
                      INTO WS-ERR-MSG
                    END-STRING
                   PERFORM 4500-WRITE-REJECT
           END-EVALUATE
           PERFORM 2100-READ-NEXT.

       2100-READ-NEXT.
           READ TXN-DAILY-FILE
           EVALUATE WS-DAILY-STATUS
               WHEN '00'
                   ADD 1 TO WS-READ-COUNT
               WHEN '10'
                   SET END-OF-FILE TO TRUE
               WHEN OTHER
                   DISPLAY 'FATAL: Read error'
                   DISPLAY '       Status: ' WS-DAILY-STATUS
                   DISPLAY '       After record: ' WS-READ-COUNT
                   PERFORM 9900-ABEND-PROGRAM
           END-EVALUATE.

       3000-PROCESS-HEADER.
           IF HEADER-FOUND
               DISPLAY 'WARNING: Duplicate header'
           END-IF
           SET HEADER-FOUND TO TRUE
           DISPLAY 'BATCH: ' TXN-HDR-BATCH-ID
                   ' DATE: ' TXN-HDR-DATE.

       4000-PROCESS-DETAIL.
           ADD 1 TO WS-DETAIL-COUNT
      * Accumulate the running total for every detail record,
      * valid or rejected, because the trailer total covers
      * all details the sender wrote
           ADD TXN-DTL-AMOUNT TO WS-RUNNING-TOTAL
           IF NOT HEADER-FOUND
               MOVE 'DETAIL BEFORE HEADER' TO WS-ERR-MSG
               PERFORM 4500-WRITE-REJECT
           ELSE
               PERFORM 4100-VALIDATE-DETAIL
               IF WS-ERR-MSG = SPACES
                   PERFORM 4200-WRITE-VALID
               ELSE
                   PERFORM 4500-WRITE-REJECT
               END-IF
           END-IF.

       4100-VALIDATE-DETAIL.
           MOVE SPACES TO WS-ERR-MSG
      * Check transaction type
           IF NOT (TXN-DTL-DEPOSIT OR
                   TXN-DTL-WITHDRAW OR
                   TXN-DTL-TRANSFER)
               MOVE 'INVALID TXN TYPE' TO WS-ERR-MSG
           END-IF
      * Check amount is positive
           IF WS-ERR-MSG = SPACES
               IF TXN-DTL-AMOUNT NOT > ZERO
                   MOVE 'AMOUNT NOT POSITIVE' TO WS-ERR-MSG
               END-IF
           END-IF
      * Check account number not blank
           IF WS-ERR-MSG = SPACES
               IF TXN-DTL-ACCT-FROM = SPACES
                   MOVE 'BLANK FROM-ACCOUNT' TO WS-ERR-MSG
               END-IF
           END-IF
      * Transfer must have TO account
           IF WS-ERR-MSG = SPACES
               IF TXN-DTL-TRANSFER AND
                  TXN-DTL-ACCT-TO = SPACES
                   MOVE 'TRANSFER WITHOUT TO-ACCT'
                       TO WS-ERR-MSG
               END-IF
           END-IF.

       4200-WRITE-VALID.
           WRITE TXN-VALID-RECORD FROM TXN-DAILY-RECORD
           IF VALID-SUCCESS
               ADD 1 TO WS-VALID-COUNT
           ELSE
               DISPLAY 'WRITE ERROR ON VALID FILE'
               DISPLAY '  Status: ' WS-VALID-STATUS
               PERFORM 9900-ABEND-PROGRAM
           END-IF.

       4500-WRITE-REJECT.
           MOVE TXN-DAILY-RECORD TO REJ-ORIGINAL
           MOVE 'REJT' TO REJ-REASON-CODE
           MOVE WS-ERR-MSG TO REJ-REASON-MSG
           MOVE FUNCTION CURRENT-DATE TO REJ-TIMESTAMP
           WRITE TXN-REJECT-RECORD
           IF REJ-SUCCESS
               ADD 1 TO WS-REJECT-COUNT
           ELSE
               DISPLAY 'WRITE ERROR ON REJECT FILE'
               PERFORM 9900-ABEND-PROGRAM
           END-IF.

       5000-PROCESS-TRAILER.
           SET TRAILER-FOUND TO TRUE
      * Validate counts
           IF WS-DETAIL-COUNT NOT = TXN-TRL-COUNT
               DISPLAY '*** RECORD COUNT MISMATCH ***'
               DISPLAY '  Trailer says: ' TXN-TRL-COUNT
               DISPLAY '  We counted:   ' WS-DETAIL-COUNT
               MOVE 8 TO RETURN-CODE
           END-IF
           IF WS-RUNNING-TOTAL NOT = TXN-TRL-TOTAL
               DISPLAY '*** AMOUNT TOTAL MISMATCH ***'
               MOVE 8 TO RETURN-CODE
           END-IF.

       9000-TERMINATE.
           DISPLAY '=============================='
           DISPLAY 'TXN-VALID PROCESSING COMPLETE'
           DISPLAY '=============================='
           DISPLAY 'Records read:     ' WS-READ-COUNT
           DISPLAY 'Detail records:   ' WS-DETAIL-COUNT
           DISPLAY 'Valid records:    ' WS-VALID-COUNT
           DISPLAY 'Rejected records: ' WS-REJECT-COUNT

           IF RETURN-CODE < 4
               IF WS-REJECT-COUNT > 0
                   MOVE 4 TO RETURN-CODE
               ELSE
                   MOVE 0 TO RETURN-CODE
               END-IF
           END-IF

           DISPLAY 'Return code:      ' RETURN-CODE

           CLOSE TXN-DAILY-FILE
                 TXN-VALID-FILE
                 TXN-REJECT-FILE
                 VALIDATION-REPORT.

       9900-ABEND-PROGRAM.
           DISPLAY '*** PROGRAM ABENDING ***'
           DISPLAY 'Last record: ' WS-READ-COUNT
           DISPLAY 'Error: ' WS-ERR-MSG
           CLOSE TXN-DAILY-FILE
                 TXN-VALID-FILE
                 TXN-REJECT-FILE
                 VALIDATION-REPORT
           MOVE 16 TO RETURN-CODE
           STOP RUN.

What Derek Learned

When Derek first reviewed this program, he asked Maria why she checks FILE STATUS even after WRITE to the reject file. "If the reject file fails, what do you do — reject the reject?" Maria smiled. "You ABEND. If you can't write rejects, you can't safely continue, because rejected records would just disappear. That's data loss."

⚖️ Design Decision — Validation Strategy Notice that the validation in 4100-VALIDATE-DETAIL stops at the first error (subsequent checks are guarded by IF WS-ERR-MSG = SPACES). Maria chose this approach because the reject record only holds one reason code. James Okafor at MedClaim uses the multi-error collection approach from Chapter 10 because his reject records hold up to 20 error messages. Both approaches are valid — choose based on your reject record design.

11.12 MedClaim Case Study: Reading Claim Batch Input Files

MedClaim receives claim files from hundreds of healthcare providers. Each file has a different quirk — some use fixed-length records, others variable-length. Some include headers and trailers, others do not. James Okafor's CLM-INTAKE program must handle them all.

The Provider Abstraction

James uses a configuration record to describe each provider's file format:

       01  WS-PROVIDER-CONFIG.
           05  WS-PROV-ID            PIC X(10).
           05  WS-PROV-REC-FORMAT    PIC X(01).
               88  PROV-FIXED        VALUE 'F'.
               88  PROV-VARIABLE     VALUE 'V'.
           05  WS-PROV-REC-LENGTH    PIC 9(04).
           05  WS-PROV-HAS-HEADER    PIC X(01).
               88  PROV-HAS-HDR      VALUE 'Y'.
           05  WS-PROV-HAS-TRAILER   PIC X(01).
               88  PROV-HAS-TRL      VALUE 'Y'.
           05  WS-PROV-DATE-FORMAT   PIC X(08).

Handling Provider Variations

       2000-PROCESS-PROVIDER-FILE.
      * Read first record
           PERFORM 2100-READ-INPUT
           IF END-OF-FILE
               DISPLAY 'Empty file from provider '
                       WS-PROV-ID
               MOVE 4 TO RETURN-CODE
           ELSE
      * Handle optional header
               IF PROV-HAS-HDR
                   PERFORM 3000-PROCESS-HEADER
                   PERFORM 2100-READ-INPUT
               END-IF

      * Process detail records
               PERFORM UNTIL END-OF-FILE
                   IF PROV-HAS-TRL AND WS-IS-TRAILER
                       PERFORM 5000-PROCESS-TRAILER
                       PERFORM 2100-READ-INPUT
                   ELSE
                       PERFORM 4000-PROCESS-DETAIL
                       PERFORM 2100-READ-INPUT
                   END-IF
               END-PERFORM

      * Verify trailer was found if expected
               IF PROV-HAS-TRL AND NOT TRAILER-FOUND
                   DISPLAY 'WARNING: Expected trailer not found'
                   MOVE 4 TO RETURN-CODE
               END-IF
           END-IF.

Sarah Kim asks: "What if a provider sends us a file that's completely garbled — wrong format, wrong layout, nothing makes sense?" James shows her the consecutive error threshold:

       01  WS-CONSEC-ERRORS         PIC 9(03) VALUE ZERO.
       01  WS-MAX-CONSEC            PIC 9(03) VALUE 10.

       ...

       4000-PROCESS-DETAIL.
           PERFORM 4100-VALIDATE-CLAIM
           IF CLAIM-VALID
               MOVE ZERO TO WS-CONSEC-ERRORS
               PERFORM 4200-WRITE-CLAIM
           ELSE
               ADD 1 TO WS-CONSEC-ERRORS
               PERFORM 4500-WRITE-REJECT
               IF WS-CONSEC-ERRORS >= WS-MAX-CONSEC
                   DISPLAY 'TEN CONSECUTIVE ERRORS'
                   DISPLAY 'POSSIBLE FORMAT MISMATCH'
                   DISPLAY 'PROVIDER: ' WS-PROV-ID
                   PERFORM 9900-ABEND-PROGRAM
               END-IF
           END-IF.

"After ten bad records in a row," James explains, "we know something is fundamentally wrong. Maybe the provider changed their format and did not tell us, or they sent the wrong file. Continuing would just fill the reject file with thousands of records that all have the same problem."

11.13 Common Sequential Processing Patterns

Pattern 1: Copy with Filter

Read an input file, apply a filter condition, write matching records to output:

       2000-PROCESS.
           IF TXN-DTL-BRANCH = WS-TARGET-BRANCH
               WRITE OUTPUT-RECORD FROM TXN-DAILY-RECORD
               ADD 1 TO WS-WRITE-COUNT
           END-IF
           PERFORM 2100-READ-NEXT.

Pattern 2: Split

Read one input file, write to multiple output files based on content:

       2000-PROCESS.
           EVALUATE TXN-DTL-TYPE
               WHEN 'DP'
                   WRITE DEPOSIT-RECORD FROM TXN-DAILY-RECORD
               WHEN 'WD'
                   WRITE WITHDRAW-RECORD FROM TXN-DAILY-RECORD
               WHEN 'XF'
                   WRITE TRANSFER-RECORD FROM TXN-DAILY-RECORD
               WHEN OTHER
                   WRITE UNKNOWN-RECORD FROM TXN-DAILY-RECORD
           END-EVALUATE
           PERFORM 2100-READ-NEXT.

Pattern 3: Merge

Read two sorted input files, produce one merged output in order. This is the classic balance line algorithm:

       2000-MERGE-PROCESS.
           EVALUATE TRUE
               WHEN FILE-A-KEY < FILE-B-KEY
                   WRITE OUTPUT-RECORD FROM FILE-A-RECORD
                   PERFORM 2100-READ-FILE-A
               WHEN FILE-A-KEY > FILE-B-KEY
                   WRITE OUTPUT-RECORD FROM FILE-B-RECORD
                   PERFORM 2200-READ-FILE-B
               WHEN FILE-A-KEY = FILE-B-KEY
                   WRITE OUTPUT-RECORD FROM FILE-A-RECORD
                   PERFORM 2100-READ-FILE-A
                   PERFORM 2200-READ-FILE-B
           END-EVALUATE.

🔗 Cross-Reference The COBOL SORT/MERGE facility (Chapter 15) provides built-in support for merging, but understanding the manual pattern is essential for situations where custom logic is needed during the merge.

Pattern 4: Match and Update

Read a transaction file and a master file (both sorted by key), produce an updated master:

       2000-MATCH-UPDATE.
           EVALUATE TRUE
               WHEN TXN-KEY < MASTER-KEY
                   PERFORM 3000-UNMATCHED-TXN
                   PERFORM 2100-READ-TXN
               WHEN TXN-KEY > MASTER-KEY
                   WRITE NEW-MASTER-REC FROM OLD-MASTER-REC
                   PERFORM 2200-READ-MASTER
               WHEN TXN-KEY = MASTER-KEY
                   PERFORM 4000-APPLY-UPDATE
                   WRITE NEW-MASTER-REC FROM OLD-MASTER-REC
                   PERFORM 2100-READ-TXN
                   PERFORM 2200-READ-MASTER
           END-EVALUATE.

Pattern 5: Accumulate and Break

Process sorted records, accumulating totals and producing output at each group break (control break processing):

       01  WS-PREV-BRANCH           PIC X(04) VALUE HIGH-VALUES.
       01  WS-BRANCH-TOTAL          PIC S9(13)V99 VALUE ZERO.

       ...

       2000-PROCESS.
           IF TXN-DTL-BRANCH NOT = WS-PREV-BRANCH
               IF WS-PREV-BRANCH NOT = HIGH-VALUES
                   PERFORM 3000-PRINT-BRANCH-TOTAL
               END-IF
               MOVE TXN-DTL-BRANCH TO WS-PREV-BRANCH
               MOVE ZERO TO WS-BRANCH-TOTAL
           END-IF
           ADD TXN-DTL-AMOUNT TO WS-BRANCH-TOTAL
           PERFORM 2100-READ-NEXT.
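One subtlety in the pattern above: the last group's total is never printed inside 2000-PROCESS, because no record follows the final group to trigger a break. The termination paragraph must flush it; a minimal sketch using the names above:

       9000-TERMINATE.
      * End-of-file break: print the final group's total.
      * Skipped only when the input file was empty
      * (WS-PREV-BRANCH still holds HIGH-VALUES).
           IF WS-PREV-BRANCH NOT = HIGH-VALUES
               PERFORM 3000-PRINT-BRANCH-TOTAL
           END-IF
           ...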

🔗 Cross-Reference Control break processing is covered in detail in Chapter 14. The example above shows the basic pattern; Chapter 14 handles multi-level breaks, rolling totals, and the end-of-file break.

11.14 OPEN Modes and Their Implications

Understanding when to use each OPEN mode is critical for writing correct sequential file programs. The choice affects what operations are permitted and how the file is positioned.

OPEN INPUT

           OPEN INPUT TXN-DAILY-FILE
  • File must exist (unless declared OPTIONAL)
  • Only READ operations permitted
  • File is positioned at the beginning
  • FILE STATUS '48' if you attempt WRITE ('49' for REWRITE)

Use case: Reading data files, reference files, or any file you do not intend to modify.
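The OPTIONAL phrase deserves a quick sketch. An OPTIONAL input file opens successfully even when the dataset is absent; FILE STATUS '05' reports that condition, and the first READ then returns '10' as if the file were empty. The file and status names here are illustrative:

           SELECT OPTIONAL PARAM-FILE
               ASSIGN TO PARMFILE
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-PARM-STATUS.
       ...
           OPEN INPUT PARAM-FILE
           IF WS-PARM-STATUS = '05'
               DISPLAY 'PARAM FILE ABSENT - USING DEFAULTS'
           END-IF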

OPEN OUTPUT

           OPEN OUTPUT VALID-OUTPUT-FILE
  • If the file exists, its contents are erased (the file is replaced)
  • If the file does not exist, it is created
  • Only WRITE operations permitted
  • FILE STATUS '47' if you attempt READ

⚠️ Danger — OPEN OUTPUT Destroys Existing Data This is one of the most common production mistakes. If a JCL error routes an OPEN OUTPUT to a master file that should be opened for I-O, the master file is erased. The data is gone. Maria Chen has a sign above her desk that reads: "CHECK YOUR OPENS. TWICE." She tells the story of a junior developer who accidentally opened the ACCT-MASTER file for OUTPUT during a test run, erasing 2 million account records. The restore took 4 hours.

OPEN I-O

           OPEN I-O MASTER-FILE
  • File must exist
  • READ, WRITE, and REWRITE operations permitted
  • For sequential files, REWRITE replaces the record most recently read
  • You must READ a record before you can REWRITE it

Use case: In-place updates to sequential files (though this is uncommon — most shops prefer the "read old master, write new master" pattern).
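For completeness, here is the shape of an in-place sequential update. Everything named here is illustrative (FLAG-FILE, FLAG-RECORD, FLAG-STATUS-BYTE, and the level-88 FLAG-EOF are assumed); the essential constraints are that each REWRITE targets the record just read and the record length cannot change:

           OPEN I-O FLAG-FILE
           PERFORM UNTIL FLAG-EOF
               READ FLAG-FILE
                   AT END
                       SET FLAG-EOF TO TRUE
                   NOT AT END
                       IF FLAG-STATUS-BYTE = 'P'
                           MOVE 'C' TO FLAG-STATUS-BYTE
      * REWRITE replaces the record just read, in place,
      * with a record of the same length
                           REWRITE FLAG-RECORD
                       END-IF
               END-READ
           END-PERFORM
           CLOSE FLAG-FILE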

OPEN EXTEND

           OPEN EXTEND AUDIT-LOG-FILE
  • File must exist (unless declared OPTIONAL)
  • File is positioned at the end
  • Only WRITE operations permitted (appended after existing records)
  • Existing data is preserved

Use case: Log files, audit trails, any file where you want to add records without replacing existing content.

Try It Yourself — OPEN Mode Experiments

Write a program that creates a file with 10 records using OPEN OUTPUT. Close it. Then:

  1. Open it with OPEN INPUT and read all records — verify you get 10.
  2. Open it with OPEN EXTEND and write 5 more records. Close it.
  3. Open it with OPEN INPUT and read all records — verify you get 15.
  4. Open it with OPEN OUTPUT and write 3 records. Close it.
  5. Open it with OPEN INPUT and read all records — verify you get only 3 (the previous 15 were erased).

This exercise drives home the destructive nature of OPEN OUTPUT.

11.15 The EXTEND Mode and Audit Trails

The OPEN EXTEND mode deserves special attention because it is the foundation of sequential audit trails — a common requirement in regulated industries like banking and healthcare.

Building an Audit Trail

At GlobalBank, every account modification is recorded in an audit trail file:

       FD  AUDIT-TRAIL-FILE
           RECORDING MODE IS F
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 200 CHARACTERS.
       01  AUDIT-RECORD.
           05  AUD-TIMESTAMP         PIC X(26).
           05  AUD-PROGRAM           PIC X(08).
           05  AUD-USER-ID           PIC X(08).
           05  AUD-ACTION            PIC X(10).
               88  AUD-CREATE        VALUE 'CREATE'.
               88  AUD-UPDATE        VALUE 'UPDATE'.
               88  AUD-DELETE        VALUE 'DELETE'.
               88  AUD-INQUIRY       VALUE 'INQUIRY'.
           05  AUD-ACCT-NUMBER       PIC X(10).
           05  AUD-FIELD-NAME        PIC X(30).
           05  AUD-OLD-VALUE         PIC X(50).
           05  AUD-NEW-VALUE         PIC X(50).
           05  FILLER                PIC X(08).

The program opens the audit file with EXTEND and writes a record for every action:

       7000-WRITE-AUDIT.
           MOVE FUNCTION CURRENT-DATE TO AUD-TIMESTAMP
           MOVE WS-PROGRAM-NAME TO AUD-PROGRAM
           MOVE WS-USER-ID TO AUD-USER-ID
           MOVE WS-ACTION TO AUD-ACTION
           MOVE ACCT-NUMBER TO AUD-ACCT-NUMBER
           MOVE WS-FIELD-NAME TO AUD-FIELD-NAME
           MOVE WS-OLD-VALUE TO AUD-OLD-VALUE
           MOVE WS-NEW-VALUE TO AUD-NEW-VALUE

           WRITE AUDIT-RECORD
           IF NOT AUD-SUCCESS
               DISPLAY 'WARNING: Audit write failed'
               DISPLAY '  Status: ' WS-AUD-STATUS
      * Note: audit failure is serious but should not
      * stop the business transaction
               ADD 1 TO WS-AUDIT-FAIL-COUNT
           END-IF.

⚖️ Design Decision — Should Audit Failure Stop Processing? This is a policy question, not a technical one. In some industries (healthcare, finance), failing to write an audit record is a compliance violation that must stop processing. In others, it is a warning that triggers an alert but does not interrupt business operations. At GlobalBank, audit write failures generate an immediate alert to the compliance team but do not stop transaction processing. At MedClaim, HIPAA requirements demand that every access to Protected Health Information (PHI) is logged — failure to log means failure to process.

Audit File Management

Audit files grow indefinitely if left unmanaged. Common management strategies include:

  1. Daily rotation: Close the current audit file at midnight, open a new one. Archive the old file.
  2. GDG (Generation Data Group): On z/OS, create a new generation each day. Retain N generations online, archive older ones to tape.
  3. Size-based rotation: When the file reaches a certain size, close it and start a new one.
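Strategy 2 is the idiomatic z/OS approach. A hedged JCL sketch, assuming a GDG base named GB.AUDIT.TRAIL has already been defined (for example via IDCAMS DEFINE GENERATIONDATAGROUP with LIMIT(30)): the relative number (+1) allocates a fresh generation on each run, while downstream jobs reference (0) for the newest generation. LRECL matches the 200-byte audit record shown above.

//AUDIT    DD DSN=GB.AUDIT.TRAIL(+1),
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(50,10)),
//            DCB=(RECFM=FB,LRECL=200,BLKSIZE=0)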

11.16 The Sequential Update Pattern — Old Master / New Master

The most common pattern for updating a sequential master file is not to update it in place but to read the old master, apply changes, and write a new master. This is called the old master / new master pattern.

Why Not Update in Place?

Sequential files can be updated in place (with OPEN I-O and REWRITE), but only the record just read can be rewritten, the record length cannot change, and if the program ABENDs mid-update, the file is in an inconsistent state. The old master / new master pattern avoids all of these problems:

  1. The old master is opened for INPUT — it is never modified.
  2. The new master is opened for OUTPUT — it is built from scratch.
  3. If the program ABENDs, the old master is intact. You simply delete the partially written new master and rerun.

The Pattern

Old-Master-File  ──┐
                   ├──> UPDATE-PROGRAM ──> New-Master-File
Transaction-File ──┘

Both input files must be sorted by the same key. The program reads both files in parallel, comparing keys:

       2000-MAIN-PROCESS.
           EVALUATE TRUE
               WHEN TXN-KEY < MASTER-KEY
      *            Transaction with no matching master
                   PERFORM 3000-ADD-NEW-RECORD
                   PERFORM 2100-READ-TXN
               WHEN TXN-KEY > MASTER-KEY
      *            Master with no matching transaction
                   PERFORM 4000-COPY-UNCHANGED
                   PERFORM 2200-READ-MASTER
               WHEN TXN-KEY = MASTER-KEY
      *            Match — apply the transaction
                   PERFORM 5000-APPLY-UPDATE
                   PERFORM 2100-READ-TXN
                   PERFORM 2200-READ-MASTER
           END-EVALUATE.
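
A sketch of what 5000-APPLY-UPDATE might look like, assuming a one-character TXN-ACTION code ('U' = update, 'D' = delete) — the field and paragraph names here are illustrative, not a fixed layout from this chapter:

```cobol
       5000-APPLY-UPDATE.
           EVALUATE TXN-ACTION
               WHEN 'U'
                   MOVE TXN-NEW-BALANCE TO MASTER-BALANCE
                   PERFORM 4100-WRITE-NEW-MASTER
               WHEN 'D'
      *            Delete: do not write the record to the
      *            new master -- it simply disappears
                   ADD 1 TO WS-DELETED-COUNT
               WHEN OTHER
      *            Unknown action code -- send to rejects
                   PERFORM 6000-WRITE-REJECT
           END-EVALUATE.
```

Note that a delete is implemented by omission: because the new master is built from scratch, skipping the WRITE removes the record.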

Handling End-of-File

When one file reaches end-of-file before the other, set that file's key to HIGH-VALUES. This ensures all remaining records from the other file are processed:

       2100-READ-TXN.
           READ TXN-FILE INTO WS-TXN-WORK
           EVALUATE WS-TXN-STATUS
               WHEN '00'
                   ADD 1 TO WS-TXN-READ
               WHEN '10'
                   MOVE HIGH-VALUES TO TXN-KEY
                   SET TXN-EOF TO TRUE
               WHEN OTHER
                   PERFORM 9900-ABEND-PROGRAM
           END-EVALUATE.

       2200-READ-MASTER.
           READ OLD-MASTER-FILE INTO WS-MASTER-WORK
           EVALUATE WS-MASTER-STATUS
               WHEN '00'
                   ADD 1 TO WS-MASTER-READ
               WHEN '10'
                   MOVE HIGH-VALUES TO MASTER-KEY
                   SET MASTER-EOF TO TRUE
               WHEN OTHER
                   PERFORM 9900-ABEND-PROGRAM
           END-EVALUATE.

The main loop continues until both keys are HIGH-VALUES:

           PERFORM 2000-MAIN-PROCESS
               UNTIL TXN-EOF AND MASTER-EOF
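
Putting the pieces together, the driver primes both files before entering the loop — a sketch using the paragraph names above, with OPEN and CLOSE status checks omitted for brevity:

```cobol
       1000-MAIN-CONTROL.
           OPEN INPUT  TXN-FILE
                       OLD-MASTER-FILE
           OPEN OUTPUT NEW-MASTER-FILE
      *    Priming reads -- each sets its key to HIGH-VALUES at EOF
           PERFORM 2100-READ-TXN
           PERFORM 2200-READ-MASTER
           PERFORM 2000-MAIN-PROCESS
               UNTIL TXN-EOF AND MASTER-EOF
           CLOSE TXN-FILE OLD-MASTER-FILE NEW-MASTER-FILE.
```

The priming reads guarantee that both keys hold valid values (or HIGH-VALUES) before the first comparison, which also handles the case where either input file is empty.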

🔗 Cross-Reference The old master / new master pattern is a specific case of the match-update pattern described in Section 11.13. We will explore it further in Chapter 14 (Control Break Processing) where the pattern becomes more complex with multi-level group breaks.

11.17 Performance Considerations

Blocking

The single most important performance factor for sequential files is blocking. Without blocking, each READ causes a physical I/O. With a blocking factor of 180 (27,000-byte blocks with 150-byte records), 180 logical READs are served from one physical I/O.

      * Always use BLOCK CONTAINS 0 on z/OS
      * to let the system optimize blocking
       FD  TXN-DAILY-FILE
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 150 CHARACTERS.

Buffering

On z/OS, the number of I/O buffers can be specified in the JCL:

//TXNDAILY DD  DSN=GLOBALBANK.TXN.DAILY,DISP=SHR,
//             DCB=BUFNO=5

More buffers mean more records are pre-fetched, which helps throughput for sequential reading. The trade-off is memory usage.


📊 Performance Benchmarks (Approximate)

| Operation                   | Time per million records |
|-----------------------------|--------------------------|
| Sequential READ (blocked)   | ~2-5 seconds             |
| Sequential READ (unblocked) | ~60-120 seconds          |
| Sequential WRITE (blocked)  | ~3-7 seconds             |
| VSAM Random READ            | ~30-60 seconds           |

These are rough figures for a modern z/OS system. The key takeaway: blocked sequential I/O is extremely fast.

APPLY WRITE-ONLY and Other Optimizations

On z/OS, the APPLY WRITE-ONLY clause (or the AWO compiler option, which applies it to every eligible file) optimizes output performance for variable-length blocked files. Without it, the system writes out a buffer as soon as there is no longer room for a maximum-length record; with it, the buffer is written only when the next actual record does not fit. The result is fuller blocks, fewer physical writes, and less wasted disk space.
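
In Enterprise COBOL, the clause is coded in the I-O-CONTROL paragraph of the ENVIRONMENT DIVISION — a minimal sketch with illustrative file and status names; the file's FD must specify RECORDING MODE V for the clause to take effect:

```cobol
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT OUTPUT-FILE ASSIGN TO OUTFILE
               ORGANIZATION IS SEQUENTIAL
               FILE STATUS IS WS-OUT-STATUS.
       I-O-CONTROL.
           APPLY WRITE-ONLY ON OUTPUT-FILE.
```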

Minimizing I/O: The Single-Pass Principle

The most efficient sequential processing reads each input file exactly once and writes each output file exactly once. Resist the temptation to read a file multiple times:

      * INEFFICIENT — reads the file twice
           PERFORM COUNT-RECORDS
           CLOSE INPUT-FILE
           OPEN INPUT INPUT-FILE
           PERFORM PROCESS-RECORDS

      * EFFICIENT — single pass
           PERFORM PROCESS-AND-COUNT

If you need multiple views of the data (e.g., detail records and summary totals), compute them all in a single pass. Accumulate totals while processing details, store group information in working-storage tables, and produce all outputs in one sweep through the file.

Derek Washington initially wrote a report program that read TXN-DAILY three times: once to count records, once to compute averages, and once to print the report. Maria showed him how to do all three in a single pass, reducing the elapsed time from 12 minutes to 4 minutes. "Every pass through the file costs you time proportional to the file size," she explained. "Three passes through 2.3 million records is 6.9 million READs. One pass is 2.3 million. Do the math."
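
Derek's single-pass version can be sketched as follows — the names are illustrative, and 2100-READ-TXN stands for a status-checking read paragraph like the ones shown earlier:

```cobol
       2000-PROCESS-AND-COUNT.
           PERFORM UNTIL TXN-EOF
               ADD 1 TO WS-TXN-COUNT
               ADD TXN-AMOUNT TO WS-TXN-TOTAL
               PERFORM 3000-WRITE-DETAIL-LINE
               PERFORM 2100-READ-TXN
           END-PERFORM
      *    Averages are derived after the single sweep --
      *    no second pass through the file is needed
           IF WS-TXN-COUNT > ZERO
               COMPUTE WS-TXN-AVERAGE ROUNDED =
                   WS-TXN-TOTAL / WS-TXN-COUNT
           END-IF.
```

Counts, totals, and detail lines all accumulate during one sweep; anything derived from them (averages, percentages) is computed afterward from working storage.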

11.18 Try It Yourself — Complete Exercise

Build a complete sequential file processing program that:

  1. Reads a student enrollment file (header-detail-trailer format)
  2. Validates each record:
     - Student ID must be numeric
     - Course code must match pattern "XX-999" (two letters, dash, three digits)
     - Credits must be between 1 and 6
     - GPA must be between 0.00 and 4.00
  3. Writes valid records to an output file
  4. Writes invalid records to a reject file with error reasons
  5. Produces a summary report with counts and control total validation
  6. Checks FILE STATUS after every I/O operation
  7. Handles the empty-file case
  8. Sets appropriate return codes

Use the code examples from this chapter's code/ directory as a starting framework.
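
A possible detail-record layout to get started — the field names and sizes here are assumptions for illustration, not the book's official layout:

```cobol
       01  ENROLL-DETAIL-RECORD.
           05  ED-RECORD-TYPE      PIC X(01).
      *        'H' = header, 'D' = detail, 'T' = trailer
           05  ED-STUDENT-ID       PIC X(09).
      *        Must be all digits (test with IS NUMERIC)
           05  ED-COURSE-CODE      PIC X(06).
      *        Expected pattern "XX-999"
           05  ED-CREDITS          PIC 9(01).
      *        Valid range 1-6
           05  ED-GPA              PIC 9V99.
      *        Valid range 0.00-4.00
           05  FILLER              PIC X(60).
```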

11.19 Best Practices Summary

  1. Always specify FILE STATUS. For every file. No exceptions. Check it after every operation.

  2. Use BLOCK CONTAINS 0. Let the system choose optimal blocking. Never hard-code block sizes in new programs.

  3. Use READ INTO. Copy records to working storage for processing. Keep the FD buffer area clean.

  4. Use WRITE FROM. Build the output record in working storage, then write from it.

  5. Use the priming read pattern. Read the first record before the loop. Read the next record at the end of the loop.

  6. Handle empty files. Check for immediate AT END after the priming read.

  7. Validate control totals. If the file has a header/trailer, check record counts and amount totals.

  8. Handle multiple record types explicitly. Use EVALUATE on the record type field. Always include a WHEN OTHER for unexpected types.

  9. Size counters for volume. PIC 9(09) counts up to 999,999,999 records. PIC 9(05) overflows at 100,000.

  10. Close files in all paths. Normal termination and error termination both need CLOSE logic.

Chapter Checkpoint

You should now be able to:

  - Write complete SELECT...ASSIGN entries for sequential files
  - Code FD entries with appropriate RECORDING MODE, RECORD CONTAINS, and BLOCK CONTAINS
  - Implement the full I/O lifecycle (OPEN, READ, WRITE, CLOSE) with status checking
  - Handle variable-length records with DEPENDING ON
  - Process files with multiple record types (header-detail-trailer)
  - Write formatted reports with page breaks using ADVANCING
  - Apply common sequential processing patterns (copy, filter, split, merge, match-update)
  - Optimize sequential file performance through blocking and buffering

Chapter Summary

Sequential file processing is the backbone of batch COBOL. In this chapter, we covered the complete lifecycle: from SELECT...ASSIGN, through FD entries and their key clauses, to the OPEN-READ-WRITE-CLOSE cycle with defensive FILE STATUS checking at every step. We explored variable-length records, multi-type files, report writing, and the common processing patterns that experienced COBOL developers use daily.

The GlobalBank TXN-VALID program demonstrated how all these elements come together in a real-world validation program — reading a header-detail-trailer file, validating each record, splitting output into valid and reject streams, and verifying control totals. The MedClaim case study showed how to handle variations in file formats from different sources.

Every technique in this chapter was practiced defensively: FILE STATUS checked after every operation, empty files handled, counters sized for real volumes, error thresholds to detect systemic problems. This is not extra work — it is the standard of professional COBOL programming.

Sequential file processing is where theory meets practice in COBOL — where the abstractions of data definitions and control flow become tangible as records flow through the system, transformed and validated at each step. The programs you write using these techniques will form the backbone of any batch processing system you work on.

In the chapters ahead, we will build on this foundation. Chapter 12 covers indexed file processing (VSAM KSDS), where random access adds new dimensions of complexity. Chapter 13 introduces relative files. Chapter 14 explores control break processing — the art of producing multi-level summary reports from sorted sequential files. And Chapter 15 covers SORT/MERGE, COBOL's built-in facility for ordering sequential data.


In the next chapter, we move to Indexed File Processing — where the ORGANIZATION IS INDEXED clause opens up the world of random access, alternate keys, and the VSAM files that power online transaction processing.