Case Study 2: MedClaim Procedure Code and Fee Schedule Optimization

Background

MedClaim processes approximately 800,000 insurance claims per nightly batch. Each claim contains one or more procedure codes that must be validated against a master procedure code table (12,000 entries) and priced against a fee schedule table (approximately 10,000 entries varying by provider type and procedure code).

James Okafor's team has been asked to reduce the batch window from 3 hours to under 1 hour.

The Problem

The current system performs two VSAM KSDS reads per procedure line:

  1. One read to validate the procedure code and retrieve its description
  2. One read to look up the allowed fee amount

For claims averaging 3 procedure lines each, this means approximately 4.8 million VSAM reads per night (800,000 claims x 3 lines x 2 reads). The I/O overhead dominates the batch runtime.
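The per-line logic looks roughly like this; the file, field, and condition names below are assumptions for illustration, not taken from MedClaim's actual source:

```cobol
      *    Sketch of the current two-read pattern per procedure line.
      *    All data names and condition names here are assumed.
           MOVE CL-PROC-CODE       TO PROC-MASTER-KEY
           READ PROC-MASTER-FILE
               INVALID KEY SET PROC-NOT-FOUND TO TRUE
               NOT INVALID KEY MOVE PM-DESCRIPTION TO WS-DESC
           END-READ

           MOVE CL-PROVIDER-TYPE   TO FS-PROV-TYPE
           MOVE CL-PROC-CODE       TO FS-PROC-CODE
           READ FEE-SCHED-FILE
               INVALID KEY SET FEE-NOT-FOUND TO TRUE
               NOT INVALID KEY MOVE FS-ALLOWED-AMT TO WS-ALLOWED
           END-READ
```

Each READ is a keyed VSAM I/O, so eliminating both per procedure line removes roughly 4.8 million physical reads from the batch.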

Solution Design

Sarah Kim's business analysis revealed that:

  - The procedure code table changes quarterly (stable within a batch run)
  - The fee schedule table changes monthly (also stable within a batch run)
  - Both tables fit comfortably in WORKING-STORAGE (procedure table: ~800 KB, fee schedule: ~360 KB at current entry counts)

James designed a two-table in-memory solution:

  1. Procedure code table: OCCURS 1 TO 15000 DEPENDING ON, sorted by procedure code, loaded at initialization
  2. Fee schedule table: OCCURS 1 TO 12000 DEPENDING ON, sorted by provider type + procedure code (compound key), loaded at initialization

Both tables use SEARCH ALL for O(log n) lookups.
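A minimal sketch of the procedure code table and its lookup follows. The data names are assumed for illustration; the PIC sizes mirror the per-field byte counts given in the memory estimate (5 + 50 + 9 + 3 = 67 bytes per entry):

```cobol
       WORKING-STORAGE SECTION.
       01  WS-PROC-TABLE.
           05  WS-PROC-COUNT          PIC 9(5) COMP-3.
      *    Entries must be loaded in ascending key order for
      *    SEARCH ALL (binary search) to work correctly.
           05  PROC-ENTRY OCCURS 1 TO 15000 TIMES
                   DEPENDING ON WS-PROC-COUNT
                   ASCENDING KEY IS PT-PROC-CODE
                   INDEXED BY PT-IDX.
               10  PT-PROC-CODE       PIC X(5).
               10  PT-DESCRIPTION     PIC X(50).
               10  PT-STANDARD-FEE    PIC 9(7)V99.
               10  PT-CATEGORY        PIC X(3).

      *    O(log n) lookup against the sorted in-memory table
           SEARCH ALL PROC-ENTRY
               AT END
                   SET PROC-NOT-FOUND TO TRUE
               WHEN PT-PROC-CODE (PT-IDX) = CL-PROC-CODE
                   MOVE PT-DESCRIPTION (PT-IDX) TO WS-DESC
           END-SEARCH
```

With 12,000 entries, each lookup touches at most about 14 entries instead of issuing a VSAM read.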

Key Technical Challenges

Challenge 1: Compound-Key Lookup

The fee schedule required searching on two fields simultaneously. The SEARCH ALL WHEN clause needed both conditions connected with AND, and the table had to be sorted by provider type as the major key and procedure code as the minor key.
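Under those constraints, the fee schedule table and its lookup could be sketched as follows (data names assumed). Note the two-part ASCENDING KEY clause, major key first, and the AND-ed equality tests in the WHEN clause:

```cobol
       01  WS-FEE-TABLE.
           05  WS-FEE-COUNT           PIC 9(5) COMP-3.
      *    Major key: provider type; minor key: procedure code
           05  FEE-ENTRY OCCURS 1 TO 12000 TIMES
                   DEPENDING ON WS-FEE-COUNT
                   ASCENDING KEY IS FT-PROV-TYPE FT-PROC-CODE
                   INDEXED BY FT-IDX.
               10  FT-PROV-TYPE       PIC X(2).
               10  FT-PROC-CODE       PIC X(5).
               10  FT-ALLOWED-AMT     PIC 9(7)V99.
               10  FT-EFF-DATE        PIC X(8).
               10  FT-END-DATE        PIC X(8).
               10  FT-MODIFIERS       PIC X(4).

      *    SEARCH ALL requires equality tests on the declared keys,
      *    connected with AND, referencing them in key order.
           SEARCH ALL FEE-ENTRY
               AT END
                   SET FEE-NOT-FOUND TO TRUE
               WHEN FT-PROV-TYPE (FT-IDX) = CL-PROVIDER-TYPE
                AND FT-PROC-CODE (FT-IDX) = CL-PROC-CODE
                   SET FEE-FOUND TO TRUE
           END-SEARCH
```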

Challenge 2: Date-Effective Fees

Some procedure codes had multiple fee entries with different effective date ranges. After the binary search found the matching key, the program had to verify the service date fell within the effective date range. If not, adjacent entries (with the same key but different dates) needed to be checked — requiring a secondary linear scan from the binary search hit point.
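One way to sketch that secondary scan is shown below, assuming a fee table whose entries carry FT-PROV-TYPE, FT-PROC-CODE, FT-EFF-DATE, and FT-END-DATE fields, a WS-FEE-COUNT entry count, and 88-level condition names; all of these names are illustrative, not from the source:

```cobol
      *    FT-IDX points at the entry SEARCH ALL matched on key.
      *    Duplicate-key entries are adjacent, but the binary search
      *    may land anywhere in the run, so back up to the run's
      *    first entry, then scan forward checking date ranges.
           SET WS-SUB TO FT-IDX
           PERFORM UNTIL WS-SUB = 1
               OR FT-PROV-TYPE (WS-SUB - 1) NOT = CL-PROVIDER-TYPE
               OR FT-PROC-CODE (WS-SUB - 1) NOT = CL-PROC-CODE
               SUBTRACT 1 FROM WS-SUB
           END-PERFORM
           SET FEE-DATE-NOT-FOUND TO TRUE
           PERFORM UNTIL FEE-DATE-FOUND
               OR WS-SUB > WS-FEE-COUNT
               OR FT-PROV-TYPE (WS-SUB) NOT = CL-PROVIDER-TYPE
               OR FT-PROC-CODE (WS-SUB) NOT = CL-PROC-CODE
               IF CL-SERVICE-DATE >= FT-EFF-DATE (WS-SUB)
                  AND CL-SERVICE-DATE <= FT-END-DATE (WS-SUB)
                   MOVE FT-ALLOWED-AMT (WS-SUB) TO WS-ALLOWED
                   SET FEE-DATE-FOUND TO TRUE
               ELSE
                   ADD 1 TO WS-SUB
               END-IF
           END-PERFORM
```

Because entries with the same compound key sit contiguously in the sorted table, this scan touches only the handful of date-variant entries for that key, so the lookup stays effectively O(log n).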

Challenge 3: Memory Estimation

Tomás Rivera calculated memory requirements:

  - Procedure table entry: 5 (code) + 50 (desc) + 9 (fee) + 3 (category) = 67 bytes x 15,000 max = 1,005,000 bytes
  - Fee schedule entry: 2 (prov) + 5 (proc) + 9 (amount) + 8 (eff) + 8 (end) + 4 (mods) = 36 bytes x 12,000 max = 432,000 bytes
  - Total: approximately 1.4 MB, well within WORKING-STORAGE limits

Results

The optimized batch completed in 38 minutes, a 79% reduction from the 3-hour baseline. Lookup failures decreased from 0.3% to 0.1% because the in-memory tables contained the complete datasets (the VSAM approach had occasional read errors under heavy I/O load).

Discussion Questions

  1. Why was the compound key approach chosen over two separate single-key tables?
  2. What risks exist if the procedure code table grows beyond the 15,000 maximum?
  3. How should the team handle the case where a fee schedule update is deployed mid-batch?
  4. Would a hash table approach offer any advantage over binary search for this use case?