Chapter 34 Quiz: COBOL-to-Cloud Patterns

Section 1: Multiple Choice

1. What is the single most underestimated factor in COBOL-to-cloud migrations according to this chapter?

a) Developer skill gaps
b) I/O architecture differences between z/OS and cloud
c) COBOL syntax incompatibility
d) Network bandwidth

Answer: b) I/O architecture differences between z/OS and cloud

Explanation: The chapter identifies I/O architecture as the most underestimated factor. z/OS has dedicated I/O processors (SAPs), FICON channels with gigabytes-per-second throughput, data-in-memory via hiperbatch and data spaces, and 60 years of I/O optimization. Cloud block storage (io2, Ultra Disk) is general-purpose and cannot match z/OS I/O behavior, especially for random-access patterns. Rob's batch window expanded from 2:40 to 6:15 primarily because of I/O performance differences. You can't close this gap by buying more IOPS — the architecture is fundamentally different.


2. Which of the following workload types is identified as the best candidate for COBOL-to-cloud migration?

a) High-volume CICS OLTP (>500 TPS)
b) Development and test environments
c) Parallel Sysplex data-sharing workloads
d) Real-time transaction authorization

Answer: b) Development and test environments

Explanation: Dev/test is "almost always the right answer" for cloud migration. It eliminates expensive MIPS consumption (often 15-25% of total capacity), environments can be provisioned in hours instead of weeks, teams get full isolation, compatibility gaps are discovered safely rather than in production, and junior developers become productive faster with modern tooling. Pinnacle Health saved approximately $1.8M/year in MIPS costs by moving dev/test to Azure.


3. What is the key difference between Micro Focus Enterprise Server and Heirloom Computing's approach to running COBOL off-mainframe?

a) Micro Focus is free; Heirloom charges per core
b) Micro Focus compiles COBOL to x86 native code; Heirloom converts COBOL to Java bytecode
c) Micro Focus only runs on Azure; Heirloom only runs on AWS
d) Micro Focus requires code changes; Heirloom requires no changes

Answer: b) Micro Focus compiles COBOL to x86 native code; Heirloom converts COBOL to Java bytecode

Explanation: Micro Focus Enterprise Server compiles Enterprise COBOL syntax to x86 native machine code, providing a COBOL runtime environment on Linux/Windows. Heirloom converts COBOL source into Java classes that run on standard JVMs (OpenJDK, GraalVM). This fundamental difference affects licensing (Micro Focus charges per-core for the runtime; Heirloom has no per-core runtime fees after conversion), maintainability (neither approach produces code that mainstream developers find comfortable to maintain), and performance characteristics (Micro Focus emulates packed decimal in x86 software; Heirloom uses Java BigDecimal).


4. In Sandra Chen's FBA eligibility verification cloud pilot, the p99 latency increased from 8.4ms on z/OS to 340ms on AWS GovCloud. What was the primary cause?

a) The COBOL code ran slower on x86 processors
b) AWS GovCloud has inferior networking compared to commercial AWS
c) The database could not achieve the same buffer hit ratio, causing more disk I/O
d) EBCDIC-to-ASCII conversion added overhead to every transaction

Answer: c) The database could not achieve the same buffer hit ratio, causing more disk I/O

Explanation: The DB2 buffer hit ratio dropped from 98.7% on z/OS to 82.3% on PostgreSQL (shared_buffers). With 22 million beneficiary records, the eligibility lookup's complex data access pattern — date arithmetic, benefit-tier calculations, cross-reference validation — couldn't achieve the same caching efficiency on PostgreSQL as DB2 for z/OS achieves with its hiperpool and group buffer pool. Every cache miss became a 4ms disk read instead of a 0.1ms buffer pool hit. The cumulative effect of more cache misses drove the latency from 8.4ms to 340ms at p99.
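The compounding effect of a lower hit ratio can be sketched using the explanation's own timings (0.1 ms buffer pool hit, 4 ms disk read). The function and the weighted-average framing are ours, and this per-access average alone understates the p99 effect, which compounds across the many accesses each eligibility lookup makes.

```python
# Expected latency of one data access as a function of buffer hit ratio.
# Hit/miss timings (0.1 ms / 4 ms) are the chapter's figures; treating
# latency as a simple weighted average is a simplifying assumption.

def expected_access_ms(hit_ratio, hit_ms=0.1, miss_ms=4.0):
    """Weighted-average latency for a single database page access."""
    return hit_ratio * hit_ms + (1 - hit_ratio) * miss_ms

zos = expected_access_ms(0.987)   # DB2 for z/OS: 98.7% hit ratio
pg = expected_access_ms(0.823)    # PostgreSQL shared_buffers: 82.3%

print(f"z/OS:       {zos:.3f} ms per access")
print(f"PostgreSQL: {pg:.3f} ms per access")
print(f"Slowdown:   {pg / zos:.1f}x per access, before tail effects")
```

A roughly 5x per-access penalty, multiplied across the multiple accesses per transaction and concentrated in the tail, is how an 8.4 ms p99 becomes 340 ms.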


5. The chapter introduces the concept of "marginal MIPS cost" versus "allocated MIPS cost." Why is this distinction critical for TCO analysis?

a) Marginal cost is always higher than allocated cost
b) Removing 400 off-peak MIPS may not reduce the mainframe bill at all because software licensing is based on peak MIPS (R4HA)
c) Allocated cost includes cloud costs; marginal cost does not
d) Marginal cost only applies to CICS workloads

Answer: b) Removing 400 off-peak MIPS may not reduce the mainframe bill at all because software licensing is based on peak MIPS (R4HA)

Explanation: z/OS software licensing is based on the R4HA (rolling four-hour average) peak MIPS consumption. If a batch workload runs during off-peak hours (e.g., 02:00-05:00), removing it doesn't change the R4HA peak (which occurs during the online processing window, e.g., 10:00-14:00). The allocated cost ($5,000/MIPS × 400 MIPS = $2M/year) is a mathematical calculation. The marginal cost — what the mainframe bill actually decreases by — may be zero. CNB's mainframe cost actually increased by $84K/year after removing the reporting batch due to pricing model impacts.
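The R4HA effect can be illustrated with a short sketch. The hourly MIPS profile below is hypothetical, chosen so that batch runs at 02:00-05:00 and the online peak falls at 10:00-14:00, as in the explanation.

```python
# Why removing off-peak batch may not cut the bill: software charges
# follow the R4HA (rolling four-hour average) peak, which occurs during
# the online window. The 24-hour MIPS profile below is hypothetical.

def r4ha_peak(hourly_mips):
    """Peak rolling four-hour average over a 24-hour profile."""
    return max(sum(hourly_mips[h:h + 4]) / 4
               for h in range(len(hourly_mips) - 3))

# Hypothetical profile: batch 02:00-05:00, online peak 10:00-14:00.
profile = [300, 300, 700, 700, 700, 300, 400, 600, 900, 1100,
           1400, 1500, 1500, 1400, 1200, 900, 700, 500, 400, 350,
           300, 300, 300, 300]

# Remove 400 MIPS of batch from the 02:00-05:00 window.
no_batch = profile[:]
for h in (2, 3, 4):
    no_batch[h] -= 400

print(f"R4HA peak with batch:    {r4ha_peak(profile):.0f} MIPS")
print(f"R4HA peak without batch: {r4ha_peak(no_batch):.0f} MIPS")
```

The peak, and hence the software bill, is set by the online window either way: removing the off-peak batch changes the allocated cost on a spreadsheet but not the invoice.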


6. Which of the following is NOT identified as a valid reason to keep a COBOL workload on the mainframe?

a) The workload requires Parallel Sysplex data sharing
b) The team is familiar with the mainframe and prefers it
c) The workload requires sub-10ms p99 latency for database-intensive transactions
d) The workload uses two-phase commit across DB2 and MQ

Answer: b) The team is familiar with the mainframe and prefers it

Explanation: Team preference, while understandable, is not a technical criterion for platform selection. The chapter's decision framework is based on workload characteristics (volume, latency, data sharing requirements), not team comfort. However, operational capability IS a factor — "Can you operate it?" is the third question in the framework, which addresses whether the team can operate COBOL on cloud, not whether they prefer the mainframe. The distinction is between "we like the mainframe" (not valid) and "we can't effectively operate this on cloud" (valid).


7. What does the "95% compatibility trap" refer to in the context of COBOL rehosting platforms?

a) 95% of COBOL programs compile without errors on the rehosting platform
b) The 5% of COBOL that doesn't work is where the most complex, critical, and difficult-to-test code lives
c) 95% of mainframe costs transfer to the cloud
d) The rehosting platform covers 95% of the CICS API surface

Answer: b) The 5% of COBOL that doesn't work is where the most complex, critical, and difficult-to-test code lives

Explanation: When a vendor says "95% compatible," the easy COBOL (standard PERFORM, MOVE, IF, simple file I/O) works everywhere. The 5% that breaks includes edge cases that make your business unique: complex REDEFINES-based data transformations, deeply nested PERFORM THRU logic, CICS programs using HANDLE CONDITION with PUSH/POP, EBCDIC-specific hex literals, and sort exit routines. These are the programs with the most embedded business knowledge and the least test coverage. The chapter advises budgeting testing accordingly.


8. In the hybrid architecture pattern described in Section 34.5, what is Lisa Tran's one-sentence summary?

a) "Cloud for storage, mainframe for compute"
b) "The mainframe does the work. The cloud does the looking."
c) "Everything new goes to cloud; everything old stays on mainframe"
d) "The cloud is the future; the mainframe is the present"

Answer: b) "The mainframe does the work. The cloud does the looking."

Explanation: This summary captures the hybrid architecture pattern: the mainframe handles write operations (OLTP transactions, core batch processing, system-of-record updates) while the cloud handles read operations (reporting, analytics, dashboards, API serving from replicated data). The integration layer (batch extract, CDC, APIs) moves data from the mainframe to the cloud. This is the pattern that CNB, Pinnacle, and SecureFirst all converged on.


9. What are the three data synchronization patterns for hybrid mainframe-cloud architectures, in order of increasing complexity and decreasing latency?

a) API, CDC, Batch
b) Batch Extract, CDC, API Real-Time
c) Real-time replication, event streaming, file transfer
d) Database link, message queue, REST API

Answer: b) Batch Extract, CDC, API Real-Time

Explanation: (1) Nightly Batch Extract has 24-hour latency but is the simplest to implement — extract, transfer, load. (2) Change Data Capture (CDC) provides seconds-to-minutes latency by reading the DB2 recovery log and replicating changes in near-real-time, but is more complex to set up and maintain. (3) API Real-Time (via z/OS Connect) provides millisecond latency with no data replication — the cloud calls the mainframe directly — but is the most complex and consumes mainframe MIPS for every call. The chapter's rule of thumb: "Start with batch extract. Move to CDC only when business requirements demand it. Use real-time API only for user-facing interactions."
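The rule of thumb can be written as a tiny chooser. Only the ordering (batch first, then CDC, real-time API last) comes from the chapter; the function name, thresholds, and return strings are illustrative assumptions.

```python
# Pick the simplest synchronization pattern that meets the freshness
# requirement, per the chapter's ordering. Thresholds are assumed.

def choose_sync_pattern(max_staleness_s, user_facing=False):
    if user_facing:
        # Millisecond latency, no replica -- but every call burns MIPS.
        return "API real-time (z/OS Connect)"
    if max_staleness_s >= 24 * 3600:
        # Simplest to build and operate: extract, transfer, load.
        return "nightly batch extract"
    # Seconds-to-minutes lag, reading the DB2 recovery log.
    return "CDC from the DB2 recovery log"

print(choose_sync_pattern(24 * 3600))            # regulatory reporting feed
print(choose_sync_pattern(60))                   # fraud-detection replica
print(choose_sync_pattern(0, user_facing=True))  # balance inquiry
```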


10. What is the chapter's recommended starting point for any organization considering COBOL-to-cloud migration?

a) Migrate the highest-volume CICS transaction first to prove the platform
b) Move development and test environments to the cloud
c) Perform a complete application portfolio assessment before moving anything
d) Run a proof-of-concept with the most complex workload

Answer: b) Move development and test environments to the cloud

Explanation: Dev/test is recommended as the first workload to move because: MIPS savings are real and immediate, spin-up speed for new environments improves dramatically, teams gain isolation, incompatibilities are discovered safely, junior developers become productive faster, and the team builds cloud operational skills with low risk. The chapter's Cloud Migration Maturity Model places dev/test in Stage 1 (Cloud Curious), before any production workload migration.


Section 2: True/False

11. TRUE or FALSE: Cloud block storage (io2 Block Express, Ultra Disk) can match z/OS I/O performance for random-access patterns by provisioning sufficient IOPS.

Answer: FALSE

Explanation: The performance gap for random-access patterns is architectural, not capacity-related. z/OS uses dedicated I/O processors, FICON channels, and data-in-memory (hiperbatch, data spaces, LSR buffering) that overlap I/O with processing. Cloud block storage processes I/O requests through a general-purpose network-attached storage architecture. Buying more IOPS increases the throughput ceiling but doesn't change the per-I/O latency or the ability to overlap I/O with processing. Rob's VSAM random reads were served from data space buffers on z/OS; on cloud, each random read was an actual disk I/O.


12. TRUE or FALSE: Moving a batch workload from the mainframe off-peak window (02:00) to the cloud will always reduce the mainframe software licensing bill.

Answer: FALSE

Explanation: z/OS software licensing is based on the R4HA (rolling four-hour average) peak MIPS consumption, which typically occurs during online processing hours (10:00-14:00). Removing off-peak batch doesn't change the peak. CNB's mainframe cost actually increased by $84K/year after removing reporting batch because the off-peak workload helped spread MSU consumption in certain pricing models (IBM Tailored Fit Pricing, Country Multiplex Pricing).


13. TRUE or FALSE: Heirloom Computing's approach produces code that can be maintained by standard Java developers.

Answer: FALSE

Explanation: Heirloom compiles COBOL to Java that runs as bytecode on standard JVMs, but the generated code is "COBOL semantics expressed in Java syntax." PERFORM paragraphs become method calls with non-obvious control flow, WORKING-STORAGE becomes class-level fields, and GOTO becomes labeled breaks. The result is not idiomatic Java and confuses both Java developers (who don't recognize the patterns) and COBOL developers (who don't read Java). Debugging is particularly challenging because stack traces reference Java method names derived from COBOL paragraph names.


14. TRUE or FALSE: A CICS transaction running at 500 TPS on cloud with a 1,000-record test database is a reliable predictor of production performance with 50 million records.

Answer: FALSE

Explanation: POC performance with small data sets is not predictive of production performance because: (1) small databases fit entirely in memory, eliminating I/O; (2) database optimizer behavior changes with data volume (index selectivity, join strategies, buffer pool hit ratio); (3) CICS emulation overhead increases with transaction complexity and resource contention; (4) memory pressure from concurrent transactions is absent in a POC; (5) network latency between application and database compounds with multiple calls per transaction when the database can't serve from cache.


15. TRUE or FALSE: The chapter recommends hybrid architecture as a compromise when full cloud migration isn't feasible.

Answer: FALSE

Explanation: The chapter explicitly states that hybrid is NOT a compromise — it's the architecture that puts each workload on its optimal platform. "Not because hybrid was the plan, but because reality pushed the project there. The smart organizations started with hybrid as the plan." The mainframe does what it does best (OLTP, core batch, system of record), the cloud does what it does best (reporting, analytics, dev/test, API serving), and the integration layer connects them.


Section 3: Short Answer

16. Name the three data synchronization patterns for hybrid architectures and give one real use case for each from the chapter's examples.

Answer: (1) Nightly Batch Extract — CNB's regulatory reporting data replicated to S3 at 01:00 for the cloud reporting batch at 02:30. (2) Change Data Capture (CDC) — Pinnacle Health's claims status table replicated from mainframe DB2 to Azure PostgreSQL with 8-15 second lag for fraud detection models. (3) API Real-Time — SecureFirst's mobile banking app calling z/OS Connect APIs for balance inquiry with 80-120ms end-to-end latency.


17. What is "Kwame's Rule of TCO"? State the rule and explain the reasoning behind the 30% threshold.

Answer: "If your honest TCO shows less than 30% savings after including every cost you can think of, don't do it. The costs you can't think of will eat the margin. If it shows 50%+ savings honestly, it's probably real. Between 30-50% is the danger zone where you need to ask whether the operational risk is worth the marginal savings." The 30% threshold exists because cloud migration TCO inevitably has unforeseen costs (operational learning curve, compatibility issues, organizational disruption). If your margin is thin, these hidden costs eliminate the savings. If your margin is substantial (>50%), hidden costs reduce the savings but don't eliminate them.
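The rule reads naturally as a classifier. The 30% and 50% thresholds are the chapter's; the function itself is our sketch, and a real decision would also weigh operational risk, not just the percentage.

```python
# Kwame's Rule of TCO: classify an honest savings estimate.
# Thresholds (30%, 50%) are from the chapter; the wording is ours.

def kwame_rule(savings_pct):
    if savings_pct < 30:
        return "don't do it: unforeseen costs will eat the margin"
    if savings_pct < 50:
        return "danger zone: weigh operational risk against marginal savings"
    return "probably real: the margin can absorb hidden costs"

for pct in (25, 40, 55):
    print(f"{pct}% savings -> {kwame_rule(pct)}")
```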


18. Explain why CNB's vendor claimed 87% savings but the honest analysis showed 55%. Identify the three largest sources of discrepancy.

Answer: (1) Allocated vs. marginal MIPS cost: the vendor used allocated cost ($2M/year for 400 MIPS), but the marginal mainframe savings was $0 (actually -$84K/year) because the workload ran off-peak and didn't affect the R4HA. (2) Missing migration project cost: the vendor estimated $250K; the actual cost was $660K. (3) Missing ongoing operational costs: $227,600/year in cloud operations, Micro Focus maintenance, Direct Connect, security compliance, and DR that weren't in the vendor's proposal. The 55% savings figure came from real marginal analysis over 5 years rather than the vendor's single-year allocated comparison.


19. What are the four stages of the Cloud Migration Maturity Model? Give the key deliverable for each stage.

Answer: (1) Cloud Curious (6-12 months): Dev/test on cloud, team builds skills. Deliverable: cloud landing zone, first dev/test environments. (2) Cloud Capable (12-24 months): First production batch workload, CDC implementation. Deliverable: first production workload on cloud, validated TCO. (3) Hybrid Optimized (24-48 months): Multiple workloads, integrated monitoring, clear placement policies. Deliverable: documented hybrid architecture, operational runbooks. (4) Strategic Hybrid (48+ months): Routine data-driven placement decisions, continuous optimization. Deliverable: mature hybrid operations, optimized TCO.


20. The chapter identifies packed decimal arithmetic as a performance concern for COBOL on cloud. Explain why. What is the difference between z/Architecture hardware decimal and x86 software-emulated decimal? For what type of workload does this difference matter most?

Answer: z/Architecture processors have dedicated hardware instructions for packed decimal (COMP-3) arithmetic — ADD PACKED, SUBTRACT PACKED, MULTIPLY PACKED, etc. — that execute in silicon at hardware speed. On x86 processors, there are no equivalent hardware instructions, so packed decimal operations must be emulated in software by the COBOL runtime (or, in Heirloom's case, handled by Java BigDecimal). Benchmarks show 2-4x slower performance for software emulation. This matters most for programs that do intensive decimal arithmetic: interest calculations, actuarial computations, financial aggregations, premium calculations — any program where the inner loop is arithmetic on COMP-3 fields. For I/O-bound programs, the difference is negligible.
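The need for software decimal off-mainframe can be illustrated in Python, whose decimal module plays the role Java's BigDecimal plays in Heirloom's output: binary floats can't represent most decimal fractions, so COMP-3 semantics must be reproduced by library code rather than a single hardware instruction.

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly, which
# is unacceptable for money. Off-mainframe, COMP-3 semantics are
# reproduced by a software decimal library (Java BigDecimal under
# Heirloom; Python's decimal here) at the 2-4x cost the chapter cites
# versus z/Architecture's hardware packed-decimal instructions.

print(0.10 + 0.20)                        # 0.30000000000000004
print(Decimal("0.10") + Decimal("0.20"))  # 0.30, exactly

# A typical COMP-3 inner-loop operation: interest on a balance,
# rounded to the penny. Every step runs through library code instead
# of one hardware decimal instruction.
balance = Decimal("1000.00")
rate = Decimal("0.0525")
print((balance * rate).quantize(Decimal("0.01")))  # 52.50
```

This is why the gap shows up only in arithmetic-bound programs: an I/O-bound job spends its time waiting on storage, not in the decimal library.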