Chapter 20 Quiz: Event-Driven Architecture with COBOL

Instructions: Select the best answer for each question. Each question has exactly one correct answer unless otherwise noted.


Question 1

What is the primary architectural advantage of event-driven architecture over request/reply for multi-consumer message flows?

A) Events are faster than request/reply messages
B) The producer publishes once without knowing the consumers, and MQ distributes to all subscribers independently
C) Events don't require persistent messages
D) Event-driven architecture eliminates the need for error handling

Answer: B

Explanation: The core advantage of EDA is decoupling the producer from the consumers. The producer publishes an event to a topic without knowing who subscribes. MQ handles fan-out to all subscribers. This means adding a new consumer requires only a subscription definition — no changes to the producer. In request/reply, the sender must explicitly send to each receiver.


Question 2

Which MQ trigger type fires only when the first message arrives on an empty queue?

A) TRIGTYPE(EVERY)
B) TRIGTYPE(DEPTH)
C) TRIGTYPE(FIRST)
D) TRIGTYPE(INITIAL)

Answer: C

Explanation: TRIGTYPE(FIRST) fires a trigger only when a message arrives on a queue that was previously empty (depth = 0). After the trigger fires, no additional triggers occur until the queue returns to empty and another message arrives. This is the standard choice for high-volume queues where a single triggered program should drain all available messages.
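The MQSC definitions for FIRST-triggering look roughly like this sketch — the queue, process, and transaction names here are hypothetical, not from the chapter:

```
* Illustrative MQSC -- object names are hypothetical
DEFINE PROCESS('PAYMENT.PROCESS') +
       APPLTYPE(CICS) +
       APPLICID('PAY1')

DEFINE QLOCAL('CNB.PAYMENTS.QUEUE') +
       TRIGGER +
       TRIGTYPE(FIRST) +
       INITQ('SYSTEM.CICS.INITIATION.QUEUE') +
       PROCESS('PAYMENT.PROCESS')
```

With TRIGTYPE(FIRST), transaction PAY1 is started once when the queue goes from empty to non-empty, and is expected to loop on MQGET until the queue is drained.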


Question 3

A developer configures TRIGTYPE(EVERY) on a queue that receives 50,000 messages in a burst. What is the most likely production impact?

A) Messages are processed 50,000 times faster than normal
B) The queue manager runs out of disk space
C) The CICS region hits MAXT as CKTI attempts to start 50,000 transaction instances simultaneously
D) The trigger monitor ignores all but the first message

Answer: C

Explanation: TRIGTYPE(EVERY) generates a trigger message for every arriving message. With 50,000 messages arriving in a burst, CKTI attempts to start 50,000 instances of the triggered transaction. The CICS region hits its maximum task limit (MAXT), causing SOS (Short on Storage) and cascading abends. This is a well-known production failure mode — use TRIGTYPE(FIRST) for high-volume queues.


Question 4

In a CICS environment, which component reads trigger messages from the initiation queue and starts the specified transactions?

A) The channel initiator (CHIN)
B) The trigger monitor (CKTI)
C) The queue manager (MSTR)
D) The MQ adapter (MQAD)

Answer: B

Explanation: CKTI is the CICS-supplied trigger monitor transaction. It runs as a long-running task in the CICS region, reading trigger messages from SYSTEM.CICS.INITIATION.QUEUE and starting the transaction specified in the process definition's APPLICID field. If CKTI stops running, no triggered transactions will start.


Question 5

How does a CICS triggered program discover which queue triggered it?

A) The queue name is hardcoded in the program
B) The program issues EXEC CICS RETRIEVE to get the trigger message, which contains the queue name
C) The program reads the CICS system area (CSA) for the trigger information
D) CKTI passes the queue name through a COMMAREA

Answer: B

Explanation: CKTI passes the trigger message to the started transaction via CICS's RETRIEVE mechanism. The program issues EXEC CICS RETRIEVE to obtain the trigger message (the MQTMC2 structure, supplied by the CMQTMC2 copybook), whose MQTMC-QNAME field identifies the queue that triggered. This allows a single program to serve as the triggered handler for multiple queues.
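A minimal sketch of the RETRIEVE pattern, assuming the IBM-supplied CMQTMC2 copybook; the working-storage names are illustrative:

```
       WORKING-STORAGE SECTION.
       01  WS-TRIGGER-MSG.
           COPY CMQTMC2.
       01  WS-RETRIEVE-LEN         PIC S9(4) COMP.

       PROCEDURE DIVISION.
           MOVE LENGTH OF WS-TRIGGER-MSG TO WS-RETRIEVE-LEN
           EXEC CICS RETRIEVE
                INTO(WS-TRIGGER-MSG)
                LENGTH(WS-RETRIEVE-LEN)
           END-EXEC
      *    MQTMC-QNAME now identifies the triggering queue --
      *    MQOPEN it and loop on MQGET until the queue is drained.
```

Because the queue name arrives in the trigger message rather than being hardcoded, the same handler program can be named in the process definitions of several queues.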


Question 6

What is the primary purpose of CICS Event Processing (EP)?

A) To replace MQ as the messaging infrastructure in CICS
B) To capture business events from CICS transactions without modifying application code
C) To provide real-time monitoring of CICS region performance
D) To encrypt events before they are published to MQ

Answer: B

Explanation: CICS EP captures events by intercepting CICS API commands (FILE WRITE, LINK, START, etc.) at runtime, extracting specified data, and emitting events through an adapter (typically MQ). The key value proposition is that no application code changes are required — the event binding is a configuration artifact deployed as a CICS bundle.


Question 7

A CICS event binding is configured with the default asynchronous EP adapter. The application program writes a VSAM record and then abends before SYNCPOINT. What happens to the event?

A) The event is rolled back along with the VSAM write
B) The event is emitted to MQ even though the VSAM write was rolled back
C) The event is held in a pending state until the transaction outcome is determined
D) CICS EP does not emit events for transactions that abend

Answer: B

Explanation: By default, CICS EP emission is asynchronous — it is not part of the application's unit of work. The event is emitted when the capture point is reached (the FILE WRITE command), regardless of whether the transaction later commits or rolls back. This creates "phantom events" — events for operations that didn't persist. For most use cases (monitoring, analytics) this is acceptable. For cases requiring transactional consistency, use application-level MQPUT under syncpoint instead.
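The transactional alternative looks roughly like this sketch — handle and buffer names are illustrative, and the MQ constants come from the CMQV copybook:

```
      * Sketch: emit the event inside the unit of work, so a later
      * rollback also backs out the message.
           COMPUTE MQPMO-OPTIONS =
                   MQPMO-SYNCPOINT + MQPMO-FAIL-IF-QUIESCING
           CALL 'MQPUT' USING W-HCONN, W-HOBJ,
                              W-MQMD, W-MQPMO,
                              W-BUFFLEN, W-BUFFER,
                              W-COMPCODE, W-REASON
      *    If the transaction abends before SYNCPOINT, the PUT is
      *    backed out together with the VSAM/DB2 updates -- no
      *    phantom event is published.
```

The trade-off is that this requires application code changes, which is exactly what CICS EP was designed to avoid.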


Question 8

In MQ pub/sub, what is the difference between a durable and a non-durable subscription?

A) Durable subscriptions use persistent messages; non-durable use non-persistent
B) Durable subscriptions survive subscriber disconnection and queue manager restart; non-durable subscriptions are deleted when the subscriber disconnects
C) Durable subscriptions support wildcards; non-durable do not
D) Durable subscriptions deliver to local queues; non-durable deliver to remote queues

Answer: B

Explanation: A durable subscription persists across subscriber disconnections and queue manager restarts. Messages published while the subscriber is disconnected are stored and delivered when it reconnects. A non-durable subscription exists only while the subscriber is connected — when it disconnects, the subscription is removed and subsequent published messages are not stored. Production event processing should always use durable subscriptions.


Question 9

Which MQ topic wildcard matches zero or more levels in the topic hierarchy?

A) * (asterisk)
B) + (plus)
C) # (hash)
D) ? (question mark)

Answer: C

Explanation: The # wildcard matches zero or more levels in the topic hierarchy. For example, CNB/EVENTS/# matches CNB/EVENTS/WIRE/SUBMITTED, CNB/EVENTS/ACH/PROCESSED, and any other topic under CNB/EVENTS/. The + wildcard matches exactly one level — CNB/EVENTS/+/SUBMITTED matches CNB/EVENTS/WIRE/SUBMITTED and CNB/EVENTS/ACH/SUBMITTED but not CNB/EVENTS/WIRE/TRANSFER/SUBMITTED.
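In MQSC, the two wildcards appear in subscription topic strings like this sketch — subscription and queue names are hypothetical:

```
* Illustrative MQSC -- names are hypothetical
* Multi-level (#): every event under CNB/EVENTS/
DEFINE SUB('AUDIT.ALL.EVENTS') +
       TOPICSTR('CNB/EVENTS/#') +
       DEST('AUDIT.EVENTS.QUEUE')

* Single-level (+): SUBMITTED events for any one payment type
DEFINE SUB('SUBMITTED.EVENTS') +
       TOPICSTR('CNB/EVENTS/+/SUBMITTED') +
       DEST('SUBMITTED.EVENTS.QUEUE')
```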


Question 10

Why must event consumers be idempotent in an MQ-based event-driven system?

A) Because MQ delivers messages exactly once
B) Because MQ guarantees at-least-once delivery, meaning duplicate events can occur due to retries, trigger re-fires, and recovery processing
C) Because COBOL programs cannot detect duplicate messages
D) Because pub/sub inherently delivers multiple copies to the same subscriber

Answer: B

Explanation: MQ provides at-least-once delivery, not exactly-once. Duplicates can arise from: network retries during channel communication, trigger re-fires when a triggered program terminates before the queue reaches zero depth, and recovery processing after failures. An idempotent consumer produces the same result whether it processes an event once or multiple times — typically implemented by checking an event log table before processing.


Question 11

What is the correct idempotency implementation for a COBOL event consumer?

A) Check the event ID in a log table and process in a separate unit of work from the log insert
B) Check the event ID in a log table; if not found, process the event and insert the log entry in the same unit of work as the business processing
C) Check the MQ message ID for duplicates using MQGET with match options
D) Use MQGMO-SYNCPOINT to prevent duplicate delivery

Answer: B

Explanation: The event log check and the business processing must be in the same DB2 unit of work. If they're in separate UOWs, a failure between the business commit and the log insert means the event was processed but not logged — and will be processed again on retry, breaking idempotency. The correct pattern: check log → process event → insert log entry → commit all together.
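The pattern can be sketched as follows — the EVENT_LOG table, host variables, and paragraph names are illustrative, not from the chapter:

```
      * Sketch of the idempotent-consumer pattern. Everything below
      * runs inside a single unit of work.
           EXEC SQL
                SELECT 1 INTO :WS-FOUND
                FROM   EVENT_LOG
                WHERE  EVENT_ID = :WS-EVENT-ID
           END-EXEC
           IF SQLCODE = +100
      *        Event not seen before: process, then log
               PERFORM 2000-APPLY-BUSINESS-UPDATES
               EXEC SQL
                    INSERT INTO EVENT_LOG (EVENT_ID, PROCESSED_TS)
                    VALUES (:WS-EVENT-ID, CURRENT TIMESTAMP)
               END-EXEC
           END-IF
      *    One SYNCPOINT covers the MQGET, the business updates,
      *    and the log insert -- they commit or back out together.
           EXEC CICS SYNCPOINT END-EXEC
```

On a duplicate delivery, the SELECT finds the event ID, the business paragraph is skipped, and the MQGET is still committed, so the duplicate is consumed harmlessly.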


Question 12

In the saga pattern, what is a compensating action?

A) An action that retries a failed step
B) An action that undoes the effect of a previously completed step when a later step fails
C) An action that logs the failure for audit purposes
D) An action that sends an alert to operations

Answer: B

Explanation: A compensating action reverses the effect of a step that completed successfully but must be undone because a later step in the saga failed. For example, if "debit source account" succeeded but "credit destination account" failed, the compensating action for the debit step is "credit source account" (refund). Compensating actions are not rollbacks — they're new forward actions that logically undo previous actions.


Question 13

CNB uses orchestration (not choreography) for their wire transfer saga. What is the primary reason Kwame gives for this choice?

A) Orchestration is faster than choreography
B) Orchestration supports more concurrent steps
C) With orchestration, the entire saga flow is readable in a single program — top to bottom — making it easier to understand and debug
D) Choreography is not supported on z/OS

Answer: C

Explanation: Orchestration uses a central coordinator program that drives the saga step by step. The entire flow — forward actions and compensating actions — is visible in one program. Choreography distributes the flow across event subscriptions, making it harder to trace the complete business process. For production debugging at 2 AM, Kwame prefers a single program he can read sequentially.
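In an orchestrator, a forward step and its compensation sit side by side, as in this sketch — the paragraph names and the WS-STEP-OK flag are illustrative:

```
      * Sketch of one orchestrated saga step with compensation.
           PERFORM 1000-DEBIT-SOURCE-ACCT
           IF WS-STEP-OK
               PERFORM 2000-CREDIT-DEST-ACCT
               IF NOT WS-STEP-OK
      *            Later step failed: compensate the completed
      *            debit with a new forward action (a refund),
      *            not a rollback
                   PERFORM 9000-CREDIT-SOURCE-ACCT
               END-IF
           END-IF
```

Reading top to bottom gives the whole business flow, including every failure path, which is the debuggability argument for orchestration.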


Question 14

What is CQRS (Command Query Responsibility Segregation) as applied to mainframe systems?

A) Using separate CICS regions for online and batch processing
B) Separating the write path (normalized transactional tables) from the read path (denormalized query-optimized tables), with events bridging the two
C) Using separate DB2 subsystems for commands and queries
D) Routing read operations to the TOR and write operations to the AOR

Answer: B

Explanation: CQRS separates the write model (optimized for transactional correctness — normalized, constrained) from the read model (optimized for query performance — denormalized, pre-aggregated). Events published by the write side are consumed by projection programs that maintain the read model. On the mainframe, this maps naturally to existing patterns: CICS online transactions write to normalized tables, and event-driven projections maintain denormalized read tables.
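A projection program consuming write-side events might look like this sketch — the event type, read table, and host variables are hypothetical:

```
      * Sketch of a CQRS projection consumer: it applies events
      * from the write side to a denormalized read table.
           EVALUATE WS-EVENT-TYPE
               WHEN 'BALANCE_UPDATED'
                   EXEC SQL
                        UPDATE ACCT_SUMMARY_READ
                        SET    CURR_BALANCE  = :WS-NEW-BALANCE,
                               LAST_EVENT_TS = CURRENT TIMESTAMP
                        WHERE  ACCT_ID = :WS-ACCT-ID
                   END-EXEC
               WHEN OTHER
                   CONTINUE
           END-EVALUATE
```

Queries then read ACCT_SUMMARY_READ directly, never touching the normalized transactional tables.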


Question 15

What is the "God Event" anti-pattern?

A) An event that triggers all consumers simultaneously
B) An event with an excessively large payload containing all possible fields, coupling every consumer to every field and making schema changes affect everyone
C) An event that bypasses security checks
D) An event that is published to every topic in the hierarchy

Answer: B

Explanation: The God Event is a single massive event type (e.g., 200 fields) that tries to capture everything about a business transaction. Every consumer must parse the entire event even though each uses only a subset of fields. Schema changes to any field affect every consumer. The fix: publish domain-specific events (e.g., BALANCE_UPDATED, FEE_CHARGED) instead of one monolithic event.


Question 16

Why should event sourcing implementations include periodic snapshots?

A) Snapshots reduce disk usage by compressing events
B) Snapshots bound the reconstruction time — without them, reconstructing state requires replaying all events from the beginning, which grows linearly with event count
C) Snapshots are required by MQ for topic persistence
D) Snapshots enable pub/sub subscriptions to receive historical events

Answer: B

Explanation: Without snapshots, reconstructing an entity's state requires replaying every event from the beginning. For an account with 10 million events, this could take minutes. Snapshots create periodic checkpoints — to reconstruct state, you start from the nearest snapshot and replay only events after it. This bounds reconstruction time to the snapshot interval regardless of total event count.
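Snapshot-bounded reconstruction can be sketched as below — the ACCT_SNAPSHOT and ACCT_EVENT tables and their columns are illustrative:

```
      * Sketch: load the latest snapshot, then replay only the
      * events recorded after it.
           EXEC SQL
                SELECT SNAP_STATE, SNAP_SEQ
                INTO  :WS-STATE, :WS-SNAP-SEQ
                FROM   ACCT_SNAPSHOT
                WHERE  ACCT_ID = :WS-ACCT-ID
                ORDER BY SNAP_SEQ DESC
                FETCH FIRST 1 ROW ONLY
           END-EXEC

           EXEC SQL DECLARE EVT-CSR CURSOR FOR
                SELECT EVENT_DATA
                FROM   ACCT_EVENT
                WHERE  ACCT_ID   = :WS-ACCT-ID
                AND    EVENT_SEQ > :WS-SNAP-SEQ
                ORDER BY EVENT_SEQ
           END-EXEC
      *    OPEN the cursor and apply each event to WS-STATE in
      *    sequence; the replay cost is bounded by the snapshot
      *    interval, not the account's lifetime event count.
```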


Question 17

A subscription queue's depth has been growing steadily for 3 hours and is now at 80% of MAXDEPTH. No alerts have been generated. What anti-pattern is this?

A) God Event
B) Topic Explosion
C) Fire-and-Forget Events
D) Subscriber Queue Without Monitoring

Answer: D

Explanation: The "Subscriber Queue Without Monitoring" anti-pattern occurs when a subscription queue lacks depth alerts. Events accumulate without anyone noticing until the queue hits MAXDEPTH, at which point MQ routes new events to the dead letter queue or rejects publications. Every subscription queue must have depth monitoring with alerts at an appropriate threshold (typically 80% of MAXDEPTH).
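MQ can raise a queue-depth-high event at that threshold itself, roughly as in this MQSC sketch — the queue name and MAXDEPTH value are hypothetical:

```
* Illustrative MQSC -- raise a depth-high event at 80% of MAXDEPTH
ALTER QLOCAL('WIRE.EVENTS.SUBSCRIBER.QUEUE') +
      MAXDEPTH(100000) +
      QDEPTHHI(80) +
      QDPHIEV(ENABLED)

* Performance events must also be enabled at the queue manager
ALTER QMGR PERFMEV(ENABLED)
```

The resulting event message goes to SYSTEM.ADMIN.PERFM.EVENT, where a monitoring tool can pick it up and alert operations before the queue fills.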


Question 18

In MQ pub/sub, when a message is published to a topic with 5 subscribers, how many physical message copies are created?

A) 1 — MQ uses reference counting
B) 5 — one copy per subscriber's destination queue
C) 6 — one for the topic and one per subscriber
D) It depends on the DEFPSIST setting

Answer: B

Explanation: MQ creates a physical copy of the published message for each subscriber's destination queue. With 5 subscribers, 5 copies are created and delivered to the respective subscription queues. This has storage and performance implications — publishing to a topic with N subscribers requires approximately N times the resources of a single point-to-point PUT. Plan disk capacity and CPU accordingly for high-fan-out topics.


Question 19

Which of the following is NOT a valid CICS EP capture point?

A) EXEC CICS WRITE FILE
B) EXEC CICS LINK PROGRAM
C) Pure COBOL computational logic (COMPUTE, PERFORM, IF/ELSE) with no EXEC CICS commands
D) EXEC CICS START TRANSID

Answer: C

Explanation: CICS EP intercepts CICS API commands at the CICS runtime level. It can capture events from FILE operations, LINK/XCTL program calls, START commands, and other CICS APIs. However, EP cannot observe pure COBOL logic that doesn't invoke CICS commands — there's no CICS interception point for a COMPUTE statement or a PERFORM loop. To emit events from purely computational logic, you must modify the application to include explicit MQPUT calls.


Question 20

What is the recommended approach for creating MQ subscriptions in a production financial system?

A) Programmatic subscriptions created by each consumer application at startup
B) Administrative subscriptions created by the MQ admin using MQSC commands, managed alongside queue definitions
C) Dynamic subscriptions created using model queues
D) Temporary subscriptions that are recreated after each queue manager restart

Answer: B

Explanation: Administrative subscriptions are created and managed by the MQ operations team using MQSC commands. They exist independently of any application, are visible in the MQ configuration, and survive queue manager restarts. This provides operational transparency — the operations team can see every subscription, monitor every subscription queue, and manage subscriptions without application involvement. Programmatic subscriptions are invisible to operations and their lifecycle is tied to the application, which creates operational risk.
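An administrative, durable subscription is a one-line MQSC definition, as in this sketch — the subscription, topic string, and destination queue names are hypothetical:

```
* Illustrative MQSC -- an administrative, durable subscription
DEFINE SUB('FRAUD.WIRE.EVENTS') +
       TOPICSTR('CNB/EVENTS/WIRE/#') +
       DEST('FRAUD.WIRE.EVENTS.QUEUE') +
       DURABLE(YES)
```

Because the definition lives in the MQ configuration alongside the queue definitions, operations can display it, monitor its destination queue, and delete or alter it without touching the consuming application.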