Chapter 17 Quiz

Instructions

Select the best answer for each question. Questions are designed to test understanding at the Analyze/Evaluate level of Bloom's taxonomy — you will need to analyze scenarios, interpret diagnostic data, and make tuning decisions, not merely recall definitions.


Question 1

A CICS region has MXT=200. Currently 195 tasks are active. A new ATM authorization transaction (priority 200) arrives. What happens?

A) The transaction is dispatched immediately because it has high priority
B) The transaction receives a TCA and enters the ready queue, bringing active tasks to 196
C) The transaction receives a TCA and preempts a lower-priority running task
D) The transaction queues without receiving a TCA because MXT is almost reached

Answer: B

Explanation: MXT has not been reached (195 < 200), so the transaction receives a TCA and becomes active. Priority determines dispatch order from the ready queue, not TCA allocation. The task count goes to 196. Preemption does not occur on the QR TCB — CICS uses cooperative multitasking. The MAXT condition occurs at 200, not before.


Question 2

During a MAXT condition, raising MXT from 250 to 500 causes the region to enter SOS within 3 minutes. What is the most likely explanation?

A) The higher MXT allowed DB2 to process more threads
B) The higher MXT allowed more tasks to accumulate, each consuming storage, until EUDSA was exhausted
C) MXT and EDSALIM must always be changed together
D) SOS is unrelated to MXT — it was caused by a concurrent storage leak

Answer: B

Explanation: This is the core lesson of Section 17.1. Raising MXT during a MAXT condition caused by slow response times (tasks accumulating due to waits) allows more tasks to pile up. Each task consumes EUDSA for its TCA, working storage, and user storage. The previous MXT of 250 was effectively protecting the region from SOS by capping the number of storage-consuming tasks. Raising it to 500 removed that protection.
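The arithmetic behind this failure mode can be sketched in a few lines. The per-task storage figure and the EDSALIM value below are illustrative assumptions chosen to show the mechanism; they are not taken from the question.

```python
# Hypothetical sketch: why raising MXT can convert a MAXT condition into SOS.
# PER_TASK_MB and EDSALIM_MB are illustrative assumptions, not quiz values.

def eudsa_demand_mb(active_tasks: int, per_task_mb: float) -> float:
    """Total EUDSA consumed if every active task holds its own TCA,
    working storage, and user storage."""
    return active_tasks * per_task_mb

EDSALIM_MB = 800       # assumed region storage limit for this sketch
PER_TASK_MB = 2.0      # assumed average storage held per in-flight task

# At the old MXT of 250, demand stays under the limit:
print(eudsa_demand_mb(250, PER_TASK_MB))   # 500 MB -- fits within 800 MB

# After raising MXT to 500, slow transactions pile up to the new cap:
print(eudsa_demand_mb(500, PER_TASK_MB))   # 1000 MB -- exceeds 800 MB: SOS
```

The sketch makes the protective role of the old MXT visible: the cap, not spare storage, was what kept demand below EDSALIM.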


Question 3

What is the PRIMARY benefit of defining a CICS program as CONCURRENCY(THREADSAFE)?

A) The program runs faster because L8 TCBs have higher priority than the QR TCB
B) The program's DB2 calls execute on an L8 TCB, freeing the QR TCB to dispatch other tasks
C) The program can use 64-bit addressing for above-the-bar storage
D) The program becomes reentrant and can be shared across tasks without individual copies

Answer: B

Explanation: THREADSAFE programs execute DB2 calls (and other threadsafe API calls) on L8 open TCBs rather than the QR TCB. The QR TCB is freed to dispatch other tasks during the DB2 wait. This is the single most impactful CICS performance optimization because it eliminates QR TCB blocking from DB2 I/O. L8 TCBs do not have higher priority — they run concurrently with the QR TCB. THREADSAFE is about concurrency, not about speed or storage addressing.


Question 4

A region processes 2,000 TPS. Each transaction uses the QR TCB for 0.3ms. What is the QR TCB busy percentage?

A) 30%
B) 60%
C) 6%
D) 0.6%

Answer: B

Explanation: QR TCB busy % = TPS x CPU per transaction (ms) / 1,000 ms per second x 100. 2,000 x 0.3ms = 600ms per second out of 1,000ms available. That is 60%. At 3,334 TPS with this profile, the QR TCB would be 100% saturated (theoretical maximum).
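The calculation can be checked with a short sketch of the quiz arithmetic (this is not a CICS API, just the formula above as code):

```python
# QR TCB utilization arithmetic from Question 4.

def qr_busy_pct(tps: float, qr_ms_per_txn: float) -> float:
    """Percent of one second of QR TCB time consumed at the given rate."""
    return tps * qr_ms_per_txn / 1000.0 * 100.0

def saturation_tps(qr_ms_per_txn: float) -> float:
    """TPS at which the QR TCB is 100% busy (theoretical maximum)."""
    return 1000.0 / qr_ms_per_txn

print(qr_busy_pct(2000, 0.3))    # 60% -- answer B
print(saturation_tps(0.3))       # about 3,333 TPS, the theoretical ceiling
```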


Question 5

Which CICS storage sub-pool is most impacted by high-volume transactions with large WORKING-STORAGE SECTIONs?

A) ERDSA — because programs are loaded into read-only storage
B) ECDSA — because CICS control blocks grow with task count
C) EUDSA — because each task gets its own copy of working storage in user-key storage
D) ETDSA — because terminal I/O areas scale with transaction volume

Answer: C

Explanation: Each CICS task receives its own copy of the program's WORKING-STORAGE SECTION, allocated from EUDSA (Extended User Dynamic Storage Area). ERDSA holds the program load module (shared across tasks for reentrant programs). ECDSA holds CICS-key control blocks. ETDSA holds terminal-related storage. Working storage is per-task, in user-key storage — EUDSA.


Question 6

EDSALIM is set to 900M. Peak EUDSA usage is 520M. An architect proposes raising EDSALIM to 1500M "for safety." What is the best response?

A) Approve — more headroom prevents SOS
B) Deny — 900M already provides 73% headroom, and excessive EDSALIM masks storage leaks and increases working set size
C) Deny — EDSALIM cannot exceed 1024M on CICS TS 5.6
D) Approve — but only if DSALIM is also increased proportionally

Answer: B

Explanation: The current EDSALIM of 900M provides (900-520)/520 = 73% headroom above peak, which exceeds the recommended 30%. Raising to 1500M would provide nearly 3x peak usage — this delays detection of storage leaks by allowing them to grow much larger before triggering SOS, increases the region's working set size (potentially causing z/OS paging), and provides no operational benefit. EDSALIM can exceed 1024M; that is not a platform limit.
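The headroom arithmetic in this explanation reduces to one ratio; a quick sketch (values from the question, and the 30% figure is the recommendation cited above):

```python
# EDSALIM headroom check from Question 6.

def headroom_pct(edsalim_mb: float, peak_mb: float) -> float:
    """Headroom above observed peak usage, as a percent of peak."""
    return (edsalim_mb - peak_mb) / peak_mb * 100.0

current = headroom_pct(900, 520)     # the existing setting
proposed = headroom_pct(1500, 520)   # the architect's proposal

print(round(current))    # 73 -- already well above the recommended 30%
print(round(proposed))   # 188 -- nearly 3x peak, masking leaks for longer
```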


Question 7

A TRANCLASS named CLSBULK has MAXACTIVE(30). Currently 30 tasks are active in CLSBULK. The overall MXT is 250 with 180 tasks active. A new CLSBULK transaction arrives. What happens?

A) MAXT condition — the transaction cannot get a TCA
B) The transaction gets a TCA (180 < 250) but queues on a TRANCLASS wait until a CLSBULK task completes
C) The transaction is abended with ATCL
D) The transaction bypasses TRANCLASS because overall MXT has capacity

Answer: B

Explanation: TRANCLASS and MXT are independent limits. The transaction can receive a TCA because the overall MXT (250) has not been reached. However, CLSBULK's MAXACTIVE (30) is at capacity. The task receives a TCA, is created, but enters a TRANCLASS wait — it is suspended until a CLSBULK task completes and a slot opens. This is the designed behavior: TRANCLASS protects other workloads without triggering MAXT.
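The two independent limits can be pictured as a decision sequence. This is an illustrative model of the admission logic described above, not actual CICS internals:

```python
# Illustrative two-stage admission: overall MXT first, then TRANCLASS MAXACTIVE.
# The function and return strings are hypothetical, for explanation only.

def admit(active_total: int, mxt: int,
          active_in_class: int, maxactive: int) -> str:
    if active_total >= mxt:
        # No TCA at all -- this is the MAXT condition.
        return "MAXT: no TCA"
    if active_in_class >= maxactive:
        # TCA allocated, but the task is suspended in a TRANCLASS wait.
        return "TCA allocated, TRANCLASS wait"
    return "TCA allocated, dispatchable"

# The Question 7 scenario: MXT has room (180 < 250) but CLSBULK is full (30/30).
print(admit(180, 250, 30, 30))   # TCA allocated, TRANCLASS wait
```

The ordering matters: the MXT check gates TCA allocation, while the TRANCLASS check only gates dispatch, which is why the task exists (and consumes storage) while it waits.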


Question 8

CMDT is set to 50. Currently 50 tasks hold active DB2 threads. A 51st task issues an EXEC SQL statement. THREADWAIT is set to YES. What happens?

A) The task abends with a -923 SQL code
B) The task waits for a DB2 thread to become available
C) CICS creates an additional DB2 thread beyond the CMDT limit
D) The task is routed to another AOR by CICSPlex SM

Answer: B

Explanation: With THREADWAIT(YES), a task that cannot obtain a DB2 thread waits rather than receiving an abend. The task remains active (consuming its TCA and storage) but is suspended until a thread becomes available. With THREADWAIT(NO), the task would receive an abend. CICS cannot exceed CMDT — it is a hard limit on the DB2CONN definition. CICSPlex SM routing occurs before task creation, not during DB2 thread allocation.


Question 9

During an incident, you observe QR TCB busy at 98%. Which of the following is the LEAST likely cause?

A) A COBOL program with a CPU-intensive PERFORM loop containing no EXEC CICS commands
B) All programs running as QUASIRENT with heavy DB2 workload
C) A VSAM file on a slow DASD volume causing long I/O waits
D) A high-priority monitoring transaction running at 100 TPS with 5ms CPU each

Answer: C

Explanation: VSAM I/O waits release the QR TCB — the task enters a wait state while the I/O completes, and the dispatcher assigns the QR TCB to another ready task. Slow DASD would increase elapsed time but not QR TCB busy time. Option A directly monopolizes the QR. Option B forces all DB2 waits to block the QR (because QUASIRENT programs hold the QR during DB2 calls). Option D consumes 100 x 5ms = 500ms of QR per second = 50%.


Question 10

An SMF 110 Type 1 record for transaction XFER shows: elapsed 800ms, CPU 3ms, DB2 wait 750ms, dispatch wait 5ms, file I/O wait 2ms. Where is the performance problem?

A) CICS dispatcher — high dispatch wait
B) DB2 — 94% of elapsed time is DB2 wait
C) Application code — CPU is too high
D) VSAM file I/O — file operations are slow

Answer: B

Explanation: DB2 wait of 750ms dominates the 800ms elapsed time (93.75%). The CPU time of 3ms is minimal. Dispatch wait of 5ms indicates the CICS dispatcher is not a bottleneck. File I/O of 2ms is negligible. The investigation should focus on DB2: lock contention, access path regression (bad EXPLAIN), tablespace scan, or DB2 resource contention.
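Reading a breakdown like this reduces to finding the dominant component. A sketch using the question's numbers (the dictionary keys are illustrative labels, not actual SMF 110 field names):

```python
# Wait-time triage for the Question 10 XFER record.

elapsed_ms = 800
components = {              # illustrative labels, not SMF 110 field names
    "cpu": 3,
    "db2_wait": 750,
    "dispatch_wait": 5,
    "file_io_wait": 2,
}

dominant = max(components, key=components.get)
share = components[dominant] / elapsed_ms * 100.0

print(dominant)             # db2_wait
print(round(share, 2))      # 93.75 -- focus the investigation on DB2
```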


Question 11

Which diagnostic tool would you use FIRST to determine whether a response time degradation is caused by CICS or DB2?

A) CEDF — step through the transaction interactively
B) Auxiliary trace — capture detailed CICS internal operations
C) SMF 110 Type 1 records — examine the wait-time breakdown
D) CEMT I TASK — view the current task list

Answer: C

Explanation: SMF 110 Type 1 records provide the wait-time breakdown for every transaction: dispatcher wait, DB2 wait, file I/O wait, and MRO wait. This immediately identifies whether the dominant wait is in CICS or DB2. CEDF and auxiliary trace are deeper diagnostic tools used after you know where to look. CEMT I TASK shows current state but not historical timing.


Question 12

A CICS region's EUDSA grows by 25 MB per hour while active task counts remain stable at 120. What does this indicate?

A) Normal behavior — EUDSA grows as programs are loaded
B) A storage leak — a program is performing GETMAIN without corresponding FREEMAIN
C) The EDSALIM is too low and needs to be increased
D) Working storage is being allocated for new program versions after NEWCOPY

Answer: B

Explanation: Stable task counts with growing EUDSA indicates that per-task storage is not being freed when tasks complete. This is the classic signature of a storage leak: GETMAIN without FREEMAIN, or containers created but not released. Normal program loading affects ERDSA/ESDSA (program storage), not EUDSA (user storage). Increasing EDSALIM would mask the leak, not fix it.
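A leak at a steady rate gives a predictable time-to-SOS, which is worth computing as soon as the leak is confirmed. The EDSALIM and current-usage figures below are hypothetical; only the 25 MB/hour rate comes from the question:

```python
# Time-to-SOS projection for a steady storage leak.
# The 900 MB limit and 600 MB current usage are hypothetical examples.

def hours_to_sos(edsalim_mb: float, current_mb: float,
                 leak_mb_per_hour: float) -> float:
    """Hours until a constant-rate leak exhausts remaining EUDSA headroom."""
    return (edsalim_mb - current_mb) / leak_mb_per_hour

print(hours_to_sos(900, 600, 25))   # 12.0 hours until SOS at this rate
```

A number like this turns "we have a leak" into "we have until tonight", which is what the on-call team actually needs.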


Question 13

CNB's capacity planning shows that at 12% annual growth, their AOR will reach 98% of MXT in 3 years. What is the BEST course of action?

A) Raise MXT now to accommodate 5 years of growth
B) Plan horizontal scaling (add a third AOR) for deployment in Year 2
C) Wait until Year 3 and raise MXT when needed
D) Reduce transaction volume through business process optimization

Answer: B

Explanation: Kwame's rule: plan horizontal scaling 6 months before you need it. Deploying at Year 2 provides headroom before the Year 3 limit. Raising MXT now does not address the underlying scaling need — it pushes the problem to storage or CPU bottlenecks. Waiting until Year 3 risks being too late — production CICS topology changes take weeks. Business process optimization is beyond the system architect's scope.


Question 14

A CICS region reports 0 MAXT conditions but 500 TRANCLASS waits for CLSBULK in the past hour. Is this a problem?

A) Yes — any TRANCLASS wait indicates a performance issue
B) No — TRANCLASS waits for CLSBULK indicate the class limit is protecting higher-priority workloads, which is the designed behavior
C) Yes — 500 waits per hour means CLSBULK's MAXACTIVE should be raised
D) Cannot determine without knowing the CLSBULK SLA

Answer: D

Explanation: TRANCLASS waits for CLSBULK may be desirable (protecting critical workloads) or undesirable (causing CLSBULK transactions to miss their SLA). The answer depends on whether CLSBULK transactions are still meeting their response time targets. If CLSBULK has a 10-second SLA and transactions complete in 6 seconds despite the queuing, the waits are acceptable. If they are missing their SLA, MAXACTIVE should be reviewed — but the investigation should first confirm that raising MAXACTIVE would not degrade critical workloads.


Question 15

What is the relationship between CICS task priority and z/OS WLM dispatching priority?

A) CICS task priority overrides WLM dispatching priority
B) WLM dispatching priority determines CPU access for the CICS region; CICS task priority determines which task runs within the region
C) They are the same mechanism — CICS task priority feeds directly into WLM
D) WLM priority only applies to batch; CICS manages its own dispatching independently

Answer: B

Explanation: WLM and CICS operate at different levels. WLM determines how much CPU the CICS region (address space) receives relative to other z/OS work. Within the region, the CICS dispatcher uses task priority to decide which ready task gets the QR TCB or an open TCB next. A high-priority CICS task in a low-priority WLM service class will be first in the CICS ready queue but may wait for the z/OS dispatcher to give the region CPU time.


Question 16

During a code deployment, a program's WORKING-STORAGE grows from 200 KB to 1.5 MB. The program handles 300 concurrent tasks at peak. What is the additional EUDSA impact?

A) 1.3 MB — just the difference for one copy
B) 390 MB — 1.3 MB per task times 300 concurrent tasks
C) 450 MB — 1.5 MB per task times 300 concurrent tasks
D) 1.5 MB — programs share working storage across tasks

Answer: B

Explanation: Each task gets its own copy of working storage. The increase is (1.5 MB - 0.2 MB) x 300 tasks = 390 MB of additional EUDSA consumption. This is why CNB requires architect approval for any CICS program with more than 500 KB of working storage. Answer D is incorrect — CICS allocates a separate working storage copy per task (unlike the program load module, which is shared for reentrant programs).
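The per-task multiplication is the whole story here; a sketch using the question's numbers:

```python
# Additional EUDSA impact of working-storage growth, Question 16.

def added_eudsa_mb(old_ws_mb: float, new_ws_mb: float,
                   concurrent_tasks: int) -> float:
    """Each task gets its own working-storage copy, so the delta multiplies
    by the peak concurrent task count."""
    return (new_ws_mb - old_ws_mb) * concurrent_tasks

print(round(added_eudsa_mb(0.2, 1.5, 300)))   # 390 MB of new EUDSA demand
```

This is the calculation to run before approving any deployment that enlarges WORKING-STORAGE: a modest per-copy increase becomes a large region-level number at high concurrency.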


Question 17

An auxiliary trace filtered to the dispatcher domain shows that task XFER-4521 spent 200ms in "WAIT-OLDW" state. What does this indicate?

A) The task was waiting for QR TCB dispatching — dispatcher congestion
B) The task was waiting for a VSAM I/O to complete
C) The task was waiting for an MRO response from a remote region
D) The task was waiting for a DB2 thread (CMDT limit)

Answer: A

Explanation: WAIT-OLDW (also seen as WAIT-MVS) in the dispatcher domain trace indicates the task was in the ready queue waiting for the QR TCB. This is dispatcher wait time — the QR TCB was busy with another task. DB2 thread waits show as specific DB2-related wait states. MRO waits show as IRC (inter-region communication) wait states. VSAM I/O shows as file control wait states.


Question 18

You manage 8 CICS AOR regions across 2 LPARs. Which combination of monitoring provides the BEST balance of coverage and operational cost?

A) CEDF on all 8 regions for real-time diagnostics
B) CMF/SMF 110 at 15-minute intervals on all regions, plus self-aware transaction instrumentation in the top 10 transactions
C) Auxiliary trace running continuously on all 8 regions
D) CEMT I TASK polling every 60 seconds from an automated script

Answer: B

Explanation: CMF/SMF 110 provides continuous, low-overhead, comprehensive performance data for historical analysis and trend detection. Self-aware transaction instrumentation provides real-time alerting for individual transaction performance anomalies. Together they cover both top-down and bottom-up monitoring. CEDF is interactive and single-transaction — not suitable for 8 regions. Continuous auxiliary trace generates terabytes of data and adds significant overhead. CEMT polling captures only point-in-time snapshots and misses transient events.


Question 19

In the CNB MAXT incident (Section 17.1), why did pausing the DB2 REORG resolve the MAXT condition within 30 seconds?

A) The REORG was consuming CPU that the CICS region needed
B) The REORG's drain lock was causing 35% of transactions to wait for seconds instead of milliseconds; removing the lock allowed tasks to complete and free their TCAs
C) Pausing the REORG freed DB2 threads for CICS use
D) The REORG was causing coupling facility contention that affected CICS shared TS

Answer: B

Explanation: The REORG's drain lock on the general ledger tablespace caused transactions that accessed GL to wait for the lock, extending their response time from 50ms to 2-4 seconds. This caused task accumulation — tasks held TCAs much longer than normal. Pausing the REORG released the drain lock. The blocked transactions immediately completed, freeing their TCAs. The task backlog drained within seconds because new transactions resumed normal 50ms response times.


Question 20

You are designing a capacity plan for a new CICS region. You calculate MXT=180 using the task formula and storage-bounded MXT=250 using the storage formula. Which value should you use for MXT, and why?

A) 180 — use the lower value because it is the more conservative choice
B) 250 — use the storage-bounded value because it represents the true physical capacity
C) 215 — use the average of the two values
D) 180 — but validate with performance testing, and set EDSALIM to ensure the storage-bounded limit stays above MXT with a safety margin

Answer: D

Explanation: The task formula gives the expected MXT based on workload characteristics. The storage-bounded MXT gives the physical upper limit before SOS. You should set MXT at the task-calculated value (rounded up with headroom) and ensure the storage-bounded limit exceeds MXT by a comfortable margin. If they are too close (e.g., task=180, storage=190), you risk SOS during traffic spikes. A gap of 70 tasks (250-180) provides adequate margin. Performance testing validates the calculations against real behavior.
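The margin check described here can be sketched as a small validation step. The two MXT values come from the question; the minimum-margin threshold is an illustrative assumption, not a product-defined value:

```python
# Validating task-formula MXT against the storage-bounded ceiling, Question 20.
# min_margin is an illustrative threshold, not a CICS-defined parameter.

def mxt_plan(task_mxt: int, storage_mxt: int, min_margin: int = 40) -> dict:
    """Use the task-formula MXT, but confirm the storage-bounded ceiling
    leaves a comfortable margin above it before SOS becomes possible."""
    margin = storage_mxt - task_mxt
    return {
        "mxt": task_mxt,           # the value actually set as MXT
        "margin_tasks": margin,    # spare capacity before the storage ceiling
        "safe": margin >= min_margin,
    }

plan = mxt_plan(180, 250)
print(plan)   # margin of 70 tasks above MXT -- adequate; set MXT=180
```

If the margin came back below the threshold (the task=180/storage=190 case in the explanation), the plan would be to raise EDSALIM or reduce per-task storage before going live, then re-run the check.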