Chapter 13 Quiz

Instructions

Select the best answer for each question. Questions are designed to test understanding at the Evaluate level of Bloom's taxonomy — you'll need to analyze scenarios and make architectural judgments, not merely recall definitions.


Question 1

What is the fundamental distinction between viewing CICS as a "transaction manager" versus an "application server"?

A) A transaction manager runs programs faster than an application server
B) A transaction manager ensures atomicity of work across multiple resources and regions; an application server merely executes programs
C) A transaction manager supports only batch processing while an application server supports online processing
D) A transaction manager requires CICSPlex SM while an application server does not

Answer: B
Explanation: This is the chapter's threshold concept. CICS's core role is managing transactions — ensuring work either completes fully or rolls back entirely, coordinating across DB2, VSAM, journals, and multiple regions. An application server just runs code. This distinction drives every topology decision because the transaction manager must coordinate recovery, routing, and resource locking across the entire topology.


Question 2

A CICS topology has a single TOR, a single AOR, and a single FOR. A defect in a rarely-used application program causes a storage overlay that corrupts the AOR's address space. What is the impact?

A) Only the failing program is affected; other programs continue
B) All transactions in the AOR are affected, but the TOR and FOR continue operating
C) All transactions are affected because the TOR cannot route to any healthy AOR
D) The FOR detects the corruption through its MRO connection and shuts down preventively

Answer: C
Explanation: With only one AOR, there is no failover target. The TOR receives the terminal requests but has nowhere to route them. The FOR is still operational but has no AOR sending it requests. This is the single-AOR failure scenario that drives the requirement for multiple AORs. Answer B is partially true — the TOR and FOR do continue running — but from the user's perspective, all transactions fail because there's no healthy AOR.


Question 3

CNB separates mobile API traffic (SYSD) from web traffic (SYSC) on different LPARs. What is the PRIMARY architectural reason?

A) Different LPARs have different z/OS versions optimized for each workload
B) Failure isolation — a problem with the mobile API cannot degrade web portal service
C) CICS licensing requires separate LPARs for different channel types
D) Mobile API uses Liberty JVM which requires its own LPAR

Answer: B
Explanation: Channel separation is fundamentally about failure isolation. If a DDoS attack targets the mobile API, or a code defect in API handling causes an AOR crash, the web portal on SYSC is completely unaffected. While Liberty JVM does have resource implications, it doesn't require a separate LPAR. The primary driver is ensuring that one channel's problems don't cascade to others.


Question 4

A dynamic routing program performs a DB2 query (2ms) for every routable transaction. The TOR processes 4,000 TPS. How much CPU time does the routing program consume per second?

A) 2 seconds
B) 4 seconds
C) 8 seconds
D) 0.008 seconds

Answer: C
Explanation: 4,000 TPS × 2ms = 8,000ms = 8 seconds of CPU per second. This means the routing program alone would consume 8 CPUs, which is clearly unsustainable. This is why routing programs must be lightweight — in-memory decision logic only, with no database calls, no file I/O.
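The arithmetic behind the answer can be checked with a short sketch (the variable names are ours; the 4,000 TPS and 2 ms figures come from the question):

```python
peak_tps = 4_000       # routable transactions per second through the TOR
query_ms = 2           # DB2 query inside the routing program, per transaction

# Total routing-program time accumulated per wall-clock second
overhead_ms = peak_tps * query_ms
overhead_s = overhead_ms / 1_000
print(overhead_s)  # 8.0 — eight seconds of routing work per elapsed second
```

At 8 seconds of work per second, the routing logic alone needs eight engines just to keep pace, which is why in-memory decision logic is the rule.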


Question 5

Which CICSPlex SM routing algorithm should CNB use for core banking transactions that have a WLM response-time goal of 200ms?

A) Queue algorithm — routes to the AOR with the shortest task queue
B) Goal algorithm — routes to the AOR most likely to meet the WLM response-time goal
C) Round-robin algorithm — distributes transactions evenly across AORs
D) Affinity algorithm — routes based on previous transaction history

Answer: B
Explanation: The goal algorithm integrates with z/OS WLM to query each candidate AOR's velocity — a measure of how well the AOR is meeting its WLM service class goals. It routes to the AOR with the highest velocity (most headroom). This creates a feedback loop: WLM adjusts priority to meet goals, CPSM routes work based on WLM's assessment. The queue algorithm ignores WLM goals; round-robin is not a CPSM algorithm; affinity is not a routing algorithm but a constraint on routing.


Question 6

A pseudo-conversational transaction stores user state in a region-local TS queue. What impact does this have on workload management?

A) None — TS queue access is too fast to affect routing
B) It creates a transaction affinity that forces subsequent transactions to route to the same AOR
C) It requires a FOR to own the TS queue
D) It prevents CICSPlex SM from monitoring the transaction

Answer: B
Explanation: Region-local TS queues are only accessible from the region that created them. If the first transaction in a pseudo-conversational sequence writes to a local TS queue, all subsequent transactions must route to the same AOR to read that queue. This is a transaction affinity — it restricts CPSM's ability to route transactions to the optimal AOR, undermining workload balancing.


Question 7

What is the recommended mitigation for the affinity described in Question 6?

A) Use static routing to ensure all transactions go to the same AOR
B) Move the TS queue to shared temporary storage in the coupling facility
C) Increase MAXTASK on the AOR to handle the additional load
D) Disable CICSPlex SM workload management for those transactions

Answer: B
Explanation: Shared TS queues (defined through a TSMODEL whose POOLNAME attribute maps the queue name to a shared TS pool) store data in the coupling facility, making them accessible from any CICS region in the sysplex. This eliminates the affinity because any AOR can read the TS queue created by any other AOR. The 10–20 microsecond CF access latency is negligible compared to the workload balancing benefit. Options A and D both accept the affinity rather than eliminating it; option C doesn't address the root cause.


Question 8

Two CICS regions are on the same LPAR. What communication method should they use?

A) ISC with IPIC — modern and preferred
B) ISC with SNA/APPC — most reliable
C) MRO with IRC — cross-memory, most efficient for same-LPAR
D) Either MRO or ISC — performance is equivalent on the same LPAR

Answer: C
Explanation: MRO uses z/OS cross-memory services, which bypass the TCP/IP stack entirely. On the same LPAR, MRO is significantly faster than IPIC (CNB measured 0.08ms vs. 0.4ms per round-trip). ISC is for cross-LPAR or cross-machine communication. Within an LPAR, MRO is always the right choice.
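To put CNB's two measurements side by side (a trivial sketch; the 0.08 ms and 0.4 ms figures come from the explanation above, converted to microseconds to keep the arithmetic exact):

```python
mro_us = 80      # same-LPAR MRO round-trip: 0.08 ms
ipic_us = 400    # IPIC round-trip through the TCP/IP stack: 0.4 ms

# Relative cost of taking the TCP/IP path for a same-LPAR hop
print(ipic_us // mro_us)  # 5 — each IPIC round-trip costs five MRO round-trips
```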


Question 9

An AOR program performs 30 function-shipped VSAM reads to a remote FOR per transaction. Each MRO round-trip takes 0.08ms. The transaction runs 1,000 times per second. What is the total function-shipping overhead per second?

A) 0.24 seconds
B) 2.4 seconds
C) 24 seconds
D) 0.024 seconds

Answer: B
Explanation: 30 reads × 0.08ms = 2.4ms per transaction. At 1,000 TPS: 2.4ms × 1,000 = 2,400ms = 2.4 seconds of MRO overhead per second. This is significant — the function shipping alone adds 2.4 seconds of cumulative communication delay every second. This scenario calls for one of the mitigation strategies: DPL (ship the program to the FOR), data migration to DB2 with data sharing, or file mirroring.
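The same calculation as a sketch (variable names are ours; the 30 reads, 0.08 ms round-trip, and 1,000 TPS come from the question, with the round-trip expressed in microseconds to keep the arithmetic exact):

```python
reads_per_txn = 30
round_trip_us = 80        # 0.08 ms per function-shipped VSAM read
tps = 1_000

per_txn_us = reads_per_txn * round_trip_us   # 2,400 µs = 2.4 ms per transaction
total_us = per_txn_us * tps                  # 2,400,000 µs accumulated per second
print(total_us / 1_000_000)  # 2.4 — seconds of shipping overhead per second
```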


Question 10

What is the PRIMARY benefit of CICSPlex SM's Business Application Services (BAS)?

A) Improves transaction routing performance
B) Enables centralized resource definition and deployment across multiple regions
C) Provides backup for CICS System Definition (CSD) files
D) Monitors coupling facility usage

Answer: B
Explanation: BAS replaces the manual, per-region CSD management process with centralized definition and deployment. Define a resource once, deploy it to a group of regions. This eliminates the error-prone process of manually updating CSDs across every region and enables controlled rollouts (phased deployment, one AOR at a time). CNB reported an 85% reduction in deployment errors after adopting BAS.


Question 11

CNB runs two CMASs for CICSPlex SM high availability. If the primary CMAS fails, what happens to transaction routing?

A) All routing stops until the secondary CMAS takes over
B) Routing continues using cached routing data in each MAS; the secondary CMAS takes over management functions
C) The TORs fall back to static routing
D) Transactions queue in the TOR until the CMAS is restored

Answer: B
Explanation: CPSM agents in each managed region (MAS) cache their routing configuration. When the CMAS fails, the agents continue routing based on cached data. The secondary CMAS takes over management functions (health monitoring, configuration changes, workload definition updates). Transaction processing is not interrupted. This is a critical HA feature — the CMAS is a management component, not a transaction-path component.


Question 12

SecureFirst adds a new TOR for mobile API traffic alongside their existing 3270 TOR. The mobile TOR routes to a new AOR group. Both AOR groups function-ship to the same FOR. What is the architectural risk?

A) The mobile TOR cannot route to AORs that also serve 3270 transactions
B) The FOR becomes a shared dependency — problems in the FOR affect both channels
C) Function shipping from two AOR groups to one FOR is not supported by MRO
D) CICSPlex SM cannot manage AORs that serve different TOR types

Answer: B
Explanation: The FOR is the shared dependency. If the FOR experiences a problem (storage shortage, file corruption, enqueue deadlock), both the 3270 and mobile channels are affected — defeating the purpose of channel isolation at the TOR and AOR tiers. This is the trade-off the chapter discusses: a second FOR would improve isolation but may not be justified by volume. The risk must be acknowledged and monitored.


Question 13

Which of the following is NOT a valid use of coupling facility data tables (CFDTs) in a CICS Sysplex environment?

A) Storing exchange rate data read millions of times per day
B) Storing web session tokens shared across AORs
C) Storing high-volume transactional data updated thousands of times per second
D) Storing product code reference data

Answer: C
Explanation: CFDTs are optimized for reference data with high read-to-write ratios. High-volume transactional data with frequent updates would create excessive coupling facility contention and is better served by DB2 data sharing (which has sophisticated locking and buffer management) or VSAM with appropriate access patterns. CFDTs excel at the exact use cases in A, B, and D — relatively stable data accessed frequently from multiple regions.


Question 14

The ATTACHSEC=IDENTIFY parameter on an MRO connection definition ensures:

A) That the connection is encrypted using SSL/TLS
B) That the AOR signs on the user's identity so the routed transaction runs under that user ID
C) That the TOR verifies the AOR's RACF profile before routing
D) That MRO sessions are authenticated using digital certificates

Answer: B
Explanation: ATTACHSEC=IDENTIFY causes the receiving region (AOR) to sign on the user ID flowed with the incoming transaction request. (MRO links support only ATTACHSEC=LOCAL or IDENTIFY; VERIFY, which requires a password, applies to ISC links.) This ensures that routed transactions run under the end user's security identity, not the TOR's or AOR's region user ID. This is essential for security audit trails and RACF-based resource access control — the user's authority, not the region's authority, determines what the transaction can access.


Question 15

You are designing a CICS topology using the five-question decision framework. Your analysis shows: 3 channels (3270, web, partner API), strict failure isolation between channels, different response-time tiers, all channels access the same DB2 data, and the web channel must scale independently. Which topology element can you ELIMINATE because of DB2 data sharing?

A) Separate TORs per channel
B) Separate AOR groups per channel
C) File-owning regions (FORs)
D) CICSPlex SM

Answer: C
Explanation: When all data is in DB2 with data sharing enabled, every AOR on every LPAR can access the data directly through the coupling facility. There is no need for FORs to own files and no function shipping overhead. FORs exist to centralize VSAM file access; DB2 data sharing makes this unnecessary. TOR separation (A) is still needed for channel isolation, AOR groups (B) are still needed for workload and failure isolation, and CPSM (D) is still needed for management and routing.


Question 16

What is the correct formula for calculating the MAXTASK SIT parameter?

A) MAXTASK = peak TPS × average response time
B) MAXTASK = peak TPS × average response time × safety factor
C) MAXTASK = total daily transactions / seconds per day × safety factor
D) MAXTASK = number of terminals × average think time

Answer: B
Explanation: MAXTASK = (peak transactions per second) × (average response time in seconds) × (safety factor). The safety factor (typically 1.5–2.5) accounts for response time variability, burst workloads, and growth. Setting MAXTASK without a safety factor means that any spike above average response time causes task queuing. CNB uses a safety factor of 2.2.
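A worked sizing sketch using the formula from option B (the 2.2 safety factor is CNB's; the 500 TPS and 150 ms inputs are invented for illustration):

```python
def maxtask(peak_tps, avg_response_s, safety_factor):
    # MAXTASK = peak TPS × average response time × safety factor
    return peak_tps * avg_response_s * safety_factor

# Hypothetical region: 500 TPS peak, 150 ms average response, 2.2 safety factor
needed = maxtask(peak_tps=500, avg_response_s=0.150, safety_factor=2.2)
print(round(needed))  # 165 — set MAXTASK no lower than this
```

Without the 2.2 factor the same region would be sized at 75 concurrent tasks, and any response-time spike would immediately queue work.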


Question 17

A CICS topology has the following anti-pattern: a single AOR runs all 200 transactions. You need to split it into multiple AORs with minimal application code changes. What is the MOST IMPORTANT factor to analyze before splitting?

A) Which transactions have the highest CPU consumption
B) Which transactions have inter-transaction affinities (shared TS queues, COMMAREAs)
C) Which transactions are invoked most frequently
D) Which transactions access DB2 vs. VSAM

Answer: B
Explanation: Affinities determine which transactions must stay together. If transaction A writes a TS queue that transaction B reads, they must route to the same AOR (unless you migrate to shared TS). Before splitting an AOR, you must map all affinities — TS queue sharing, COMMAREA pseudo-conversations, shared containers — to determine which transactions can be separated and which must remain co-located. Splitting without affinity analysis will break pseudo-conversational flows.


Question 18

SecureFirst's current topology has no Sysplex (single z/OS image). They want 99.99% availability for their mobile banking API. Is this achievable?

A) Yes — multiple AORs on the same LPAR provide sufficient redundancy
B) Yes — CICS recovery and restart capabilities can meet this target on a single LPAR
C) No — 99.99% requires at least two LPARs to survive hardware failure, which requires a Sysplex
D) No — 99.99% is not achievable with CICS regardless of topology

Answer: C
Explanation: 99.99% availability allows only 52 minutes of unplanned downtime per year. A single LPAR is a single point of failure — if the LPAR's hardware fails (processor, memory, I/O channel), all CICS regions on that LPAR are down. LPAR-level failures, though rare, typically take longer than 52 minutes to resolve. To survive hardware failure, you need at least two LPARs (preferably on separate machines), which requires Sysplex for CICS cross-LPAR coordination.
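The 52-minute figure follows directly from the availability target (a quick check; the inputs are just the definition of 99.99% over a 365-day year):

```python
availability = 0.9999
minutes_per_year = 365 * 24 * 60            # 525,600 minutes

# Annual unplanned-downtime budget at four nines
budget = (1 - availability) * minutes_per_year
print(round(budget, 1))  # 52.6 minutes per year
```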


Question 19

Kwame Mensah sizes MRO sessions at 2x observed peak concurrent MRO requests. What happens if sessions are sized at 0.5x observed peak?

A) Transactions fail with a SYSIDERR condition
B) Excess MRO requests queue waiting for available sessions, increasing response time
C) CICS automatically allocates additional sessions from a dynamic pool
D) The MRO connection drops and must be re-established

Answer: B
Explanation: When all MRO sessions for a connection are in use, additional requests queue in the sending region's dispatcher until a session becomes available. This adds wait time to every queued request and can cascade — as response times increase, more tasks are active simultaneously, consuming more sessions, creating more queuing. Under-sizing sessions creates an artificial bottleneck that manifests as intermittent response time degradation under load — one of the hardest CICS performance problems to diagnose.
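One way to see why 0.5x sizing queues is Little's law: sessions in use ≈ request rate × round-trip time. A sketch with hypothetical traffic numbers (only the 2x and 0.5x factors come from the question):

```python
request_rate = 1_000     # MRO requests per second (hypothetical)
round_trip_s = 0.004     # 4 ms average round-trip (hypothetical)

avg_in_use = request_rate * round_trip_s   # 4 sessions busy on average
sessions_2x = 2.0 * avg_in_use             # 2x sizing: 8 sessions of headroom
sessions_half = 0.5 * avg_in_use           # 0.5x sizing: 2 sessions

# With 2 sessions against an average demand of 4, requests must wait for a
# free session, and that wait is added to every round-trip under load.
print(sessions_2x, sessions_half)
```

Real traffic peaks run above the average, which is why the sizing in the question uses observed peak concurrency rather than the mean.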


Question 20

You are presenting a CICS topology redesign to the architecture review board. The CISO asks: "How do you ensure that the mobile API TOR cannot directly access customer financial data?" Which architectural feature provides this guarantee?

A) CICSPlex SM workload management restricts TOR data access
B) The TOR has no DB2 connection and no local files — it can only route transactions to AORs
C) RACF profiles on the TOR prevent SELECT access to DB2 tables
D) Both B and C together provide defense in depth

Answer: D
Explanation: The primary protection is architectural: the TOR doesn't have a DB2 connection plan or local data files, so it physically cannot access data — it can only route transactions. The secondary protection is administrative: RACF profiles for the TOR's user ID should explicitly deny data access, providing defense in depth. If someone mistakenly adds a DB2 connection to the TOR, RACF prevents data access. Both layers together satisfy the CISO's requirement and align with the PCI DSS principle of least privilege.