Chapter 35 Quiz — Building a RegTech Program: Strategy, Governance, and Roadmapping

13 questions. Allow approximately 25–30 minutes. Questions range from recall to analysis and application.


Part A: Foundations and Maturity (Questions 1–4)

Question 1

An organization has purchased fourteen compliance technology platforms over the past five years. Of these, four are in active daily use, three are used occasionally, and seven are dormant or abandoned. This pattern is best described as:

A) A typical outcome of aggressive RegTech adoption that will improve over time as users adapt
B) Evidence of a tool graveyard — the result of procurement decisions made without clear use cases, governance, or change management
C) An acceptable distribution of software utilization typical of large regulated institutions
D) A technology vendor quality problem that should be escalated through contract disputes


Question 2

According to the five-stage compliance maturity model described in this chapter, an organization that has documented compliance processes, assigned ownership of regulatory obligations, and deployed technology tools that are actively used in daily operations, but whose reporting is largely backward-looking and whose data quality monitoring is manual and periodic, is most accurately assessed at:

A) Stage 2 — Reactive
B) Stage 3 — Defined
C) Stage 4 — Managed
D) Stage 5 — Optimized


Question 3

A maturity assessment finds that an organization has a sophisticated-looking policy architecture and a self-reported score of 4 ("Managed") across all dimensions. However, when the assessor requests evidence for the "Process Automation" dimension, the compliance team describes processes that are entirely manual, justified by a well-written policy document that describes what an automated process would look like. The most likely diagnosis is:

A) Regulatory mis-specification — the policy describes a process that was designed for an outdated regulation
B) Over-reporting maturity by conflating "policy exists" with "policy is implemented and effective"
C) The pilot trap — the automation described in the policy is currently in pilot phase
D) Governance vacuum — the policy is sound but the responsible individual has left the organisation


Question 4

The data-first principle in RegTech roadmapping states that:

A) Data science and analytics capabilities should be the first investments made in a RegTech program, because they provide immediate visibility into compliance gaps
B) Data infrastructure, data quality work, and golden source establishment must be scheduled and completed before deploying the analytics and reporting capabilities that depend on them
C) The first six months of any RegTech program should be dedicated entirely to data cataloguing and data governance policy development before any technology work begins
D) Data quality is primarily a technology problem that should be addressed by the CTO function, not the compliance function


Part B: Strategy and Business Case (Questions 5–7)

Question 5

A mid-size asset manager is facing a regulatory deadline in seven months. The FCA has issued guidance requiring enhanced monitoring of Consumer Duty outcomes, and the firm's current monitoring capability is monthly and backward-looking. When selecting a strategic orientation for their RegTech program, which orientation is most appropriate given these facts?

A) Business-driven orientation — the firm should invest in compliance as a competitive advantage
B) Risk-driven orientation — the firm should invest well beyond minimum requirements to address all possible future regulatory expectations
C) Compliance-driven orientation — the immediate driver is meeting a specific regulatory requirement, and the program should be scoped accordingly
D) Federated orientation — each business line should independently develop its own monitoring capability


Question 6

A CFO asks a RegTech program sponsor to quantify the "risk reduction" value in the business case. The program is designed to close a known gap in transaction monitoring capability. Which of the following is the most rigorous approach to quantifying risk reduction value?

A) State that the risk reduction value is unquantifiable and focus the business case on operational efficiency savings
B) Use an industry benchmark for the average cost of an enforcement action in the relevant area, multiplied by the reduction in probability of such an action that the investment is expected to produce
C) Report the reduction in alert false positive rate as a proxy for regulatory risk reduction
D) Commission an independent legal opinion on the maximum possible penalty and present this as the risk exposure avoided


Question 7

The build/buy/borrow decision framework suggests that an organization should "build internally" a compliance capability when:

A) The capability involves standardised regulatory requirements for which multiple credible vendors already exist
B) The capability represents a genuine competitive differentiator, the regulatory requirement is specific to the firm's business model, the firm has engineering talent, and the cost of customising a vendor solution exceeds the build cost
C) The firm has a large technology team and wants to maintain internal control of all compliance systems
D) Regulatory guidance explicitly encourages firms to build rather than buy compliance systems


Part C: Governance (Questions 8–10)

Question 8

A large retail bank is designing the governance structure for a major RegTech program. The program involves replacing the core transaction monitoring system, rebuilding the sanctions screening process, and implementing a new regulatory reporting platform. The program will run for approximately 24 months, involve five business lines, and spend approximately £8M. Which governance element is most critical to add at this complexity level?

A) A federated ownership model where each business line manages its own workstream independently
B) A Program Management Office (PMO) with genuine delivery experience, responsible for dependency tracking, vendor management, change management coordination, and risk escalation
C) A rotating chairperson for the steering committee to prevent any single function from dominating program decisions
D) A weekly all-hands meeting with all 200+ affected staff members to ensure transparency


Question 9

A RegTech program's steering committee meets monthly and receives progress reports, but has no explicit authority to approve scope changes, resolve cross-functional disputes, or make spending decisions above £10K without seeking additional approval elsewhere. This committee structure is best described as:

A) An appropriately cautious governance structure that prevents scope creep and cost overrun
B) Governance theater — a committee with reporting functions but without the authority required to perform genuine governance
C) A standard governance structure for programs of this type in regulated financial institutions
D) A PMO structure, not a steering committee structure — the distinction lies in the reporting relationship


Question 10

Rafael Torres, a post-acquisition compliance integration consultant, is designing a governance structure for a RegTech program at a firm where the CCO and CTO have historically had a difficult working relationship. The CCO believes all compliance technology decisions must be hers; the CTO believes compliance technology is enterprise technology and falls within his domain. Which governance structure is most likely to produce a functional outcome?

A) Assign full program ownership to the CCO and require the CTO to provide engineering resource on demand
B) Assign full program ownership to the CTO and require the CCO to provide compliance specifications as input
C) Establish a standalone RegTech function that reports to neither, resolving the tension by removing both parties from ownership
D) Design an explicit shared governance model — for example, CCO-led with technology co-sponsorship — that defines which decisions each executive owns, with a written decision authority matrix and a pre-agreed escalation path to the CEO for unresolved disputes


Part D: Roadmapping and Failure Patterns (Questions 11–13)

Question 11

An organization is designing a three-horizon RegTech roadmap. In Horizon 1, they plan to deploy a real-time compliance monitoring dashboard. In Horizon 2, they plan to address data quality issues and establish golden sources for critical reference data. In Horizon 3, they plan to integrate all data sources into a central data platform. A RegTech advisor identifies this sequencing as flawed. What is the primary sequencing error?

A) The Horizon 1 scope is too ambitious — a monitoring dashboard is a Horizon 2 or Horizon 3 activity
B) The data quality and golden source work should be in Horizon 1, before the monitoring dashboard that depends on it — the current sequencing violates the data-first principle
C) Horizon 2 should always focus on transformation, not foundational data work
D) The integrated data platform should be the first investment, making all other investments dependent on its completion


Question 12

Maya Osei, CCO at Verdant Bank, sponsors a pilot of a new AI-powered KYC platform. The pilot runs successfully for three months, demonstrating a 40% reduction in manual review time. Six months later, the pilot is still running — it has been extended twice. The enterprise deployment has been scheduled and then postponed. The data team has flagged that production data quality is worse than the pilot test data. The original project sponsor has moved to a new role. This situation is best diagnosed as:

A) A regulatory mis-specification — the KYC platform was designed for the regulatory requirements at the time of the pilot, which have since changed
B) The pilot trap — the pilot has not been designed with predefined success criteria and a go/no-go decision process, and is now running indefinitely without progressing to production
C) A governance vacuum — the system went into production but nobody owns it
D) The change management gap — the KYC team has not been trained on the new platform


Question 13

A financial institution deploys a new regulatory reporting system. The system is technically complete and has passed all acceptance testing. Eighteen months after go-live, 60% of reports are still being produced manually by the reporting team, who have returned to their old spreadsheet process because "the new system isn't their job to manage." The most accurate diagnosis of this outcome is:

A) A tool graveyard — the system was procured without a genuine use case and should be decommissioned
B) A pilot trap — the system was never formally moved from pilot to production
C) The change management gap combined with the governance vacuum: the process change was not managed, users were not required to adopt the new system, and no one was designated to own adoption and enforce the new working method
D) The regulatory mis-specification — the reporting system was built for a regulation that changed after go-live


Answer Key

Question 1 — B The pattern described — procurement of many tools with low active utilisation — is the textbook definition of a tool graveyard. It is caused by procurement decisions made without clear use cases, insufficient governance, and the absence of change management. Option A mischaracterises it as a temporary adoption problem. Option C normalises an inefficient outcome. Option D misattributes the cause to vendor quality.

Question 2 — B The description — documented processes, assigned ownership, active technology use, backward-looking reporting, manual and periodic data quality monitoring — precisely matches Stage 3 (Defined). Stage 4 (Managed) would require near-real-time monitoring, integrated systems, and automated data quality management. Stage 2 (Reactive) would lack documented processes and active tool deployment.

Question 3 — B The assessor has identified the most common maturity assessment failure mode: over-reporting maturity by treating policy documentation as evidence of operational capability. A well-written policy describing an automated process is not evidence that the process is automated — it is evidence that someone knows what automation would look like. This is distinct from the pilot trap (C) or the governance vacuum (D), both of which involve different failure patterns.

Question 4 — B The data-first principle is precisely stated in option B. Option A inverts the principle — analytics should come after data foundations, not before. Option C overstates the principle — it does not require a full halt on all other work, only proper sequencing of dependent capabilities. Option D misattributes data quality ownership to the CTO alone.

Question 5 — C Given a specific regulatory deadline seven months away and a specific capability gap identified by the regulator, the compliance-driven orientation is the appropriate choice. This is not the moment for transformation ambition (A) or open-ended risk investment (B). Option D is not an orientation — it is a governance structure.

Question 6 — B This is the standard expected-value approach to quantifying risk reduction: probability of adverse event multiplied by average cost of adverse event, calculated for both pre- and post-investment states. Option A is defensible but suboptimal — a well-constructed business case should attempt to quantify all four value categories. Option C measures something useful (false positive rate) but is not a direct measure of regulatory risk. Option D overstates the risk by using maximum penalty rather than expected cost.
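The expected-value approach in option B can be sketched in a few lines. All figures below are illustrative placeholders, not benchmarks from the chapter: the enforcement cost and the pre- and post-investment probabilities would come from industry data and the firm's own risk assessment.

```python
# Expected-value quantification of risk reduction: probability of the
# adverse event multiplied by its average cost, computed for both the
# pre-investment and post-investment states. Figures are hypothetical.

def expected_loss(probability: float, average_cost: float) -> float:
    """Expected annual loss = probability of adverse event x average cost."""
    return probability * average_cost

avg_enforcement_cost = 5_000_000   # hypothetical industry benchmark, GBP
p_before = 0.04                    # estimated annual probability, pre-investment
p_after = 0.01                     # estimated annual probability, post-investment

risk_reduction_value = (expected_loss(p_before, avg_enforcement_cost)
                        - expected_loss(p_after, avg_enforcement_cost))

print(f"Annual risk reduction value: £{risk_reduction_value:,.0f}")
# → Annual risk reduction value: £150,000
```

The same structure explains why option D overstates the exposure: it substitutes the maximum penalty for the probability-weighted expected cost.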

Question 7 — B The build decision is appropriate when the capability is a genuine differentiator, when the requirement is specific to the firm's model, when engineering talent exists, and when build cost is competitive with vendor customization cost. Option A describes conditions favouring the buy decision. Option C is not a sound basis for build decisions — internal preference is not a build criterion. Option D is incorrect; regulatory guidance does not typically prescribe build versus buy.

Question 8 — B At this complexity level — 24 months, five business lines, £8M, three simultaneous platforms — a PMO with genuine delivery experience is the critical governance addition. The PMO performs the dependency tracking, vendor management, change management coordination, and risk escalation functions that cannot be managed by the steering committee alone. Option A (federated, no coordination) is a recipe for inconsistency. Options C and D are process improvements, not structural governance additions.

Question 9 — B A committee that meets and receives reports but cannot approve scope changes, resolve disputes, or make spending decisions above a de minimis threshold is performing governance theater. Governance requires not just information flow but decision authority. Option A mischaracterises the situation — the problem is absence of authority, not excess of caution.

Question 10 — D The functional resolution to a CCO/CTO governance dispute is explicit shared governance with a written decision authority matrix and a pre-agreed escalation path. Assigning to one party (A or B) will produce resentment and non-cooperation from the other. Option C (standalone function) is a legitimate structural option for large institutions but removes both executives from ownership rather than resolving the tension productively.
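The written decision authority matrix in option D can be represented as a simple lookup. The decision categories, owners, and escalation path below are hypothetical examples of what such a matrix might contain, not a prescription from the chapter:

```python
# Hypothetical decision authority matrix for a shared CCO/CTO governance
# model: each decision category maps to the authority that owns it, with
# a pre-agreed escalation path for unresolved disputes.

DECISION_AUTHORITY = {
    "compliance_requirements":  "CCO",       # what the system must do
    "architecture_and_hosting": "CTO",       # how it is built and run
    "vendor_selection":         "joint",     # CCO and CTO sign-off required
    "budget_over_threshold":    "steering",  # steering committee decision
}

ESCALATION_PATH = ["joint", "steering", "CEO"]  # pre-agreed, in order

def decision_owner(category: str) -> str:
    """Return the owning authority; default unmapped decisions to the
    steering committee so no decision falls into a governance vacuum."""
    return DECISION_AUTHORITY.get(category, "steering")

print(decision_owner("vendor_selection"))  # joint
print(decision_owner("data_retention"))    # steering (unmapped -> default)
```

The design point is the default: an explicit fallback owner means a novel decision category triggers a known forum rather than a turf dispute.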

Question 11 — B A real-time monitoring dashboard is precisely the type of analytics capability that the data-first principle places in Horizon 2 or later. If data quality issues and golden source gaps are not resolved first (Horizon 1), the dashboard will display unreliable data — which is worse than no dashboard, because it gives false confidence. The sequencing should move data quality work to Horizon 1 and the dashboard to Horizon 2.
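The data-first sequencing check can be automated: if a roadmap records each initiative's horizon and dependencies, any item scheduled no later than a capability it depends on is a sequencing error. The initiative names below are illustrative, mirroring the flawed plan in Question 11:

```python
# Validate roadmap sequencing against dependencies (data-first principle).
# Each initiative maps to (horizon, [dependencies]); a dependency must
# complete in an earlier horizon than the item that relies on it.

initiatives = {
    "data_quality_remediation": (2, []),
    "golden_source_reference":  (2, []),
    "realtime_dashboard":       (1, ["data_quality_remediation",
                                     "golden_source_reference"]),
    "central_data_platform":    (3, ["golden_source_reference"]),
}

def sequencing_errors(plan: dict) -> list[str]:
    """Return a description of every item scheduled in the same or an
    earlier horizon than one of its dependencies."""
    errors = []
    for name, (horizon, deps) in plan.items():
        for dep in deps:
            dep_horizon = plan[dep][0]
            if dep_horizon >= horizon:  # dependency would not complete first
                errors.append(f"{name} (H{horizon}) depends on {dep} (H{dep_horizon})")
    return errors

for err in sequencing_errors(initiatives):
    print("Sequencing error:", err)
# Flags the dashboard twice, once per unmet data dependency.
```

Moving the two data initiatives to Horizon 1 (or the dashboard to Horizon 3) clears both errors, which is exactly the correction the advisor recommends.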

Question 12 — B The pilot trap is defined by a pilot that runs beyond its intended scope, has been extended multiple times, has not been given a formal go/no-go decision process, and is not progressing to production. The data quality issue and sponsor change are contributing factors that would have been identified and managed by a well-designed pilot with predefined exit criteria. This is not a governance vacuum (C) — the system has not yet reached production.

Question 13 — C This scenario illustrates the change management gap and governance vacuum operating together. The system went live without the process change required to make it the expected way of working (change management gap), and no one was designated to own adoption and enforce the new process post-go-live (governance vacuum). Option A is incorrect — the system works technically. Option B is incorrect — it did reach production. Option D is incorrect — the question states reports are still being produced, just not through the new system.


Score: 11–13 correct — strong command of RegTech program strategy and governance. 8–10 correct — solid understanding with some gaps to review, particularly in governance design and failure pattern diagnosis. Below 8 — re-read Sections 35.4 through 35.8 and revisit the case studies before proceeding.