Chapter 35: Building a RegTech Program — Strategy, Governance, and Roadmapping

"Buying software is not the same as building capability. The difference is the difference between owning a gym membership and being fit."

— Priya Nair, Senior Manager, RegTech Advisory Practice


Learning Objectives

By the end of this chapter, you will be able to:

  1. Distinguish between a genuine RegTech program and an ad hoc collection of compliance tools, and articulate what makes the difference.
  2. Apply a five-stage compliance maturity model to assess an organization's current state across key dimensions of regulatory capability.
  3. Define the three strategic orientations for a RegTech program and select the appropriate orientation based on organizational context.
  4. Design a governance structure that resolves the three-way ownership tension between compliance, technology, and operations functions.
  5. Construct a three-horizon roadmap that sequences RegTech investments in dependency order, avoiding common sequencing errors.
  6. Build the financial case for a RegTech program using the four value categories and cost-of-status-quo analysis.
  7. Recognize and diagnose the five most common RegTech program failure patterns before they become entrenched.
  8. Apply Priya Nair's Five Questions framework as a pre-program diagnostic for any RegTech initiative.

35.1 Introduction: The Wish List Problem

The meeting had been running for forty minutes when Priya Nair finally understood what she was dealing with.

She was sitting in a glass-walled conference room on the fourteenth floor of a building in the City of London, looking across a polished table at eight people who were, by almost any measure, serious and accomplished professionals. The Chief Compliance Officer had twenty years of financial services experience. The CTO had led enterprise technology transformations at two previous firms. The Head of Operations had a reputation throughout the industry as someone who got things done. The CFO was working through a presentation on her laptop with the practiced efficiency of someone who had seen too many proposals to be easily impressed.

The occasion was the kick-off meeting for what the firm — a mid-size UK asset manager with approximately £38 billion under management — had called its "RegTech Transformation Initiative." The CTO had just finished presenting what he described as their strategy: a slide deck listing twenty-three different software platforms they wanted to evaluate and potentially deploy over the next three years.

The list was ambitious. It included a KYC automation platform, two competing transaction monitoring vendors, a regulatory reporting engine, a data lineage tool, a sanctions screening overlay, a conflicts-of-interest management system, a best execution analytics platform, three different AI-powered surveillance tools, a horizon-scanning service for regulatory change, and — almost as an afterthought — a master data management platform that appeared at the bottom of slide six.

The CCO leaned forward. "Where do we start?"

The Head of Operations asked: "Who owns this? Because I'm not taking on twenty-three new systems without knowing who's going to support them."

The CFO looked up from her laptop. "How much will this cost?"

Priya had said nothing yet. She had been taking notes in the precise shorthand she had developed over five years in Big 4 RegTech advisory, first as a Senior Associate, then as a Manager, and — as of eight weeks ago — as a Senior Manager with her own portfolio of client engagements. She had been in this room before, not literally, but functionally. Different city, different firm, different list of twenty-three platforms. Same problem.

She looked at her notes and said: "Before we talk about any of the tools on that list, I'd like to suggest we talk about something else. I'd like to understand what regulatory obligation or risk problem you are trying to solve first. Because what I see here is not a strategy. What I see is a shopping list."

There was a brief silence.

"The good news," Priya continued, "is that a shopping list is a perfectly normal starting point. The mistake is treating it as the destination."

This chapter is about the distance between a shopping list and a strategy — and about how to close it.


35.2 What Makes a RegTech Program (Versus a RegTech Shopping List)

35.2.1 The Capability Gap

Organizations acquire compliance technology for understandable reasons. A regulatory deadline creates pressure; a vendor demonstrates a compelling product; a peer institution is reported to have deployed a similar tool; an internal audit finding creates urgency. Each acquisition is locally rational. The aggregate is frequently a mess.

The asset manager Priya visited had spent approximately £4.2 million on compliance technology over the preceding five years. Of the fourteen platforms they had procured, four were in active daily use. Three were used occasionally. Seven were either dormant, running in perpetual pilot status, or had been quietly abandoned after a failed implementation. The total cost of the dormant and abandoned platforms — including licensing fees that continued to be paid, integration work that had never been completed, and staff time invested in implementations that never reached production — exceeded £2 million.

This is not unusual. Industry surveys consistently find that between 30% and 50% of enterprise software purchased by regulated financial institutions fails to reach meaningful production status. For compliance technology specifically, the failure rate tends toward the higher end of that range, for reasons this chapter will examine in detail.

The difference between the asset manager's situation and a genuine RegTech program is not primarily about the tools. It is about four things: strategic clarity, governance, people, and technology — roughly in that order of importance.

35.2.2 The Four Dimensions of a Genuine RegTech Program

Strategic Clarity means the organization has answered, before acquiring any technology, three foundational questions: What specific regulatory obligations or risk problems are we solving? In what priority order? And how will we know when we have solved them? Strategic clarity does not require a fifty-page strategy document. It requires clear answers to simple questions that are surprisingly difficult to answer in practice.

Governance means there are identifiable human beings — with specific roles, decision rights, and accountability — responsible for the program's outcomes. Not just during implementation, but permanently. The governance question is not "who is running the project?" It is "who owns the capability after the project ends, and what are they accountable for?"

People means the human processes that the technology will support, replace, or enable have been analyzed and redesigned before the technology is deployed. Every compliance technology system eventually produces outputs — reports, alerts, data, workflows — that human beings must interpret, act on, and be accountable for. If those human processes have not been designed, the technology will produce outputs that nobody knows what to do with.

Technology comes last. Not because it is unimportant — the entire premise of this textbook is that technology fundamentally transforms regulatory compliance — but because technology deployed in the absence of the first three dimensions reliably fails. The tool is the implementation mechanism for a capability, not the capability itself.

35.2.3 Why Technology Is the Last Thing to Deploy, Not the First

This sequencing is counterintuitive for organizations accustomed to solving problems by purchasing software. The instinct is understandable: technology is tangible, demonstrable, and easy to point to as evidence of action. Saying "we have deployed a KYC automation platform" sounds like a compliance achievement. And it may be — if the organization first defined what KYC outcomes the platform is intended to produce, who owns the KYC process the platform will support, what data the platform will consume and whether that data is accurate, and what the human review process looks like for platform outputs.

If those things have not been done, deploying the KYC automation platform is roughly equivalent to installing a state-of-the-art kitchen in a house with no plumbing, no electricity, and no one who knows how to cook.

The sequence that works — which Priya has observed across dozens of client engagements — is always some version of: define the problem clearly → design the target process → identify the data requirements → select the technology → implement with change management → measure outcomes → iterate. Organizations that begin at "select the technology" rarely reach "measure outcomes." They tend to circle back to "define the problem clearly" after a failed implementation, at considerable expense.
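The sequence above can be made explicit as a dependency check. The sketch below is a toy illustration, not a prescribed project-management method; the phase names are taken verbatim from the text, and the checker simply enforces that no phase starts before everything upstream of it is complete:

```python
# Illustrative sketch: the RegTech delivery sequence as an ordered
# dependency check. Phase names come from the text; the functions
# are assumptions for illustration only.
DELIVERY_SEQUENCE = [
    "define the problem clearly",
    "design the target process",
    "identify the data requirements",
    "select the technology",
    "implement with change management",
    "measure outcomes",
    "iterate",
]

def can_start(phase: str, completed: set[str]) -> bool:
    """A phase may start only once every earlier phase is complete."""
    idx = DELIVERY_SEQUENCE.index(phase)
    return all(p in completed for p in DELIVERY_SEQUENCE[:idx])

def next_phase(completed: set[str]) -> str:
    """Return the first phase not yet completed, in dependency order."""
    for phase in DELIVERY_SEQUENCE:
        if phase not in completed:
            return phase
    return "iterate"  # once all phases are done, the cycle repeats
```

An organization that jumps straight to "select the technology" fails the `can_start` check: three upstream phases are missing, which is precisely the shopping-list pattern described above.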

35.2.4 The Build / Buy / Borrow Spectrum

One strategic decision that must be made before committing to any technology path is whether to build, buy, or borrow the capability in question. The conventional wisdom that regulated institutions should always buy rather than build has eroded significantly over the past decade, as cloud-native development has reduced build costs and as regulatory scrutiny of third-party vendors has increased. The reality is that the optimal choice depends on a structured analysis of several factors.

Build internally when: the capability represents a genuine competitive differentiator; the regulatory requirement is so specific to your business model that no vendor solution will fit without extensive customization; you have the engineering talent to build and maintain the system; and the cost of customizing a vendor solution exceeds the cost of building.

The "build" option is often underweighted by compliance and risk teams who lack engineering background. For some capabilities — particularly certain forms of real-time behavioral surveillance or proprietary risk scoring — a well-resourced internal build can produce superior outcomes to any available vendor product.

Buy from a vendor when: the capability is well-defined and standardized; multiple credible vendors exist with proven track records; the vendor can provide ongoing regulatory updates; and the cost of vendor licensing plus integration is materially lower than the cost of building and maintaining an equivalent internal capability.

The "buy" option is the default for most RegTech capabilities, and for good reason: the vendor market for compliance technology has matured substantially since 2015, and for standard regulatory requirements — FATCA/CRS reporting, standard KYC workflows, conventional sanctions screening — the quality of available products is high.

Borrow — meaning adopt open-source libraries, participate in industry consortia, or access shared regulatory infrastructure (such as the FCA's Digital Sandbox or industry-maintained reference data) — when: the capability involves reference data or common frameworks that are not competitive differentiators; participation in industry-wide data sharing reduces costs for all participants; or you are a smaller institution that cannot justify the cost of a proprietary build or a full vendor licensing arrangement.

The "borrow" option has expanded significantly with the rise of regulatory sandboxes, industry data utilities (such as the Global LEI System maintained by GLEIF for legal entity identification), and open-source compliance toolkits maintained by industry groups. For data-centric capabilities in particular, consortium-based approaches to data collection and standardization can dramatically reduce the cost burden on individual firms.
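One way to make the build/buy/borrow analysis explicit is a simple weighted scoring exercise across the factors listed above. The sketch below is illustrative only: the factor names, weights, and scores are assumptions for a hypothetical sanctions screening capability, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class SourcingFactor:
    """How strongly a factor favors each sourcing option, scored 1-5."""
    name: str
    weight: float  # relative importance; weights should sum to ~1.0
    build: int
    buy: int
    borrow: int

def score_sourcing_options(factors: list[SourcingFactor]) -> dict[str, float]:
    """Weighted total per option; higher means more strongly favored."""
    totals = {"build": 0.0, "buy": 0.0, "borrow": 0.0}
    for f in factors:
        totals["build"] += f.weight * f.build
        totals["buy"] += f.weight * f.buy
        totals["borrow"] += f.weight * f.borrow
    return totals

# Hypothetical scoring for a conventional sanctions screening capability
factors = [
    SourcingFactor("Competitive differentiation", 0.2, build=2, buy=3, borrow=4),
    SourcingFactor("Vendor market maturity", 0.3, build=1, buy=5, borrow=3),
    SourcingFactor("Internal engineering capacity", 0.2, build=2, buy=4, borrow=4),
    SourcingFactor("Total cost over 5 years", 0.3, build=2, buy=4, borrow=5),
]
totals = score_sourcing_options(factors)
recommended = max(totals, key=totals.get)
```

The point of the exercise is less the final number than the conversation it forces: each weight and score must be defended, which surfaces the assumptions that would otherwise remain implicit in a sourcing decision.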


35.3 Understanding Your Starting Point — The Compliance Maturity Assessment

35.3.1 The Five Stages of Compliance Maturity

Before designing any RegTech strategy, an organization must honestly assess where it is. Without this assessment, roadmapping is guesswork: you cannot plan a route without knowing your starting location.

Priya uses a five-stage maturity model adapted from capability maturity frameworks developed in software engineering and applied to the compliance technology domain. The five stages are:

Stage 1 — Ad Hoc: Compliance activities are performed inconsistently, dependent on individual effort and tacit knowledge rather than documented process. There is no systematic approach to regulatory obligation tracking. Technology tools are absent or unused. Audit trails are incomplete. The organization's compliance posture is essentially invisible to management: nobody knows whether obligations are being met, by whom, and how.

Stage 2 — Reactive: The organization responds to regulatory events (audits, examinations, enforcement actions, Dear CEO letters) with remediation efforts, but compliance is not systematically managed outside those trigger events. Some documentation exists but is inconsistently maintained. A small number of technology tools may be in use, but they were acquired in response to specific incidents rather than as part of a deliberate program. Data quality is variable and often unknown. The organization can demonstrate compliance after the fact but cannot monitor it in real time.

Stage 3 — Defined: The organization has documented compliance processes, assigned ownership of regulatory obligations, and deployed technology tools that are actively used in daily operations. Reporting is regular but largely backward-looking. Data quality is monitored for key data sets. There is an audit trail for major compliance activities, though gaps exist. This is the modal state for mid-size regulated institutions that have been through at least one significant regulatory review: they have fixed the obvious gaps but have not yet built forward-looking compliance capability.

Stage 4 — Managed: Compliance capability is measured, managed, and continuously improved. Technology tools are integrated, sharing data across systems rather than operating as silos. Key risk indicators and compliance metrics are monitored in near-real time. Data quality is actively managed with automated monitoring and remediation workflows. The organization can demonstrate not just that it complied with a regulation, but can forecast where compliance gaps are likely to emerge. Management makes resourcing decisions based on compliance data rather than instinct.

Stage 5 — Optimized: The compliance function operates as a strategic business capability. Technology enables the organization to monitor, test, and certify its own compliance posture continuously. Regulatory change is absorbed systematically: new obligations are mapped to existing processes and technology, with gap analysis performed automatically as regulatory text changes. The compliance function contributes to competitive advantage by enabling faster product launches, cleaner capital allocation, and demonstrably lower operational risk — which in turn supports better terms in counterparty relationships, insurance pricing, and capital market access.

Most large banks and global asset managers operate at Stage 3 to 4. Most mid-size firms, challenger banks, and newer entrants operate at Stage 2 to 3. Stage 5 is rare; it represents the frontier of RegTech capability.

35.3.2 Dimensions of the Maturity Assessment

The overall maturity score is an average across five key dimensions, each assessed separately:

Process Automation: To what extent have manual compliance processes been replaced or augmented by automated workflows? This covers KYC refresh, transaction monitoring, sanctions screening, regulatory reporting, and breach management. A score of 1 means entirely manual; a score of 5 means rule-based and AI-driven automation handles the majority of the process, with human review reserved for exceptions.

Data Quality: How reliable, complete, and current is the data that the compliance function uses? A score of 1 means data quality is unknown and largely unmanaged; a score of 5 means automated data quality monitoring is in place, data lineage is documented, golden sources are established for all critical data elements, and data quality issues trigger automated remediation workflows.

Reporting Capability: Can the organization produce accurate, timely, and complete regulatory reports, and can it demonstrate how those reports were produced? A score of 1 means regulatory reports are produced manually in spreadsheets with no audit trail; a score of 5 means reports are generated automatically from integrated data sources, with full lineage from raw data to report output, version control, and automated reconciliation checks.

Monitoring Effectiveness: Does the organization have real-time or near-real-time visibility into its compliance posture, and is it able to detect issues before they become regulatory breaches? A score of 1 means monitoring is periodic and backward-looking; a score of 5 means continuous automated monitoring with machine learning-enhanced detection, calibrated alert thresholds, and metrics that distinguish true risk signals from noise.

Audit Trail Completeness: Is there a comprehensive, tamper-evident record of who did what, when, and why across all material compliance activities? A score of 1 means audit trails are partial, fragmented across systems, or easily modified; a score of 5 means all compliance-relevant activities are logged in a centralized, immutable audit trail with sufficient granularity to reconstruct any event and demonstrate regulatory compliance on demand.

35.3.3 The Assessment as Diagnostic, Not Scorecard

A maturity assessment is only useful if it is honest. Priya is frank with clients about the most common way maturity assessments go wrong: institutions systematically over-report their maturity. The reasons are understandable — the assessment is being conducted in front of colleagues; nobody wants to admit that processes they are responsible for are in poor shape; there is a natural tendency to score based on what is in place rather than whether what is in place actually works.

The antidote is evidence-based scoring. For each dimension, the assessment should be supported by specific evidence: not "we have a data governance policy" (which would support a score of 2 or 3) but "our data governance policy has been reviewed in the last twelve months, is actively enforced, and the last data quality audit produced a documented findings report" (which might support a 3 or 4) versus "our automated data quality monitoring system generated 847 alerts in the last quarter, of which 93% were remediated within SLA, and the remediation actions were logged against specific data elements in our data catalogue" (which might support a 4 or 5).

Three pitfalls are particularly common:

Over-reporting maturity by conflating "policy exists" with "policy is implemented and effective." In Priya's experience, this gap — between documented process and operational reality — is the single most important thing a maturity assessment can surface. An organization with a sophisticated-looking policy architecture and a score of 4 on paper may in reality be operating at a 2.

Ignoring data quality as a dimension, treating it as a technology concern rather than a fundamental constraint on everything else. Organizations frequently have mature-looking process automation and reporting capability scores while ignoring that the data feeding those processes is unreliable. An automated transaction monitoring system running on dirty data is not a Stage 4 capability — it is an expensive source of false positives with a Stage 4 presentation.

Conflating tool ownership with process maturity. The question is not whether the organization has a monitoring system — it is whether the organization's monitoring capability is effective. A firm might own a sophisticated AI-powered surveillance platform (which looks like a Stage 4 or 5 capability) but be using only 20% of its features, with no ongoing tuning, no performance measurement, and alert queues that are cleared by marking alerts as reviewed without genuine investigation (which is a Stage 2 operational reality).

35.3.4 A Python Implementation: The RegTech Maturity Assessor

The following Python implementation provides a structured framework for conducting and documenting a compliance maturity assessment. It is designed to be simple enough to use in a client workshop setting while producing output that can be formally documented and revisited.

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class MaturityLevel(Enum):
    AD_HOC = 1
    REACTIVE = 2
    DEFINED = 3
    MANAGED = 4
    OPTIMIZED = 5


@dataclass
class MaturityDimension:
    name: str
    description: str
    score: int  # 1-5
    evidence: list[str] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)

    def __post_init__(self):
        if not 1 <= self.score <= 5:
            raise ValueError(f"Score must be between 1 and 5, got {self.score}")


@dataclass
class MaturityAssessment:
    institution_name: str
    assessment_date: str
    assessor: str = ""
    dimensions: list[MaturityDimension] = field(default_factory=list)

    def overall_score(self) -> float:
        if not self.dimensions:
            return 0.0
        return sum(d.score for d in self.dimensions) / len(self.dimensions)

    def maturity_level(self) -> MaturityLevel:
        score = self.overall_score()
        if score < 1.5:
            return MaturityLevel.AD_HOC
        elif score < 2.5:
            return MaturityLevel.REACTIVE
        elif score < 3.5:
            return MaturityLevel.DEFINED
        elif score < 4.5:
            return MaturityLevel.MANAGED
        else:
            return MaturityLevel.OPTIMIZED

    def lowest_scoring_dimensions(self, n: int = 3) -> list[MaturityDimension]:
        """Return the n lowest-scoring dimensions — highest priority for improvement."""
        return sorted(self.dimensions, key=lambda d: d.score)[:n]

    def priority_gaps(self) -> list[str]:
        """Return gaps from lowest-scoring dimensions first."""
        gaps = []
        for dim in self.lowest_scoring_dimensions(3):
            gaps.extend(dim.gaps)
        return gaps

    def dimension_by_name(self, name: str) -> Optional[MaturityDimension]:
        for dim in self.dimensions:
            if dim.name.lower() == name.lower():
                return dim
        return None

    def summary_report(self) -> str:
        level = self.maturity_level()
        score = self.overall_score()
        level_descriptions = {
            MaturityLevel.AD_HOC: "Processes are inconsistent and depend on individual effort.",
            MaturityLevel.REACTIVE: "Compliance is event-driven; limited systematic management.",
            MaturityLevel.DEFINED: "Processes are documented and technology is in active use.",
            MaturityLevel.MANAGED: "Capability is measured, integrated, and near-real-time.",
            MaturityLevel.OPTIMIZED: "Compliance is a strategic capability with continuous improvement.",
        }

        report_lines = [
            "=" * 60,
            "REGTECH MATURITY ASSESSMENT",
            f"Institution:     {self.institution_name}",
            f"Assessment Date: {self.assessment_date}",
            f"Assessor:        {self.assessor or 'Not specified'}",
            "=" * 60,
            f"OVERALL SCORE:  {score:.2f} / 5.00",
            f"MATURITY LEVEL: {level.name.replace('_', ' ')} (Stage {level.value})",
            f"DESCRIPTION:    {level_descriptions[level]}",
            "",
            "DIMENSION SCORES:",
            "-" * 40,
        ]

        for dim in sorted(self.dimensions, key=lambda d: d.score):
            bar = "#" * dim.score + "." * (5 - dim.score)
            report_lines.append(
                f"  [{bar}] {dim.score}/5  {dim.name}"
            )

        report_lines.extend([
            "",
            "PRIORITY GAPS (lowest-scoring dimensions first):",
            "-" * 40,
        ])
        for i, gap in enumerate(self.priority_gaps(), 1):
            report_lines.append(f"  {i}. {gap}")

        report_lines.extend([
            "",
            "RECOMMENDED FOCUS AREAS:",
            "-" * 40,
        ])
        for dim in self.lowest_scoring_dimensions(3):
            report_lines.append(
                f"  - {dim.name} (score: {dim.score}/5): Advance to "
                f"Stage {min(dim.score + 1, 5)}"
            )

        report_lines.append("=" * 60)
        return "\n".join(report_lines)


# --- Example: Cornerstone Financial Group Assessment ---

cornerstone_assessment = MaturityAssessment(
    institution_name="Cornerstone Financial Group",
    assessment_date="2024-Q3",
    assessor="Priya Nair, Senior Manager — RegTech Advisory",
    dimensions=[
        MaturityDimension(
            name="Process Automation",
            score=2,
            description="Degree to which compliance workflows are automated",
            evidence=[
                "Manual KYC refresh process — triggered by relationship manager, "
                "no systematic scheduling",
                "Regulatory reports produced in Excel with manual sign-off",
                "SAR case management in shared mailbox and spreadsheet",
            ],
            gaps=[
                "No automated periodic review scheduling for KYC refresh",
                "No workflow management system for case handling",
                "Regulatory reporting not connected to source systems",
                "No automated breach detection workflow",
            ],
        ),
        MaturityDimension(
            name="Data Quality",
            score=3,
            description="Reliability, completeness, and currency of compliance data",
            evidence=[
                "Data governance policy documented and board-approved (2023)",
                "Quarterly data quality reviews conducted by data team",
                "Customer data warehouse maintained with defined ownership",
            ],
            gaps=[
                "No automated data quality monitoring — reviews are manual "
                "and infrequent",
                "No golden source established for counterparty legal entity data",
                "Date of birth and address fields have ~18% null rate in "
                "legacy customer records",
                "No data lineage documentation for regulatory reports",
            ],
        ),
        MaturityDimension(
            name="Reporting Capability",
            score=2,
            description="Accuracy, timeliness, and auditability of regulatory reports",
            evidence=[
                "CMAR reports submitted on time in last 4 quarters",
                "Transaction reporting team established 2022",
            ],
            gaps=[
                "No automated reconciliation between source data and report output",
                "Report production relies on 3 key individuals — single point "
                "of failure risk",
                "No version control for report templates — changes made ad hoc",
                "Cannot currently demonstrate full data lineage for CMAR fields",
            ],
        ),
        MaturityDimension(
            name="Monitoring Effectiveness",
            score=2,
            description="Real-time visibility into compliance posture and risk",
            evidence=[
                "Transaction monitoring system deployed (vendor: legacy TM platform)",
                "Monthly compliance dashboard reported to board",
            ],
            gaps=[
                "TM alert thresholds not reviewed since initial deployment (2019)",
                "Alert false positive rate estimated at 94% — investigation "
                "quality degraded",
                "No real-time compliance dashboard — board pack is monthly, "
                "backward-looking",
                "No monitoring of conflicts of interest or personal account "
                "dealing outside of manual sign-off",
            ],
        ),
        MaturityDimension(
            name="Audit Trail Completeness",
            score=3,
            description="Completeness and integrity of records for regulatory examination",
            evidence=[
                "Core banking system maintains transaction logs",
                "Email archiving in place for 7 years",
                "Compliance decisions documented in case management system "
                "for majority of activities",
            ],
            gaps=[
                "Audit trail fragmented across 6 separate systems with no "
                "unified search capability",
                "Verbal decisions (telephone, in-person) not systematically recorded",
                "Voice recording retention and retrieval process not tested in "
                "last 24 months",
                "Policy exception approvals documented inconsistently",
            ],
        ),
    ],
)

# Generate and print the assessment report
print(cornerstone_assessment.summary_report())

# Access specific dimensions
data_dim = cornerstone_assessment.dimension_by_name("Data Quality")
if data_dim:
    print(f"\nData Quality gaps: {len(data_dim.gaps)}")

# Identify quick wins — dimensions close to the next level
quick_wins = [
    d for d in cornerstone_assessment.dimensions if d.score == 2
]
print(f"\nDimensions at Stage 2 (highest ROI for Stage 3 investment): "
      f"{[d.name for d in quick_wins]}")

When run against the Cornerstone Financial Group data, this assessment produces an overall score of 2.40 — firmly in the Reactive band, with three of the five dimensions at Stage 2, the remaining two at Stage 3, and none above Stage 3. The output surfaces the specific gaps that must be addressed before significant RegTech investment will produce reliable returns.


35.4 Defining the RegTech Program Strategy

35.4.1 Starting with the Regulatory Obligation, Not the Technology

The most common strategic error in RegTech program design is beginning with a technology — "we need an AI-powered monitoring solution" — rather than with a regulatory obligation or risk problem. The technology-first approach produces programs that are solutions in search of problems, investments that cannot be measured against regulatory outcomes, and implementations that stall because nobody can articulate what success looks like.

The correct starting point is the regulatory obligation inventory: a structured catalogue of every material regulatory requirement that applies to the organization, mapped against the current state of compliance capability for each requirement. In its most basic form, this is a gap analysis: for each regulatory obligation, what is the current state of compliance, and what is the target state?

The obligation inventory has several useful properties. It provides an objective basis for prioritizing RegTech investments — the gaps with the highest regulatory risk or the most resource-intensive manual remediation are obvious candidates for early automation. It creates a common language for discussing compliance investment with non-compliance stakeholders: the CFO who cannot evaluate the merits of "an AI surveillance platform" can evaluate the merits of "reducing the annual cost of fulfilling our MiFID II transaction reporting obligation from £1.2M to £0.4M while simultaneously improving accuracy." And it provides a measurement framework: once deployed, did the technology close the identified gap?
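The obligation inventory lends itself to a simple structured representation. The sketch below is a minimal illustration: the field names are ours rather than a standard schema, and it assumes a 1-5 regulatory risk scale with annual manual cost as the tiebreaker for prioritization.

```python
from dataclasses import dataclass

@dataclass
class ObligationGap:
    """One entry in a regulatory obligation inventory (illustrative schema)."""
    obligation: str            # e.g. "MiFID II transaction reporting"
    current_state: str
    target_state: str
    regulatory_risk: int       # 1 (low) to 5 (high)
    annual_manual_cost: float  # cost per year of fulfilling it manually, in GBP

def prioritise(inventory: list[ObligationGap]) -> list[ObligationGap]:
    """Highest regulatory risk first; manual cost breaks ties."""
    return sorted(
        inventory,
        key=lambda g: (-g.regulatory_risk, -g.annual_manual_cost),
    )

# Hypothetical inventory entries
inventory = [
    ObligationGap("MiFID II transaction reporting",
                  "manual, spreadsheet-based", "automated with full lineage",
                  regulatory_risk=4, annual_manual_cost=1_200_000),
    ObligationGap("KYC periodic refresh",
                  "ad hoc, RM-triggered", "scheduled refresh workflow",
                  regulatory_risk=4, annual_manual_cost=800_000),
    ObligationGap("Sanctions screening",
                  "vendor tool, never tuned", "calibrated screening with QA",
                  regulatory_risk=5, annual_manual_cost=300_000),
]
ranked = prioritise(inventory)
```

Even this minimal structure supports the measurement framework described above: each entry records a current state and a target state, so after deployment the question "did the technology close the identified gap?" has a concrete referent.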

35.4.2 The Three Strategic Orientations

Organizations approach RegTech investment from three distinct strategic orientations, each of which implies a different scope, governance model, and return expectation.

Compliance-Driven (Regulatory Minimum): The program is designed to achieve compliance with specific regulatory requirements at minimum cost and minimum risk of regulatory action. Technology is justified primarily by its ability to reduce the cost of compliance, reduce error rates, or reduce the volume of manual effort. This orientation is characteristic of organizations under direct regulatory pressure — those responding to enforcement actions, thematic reviews, or Dear CEO letters. It produces focused, well-scoped programs with clear ROI but limited strategic ambition.

This is the right orientation when the primary driver is closing a specific, identified regulatory gap. It is the wrong orientation when it becomes the permanent strategic posture of a large organization: a compliance-minimum approach to RegTech leaves significant value on the table in the form of risk reduction benefits, operational efficiency, and competitive positioning that more ambitious programs deliver.

Risk-Driven (Beyond the Minimum): The program is designed not just to comply with current regulations, but to build a risk management capability that exceeds regulatory requirements and addresses business risks that regulation may not fully capture. Technology is justified by its risk reduction benefits — reduced probability of a regulatory breach, reduced severity of a breach when it occurs, faster detection of emerging risk, and better management information for risk-based decision-making. This orientation is characteristic of organizations that have experienced regulatory difficulty and want to be ahead of the next problem.

The risk-driven orientation is appropriate for organizations with material compliance risk exposure — large complex businesses, firms operating in multiple jurisdictions, or organizations with business models that are inherently difficult to monitor. It produces more ambitious programs with harder-to-quantify ROI, which requires more sophisticated business case construction.

Business-Driven (Compliance as Competitive Advantage): The program is designed to turn compliance capability into a competitive advantage. This orientation recognizes that for some organizations and some products, the ability to demonstrate regulatory excellence — faster KYC completion, lower regulatory risk in counterparty relationships, demonstrably superior conduct standards — directly supports business development. A bank that can onboard a new institutional client in two days versus a competitor's three weeks has a genuine commercial advantage. An asset manager that can demonstrate automated, audited ESG monitoring to institutional investors is more competitive for sustainability-focused mandates.

The business-driven orientation is rare but increasingly important. It requires the compliance function to think like a product team: what compliance capabilities would our clients value if we could demonstrate them? What regulatory certifications or assurances would open new client segments? It produces programs that are harder to govern (because they span compliance, technology, and commercial functions) but which generate the most durable value.

35.4.3 Making the Strategic Case

The strategic case for a RegTech program must be made to at least four distinct audiences with distinct interests: the Board (concerned with regulatory risk and fiduciary duty); the CFO (concerned with cost and return on investment); the business lines (concerned with whether compliance requirements impede growth); and the compliance and risk functions (concerned with capability and regulatory standing).

Regulatory Pressure: For organizations under active regulatory scrutiny — whether through formal investigation, thematic review, or early-stage supervisory engagement — the regulatory pressure argument is often sufficient to drive investment. The question is not whether to invest, but how much and where. Regulatory pressure arguments should be quantified where possible: not just "the FCA has written to us" but "the FCA has identified specific gaps that we estimate will require X remediation if not addressed, or will result in Y enforcement risk if not closed."

Operational Efficiency: Compliance is labor-intensive. In most mid-to-large regulated institutions, the compliance function employs between 2% and 6% of total headcount, with significant additional compliance-adjacent work embedded in business lines. Manual compliance processes are expensive, error-prone, and difficult to scale. The operational efficiency case is about reducing the unit cost of compliance — the cost per KYC refresh, the cost per regulatory report, the cost per SAR case managed — through automation. This argument is well-understood by CFOs and is often the primary driver of business case approval.

Risk Reduction: Better compliance technology produces fewer regulatory breaches, faster detection of emerging issues, and better evidence of compliance posture to regulators. Risk reduction is more difficult to quantify than operational efficiency but is typically more material in financial terms. A single enforcement action, a business interruption from a regulatory investigation, or the cost of a comprehensive remediation program can easily exceed the entire cost of a multi-year RegTech program. The risk reduction argument should be grounded in base rates where possible: how frequently do organizations in similar situations face regulatory action, and what does that action typically cost?

Market Differentiation: For organizations whose clients care about regulatory standing — prime brokerage relationships, institutional asset managers, fintech partnerships — the ability to demonstrate regulatory excellence is a commercial asset. This argument is context-specific and not universally applicable, but for organizations where it applies it can be compelling.

35.4.4 Stakeholder Mapping

Every RegTech program has a stakeholder landscape that must be understood and managed. Priya maps stakeholders across four categories:

Must Approve: Stakeholders without whose active approval the program cannot proceed. Typically the Board (for programs above a materiality threshold), the CCO (for compliance-facing programs), the CTO or CIO (for technology-facing programs), and the CFO (for any program with significant budget implications). These stakeholders need to be convinced of the case for the program; their concerns must be directly addressed in the program strategy.

Must Execute: Stakeholders who will be responsible for implementing the program. Typically the program management team, the technology delivery function, and the compliance operations teams who will use the resulting systems. These stakeholders need to be involved in design, not just informed of decisions. Programs that do not involve must-execute stakeholders in design consistently encounter passive resistance during implementation.

Must Support: Stakeholders who do not own the program but whose active cooperation is necessary for success. Business line heads whose data the program will use or whose processes the program will change. The legal function for regulatory interpretation. The HR function for any program element that affects roles and responsibilities. Internal audit for assurance. These stakeholders need to understand the program's purpose and benefit; perceived threat to existing roles must be addressed proactively.

Can Block: Stakeholders who cannot approve but can veto or effectively sabotage the program through non-cooperation. This category is frequently underestimated. A Head of Technology who does not believe a compliance-driven program is a genuine priority can ensure that engineering resource is perpetually unavailable without ever formally refusing. A business line manager whose client data the program requires can introduce data access delays that stretch indefinitely. Identifying can-block stakeholders and understanding their concerns is as important as managing must-approve stakeholders.

35.4.5 The RegTech Strategy Document

Every program above a certain scale should have a written strategy document. The discipline of writing the strategy forces clarity; the document becomes a reference point for resolving disputes during implementation; and it provides a basis for communicating the program to new stakeholders who join mid-program.

The RegTech strategy document should contain, at minimum:

Program Objectives: What specific regulatory obligations or risk problems is this program solving? What are the measurable outcomes that will define success?

Scope: What is in scope and explicitly out of scope? What jurisdictions, legal entities, and business lines does the program cover? What regulatory obligations are included?

Guiding Principles: The principles that will govern design and implementation decisions when tradeoffs arise. For example: "Data quality will be addressed before analytics capability" (data-first principle); "We will not deploy a system that cannot be supported after implementation" (sustainability principle); "Regulatory coverage takes priority over operational efficiency where the two conflict" (compliance primacy principle).

Constraints: What are the non-negotiable constraints? Budget, timeline, regulatory deadlines, technology environment, organizational change capacity?

Success Metrics: How will the program measure success, and at what intervals? A good metrics set includes at least one input metric (capability deployed), one output metric (process outcome improved), and one outcome metric (regulatory risk reduced or regulatory standing improved).


35.5 Governance — Who Owns the RegTech Program?

35.5.1 The Three-Way Governance Problem

RegTech programs exist at the intersection of three organizational functions that all have legitimate claims to ownership — and whose interests and incentives frequently diverge.

The Chief Compliance Officer owns the regulatory obligations that the program is designed to address. The CCO has the clearest understanding of what the regulation requires, what the regulatory risk is, and what "done" looks like from a regulatory perspective. However, the CCO typically lacks the technology capability and budget to deliver a complex technology program, and may lack the organizational authority to impose change on business lines and technology functions.

The CTO or CIO owns the technology infrastructure on which the program will be built. The technology function has engineering capability, vendor relationships, and enterprise architecture standards that any new RegTech deployment must comply with. However, technology functions often lack the regulatory expertise to make compliance-sensitive design decisions, and may deprioritize compliance technology in favor of revenue-generating technology investment.

The COO owns the operations processes that the program will change. Compliance technology almost always involves changes to how operations teams do their work — new workflows, new data inputs, new exception management processes. The COO has the organizational authority to implement process change in operations, but may not have the regulatory knowledge to understand what the process changes need to achieve, or the technology budget to fund them.

This three-way tension is not a deficiency in organizational design. It reflects the genuine multi-disciplinary nature of RegTech programs. Managing it productively requires explicit governance design.

35.5.2 Common Governance Structures

CCO-Led with Technology Partnership: The CCO holds program ownership; the CTO/CIO provides a technology lead who works within the program team as a permanent partner. This structure works well when the regulatory requirement is clear and relatively contained (a specific reporting obligation, a defined KYC enhancement), when the CCO has strong executive authority and peer relationships, and when the technology scope is not excessively complex. The risk is that technology delivery velocity is constrained by the CCO's limited technology leadership capacity.

Technology-Led with Compliance Ownership: The CTO leads program delivery; the CCO retains ownership of the compliance specification and acceptance criteria. This structure works well for technically complex programs (a data platform rebuild, a new trade surveillance architecture) where technology leadership is the primary success factor. The risk is that compliance requirements are treated as just another set of functional requirements rather than as constraints that must be met exactly, leading to scope compromise and regulatory exposure.

Standalone RegTech Function: A dedicated RegTech function sits between compliance and technology, owned by neither. This is increasingly common in large, complex institutions. The RegTech function may report to the CCO, the CTO, or — in some models — directly to the COO or CEO. It employs people who are genuinely bilingual in compliance and technology, able to translate between both worlds. This structure is highly effective at scale but requires the organizational maturity to establish a new function with real budget, real authority, and real accountability.

Federated Model: Program ownership is distributed across business lines, with central coordination by a program management office. Each business line owns the implementation within its domain; central PMO sets standards, manages vendor relationships, and tracks overall progress. This structure works for geographically or business-line-dispersed organizations where centralized ownership is impractical. The risk is that federated programs are vulnerable to inconsistent implementation standards and to individual business line sponsors losing interest or changing priorities.

35.5.3 The RegTech PMO

For programs above a certain complexity threshold — typically, programs involving more than three systems, affecting more than two business lines, running for more than twelve months, or spending more than £500K — a Program Management Office is usually necessary.

The PMO for a RegTech program is not a bureaucratic overhead function. It has four specific jobs that the program will not get done reliably without dedicated resource: tracking dependencies between work streams (ensuring that the trade reporting system is not deployed before the data quality layer it depends on); managing vendor relationships and contracts; coordinating change management across business lines; and maintaining the program's risk register and escalation log.

The PMO should be staffed with people who have genuine delivery experience, not just project management credentials. A common failure mode is staffing the PMO with junior project coordinators who can maintain a Gantt chart but cannot identify when a technical dependency has been misunderstood, or when a vendor's delivery commitment is structurally unrealistic.

35.5.4 The RegTech Steering Committee

Every program of material size needs a steering committee that meets regularly and has genuine authority. "Genuine authority" means the committee can make spending decisions up to a defined threshold, approve scope changes, resolve cross-functional disputes, and escalate issues to the Board if required. A steering committee that meets monthly to receive progress reports but cannot make decisions is governance theater, not a governance mechanism.

Steering committee composition should include: the program sponsor (typically the CCO or an equivalent executive); the CTO or a designated technology representative; the COO or equivalent; a business line representative from each affected business line; the program director; and an independent observer (typically internal audit or risk management). The meeting cadence should be monthly for major programs, with the ability to convene ad hoc meetings for urgent decisions.

The steering committee agenda should cover, at minimum: milestone progress; risk and issue escalations; scope change requests; budget status; and key decisions required. The committee should produce formal minutes and a decision register that is maintained throughout the program.

35.5.5 Escalation Paths and Conflict Resolution

Every governance structure needs an explicit escalation path for disputes that cannot be resolved at program level. The most common dispute type is resource prioritization: a business line's operations team is needed for user acceptance testing but has been pulled to address a separate operational priority. Without a clear escalation path, these disputes resolve by default — the person or function with more organizational power gets what they want, and the program suffers.

Priya's rule for escalation paths: they must be agreed in advance, documented, and short. An escalation that requires three meetings and two written submissions before reaching a decision-maker will not be used — people will find workarounds. An escalation that can reach a final decision-maker within five business days will be used, and will resolve disputes before they become critical path delays.


35.6 Roadmapping — Sequencing the Build

35.6.1 Why Sequencing Matters

A RegTech roadmap is not simply a prioritized list of technology investments. It is a sequenced plan that respects the dependencies between investments — the technical, process, and data dependencies that determine which capabilities can be built before others.

The most common roadmapping error is treating a RegTech investment portfolio as a set of independent items that can be executed in any order. They cannot. A real-time compliance monitoring dashboard cannot be built before the data infrastructure it reads from is reliable. An automated regulatory reporting engine cannot be built before the golden source data it reports from has been established. An AI-powered transaction monitoring system cannot be tuned effectively before the false positive rate baseline of the existing system has been measured.

Sequencing errors produce programs that stall in the middle, not at the beginning. An organization that begins building its monitoring dashboard before its data foundation is stable will find, six months into the build, that the dashboard is displaying unreliable data and requires a parallel data remediation project — a project that should have come first. The resulting delay and rework cost is almost always greater than the time saved by beginning both projects simultaneously.
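Dependency-respecting sequencing is mechanically a topological sort: each roadmap item can only be scheduled after everything it depends on. A minimal sketch, using hypothetical item names and dependencies for illustration:

```python
# Minimal dependency-ordered sequencing of roadmap items via topological sort.
# Item names and their dependencies are hypothetical illustrations.
from graphlib import TopologicalSorter

# Each item maps to the set of items that must be delivered before it.
dependencies = {
    "data quality layer": set(),
    "golden source reference data": set(),
    "regulatory reporting engine": {"data quality layer",
                                    "golden source reference data"},
    "monitoring dashboard": {"data quality layer"},
    "AI transaction monitoring": {"monitoring dashboard",
                                  "golden source reference data"},
}

# static_order() yields a valid sequence, or raises CycleError if the
# roadmap contains a circular dependency — itself a useful planning check.
sequence = list(TopologicalSorter(dependencies).static_order())
print(sequence)
```

Any valid ordering places the data foundations before the capabilities that consume them — which is exactly the property the roadmap must preserve.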

35.6.2 The Three-Horizon Roadmap

Priya structures RegTech roadmaps around three time horizons, each with a distinct character and objective:

Horizon 1 — Quick Wins (0 to 6 months): Actions that address the highest-priority regulatory gaps with the lowest implementation complexity. Horizon 1 is deliberately not about transformation — it is about demonstrating progress, generating organizational confidence, reducing the most acute regulatory risk, and building the operational foundations that more complex capabilities will require.

Quick wins are typically process improvements (not technology deployments), policy fixes, data remediation tasks, and the deployment of simple, already-scoped tools that do not require significant integration work. A KYC policy update, the deployment of a document collection tool that integrates with an existing system, the establishment of a data quality monitoring routine for the three most critical data elements — these are Horizon 1 activities. Their value is twofold: they reduce risk, and they create the organizational momentum and executive confidence that sustains the longer-horizon work.

Horizon 2 — Capability Build (6 to 18 months): The major technology deployment and process redesign work. Horizon 2 activities are the core investments of the RegTech program: the KYC automation platform, the transaction monitoring upgrade, the regulatory reporting engine, the data governance platform. These are complex, integration-heavy, and organizationally demanding. They require substantial project management, change management, vendor management, and technology delivery capacity.

Horizon 2 work should be sequenced by dependency: data infrastructure before analytics capability; process design before technology deployment; user acceptance testing before production rollout; training and change management before go-live. The Horizon 2 schedule should include explicit milestones for each dependency, with the consequence of missing each dependency clearly documented.

Horizon 3 — Transformation (18 to 36 months): The capabilities that require a foundation built in Horizons 1 and 2 to function. Horizon 3 is where the genuinely transformative RegTech investments sit: AI-powered behavioral surveillance that requires a clean, integrated transaction data platform; continuous compliance monitoring that requires automated data feeds from all relevant systems; predictive risk analytics that require historical compliance data of sufficient quality and depth to train a model.

Horizon 3 activities should not be planned in detail at program initiation. The technology landscape, regulatory requirements, and organizational context will all have changed by the time Horizon 3 work begins. What should be planned is the Horizon 3 direction: a statement of the target-state capability that the program is building toward, which guides Horizon 1 and 2 design decisions (particularly data architecture decisions) to ensure they do not create technical debt that forecloses Horizon 3 options.

35.6.3 The Data-First Principle

The single most important sequencing principle in RegTech roadmapping is: data infrastructure work must precede most analytics investments. This is so frequently violated, and the consequences of violating it are so consistently severe, that it warrants explicit statement as a named principle.

"Data first" does not mean building a perfect data warehouse before deploying any capability. It means identifying the specific data that each planned capability will consume, assessing the current quality and accessibility of that data, and scheduling the work required to make that data fit-for-purpose before scheduling the capability that depends on it.

In practice, the data-first principle means:

- Establish golden sources for critical reference data (counterparty identities, legal entity hierarchy, product master data) before building analytics that use those sources.
- Implement automated data quality monitoring for key data sets before deploying machine learning models that will be sensitive to data quality problems.
- Build or procure a data lineage capability before deploying a regulatory reporting system whose outputs must be audited.
- Address known data quality issues (null rates, duplication, outdated records) before go-live of any system that will generate compliance outputs from that data.

The cost of doing data work before capability deployment is always lower than the cost of remediating a capability that has been deployed on bad data. Priya's experience across more than twenty RegTech program engagements has not produced a single exception to this rule.
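A pre-go-live data quality check of the kind described above can be expressed as a simple gate. This is a sketch only — the field names and tolerance thresholds are hypothetical and would be set per data set in practice:

```python
# Illustrative pre-go-live data quality gate.
# Field names and tolerance thresholds are hypothetical.
def data_quality_gate(records: list[dict], required_fields: list[str],
                      max_null_rate: float = 0.01,
                      max_duplicate_rate: float = 0.005) -> bool:
    """Return True only if null and duplicate rates are within tolerance."""
    total = len(records)
    if total == 0:
        return False
    # Null-rate check per required field.
    for field in required_fields:
        nulls = sum(1 for r in records if r.get(field) in (None, ""))
        if nulls / total > max_null_rate:
            return False
    # Duplicate-rate check on the combined key.
    keys = [tuple(r.get(f) for f in required_fields) for r in records]
    duplicates = total - len(set(keys))
    if duplicates / total > max_duplicate_rate:
        return False
    return True
```

A gate like this, run automatically before any go-live, turns "the data must be fit for purpose" from an aspiration into a testable release criterion.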

35.6.4 Prioritization Frameworks

When there are more potential investments than the organization can pursue simultaneously — which is almost always — a structured prioritization framework prevents ad hoc decision-making and provides a defensible basis for sequencing decisions.

The most robust framework for RegTech prioritization combines three dimensions:

Regulatory Risk Weight: What is the probability and severity of a regulatory adverse outcome if this gap is not closed? High-probability, high-severity gaps (a regulatory deadline that is six months away with a known penalty regime attached) should be prioritized over low-probability, low-severity gaps (a process improvement that would be beneficial but is not connected to any specific regulatory requirement).

Value/Effort Matrix: What is the value of closing this gap (regulatory risk reduction plus operational efficiency gain plus strategic benefit) relative to the effort required (cost, complexity, time, change management burden)? High-value, low-effort items should be prioritized as quick wins. High-value, high-effort items are the major capability builds that belong in Horizon 2. Low-value items of any effort level should be deprioritized or removed from the roadmap.

Dependency Order: Regardless of standalone priority, what is the structural position of this item in the dependency chain? An item that is a prerequisite for six other high-priority items must be sequenced early even if its standalone priority rank is low.
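The first two dimensions can be combined into a single comparable score, with dependency order applied afterwards as an override. A minimal sketch, with hypothetical items, weights, and scores:

```python
# Sketch of a combined prioritization score. Items, weights, and scores
# are hypothetical; real programs would calibrate the scales.
def priority_score(reg_risk: float, value: float, effort: float) -> float:
    """Higher regulatory risk and value raise priority; higher effort lowers it."""
    return reg_risk * (value / effort)

items = {
    "KYC policy update":               priority_score(reg_risk=4, value=3, effort=1),
    "Transaction monitoring upgrade":  priority_score(reg_risk=5, value=5, effort=4),
    "Nice-to-have workflow tool":      priority_score(reg_risk=1, value=2, effort=2),
}
ranked = sorted(items, key=items.get, reverse=True)
print(ranked)
# Dependency order is applied afterwards: a low-scoring item that is a
# prerequisite for several high-scoring items must still be sequenced early.
```

The score is not the decision — it is a defensible starting point that the steering committee can then adjust for dependencies and regulatory deadlines.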


35.7 The RegTech Business Case

35.7.1 Building the ROI Case

The financial justification for a RegTech program typically encounters a fundamental asymmetry: the costs are concrete and immediate; the benefits are probabilistic, distributed across time, and partly non-financial. Building a credible business case requires addressing this asymmetry explicitly rather than pretending it does not exist.

Chapter 38 of this book covers financial evaluation of RegTech investments in depth. This section provides the framework context relevant to program strategy.

35.7.2 The Four Value Categories

RegTech investments generate value across four categories, not all of which will be relevant in every case:

Cost Efficiency: The reduction in the direct cost of compliance — staff time, vendor fees, error remediation, manual report production. Cost efficiency is the most straightforward value category to quantify and is typically the primary driver of business case approval. The calculation is: (current cost of manual process) minus (projected cost of automated process) minus (cost of technology and implementation) equals net efficiency saving. This calculation should be based on time-and-motion data wherever possible, not on estimates.

Risk Reduction: The reduction in the expected cost of regulatory adverse outcomes — enforcement actions, remediation programs, civil penalties, business restrictions. Risk reduction is calculated as: (baseline probability of adverse outcome) times (average cost of adverse outcome) minus (post-investment probability of adverse outcome) times (average cost of adverse outcome). The inputs to this calculation are inherently uncertain, but even rough estimates are useful: an enforcement action in the area of concern cost comparable firms an average of £X, and the investment reduces the probability of a similar outcome from Y% to Z%, producing an expected benefit of the difference.
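The two word-formulas above — net efficiency saving and expected risk reduction — reduce to simple arithmetic. All figures in this sketch are hypothetical, chosen only to illustrate the shape of the calculation:

```python
# Hypothetical figures illustrating the two value calculations above.

# Cost efficiency: net saving from automating a manual process.
current_manual_cost = 1_200_000      # annual cost of the manual process
automated_cost = 400_000             # projected annual cost post-automation
annualized_implementation = 250_000  # technology + implementation cost, annualized
net_efficiency_saving = (current_manual_cost - automated_cost
                         - annualized_implementation)  # £550,000

# Risk reduction: change in the expected cost of a regulatory adverse outcome.
adverse_outcome_cost = 10_000_000    # average cost of an enforcement action
p_baseline = 0.08                    # baseline annual probability of that outcome
p_post_investment = 0.03             # probability after the investment
expected_risk_reduction = (p_baseline - p_post_investment) * adverse_outcome_cost
# ≈ £500,000 per year in expected-cost terms
```

Note that the risk reduction figure is an expected value, not a cash saving — which is why the business case must present it alongside, not blended into, the efficiency line.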

Regulatory Relationship: Some RegTech investments generate value through improvement in the quality of the supervisory relationship — demonstrating proactive investment in compliance capability, providing regulators with better-quality regulatory data, or enabling faster response to supervisory requests. This value is difficult to quantify but is real and material for organizations that face intensive supervisory engagement.

Speed to Market: Compliance bottlenecks that delay product launches, client onboarding, or new market entry have a direct commercial cost. RegTech investments that remove those bottlenecks generate value equivalent to the commercial benefit of the activities they enable. This value category is particularly important for challenger banks, fintech firms, and rapidly growing organizations where compliance velocity is a genuine competitive constraint.

35.7.3 Quantifying the Cost of the Status Quo

One of the most effective techniques in RegTech business case construction is the cost-of-status-quo analysis: a structured calculation of what the current approach actually costs, which establishes the baseline against which the investment case is measured.

The cost of the status quo for a compliance function typically includes: direct staff costs (hours spent on processes that will be automated, at fully-loaded cost); error and remediation costs (the cost of fixing data errors, reprocessing reports, and investigating false positives generated by uncalibrated systems); vendor costs for existing tools that will be replaced or decommissioned; opportunity costs of staff time spent on manual compliance rather than higher-value activities; and the actuarially adjusted cost of regulatory risk attributable to the identified gaps.

Organizations consistently underestimate the cost of their current approach because the costs are distributed across many staff members, partly invisible (nobody tracks the time spent on compliance activities that could be automated), and normalized through habit. A structured cost-of-status-quo analysis frequently reveals that the business case for RegTech investment is significantly stronger than initially apparent.
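A structured cost-of-status-quo calculation can be as simple as adding up the components listed above. The figures in this sketch are hypothetical:

```python
# Hypothetical cost-of-status-quo calculation for one compliance process.
hours_per_year = 12_000           # staff hours spent on automatable work
fully_loaded_rate = 85            # fully-loaded staff cost, £/hour
staff_cost = hours_per_year * fully_loaded_rate  # £1,020,000

error_remediation = 180_000       # fixing errors, reprocessing, false positives
legacy_vendor_fees = 120_000      # tools to be replaced or decommissioned
regulatory_risk_cost = 0.05 * 6_000_000  # probability × cost of adverse outcome

cost_of_status_quo = (staff_cost + error_remediation
                      + legacy_vendor_fees + regulatory_risk_cost)
print(f"£{cost_of_status_quo:,.0f}")  # £1,620,000
```

The exercise is usually more valuable than the number: gathering the inputs forces the organization to measure time and error rates it has never tracked.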

35.7.4 Sensitivity Analysis

The business case should include a sensitivity analysis that identifies which assumptions drive the case most heavily — and what happens to the overall case if those assumptions are wrong.

For most RegTech business cases, the assumptions that matter most are: the probability and cost of a regulatory adverse outcome (risk reduction value is highly sensitive to these estimates); the achievable reduction in staff time post-automation (efficiency value is sensitive to actual automation rates, which vendors often overstate in demonstrations); and the all-in implementation cost (frequently underestimated by organizations that do not account for internal staff time, data remediation, and change management costs).
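A one-way sensitivity analysis flexes each of these assumptions in turn while holding the others at base case. The sketch below uses hypothetical base-case figures:

```python
# Sketch of a one-way sensitivity analysis on the three assumptions that
# typically drive a RegTech business case. Base-case figures are hypothetical.
def net_benefit(time_saved_pct, adverse_prob_reduction, implementation_cost):
    efficiency = 1_200_000 * time_saved_pct     # saving on a £1.2M manual process
    risk = adverse_prob_reduction * 10_000_000  # expected-cost reduction
    return efficiency + risk - implementation_cost

base = dict(time_saved_pct=0.6, adverse_prob_reduction=0.05,
            implementation_cost=900_000)
print(f"base case: £{net_benefit(**base):,.0f}")

# Flex each assumption in turn while holding the others at base case.
for key, pessimistic in [
    ("time_saved_pct", 0.35),          # vendor-claimed automation overstated
    ("adverse_prob_reduction", 0.02),  # risk benefit smaller than hoped
    ("implementation_cost", 1_500_000) # all-in cost underestimated
]:
    scenario = {**base, key: pessimistic}
    print(f"{key} = {pessimistic}: £{net_benefit(**scenario):,.0f}")
```

If a single pessimistic assumption flips the case from positive to negative, that assumption deserves validation work — time-and-motion measurement, vendor reference checks — before the program is approved.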


35.8 Common Failure Patterns

35.8.1 The Tool Graveyard

The tool graveyard is the accumulated inventory of compliance software that has been purchased but never reached meaningful production status. As described in the opening section, the asset manager Priya visited had seven platforms in this condition. The cost was approximately £2 million in wasted investment.

Tool graveyards are created by the same conditions: procurement decisions made without clear use cases; implementations that stall when initial enthusiasm wanes; tools that require data foundations that do not exist; and organizational indifference about whether a deployed tool is actually being used.

The remedy is pre-procurement discipline: before signing any vendor contract, the organization should be able to answer four questions. What specific process or problem will this tool address? Who will use it, and do they know that? What data will it consume, and is that data available? Who will own it after implementation is complete?

35.8.2 The Pilot Trap

The pilot trap is the state of perpetually running technology pilots that never scale to production. It is insidious because it looks like progress — a pilot is running, the vendor is engaged, the project is active — while in reality the organization is not extracting value from the technology and is not advancing its compliance capability.

Pilots enter the trap for several reasons: the pilot reveals data quality problems that feel impossible to address; the organizational sponsor loses enthusiasm or changes role; the technology works in the pilot but the cost of enterprise-scale deployment is higher than anticipated; nobody wants to be accountable for the production deployment going wrong; or the pilot is being used as a way to avoid making a definitive commitment.

The remedy is disciplined pilot design: before beginning a pilot, define the criteria that will determine whether the technology proceeds to production deployment. These criteria must be specific, measurable, and agreed in advance by the stakeholders who will make the go/no-go decision. A pilot with no predefined success criteria will never end.
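Predefined go/no-go criteria can be made executable, so the end-of-pilot decision is a mechanical evaluation rather than a negotiation. The criteria names and thresholds in this sketch are hypothetical:

```python
# Sketch of predefined pilot success criteria evaluated at pilot close.
# Criterion names and thresholds are hypothetical examples.
criteria = {
    "alert precision >= 40%":          lambda m: m["precision"] >= 0.40,
    "false positive reduction >= 25%": lambda m: m["fp_reduction"] >= 0.25,
    "data feed completeness >= 99%":   lambda m: m["feed_completeness"] >= 0.99,
    "cost per case <= £35":            lambda m: m["cost_per_case"] <= 35,
}

def go_no_go(measured: dict) -> tuple[bool, list[str]]:
    """Return the production go/no-go decision and any failed criteria."""
    failed = [name for name, check in criteria.items() if not check(measured)]
    return (len(failed) == 0, failed)

decision, failed = go_no_go({"precision": 0.52, "fp_reduction": 0.31,
                             "feed_completeness": 0.97, "cost_per_case": 28})
# decision is False here: the data feed completeness criterion failed.
```

Agreeing the `criteria` dictionary before the pilot starts is the disciplined-design step; the evaluation itself then takes minutes, not meetings.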

35.8.3 The Governance Vacuum

The governance vacuum occurs when a capability is deployed with no ongoing ownership. The technology goes live, the project is closed, and nobody is explicitly responsible for managing, maintaining, or improving the capability thereafter. The system operates in production, but its performance degrades as the regulatory environment changes, as data quality issues accumulate, and as the people who built it move on.

This failure pattern is discussed in detail in Case Study 2 (Cornerstone Financial Group's regulatory reporting platform). It is one of the most common and most costly failure modes in RegTech implementation.

The remedy is explicit post-production ownership: before any system goes live, the organization should be able to name the individual who is responsible for each of the following: system performance monitoring; data quality monitoring; regulatory change assessment (determining when the system needs to be updated in response to regulatory change); vendor relationship management; user support and training; and annual review of system effectiveness.

35.8.4 The Change Management Gap

The change management gap occurs when technology is deployed without the organizational change management required to ensure that the people who will use the technology actually change their behaviour. This is the mirror image of the governance vacuum: where the vacuum is about post-production ownership, the change management gap is about the transition from old to new ways of working.

Compliance technology does not improve compliance outcomes by existing. It improves compliance outcomes by changing what human beings do. If the people who are supposed to use the new system continue to use the old manual process — because they were not trained, because they don't trust the new system, because the new system creates more work for them than the old one, or because their manager hasn't made clear that using the new system is expected — the technology investment generates zero return.

Change management for RegTech programs includes: stakeholder engagement before go-live to ensure users understand why the change is happening; process redesign to reflect the new system's role in the workflow; training calibrated to different user types (system administrators, power users, occasional users); clear communication from management that the new system is the expected way of working; and performance measurement that tracks adoption, not just deployment.

35.8.5 The Regulatory Mis-Specification

Regulatory mis-specification occurs when a system is built to comply with a regulatory requirement as it was understood at the time of design, but the regulation changes before or after deployment and the system is not updated to reflect the change. The result is a system that is technically operational but no longer compliant with the current regulation.

This failure mode is particularly dangerous because it is invisible. The system is running; reports are being produced; alerts are being generated. The organization believes it is compliant. The regulatory examiner who understands that the applicable regulation changed eighteen months ago will disagree.

The remedy is embedding regulatory change management into the governance of every deployed system. Each system should have a named owner responsible for monitoring regulatory developments relevant to that system and assessing whether system updates are required. This assessment should occur at least annually, and whenever a relevant regulatory change is identified.


35.9 Priya's Five Questions

At the end of her first meeting with any new RegTech program client, Priya asks five questions. She does not ask them in this order — she weaves them into the conversation — but she does not leave until she has the answers. The answers tell her more about the probability of a successful program than any amount of slide deck review.

The five questions have been sharpened over five years and perhaps thirty client engagements. They are not the most sophisticated questions one could ask about a RegTech program. They are the questions that, most reliably, identify whether the program has what it needs to succeed.


Question 1: What specific regulatory obligation or risk are you solving for first?

This question tests strategic clarity. The answer "we need to improve our compliance generally" is not an answer — it is an indication that strategic clarity does not yet exist. The answer "we need to achieve compliance with the FCA's new Consumer Duty monitoring requirements by 31 July, and our current process produces monthly backward-looking reports rather than the near-real-time monitoring the FCA expects" is a real answer.

Organizations that cannot answer this question are not ready to deploy technology. They are ready to conduct the obligation inventory and maturity assessment that will allow them to answer it.


Question 2: Who will own the output of this system day-to-day, and do they know that?

This question tests governance readiness. Compliance systems produce outputs — alerts, reports, data feeds, flags — that require human decisions. Who will make those decisions? What authority do they have? What are they accountable for? Do those people know that this is their job?

Priya has walked away from engagements where this question produced blank stares. Not because she was being rigid, but because she knew from experience that a system with nobody owning its outputs will produce outputs that nobody acts on, which means the system will add cost without adding value.


Question 3: What data will this system use, and is that data clean, current, and accessible?

This question tests data readiness. The answer should be specific: "The transaction monitoring system will consume trade data from our order management system, customer risk ratings from our CRM, and counterparty data from our client master. The trade data is clean and current; the customer risk ratings were last updated in 2021 and need a full refresh; the counterparty data has a 22% null rate on the LEI field that must be addressed before go-live."

An answer that is vague — "we'll figure out the data as we go" — is a warning sign. Data problems discovered during implementation are significantly more expensive to fix than data problems identified and addressed beforehand.
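A null-rate figure like the 22% LEI gap above is cheap to compute before a vendor contract is signed. The sketch below shows one minimal way to do it; the field names, sample records, and the 5% tolerance are all hypothetical, chosen only to illustrate the check.

```python
def field_null_rates(records, required_fields):
    """Compute the fraction of records missing each required field.

    A value counts as null if the key is absent, None, or an empty string.
    """
    total = len(records)
    if total == 0:
        return {f: 0.0 for f in required_fields}
    rates = {}
    for field in required_fields:
        nulls = sum(1 for r in records if r.get(field) in (None, ""))
        rates[field] = nulls / total
    return rates

def readiness_report(records, required_fields, max_null_rate=0.05):
    """Flag any field whose null rate exceeds the agreed tolerance."""
    rates = field_null_rates(records, required_fields)
    return {f: rate for f, rate in rates.items() if rate > max_null_rate}

# Hypothetical counterparty extract: two of the four records lack an LEI.
counterparties = [
    {"name": "Alpha Ltd", "lei": "529900T8BM49AURSDO55"},
    {"name": "Beta GmbH", "lei": None},
    {"name": "Gamma SA",  "lei": ""},
    {"name": "Delta Inc", "lei": "5493001KJTIIGC8Y1R12"},
]
print(readiness_report(counterparties, ["name", "lei"]))
# The 'lei' field is flagged: 2 of 4 records (50%) are null, above the 5% tolerance.
```

Running this kind of profiling against every source system a proposed tool will consume turns "we'll figure out the data as we go" into a concrete remediation list with a measurable baseline.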


Question 4: How will you measure whether this worked?

This question tests outcome orientation. A RegTech program that cannot answer this question is not solving a defined problem — it is building infrastructure for its own sake. The answer should specify both leading indicators (the system is live, adopted, running without errors) and lagging indicators (the regulatory gap has been closed, the cost of the affected process has declined, the alert false positive rate has decreased).

Organizations that have not defined success before starting a program will define it retrospectively, in terms of what they happened to achieve. This is not measurement; it is rationalization.


Question 5: What changes to human processes does this technology require, and have you planned for those changes?

This question tests change management readiness. Every compliance technology changes how human beings do their work. If the organization has not identified those changes, designed the new processes, and planned the transition, the technology will go live into a process vacuum.

The answer Priya is looking for includes: a description of the current process and the target process; identification of which roles will change and how; a change management plan (communication, training, transition support); and a timeline for the process change that is coordinated with the technology deployment.

The answer that worries her most: "We'll deal with the process side after the system is live." In twenty years of collective RegTech implementation experience across Priya's team, no system has ever changed human behaviour after the fact. The process design must come first.


These five questions, honestly answered, constitute a readiness assessment for any RegTech initiative. A program that can answer all five clearly is ready to begin. A program that cannot answer one is not — and knowing which question it cannot answer tells you exactly what work needs to be done before deployment begins.

The asset manager that Priya visited was unable to answer questions one, two, four, or five at that initial meeting. They had a partial answer to question three. What Priya told them, and what the first eight weeks of the engagement produced, was not a list of platforms to buy. It was the foundation of answers to all five questions. Only then — with strategic clarity, governance designed, data assessed, success defined, and process changes mapped — did it make sense to talk about which tools to acquire.

The twenty-three platforms on the CTO's slide deck were not a strategy. But several of them were the right answer, once the right questions had been asked.


Summary

Building a RegTech program is not the same as buying compliance software. It is the sustained organizational project of building compliance as a genuine capability — a capability that is strategically directed, institutionally governed, data-grounded, and continuously measured.

The foundation of that project is a clear-eyed assessment of where the organization currently sits on the compliance maturity curve. From that foundation, a program strategy can be constructed around the specific regulatory obligations and risk problems the organization must solve, in the priority order that its regulatory exposure and organizational capacity dictate. Governance — the explicit assignment of ownership, accountability, and decision rights — transforms a strategy document into an operational reality. Roadmapping sequences the work in the order that dependencies permit and priorities demand, with data infrastructure as the non-negotiable first investment. And the business case, grounded in an honest cost-of-status-quo analysis, provides the financial rationale for the sustained investment the program requires.

Programs that avoid the five common failure patterns — the tool graveyard, the pilot trap, the governance vacuum, the change management gap, and the regulatory mis-specification — stand a realistic chance of reaching their objectives. Programs that do not will spend money without improving their compliance posture, which is the worst possible outcome: the cost of a program without its benefits.

Priya's five questions offer a pre-program diagnostic that any organization can apply before committing significant resources. They are not technically sophisticated. They are organizationally honest. And in the domain of RegTech program design, organizational honesty is the scarcest and most valuable resource of all.


Key Terms

Compliance Maturity Model: A five-stage framework (Ad Hoc, Reactive, Defined, Managed, Optimized) for assessing and benchmarking an organization's regulatory compliance capability across multiple dimensions.

Regulatory Obligation Inventory: A structured catalogue of every material regulatory requirement applicable to an organization, used as the foundation for RegTech program strategy and prioritization.

Three-Horizon Roadmap: A RegTech roadmapping framework that divides the program into Quick Wins (0–6 months), Capability Build (6–18 months), and Transformation (18–36 months) phases, with each horizon respecting the dependencies established in the previous one.

Governance Vacuum: The failure mode in which a RegTech capability is deployed without designated post-production ownership, leading to performance degradation and regulatory exposure over time.

Data-First Principle: The roadmapping principle that data infrastructure, data quality, and data governance work must be scheduled and completed before the analytics and reporting capabilities that depend on them.

Build/Buy/Borrow Spectrum: The decision framework for determining whether a compliance capability should be built internally, purchased from a vendor, or accessed through open-source libraries or industry consortia.

Cost of the Status Quo: The structured calculation of the total cost of the current compliance approach — including direct staff costs, error and remediation costs, vendor costs, and actuarially adjusted regulatory risk — used as the baseline for RegTech investment cases.

Pilot Trap: The program failure mode in which a technology pilot runs indefinitely without progressing to production deployment, generating cost without generating value.


Next: Chapter 36 — Vendor Selection and Management in RegTech Programs examines how to evaluate, select, contract with, and manage the RegTech vendors who will deliver much of the technology in your program.