Chapter 39: Key Takeaways — The Future of RegTech
Essential Insights
- SupTech is already here, not a future aspiration. Regulators in more than 50 jurisdictions are actively deploying supervisory technology — including API-based direct data access, machine learning analytics, and automated supervisory reporting. The FCA, ECB, BIS Innovation Hub, and SEC all have active SupTech programs. Institutions need to treat data architecture quality as a regulatory compliance matter today, not when SupTech pipelines arrive.
- Digital regulation changes who bears the interpretation risk. Machine-executable regulatory rules are in active pilot (FCA Digital Regulatory Reporting Phase 1). When a regulation is expressed as code, someone writes that code — and the compliance of a firm's behavior depends on whether the coder's interpretation matches the regulator's. The interpretive function remains human even as the execution becomes automated, and the stakes of interpretation errors increase.
- LLMs are research accelerators, not compliance decision-makers. Large language models offer genuine efficiency gains in regulatory horizon scanning, policy gap analysis, and document drafting. A 12% material error rate in LLM-generated regulatory summaries (tested in real pilots) illustrates the stakes of the hallucination risk. The safe deployment model is: LLM as first-pass research tool, requiring expert human verification before any output influences a compliance decision or regulatory submission.
- The compliance professional's core value is shifting, not disappearing. Technology handles more of the routine compliance execution. This concentrates compliance professionals' work on the complex, the novel, and the consequential — the cases that require judgment, interpretation, and accountability. The compliance professional of 2030 is less a processor of compliance information and more a governor of compliance systems.
- The regulatory horizon is widening, not stabilizing. The EU AI Act, DORA, MiCA, CSRD, PSD3, and CBDCs are not the end of regulatory development — they are the current wave of it. An anti-fragile compliance program is built to adapt to regulatory change, not to comply with a static set of requirements.
- ESG and climate risk disclosure is a new data compliance frontier. The CSRD, EU Taxonomy Regulation, and SEC climate disclosure rules create extensive new data collection, governance, and reporting obligations for which many institutions' data infrastructure is not yet adequate. This is an area of significant near-term compliance investment need.
- The most dangerous compliance posture is assuming yesterday's skills are sufficient for tomorrow. The compliance professional who does not develop data literacy, technology architecture understanding, and AI governance competency is not maintaining their position — they are falling behind the pace of change in their field.
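The safe LLM deployment model described above — first-pass research tool behind a mandatory human-verification gate — can be expressed as a simple control in code. This is a minimal illustrative sketch, not any vendor's API: the `LlmDraft` record, the field names, and the reviewer sign-off workflow are all hypothetical, and a production system would add audit logging and source-citation checks.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LlmDraft:
    """First-pass LLM output awaiting expert verification (hypothetical record)."""
    source_document: str
    summary: str
    verified: bool = False
    reviewer: Optional[str] = None
    corrections: list = field(default_factory=list)

def release_for_compliance_use(draft: LlmDraft) -> str:
    """Refuse to release any LLM output that has not passed expert review."""
    if not draft.verified or draft.reviewer is None:
        raise PermissionError(
            "LLM output is first-pass research only; "
            "expert verification is required before compliance use"
        )
    return draft.summary

# Workflow: an analyst checks the draft against the primary source,
# records corrections, and signs off before anything downstream can use it.
draft = LlmDraft(source_document="example_regulation.pdf",
                 summary="Draft summary of the regulation...")
draft.corrections.append("Corrected misstated implementation deadline")
draft.verified = True
draft.reviewer = "analyst_01"
print(release_for_compliance_use(draft))
```

The design point is that the gate is structural, not procedural: unverified output cannot flow into a compliance decision because the release function refuses it, rather than relying on staff remembering a policy.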
RegTech Trend Radar: Horizon and Readiness Table
| Trend | Timeline | Confidence | Recommended Action | Institutional Implication |
|---|---|---|---|---|
| FCA Digital Regulatory Reporting (DRR) API feeds | NOW (0-2 years) | High | Implement | Audit data architecture; ensure regulatory data is accessible, consistent, API-ready |
| EU AI Act high-risk AI compliance | NOW (0-2 years) | High | Implement | Inventory AI systems in compliance; assess against high-risk criteria; establish governance |
| LLM-assisted regulatory intelligence | NOW (0-2 years) | Medium | Pilot with controls | Deploy for horizon scanning with mandatory expert review; prohibit autonomous decision use |
| SupTech direct data access (FCA, ECB) | NOW (0-2 years) | High | Assess capability gaps | Data quality and governance investment; understand what a regulator would see in direct query |
| DORA / FCA CTP critical third-party oversight | NOW (0-2 years) | High | Implement | Renegotiate vendor contracts; register critical third parties; test operational resilience |
| ESG / CSRD data collection and reporting | NOW (0-2 years) | High | Implement | Build financed emissions data infrastructure; evaluate ESG data vendors |
| Machine-executable substantive regulation | NEAR (2-5 years) | Medium | Monitor + assess | Participate in FCA DRR consultations; track pilot outcomes; plan data architecture for compatibility |
| Digital euro / CBDC integration | NEAR (2-5 years) | Low-medium | Monitor | Follow ECB digital euro project; track MAS and Fed research; identify payment and AML implications |
| MiCA full deployment effects | NEAR (2-5 years) | High | Assess | If crypto-asset exposure exists: assess authorization requirements; plan compliance program additions |
| Embedded finance regulatory clarity | NEAR (2-5 years) | Medium | Monitor + assess | Clarify compliance obligations in BaaS and embedded product arrangements |
| DeFi regulatory framework (EU/US/UK) | NEAR (2-5 years) | Medium | Monitor | Track FATF virtual asset guidance updates; assess customer DeFi transaction due diligence |
| Comprehensive machine-executable substantive rules | FAR (5-10 years) | Low | Monitor | No near-term action required; maintain awareness through industry working groups |
| Full CBDC deployment in major economies | FAR (5-10 years) | Low | Monitor | Theoretical planning only at this stage; watch pilot deployments (digital renminbi) |
LLM Safe vs. Unsafe Use in Compliance
| Use Case | Safe? | Conditions / Risks |
|---|---|---|
| Regulatory horizon scanning (first pass) | Safe with controls | Outputs must be verified against primary sources by an expert before action |
| Policy gap analysis (first pass) | Safe with controls | LLM identifies areas for expert review; cannot certify compliance |
| Drafting training content (first pass) | Safe with controls | Expert review and revision required before deployment |
| SAR narrative drafting assistance | Safe with controls | Human analyst must review, revise, and take responsibility for final submission |
| Contract review for standard provisions | Safe with controls | Expert escalation required for non-standard or complex provisions |
| Summarizing regulatory publications | Safe with controls | Verification against original source text required; do not act on summary alone |
| Making compliance decisions (approve/reject) | Unsafe | Requires human judgment, accountability, and regulatory defensibility |
| Drafting regulatory submissions (unreviewed) | Unsafe | Hallucination risk; FCA and other regulators have received LLM-generated errors in submissions |
| Providing regulatory advice to clients (unreviewed) | Unsafe | Inaccurate advice creates professional liability and client harm |
| Interpreting ambiguous regulatory language | Unsafe | LLMs cannot reliably resolve interpretive uncertainty; output may appear authoritative but be wrong |
| Generating SAR decisions autonomously | Unsafe | Regulatory requirement for human judgment in SAR determination; cannot be delegated to AI |
| Replacing expert review of model outputs | Unsafe | Model governance requires human validation; LLM review is not a substitute |
Compliance Skills Evolution: 2015 vs. 2030
| Skill Domain | 2015 Compliance Professional | 2030 Compliance Professional |
|---|---|---|
| Regulatory knowledge | Deep knowledge of applicable regulatory text; ability to interpret rules | Same — plus ability to track rule changes in machine-readable form; fluency with digital regulatory frameworks |
| Technology literacy | Ability to use compliance software tools; basic spreadsheet competency | Data literacy; technology architecture understanding; AI governance competency; ability to govern complex systems |
| Reporting | Expertise in manual report preparation; XBRL familiarity | API-based data governance; understanding of direct regulatory data access; automated pipeline oversight |
| Vendor management | Contract review; relationship management; escalation processes | Technical due diligence; AI/ML system evaluation; DORA/CTP compliance assessment; model governance of vendor AI |
| Risk assessment | Rules-based risk scoring familiarity; qualitative risk judgment | AI-assisted risk model governance; model validation understanding; explainability requirements; SR 11-7 / EU AI Act literacy |
| Data management | Ability to pull and verify regulatory data; spreadsheet-based reconciliation | Data quality governance; data lineage understanding; API interface familiarity; ability to specify data requirements for systems |
| Communication | Ability to explain regulatory requirements to business; board reporting | Same — plus ability to explain AI system behavior to regulators; engagement with regulatory consultations on technology |
| Career capital | Breadth of regulatory knowledge; seniority and relationship network | Technology-regulation integration; AI governance expertise; SupTech readiness leadership; digital regulatory reporting expertise |
Three Questions Every Compliance Professional Should Be Asking
- "What would the regulator see if they queried our data directly today?" This question focuses attention on the gap between the compliance function's reported position and the underlying data reality. SupTech is advancing toward direct data access. Institutions that know the answer to this question are prepared; those that do not face the risk of a supervisory finding that surfaces data quality or governance issues they did not know existed.
- "For every AI or ML system we use in compliance, who is accountable for its outputs — and what happens when it is wrong?" This question forces explicit governance of the AI systems that increasingly underpin compliance functions. It requires identification of the human accountable for each system's outputs, documentation of the system's limitations, and an escalation path for situations where the system's output should not be relied upon. Without clear answers, AI governance is theoretical rather than operational.
- "What regulatory changes in the next three years would require us to rebuild a major compliance process — and how long would that take?" This question assesses program anti-fragility. The answer reveals whether the compliance program is built for a static regulatory environment (dangerous) or an adaptive one (defensible). Institutions that cannot answer this question clearly have not stress-tested their compliance program against the regulatory horizon they are actually facing.
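The first question above can be made concrete with a self-audit: run the completeness checks a regulator's direct query would implicitly run. This sketch is purely illustrative — the dataset, field names, and thresholds are hypothetical, and a real audit would also test validity, consistency, and lineage, not just missing values.

```python
import csv
import io

# Hypothetical extract of a regulatory-reportable dataset.
RAW = """account_id,country,risk_rating,last_review_date
A001,GB,HIGH,2024-03-01
A002,,MEDIUM,2023-11-15
A003,FR,,2022-01-09
"""

REQUIRED_FIELDS = ["account_id", "country", "risk_rating", "last_review_date"]

def completeness_report(raw: str) -> dict:
    """Count missing values per required field -- a rough proxy for what
    a direct supervisory query would surface immediately."""
    rows = list(csv.DictReader(io.StringIO(raw)))
    return {f: sum(1 for r in rows if not r[f]) for f in REQUIRED_FIELDS}

print(completeness_report(RAW))
# → {'account_id': 0, 'country': 1, 'risk_rating': 1, 'last_review_date': 0}
```

Running checks like this on a schedule, before a supervisor does, turns the question from rhetorical into an operational control.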
Chapter 39 — Key Takeaways complete.