Case Study 40.2: Building the Program View — Verdant Bank's Integration Journey
The Situation
- Organization: Verdant Bank (a fictional UK challenger bank)
- Maya Osei's role: Chief Compliance Officer (second year in post)
- Challenge: Transforming a collection of individually capable compliance systems into an integrated compliance program with coherent data flows, a unified audit trail, and cross-domain management information
- Timeline: Q1–Q4 2025
- Characters: Maya Osei; Amir (CISO); the data engineering team; Priya Nair (advisory capacity)
The Starting Point
By January 2025 — eighteen months after Maya had joined Verdant as CCO — the bank had made significant compliance technology investments. The KYC/identity platform was performing well (99.2% automated approval rate). The transaction monitoring upgrade had been completed (false positive rate reduced from 34% to 21%). The regulatory reporting platform was operational (97% on-time submission). Each system, reviewed individually, was doing its job.
The Risk Committee's October 2024 request — "show us how it all fits together" — had exposed the gap. Maya could present each system's metrics. She could not present a coherent picture of the compliance program as a whole. And when she tried to construct one manually, she discovered that the numbers were inconsistent: the customer count in the KYC system differed from the customer count in the transaction monitoring system. The risk classification of several hundred customers was different in the two systems. Three customers who had been the subject of AML case escalations had a clean profile in the KYC system — the case findings had never been fed back.
The program as a whole delivered less than the sum of its parts.
The Integration Project
Maya's January 2025 priority: design and implement a compliance data integration layer that would:
- Establish a single customer master record accessible to all systems
- Create a unified alert registry across AML monitoring, fraud detection, and sanctions screening
- Build a governance data store — model performance metrics, audit log aggregation, and cross-domain management information
- Implement a program health dashboard for the Risk Committee
This was not a technology project in the sense of deploying new compliance software. It was a data engineering and governance project — building the infrastructure that connected existing systems.
Amir, the CISO, was Maya's most important collaborator. The integration layer required access to data from every compliance-adjacent system — payment processing, core banking, the KYC platform, the monitoring system. Each new access path required a security review. The GDPR implications of aggregating customer data from multiple systems into a unified record required legal review.
Maya had underestimated both. The security and privacy work added two months to the project timeline.
The Technical Build
Verdant's data engineering team — two engineers, working part-time on the compliance integration project — built the integration layer over six months. The core components:
Customer Master Record: A canonical customer data store maintained by the KYC system as the master, with APIs that allowed the monitoring and case management systems to both read from it and write updates back to it. When the AML case management system escalated a customer to Very High risk, the write-back API updated the KYC master, which in turn propagated to all subscribing systems.
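The write-back propagation described above is essentially a publish/subscribe pattern: downstream systems register interest in the master record, and every update fans out to them. This is a minimal sketch, not Verdant's actual implementation; all class and method names (`CustomerMaster`, `write_back`, `subscribe`) are hypothetical:

```python
class CustomerMaster:
    """Canonical customer store; downstream systems subscribe to updates."""

    def __init__(self):
        self._records = {}       # customer_id -> record dict
        self._subscribers = []   # callbacks invoked on every change

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def upsert(self, customer_id, risk):
        self._records[customer_id] = {"risk": risk}
        self._notify(customer_id)

    def write_back(self, customer_id, new_risk, source_system):
        """Called by e.g. the AML case system after an escalation."""
        record = self._records[customer_id]
        record["risk"] = new_risk
        record["last_updated_by"] = source_system
        self._notify(customer_id)

    def _notify(self, customer_id):
        # Propagate the change to every subscribing system.
        for callback in self._subscribers:
            callback(customer_id, dict(self._records[customer_id]))


# Usage: the monitoring system keeps a local view in sync with the master.
monitoring_view = {}
master = CustomerMaster()
master.subscribe(lambda cid, rec: monitoring_view.update({cid: rec["risk"]}))

master.upsert("C-1001", "High")
master.write_back("C-1001", "Very High", source_system="aml_case_mgmt")
print(monitoring_view["C-1001"])  # -> Very High
```

The key design property is that there is exactly one writer of record (the KYC master) and any number of readers, which is what prevents the classification drift described later in the case.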
Unified Alert Registry: A new data layer that received structured alert notifications from the AML monitoring system, the fraud detection system, and the sanctions screening system. Each alert was stored with a common customer identifier, linked to the relevant transactions, and visible to authorized users across all three domains. The Alert Registry surfaced a cross-domain flag when a customer had concurrent open alerts in more than one domain.
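The cross-domain flag reduces to a set-membership test per customer: does this customer have open alerts in more than one domain? A minimal sketch, with hypothetical names (`AlertRegistry`, `ingest`):

```python
from collections import defaultdict

class AlertRegistry:
    """Stores alerts from all domains keyed by a common customer identifier."""

    def __init__(self):
        self._alerts = []                   # every alert, for audit purposes
        self._open = defaultdict(set)       # customer_id -> domains with open alerts

    def ingest(self, customer_id, domain, alert_id):
        self._alerts.append((customer_id, domain, alert_id))
        self._open[customer_id].add(domain)

    def cross_domain(self, customer_id):
        # Flag raised when a customer has concurrent alerts in >1 domain.
        return len(self._open[customer_id]) > 1

    def cross_domain_customers(self):
        return [cid for cid, domains in self._open.items() if len(domains) > 1]


registry = AlertRegistry()
registry.ingest("C-2001", "aml", "A-1")
registry.ingest("C-2001", "fraud", "F-7")
registry.ingest("C-2002", "sanctions", "S-3")
print(registry.cross_domain_customers())  # -> ['C-2001']
```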
Audit Log Aggregator: A read-only aggregator that pulled audit records from each system's native log and stored them in a unified, tamper-evident store accessible for examination response. Queries against the aggregator could now reconstruct the complete decision history of any compliance matter — across all systems — from a single interface.
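One common way to make an aggregated log tamper-evident is a hash chain, where each entry's digest covers the previous entry's digest, so editing any historical record invalidates everything after it. The case does not specify Verdant's mechanism; this is a sketch under that assumption:

```python
import hashlib
import json

class AuditAggregator:
    """Read-only aggregation of audit records into a hash-chained store."""

    GENESIS = "0" * 64

    def __init__(self):
        self._entries = []
        self._prev_hash = self.GENESIS

    def append(self, source_system, record):
        payload = json.dumps({"src": source_system, "rec": record}, sort_keys=True)
        digest = hashlib.sha256((self._prev_hash + payload).encode()).hexdigest()
        self._entries.append({"src": source_system, "rec": record, "hash": digest})
        self._prev_hash = digest

    def verify(self):
        # Recompute the chain; any altered entry breaks it.
        prev = self.GENESIS
        for entry in self._entries:
            payload = json.dumps({"src": entry["src"], "rec": entry["rec"]},
                                 sort_keys=True)
            if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


agg = AuditAggregator()
agg.append("kyc", {"customer": "C-1", "action": "risk_upgrade"})
agg.append("aml", {"customer": "C-1", "action": "case_opened"})
print(agg.verify())                          # -> True
agg._entries[0]["rec"]["action"] = "edited"  # simulate tampering
print(agg.verify())                          # -> False
```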
Program Health Dashboard: A reporting layer built on the governance data store, providing automated calculation of the program's key health indicators — false positive rates, SLA compliance, model drift flags, KYC currency, regulatory reporting on-time rates. Refreshed daily. Accessible to Maya and her direct reports.
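The indicators listed above are, at their core, ratios computed over daily counts pulled from the governance data store. An illustrative calculation with hypothetical inputs (the function name and figures are not from the case, apart from the 21% false positive rate cited earlier):

```python
def health_indicators(alerts_closed, false_positives,
                      cases_total, cases_in_sla,
                      reports_due, reports_on_time):
    """Compute the dashboard's core ratios from daily counts."""
    return {
        "false_positive_rate": false_positives / alerts_closed,
        "sla_compliance": cases_in_sla / cases_total,
        "reporting_on_time": reports_on_time / reports_due,
    }


# Example daily refresh: 210 of 1000 closed alerts were false positives, etc.
kpis = health_indicators(1000, 210, 400, 380, 100, 97)
print(kpis)
```

Model drift flags and KYC currency would be computed similarly from their own source tables; the point is that the dashboard derives everything from the aggregated store rather than asking each system to self-report.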
What the Integration Revealed
As the integration layer went live in July 2025, data that had been invisible in isolated systems became visible.
Finding 1: The customer count discrepancy. The unified customer master revealed 847 customers who were in the transaction monitoring system but not in the KYC system. These were customers who had been onboarded before the KYC platform upgrade in 2022 and had never been migrated. Their monitoring was running against customer records that did not exist in the KYC master. Maya initiated a remediation exercise to migrate all 847 records.
Finding 2: The risk classification drift. 312 customers had different risk classifications in the KYC system and the transaction monitoring system. In 89 cases, the transaction monitoring system was working with an older, lower risk classification — customers who had been upgraded to High risk in the KYC system but whose upgrade had not propagated because there was no API connection. In 14 cases, the discrepancy was the other direction — customers marked High risk in monitoring but downgraded in KYC following a periodic review.
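Findings 1 and 2 are both products of the same reconciliation pass: compare the customer populations of the two systems, then compare classifications for the overlap. A minimal sketch of that comparison (the data is hypothetical):

```python
def reconcile(kyc, monitoring):
    """Compare two {customer_id: risk_classification} maps.

    Returns customers present in monitoring but absent from the KYC master
    (Finding 1), and customers whose classifications disagree (Finding 2).
    """
    missing_from_kyc = sorted(set(monitoring) - set(kyc))
    mismatched = {
        cid: (kyc[cid], monitoring[cid])
        for cid in set(kyc) & set(monitoring)
        if kyc[cid] != monitoring[cid]
    }
    return missing_from_kyc, mismatched


kyc = {"C-1": "High", "C-2": "Medium"}
monitoring = {"C-1": "Medium", "C-2": "Medium", "C-3": "Low"}
print(reconcile(kyc, monitoring))  # -> (['C-3'], {'C-1': ('High', 'Medium')})
```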
Finding 3: The cross-domain correlation. The unified alert registry, in its first month of operation, identified 23 customers with concurrent alerts in more than one domain. Of these, 4 had both AML and fraud alerts — a pattern that should have been visible to both investigation teams but had not been. One of the 4 was escalated to a SAR filing on the strength of the combined picture; neither investigation, in isolation, had sufficient evidence to support a filing.
Finding 4: The model drift discovery. The program health dashboard's model governance module showed the sanctions screening system's Population Stability Index (PSI) at 0.28 — above the 0.25 alert threshold. This drift had been invisible without the aggregated monitoring layer. The sanctions vendor was notified, a recalibration was scheduled, and interim additional review steps were added for borderline matches pending recalibration.
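The Population Stability Index compares a model's score distribution at deployment against its current production distribution, bucket by bucket. A sketch using the 0.25 alert threshold cited in the case (the bucket proportions below are illustrative, not Verdant's data):

```python
import math

def psi(expected, actual):
    """Population Stability Index over two bucketed distributions.

    expected/actual: lists of bucket proportions, each summing to 1.
    PSI = sum over buckets of (actual - expected) * ln(actual / expected).
    """
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))


baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at deployment
current  = [0.05, 0.20, 0.30, 0.45]   # drifted production distribution

score = psi(baseline, current)
print(round(score, 3), score > 0.25)  # -> 0.46 True (drift alert fires)
```

A common rule of thumb treats PSI below 0.10 as stable, 0.10–0.25 as worth watching, and above 0.25 as a significant shift warranting recalibration — which is the threshold the dashboard applied.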
The October 2025 Risk Committee Report
Maya's October 2025 report to the Risk Committee was the one they had been asking for: a unified program view with cross-domain metrics, integrated health score, and clear action items for identified gaps.
The Risk Committee Chair's response — "That's the report I've been asking for" — was the professional milestone that Maya described in her year-end reflection as the moment she felt she had transitioned from being a compliance officer who managed compliance systems to a compliance leader who ran a compliance program.
The distinction was not semantic. A system is something you deploy. A program is something you govern, continuously, with the kind of integrated visibility that lets you see what is working, what is not, and what is about to go wrong before it does.
Discussion Questions
1. The customer count discrepancy — 847 customers in the monitoring system but not in the KYC master — had been invisible before the integration project. What controls should have been in place at the point of KYC platform migration to prevent this gap from forming? How should data migration completeness be verified?
2. The risk classification drift (312 customers with inconsistent classifications) was caused by the absence of an API connection between the KYC system and the monitoring system. Why are API-based data propagation connections preferable to periodic batch synchronization for customer risk data? In what circumstances might batch synchronization be acceptable?
3. The cross-domain correlation finding produced one SAR filing that would not have been filed without the integrated view. Neither the AML case nor the fraud case, individually, had sufficient evidence to support a SAR. Does this raise any concerns about the quality of individual-domain investigations, or is cross-domain correlation an expected and appropriate input into SAR filing decisions?
4. The model drift discovery (PSI score 0.28 for sanctions screening) was made possible by the program health dashboard — it had been invisible before. Who is responsible for model performance monitoring: the sanctions team that uses the model, the compliance technology team that deployed it, or the vendor who built it? How should this responsibility be formally assigned and documented?
5. Maya describes the transition from "compliance officer who managed compliance systems" to "compliance leader who ran a compliance program." This is a meaningful professional distinction. What specific capabilities — technical, organizational, governance-related — are required for the latter role that are not required for the former? How does this textbook's arc support the development of those capabilities?