Case Study 37.2: Maya's Recovery — Rebuilding Trust After a Failed Go-Live
The Situation
Organization: Verdant Bank (fictional UK challenger bank)
Maya Osei's role: Chief Compliance Officer (first year in post)
Challenge: A transaction monitoring system go-live that succeeded technically and failed organizationally — and how Maya recognized the failure, adapted her approach, and rebuilt the trust she had inadvertently damaged
Timeline: Q1–Q3 2024
Background
When Maya Osei joined Verdant Bank as CCO in January 2024, one item was at the top of her inherited to-do list: replace the bank's aging transaction monitoring system. The existing system was a legacy rules-based platform from 2019 that generated a false positive rate of 34% — meaning that more than one in three alerts required significant analyst time before being closed as non-suspicious. The system had not been updated to reflect the latest typologies from the Joint Money Laundering Intelligence Taskforce. The FCA's Dear CEO letter on financial crime controls, issued in October 2023, had highlighted legacy monitoring infrastructure as a supervisory concern.
Maya had clear authority to act, a budget (£840,000 for software, implementation, and training), and a compelling regulatory case. What she did not have — though she did not know it at the time — was an adequate change management plan.
The Go-Live
The technical implementation ran from February to May 2024. The vendor's implementation team was professional. Integration with the bank's transaction data feed went smoothly. User acceptance testing (UAT) was conducted over three weeks in April, with four analysts from the monitoring team participating. The vendor's training team delivered a full-day training session in late May. Go-live was June 3, 2024.
The first week was chaotic.
The new system's AI-driven alert logic was different from the rules-based approach the team had used for four years. Familiar patterns — transaction sizes, counterparty types, timing patterns — produced different alert volumes and priorities than the team expected. Analysts who had developed intuitions about which alert types were usually benign found those intuitions no longer mapped reliably. Alert review times initially increased rather than decreased, as analysts spent additional time trying to understand why the system had flagged specific transactions.
By the end of week two, Maya received a message from her senior analyst — the most experienced person on the monitoring team — that she described as "the hardest professional email I've ever received."
The message was respectful, detailed, and damning. It listed five specific cases where the analyst believed the new system was producing worse outputs than the old one. It described the anxiety the team was experiencing about their ability to meet SAR filing deadlines. And it ended with: "I don't think anyone asked us whether we were ready."
Maya read it three times. And then she did something that, she later said, was the most important compliance decision she made in her first year as CCO. She called a meeting with the full monitoring team and said: "I owe you an apology."
The Pivot
The meeting lasted two hours. Maya listened for most of it.
What emerged was not a picture of a bad system — the performance data, when reviewed carefully, showed that the new system was performing as the vendor had specified. The alert quality was genuinely better on average. But the team's experience was of disorientation, not improvement. They did not yet have the calibration skills to distinguish between alerts that were correctly generated by new logic and alerts that they would previously have resolved quickly. Everything felt uncertain.
Maya identified five specific problems that required immediate action:
Problem 1: Calibration gap. The team did not know when to trust the new system's prioritization. They had spent four years calibrating their judgment to the old system's logic. That calibration was now mismatched.
Action: She arranged three days of intensive calibration sessions with the vendor's technical team, working through the bank's own historical alert data. The goal: for the team to develop an evidence-based understanding of which alert types the new system handled better and where it needed more analyst judgment.
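A minimal sketch of what such a calibration analysis might look like in code, assuming historical alerts can be exported with an alert type and an analyst disposition. The file name and CSV columns here are illustrative assumptions, not Verdant's actual schema:

```python
# Hypothetical sketch of the calibration analysis worked through in the
# vendor sessions. Field names and the data source are assumptions.
from collections import defaultdict
import csv

def escalation_rate_by_alert_type(path: str) -> dict[str, float]:
    """Share of historical alerts of each type that analysts escalated.

    Expects a CSV with columns: alert_id, alert_type, disposition,
    where disposition is 'escalated' or 'closed_non_suspicious'.
    """
    counts = defaultdict(lambda: {"total": 0, "escalated": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            stats = counts[row["alert_type"]]
            stats["total"] += 1
            if row["disposition"] == "escalated":
                stats["escalated"] += 1
    return {
        alert_type: stats["escalated"] / stats["total"]
        for alert_type, stats in counts.items()
        if stats["total"] > 0
    }

if __name__ == "__main__":
    rates = escalation_rate_by_alert_type("historical_alerts.csv")
    # Low-rate alert types are where analyst judgment is still essential;
    # high-rate types are where the new prioritization can be trusted.
    for alert_type, rate in sorted(rates.items(), key=lambda kv: kv[1]):
        print(f"{alert_type:30s} {rate:6.1%}")
```

Ranking alert types by historical escalation rate gives the team an evidence base for calibration rather than asking them to rebuild intuition from scratch.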
Problem 2: Coverage anxiety. The team was worried about missing SARs. The new system's different alert distribution — fewer low-risk alerts, more concentrated high-risk alerts — felt unfamiliar. The team was spending time checking whether the new system was "catching everything" the old system would have caught, which doubled their workload.
Action: Maya arranged a comparison analysis over a 30-day lookback: the same transaction population, processed through both the old and the new system logic (the old system was still accessible during the transition period). The comparison showed that the new system was surfacing SAR-precursor alerts at a rate equal to or higher than the old one. This was not communicated as "the system is better"; it was communicated as "here is the evidence that shows the system is catching what it should."
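A parallel run of this kind reduces to set comparisons over the two systems' alert populations. A minimal sketch, assuming each system can export the transaction IDs it alerted on during the lookback window and that the period's known SAR-precursor transactions are available (all names here are hypothetical):

```python
# Illustrative parallel-run comparison over a shared lookback window.
# The inputs are assumed exports, not any vendor's actual API.

def parallel_run_report(
    old_alerts: set[str],
    new_alerts: set[str],
    sar_precursors: set[str],
):
    """Compare both systems' coverage of known SAR-precursor transactions."""
    return {
        "old_capture_rate": len(sar_precursors & old_alerts) / len(sar_precursors),
        "new_capture_rate": len(sar_precursors & new_alerts) / len(sar_precursors),
        # Alerts the old system raised that the new one does not: the
        # population driving the team's "is it catching everything?" worry.
        "old_only_alerts": len(old_alerts - new_alerts),
        "new_only_alerts": len(new_alerts - old_alerts),
    }

if __name__ == "__main__":
    report = parallel_run_report(
        old_alerts={"tx1", "tx2", "tx3", "tx4"},
        new_alerts={"tx2", "tx3", "tx5"},
        sar_precursors={"tx2", "tx3"},
    )
    for metric, value in report.items():
        print(f"{metric}: {value}")
```

Framing the output as capture rates against known SAR precursors answers the team's actual question (coverage), which is why the evidence landed better than a generic "the system is better" claim.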
Problem 3: No channel for concerns. During the first two weeks, analysts who had questions about specific alerts did not know where to direct them. The implementation team had gone; the vendor's day-to-day support was a ticket system, not a person. There was no internal escalation path designed for the transition period.
Action: Maya designated a "super-user" — the senior analyst who had written the original email — as the first-line resource for transition-period questions. She gave the super-user two hours per day protected from alert review to answer team questions and document answers for the FAQ. The designation was not just practical — it was a signal that the team's most experienced member was a trusted guide rather than a problem.
Problem 4: The documentation problem. The monitoring team's procedure manual still described the old system. Analysts doing unfamiliar tasks were referring to documentation that told them how to do those tasks in a system that no longer existed.
Action: A documentation sprint. The super-user, with support from the vendor's customer success team, rewrote the top ten procedure steps over ten days. The updated documentation was shared immediately; the full manual was updated within six weeks.
Problem 5: Maya's visibility. The team did not feel seen. The implementation had been managed at a distance. The announcement had come by email. Maya had not spent time with the team during the transition.
Action: Maya began attending the weekly monitoring team meeting personally. She sat in on alert review sessions twice in the first two weeks — not to supervise, but to understand what the team was experiencing. She did not present conclusions; she asked questions. The team's perception of the change shifted noticeably within two weeks of this change in her behavior.
The Outcomes
By the end of Q3 2024, three months after the troubled go-live, the situation had stabilized. False positive rate: reduced to 18% (from 34%). SAR filing time: reduced by 22%. Alert review capacity: increased by 31% with no change in team headcount. The system was performing as the vendor had projected, and the team now believed it.
The senior analyst who had sent the difficult email was, by October, leading the team's calibration review sessions and had been designated as the bank's primary point of contact for the vendor's user advisory group.
Maya presented the outcomes to Verdant's Risk Committee in November 2024. She was asked what she would do differently. Her answer — which she included verbatim in the bank's lessons-learned documentation — was a four-sentence change management checklist that every subsequent compliance technology project at Verdant used as its starting framework:
"First, consult before you select. Second, name the losses, not just the gains. Third, plan two training events — one before, one after go-live. Fourth, be present in the room where the work happens."
Discussion Questions
1. Maya sent a company-wide announcement about the system replacement before engaging the monitoring team directly. She later described this as her first significant mistake. What is the correct sequence for communicating a major compliance system change — and why does sequence matter?
2. The monitoring team's anxiety was driven by calibration uncertainty: they did not know when to trust the new system's outputs. This is a specific form of the "Ability" gap in the ADKAR model. How should a training program be designed to build calibration skills — the ability to recognize when an AI system's output needs additional scrutiny — rather than just operating skills?
3. Maya's decision to acknowledge her mistake directly to the team ("I owe you an apology") is described as the most important compliance decision she made in her first year. In what way is an apology a change management tool? What does it accomplish that conventional communication about improvements does not?
4. The senior analyst who sent the critical email was subsequently designated as a super-user and first-line support resource. Evaluate this decision from a change management perspective. What are the risks of this approach? What conditions make it likely to succeed?
5. Maya began attending monitoring team meetings and alert review sessions personally after the troubled go-live. The chapter argues that "leadership modeling" is a reinforcement mechanism. How should this principle be balanced against the risk that senior leadership presence is experienced as surveillance rather than support? What makes the difference?