Capstone Project 03: Evaluate and Recommend a RegTech Vendor
Practitioner Role: Maya Osei, Chief Compliance Officer, Verdant Bank
Institution: Verdant Bank — UK challenger bank; 180,000 active retail customers; ~£2.8M daily payment volume
Evaluation Subject: Next-generation AML transaction monitoring platform to replace a legacy rules-based system (in place since 2019) with a 34% false positive rate and no ML capability
Budget: £400,000 Year 1 (licence + implementation); up to £220,000/year ongoing (Years 2–3)
Regulatory Context: FCA, NCA. MLR 2017. JMLSG Guidance. FCA Financial Crime Guide. No DORA obligation (no EU operations).
Scenario Context
Maya Osei has been CCO at Verdant for three years. She inherited the legacy transaction monitoring system along with a compliance operations team that has grown accustomed to its particular failure mode: too many alerts, too many false positives, and an alert-to-SAR conversion rate of 0.8% — meaning that for every 125 alerts generated, one results in a SAR filing. The other 124 are analyst time consumed in the service of nothing.
The problem is not that the system works poorly. It is that the system works exactly as designed for 2019, and it is now 2026. Verdant's customer base has grown from 40,000 to 180,000. Daily payment volume has more than tripled. The original rules were calibrated for a narrower customer profile. Three rounds of threshold adjustment have produced diminishing returns.
Maya presented to the Board six months ago with a clear diagnosis: the legacy system is a capacity constraint on growth. Every new product launch increases alert volume without improving detection quality. The ML-capable platforms now used by her peer CCOs at comparable challenger banks demonstrate alert-to-SAR conversion rates of 4–8% — five to ten times what Verdant achieves. The Board approved the procurement. Maya has assembled a three-person evaluation team and opened the formal process.
Part A: Requirements Definition
A.1 Requirements Document
Functional Requirements
| ID | Requirement | Priority |
|---|---|---|
| FR-01 | Monitor 100% of transactions in real time (payment initiation to settlement) | Must-Have |
| FR-02 | Generate alerts based on configurable rule-based scenarios (minimum 20 pre-built UK AML scenarios) | Must-Have |
| FR-03 | Apply ML-based behavioural anomaly detection to supplement rule-based alerts | Must-Have |
| FR-04 | Support customer risk segmentation as a monitoring input (different thresholds by risk tier) | Must-Have |
| FR-05 | Provide case management functionality: alert queue, investigation workspace, case notes, audit trail | Must-Have |
| FR-06 | Support SAR workflow: internal referral, MLRO review, SAR drafting, submission log | Must-Have |
| FR-07 | Enable MLRO and compliance analyst access with role-based permissions | Must-Have |
| FR-08 | Produce management information: alert volumes, SAR rates, analyst productivity, detection rates | Must-Have |
| FR-09 | Support periodic customer risk review workflow (annual/more frequent by tier) | Should-Have |
| FR-10 | Provide watchlist screening integration or embedded sanctions/PEP screening | Should-Have |
| FR-11 | Support network analysis: identify connections between customers, counterparties, devices | Should-Have |
| FR-12 | Enable typology tuning: compliance team can adjust thresholds and create new rules without vendor assistance | Should-Have |
| FR-13 | Provide an explainability layer for ML alerts (why was this transaction flagged?) | Must-Have |
| FR-14 | Support bulk historical data analysis for retrospective transaction review | Nice-to-Have |
| FR-15 | Provide natural language report drafting assistance for SARs | Nice-to-Have |
Non-Functional Requirements
| ID | Requirement | Specification | Priority |
|---|---|---|---|
| NFR-01 | Transaction processing throughput | Minimum 500 transactions per second sustained; 2,000 TPS burst (for peak payment periods) | Must-Have |
| NFR-02 | Alert generation latency | < 500ms from transaction event to alert generation (real-time monitoring) | Must-Have |
| NFR-03 | Platform availability | 99.9% uptime SLA (< 8.8 hours downtime per year); scheduled maintenance windows pre-approved | Must-Have |
| NFR-04 | Data residency | All customer data processed and stored within UK or EEA; no transfer to jurisdictions without adequacy decision | Must-Have |
| NFR-05 | Disaster recovery | RPO < 4 hours; RTO < 8 hours | Must-Have |
| NFR-06 | Security certification | ISO 27001 or SOC 2 Type II; annual penetration test results available on request | Must-Have |
| NFR-07 | API performance | REST API < 200ms P99 latency for standard queries | Should-Have |
| NFR-08 | Concurrent users | Support minimum 15 concurrent analyst sessions without performance degradation | Should-Have |
Regulatory Requirements
| ID | Requirement | Regulatory Source | Priority |
|---|---|---|---|
| RR-01 | Support a documented, risk-based approach to transaction monitoring threshold-setting | MLR 2017 Reg. 18 (risk assessment); JMLSG Part I Ch. 6 | Must-Have |
| RR-02 | Retain all alert, case, and SAR records for minimum 5 years | MLR 2017 Reg. 40 | Must-Have |
| RR-03 | Provide complete audit trail of all case actions, threshold changes, and system decisions | FCA Financial Crime Guide 3.2; MLR 2017 | Must-Have |
| RR-04 | Support production of records in response to regulatory and law enforcement requests | POCA 2002; TA 2000; FCA FSMA powers | Must-Have |
| RR-05 | Enable MLRO to demonstrate ownership of the monitoring programme (FCA Senior Managers Regime) | FCA SYSC 6.3 (SM&CR) | Must-Have |
| RR-06 | System configuration and threshold changes must require dual approval (maker/checker) | FCA Financial Crime Guide; internal governance | Must-Have |
| RR-07 | Vendor must be able to explain the ML model to a Skilled Person (s166 reviewer) | FCA Supervisory approach to AI/ML in financial crime | Must-Have |
| RR-08 | Support production of FCA DQAP (Data Quality Assurance Programme) evidence | FCA financial crime supervision framework | Should-Have |
Integration Requirements
| ID | Requirement | Priority |
|---|---|---|
| IR-01 | REST API integration with Verdant's core banking system (Thought Machine Vault) for real-time transaction event ingestion | Must-Have |
| IR-02 | Integration with existing case management system or replacement of it (must not result in two separate case queues) | Must-Have |
| IR-03 | Export capability for MI data to Verdant's BI platform (Tableau) | Should-Have |
| IR-04 | Integration with existing watchlist screening vendor (ComplyAdvantage) for consolidated alert enrichment | Should-Have |
| IR-05 | SSO integration with Verdant's identity provider (Azure AD) | Should-Have |
| IR-06 | NCA SAR Online direct submission capability or structured export | Nice-to-Have |
Operational Requirements
| ID | Requirement | Priority |
|---|---|---|
| OR-01 | The monitoring team (4 compliance analysts + 1 MLRO) must be able to operate the platform without dedicated vendor staff on-site after go-live | Must-Have |
| OR-02 | Vendor must provide UK business hours support (8am–6pm Mon-Fri) with a named customer success manager | Must-Have |
| OR-03 | Platform upgrades must be tested in a staging environment before production deployment | Must-Have |
| OR-04 | Training: vendor must provide initial training for all 5 users and documentation sufficient for self-service onboarding of future staff | Must-Have |
| OR-05 | Vendor must provide a roadmap showing planned feature development for the next 18 months | Should-Have |
Part B: Evaluation Methodology
B.1 Evaluation Criteria and Weights
| Criterion | Weight | Rationale |
|---|---|---|
| Regulatory compliance and explainability | 25% | Must-Have regulatory requirements are non-negotiable; ML explainability is a specific FCA concern for this vendor category |
| Detection quality (false positive rate, ML capability) | 20% | The primary driver of the procurement: reducing the 34% false positive rate |
| Fit to Verdant's integration architecture | 15% | Core banking (Thought Machine) integration is technically complex; integration risk is the most common cause of implementation overrun |
| Total cost of ownership (3-year) | 15% | Budget is constrained; Year 2–3 ongoing cost matters as much as Year 1 |
| Implementation timeline and risk | 10% | Verdant's Board has set a go-live target; implementation delay carries compliance risk (continuing to operate the legacy system) |
| Vendor stability and UK market presence | 10% | Concentration risk; FCA familiarity; reference client accessibility |
| Usability and analyst experience | 5% | Adoption risk; analyst productivity is a key operational benefit |
| Total | 100% | |
B.2 Proof of Concept Design
Objective: Test each vendor's platform against Verdant's actual transaction data (anonymised and sampled) to produce comparable detection quality metrics.
Duration: Four weeks (two weeks vendor setup and data ingestion; two weeks evaluation period).
Data: A 90-day sample of anonymised transaction data (approximately 12M transactions) extracted from Verdant's core banking system. The sample includes:
- Three confirmed typologies from the past 12 months (which generated filed SARs) — these are the "known positives"
- A randomly sampled control population of clean transactions

The sample data is held by Verdant's IT team; vendors receive an anonymised extract under NDA (a sketch of the extract assembly follows).
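A minimal sketch of how the extract might be assembled; the `sar_case_id` field and record layout are hypothetical, since Verdant's actual extract schema is not specified here:

```python
import random

def build_poc_extract(transactions, known_positive_case_ids,
                      control_size=100_000, seed=42):
    """Assemble the anonymised PoC extract: every transaction linked to the
    three confirmed SAR typologies (the known positives), plus a randomly
    sampled control population of clean transactions."""
    rng = random.Random(seed)  # fixed seed: every vendor gets the identical extract
    known = [t for t in transactions if t.get("sar_case_id") in known_positive_case_ids]
    clean = [t for t in transactions if t.get("sar_case_id") is None]
    control = rng.sample(clean, k=min(control_size, len(clean)))
    return known + control
```

Fixing the random seed matters: each vendor must run against the identical sample for the detection-quality metrics in POC-01 and POC-02 to be comparable.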
PoC Scenarios (each vendor runs the same scenarios):
| Scenario | Description | Metric Captured |
|---|---|---|
| POC-01 | Historical data replay — run the 90-day sample through the vendor's platform | Alert volume; false positive rate; detection of known positives |
| POC-02 | Known typology detection — do the three confirmed SAR typologies generate alerts? | True positive rate on known cases |
| POC-03 | Threshold adjustment — compliance team adjusts three thresholds without vendor assistance | Usability; self-service configurability |
| POC-04 | ML alert explainability — for 10 ML-generated alerts, can the system explain why the transaction was flagged? | Explainability quality; regulatory defensibility |
| POC-05 | Integration test — connect to a sandboxed version of Verdant's core banking API | Integration complexity; time to data ingestion |
| POC-06 | Report generation — generate a one-month MI report including alert volumes, case outcomes, analyst activity | MI quality; relevance to MLRO reporting needs |
PoC scoring: Each scenario is scored by the evaluation team on a 1–5 scale. Scenario scores are weighted by the criterion they primarily address. The evaluation team includes Maya (CCO), a compliance analyst (daily platform user perspective), and the IT integration lead (integration and security assessment).
B.3 Reference Call Process
Who to call: For each vendor, Maya will speak to:
- The CCO or Head of Financial Crime at two UK reference clients (comparable size: 100,000–500,000 customers; payment institution or challenger bank)
- One implementation lead who managed the vendor's onboarding project
What to ask (reference call guide):
To the CCO / Head of Financial Crime:
- What was your false positive rate before and after implementation?
- How did the vendor respond to your first FCA interaction post-implementation?
- Can your team adjust thresholds and create new rules without vendor assistance? How long does it take?
- How has the vendor responded to ML model questions from your compliance team or external reviewers?
- What would you do differently if you were procuring again?
- Would you recommend this vendor to a peer at a comparable institution?
To the implementation lead:
- Did the implementation complete on time and on budget? If not, what caused the delay?
- What were the two hardest integration challenges?
- How responsive was the vendor's implementation team to issues during go-live?
B.4 Regulatory Due Diligence on Each Vendor
| Due Diligence Item | Method |
|---|---|
| Verify FCA registration status (data processor / relevant third party) | FCA Register search |
| Review vendor's own AML / financial crime governance (do they screen their customers?) | Request vendor's AML policy summary |
| Review security certifications (ISO 27001 / SOC 2 Type II) | Request certificates; check issue date and scope |
| Assess data residency — where is data processed and stored? | Contractual confirmation; GDPR Article 46 mechanism if non-EEA |
| Review vendor's approach to ML model governance — who owns the model? Can it be changed? | Vendor briefing; review model governance documentation |
| Assess vendor's approach to regulatory change (how quickly do they update for new guidance?) | Reference calls; review recent regulatory change log |
| Review vendor's financial stability | Latest filed accounts; funding status; key customer concentration |
| Assess sub-processor chain (who does the vendor use to deliver the service?) | Data processing agreement; sub-processor list |
Part C: Vendor Assessment
C.1 Scoring Methodology
Each criterion is scored 1–5 for each vendor. The weighted score = raw score × criterion weight. Total weighted score is the sum of all weighted scores (maximum 5.00).
C.2 Completed Scorecard
Scoring key: 1 = Poor / does not meet requirement; 2 = Below standard; 3 = Meets requirement; 4 = Exceeds requirement; 5 = Outstanding
| Criterion | Weight | Sentinel AI | AMLPro Enterprise | ComplianceCore |
|---|---|---|---|---|
| Regulatory compliance and explainability | 25% | 4 | 5 | 3 |
| Detection quality (FP rate, ML capability) | 20% | 5 | 4 | 5 |
| Fit to Verdant's integration architecture | 15% | 4 | 4 | 3 |
| Total cost of ownership (3-year) | 15% | 3 | 3 | 5 |
| Implementation timeline and risk | 10% | 4 | 2 | 3 |
| Vendor stability and UK market presence | 10% | 4 | 5 | 2 |
| Usability and analyst experience | 5% | 4 | 3 | 4 |
| Weighted Total | 100% | 4.05 | 3.95 | 3.65 |

Weighted score calculation:
- Sentinel AI: (4×0.25) + (5×0.20) + (4×0.15) + (3×0.15) + (4×0.10) + (4×0.10) + (4×0.05) = 1.00 + 1.00 + 0.60 + 0.45 + 0.40 + 0.40 + 0.20 = 4.05
- AMLPro Enterprise: (5×0.25) + (4×0.20) + (4×0.15) + (3×0.15) + (2×0.10) + (5×0.10) + (3×0.05) = 1.25 + 0.80 + 0.60 + 0.45 + 0.20 + 0.50 + 0.15 = 3.95
- ComplianceCore: (3×0.25) + (5×0.20) + (3×0.15) + (5×0.15) + (3×0.10) + (2×0.10) + (4×0.05) = 0.75 + 1.00 + 0.45 + 0.75 + 0.30 + 0.20 + 0.20 = 3.65

(The scorecard totals are exact sums of the weighted scores; the resulting ranking is the basis for the recommendation in Part D.)
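As a check on the arithmetic, the weighted totals can be reproduced in a few lines; a minimal sketch in Python, with criteria in scorecard order:

```python
# Criterion weights from B.1, in scorecard order.
WEIGHTS = [0.25, 0.20, 0.15, 0.15, 0.10, 0.10, 0.05]

# Raw 1-5 scores from the completed scorecard in C.2.
SCORES = {
    "Sentinel AI":       [4, 5, 4, 3, 4, 4, 4],
    "AMLPro Enterprise": [5, 4, 4, 3, 2, 5, 3],
    "ComplianceCore":    [3, 5, 3, 5, 3, 2, 4],
}

for vendor, raw in SCORES.items():
    weighted_total = sum(score * weight for score, weight in zip(raw, WEIGHTS))
    print(f"{vendor}: {weighted_total:.2f}")
# Sentinel AI: 4.05 / AMLPro Enterprise: 3.95 / ComplianceCore: 3.65
```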
C.3 Strengths and Weaknesses
Sentinel AI
Strengths:
1. ML capability is the strongest of the three vendors, demonstrated by the lowest false positive rates in PoC testing (projected FP rate of 14–18% vs. Verdant's current 34%, based on the historical data replay)
2. UK challenger bank reference base (40+ clients) provides direct peer references and an FCA-familiar implementation team; faster regulatory validation
3. Implementation timeline (3–4 months) is the shortest of the three vendors, reducing the period of continued reliance on the legacy system
Weaknesses:
1. Configurable rule library is less extensive than AMLPro; UK-specific AML typologies require more customisation effort, increasing implementation complexity
2. Year 1 cost is mid-range but not the cheapest; budget allocation will be tight
3. US-headquartered: data residency contractual confirmation required; senior regulatory contacts are US-based, which may create friction in FCA supervisory interactions
AMLPro Enterprise
Strengths:
1. Best regulatory compliance and explainability score — documentation is designed for FCA Skilled Person review; compliance team can explain every alert without vendor assistance
2. Strongest UK market presence and FCA familiarity; used by 15 UK firms including two that have undergone s166 reviews; vendor has direct FCA relationships
3. Rule library is the most comprehensive and configurable; compliance team can self-serve threshold changes and new rule creation with minimal vendor involvement
Weaknesses:
1. Implementation timeline (5–7 months) is the longest; Verdant's Board target is Q3 of the current year, making this timeline borderline
2. ML capability is less sophisticated than Sentinel AI or ComplianceCore; the projected FP rate improvement is lower (22–26%, vs. Verdant's 34%)
3. Pricing is mid-range on licence but implementation costs are the highest of the three vendors due to the longer implementation engagement
ComplianceCore
Strengths:
1. Lowest total 3-year cost of ownership; Year 1 implementation within budget with meaningful headroom
2. ML capability is comparable to Sentinel AI; strong detection quality in PoC
3. Modern cloud-native architecture with strong API-first design; integration with Thought Machine Vault is likely to be technically smooth
Weaknesses:
1. No UK reference clients — the regulatory validation risk is material; the FCA has not yet reviewed ComplianceCore's ML approach in a UK supervisory context
2. EU (Dublin) headquarters creates DORA alignment but also some divergence from UK-specific regulatory requirements; vendor's sales team is less familiar with MLR 2017 and JMLSG nuances than its EU equivalents
3. Lowest vendor stability score — funding round completed 18 months ago; no profitability data publicly available; concentration risk if the firm is acquired or pivots
C.4 Risk Assessment
| Risk Category | Sentinel AI | AMLPro Enterprise | ComplianceCore |
|---|---|---|---|
| Implementation risk | Low-Medium: 3–4 month timeline is achievable; UK delivery team experienced; risk is rule library customisation effort | High: 5–7 month timeline threatens Board target; implementation projects have historically run to the upper end of the range; risk of going live on legacy system through Q4 | Medium: 4–5 month timeline; integration team is technically strong but UK regulatory configuration is less well-tested |
| Concentration risk | Medium: 40+ UK clients, but US parent; if Sentinel AI is acquired, pricing or roadmap could change materially | Low: 15 UK clients across diverse firm types; profitable UK business; concentration risk is manageable | High: Dublin HQ; no UK client base; single point of dependency for UK regulatory support; acquisition or exit risk is elevated for an early-stage vendor |
| Regulatory risk | Low-Medium: FCA has reviewed Sentinel AI's approach at multiple UK reference clients; explainability documentation is available but written for a US regulatory audience and may need adaptation | Low: Lowest regulatory risk of the three; AMLPro's documentation is already in FCA-familiar format; ML model is less complex and therefore more readily explainable | High: No UK regulatory validation; if FCA Skilled Person reviewer questions the ML model, Verdant cannot point to precedent; ComplianceCore may not have a ready answer for UK-specific questions |
| Financial risk | Low: Sentinel AI is well-funded with a large UK client base generating recurring revenue | Low: AMLPro is an established UK business; financial stability is the strongest of the three vendors | Medium-High: Limited public financial data; Series B funding 18 months ago; burn rate and runway unknown; if the vendor runs out of capital, Verdant faces an unplanned replacement procurement |
C.5 Deal-Breaker Issues
- Sentinel AI: No absolute deal-breakers. Conditional: data residency must be confirmed as UK/EEA by contract before signing; if data is processed in the US, this creates a GDPR adequacy risk that may not be acceptable.
- AMLPro Enterprise: No absolute deal-breakers. Conditional: if the implementation timeline cannot be committed at 5 months maximum (with financial penalty provisions), the Board target cannot be met and the procurement timeline may need to be revised.
- ComplianceCore: The absence of UK reference clients is a serious concern but not an absolute deal-breaker if the vendor can provide: (1) a contractual commitment to an FCA-supervised ML explainability review within 12 months of go-live; and (2) a step-in rights provision allowing Verdant to migrate data if the vendor becomes insolvent. Absent both, ComplianceCore presents unacceptable regulatory and concentration risk.
Part D: Contract Negotiation Priorities
Recommended vendor: Sentinel AI (highest weighted score; strongest detection quality; acceptable regulatory risk; achievable timeline)
D.1 Priority Contractual Provisions
Provision 1: Data Residency and Sub-Processor Restriction
Opening position: All customer data must be processed and stored within the UK or EEA at all times. No transfer to US servers, even for analytical or model training purposes. Sub-processors must be disclosed and subject to the same geographic restriction.
Expected vendor resistance: Sentinel AI's ML model training infrastructure is partially US-based; the vendor will push back on a categorical UK/EEA restriction, arguing that model training uses anonymised/aggregated data.
Minimum acceptable outcome: Production data (identified customer records, transaction records) must remain in UK/EEA. Anonymised aggregated data used for model training may be processed globally, with contractual confirmation that no identified data leaves the UK/EEA environment. Sub-processor list must be provided and updated with 30 days' notice of changes.
Provision 2: ML Model Explainability and Audit Rights
Opening position: Verdant requires the right to receive a complete explanation of the ML model's logic — including feature weights, training data provenance, and decision boundary documentation — on request, and within 5 business days of an FCA enquiry. Verdant also requires the right to commission an independent technical audit of the model at its own cost, with vendor cooperation.
Expected vendor resistance: Sentinel AI will argue that the model is proprietary IP and that disclosure of feature weights risks commercial sensitivity. They will offer a "black box attestation" (we confirm the model works) rather than model transparency.
Minimum acceptable outcome: Vendor provides a "model card" document describing the model's purpose, training data characteristics, key input features (without full technical disclosure of weights), known limitations, and performance metrics. Vendor agrees to cooperate with any FCA Skilled Person review of the model, including answering technical questions directly. Verdant retains the right to commission an independent audit with appropriate confidentiality protections.
Provision 3: False Positive Rate SLA
Opening position: The vendor's sales process has represented a projected false positive rate of 14–18% based on PoC. Verdant requires this to be a contractual SLA, measured quarterly on a rolling 90-day basis. If the FP rate exceeds 25% in any quarter, Verdant has the right to an emergency remediation plan; if it exceeds 34% (the legacy system's current rate) for two consecutive quarters, Verdant has the right to terminate without penalty.
Expected vendor resistance: Vendors typically resist FP rate SLAs on the grounds that FP rates depend on customer behaviour that the vendor cannot control. Sentinel AI will offer best-efforts language rather than a hard SLA.
Minimum acceptable outcome: A "performance commitment" that the vendor will work collaboratively to maintain a FP rate below 25%, with a defined quarterly review and remediation process. If the FP rate exceeds 34% for two consecutive quarters, Verdant has the right to terminate with 90 days' notice (rather than the standard notice period in the contract).
Provision 4: Implementation Milestone Payments with Penalties
Opening position: Implementation fees (estimated £80,000–£100,000) are paid against milestones: 20% on contract signing, 30% on successful core banking integration (data flowing into the platform), 30% on go-live (first production alert generated), 20% on completion of training and sign-off by Maya. If go-live slips beyond Week 20 (from contract signing), the vendor pays a penalty of £5,000 per week of delay up to a cap of £30,000.
Expected vendor resistance: Vendors prefer milestone-based payments but resist financial penalties for delay. Sentinel AI will argue that delays are often caused by the client (integration issues, data access). They will offer liquidated damages only if the delay is demonstrably the vendor's fault.
Minimum acceptable outcome: Milestone payment structure as above. Delay penalties of £3,000 per week for delays that are attributable primarily to the vendor (assessed by a joint programme board). A clear definition of what constitutes "vendor delay" vs. "client delay" in a schedule to the contract.
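A short sketch of the commercial mechanics, assuming a £95,000 implementation fee (the midpoint of the estimate) for illustration:

```python
IMPLEMENTATION_FEE = 95_000  # illustrative midpoint of the £80,000-£100,000 estimate

MILESTONE_SPLIT = {
    "contract signing": 0.20,
    "core banking integration": 0.30,
    "go-live": 0.30,
    "training and sign-off": 0.20,
}

def delay_penalty(weeks_beyond_week_20, rate=5_000, cap=30_000):
    """Opening position: £5,000 per week of go-live delay beyond Week 20,
    capped at £30,000. The negotiated fallback uses rate=3_000 and applies
    only to vendor-attributable delay."""
    return min(max(weeks_beyond_week_20, 0) * rate, cap)

print({m: IMPLEMENTATION_FEE * share for m, share in MILESTONE_SPLIT.items()})
print(delay_penalty(4))  # 20000
print(delay_penalty(8))  # 30000 (cap reached at week 6)
```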
Provision 5: Termination and Data Return
Opening position: Verdant requires the right to terminate for convenience with 90 days' notice (not 12 months' notice, which is a common vendor default). On termination, the vendor must return all Verdant data within 30 days in a portable format (CSV or JSON), and certify deletion from all vendor systems within 60 days. The vendor must cooperate with Verdant's transition to a replacement platform, including supporting API access during the 90-day notice period.
Expected vendor resistance: Vendors push for 12-month notice periods on long-term contracts. They may resist a formal certification of deletion (which creates liability if they miss a backup copy).
Minimum acceptable outcome: 6-month notice period for termination for convenience (not 12 months). Data return in portable format within 30 days. Deletion certification within 90 days. Vendor cooperation during transition period (reasonable efforts, not absolute obligation). These provisions are non-negotiable for Verdant — without them, the regulatory obligation to maintain control of customer data under GDPR and MLR 2017 cannot be met.
D.2 Provisions Accepted as Standard
- Standard limitation of liability (capped at 12 months' fees for direct losses)
- Vendor's standard change request process for system customisations
- Annual price escalation capped at CPI (standard for SaaS contracts)
- Vendor's standard acceptable use policy
- Mutual NDA on standard terms
Part E: Three-Year Business Case
E.1 Total Cost of Ownership
| Cost Category | Year 1 | Year 2 | Year 3 | 3-Year Total |
|---|---|---|---|---|
| Software licence | £180,000 | £180,000 | £185,400 | £545,400 |
| Implementation (one-off) | £95,000 | — | — | £95,000 |
| Integration (internal IT time) | £25,000 | — | — | £25,000 |
| Training and change management | £15,000 | £5,000 | £5,000 | £25,000 |
| Ongoing support and CSM | — | £35,000 | £35,000 | £70,000 |
| Internal compliance team time (transition) | £20,000 | — | — | £20,000 |
| Total | £335,000 | £220,000 | £225,400 | £780,400 |
Year 1 total (£335,000) is within the £400,000 budget, leaving £65,000 of contingency headroom. Year 2 ongoing cost (£220,000) is exactly at the £220,000/year ongoing budget; Year 3 (£225,400) is slightly above it due to CPI escalation, so a fixed Year 2–3 price should be negotiated at contract signing.
E.2 Quantifiable Benefits
Benefit 1: Compliance Analyst Time Savings from False Positive Reduction
Current state: 34% false positive rate on approximately 3,200 alerts per month = 1,088 false positive cases reviewed per month. Each false positive case takes an average of 25 minutes to review and close. Monthly analyst time on false positives: 1,088 × 25 minutes = 27,200 minutes = 453 hours.
Future state: At a projected 16% false positive rate, false positive cases per month = approximately 512 (reduction of 576 cases). Monthly time saved: 576 × 25 minutes = 240 hours. Annual time saved: 2,880 hours.
At a fully loaded compliance analyst cost of £45,000/year (approximately £21.63/hour), annual saving = 2,880 hours × £21.63 = £62,294/year.
Benefit 2: Improved Detection — Avoided Regulatory Fines
Current state: Alert-to-SAR conversion rate of 0.8% at 3,200 monthly alerts = approximately 25.6 SARs filed per month. Peer benchmark for ML-enabled platforms is a 4–8% conversion rate. If true suspicious activity is being missed at the current detection rate, Verdant faces regulatory risk.
Quantification approach: The FCA's published financial crime enforcement fines for failures in transaction monitoring range from £5M to £50M+ for large institutions. For a challenger bank of Verdant's size, the relevant comparable enforcement actions suggest a median fine of £2.5M–£7.5M. Expected value of avoided fine (assuming a 5% annual probability of enforcement action under the legacy system, reducing to 1% under the new platform): Annual expected value of avoided fine = (5% − 1%) × £5M midpoint = £200,000/year.
Note: This benefit is inherently uncertain and conservative. It is included to reflect the risk-adjusted cost of inaction, not as a projected cash inflow.
Benefit 3: Headcount Efficiency — Deferred Analyst Hire
Verdant's compliance operations team is currently running at capacity. Without the FP rate improvement, the growth trajectory (projected customer base of 250,000 by Year 3) would require hiring an additional compliance analyst in Year 2 at a fully loaded cost of £65,000/year.
With the FP rate improvement, the existing team can absorb the increased transaction volume. Year 2 hire is deferred; Year 3 hire may still be needed but is delayed. Saving: £65,000 in Year 2; £32,500 in Year 3 (half-year deferral) = £97,500 over 3 years.
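The three benefit streams reduce to a short worked calculation. A sketch of the arithmetic above; the 2,080-hour working year is an assumption implied by the £21.63/hour figure:

```python
# Benefit 1: analyst time saved by cutting the FP rate from 34% to 16%.
alerts_per_month = 3_200
fp_cases_saved = alerts_per_month * (0.34 - 0.16)      # 576 cases/month
hours_saved_per_year = fp_cases_saved * 25 / 60 * 12   # 2,880 hours/year
hourly_rate = 45_000 / 2_080                           # ~£21.63 fully loaded
benefit_1 = hours_saved_per_year * hourly_rate         # ~£62,308 (text: £62,294,
                                                       # using the rounded rate)

# Benefit 2: risk-adjusted expected value of an avoided enforcement fine.
benefit_2 = (0.05 - 0.01) * 5_000_000                  # £200,000/year

# Benefit 3: deferred analyst hire (£65,000 in Year 2; half-year in Year 3).
benefit_3 = 65_000 + 32_500                            # £97,500 over 3 years

print(round(benefit_1), round(benefit_2), benefit_3)   # 62308 200000 97500
```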
E.3 NPV and Payback Period
| | Year 0 (now) | Year 1 | Year 2 | Year 3 |
|---|---|---|---|---|
| Costs | £0 | £335,000 | £220,000 | £225,400 |
| Benefits | £0 | £31,147* | £359,794** | £359,794 |
| Net cash flow | £0 | (£303,853) | £139,794 | £134,394 |
| Discount factor (8%) | 1.000 | 0.926 | 0.857 | 0.794 |
| PV of net cash flow | £0 | (£281,368) | £119,803 | £106,709 |
| Cumulative PV | £0 | (£281,368) | (£161,565) | (£54,856) |
*Year 1 benefits are partial-year: with go-live at Month 5 and a tuning ramp before the 16% FP rate is reached, half a year of analyst savings is recognised (£62,294 × 6/12 = £31,147). The risk benefit is treated conservatively and recognised from Year 2 onward only.
**Year 2 and Year 3 benefits = £62,294 (analyst savings) + £200,000 (risk benefit, conservative) + £97,500 (headcount deferral: the £65,000 avoided Year 2 hire plus the £32,500 half-year deferral of the Year 3 hire, recognised as a flat annual stream in Years 2 and 3 as a modelling simplification) = £359,794 per year.
NPV (3-year, 8% discount rate): approximately (£54,856) — marginally negative over 3 years at this discount rate and conservative benefit assumptions. The business case strengthens materially in Years 4–5, where ongoing costs are lower and benefits continue to accrue.
Payback period: Cumulative undiscounted net cash flow is still approximately (£29,700) at the end of Year 3. Based on the monthly cash flow profile, and assuming Year 4 continues at roughly the Year 3 run rate, payback occurs at approximately Month 39 (3 years and 3 months from contract signing).
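A minimal sketch of the discounting and payback arithmetic, using the net cash flows from the table; exact 1.08^t discounting differs by a few tens of pounds from the table's three-decimal factors:

```python
net_cash_flows = [0, -303_853, 139_794, 134_394]  # Years 0-3, from E.3

npv = sum(cf / 1.08**t for t, cf in enumerate(net_cash_flows))
print(round(npv))            # -54808 (table: (£54,856) with rounded factors)

# Payback: cumulative undiscounted net is still negative at the end of Year 3.
cumulative_year_3 = sum(net_cash_flows)              # -29,665
monthly_run_rate = 134_394 / 12                      # ~£11,200/month in Year 3
print(36 + -cumulative_year_3 / monthly_run_rate)    # ~38.6 -> Month 39
```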
E.4 Sensitivity Analysis
Base case: FP rate reduces from 34% to 16%; analyst savings as calculated; risk benefit at £200,000/year.
Downside scenario (benefits 25% lower than projected): Annual analyst savings = £46,721; risk benefit = £150,000; headcount deferral = £73,125.
| | Base Case NPV | Downside (−25%) NPV |
|---|---|---|
| 3-year NPV at 8% | (£54,856) | (£210,571) |
| Payback period | Month 39 | Beyond the 3-year horizon |
The downside scenario produces a 3-year NPV of approximately (£210,600), with payback falling well beyond the three-year horizon. This outcome is materially weaker but remains defensible given:
- The risk benefit is a floor, not a ceiling — a single FCA enforcement action would dwarf the cost of the platform
- The business case strengthens significantly in Years 4 and 5 as implementation cost is fully amortised
- The non-financial benefits (analyst morale, regulatory relationship, growth enablement) are not captured in the NPV
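Re-running the same discounting with the downside inputs derived from E.4 (Year 1 partial analyst savings of £23,360; £269,846 of benefits in each of Years 2 and 3):

```python
downside_flows = [0, 23_360 - 335_000, 269_846 - 220_000, 269_846 - 225_400]

npv = sum(cf / 1.08**t for t, cf in enumerate(downside_flows))
print(round(npv))           # -210538 (table: (£210,571) with rounded factors)

print(sum(downside_flows))  # -217348 cumulative at end of Year 3; at a net
                            # run rate of ~£44,000/year thereafter, payback
                            # sits several years beyond the model horizon
```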
Key assumptions:
1. FP rate of 16% is achieved by Month 9 (roughly 3 months post go-live); if it takes 12 months to tune, Year 1 analyst savings are further reduced
2. Fully loaded analyst cost of £45,000/year; if cost is higher, savings are proportionally larger
3. 5% annual probability of enforcement action under legacy system; this is judgmental — a lower probability reduces the risk benefit proportionally
4. No additional revenue generated from faster onboarding, improved customer experience, or growth enablement; these are potential upside benefits not captured here
Part F: Board Recommendation Memo
VERDANT BANK BOARD MEMORANDUM — CONFIDENTIAL
To: Verdant Bank Board of Directors
From: Maya Osei, Chief Compliance Officer
Date: 28 February 2026
Subject: Recommendation — AML Transaction Monitoring Platform Replacement
Recommendation
I recommend that the Board approve the procurement of Sentinel AI's AML transaction monitoring platform at a Year 1 cost of £335,000, within the approved £400,000 budget. I recommend contract execution by 14 March 2026 with an anticipated go-live date of 31 July 2026.
Rationale
Sentinel AI received the highest weighted evaluation score (4.05 of 5.00) across seven criteria. The decisive factors are: the strongest projected reduction in false positive rate (from 34% to 14–18%, based on PoC testing against Verdant's historical transaction data); a 3–4 month implementation timeline that meets the Board's Q3 target; and a UK challenger bank reference base of 40+ clients, giving us direct access to peer references and an FCA-familiar implementation team.
The three-year business case produces a payback period of approximately 39 months, early in Year 4. The NPV is marginally negative at a 3-year horizon under conservative benefit assumptions, but turns positive in Years 4–5. More significantly, the risk-adjusted cost of inaction — continuing to operate a legacy system with a 34% false positive rate as our customer base grows to 250,000 — includes both a £65,000/year headcount cost and a non-trivial probability of regulatory enforcement action that would dwarf the platform cost.
Key Risks of Proceeding
The principal risk is implementation delay. Sentinel AI's implementation range of 3–4 months is at the lower end of the challenger bank norm; it depends on our IT team's availability for the Thought Machine integration, which is the critical path item. A second risk is data residency: Sentinel AI's ML model training infrastructure has a US component, and we must confirm UK/EEA data residency for production data before contract signing. I have made this a contractual pre-condition.
Key Risks of Not Proceeding (Status Quo)
Continuing with the legacy system is not a neutral option. Our alert volume will grow proportionally with our customer base; without the FP rate improvement, we will need to hire an additional compliance analyst in Year 2 (£65,000 fully loaded) simply to maintain current processing capacity. More materially: our alert-to-SAR conversion rate of 0.8% is significantly below the industry benchmark for ML-enabled platforms. If the FCA examines our transaction monitoring programme — as it has indicated it may do following recent supervisory visits to comparable challenger banks — a 0.8% conversion rate will raise questions about detection quality that we will struggle to answer with the current system.
Conditions on Which This Recommendation Depends
- Contractual confirmation, before signing, that all production customer data is processed and stored within the UK or EEA, with no exceptions.
- Sentinel AI's commitment to a maximum 5-month implementation timeline (with financial remedy provisions for vendor-caused delay), confirmed in the contract.
- Board approval of the revised Year 2–3 ongoing budget of £220,000/year (the current budget envelope, which is sufficient for the Sentinel AI ongoing cost but will leave minimal contingency for Year 3 if CPI escalation is not capped at contract).
I am available to present to the Board at the next scheduled meeting or to answer questions in advance.
Maya Osei, CCO
Part G: Change Management Plan Outline
G.1 Stakeholder Analysis
Stakeholder 1: Compliance Operations Team (4 analysts + MLRO)
Primary concerns: The compliance analysts' primary concern is not technology — it is job security and role change. A platform that reduces false positives by half means that the team's workload changes substantially. They may fear that the new platform makes their role redundant, or that they will be expected to handle a higher volume of escalated cases without additional support. The MLRO's concern is different: accountability. The MLRO needs to be confident that the new platform's ML decisions are explainable and that they, as the named individual under the Senior Managers Regime, can defend every threshold setting and monitoring decision to the FCA.
Approach: Early and transparent engagement. The compliance operations team should be involved in the PoC evaluation — their feedback on usability and alert quality carries formal evaluation weight. This creates ownership before the implementation begins. The MLRO should be Verdant's named counterpart to Sentinel AI's implementation team, with authority over all threshold configuration decisions.
Stakeholder 2: IT Integration Team
Primary concerns: The integration with Thought Machine Vault is technically the most complex component of the implementation. The IT team's concern is timeline pressure: the Board's Q3 go-live target creates delivery risk, and the integration work must compete with other IT priorities. A secondary concern is the precedent set by a vendor-led integration — who owns the integration architecture and the ongoing maintenance?
Approach: Dedicated IT resource allocation for the integration work (minimum 0.5 FTE from IT lead for 16 weeks). A clear RACI between Verdant IT and Sentinel AI's implementation team for each integration component. Documentation ownership explicitly assigned to Verdant IT (not Sentinel AI) from day one.
G.2 Communication Approach for the Monitoring Team
The communication programme runs in three phases:
Phase 1 — Why (Weeks 1–2, before contract signing): Maya presents the Board decision and the rationale to the full compliance operations team. The narrative is: this is not about replacing people — it is about replacing noise. The team's skill is in exercising judgment on genuine alerts; the current system is wasting that skill on alerts that should never have been generated. The new platform's purpose is to direct the team's time toward the work that matters.
Phase 2 — What (Weeks 3–8, during implementation): Regular team updates (bi-weekly, 30 minutes) on implementation progress. Analysts are invited to participate in user acceptance testing — their feedback is not optional; it is part of the implementation sign-off criteria. Any concern raised by an analyst about a threshold or scenario configuration is reviewed by the MLRO before go-live.
Phase 3 — How (Weeks 9–12, around go-live): A dedicated "questions welcome" channel. A commitment that no analyst is assessed on productivity metrics during the first 30 days post-go-live. A shared retrospective at Day 60 covering what is working, what is not, and what will be escalated to Sentinel AI.
G.3 Training Design
Phase 1 — Regulatory Foundation (Week 1, pre-platform)
Objective: Ensure all analysts can articulate how the new platform's ML approach satisfies MLR 2017 and JMLSG obligations. This is not training in the platform — it is training in the regulatory expectation that the platform is designed to meet. Delivered by Maya; 3-hour workshop.
Phase 2 — Platform Fundamentals (Weeks 7–8, during implementation)
Objective: Platform operation. Alert review workflow. Case management. Threshold configuration (for senior analysts and MLRO). SAR workflow. Delivered by Sentinel AI's training team in Verdant's office; two full-day sessions. All five users attend; recordings retained for future staff onboarding.
Phase 3 — Advanced Usage and Tuning (Month 3, post go-live)
Objective: Review the first 30 days of live operation. Identify alerts that were false positives and trace why they were generated — this is the first threshold tuning session. MLRO and senior analyst learn to adjust thresholds and create new rules independently. Delivered jointly by Sentinel AI CSM and Maya; half-day session.
G.4 Adoption Metrics — First 90 Days
| Metric | Target | Measurement Method |
|---|---|---|
| Alert handling timeliness | >85% of alerts reviewed and actioned within SLA | Case management system reporting |
| False positive rate (30-day rolling) | <25% by Day 60; <18% by Day 90 | Platform MI report, reviewed by MLRO |
| SAR conversion rate | >2.5% by Day 90 (vs. current 0.8%) | SAR log vs. alert volume |
| Analyst satisfaction score | >7/10 on platform usability and alert quality | Anonymous survey at Day 30 and Day 90 |
| Threshold adjustments completed independently | MLRO completes first self-service threshold adjustment by Day 45 | Implementation sign-off log |
| Outstanding training actions | Zero outstanding training items from Phase 2 assessment | Training completion log |
Part H: Deliverables and Assessment Rubric
Deliverables
- Requirements Document: A complete requirements document covering functional, non-functional, regulatory, integration, and operational requirements, with all requirements prioritised as Must-Have / Should-Have / Nice-to-Have. Minimum 30 requirements across all categories.
- Evaluation Methodology Document: A document describing the evaluation criteria and weights, the PoC design (including scenarios, data, and scoring approach), the reference call guide, and the regulatory due diligence checklist. Sufficient detail for a second evaluator to replicate the methodology.
- Completed Vendor Scorecard: The weighted scorecard for all three vendors, completed with scores, a brief rationale for each score, and the resulting weighted total. Presented in table format.
- Contract Negotiation Priorities Memo: A memo to Verdant's legal team identifying the five priority contractual provisions, with opening position, expected vendor resistance, and minimum acceptable outcome for each. Maximum 1,000 words.
- Three-Year Business Case: A financial model covering total cost of ownership, three quantified benefit streams with calculation detail, NPV at 8% discount rate, payback period, and a sensitivity analysis for a 25% downside scenario. Presented in table format with a brief narrative.
- Board Recommendation Memo: A one-page (maximum 500 words) memo to the Board covering the recommendation, key risks of proceeding, key risks of not proceeding, and the three conditions on which the recommendation depends.
- Change Management Plan Outline: A structured outline covering stakeholder analysis (at least two stakeholders), communication approach, training design (three phases), and adoption metrics for the first 90 days.
Assessment Rubric
| Criterion | 1 — Insufficient | 2 — Developing | 3 — Competent | 4 — Proficient | 5 — Expert |
|---|---|---|---|---|---|
| Requirements Quality | Requirements list is incomplete, generic, or not applicable to Verdant's context; prioritisation absent | Key requirements present but gaps in regulatory, integration, or operational categories; prioritisation inconsistent | All five requirement categories covered; requirements are specific to the UK AML monitoring context; prioritisation applied | Requirements are complete and specific; regulatory requirements are tied to named legislative sources; trade-offs between priorities are explicit | Requirements document could be issued as-is to vendors in an RFP; every requirement is testable; regulatory requirements are traced to specific provisions with page-level specificity |
| Evaluation Rigor | Vendor selection is presented as a preference without structured methodology; no PoC design | Evaluation criteria present but weights not justified; PoC scenarios are generic rather than Verdant-specific | Weighted criteria defined; PoC scenarios are specific to Verdant's data and needs; reference call guide covers key questions | Methodology is end-to-end: criteria, weights, PoC, reference calls, and regulatory DD all present and coherent; scoring is applied consistently | Methodology is publication-ready; criteria weightings are justified with explicit reasoning; PoC design would generate the data needed to differentiate the vendors on the factors that matter most |
| Commercial Judgment | Business case absent or not credible; TCO is one-dimensional (licence cost only) | Business case present; benefits identified but not quantified; TCO includes major cost categories | TCO is complete; three benefit streams quantified with calculation methodology; NPV and payback calculated | Benefit assumptions are explicit and defensible; sensitivity analysis is conducted; non-financial benefits are acknowledged | Business case is investment-grade; assumptions are clearly separated from calculations; the downside scenario is as carefully argued as the base case; the memo conveys genuine commercial judgment, not just arithmetic |
| Communication | Board memo is technical and lengthy; the recommendation is buried; the reader cannot identify the key message in 30 seconds | Board memo has structure but recommendation is hedged; risks are listed without prioritisation; tone is not Board-appropriate | Board memo leads with the recommendation; key risks are clearly stated; tone is appropriate for a Board audience | Board memo is crisp and confident; each paragraph advances the argument; the reader knows what they are being asked to approve | Board memo is a model of executive communication: the recommendation is unambiguous, the risks are candid, and the conditions are specific enough to be acted on immediately |
| Integration of Course Concepts | Answer draws on only one or two course themes; does not demonstrate awareness of the trade-offs between vendor selection, regulatory compliance, and change management | Answer uses course concepts but does not integrate them; regulatory requirements and commercial judgment are treated as separate exercises | Answer integrates vendor selection, regulatory compliance, and change management into a coherent recommendation; no major conceptual gaps | The integration is explicit: the change management plan reflects the regulatory requirements; the contract priorities reflect the evaluation findings; the business case informs the Board memo | The answer demonstrates mastery: every component reinforces every other; Priya Nair and Maya Osei's worlds — the KYC startup build of Capstone 1 and the mature institutional replacement of Capstone 3 — are explicitly connected; the reader understands that these are the same discipline at different scales |
This capstone draws on material from Part 2 (KYC/AML), Part 5 (Vendor Selection and Procurement), Part 6 (Programme Governance), and Part 7 (RegTech Strategy and Change). Students should review Chapters 28 (vendor selection), 36 (contract negotiation), and 38 (change management) before beginning. Maya Osei's vendor evaluation continues the programme that Priya Nair designed in Capstone 1: the FlowPay architecture Priya built in Year 1 is, several years later, the legacy system that challengers like Verdant are now replacing.