Case Study 3.1: Priya's Vendor Landscape Briefing — Separating Signal from Noise
The Situation
Client: A large UK retail bank's compliance technology team
Context: The bank's CCO has been asked by the board to present a "RegTech vendor landscape overview" — an assessment of the major vendors in the bank's key compliance domains
Challenge: Producing a credible, defensible overview in 4 weeks without vendor access influencing the analysis
Priya's role: Leading the research and analysis
The Assignment
The bank's CCO had grown frustrated with the quality of vendor intelligence reaching her from internal sources. The two primary sources were: (1) vendor sales presentations, which were obviously self-serving; and (2) an industry analyst report that the CCO suspected was influenced by vendor sponsorship. She asked Priya to produce an independent assessment.
The scope: four RegTech domains relevant to the bank's current priorities:
1. KYC/identity automation
2. AML transaction monitoring
3. Regulatory reporting (Basel IV capital + LCR/NSFR)
4. Trade surveillance (the bank has a large FX desk and a growing rates business)
Priya's Research Methodology
Priya knew that any vendor landscape assessment is only as good as its methodology. She designed a four-source approach:
Source 1: Client references (weighted highest)
For each shortlisted vendor, Priya requested contact details for three reference clients of similar size and regulatory profile. She conducted structured 45-minute calls with each reference using a consistent question set: What problem did you deploy the solution for? What worked? What didn't? What would you do differently? Would you buy this vendor again?
This was the most labor-intensive part of the process and the most valuable. References said things in private that they would never say in a joint vendor reference call.
Source 2: Regulatory examination findings
Several regulators publish findings from thematic reviews that name technology systems (or at least describe system characteristics in identifying ways). The FCA's thematic reviews on AML transaction monitoring quality and MiFIR transaction reporting accuracy were particularly useful — they described patterns of failure associated with specific types of systems, allowing Priya to cross-reference vendor claims against regulatory evidence.
Source 3: Trade press and practitioner networks
Compliance professionals talk. Priya's network of contacts across 17 prior client engagements gave her access to informal assessments that did not appear in any formal document. She systematized this by posting structured questions to three compliance professional forums and conducting 10-minute informal calls with 12 practitioners.
Source 4: Direct vendor demonstrations
Priya took demonstrations from all shortlisted vendors but structured them as due diligence exercises rather than sales presentations. She prepared a standard demonstration script ("Please show me how your system handles [specific scenario]") built around real compliance problems from actual client experience, rather than accepting the vendor's prepared demonstration flow. Vendor behavior during the scripted demonstration was itself informative.
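To keep the comparison consistent across vendors, a script like this can be captured as structured data with a fixed observation log. The sketch below is a hypothetical illustration; the scenario wording, identifiers, and fields are invented for this example, not taken from Priya's actual script.

```python
# Hypothetical structure for a scripted vendor demonstration; scenario text
# and field names are invented for illustration.
DEMO_SCRIPT = [
    {
        "id": "AML-01",
        "prompt": "Show how your system handles a customer whose transaction "
                  "pattern shifts abruptly after 18 months of stable activity.",
        "looking_for": "live alert logic and tuning controls, not canned output",
    },
    {
        "id": "RPT-01",
        "prompt": "Trace this reported liquidity figure back to source data.",
        "looking_for": "depth and usability of the calculation audit trail",
    },
]

def record_observation(scenario_id: str, handled_live: bool, notes: str) -> dict:
    """Log how the vendor responded; whether the team could run the scenario
    live, rather than deferring to a follow-up, is itself a data point."""
    return {"scenario": scenario_id, "handled_live": handled_live, "notes": notes}

# Example usage:
log = [
    record_observation("AML-01", True, "tuned a threshold on screen; convincing"),
    record_observation("RPT-01", False, "deferred to a follow-up; trail unclear"),
]
```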
Key Findings by Domain
KYC/Identity Automation
The market had consolidated significantly around three main approaches: document verification + biometrics (for digital onboarding), eIDV (electronic identity verification) through data bureau integration (for low-risk onboarding), and full orchestration platforms that combined multiple methods.
Key finding from references: implementation time was the most commonly underestimated cost. Vendors quoted 6–8 week implementation timelines; references reported 4–6 months as more realistic for a bank of this size. "The integration with our core banking system took longer than the vendor expected and longer than we expected," said one reference from a comparable UK bank. "Nobody had done exactly this integration before. We were the third client and they were learning as much as we were."
AML Transaction Monitoring
This was the most competitive segment, and the one with the largest gap between vendor claims and delivered performance. The key dimension of differentiation was false positive rate, with vendors claiming 30–60% reductions from current rates. Priya was skeptical of such undifferentiated claims.
Her methodology for evaluating false positive claims: she provided each vendor (under NDA) with a sanitized sample dataset of 10,000 historical transactions, labeled with the alerts the bank had confirmed as true positives and those it had dispositioned as false positives. She asked each vendor to run their system against this dataset and share the results. The vendor whose system most accurately replicated the bank's known true positive set while reducing false positives would have the most credible claim to a genuine performance improvement.
Three of the five vendors agreed to this test. Two declined, citing proprietary concerns, and were removed from the shortlist.
Of the three that agreed, results varied significantly — from one vendor that achieved a 40% false positive reduction with no change in true positive detection, to another that reduced false positives by 60% but also missed 15% of the known true positives.
Key finding: false positive reduction claims require validation against actual historical data. A claim of "40% false positive reduction" measured against a generic benchmark population may translate to 10% reduction against your specific population — or it may translate to 70%. You cannot know without testing.
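A minimal sketch of the scoring behind such a test, assuming each alert in the sanitized sample carries a historical label (confirmed true positive or dispositioned false positive) and each vendor returns the set of alert IDs its system would have raised. The function and field names are illustrative, not part of the case.

```python
# Minimal sketch: score a vendor's re-run of the sample dataset against the
# bank's historical alert labels. Names and data are illustrative.

def score_vendor_run(known_true_positives: set[str],
                     known_false_positives: set[str],
                     vendor_alerts: set[str]) -> dict[str, float]:
    """Compare the alerts the vendor's system raised on the sample against
    the bank's labeled history."""
    tp_retained = known_true_positives & vendor_alerts
    fp_raised = known_false_positives & vendor_alerts
    return {
        # Share of confirmed true positives the vendor's system still catches.
        "tp_retention": len(tp_retained) / len(known_true_positives),
        # Share of historical false positives the vendor's system suppresses.
        "fp_reduction": 1 - len(fp_raised) / len(known_false_positives),
    }

# Example mirroring the trade-off one vendor showed: 60% fewer false
# positives, but 15% of confirmed true positives missed.
tps = {f"TP{i}" for i in range(100)}
fps = {f"FP{i}" for i in range(900)}
vendor = {f"TP{i}" for i in range(85)} | {f"FP{i}" for i in range(360)}
print(score_vendor_run(tps, fps, vendor))
# {'tp_retention': 0.85, 'fp_reduction': 0.6}
```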
Regulatory Reporting
This was the most mature and standardized segment. The leading vendors had robust track records on Basel capital and liquidity reporting. Priya found less variation in core functionality and more variation in: (a) the breadth of regulatory jurisdictions covered, (b) the quality of the calculation audit trail (how easy it was to trace a final number back to source data), and (c) the integration architecture.
Key finding: audit trail quality was systematically underweighted in vendor evaluations. Institutions focused on whether the output was correct; they underweighted whether they could demonstrate why it was correct — a critical issue when a regulatory examiner asks them to explain a specific capital ratio.
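One way to picture what a high-quality calculation audit trail provides: every reported figure carries references to the inputs and the rule that produced it, so an examiner's question can be answered by walking the derivation tree. The sketch below is a simplified illustration of that idea only; the class, rule strings, and numbers are hypothetical, not any vendor's actual schema.

```python
# Simplified sketch of calculation lineage: each figure records the rule and
# the upstream figures it was derived from. All names and values are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Figure:
    name: str
    value: float
    rule: str = "source data"          # calculation rule or source reference
    inputs: tuple["Figure", ...] = ()  # upstream figures this was derived from

    def trace(self, depth: int = 0) -> None:
        """Print the derivation tree from this figure down to source data."""
        print("  " * depth + f"{self.name} = {self.value} [{self.rule}]")
        for parent in self.inputs:
            parent.trace(depth + 1)

# Answering "explain this capital ratio" by walking the tree:
cet1 = Figure("CET1 capital", 4200.0, "GL extract 2024-06-30")
rwa = Figure("Risk-weighted assets", 31500.0, "credit + market + op risk RWA")
ratio = Figure("CET1 ratio", round(4200.0 / 31500.0, 4),
               "CET1 capital / RWA", (cet1, rwa))
ratio.trace()
# CET1 ratio = 0.1333 [CET1 capital / RWA]
#   CET1 capital = 4200.0 [GL extract 2024-06-30]
#   Risk-weighted assets = 31500.0 [credit + market + op risk RWA]
```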
Trade Surveillance
This was the most technically complex domain evaluated. The leading vendors offered sophisticated pattern detection across equity, FX, rates, and derivatives, with cross-asset surveillance capability. The key differentiator was the quality of each vendor's surveillance analytics team: how actively the vendor updated its typologies in response to new regulatory guidance and enforcement precedent.
Key finding from references: ongoing vendor engagement quality was as important as initial product capability. "The system they sold us in 2022 was good. What's made it valuable is that their analysts came back to us in 2023 when ESMA issued new guidance on commodity market manipulation, and they helped us update our scenarios before our next supervisory examination. That's the relationship we needed."
The Deliverable
Priya's final report ran to 48 pages with a 6-page executive summary. It included:
- Market map (vendors categorized by approach and maturity)
- Recommended shortlists for each domain (2–3 vendors)
- Red flags identified for specific vendors
- A vendor selection process recommendation for each domain
- Total estimated implementation budget for all four domains: £1.4M over 18 months
The CCO's response: "This is exactly what we needed. We're proceeding with the shortlists."
The bank completed implementations in all four domains within 24 months, on budget, with one vendor replaced mid-implementation when its integration issues proved more severe than estimated.
Discussion Questions
1. Priya weighted client references highest among her research sources. Why are references typically more valuable than other sources? What are their limitations?
2. Two vendors declined Priya's request to run their system against historical sample data. She removed them from the shortlist. Was this the right decision? What are the potential arguments for reconsidering?
3. Priya found that audit trail quality was "systematically underweighted" in vendor evaluations. Why might compliance teams underweight this factor? What are the consequences of doing so?
4. The report recommended a £1.4M implementation budget across four domains over 18 months. How should a CCO build the internal business case for this investment? (Preview of Chapter 38)
5. What are the limitations of Priya's four-source methodology? What fifth source, if you could add one, would most improve the quality of the analysis?