Case Study 31-1: What Verdant Bank Learned in the FCA Sandbox
Background
Verdant Bank's Chief Compliance Officer, Maya Osei, had spent the better part of two years watching the bank's lending team struggle with a problem she could articulate precisely but could not easily solve: the bank's traditional credit scoring model systematically disadvantaged a large segment of the UK population who were financially capable but credit-invisible.
The customers in question were not high-risk borrowers. They were, in Maya's view, low-risk borrowers who had simply never been asked the right questions. They had limited or no credit bureau records — not because they had defaulted or been financially irresponsible, but because they had moved frequently, were recent immigrants, had avoided credit products for cultural or personal reasons, or had simply not needed to borrow before. The standard credit bureau score, which relied on repayment history with financial institutions, returned a thin file: insufficient data to generate a score, or a score based on so few data points as to be statistically unreliable.
The credit modeling team had developed a solution: an alternative credit scoring model that used open banking transaction data — with explicit customer consent — rather than traditional credit bureau data. Instead of asking "has this customer repaid loans to banks in the past?", the model asked "does this customer's financial behavior indicate the capacity and discipline to repay a loan?" It examined income regularity, expenditure patterns, savings behavior, rent payment consistency, and utility payment history — data that bank accounts contain but credit bureaus generally do not.
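The signal families described above can be sketched as a simple feature-extraction step. Everything below is an illustrative assumption — the dataclass fields, function names, and formulas are invented for this sketch and are not Verdant's actual schema or model.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

# Hypothetical monthly aggregates derived from consented open banking
# transaction data; the field names are illustrative, not Verdant's schema.
@dataclass
class MonthlyAggregate:
    income: float           # total credits identified as income
    essential_spend: float  # rent, utilities, groceries
    savings_inflow: float   # transfers into savings products
    rent_paid: bool         # a rent payment was observed this month
    utilities_paid: bool    # utility payments were observed this month

def extract_features(months: list[MonthlyAggregate]) -> dict[str, float]:
    """Turn twelve months of aggregates into the signal families the case
    describes: income regularity, expenditure patterns, savings behaviour,
    and rent/utility payment consistency. Illustrative only."""
    incomes = [m.income for m in months]
    avg_income = mean(incomes)
    return {
        # low coefficient of variation = regular income
        "income_regularity": 1 - (pstdev(incomes) / avg_income if avg_income else 1),
        "spend_to_income": mean(m.essential_spend for m in months) / avg_income if avg_income else 0.0,
        "savings_rate": mean(m.savings_inflow for m in months) / avg_income if avg_income else 0.0,
        "rent_consistency": sum(m.rent_paid for m in months) / len(months),
        "utility_consistency": sum(m.utilities_paid for m in months) / len(months),
    }
```

The point of the sketch is the shift in question the case describes: none of these features asks whether the applicant has previously repaid a loan, only whether observed financial behaviour indicates capacity and discipline.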
The model's performance in internal testing was compelling. A backtest against historical data suggested that it would have correctly assessed creditworthiness for a significant proportion of customers who had been declined under the traditional model — and would have maintained default rates well within Verdant's risk appetite.
Maya had two problems with launching it commercially. The first was regulatory. The FCA's consumer credit rules and the responsible lending requirements in the Consumer Credit Act were written around conventional credit assessment methods. Using open banking transaction data as the primary basis for a lending decision was novel enough that the FCA had not published guidance on it — and the absence of guidance in a heavily regulated area is itself a regulatory signal. The second was empirical. The backtest was not a live test. A model that worked against historical data might behave differently with real customers making real applications in real time.
Maya's solution to both problems was the same: apply to the FCA regulatory sandbox.
The Application
Verdant's sandbox application was prepared over three months, with significant involvement from the bank's regulatory affairs team and external legal counsel. The application identified five eligibility criteria and addressed each:
Genuine innovation. Open banking credit scoring using transaction data as the primary assessment basis, rather than as a supplement to bureau data, was genuinely novel in the UK market. Several US fintechs had explored similar approaches, but no FCA-authorized lender had launched a transaction-data-primary credit product.
Consumer benefit. The benefit case was specific and well-evidenced: a defined population of UK adults had been systematically declined for credit or quoted unaffordable rates due to thin credit files, despite financial behavior consistent with creditworthiness. The alternative model would extend credit access to this population at risk-appropriate rates. Maya's team produced a quantitative analysis estimating the size of the affected population and the financial impact of improved access.
Need for sandbox. This was the application's strongest criterion. The FCA had not published guidance on open banking transaction data as a primary credit assessment method. Launching commercially without regulatory clarity would expose Verdant to enforcement risk under responsible lending requirements — specifically, the risk that a regulator or court would later determine that the transaction-data model did not satisfy the responsible lending obligation as it would be interpreted under existing guidance.
UK nexus. Verdant was a UK-authorized bank operating under FCA and PRA supervision. All test customers would be UK residents applying for UK personal loans.
Ready to test. The model was production-ready. Verdant had built the open banking data pipeline, designed the customer consent journey, and validated the model infrastructure. It was ready to accept live applications within sixty days of sandbox admission.
The application requested two specific waivers: a modification of the FCA's responsible lending guidance to confirm that transaction-data-primary assessment satisfied the responsible lending obligation; and a no-action letter covering the bank's data processing activities in connection with the test, pending FCA review of its Data Protection Impact Assessment. The proposed customer limit was 500 applications over twelve months.
Verdant was admitted to the sandbox in Cohort 14.
Testing: What Went as Expected
The test launched with a focused marketing campaign targeting the credit-invisible population segment Maya's team had identified. Verdant's open banking consent journey had been carefully designed to be transparent and comprehensible — customers were told, in plain language, that their bank transaction data would be analyzed to assess their creditworthiness, what data would be used, for how long it would be retained, and how to withdraw consent.
The model performed well. Approval rates in the target population were substantially higher than under the traditional model, and early default data — limited, given the twelve-month test window — was tracking within expectations. The FCA case officer, who joined every monthly check-in call, expressed satisfaction with both the technical performance and the compliance framework.
So far, so expected.
Testing: What Did Not Go as Expected
Two findings emerged from the sandbox that Verdant's team had not anticipated.
Finding One: Customers did not understand how their spending data was being used for credit.
The consent journey had been designed and user-tested before the sandbox launch. Verdant's design team had conducted usability testing with recruited participants and was satisfied that the disclosure was clear and comprehensible. The recruited participants — typically younger, digitally comfortable, and financially engaged individuals — had understood it.
The actual test population was different. A significant proportion of applicants — particularly those in the older segments of the target population, and those who were less comfortable with digital financial services — signed the consent form without understanding its content. Post-application surveys revealed that a substantial proportion of applicants who consented to open banking data use did not understand that their grocery spending, subscription payments, and leisure expenditure would be part of the assessment. Many had assumed that only their bank account balance and income deposits would be reviewed.
This was not a regulatory violation — the disclosure had been made, and consent had been obtained — but it was a consumer experience problem with regulatory implications. The FCA case officer raised it at the next monthly check-in, and the FCA's published sandbox guidance places considerable weight on genuine informed consent, not merely technical consent. Maya ordered an immediate redesign of the consent journey: a shorter, simpler primary disclosure (three bullet points in 14-point type) followed by a comprehensive secondary disclosure for applicants who wanted more detail. The redesigned journey was tested with a broader and more representative user panel before redeployment.
Finding Two: The model performed worse for customers with variable income.
The transaction-data model had been backtested on historical data in which the majority of applicants had regular salaried income. The live test population included a significant number of applicants with variable income — gig economy workers, freelancers, self-employed individuals, and those with multiple part-time jobs. For these applicants, the model's income regularity signals — designed to identify stable income patterns as a creditworthiness indicator — produced artificially low scores, because income that was genuinely present but irregularly timed read as income instability rather than income variability.
The statistical picture was concerning. The approval rate for applicants with variable income was 18 percentage points lower than for applicants with regular income, after controlling for other creditworthiness indicators. The default rate differential between the two groups, measured over the test period, did not justify this gap. The model was producing outcomes that appeared to disadvantage a specific economic cohort — gig workers — not because they were higher risk, but because the model's signal design assumed a mode of working that an increasing proportion of the target population did not fit.
Maya escalated this finding to the FCA case officer immediately. It raised issues that extended beyond the sandbox: the FCA had already engaged with algorithmic fairness concerns in its AI and machine learning discussion papers, and a finding that an open banking credit model systematically disadvantaged gig workers — a population already underserved by traditional financial services — was exactly the kind of outcome the FCA's responsible lending and fair treatment requirements were designed to prevent.
Redesign and Resolution
Verdant's credit modeling team spent six weeks redesigning the income assessment component of the model. Rather than measuring income regularity as a single signal, the redesigned model distinguished between income type (regular salary, regular freelance, variable gig economy, mixed) and applied different assessment logic to each type. For gig economy applicants, the model assessed average monthly income over a twelve-month horizon rather than income-to-income consistency — capturing genuine earning capacity without penalizing the timing variability inherent in gig work.
The redesigned model required a technical modification to the sandbox terms, which the FCA approved within three weeks. The second half of the test — run with the redesigned model — showed the income differential in approval rates narrowing from 18 percentage points to 4 percentage points, within a range the FCA considered acceptable given genuine creditworthiness differences between income types.
The consent journey redesign was simultaneously implemented. Post-application surveys in the second half of the test showed a material improvement in comprehension scores: the proportion of applicants who correctly understood that spending data was being used rose from 61% to 89%.
Post-Sandbox Outcomes
Verdant's exit report was reviewed by the FCA's Financial Conduct team and its Credit and Lending Policy team. The two findings — consent comprehension and gig economy model bias — featured prominently in the FCA's published sandbox learning paper for Cohort 14, which noted that open banking credit assessment models should explicitly address variable income customers in their design, and that consent journeys for data-intensive products required testing with representative rather than recruited user populations.
Verdant applied for a variation of permissions to launch the open banking credit model commercially, incorporating the redesigned consent journey and the income-type-differentiated model architecture. The variation was granted eight months after the exit report was submitted. The bank launched commercially with a product that was materially better — more equitable in its income assessment, more genuinely transparent in its data use disclosure — than the product it had originally designed.
Maya's post-sandbox assessment was direct: "We thought we were testing whether the technology worked. We were also testing whether we understood our customers. We didn't, not fully. The sandbox told us that, at a scale where we could fix it."
Discussion Questions
- Verdant's consent journey was user-tested before the sandbox launch but still produced comprehension failures in the live test. What does this reveal about the limitations of pre-launch user testing in financial services product design, and how should firms design consent journeys for data-intensive products to reduce this risk?
- The income-variability finding required Verdant to redesign a core component of its credit model mid-test. How should firms balance the goals of maintaining test validity (consistency of methodology across the test period) against the obligation to correct potentially discriminatory outcomes as soon as they are detected?
- The FCA published the Verdant sandbox learnings — including the consent comprehension and gig economy model findings — in its cohort learning paper. What is the regulatory rationale for publishing individual firm findings? What are the potential costs to the firm, and what are the countervailing benefits to the broader market?
- Verdant's redesigned model reduced the approval rate gap between variable-income and regular-income applicants from 18 percentage points to 4 percentage points. The residual 4-point gap was accepted by the FCA as reflecting genuine creditworthiness differences rather than model bias. How should regulators and firms think about distinguishing acceptable outcome disparities (reflecting genuine risk differences) from unacceptable outcome disparities (reflecting model design bias)?
- Verdant entered the sandbox because it was uncertain whether its open banking credit model satisfied the FCA's responsible lending requirements. At the end of the sandbox, the FCA granted a variation of permissions confirming that it did — subject to the design modifications. What would have happened, and what risks would Verdant have faced, if it had launched commercially without the sandbox, under the original model design?