Part Six: Governance, Ethics, and Law
Chapters 29–34
There is a version of the RegTech story that is entirely technocratic. In that version, compliance is a set of rules, technology is a set of tools, and the work is matching tools to rules. The harder the rules, the more sophisticated the tools required. Deploy the right platform, configure the right thresholds, and compliance follows.
Part Six exists because that version is incomplete.
Technology does not operate in a values vacuum. When an algorithm decides who gets credit or who gets flagged for money laundering suspicion, it is making a consequential judgment about a person's life — and that judgment has an ethical dimension that no technology specification can contain. When a government deploys surveillance AI to monitor financial transactions, it is exercising state power — and that power raises questions that extend beyond efficiency. When a regulator approves a new technology model, it is making a choice about what kind of system should govern financial life — and that choice has political and social consequences.
Part Six confronts these dimensions directly. It does not argue that technology is bad, or that compliance automation is wrong, or that efficiency should be sacrificed for philosophical purity. It argues that compliance professionals — people who work daily in the space where law meets practice — have both the need and the capability to engage seriously with the governance, ethical, and legal questions that automated compliance raises. Ignoring these questions does not make them go away. It just means they are answered by default rather than by design.
The Six Chapters
Chapter 29 — Algorithmic Fairness and Bias in Compliance Systems
Every dataset used to train a compliance model reflects history. And history, in financial services, contains discrimination: lending redlined by race, insurance priced by gender, credit denied by geography. A model trained on historical decisions will reproduce historical biases unless actively prevented from doing so — and sometimes even when actively prevented.
This chapter examines the technical reality of bias in compliance systems (how it arises, how it is measured, what can be done), the regulatory framework that governs algorithmic discrimination (ECOA in the US, the Equality Act in the UK, the EU AI Act's non-discrimination requirements), and the organizational practices that distinguish firms that manage bias systematically from those that manage it only when forced to. It uses Maya Osei's experience building Verdant Bank's KYC system as a sustained example of what fair-by-design looks like in practice.
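The measurement question the chapter takes up can be made concrete. A minimal sketch follows of one widely used metric, the disparate impact ratio (the selection rate for a protected group divided by the rate for a reference group, with the US "four-fifths rule" treating a ratio below 0.8 as evidence of adverse impact). The groups, decisions, and threshold below are hypothetical illustrations, not data from the chapter:

```python
def selection_rate(decisions: list[int]) -> float:
    """Fraction of positive outcomes (1 = approved / not flagged)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Protected group's selection rate divided by the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical model outputs: 1 = approved, 0 = declined.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # reference group: 70% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # protected group: 40% approved

ratio = disparate_impact_ratio(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.70 = 0.57
print("Below four-fifths threshold" if ratio < 0.8 else "Within threshold")
```

The metric's simplicity is the point and the problem: it captures outcome disparity in one number, but says nothing about whether the disparity reflects the model, the data, or the history the data records, which is exactly the distinction the chapter works through.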
Chapter 30 — The EU AI Act and Algorithmic Accountability
When the EU AI Act was formally adopted in 2024, it became the world's first comprehensive regulatory framework specifically addressing AI systems. Its implications for financial services are significant: credit scoring and employment screening fall squarely within the Act's high-risk category, AML decision-making may be captured depending on how it is deployed, and fraud detection sits at a contested boundary (the Act expressly carves AI systems used to detect financial fraud out of the high-risk creditworthiness category). High-risk AI systems face requirements for transparency, human oversight, data governance, and conformity assessment that represent a materially new compliance burden.
This chapter provides the most complete treatment of the EU AI Act available at the time of writing: the risk-based tier system, the requirements for high-risk AI systems, the conformity assessment process, the prohibited AI practices, the extraterritorial scope, and the enforcement framework. It also examines how other jurisdictions are responding — the UK's sector-specific approach, the US NIST AI Risk Management Framework, and the emerging international landscape.
Chapter 31 — Regulatory Sandboxes: Innovation Meets Oversight
The regulatory sandbox is one of the most significant institutional innovations in financial regulation of the last decade. First deployed by the FCA in 2016, the sandbox allows firms to test innovative products and services in a live environment with real customers, under regulatory oversight but with temporary relief from normal regulatory requirements. The goal: create space for beneficial innovation that might not survive under full regulatory scrutiny from day one.
This chapter examines how regulatory sandboxes work (application, cohort structure, bespoke waiver frameworks), what firms have learned from them, what regulators have learned from them, and how the sandbox model has spread globally (now operational in 60+ jurisdictions). It also examines the critiques: do sandboxes provide meaningful competitive advantage to participants? Do they risk creating a two-tier regulatory system? And what happens after the sandbox ends?
Chapter 32 — Global RegTech: US, EU, UK, APAC Comparative Landscape
RegTech compliance is inherently multi-jurisdictional for any firm operating across borders. The regulatory landscape is fragmented: the EU has DORA and MiCA; the US has Dodd-Frank, FCPA, and a bank-specific regulatory architecture; the UK has post-Brexit independence from EU frameworks; Singapore has MAS TRM Guidelines; Australia has APRA and ASIC; Hong Kong has HKMA. These frameworks overlap, sometimes conflict, and are evolving at different speeds.
This chapter provides a structured comparative analysis of RegTech requirements across the US, the EU, the UK, and the major APAC centers (Singapore, Australia, Hong Kong), organized by compliance domain: KYC/AML, market surveillance, operational resilience, AI governance, and data privacy. It identifies convergences (where requirements are aligning) and divergences (where compliance professionals must maintain genuinely different programs). Cornerstone Financial Group's multi-jurisdictional compliance program serves as the running example.
Chapter 33 — Cybersecurity Regulations: DORA, NIST, and Operational Resilience
Cybersecurity and compliance have become inseparable. A firm that suffers a significant cyber incident will face not just operational disruption but regulatory scrutiny — of its incident response, its third-party risk management, its data breach notification, and its overall operational resilience framework. Regulators across jurisdictions have developed increasingly detailed cybersecurity requirements: DORA in the EU, NIST CSF in the US (increasingly referenced in supervisory guidance), the FCA's operational resilience framework, and sector-specific guidance from the OCC, FFIEC, and PRA.
This chapter covers the regulatory landscape for cybersecurity (the major frameworks and what they require), the technical architecture of cyber resilience (defense in depth, incident detection and response, business continuity), and the compliance professional's role in managing the intersection of cyber risk and regulatory obligation. It pays particular attention to the third-party/supply chain dimension — where most significant cyber incidents in financial services originate.
Chapter 34 — Ethics in Automated Decision-Making
The final chapter of Part Six addresses the broadest question: when should automated systems make decisions that affect people's financial lives, and on what basis should those decisions be made?
This is not a question that regulation fully answers. Regulation sets minimum floors — you must explain adverse decisions; you must not discriminate unlawfully; you must have human oversight for high-risk AI. But floors are not ceilings. The regulatory minimum does not tell a firm whether it is ethical to use an algorithm to decline loan applications in underserved communities, even if the algorithm is accurate. It does not tell a firm how to balance efficiency against fairness when they conflict. It does not tell a regulator whether surveillance-based compliance technology serves the public interest or undermines it.
Chapter 34 provides a framework for ethical analysis of automated decision-making in compliance — drawing on the philosophical traditions of consequentialism, deontology, and virtue ethics, translating them into practical questions that compliance professionals can apply. It does not provide answers so much as the tools for asking better questions — tools that become more important as automation becomes more pervasive.
Why Governance and Ethics Matter to Compliance Professionals
There is a practical objection to devoting six chapters to governance, ethics, and law: aren't compliance professionals already overwhelmed by technical requirements? Is there room for philosophy when there are DORA timelines to meet?
The objection misunderstands the nature of compliance work. Compliance professionals are not rule appliers. They are judgment exercisers — people who must assess ambiguous situations, weigh competing interests, advise on difficult choices, and make recommendations that will affect real people and real institutions. That work requires a well-developed set of values and a capacity to engage with hard questions.
The firms that have gotten compliance technology wrong — the ones whose algorithms discriminated, whose automated decisions caused customer harm, whose cybersecurity failures exposed millions of records — typically did not have compliance professionals who failed to apply the rules. They had compliance professionals who failed to ask the right questions. The questions in Part Six are the right ones.
Recurring Characters in Part Six
Maya Osei (CCO, Verdant Bank) has been building Verdant's compliance infrastructure for four years now. The bank has grown: customer base tripled, transaction volume quadrupled, regulatory complexity increased proportionally. In Part Six, Maya confronts the governance and ethical dimensions of the systems she has built. Her KYC algorithm has been flagging certain customer profiles at higher rates. Her AML monitoring system's automated decisions are being challenged. She is asked to present to Verdant's Board on the ethical use of AI in customer decisions.
Rafael Torres (now consulting, following Meridian Capital's sale) has built compliance technology programs at scale and watched them from the outside. In Part Six, Rafael grapples with what happens when the programs he built are used in ways he did not intend — and with the professional responsibility question of what compliance professionals owe to the systems they help create.
Priya Nair (now Partner, RegTech Advisory) has been promoted from Senior Associate. Her Part Six arc involves advising clients on the EU AI Act conformity assessment process, and confronting the professional tension between giving clients the advice they want (that their systems are compliant) and the advice they need (that their systems may have unresolved fairness issues that the AI Act will expose).
Part Six begins with Chapter 29: Algorithmic Fairness and Bias in Compliance Systems — where history meets mathematics, and where the compliance professional's obligation to be fair meets the regulator's obligation to define what fair means.
Chapters in This Part
- Chapter 29: Algorithmic Fairness and Bias in Compliance Systems
- Chapter 30: The EU AI Act and Algorithmic Accountability
- Chapter 31: Regulatory Sandboxes — Innovation Meets Oversight
- Chapter 32: Global RegTech — US, EU, UK, APAC Comparative Landscape
- Chapter 33: Cybersecurity Regulations — DORA, NIST, and Operational Resilience
- Chapter 34: Ethics in Automated Decision-Making