Key Takeaways — Chapter 30: The EU AI Act and Algorithmic Accountability
1. The EU AI Act Establishes Four Risk Tiers, Not a Blanket AI Regulation
The EU AI Act (Regulation (EU) 2024/1689) does not regulate all AI indiscriminately. It applies a risk-based tiered framework: prohibited practices (Article 5, banned outright), high-risk AI systems (Annex III, full compliance obligations), limited-risk systems (transparency disclosures only), and minimal-risk systems (general law only). The vast majority of an institution's AI systems will likely fall into the limited- or minimal-risk tiers. Compliance effort concentrates on the high-risk tier, but getting the classification right is the first and most consequential step.
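As a way of internalizing the tiering logic (not a substitute for legal analysis), a minimal classification sketch might look like the following. The boolean inputs are illustrative placeholders for determinations that only a legal review can make:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5 practices, banned outright
    HIGH = "high"              # Annex III use cases, full compliance regime
    LIMITED = "limited"        # transparency disclosures only
    MINIMAL = "minimal"        # general law only

def classify(article5_practice: bool, annex_iii_use_case: bool,
             interacts_with_natural_persons: bool) -> RiskTier:
    """Illustrative triage only; the real test is legal, not programmatic.
    The boolean inputs are assumed to come from a prior legal review."""
    if article5_practice:               # e.g. social scoring, manipulation
        return RiskTier.PROHIBITED
    if annex_iii_use_case:              # e.g. credit scoring of natural persons
        return RiskTier.HIGH
    if interacts_with_natural_persons:  # e.g. a customer-facing chatbot
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```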
2. Several Financial Services AI Categories Are Explicitly High-Risk
Annex III(5)(b) explicitly classifies AI systems used to evaluate the creditworthiness of natural persons or establish their credit score as high-risk. Annex III(5)(c) does the same for life and health insurance risk assessment and pricing AI. Annex III(4) covers employment and worker management AI, including recruitment screening tools, and Annex III(1) covers biometric identification. For financial institutions, credit scoring models, insurance pricing models, AI-driven hiring tools, and KYC systems with facial recognition components are unambiguously subject to the full high-risk compliance framework.
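For inventory triage, this mapping can be held as simple reference data. A sketch, where the category keys are illustrative internal labels rather than terms from the Act:

```python
# Reference table for triaging a financial institution's AI inventory
# against the Annex III points cited above. Category names are
# illustrative internal labels, not terms from the Act.
ANNEX_III_FINANCIAL_SERVICES = {
    "credit_scoring": "Annex III(5)(b): creditworthiness / credit scores of natural persons",
    "life_health_insurance_pricing": "Annex III(5)(c): life and health insurance risk assessment and pricing",
    "recruitment_screening": "Annex III(4): employment and worker management",
    "kyc_facial_recognition": "Annex III(1): biometric identification",
}

def annex_iii_point(category: str) -> str | None:
    """Return the Annex III point for a system category, or None if the
    category is not in the explicitly high-risk list sketched here."""
    return ANNEX_III_FINANCIAL_SERVICES.get(category)
```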
3. Seven Substantive Requirements Apply to High-Risk AI Systems
Articles 9–15 impose seven core obligations on providers and deployers of high-risk AI:
- Article 9: Risk management system — continuous, iterative, lifecycle-spanning;
- Article 10: Data and data governance — representativeness, bias examination, GDPR compliance;
- Article 11 + Annex IV: Technical documentation — comprehensive, maintained throughout lifecycle;
- Article 12: Automatic logging — audit trail of inputs, outputs, and events;
- Article 13: Transparency — instructions for use enabling deployers to understand outputs;
- Article 14: Human oversight — designated competent persons with intervention authority and automation-bias awareness;
- Article 15: Accuracy, robustness, and cybersecurity — consistent performance throughout the lifecycle and resilience against errors, faults, and attempts at manipulation.
These are not one-time pre-launch checks. They are ongoing operational requirements.
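One way to operationalize that point is to track each requirement with a review cadence rather than a one-time flag. A minimal sketch, with field names and review intervals as assumptions rather than anything prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Requirement:
    """One Articles 9-15 obligation, tracked as a living control rather
    than a pre-launch checkbox. Field names here are illustrative."""
    article: str
    obligation: str
    last_reviewed: date
    review_interval_days: int  # cadence is the firm's choice, not the Act's

    def overdue(self, today: date) -> bool:
        return (today - self.last_reviewed).days > self.review_interval_days

# The seven obligations summarised above; review cadences are assumptions.
REQUIREMENTS = [
    Requirement("Art. 9",  "risk management system",              date(2026, 8, 2), 90),
    Requirement("Art. 10", "data and data governance",            date(2026, 8, 2), 90),
    Requirement("Art. 11", "technical documentation (Annex IV)",  date(2026, 8, 2), 180),
    Requirement("Art. 12", "automatic logging",                   date(2026, 8, 2), 30),
    Requirement("Art. 13", "transparency / instructions for use", date(2026, 8, 2), 180),
    Requirement("Art. 14", "human oversight",                     date(2026, 8, 2), 90),
    Requirement("Art. 15", "accuracy, robustness, cybersecurity", date(2026, 8, 2), 90),
]

overdue_now = [r for r in REQUIREMENTS if r.overdue(date.today())]
```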
4. Conformity Assessment for Most Financial Services AI Is Self-Assessment
For most Annex III high-risk AI systems, including credit scoring, insurance pricing, and employment AI, providers may conduct an internal conformity assessment without a mandatory third-party notified body. The main exceptions are biometric identification AI and certain law enforcement AI, which may require notified-body assessment. Self-assessment means firms bear full responsibility for the integrity of their own evaluation. Following conformity assessment, firms must affix CE marking, draw up an EU Declaration of Conformity, and register the system in the publicly accessible EU database of high-risk AI systems.
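The routing this implies can be sketched as follows; the input flag stands in for a prior legal classification, and Article 43 governs the actual choice of procedure:

```python
def conformity_route(biometric_identification: bool) -> str:
    """Sketch of the routing summarised above; Article 43 governs the
    actual choice of procedure. The input flag is assumed to come from
    a prior legal classification of the system."""
    if biometric_identification:
        # Annex III point 1 systems may need a third-party notified body.
        return "notified-body assessment (third party)"
    # Credit scoring, insurance pricing, employment AI, etc.: internal
    # control, with the firm fully responsible for its own evaluation.
    return "internal self-assessment (internal control)"

# Steps that follow a successful conformity assessment, per the text above.
POST_ASSESSMENT_STEPS = [
    "affix CE marking",
    "draw up the EU Declaration of Conformity",
    "register the system in the public EU database of high-risk AI systems",
]
```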
5. The August 2026 Deadline Is the Critical Date for Financial Services Firms
The Act's staggered timeline creates a compliance calendar: prohibited practices have applied since 2 February 2025, GPAI model obligations since 2 August 2025, and the Annex III high-risk obligations apply from 2 August 2026. The 2026 deadline is when credit scoring, insurance pricing, and employment AI must be fully compliant, with CE marking, technical documentation, risk management systems, human oversight frameworks, and EU database registration all in place. For many institutions, given the time required for conformity assessment, compliance programs need to begin at least twelve to eighteen months before the deadline.
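The calendar lends itself to simple date arithmetic. A sketch of the countdown, treating the eighteen-month lead time as an assumed planning parameter:

```python
from datetime import date, timedelta

# Application dates from the Act's staggered timeline, as cited above.
MILESTONES = {
    "prohibited practices":            date(2025, 2, 2),
    "GPAI model obligations":          date(2025, 8, 2),
    "Annex III high-risk obligations": date(2026, 8, 2),
}

# Assumed planning horizon, per the twelve-to-eighteen-month guidance above.
LEAD_TIME = timedelta(days=18 * 30)  # roughly eighteen months

def latest_program_start(milestone: str) -> date:
    """Latest date a compliance program should start, on the rough
    assumption that conformity work takes about eighteen months."""
    return MILESTONES[milestone] - LEAD_TIME

# -> date(2025, 2, 8) for the 2 August 2026 high-risk deadline
start = latest_program_start("Annex III high-risk obligations")
```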
6. The EU AI Act Has Extraterritorial Reach — Non-EU Firms Serving EU Customers Are Caught
The Act applies to providers placing AI on the EU market and to providers and deployers whose AI outputs are used in the EU, regardless of where the firm is established. A US or UK bank running credit scoring models whose outputs affect EU-resident customers is subject to the Act for those systems. The extraterritorial scope is not theoretical for large financial institutions with European retail books. Compliance programs must assess which systems affect EU customers, not merely which systems are operated from EU offices.
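In inventory terms, the test is output-based rather than location-based. A sketch of the filter, with illustrative fields; whether an output is "used in the EU" is a legal judgment that feeds the flag:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    operated_from_eu: bool    # where the system is run and managed
    outputs_used_in_eu: bool  # legal determination under the Act's scope rules
    placed_on_eu_market: bool

def in_scope(s: AISystem) -> bool:
    """Article 2 scope sketch: a system is caught if it is placed on the
    EU market or its outputs are used in the EU; where it is operated
    from does not, by itself, take it out of scope."""
    return s.placed_on_eu_market or s.outputs_used_in_eu

inventory = [
    AISystem("us_credit_model", operated_from_eu=False,
             outputs_used_in_eu=True, placed_on_eu_market=False),
]
caught = [s for s in inventory if in_scope(s)]  # caught despite US operation
```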
7. The UK Has Deliberately Diverged From the EU AI Act
The UK does not have an AI Act equivalent. Following Brexit, the UK Government's 2023 AI White Paper chose a sector-specific, principles-based approach that relies on existing regulators (FCA, PRA, ICO, CMA) applying five cross-sector AI principles through their existing powers: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. The FCA applies AI oversight through its principles-based regulatory framework and the Consumer Duty rather than through prescribed Article-level requirements. UK firms serving EU customers remain subject to the EU AI Act for those services.
8. The NIST AI RMF Is the Primary US Voluntary Framework — Not a Legal Mandate
The NIST AI Risk Management Framework (AI RMF 1.0, January 2023) is the most widely referenced AI risk management framework in the US financial sector. Its four core functions — Govern, Map, Measure, Manage — provide a structured methodology for AI risk management adopted as a reference by the federal financial regulators. Critically, the AI RMF is voluntary guidance, not a regulation. US financial institutions bear no legal obligation to adopt it, though adoption is strongly encouraged by supervisory expectations and aligns with SR 11-7 model risk management discipline. The absence of a comprehensive federal AI Act means US firms with EU exposure cannot rely on their US compliance posture alone.
9. General-Purpose AI Models Face a Separate Framework — With a Compute Threshold for Systemic Risk
The Act's GPAI provisions (Articles 51–56) establish separate obligations for foundation models. All GPAI providers must maintain technical documentation and publish training data summaries. GPAI models trained with more than 10^25 FLOPs are classified as presenting systemic risk and face additional requirements: adversarial testing (red-teaming), incident reporting to the AI Office, enhanced cybersecurity measures, and ongoing evaluation. Financial institutions deploying applications built on foundation models need to understand whether the underlying model is classified as presenting systemic risk and which obligations flow to downstream deployers.
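The systemic-risk presumption turns on a single quantitative trigger, which makes the check itself trivial once training compute has been estimated. A sketch:

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, per the Act

def presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model trained with more than 10^25 FLOPs is classified as
    presenting systemic risk, triggering the additional obligations
    (red-teaming, incident reporting to the AI Office, and so on)."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Worked example: a model trained with ~5 x 10^25 FLOPs is over the
# threshold; one trained with ~1 x 10^24 FLOPs is not.
assert presumed_systemic_risk(5e25) is True
assert presumed_systemic_risk(1e24) is False
```

The hard part in practice is not the comparison but the estimate of cumulative training compute, which for a third-party foundation model must come from the provider's documentation.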
10. CE Marking and EU Database Registration Create Public Accountability
Following conformity assessment, high-risk AI systems must carry CE marking (the EU conformity mark) and be registered in the EU database of high-risk AI systems — a publicly accessible register maintained by the European Commission. Public registration means civil society organizations, journalists, regulators, and affected individuals can identify which high-risk AI systems are deployed by which organizations. This creates reputational and political accountability that extends beyond the direct compliance framework. Financial institutions should treat the public register not merely as an administrative obligation but as a public disclosure with reputational dimensions.