Further Reading — Chapter 30: The EU AI Act and Algorithmic Accountability
Primary Legal Texts
EU AI Act — Official Text
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Published in the Official Journal of the European Union, OJ L, 2024/1689, 12.7.2024. URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
The foundational primary source. The complete text comprises 180 recitals, 113 articles, and 13 annexes. Practitioners should be familiar in particular with Articles 1–6 (scope and risk tiers), Articles 9–15 (high-risk requirements), Articles 51–56 (GPAI), Annex III (high-risk use cases), and Annex IV (technical documentation). The recitals provide essential interpretive context.
EU AI Act — Corrigenda and Amendments
Monitor EUR-Lex for corrections and implementing measures as the European Commission develops secondary legislation and guidance under the Act. The first implementing acts, standardisation mandates, and AI Office guidance documents are expected throughout 2025–2026. URL: https://eur-lex.europa.eu/search.html (search: "artificial intelligence act")
Official Guidance and Implementation Resources
European AI Office
The EU AI Office, established within the European Commission, is the primary regulatory authority for GPAI models and the coordination body for national competent authorities. Its website publishes official guidance, the Code of Practice for GPAI models, and enforcement updates. URL: https://digital-strategy.ec.europa.eu/en/policies/ai-office
GPAI Code of Practice
The Code of Practice for general-purpose AI model providers, developed by the European AI Office in consultation with industry, civil society, and academia. Essential reading for any firm building or deploying applications on foundation models. The Code operationalizes the GPAI provisions of Articles 51–56 and is the primary compliance reference for systemic-risk GPAI obligations. URL: https://digital-strategy.ec.europa.eu/en/policies/ai-office (section on GPAI)
AI High-Level Expert Group (AI HLEG) Documentation
The AI HLEG's Ethics Guidelines for Trustworthy AI (2019) and Assessment List for Trustworthy AI (ALTAI) predate the Act but remain useful for understanding the values framework that informed the legislative design. ALTAI is a practical self-assessment tool that maps well to the Act's requirements. URL: https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai
EU Database for High-Risk AI Systems
The publicly accessible database of registered high-risk AI systems, maintained by the European Commission. Once operational and populated with registrations from the August 2026 application date, this will be an important resource for due diligence, civil society scrutiny, and competitive intelligence. URL: https://eudatabase.eu (expected; check current EU Digital Strategy portal for authoritative link)
European Commission AI Act FAQ and Explainer
The Commission has published explanatory materials and FAQ documents to assist with implementation. These are less authoritative than the Act's recitals but useful for understanding the Commission's interpretive intent on contested questions. URL: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
US Frameworks and Regulatory Guidance
NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)
National Institute of Standards and Technology, January 2023. The primary voluntary AI risk management framework in the US. Its four core functions (Govern, Map, Measure, Manage) and the companion AI RMF Playbook provide practical implementation guidance. Widely referenced by US financial regulators as a best-practice reference. URL: https://www.nist.gov/artificial-intelligence/ai-risk-management-framework
NIST AI RMF Playbook
The companion implementation resource to the AI RMF, providing concrete suggested actions for each core function. More operationally useful than the framework document itself for firms beginning an AI risk management program. URL: https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook
Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 2023)
The Biden administration's executive order directing federal agencies to develop AI governance guidance within their mandates. Contains financial services-specific provisions directing FSOC and member agencies to assess AI risks to financial stability. Note: executive orders are subject to modification or revocation by subsequent administrations; verify current status. URL: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
OCC, Federal Reserve, and FDIC — Model Risk Management Guidance (SR 11-7 / OCC 2011-12)
The interagency guidance on model risk management remains the primary framework governing AI models as models in US banking supervision. The guidance's requirements for model validation, independent review, documentation, and ongoing monitoring map closely to several EU AI Act obligations. Practitioners fluent in SR 11-7 have a strong foundation for AI Act compliance. URL: https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm
UK Regulatory Framework
UK Government AI White Paper (March 2023)
A Pro-Innovation Approach to AI Regulation (HM Government, CP 815). The policy document that established the UK's sector-specific, principles-based approach to AI regulation. Sets out the five cross-sector AI principles and the role of existing regulators. Essential context for understanding UK/EU divergence. URL: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach
FCA and PRA — AI and Machine Learning Discussion Papers
The FCA has engaged with AI governance through several publications: the 2022 discussion paper on AI and Machine Learning (DP22/4), joint work with the PRA and Bank of England on AI in financial services, and the joint FCA/PRA statement on the macroprudential risks of AI (2024). These establish the UK regulators' expectations without creating binding rules. URL: https://www.fca.org.uk/publications (search: AI and machine learning)
ICO — AI Guidance and Auditing Framework
The UK Information Commissioner's Office has published guidance on AI and data protection under UK GDPR, including guidance on AI auditing, explaining AI decisions to individuals, and fairness in algorithmic processing. Directly relevant to firms navigating both EU AI Act and UK GDPR obligations. URL: https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/guidance-on-ai-and-data-protection/
CMA — AI Foundation Models: Initial Review
The Competition and Markets Authority's 2023 review of foundation AI models examined competition and consumer protection issues arising from the development and deployment of large foundation models. Relevant for firms assessing market structure risks in AI vendor relationships. URL: https://www.gov.uk/government/publications/ai-foundation-models-initial-review
Academic and Practitioner Literature
Veale, M. and Zuiderveen Borgesius, F. (2021). "Demystifying the Draft EU Artificial Intelligence Act." Computer Law Review International, 22(4), 97–112. One of the most cited early analyses of the Act's legislative architecture, risk-based approach, and gaps. Essential academic reading.
Cihon, P., Maas, M.M., and Kemp, L. (2020). "Should Artificial Intelligence Governance be Centralized? Six Design Lessons from History." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES '20). Examines the centralization vs. decentralization question in AI governance — relevant context for understanding the EU's choice of a horizontal regulation vs. the UK's sector-specific approach.
Mökander, J., and Floridi, L. (2021). "Ethics-Based Auditing to Develop Trustworthy AI." Minds and Machines, 31(2), 323–357. Theoretical framework for AI auditing that maps well to the conformity assessment and technical documentation requirements of the EU AI Act.
Barocas, S., Hardt, M., and Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press (open access). The definitive technical text on algorithmic fairness, covering the mathematical definitions, impossibility results, and practical techniques that underpin Article 10's bias examination requirements. URL: https://fairmlbook.org
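The group-fairness definitions catalogued in Barocas, Hardt, and Narayanan are directly computable from a classifier's outputs. As an illustrative sketch (not drawn from any of the texts above — the function names and toy data are this section's own), the demographic-parity difference and the true-positive-rate gap between two groups can be measured like this:

```python
# Illustrative sketch of two common group-fairness metrics from the
# algorithmic-fairness literature, computed with plain Python lists.
# Binary labels/predictions (0/1); `group` assigns each individual to group 0 or 1.

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    rates = {}
    for g in (0, 1):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return abs(rates[0] - rates[1])

def true_positive_rate_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates (one half of equalized odds)."""
    tprs = {}
    for g in (0, 1):
        # Keep only the genuinely positive cases in group g, then average predictions.
        positives = [p for t, p, grp in zip(y_true, y_pred, group)
                     if grp == g and t == 1]
        tprs[g] = sum(positives) / len(positives)
    return abs(tprs[0] - tprs[1])

# Toy example: eight individuals, four in each group.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))  # 0.0: both groups see a 50% positive rate
print(true_positive_rate_gap(y_true, y_pred, group))  # nonzero: error rates still differ by group
```

The toy data makes the book's central point concrete: the classifier satisfies demographic parity exactly while still violating equalized odds, which is why Article 10-style bias examinations need more than a single metric.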
Industry Guidance
European Banking Authority (EBA) — Report on Big Data and Advanced Analytics
The EBA's work on big data and advanced analytics in financial services predates the AI Act but addresses the same substantive concerns from a prudential supervision perspective. Useful for understanding how AI Act obligations intersect with EBA supervisory expectations. URL: https://www.eba.europa.eu (search: big data advanced analytics)
Financial Stability Board (FSB) — Artificial Intelligence and Machine Learning in Financial Services
The FSB's 2017 report on AI and ML in financial services remains a useful primer on the risk categories and regulatory considerations relevant to AI in the financial sector, with a focus on systemic and macroprudential dimensions. URL: https://www.fsb.org/work-of-the-fsb/financial-innovation-and-structural-change/fintech/
Alliance for Artificial Intelligence in Finance (AAIF)
Industry body working on AI governance standards and best practices in financial services, with publications on AI Act compliance implementation and model risk governance for AI. URL: Check current status for authoritative URL.