Chapter 33: Further Reading — Regulation and Compliance: GDPR, EU AI Act, and Beyond
Primary Legal Texts and Official Guidance
1. European Parliament and Council (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. The full text of the EU AI Act, the primary regulatory framework for this chapter. The Act is dense but essential reading for compliance practitioners. Key provisions to read carefully: Articles 5–6 (prohibited practices and high-risk classification), Annex I (existing Union harmonisation legislation), Annex III (high-risk AI categories), Annex IV (technical documentation requirements), Articles 9–15 (high-risk system obligations), Articles 51–56 (GPAI model provisions), and Articles 99–101 (penalties). The European Commission's AI Office website provides supplementary guidance, Q&A documents, and implementation resources.
2. European Data Protection Board (2022). Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement. Although focused on law enforcement, this guidance provides a detailed analysis of how GDPR provisions interact with biometric AI applications, and its reasoning applies broadly to AI compliance analysis.
3. Article 29 Data Protection Working Party (2018). Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679 (WP251rev.01), endorsed by the European Data Protection Board. The authoritative guidance document on GDPR Article 22's application to automated decision-making. Essential for any organization using AI in decisions with legal or similarly significant effects. Note that this guidance predates the EU AI Act and should be read in conjunction with EU AI Act provisions.
4. Consumer Financial Protection Bureau (2022). "Chatbots in Consumer Finance." CFPB Issue Spotlight. The CFPB's analysis of AI chatbot applications in consumer financial services, covering applicable legal requirements and compliance expectations.
5. Consumer Financial Protection Bureau (2023). "Adverse Action Notification Requirements and the Equal Credit Opportunity Act and Regulation B." CFPB Circular 2023-03. The CFPB's clearest statement of its position that model complexity does not excuse ECOA adverse action compliance, and its expectations for how creditors using AI models must generate compliant adverse action notices. Essential reading for any organization using AI in consumer credit.
6. NIST (2023). AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology. The primary voluntary governance framework for AI in the United States. Freely available from NIST. The companion AI RMF Playbook provides more detailed implementation guidance. Essential for any organization seeking to implement a comprehensive AI risk management program in the United States.
7. EEOC (2023). "Artificial Intelligence and Algorithmic Fairness." EEOC Technical Assistance Document. The EEOC's guidance on how employment anti-discrimination law applies to AI hiring, performance, and termination tools. Covers disparate impact doctrine, the four-fifths rule for adverse impact assessment, and employer liability for vendor-supplied AI tools.
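The four-fifths rule referenced in the EEOC guidance above is simple enough to compute directly: a selection procedure may indicate adverse impact if any group's selection rate falls below 80% of the most-favored group's rate. A minimal sketch, using hypothetical group labels and applicant counts (not drawn from any EEOC source):

```python
# Hedged sketch of the four-fifths (80%) rule from the EEOC's Uniform
# Guidelines, applied to hypothetical hiring data produced by an AI
# screening tool. Group names and counts are illustrative only.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(rates):
    """Compare each group's selection rate to the highest group rate.

    Returns {group: (impact_ratio, flagged)}, where a group is flagged
    if its selection rate is below 4/5 of the most-favored group's rate.
    """
    highest = max(rates.values())
    return {
        group: (rate / highest, rate / highest < 0.8)
        for group, rate in rates.items()
    }

# Hypothetical applicant pools scored by an AI screening tool.
rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}
results = four_fifths_check(rates)
# group_b's impact ratio is 0.30 / 0.48 ≈ 0.625, below 0.8, so flagged.
```

A flagged ratio is a screening signal, not a legal conclusion: the guidance and case law treat the four-fifths rule as a rule of thumb, supplemented by statistical significance testing on adequate sample sizes.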
8. Federal Reserve / OCC (2011). "Supervisory Guidance on Model Risk Management." SR Letter 11-7. The foundational US banking regulatory guidance on model risk management, which applies to AI models used by banks. Although it predates AI as we now understand it, its requirements for model validation, documentation, and governance apply directly to AI applications in financial services.
Academic Analysis
9. Wachter, Sandra, Mittelstadt, Brent, and Russell, Chris (2017). "Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR." Harvard Journal of Law and Technology, 31(2), 841–887. The foundational academic analysis of GDPR Article 22's implications for AI explainability, including the argument for "counterfactual explanations" as a compliance mechanism. Technically sophisticated but essential for understanding the academic debate around algorithmic explanation requirements.
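The "counterfactual explanation" idea in the Wachter et al. paper can be illustrated concretely: instead of opening the model, report the smallest change to an individual's features that would have flipped the decision ("you were denied; had your income been X higher, you would have been approved"). A minimal sketch under stated assumptions: the linear credit model, its weights and threshold, and the candidate changes are all hypothetical illustrations, not taken from the paper.

```python
# Hedged sketch of a counterfactual explanation for a denial decision.
# The "credit model" here is a toy linear scorer; weights, threshold,
# features, and candidate changes are hypothetical.

def approve(features):
    """Toy credit model: approve if the weighted score clears a threshold."""
    score = (0.5 * features["income"]
             + 0.25 * features["years_employed"]
             - 0.25 * features["debt"])
    return score >= 10.0

def counterfactual(features, candidate_changes):
    """Search candidate single-feature changes for the smallest one
    (by absolute delta) that flips a denial into an approval.
    Returns (feature_name, delta) or None if no candidate works."""
    assert not approve(features), "this sketch only explains denials"
    best = None
    for name, delta in candidate_changes:
        changed = dict(features, **{name: features[name] + delta})
        if approve(changed) and (best is None or abs(delta) < abs(best[1])):
            best = (name, delta)
    return best

applicant = {"income": 14.0, "years_employed": 4.0, "debt": 8.0}
# Base score: 0.5*14 + 0.25*4 - 0.25*8 = 6.0, below 10.0 -> denied.
changes = [
    ("income", 2.0), ("income", 8.0),
    ("debt", -8.0), ("years_employed", 4.0),
]
explanation = counterfactual(applicant, changes)
# -> ("income", 8.0): raising income by 8 is the smallest flip found.
```

The paper's actual proposal frames this as an optimization over a distance metric in feature space rather than a fixed candidate list; the brute-force search above is only meant to make the concept tangible.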
10. Selbst, Andrew D., and Barocas, Solon (2018). "The Intuitive Appeal of Explainable Machines." Fordham Law Review, 87(3), 1085–1138. A critical analysis of explainability requirements in AI governance, arguing that explanation requirements may be less effective than commonly assumed and may create false confidence in AI accountability. Essential for balanced analysis of the explainability debate.
11. Barocas, Solon, Hardt, Moritz, and Narayanan, Arvind (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press. The definitive academic text on algorithmic fairness, covering statistical definitions of fairness, their incompatibilities with each other, and the limitations of technical fairness interventions. Freely available online at fairmlbook.org.
12. Doshi-Velez, Finale, and Kim, Been (2017). "Towards a Rigorous Science of Interpretable Machine Learning." arXiv preprint arXiv:1702.08608. A rigorous technical analysis of what interpretability means in machine learning contexts and how different interpretability methods should be evaluated. Essential for technically literate compliance practitioners.
Regulatory Landscape and Policy Analysis
13. KPMG (2024). "EU AI Act: A Compliance Guide for Organizations." KPMG Advisory. One of several major consulting firm guides to EU AI Act compliance. These guides are useful for their practical orientation and their integration of legal requirements with implementation realities. Similar guides from Deloitte, PwC, and McKinsey are also valuable.
14. Future of Privacy Forum (2023). "The AI Act and Its Interaction with Existing Laws." FPF Policy Analysis. An analysis of how the EU AI Act interacts with other EU legal frameworks — the GDPR, the Digital Services Act, the Digital Markets Act, and sector-specific legislation. Important for understanding the full regulatory landscape rather than treating the AI Act in isolation.
15. Information Commissioner's Office (UK) (2023). "AI and Data Protection." ICO Guidance. The UK ICO's guidance on how the UK GDPR applies to AI applications, with particular attention to transparency, automated decision-making, data protection impact assessments, and accountability. Important for organizations with UK operations, given the post-Brexit divergence between UK GDPR and EU GDPR approaches.
16. Zalnieriute, Monika, and Bennett Moses, Lyria (2022). "AI Governance Challenges: A Risk-Based Approach." Global Policy. An academic analysis of risk-based approaches to AI governance — the conceptual foundation of both the EU AI Act and several other regulatory frameworks — that is valuable for understanding both the rationale and the limitations of this approach.
17. Newman, Abraham L. (2008). Protectors of Privacy: Regulating Personal Data in the Global Economy. Cornell University Press. While focused on privacy regulation more broadly, this book provides essential analytical background for understanding why the EU has taken a comprehensive regulatory approach to data protection (and, by extension, to AI) while the United States has not.
Practitioner Resources
18. International Association of Privacy Professionals (IAPP). IAPP AI Governance Center. The IAPP maintains a comprehensive resource center for AI governance and compliance practitioners, including regulatory trackers, practical guides, and professional training resources. The iapp.org website is an essential ongoing resource for compliance professionals.
19. Future of Life Institute (2024). "AI Regulation Tracker." A regularly updated database tracking AI legislation and regulatory developments across major jurisdictions globally. Available at futureoflife.org. Essential for practitioners who need to track the rapidly evolving state-level AI legislation landscape in the United States.
20. AlgorithmWatch (2023). AI Ethics Guidelines Global Inventory. A database of AI ethics guidelines, principles, and governance documents from governments, companies, and civil society organizations globally. Useful for benchmarking an organization's internal AI governance commitments against existing frameworks and for tracking the evolution of global AI governance norms. Available at algorithmwatch.org.