Chapter 28 Further Reading: AI Regulation --- Global Landscape
The EU AI Act
1. European Commission. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. The full text of the EU AI Act. At over 400 pages including annexes, it is not light reading, but every business leader deploying AI in the EU should be familiar with its key provisions. Focus on Articles 5-7 (prohibited practices and high-risk classification), Articles 8-15 (requirements for high-risk systems), and Articles 51-56 (GPAI provisions). The annexes listing high-risk AI systems (Annex III) and EU harmonization legislation (Annex I) are essential reference material.
2. Veale, M., & Zuiderveen Borgesius, F. (2021). "Demystifying the Draft EU Artificial Intelligence Act." Computer Law Review International, 22(4), 97-112. An accessible and analytically rigorous walkthrough of the EU AI Act's original proposal, written by two leading AI law scholars. While the final Act differs in important ways from the 2021 proposal (particularly regarding GPAI provisions), this paper provides essential context for understanding the Act's conceptual foundations and design choices. Updated analyses by the same authors track the Act's evolution through the legislative process.
3. Malgieri, G. (2023). "The EU AI Act: A Summary of Its Significance, Requirements, and Impact." Future of Life Institute. A clear, well-organized summary of the EU AI Act's key provisions, written for a non-legal audience. Particularly useful for business leaders who need to understand the Act's requirements without reading the full legislative text. Updated as the Act moved through trilogue negotiations and final adoption.
4. Engler, A. (2024). "The EU AI Act: A Primer." Brookings Institution Center for Technology Innovation. A policy-oriented analysis that places the EU AI Act in the context of broader technology governance debates. Engler provides thoughtful analysis of the Act's likely impact on innovation, competitiveness, and fundamental rights, along with practical guidance for companies preparing for compliance. One of the most balanced assessments available.
US AI Regulation
5. National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. The foundational document for AI risk management in the United States. The AI RMF's four-function framework (Govern, Map, Measure, Manage) provides a structured approach to identifying, assessing, and mitigating AI risks. The companion NIST AI RMF Playbook offers practical implementation guidance. Even organizations outside the US will find the framework useful as a starting point for AI governance programs.
6. Engler, A. (2023). "The EU and US Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment." Brookings Institution. A detailed comparison of EU and US approaches to AI regulation, analyzing the philosophical, legal, and practical differences between comprehensive legislation and sector-specific regulation. Particularly valuable for companies operating in both jurisdictions that need to understand how the two frameworks interact --- and where they conflict.
7. Chander, A., Crain, M., & Sun, E. (2024). "Artificial Intelligence Legislation in the States." Stanford RegLab Working Paper. A comprehensive survey of state-level AI legislation across the United States, tracking bills introduced, enacted, and vetoed. Updated periodically, this resource is essential for companies navigating the increasingly active state-level regulatory landscape. The analysis identifies common themes across states and highlights the emerging patchwork of requirements.
8. Selbst, A. D., & Barocas, S. (2023). "Unfair Artificial Intelligence: How FTC Enforcement Can Ensure Equitable AI." University of Pennsylvania Law Review, 172. An analysis of the Federal Trade Commission's authority to regulate AI under existing consumer protection law. Selbst and Barocas argue that the FTC's unfairness authority provides a powerful tool for addressing AI harms --- including algorithmic discrimination --- without new legislation. Their framework for evaluating AI practices under FTC standards is directly relevant to companies assessing US regulatory risk.
China's AI Regulations
9. Sheehan, M. (2023). "China's AI Regulations and How They Get Made." Carnegie Endowment for International Peace. The most accessible English-language overview of China's AI regulatory ecosystem. Sheehan explains the institutional landscape (the Cyberspace Administration of China, the Ministry of Science and Technology, and their respective roles), the regulatory philosophy, and the practical implications for companies operating in China. Essential reading for anyone trying to understand how China's approach differs from Western models.
10. Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2021). "The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation." AI & Society, 36, 59-77. A scholarly analysis of China's AI governance approach, examining the interplay between industrial policy, ethical principles, and regulatory requirements. The paper places China's AI regulations in the context of broader governance objectives --- economic competitiveness, social stability, and technological sovereignty --- and provides a framework for understanding how regulatory decisions are made.
Comparative and International Perspectives
11. Smuha, N. A. (2021). "From a 'Race to AI' to a 'Race to AI Regulation': Regulatory Competition for Artificial Intelligence." Law, Innovation and Technology, 13(1), 57-84. A rigorous academic analysis of regulatory competition in AI governance. Smuha examines how different jurisdictions' regulatory strategies interact --- creating possibilities for regulatory arbitrage, races to the bottom or top, and the "Brussels Effect" (where EU regulation effectively sets global standards). Directly relevant to the chapter's discussion of the regulatory race and its impact on innovation.
12. Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press. Not specifically about AI, but essential for understanding why the EU AI Act may set the global standard for AI regulation. Bradford's thesis --- that the EU's regulatory power derives from its large market size, stringent standards, and inelastic demand, causing companies worldwide to adopt EU standards rather than maintain separate compliance programs --- provides the theoretical framework for the "highest standard" compliance strategy discussed in the chapter.
13. OECD. (2024). OECD Principles on Artificial Intelligence (Updated). OECD Legal Instruments. The OECD AI Principles, adopted by more than 46 countries, represent the closest thing to an international consensus on AI governance values. The principles --- inclusive growth, human-centered values, transparency, robustness, and accountability --- are not legally binding but have influenced legislation in the EU, the US, Canada, Japan, and other jurisdictions. The OECD's AI Policy Observatory provides ongoing monitoring of AI policies worldwide.
14. Cath, C. (2018). "Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges." Philosophical Transactions of the Royal Society A, 376(2133). A foundational overview of the challenges of AI governance at the international level. Cath examines the tension between global AI development and national regulatory sovereignty, the role of multi-stakeholder governance, and the limits of both government regulation and industry self-regulation. Though written before the EU AI Act's passage, the analytical framework remains highly relevant.
AI Auditing and Compliance
15. Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 33-44. The most cited paper on internal AI auditing frameworks. Raji and colleagues propose a structured approach to algorithmic auditing that extends across the entire AI development lifecycle --- from problem formulation through deployment and monitoring. Their framework anticipates many of the requirements that the EU AI Act now mandates for high-risk systems and provides practical guidance for implementation.
16. Metaxa, D., Park, J. S., Landay, J. A., & Hancock, J. (2021). "Auditing Algorithms: Understanding Algorithmic Systems from the Outside In." Foundations and Trends in Human-Computer Interaction, 14(4), 272-344. A comprehensive survey of algorithmic auditing methods, from scraping-based external audits to formal verification. Particularly useful for understanding the range of approaches available for bias auditing and the strengths and limitations of each. Directly relevant to the NYC Local Law 144 case study and the broader question of how to make AI audit requirements operationally meaningful.
17. Mökander, J., & Floridi, L. (2022). "Operationalising AI Governance Through Ethics-Based Auditing: An Industry Case Study." AI and Ethics, 3, 451-468. A case study of implementing AI ethics auditing in a corporate setting. Mökander and Floridi bridge the gap between governance principles and operational practice, identifying specific challenges (stakeholder buy-in, data access, expertise gaps) and solutions. Relevant for companies transitioning from compliance planning to compliance execution.
Self-Regulation and Industry Governance
18. Jobin, A., Ienca, M., & Vayena, E. (2019). "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence, 1(9), 389-399. A systematic review of 84 AI ethics guidelines from around the world, identifying common themes (transparency, justice, non-maleficence, responsibility, privacy) and significant gaps (enforcement, specificity, stakeholder input). The paper provides context for evaluating the proliferation of voluntary AI commitments and their relationship to binding regulation.
19. Calo, R. (2017). "Artificial Intelligence Policy: A Primer and Roadmap." UC Davis Law Review, 51(2), 399-435. Though written before the current wave of AI legislation, Calo's primer remains one of the most cited frameworks for thinking about AI policy design. His taxonomy of AI policy challenges --- representation, rights, duties, and governance --- provides a structured approach to evaluating any AI regulatory proposal. Essential background for understanding why AI regulation is difficult and what design choices are available.
Practical Compliance Guides
20. Future of Life Institute. (2024). EU AI Act Compliance Checker. Online tool. An interactive online tool that helps organizations assess whether their AI systems fall within the scope of the EU AI Act and, if so, what requirements apply. While not a substitute for legal advice, it provides a useful starting point for regulatory mapping. Available at artificialintelligenceact.eu.
21. World Economic Forum. (2024). AI Governance Alliance: Briefing Papers. WEF. A series of practical briefing papers developed by the World Economic Forum's AI Governance Alliance, covering topics including responsible AI deployment, regulatory compliance, and multi-stakeholder governance. Written for a business audience, these papers bridge the gap between policy frameworks and corporate practice.
22. Information Commissioner's Office (UK). (2023). Guidance on AI and Data Protection. ICO. The UK ICO's guidance on applying data protection principles to AI systems. While UK-specific, the guidance provides practical frameworks for addressing transparency, fairness, and accountability in AI that are applicable across jurisdictions. Particularly useful for companies navigating the intersection of AI regulation and data protection law.
Looking Ahead
23. Erdélyi, O. J., & Goldsmith, J. (2022). "Regulating Artificial Intelligence: Proposal for a Global Solution." Government Information Quarterly, 39(4). An analysis of proposals for international AI governance coordination, including the feasibility of a global AI regulatory body. The authors evaluate options ranging from soft-law coordination (like the OECD principles) to treaty-based governance (like the International Atomic Energy Agency model), providing a framework for thinking about where international AI governance is heading.
24. Buiten, M. C. (2019). "Towards Intelligent Regulation of Artificial Intelligence." European Journal of Risk Regulation, 10(1), 41-59. A thoughtful analysis of the regulatory design challenges specific to AI, including the pace-of-change problem, the definitional problem, and the information asymmetry between regulators and regulated entities. Buiten proposes design principles for "intelligent" regulation that is adaptive, proportionate, and technologically informed. A useful counterpoint to arguments that AI regulation is inherently futile because technology moves too fast.
For the most current regulatory developments, the following online resources are updated regularly: the OECD AI Policy Observatory (oecd.ai), the Stanford HAI AI Index (aiindex.stanford.edu), the Future of Life Institute EU AI Act tracker (artificialintelligenceact.eu), and the Brennan Center for Justice AI legislation tracker (brennancenter.org). Given the pace of regulatory change, readers should consult these sources for developments after this textbook's publication date.