In This Chapter
- The Age of AI Law Has Arrived
- Learning Objectives
- Section 1: The Regulatory Landscape Overview
- Section 2: GDPR and AI
- Section 3: The EU AI Act — Framework and Risk Tiers
- Section 4: EU AI Act — High-Risk AI Requirements
- Section 5: EU AI Act — Governance and Enforcement
- Section 6: US Federal AI Regulation
- Section 7: US Sector-Specific Regulation
- Section 8: State-Level US Regulation
- Section 9: Other Major Jurisdictions
- Section 10: Building a Compliance Program
- Summary
Chapter 33: Regulation and Compliance — GDPR, EU AI Act, and Beyond
The Age of AI Law Has Arrived
On August 1, 2024, the EU AI Act entered into force — the world's first comprehensive law specifically governing artificial intelligence. Its prohibited AI practices took effect in February 2025, with high-risk AI system requirements phasing in through 2026. Within months of the Act's entry into force, major AI companies were already restructuring products, hiring compliance officers specifically for EU AI Act work, and commissioning technical documentation that the Act requires. Legal departments that had spent years operating without specific AI law to interpret suddenly had several hundred pages of regulation to apply to systems they often understood imperfectly.
The EU AI Act did not arrive in isolation. By 2025, organizations operating AI systems faced a regulatory landscape that included data protection law applied to AI (the GDPR's intersection with automated decision-making had been generating enforcement actions since 2018), sector-specific regulations from banking, healthcare, and employment regulators, consumer protection enforcement from agencies with broad authority, an expanding wave of state-level AI legislation in the United States, and comprehensive regulatory frameworks in China, Canada, Brazil, and elsewhere. The era in which organizations could operate AI systems with essentially no compliance obligations was over.
This represents a fundamental shift in the operating environment for AI. For years, AI governance was primarily a matter of organizational ethics commitments and voluntary framework adherence — important, but optional. The regulatory landscape of 2025 and beyond makes AI compliance a legal obligation with real enforcement teeth. Organizations that fail to understand and navigate this landscape face penalties that, at the EU AI Act's highest tier, can reach 7% of global annual revenue. They also face regulatory scrutiny from multiple agencies simultaneously, civil litigation from affected individuals, and reputational costs that can dwarf the direct financial penalties.
The compliance challenge is compounded by the complexity of the regulatory landscape itself. AI regulation in 2025 is not a single coherent body of law. It is a patchwork of overlapping frameworks — data protection law, sector-specific guidance, comprehensive AI legislation, consumer protection law, anti-discrimination law — that apply simultaneously and sometimes inconsistently. A financial services firm using AI for credit underwriting in the United States must simultaneously navigate the Equal Credit Opportunity Act's fair lending requirements, the Fair Credit Reporting Act's accuracy and disclosure obligations, CFPB guidance on algorithmic lending, the FTC's authority over unfair or deceptive practices, and — if it also operates in Europe — the GDPR, the EU AI Act, and potentially the EU's Mortgage Credit Directive. Getting this right requires both legal expertise and deep understanding of how the AI systems actually work.
This chapter provides a comprehensive map of the AI compliance landscape as it exists in the mid-2020s. It focuses on the frameworks that matter most for a wide range of organizations — the GDPR, the EU AI Act, and key US regulatory frameworks — while also surveying the major international developments that business professionals need to track.
Learning Objectives
By the end of this chapter, you will be able to:
- Describe the major categories of AI regulation — data protection, sector-specific, comprehensive AI law, consumer protection, and anti-discrimination — and explain how they overlap and interact in practice.
- Identify the five key intersections between GDPR and AI, including Article 22's requirements for automated decision-making, and assess the compliance implications for specific AI deployments.
- Apply the EU AI Act's four-tier risk classification framework to specific AI systems, determining what category a system falls into and what compliance obligations follow.
- Outline the complete compliance requirements for a high-risk AI system under the EU AI Act, including conformity assessment, technical documentation, human oversight, registration, and post-market monitoring.
- Describe the US federal AI regulatory landscape, including the key agencies with AI-related authority, the role of the NIST AI Risk Management Framework, and the major sector-specific frameworks in banking, healthcare, and employment.
- Identify the most significant state-level AI regulations in the United States, including Illinois BIPA, New York City Local Law 144, and the wave of state AI legislation in 2024–2025, and assess their compliance implications.
- Compare AI governance approaches in the UK, Canada, China, Brazil, and India and identify the key compliance obligations they impose on internationally operating organizations.
- Design the core elements of a multi-jurisdictional AI compliance program, including an AI inventory process, risk classification methodology, documentation framework, vendor management approach, and audit trail system.
- Define the roles of the Data Protection Officer and the emerging AI Compliance Officer in organizational governance structures and explain how they should interact.
- Assess an organization's compliance gaps against specific regulatory requirements and prioritize remediation steps.
Section 1: The Regulatory Landscape Overview
The most important thing to understand about AI regulation as a compliance professional is that it is not a single body of law. It is a layered, overlapping, and sometimes inconsistent patchwork of legal frameworks that apply simultaneously, in ways that differ depending on the specific AI application, the industry sector, the geographic market, and the populations affected. Navigating it requires holding multiple regulatory frameworks in mind simultaneously — and recognizing that satisfying one framework's requirements does not necessarily satisfy another's.
The major categories of AI regulation can be organized along two dimensions: scope (sector-specific versus cross-sectoral) and approach (data-focused versus application-focused). This produces four types of regulatory regime, each of which contributes important requirements to the compliance landscape.
Data protection law governs how personal data is collected, processed, and used, with AI applications creating numerous data protection obligations. The GDPR in Europe, the UK GDPR, Canada's PIPEDA and proposed Bill C-27, Brazil's LGPD, and the United States' patchwork of federal and state privacy laws all impose requirements on AI systems that process personal data. Data protection law was not designed for AI but contains provisions — automated decision-making rights, purpose limitation requirements, data minimization — that apply directly to many AI applications. Data protection regulators were among the first regulatory bodies to take enforcement action against AI systems, and they remain active.
Comprehensive AI law — most importantly, the EU AI Act — takes a different approach. Rather than focusing on the data inputs to AI systems, comprehensive AI law focuses on the systems themselves: their design, their risk profiles, their documentation, and their governance. The EU AI Act is currently the only comprehensive AI law that applies to a major market, though Canada's proposed Artificial Intelligence and Data Act (Bill C-27) would be the second if adopted. Comprehensive AI law typically imposes requirements that vary by the risk level of the AI application, with the most demanding requirements applying to the highest-risk uses.
Sector-specific regulation applies AI-relevant requirements within particular industries, enforced by the sector's specialized regulatory body. In the United States, banking regulators (the OCC, Federal Reserve, FDIC, and CFPB), healthcare regulators (the FDA), employment regulators (the EEOC), housing regulators (HUD), and financial markets regulators (the SEC) all have existing authority that extends to AI applications in their domains. These sector-specific frameworks often have greater enforcement capacity and more specific requirements than horizontal AI governance frameworks, making them practically very important for organizations in regulated industries.
Consumer protection and anti-discrimination law provides a final layer that often catches AI applications that other frameworks miss. In the United States, the FTC has authority over unfair or deceptive practices and has used this authority to pursue AI companies that made misleading claims about their systems. Anti-discrimination law — Title VII of the Civil Rights Act, the Fair Housing Act, the Equal Credit Opportunity Act — applies to AI systems that produce discriminatory outcomes in employment, housing, and credit regardless of whether any specific AI regulation addresses those applications.
The practical consequence of this layered structure is that AI compliance requires a systematic approach that identifies all applicable legal frameworks for a given application, maps the requirements of each, identifies conflicts and gaps, and develops a unified compliance program that satisfies all applicable requirements. This is not a simple task, and it gets more complex as the number of jurisdictions in which an organization operates increases.
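The framework-identification step of this systematic approach can be sketched as a simple machine-readable inventory. The use-case keys and framework entries below are illustrative examples only, not legal advice or an exhaustive mapping:

```python
# Illustrative sketch of a framework-applicability inventory.
# Entries are examples only, not legal advice.
FRAMEWORKS_BY_USE_CASE = {
    "credit_underwriting_us": ["ECOA", "FCRA", "FTC Act Section 5"],
    "credit_underwriting_eu": ["GDPR", "EU AI Act (Annex III: essential services)"],
    "resume_screening_us": ["Title VII", "NYC Local Law 144"],
    "resume_screening_eu": ["GDPR Article 22", "EU AI Act (Annex III: employment)"],
}

def applicable_frameworks(use_case, jurisdictions):
    """Collect candidate frameworks for one use case across jurisdictions."""
    frameworks = set()
    for jurisdiction in jurisdictions:
        frameworks.update(FRAMEWORKS_BY_USE_CASE.get(f"{use_case}_{jurisdiction}", []))
    return sorted(frameworks)

# A firm underwriting credit in both the US and the EU must reconcile
# both regimes at once:
print(applicable_frameworks("credit_underwriting", ["us", "eu"]))
```

The point of such an inventory is completeness rather than legal precision: every AI application gets checked against every jurisdiction before the requirements of each framework are mapped in detail.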
Section 2: GDPR and AI
The General Data Protection Regulation, which became applicable in May 2018, was not designed for AI. Its drafters were primarily thinking about large-scale personal data processing by social media platforms and other internet companies, not the specific challenges of machine learning systems. But the GDPR's broad and technologically neutral principles have been applied to AI by data protection authorities (DPAs) across the EU in ways that create substantial compliance obligations for AI developers and deployers. Understanding the GDPR-AI intersection is essential for any organization operating AI in European markets.
First intersection: Data minimization and purpose limitation. The GDPR's principle of data minimization requires that personal data be "adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed" (Article 5(1)(c)). Its purpose limitation principle requires that personal data be "collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes" (Article 5(1)(b)). These principles create significant challenges for AI systems, which often require large datasets and may use data in ways that were not specified at the time of collection. Training a machine learning model on customer transaction data originally collected for billing purposes, then using it to generate behavioral predictions, raises purpose limitation questions that many organizations have not adequately addressed. The Hamburg Data Protection Commissioner's investigation of the Hamburg Public Transport Authority for similar practices illustrates that DPAs are actively pursuing these questions.
Second intersection: Automated decision-making — Article 22. Article 22 of the GDPR gives data subjects the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects concerning them. This provision applies directly to many AI applications — credit scoring, hiring screening, insurance pricing, and similar systems that decide outcomes for individuals without meaningful human involvement. Where Article 22 applies, organizations must either introduce meaningful human review — so that the decision is no longer based solely on automated processing — or rely on one of Article 22(2)'s exceptions: the processing is necessary for a contract with the data subject, authorized by Union or Member State law, or based on the data subject's explicit consent. They must also provide the data subject with "meaningful information about the logic involved" in automated decisions and the right to contest those decisions and obtain human intervention.
Enforcement of Article 22 has been growing. The Berlin Commissioner for Data Protection fined a company for using an automated credit scoring system that violated Article 22. The Spanish data protection authority (AEPD) has issued guidance extensively discussing Article 22's application to AI. The European Data Protection Board has published guidelines on automated decision-making and profiling that provide the most authoritative interpretation of Article 22's requirements. For organizations deploying AI systems that make or significantly influence decisions about individuals, Article 22 compliance is not optional and requires careful technical and organizational design.
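The design choice Article 22 forces on a deployer can be summarized in a few lines of Python. The exception labels and return values are assumptions for illustration; they do not capture the legal nuance of when an exception validly applies:

```python
# Illustrative Article 22 decision-routing sketch; labels are assumptions,
# not legal terms of art.
ARTICLE_22_EXCEPTIONS = {"contract_necessity", "explicit_consent", "authorized_by_law"}

def compliant_decision_path(significant_effect, exception=None):
    """For decisions with legal or similarly significant effects, either a
    valid Article 22(2) exception applies (with safeguards such as meaningful
    information about the logic and a right to contest), or the decision
    must include meaningful human review."""
    if not significant_effect:
        return "fully_automated_permitted"
    if exception in ARTICLE_22_EXCEPTIONS:
        return "automated_with_safeguards"
    return "meaningful_human_review"
```

In practice the hard questions are factual, not logical: whether an effect is "similarly significant" and whether the human involvement is genuinely meaningful rather than rubber-stamping.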
Third intersection: Data subject rights. The GDPR's data subject rights — access, rectification, erasure, portability, restriction, and objection — apply to personal data processed by AI systems just as they apply to any other personal data processing. Practically, this creates significant technical challenges. When an individual exercises their right of access, organizations must be able to identify and provide all personal data held about them — including data incorporated into or derived from AI model training. When an individual exercises their right of erasure, organizations must consider whether the individual's data needs to be removed from AI model training datasets. The difficulty of "unlearning" data from machine learning models is a documented technical challenge that regulators are increasingly focused on. The Italian Garante's intervention in ChatGPT — which resulted in a temporary suspension of ChatGPT's availability in Italy in early 2023, pending OpenAI's demonstration of compliance with data subject rights requirements — illustrated both the practical stakes of this issue and regulators' willingness to use available enforcement tools.
Fourth intersection: International data transfers. The GDPR's restrictions on transferring personal data outside the EU (Articles 44–49) have significant implications for AI applications that involve processing EU personal data in third countries. Organizations that train AI models in the United States using EU personal data, or that use US-based cloud services to process EU personal data in AI applications, need appropriate transfer mechanisms — Standard Contractual Clauses (SCCs), Binding Corporate Rules (BCRs), or adequacy decisions. The 2023 EU-US Data Privacy Framework provides an adequacy-based transfer mechanism for US companies that self-certify, but its continued validity has been subject to legal challenge, creating ongoing uncertainty.
Fifth intersection: DPA enforcement landscape. Data protection authorities across EU member states have active AI enforcement programs. The Luxembourg CNPD's investigation of Amazon's AI systems, the French CNIL's enforcement actions against AI companies, the Italian Garante's aggressive stance on generative AI, and the Irish DPC's investigations of major US tech platforms all demonstrate that GDPR enforcement against AI is real and escalating. The geographic distribution of DPA enforcement matters: because many major US tech companies have their EU headquarters in Ireland (making the Irish DPC their lead supervisory authority under GDPR's one-stop-shop mechanism), the Irish DPC's enforcement decisions have outsized significance for the AI compliance landscape — and the DPC has faced persistent criticism for relatively slow enforcement.
Section 3: The EU AI Act — Framework and Risk Tiers
The EU AI Act represents the most ambitious attempt yet to govern artificial intelligence through comprehensive legislation. Adopted by the European Parliament in March 2024 and entering into force in August 2024, the Act applies a risk-based regulatory framework that calibrates requirements to the potential harm of the AI system. Understanding this framework — who it applies to, what it requires, and how the risk tiers work — is the foundation of EU AI Act compliance.
Who the Act applies to. The EU AI Act applies to providers (those who develop or have an AI system developed and place it on the EU market), deployers (those who use AI systems in the course of their professional activities), importers, and distributors. The Act has explicit extraterritorial scope: it applies to providers established in third countries whose AI systems are placed on the EU market or whose outputs are used in the EU. This extraterritorial reach means that US, Chinese, and other non-EU AI developers whose products are used in the EU face EU AI Act compliance obligations.
The prohibited tier. At the apex of the risk hierarchy are AI practices that the EU AI Act prohibits outright — practices whose potential for harm the EU legislature judged so severe that no legitimate use case justifies them. The prohibited practices include:
Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions for targeted searches for specific missing persons, prevention of imminent terrorist threats, and identification of suspects in serious crimes subject to judicial authorization. This prohibition reflects EU fundamental rights commitments regarding privacy and dignity in public spaces.
AI systems that deploy subliminal techniques to manipulate individuals' behavior in ways that they are not aware of and that may harm them. This prohibition targets AI designed to bypass conscious rational agency — systems that exploit psychological vulnerabilities or subconscious processing to influence decisions.
AI systems that exploit vulnerabilities of specific groups — children, elderly persons, people with disabilities — to distort their behavior in ways that harm them. This provision extends the manipulative AI prohibition to groups with heightened vulnerability.
Social scoring systems — systems that evaluate or classify individuals based on their social behavior or personal characteristics and produce detrimental treatment in unrelated contexts or treatment disproportionate to the behavior. The Commission's original proposal limited this prohibition to public authorities, but the final Act extends it to private actors as well; it specifically targets the kind of comprehensive social credit systems associated with China.
AI systems used to assess the risk that individuals will commit criminal offenses based solely on profiling or personality traits, in the absence of objective and verifiable facts. This prohibits predictive policing systems that assess future criminality based on demographic or behavioral profiles rather than specific evidence.
These prohibitions took effect in February 2025. Organizations still engaging in any prohibited AI practice after that date face the Act's highest penalty tier.
The high-risk tier. High-risk AI systems are those that pose significant risks to health, safety, or fundamental rights. The Act identifies two subcategories of high-risk AI:
High-risk AI systems covered by existing product safety legislation (listed in Annex I), including AI components in machinery, medical devices, aircraft, vehicles, and similar safety-critical products. These systems are governed through the conformity assessment processes of the relevant product safety directive, with additional AI-specific requirements layered on.
High-risk AI systems covered specifically by the AI Act (listed in Annex III), including systems used in: biometric identification and categorization of natural persons; management and operation of critical infrastructure; education and vocational training (including exam scoring and student assessment); employment, workers management, and access to self-employment (including CV screening, interview analysis, task allocation, and performance monitoring); access to essential private services and essential public services (including credit scoring, life insurance risk assessment, and social benefit eligibility); law enforcement; migration, asylum, and border control management; and administration of justice and democratic processes.
The Annex III categories require careful analysis because they are defined broadly and their application to specific AI systems requires judgment. A resume screening tool that automatically filters which candidates reach human review is clearly high-risk under the employment category. An AI system that merely recommends outcomes on credit applications, leaving the decision to a human reviewer, occupies a more ambiguous position that may require legal advice to resolve.
The limited-risk tier. Limited-risk AI systems are those that present risks of manipulation or deception but not the more severe risks that trigger high-risk classification. The Act's primary limited-risk obligation is transparency: AI systems that interact with humans (chatbots, virtual assistants) must disclose that they are AI systems; AI systems that generate synthetic content — images, audio, video, text — must label that content as AI-generated; and AI systems that perform emotion recognition or biometric categorization must inform the individuals exposed to them.
These transparency requirements become applicable in August 2026, alongside most of the Act's other obligations, and matter for any organization deploying customer-facing AI systems. The simple requirement that chatbots identify themselves as AI, for example, has implications for many customer service applications.
The minimal-risk tier. Most AI applications — AI-powered spam filters, AI-assisted word processors, AI-powered game NPCs — present minimal risk and are subject to no specific EU AI Act requirements. Providers and deployers of minimal-risk AI systems are nevertheless encouraged to adopt voluntary codes of conduct.
General-purpose AI models. The EU AI Act includes specific provisions for general-purpose AI models (GPAI models) — large AI models with broad capabilities that can be used for a wide range of downstream applications. This tier was added during the Act's development specifically to address systems like GPT-4, Gemini, and Claude. GPAI model providers must maintain technical documentation, publish information about training data, comply with EU copyright law regarding training data, and meet additional requirements. GPAI models designated as posing "systemic risk" — currently defined as models trained on more than 10^25 floating-point operations (roughly corresponding to the scale of the largest frontier models) — face additional requirements including adversarial testing, reporting of serious incidents to the AI Office, and cybersecurity protections.
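The tiering logic described in this section can be sketched as a triage function. The practice and area labels below are simplified stand-ins for the Act's actual definitions; real classification requires legal analysis of Articles 5 and 6 and Annexes I and III:

```python
# Simplified triage sketch of the EU AI Act's risk tiers; labels are
# illustrative stand-ins, not the Act's legal definitions.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation",
                        "realtime_biometric_id_law_enforcement"}
ANNEX_III_AREAS = {"employment", "credit_scoring", "education",
                   "critical_infrastructure", "law_enforcement"}
SYSTEMIC_RISK_FLOPS = 1e25  # training-compute presumption for GPAI systemic risk

def classify_system(practice, area=None, interacts_with_humans=False):
    if practice in PROHIBITED_PRACTICES:
        return "prohibited"
    if area in ANNEX_III_AREAS:
        return "high-risk"
    if interacts_with_humans:
        return "limited-risk"  # transparency obligations apply
    return "minimal-risk"

def gpai_systemic_risk(training_flops):
    """A GPAI model above the compute threshold is presumed systemic-risk."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

print(classify_system("resume_ranking", area="employment"))
print(gpai_systemic_risk(3e25))
```

Note that the real analysis is rarely this mechanical: the high-risk determination in particular turns on the system's intended purpose and the Annex III definitions, not a keyword match.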
Section 4: EU AI Act — High-Risk AI Requirements
For organizations deploying high-risk AI systems in the EU, the EU AI Act imposes a comprehensive set of requirements that span system design, documentation, governance, and ongoing monitoring. Understanding these requirements in their full scope is essential for compliance planning.
Risk management system. Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle. The system must identify and analyze known and foreseeable risks, estimate and evaluate emerging risks from the system's use, and adopt risk mitigation and control measures. This is an ongoing obligation, not a one-time assessment — the risk management system must be continuously updated as the system is developed, deployed, and operated.
Data and data governance. Training, validation, and testing datasets must meet quality criteria appropriate to the system's intended purpose. They must be relevant, representative, free of errors, and complete to the best of practitioners' ability. The Act requires explicit attention to potential biases that could affect fundamental rights. Data governance practices must be documented, including the origin, scope, and collection methods for training data.
Technical documentation. Providers must prepare comprehensive technical documentation before the high-risk AI system is placed on the market. This documentation must allow competent authorities to assess the system's compliance with requirements. The Act specifies detailed content requirements, including a general description of the system, its intended purpose, the technical specifications of the system (hardware, software, data), information about the development process, performance metrics, and risk management measures. This documentation must be kept up to date throughout the system's lifecycle.
Record-keeping and logging. High-risk AI systems must be capable of automatically generating logs throughout their operation, to enable post-hoc assessment of whether the system has functioned in accordance with its intended purpose. The level of logging detail must reflect the system's intended purpose and the risks it presents. Providers and deployers must retain these logs for a period appropriate to the system's intended purpose — at least six months under the Act, and longer where other Union or national law requires it.
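A minimal standard-library sketch of what automatic decision logging might look like for a deployer. The record fields, the hashing of inputs, and the append-only file are design assumptions, not a schema the Act prescribes:

```python
import json
import time
import uuid

def log_decision(system_id, inputs_digest, output, human_reviewed, logfile=None):
    """Append one structured record of an AI system decision to an
    append-only log. Field names are illustrative; the Act requires
    logging capability, not this particular schema."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "inputs_digest": inputs_digest,  # hash of inputs, not raw personal data
        "output": output,
        "human_reviewed": human_reviewed,
    }
    path = logfile or f"{system_id}_decisions.log"
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Logging a digest of the inputs rather than the raw data is one way to reconcile the Act's logging duty with the GDPR's data minimization principle discussed in Section 2.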
Transparency and provision of information. Providers must ensure that high-risk AI systems are accompanied by instructions for use that enable deployers to fulfill their own obligations under the Act. These instructions must cover the system's intended purpose, accuracy levels, technical measures enabling human oversight, and relevant data characteristics. Deployers must be transparent with affected individuals when they are subject to decisions from high-risk AI systems.
Human oversight. High-risk AI systems must be designed and developed to be effectively overseen by humans during operation. This means building systems that: enable human operators to understand the system's capabilities and limitations; enable monitoring of the system's output; allow operators to intervene and override the system; and enable the system to be stopped. Human oversight cannot be a formality — it must be genuinely possible for a human to understand and override the AI system's outputs.
Accuracy, robustness, and cybersecurity. High-risk AI systems must achieve appropriate levels of accuracy for their intended purpose, documented in technical documentation. They must be robust against errors, faults, and inconsistencies, and against attempts by third parties to adversely affect the system's operation. Cybersecurity measures must be proportionate to the system's risks.
Conformity assessment. Before placing a high-risk AI system on the market, providers must conduct a conformity assessment demonstrating that the system meets all applicable requirements. For most high-risk AI systems, this is a self-assessment — but for systems covered by Annex III in sensitive categories (biometric identification, critical infrastructure, certain law enforcement applications), third-party assessment by a "notified body" (an accredited conformity assessment body) is required. Conformity assessment must be documented and the declaration of conformity must be signed before the system is placed on the market or put into service.
Registration. High-risk AI systems must be registered in an EU-wide database before being placed on the market. This database, maintained by the European Commission's AI Office, is publicly accessible, making registration a transparency commitment as well as a compliance requirement.
Post-market monitoring. Providers must establish post-market monitoring systems to actively collect and review data on high-risk AI systems' performance in the field. Any serious incidents — failures that result or may result in death, serious injury, damage to critical infrastructure, infringement of fundamental rights, or other serious harms — must be reported to the relevant national authority within defined timeframes: no later than 15 days after the provider becomes aware of the incident in most cases, shortened to 10 days for incidents involving death and 2 days for widespread infringements or serious disruption of critical infrastructure.
Fundamental rights impact assessment. Deployers of certain high-risk AI systems must conduct a fundamental rights impact assessment before deployment. This requirement, inspired by data protection impact assessments under GDPR, requires an assessment of the potential impact of the AI system on fundamental rights, documentation of that assessment, and consultation with representative groups and experts where appropriate.
Section 5: EU AI Act — Governance and Enforcement
The EU AI Act establishes a multi-level governance architecture that distributes enforcement responsibility across European and national institutions.
The AI Office. At the European Commission level, the newly established AI Office is the central governance body for the EU AI Act. It has primary supervisory authority over general-purpose AI models (including those with systemic risk designations), coordinates national competent authorities, develops implementation guidance, and maintains the EU AI registry. The AI Office has direct enforcement authority over GPAI model providers and investigative authority over other AI actors when cross-border or systemic issues arise.
National Competent Authorities. Each EU member state must designate one or more national competent authorities responsible for implementing and enforcing the AI Act at the national level. These authorities are responsible for market surveillance — monitoring AI systems placed on their national market for compliance — and for receiving and investigating complaints about non-compliant AI systems. They have significant investigative powers, including the ability to require access to AI systems, request documentation, conduct audits, and take enforcement action.
Notified Bodies. For high-risk AI systems that require third-party conformity assessment, accredited "notified bodies" conduct the assessment process. These are independent conformity assessment bodies that have been designated by national competent authorities to perform third-party assessment functions. The availability and quality of notified body services for AI systems is a significant practical constraint on AI Act implementation.
Market Surveillance. The EU AI Act applies the EU's existing market surveillance framework to AI systems. National market surveillance authorities monitor AI systems placed on the EU market for conformity with the Act's requirements. This includes both surveillance of systems already on the market and enforcement action against non-compliant systems, including withdrawal from the market.
Penalties. The EU AI Act's penalty framework is severe:
- Up to €35 million or 7% of worldwide annual turnover (whichever is higher) for violations of the prohibited AI practices
- Up to €15 million or 3% of worldwide annual turnover for violations of most other obligations
- Up to €7.5 million or 1% of worldwide annual turnover for providing incorrect, incomplete, or misleading information to authorities
For AI companies with large global revenues, the 7% cap for the most serious violations represents an enormous potential fine. Even the 1% cap for supplying incorrect information is a meaningful penalty for a large organization.
The Timeline. Compliance with the EU AI Act is phased:
- August 2024: Act entered into force
- February 2025: Prohibitions took effect
- August 2025: GPAI model requirements took effect
- August 2026: High-risk AI system requirements become applicable for most systems
- 2027: Certain product safety-related high-risk AI provisions phase in
Organizations have had time to prepare for high-risk AI requirements, but preparation requires significant investment and time to implement properly.
Section 6: US Federal AI Regulation
The United States federal approach to AI regulation in 2025 differs fundamentally from the EU's comprehensive framework. Rather than enacting a single AI statute, the US relies on existing agency authority applied to AI applications in specific sectors, executive actions, and voluntary frameworks. This creates a more fragmented but in some ways more agile regulatory landscape.
The Biden Executive Order on AI (October 2023). The most significant federal AI governance action was Executive Order 14110, issued by President Biden in October 2023. The EO required developers of the most powerful AI systems (those trained using computational power above defined thresholds) to share safety test results with the federal government before public release. It directed federal agencies to develop AI governance guidance across their domains and established several new federal AI governance functions, including a federal AI safety institute (housed at NIST). The order was revoked by President Trump in January 2025, with a replacement executive order directing development of an AI Action Plan focused on AI competitiveness. This regulatory churn underscores why practitioners need to understand both the executive framework and the more durable agency-specific regulatory activity.
FTC Authority. The Federal Trade Commission has broad authority under Section 5 of the FTC Act to prevent "unfair or deceptive acts or practices in or affecting commerce." The FTC has applied this authority aggressively to AI, pursuing companies that made misleading claims about AI capabilities, used AI to manipulate consumers, or employed AI in deceptive advertising practices. The FTC's 2023 policy paper on AI made clear the agency's view that its authority extends broadly to AI applications and that AI does not exempt companies from consumer protection requirements. The FTC's "Operation AI Comply" enforcement actions in 2024 targeted companies making unfounded AI performance claims.
EEOC Authority. The Equal Employment Opportunity Commission has authority over employment discrimination under Title VII, the Age Discrimination in Employment Act, the Americans with Disabilities Act, and similar statutes. The EEOC has published technical guidance on AI and employment that makes clear that AI systems used in hiring, promotion, and termination decisions are subject to anti-discrimination law, and that employers cannot outsource their anti-discrimination obligations to AI vendors. The EEOC's 2023 guidance document on AI and the ADA specifically addressed AI hiring tools and their potential to screen out qualified people with disabilities.
CFPB Authority. The Consumer Financial Protection Bureau has authority over consumer financial products and services, with particular relevance to AI in credit underwriting, debt collection, and financial services. The CFPB has issued guidance making clear that creditors must be able to explain AI-based credit decisions to applicants and that "model complexity" does not excuse compliance with adverse action notice requirements under the Equal Credit Opportunity Act.
NIST AI Risk Management Framework. While not legally binding, the National Institute of Standards and Technology's AI Risk Management Framework (AI RMF), published in January 2023, has become the de facto voluntary governance standard for AI in the United States and has significant practical importance. The AI RMF organizes AI risk management around four core functions: Govern (establishing organizational policies and accountability), Map (identifying and categorizing AI risks), Measure (assessing and monitoring risks), and Manage (mitigating and addressing risks). Many federal contracts, procurement standards, and industry frameworks are beginning to require AI RMF alignment, giving it quasi-mandatory status in significant contexts.
Section 7: US Sector-Specific Regulation
AI regulation in US regulated industries is primarily implemented through the application of existing sector-specific regulatory frameworks to AI systems, supplemented by agency guidance.
Banking and Financial Services. The banking regulators — the Office of the Comptroller of the Currency, the Federal Reserve, the Federal Deposit Insurance Corporation, and the National Credit Union Administration — issued a joint statement in 2021 confirming that AI models in banking are subject to the existing model risk management guidance established in SR Letter 11-7 (issued by the Federal Reserve and OCC). SR 11-7 requires banks to maintain rigorous validation and oversight processes for quantitative models used in decision-making — requirements that apply directly to AI models used in credit underwriting, fraud detection, market risk management, and similar applications.
The Equal Credit Opportunity Act and its implementing Regulation B require that creditors provide specific reasons when they take adverse action on credit applications, and that these reasons be the actual reasons for the adverse action — not a generic "model said no." For AI credit underwriting models, this creates an explainability requirement: creditors must be able to determine and communicate the specific factors that drove an adverse credit decision, even if the underlying model is complex. This requirement has driven significant investment in explainable AI for credit scoring.
Healthcare. The Food and Drug Administration regulates AI/ML-based software that meets the definition of a medical device — software that is intended to diagnose, treat, prevent, or mitigate a disease or condition. The FDA has developed a framework for AI/ML-based Software as a Medical Device that addresses the unique challenges of AI systems that may continuously learn and update, and has issued a series of guidance documents on AI-based medical devices. FDA premarket clearance or approval processes apply to many healthcare AI applications, with the specific pathway depending on risk level.
Employment. The EEOC's authority over employment discrimination, combined with the doctrine of disparate impact liability (which holds employers responsible for employment practices that produce discriminatory outcomes even without discriminatory intent), creates significant liability exposure for organizations using AI in hiring, promotion, and termination. The four-fifths (80%) rule for adverse impact assessment applies to AI-based selection procedures. Employers who cannot demonstrate the business necessity of AI systems that produce disparate impacts face liability under Title VII.
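The four-fifths rule is a simple ratio test: divide the selection rate of the protected group by the selection rate of the highest-selected group, and treat a result below 0.8 as evidence of adverse impact. A minimal sketch (the function names and example numbers are illustrative):

```python
# Sketch: the four-fifths (80%) rule applied to an AI-based selection
# procedure. Function names and the example figures are illustrative.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Protected group's selection rate relative to the highest-rate group's."""
    return protected_rate / reference_rate

# Example: 30 of 100 reference-group applicants advance (30%),
# while 18 of 100 protected-group applicants advance (18%).
ratio = adverse_impact_ratio(selection_rate(18, 100), selection_rate(30, 100))
print(round(ratio, 2))  # 0.6
print(ratio < 0.8)      # True -> evidence of adverse impact
```

A ratio of 0.6 falls well below the 0.8 threshold, so the employer in this example would need to demonstrate the business necessity of the tool or face Title VII exposure.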
Section 8: State-Level US Regulation
In the absence of comprehensive federal AI legislation, US states have become active AI regulators, creating a complex and rapidly evolving landscape of state-level requirements.
Illinois BIPA. The Illinois Biometric Information Privacy Act (2008) was far ahead of its time in regulating the collection and use of biometric data, including face geometry templates generated by facial recognition systems. BIPA requires informed written consent before collecting biometric identifiers, prohibits the sale or transfer of biometric data, and requires protective storage standards. Its private right of action — allowing individuals to sue directly without a government enforcement intermediary — has produced massive litigation, including a $650 million settlement with Facebook (now Meta) and a $92 million settlement with TikTok. Any AI system that collects biometric data in Illinois must comply with BIPA or face substantial litigation risk.
California CPRA/CCPA. California's comprehensive privacy law — the California Consumer Privacy Act as amended by the California Privacy Rights Act — applies to AI applications that process personal information of California residents. The law gives consumers rights to know what data is collected about them, to delete that data, to opt out of its sale, and to correct inaccurate data. The California Privacy Protection Agency has proposed regulations on automated decisionmaking that would significantly extend consumer rights in this area, potentially requiring opt-out rights from AI-based profiling and disclosure obligations for automated decision-making.
New York City Local Law 144. New York City's Local Law 144, which took effect in July 2023, imposes specific requirements on employers using automated employment decision tools (AEDTs) to make employment decisions in New York City. Covered employers must conduct annual bias audits of their AEDTs by an independent auditor, publish the results of those audits, and notify candidates and employees that AEDTs are being used in decision-making. This law is one of the few AI-specific regulations in the United States with an independent auditing requirement, making it significant both for compliance purposes and as a model for other jurisdictions.
Colorado SB 21-169. Colorado's law on insurance use of external consumer data and information sources (2021) prohibits insurers from using any AI system that unfairly discriminates based on protected characteristics. It requires insurers to be able to demonstrate that their AI systems do not produce unfair discrimination and imposes significant documentation and testing obligations. Colorado's subsequent Consumer Protections for Artificial Intelligence Act (2024) extended similar protections to other high-risk AI decisions.
The 2024–2025 State AI Legislation Wave. By mid-2025, more than 40 US states had introduced AI-related legislation in the 2024–2025 legislative cycle. Many of these bills addressed AI in employment, healthcare, education, and government use cases. The emerging state AI legislation landscape creates a genuine multi-state compliance challenge, particularly for organizations operating nationally. The preemption debate — whether federal AI legislation should preempt state law — is ongoing and unresolved, meaning that the current multi-state compliance burden may persist indefinitely.
Section 9: Other Major Jurisdictions
United Kingdom. Following Brexit, the UK adopted a "pro-innovation" approach to AI governance that deliberately contrasts with the EU's comprehensive regulatory approach. Rather than creating new AI-specific legislation, the UK's AI White Paper (March 2023) directed existing sector regulators to apply their existing powers to AI within their domains, guided by five cross-sectoral principles (safety, security, robustness; appropriate transparency and explainability; fairness; accountability; contestability and redress). The UK AI Safety Institute, established in 2023, focuses on AI safety research and evaluation rather than regulatory enforcement. The UK approach is explicitly designed to attract AI development by avoiding the compliance burden that the EU AI Act imposes, creating a meaningful divergence between UK and EU AI governance requirements.
Canada. Canada's proposed Artificial Intelligence and Data Act (AIDA), included in Bill C-27, would create a comprehensive AI regulatory framework similar in structure to the EU AI Act, with a risk-based approach and requirements for high-impact AI systems including impact assessments, transparency, human oversight, and bias mitigation. Bill C-27 has faced significant legislative delays and its ultimate form remains uncertain, but it signals Canada's intention to adopt comprehensive AI regulation rather than relying exclusively on existing frameworks.
China. China has taken a distinctive approach to AI governance, producing a series of specific regulations addressing particular AI application types rather than a single comprehensive framework. The Provisions on the Management of Algorithmic Recommendations (2022) require transparency about algorithmic recommendation systems and prohibit certain manipulative practices. The Administrative Provisions on Deep Synthesis Internet Information Services (2022) regulate synthetic media. The Interim Measures for the Management of Generative Artificial Intelligence Services (2023) impose requirements on generative AI providers serving Chinese users, including content filtering requirements, service registration, and security assessments. These regulations serve both consumer protection and political control objectives, creating a compliance environment that is both technically demanding and politically sensitive.
Brazil. Brazil's Lei Geral de Proteção de Dados (LGPD), modeled on GDPR, has been in effect since 2020 and applies to AI applications processing personal data of Brazilian residents. Brazil's proposed AI bill would create a comprehensive AI regulatory framework with risk-based requirements similar to the EU AI Act. The bill had not been enacted as of the time of this chapter's writing but was in advanced legislative stages, and Brazil's approach to AI governance has significant implications for the largest economy in Latin America.
India. India's Digital Personal Data Protection Act (2023) creates a new data protection framework that applies to AI applications processing personal data of Indian residents. India's approach to AI governance more broadly has been notably cautious about regulation, consistent with the government's priority on making India an AI development hub. The Indian AI governance framework as of 2025 relies primarily on voluntary principles and sector-specific guidance rather than comprehensive legislation.
Section 10: Building a Compliance Program
Organizations facing the regulatory landscape described in this chapter need systematic compliance programs that can identify their AI governance obligations, assess their current state against those obligations, implement necessary controls, and maintain compliance as systems and regulations evolve. This section describes the core elements of such a program.
AI Inventory. The foundation of AI compliance is knowing what AI systems you have. Many organizations are surprised to discover, when they conduct a systematic inventory, how many AI systems are in use across different business units — some developed internally, many purchased from vendors, some embedded in broader software platforms. A comprehensive AI inventory should document for each AI system: its name and description; its intended purpose and business function; the data it processes (and whether that includes personal data, sensitive personal data, or protected-class data); who developed it (internal vs. vendor); where it is deployed geographically; what decisions or recommendations it informs; and what compliance frameworks may apply to it.
Risk Classification. Once inventoried, AI systems must be classified for regulatory risk. This involves applying the relevant regulatory frameworks — primarily the EU AI Act for EU-market systems, but also sector-specific and data protection frameworks — to determine what compliance tier applies and what specific requirements follow. Risk classification should be documented, periodically reviewed (as systems change), and updated when relevant regulations change.
Documentation Framework. High-risk AI systems under the EU AI Act require comprehensive technical documentation. More broadly, good AI governance requires that all material AI systems be documented in ways that enable oversight, auditing, and accountability. Documentation should cover: the system's intended purpose and use cases; the data used in training, validation, and testing; performance metrics and their limitations; known risks and limitations; the human oversight processes in place; and the regulatory compliance assessment. This documentation should be version-controlled and updated throughout the system's lifecycle.
Vendor Management. Organizations that deploy AI systems developed by third-party vendors are not absolved of compliance responsibility. The EU AI Act, in particular, creates obligations for deployers as well as providers, and deployers cannot simply transfer those obligations to vendors contractually. Effective AI vendor management requires: contractual provisions requiring vendors to provide documentation and support compliance; due diligence processes for AI vendors that assess their systems' compliance status; ongoing monitoring of vendor compliance as regulations evolve; and clear contractual allocation of responsibility for compliance-related obligations.
Human Oversight Implementation. For high-risk AI systems, human oversight is both a regulatory requirement and a practical necessity. Implementing meaningful human oversight requires more than nominal human review — it requires designing workflows in which human reviewers have genuine ability to understand, question, and override AI outputs. This includes training reviewers on the AI system's capabilities and limitations, designing interfaces that make the AI's reasoning transparent, establishing clear escalation paths when reviewers identify concerns, and monitoring whether human oversight is actually being exercised rather than rubber-stamping AI decisions.
Audit Trails. AI systems must generate and preserve audit trails sufficient to enable post-hoc assessment of how they functioned. For high-risk AI systems under the EU AI Act, this is a specific technical requirement. More broadly, organizations should maintain logs of AI system outputs, decisions, and any human review or override actions for a period determined by the system's purpose and applicable regulatory requirements.
Training. AI compliance is not solely a technical or legal challenge — it requires that the people who develop, deploy, and use AI systems understand their compliance obligations. Training programs should cover: what AI systems the organization uses and what regulatory frameworks apply; the specific obligations that follow from those frameworks (documentation, human oversight, transparency); how to identify and report potential compliance concerns; and how the organization's AI governance processes work in practice.
The DPO and AI Compliance Officer. Organizations subject to GDPR must appoint a Data Protection Officer if they meet certain criteria. The DPO has specific responsibilities with respect to AI systems that process personal data, including involvement in data protection impact assessments for AI systems, advising on GDPR compliance, and serving as the primary contact with data protection authorities. As AI regulation has expanded, many organizations have also established AI compliance officer roles — either within the DPO function or separately — with responsibility for EU AI Act compliance, sectoral AI regulation, and the organization's AI governance program more broadly. These roles require a combination of legal, technical, and organizational skills that is genuinely rare and valuable.
Summary
The AI regulatory landscape has transformed from a voluntary ethics exercise into a multi-jurisdictional legal compliance challenge with substantial enforcement consequences. The EU AI Act, the most comprehensive AI regulatory framework, imposes a risk-tiered regime with extensive requirements for high-risk AI systems and outright prohibition of the most dangerous applications. Its extraterritorial scope means that these requirements apply to AI systems serving the EU market regardless of where they are developed.
The GDPR continues to impose significant data protection requirements on AI systems, including the important Article 22 provisions on automated decision-making that affect many AI applications. US federal regulation operates through sectoral agencies — the FTC, EEOC, CFPB, FDA, and banking regulators — with the NIST AI RMF providing a widely-adopted voluntary governance framework. State-level AI regulation is rapidly expanding, creating additional compliance obligations for US-operating organizations.
Building an effective AI compliance program requires systematic AI inventory, risk classification, documentation, vendor management, human oversight implementation, and training — sustained organizational capacity that grows with the regulatory environment rather than reacting to it crisis by crisis.
This chapter continues with Case Study 1: EU AI Act Compliance for a High-Risk AI Hiring Tool, Case Study 2: The CFPB and Algorithmic Lending, Key Takeaways, Exercises, Quiz, and Further Reading.