Appendix F: AI Regulation Reference

This appendix supplements Chapter 28 (AI Regulation --- Global Landscape) with detailed comparison tables, compliance checklists, and practical reference material. The regulatory landscape is evolving rapidly; the information below reflects the state of play as of early 2026. Organizations should verify current requirements with qualified legal counsel before making compliance decisions.


F.1 Global AI Regulation Comparison Table

The following table provides a jurisdiction-by-jurisdiction overview of the major AI regulatory frameworks worldwide. Use this as a quick-reference tool when assessing multi-jurisdictional compliance obligations.

| Jurisdiction | Primary Legislation / Framework | Approach | Risk Classification Scheme | Key Requirements | Enforcement Body | Penalties | Status | Effective Date |
|---|---|---|---|---|---|---|---|---|
| European Union | AI Act (Regulation 2024/1689) | Prescriptive, risk-based | Four tiers: Unacceptable, High, Limited, Minimal | Risk management systems; technical documentation; data governance; human oversight; transparency; conformity assessment for high-risk systems; GPAI obligations | EU AI Office (GPAI); national competent authorities (other provisions); national market surveillance authorities (product-embedded AI) | Up to EUR 35M or 7% global annual turnover (prohibited practices); EUR 15M or 3% (high-risk non-compliance); EUR 7.5M or 1% (incorrect information) | Enacted | August 1, 2024 (entry into force); phased implementation through August 2, 2027 |
| United States (Federal) | No comprehensive law; executive orders + sector-specific agency authority + NIST AI RMF (voluntary) | Sectoral, fragmented | No unified federal classification; sector-specific risk categories (e.g., FDA risk classes for SaMD) | Varies by agency: FTC requires non-deceptive AI; FDA requires pre-market review for AI medical devices; EEOC enforces anti-discrimination in AI hiring; NIST AI RMF provides a voluntary risk framework | FTC, FDA, SEC, EEOC, CFPB, FHFA, DOT/NHTSA, and others within their respective mandates | Varies by agency: FTC civil penalties up to $50,120/violation; FDA injunctions and criminal penalties; EEOC compensatory/punitive damages | Partially enacted (agency authority); voluntary (NIST AI RMF); executive orders subject to change | Ongoing; NIST AI RMF v1.0 released January 2023 |
| US --- California | Multiple bills: SB 942 (AI transparency), AB 2013 (GenAI training data disclosure); SB 1047 vetoed (AI safety) | Sectoral, transparency-focused | No unified classification; applies to specific AI applications (generative AI disclosure, deepfakes, automated decision systems) | Generative AI content disclosure; training data transparency; automated decision system impact assessments (proposed); deepfake labeling | California Attorney General; California Privacy Protection Agency (CPRA-related AI processing) | CPRA penalties up to $7,500 per intentional violation; other penalties vary by statute | Partially enacted | Various; SB 942 operative January 1, 2026; ongoing legislative activity |
| US --- Colorado | Colorado AI Act (SB 24-205) | Prescriptive, risk-based | "High-risk AI systems" making or substantially contributing to "consequential decisions" in defined domains | Developer disclosure obligations; deployer risk management policies; impact assessments; consumer notification; appeal mechanisms | Colorado Attorney General | Civil enforcement by AG; no private right of action; penalties under Colorado Consumer Protection Act (up to $20,000/violation) | Enacted | February 1, 2026 |
| US --- New York City | Local Law 144 (Automated Employment Decision Tools) | Prescriptive, application-specific | Applies specifically to automated employment decision tools (AEDTs) | Annual independent bias audit; publication of audit results; candidate notice about AEDT use and data sources | NYC Department of Consumer and Worker Protection (DCWP) | Civil penalties: $500 first violation; $500--$1,500 per subsequent violation per day | Enacted | July 5, 2023 (enforcement began) |
| US --- Illinois | Illinois AI Video Interview Act (820 ILCS 42); Biometric Information Privacy Act (BIPA) | Application-specific | No risk classification; applies to specific uses (AI video interviews; biometric data collection) | AI video interviews: notice and consent before AI analysis; data destruction upon request. BIPA: written consent for biometric data collection; data retention and destruction policies | Illinois Attorney General; private right of action under BIPA | BIPA: $1,000/negligent violation; $5,000/intentional or reckless violation; AI Video Interview Act: penalties under Consumer Fraud Act | Enacted | AI Video Interview Act: January 1, 2020; BIPA: 2008 |
| United Kingdom | Pro-Innovation Approach to AI Regulation (white paper, 2023); no AI-specific law | Principles-based, pro-innovation | No statutory classification; sector regulators apply five cross-cutting principles within existing mandates | Five principles: safety/security/robustness, transparency/explainability, fairness, accountability/governance, contestability/redress; AI Safety Institute evaluations (voluntary) | Existing sector regulators (FCA, ICO, MHRA, Ofcom, CMA, EHRC, HSE); AI Safety Institute (frontier models) | No AI-specific penalties; existing regulatory penalties apply (e.g., ICO fines up to GBP 17.5M or 4% global turnover for data protection violations) | Voluntary (principles); enacted (existing sector regulation) | White paper published March 2023; AISI established November 2023; iterative implementation |
| China | Algorithmic Recommendation Provisions (2022); Deep Synthesis Provisions (2023); Generative AI Interim Measures (2023) | State-directed, application-specific | By application type: algorithmic recommendations, deep synthesis/deepfakes, generative AI services | Algorithm registration with CAC; content alignment with "socialist core values"; user consent and opt-out for recommendations; AI-generated content labeling; real-name verification; pre-launch filing/security assessment for generative AI | Cyberspace Administration of China (CAC); Ministry of Science and Technology | Administrative penalties including fines, service suspension, and shutdown; amounts vary by regulation; criminal liability for severe violations | Enacted | Algorithmic Provisions: March 1, 2022; Deep Synthesis: January 10, 2023; GenAI Measures: August 15, 2023 |
| Canada | AIDA (proposed; died on the Order Paper with prorogation, January 2025); NIST-aligned voluntary frameworks; Directive on Automated Decision-Making (federal government) | Comprehensive (proposed); principles-based (current) | "High-impact AI systems" (proposed under AIDA); Algorithmic Impact Assessment (AIA) levels for federal government use | Proposed: risk assessment, mitigation, monitoring, record-keeping. Current: federal AIA for government AI; privacy obligations under PIPEDA/CPPA | Proposed: AI and Data Commissioner. Current: Office of the Privacy Commissioner; Treasury Board (federal government AI) | Proposed under AIDA: criminal penalties for knowingly causing serious harm; administrative penalties TBD. Current: PIPEDA penalties up to CAD 100,000 | Lapsed (AIDA); enacted (Directive on Automated Decision-Making for federal government) | Directive: April 2019; AIDA: lapsed January 2025; future legislation expected |
| Singapore | Model AI Governance Framework (2019, updated 2020); AI Verify toolkit (2023) | Voluntary, practical | Guidance-based; no statutory risk tiers; organizations self-assess based on framework principles | Four principles: explainability/transparency/fairness, human-centricity, regular model review, internal governance structures; AI Verify testing toolkit for quantitative verification | Infocomm Media Development Authority (IMDA); Personal Data Protection Commission (PDPC) for data protection | No AI-specific enforcement; PDPC penalties up to SGD 1M or 10% annual turnover for data protection violations | Voluntary | Framework: 2019; AI Verify: June 2023; ongoing updates |
| Japan | Social Principles of Human-Centric AI (2019); AI Guidelines for Business (2024) | Principles-based, "agile governance" | No statutory classification; guidance-based categories | Social principles: human dignity, diversity/inclusion, sustainability, safety, security, privacy, fair competition, accountability, transparency; business guidelines aligned with the Hiroshima AI Process | No dedicated AI regulator; sector regulators apply principles within their domains; Ministry of Economy, Trade and Industry (METI) provides guidance | No AI-specific penalties; existing sector-specific penalties apply | Voluntary | Social Principles: 2019; AI Guidelines for Business: 2024 |
| Brazil | AI Bill (PL 2338/2023) | Prescriptive, risk-based (proposed) | Risk-based framework with similarities to the EU AI Act (proposed); high-risk AI system designation | Proposed: mandatory impact assessments for high-risk AI; transparency requirements; rights for affected individuals (explanation, human review, correction) | Proposed: designated regulatory authority under the "Brazilian System of AI Regulation"; currently ANPD (data protection authority) handles AI-related data issues | Proposed: administrative sanctions including fines up to 2% of revenue (capped at BRL 50M per infraction); suspension of AI system operation | Proposed (approved by the Senate; pending in the Chamber of Deputies) | LGPD (data protection): 2020; AI Bill: pending |
| India | No comprehensive AI legislation; Digital Personal Data Protection Act (2023); IndiaAI Mission (2024) | Non-regulatory, promotion-focused | No statutory AI risk classification | DPDPA: consent requirements, data principal rights, data fiduciary obligations affecting AI data processing; government advisories on generative AI (non-binding) | Data Protection Board of India (under DPDPA, pending constitution); Ministry of Electronics and IT (MeitY) for AI policy | DPDPA: up to INR 250 crore (approx. USD 30M) for data protection violations; no AI-specific penalties | Enacted (DPDPA); voluntary (AI governance) | DPDPA: August 2023; rules pending notification; IndiaAI Mission: March 2024 |
| Australia | No comprehensive AI legislation; Voluntary AI Ethics Framework (2019); proposed mandatory guardrails (2024 consultation) | Principles-based, moving toward mandatory guardrails | Proposed: high-risk AI settings requiring mandatory guardrails; voluntary framework covers all AI | Voluntary: eight AI ethics principles (human/societal/environmental wellbeing, human-centered values, fairness, transparency/explainability, contestability, accountability, privacy, reliability/safety). Proposed: mandatory guardrails for high-risk AI | No dedicated AI regulator; Australian Information Commissioner (privacy); ACCC (competition/consumer); sector regulators | No AI-specific penalties; Privacy Act penalties up to AUD 50M or 30% of adjusted turnover | Voluntary (current); mandatory guardrails proposed | Voluntary framework: 2019; mandatory guardrails consultation: 2024; legislation timeline TBD |
| South Korea | AI Framework Act (Basic Act on AI, promulgated January 2025); Personal Information Protection Act (PIPA, amended 2023) | Comprehensive, risk-based | "High-impact AI" classification with mandatory requirements | Framework Act: risk assessment and management for high-impact AI; transparency; impact assessments; AI ethics education; regulatory sandbox. PIPA: provisions on automated decision-making (right to explanation, right to refuse) | National AI Committee and designated agencies; Personal Information Protection Commission (PIPC) for data protection | Administrative penalties for high-impact AI non-compliance; PIPA: fines up to 3% of relevant revenue | Enacted (AI Framework Act; PIPA amendments) | PIPA amendments: 2023; AI Framework Act: effective January 2026 |
| Israel | AI Policy and Regulation (proposed framework, 2024); Privacy Protection Regulations | Principles-based, innovation-focused | Proposed: risk-based approach with proportionate regulation; no enacted classification | Proposed: sectoral regulatory guidance; responsible AI principles; innovation sandbox; AI ethics guidelines. Existing: privacy protection requirements affecting AI data processing | Israel Innovation Authority (AI promotion); Privacy Protection Authority (data protection); sector regulators | No AI-specific penalties; Privacy Protection Authority enforcement for data violations | Proposed / voluntary | Policy framework: 2024; ongoing development |

How to use this table: Begin by identifying every jurisdiction where your company operates, serves customers, or processes data. For each jurisdiction, check the "Status" column to determine whether requirements are binding. Then map your AI systems against the risk classification scheme in each relevant jurisdiction. See Section F.6 for a step-by-step compliance checklist.


F.2 EU AI Act Deep Dive

The EU AI Act is the world's first comprehensive AI law and serves as the de facto global baseline for AI compliance. This section provides detailed reference material for compliance planning.

F.2.1 Risk Tiers with Examples

| Risk Tier | Regulatory Treatment | Examples | Key Obligation |
|---|---|---|---|
| Unacceptable Risk | Prohibited outright | Social scoring by public authorities; real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions); emotion recognition in workplaces and schools; AI manipulating behavior through subliminal techniques; exploitation of vulnerabilities (age, disability); untargeted scraping of facial images for facial recognition databases; biometric categorization by sensitive attributes (race, religion, sexual orientation) | Do not develop, deploy, or make available in the EU market |
| High Risk | Permitted with extensive requirements | AI in medical devices; AI managing critical infrastructure (electricity, water, traffic); educational access and grading AI; resume screening and interview evaluation tools; credit scoring systems; insurance risk assessment; law enforcement risk tools; border control AI; judicial decision support | Full conformity assessment; risk management system; data governance; technical documentation; logging; transparency; human oversight; accuracy and robustness testing (see checklist below) |
| Limited Risk | Transparency obligations | Chatbots and conversational AI; emotion recognition systems (outside workplace/school); biometric categorization systems (non-prohibited uses); AI-generated content and deepfakes | Inform users they are interacting with AI; label AI-generated or manipulated content; disclose emotion recognition or biometric categorization |
| Minimal Risk | No specific requirements | Spam filters; AI-enabled video games; inventory optimization; predictive maintenance; recommendation engines (non-profiling); AI-assisted translation | Voluntary codes of conduct encouraged; existing laws (data protection, consumer protection, product safety) still apply |

Important distinction: The risk classification applies to the use case, not the technology. The same large language model can be minimal risk when used for email drafting and high risk when used for resume screening. Classification depends on the deployment context.
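Because classification turns on deployment context, many compliance teams encode their use-case-to-tier determinations explicitly so the same model can carry different obligations per deployment. The sketch below is purely illustrative: the use-case names and tier assignments are hypothetical placeholders, not a legal determination against the Act's annexes.

```python
# Illustrative sketch only: EU AI Act risk tier follows the deployment
# context, not the underlying model. Tier assignments here are
# hypothetical; real classification requires legal analysis.
RISK_TIER_BY_USE_CASE = {
    "social_scoring_public_authority": "unacceptable",
    "resume_screening": "high",       # employment context (Annex III area)
    "credit_scoring": "high",         # access to essential services
    "customer_chatbot": "limited",    # transparency obligation only
    "email_drafting": "minimal",
    "spam_filtering": "minimal",
}

def classify(use_case: str) -> str:
    """Return the illustrative risk tier for a deployment context."""
    return RISK_TIER_BY_USE_CASE.get(use_case, "unclassified: needs legal review")

# The same LLM lands in different tiers depending on how it is deployed:
assert classify("email_drafting") == "minimal"
assert classify("resume_screening") == "high"
```

The important design point is that the key is the *use case*, never the model name: a registry keyed by model would misclassify any model used in more than one context.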

F.2.2 High-Risk AI Requirements Checklist

Organizations deploying high-risk AI systems in the EU must satisfy all of the following requirements. Use this checklist as a compliance planning tool.

Risk Management System (Article 9)
- [ ] Establish a documented risk management process covering the entire AI system lifecycle
- [ ] Identify and analyze known and reasonably foreseeable risks to health, safety, and fundamental rights
- [ ] Evaluate risks based on post-market monitoring data
- [ ] Adopt risk mitigation measures, prioritizing elimination of risk through design, then mitigation, then information/training
- [ ] Test the system to identify the most appropriate risk management measures
- [ ] Document residual risks and communicate them to deployers
- [ ] Update the risk management system continuously as new information becomes available

Data and Data Governance (Article 10)
- [ ] Use training, validation, and testing datasets that are relevant, sufficiently representative, and as free of errors as possible
- [ ] Ensure datasets are appropriate for the intended geographic, behavioral, and functional context
- [ ] Implement data governance practices covering data collection, data preparation, labeling, cleaning, and enrichment
- [ ] Examine datasets for possible biases that could lead to discrimination, particularly regarding protected characteristics
- [ ] Document data provenance, characteristics, and any data gaps or limitations
- [ ] Where special categories of personal data are processed for bias monitoring, implement appropriate safeguards

Technical Documentation (Article 11)
- [ ] Prepare technical documentation before the system is placed on the market
- [ ] Include: general description of the system, detailed description of elements and development process, monitoring and control specifications, and detailed information about the system's purpose
- [ ] Document: intended purpose, developer/provider identity, system version, hardware/software requirements, design specifications, system architecture, computational resources used
- [ ] Document: data requirements (datasheets, training methodologies, data preparation measures, data origin and scope)
- [ ] Document: performance metrics, known limitations, foreseeable unintended outcomes, input data specifications
- [ ] Document: validation and testing procedures, results, and dates
- [ ] Keep documentation up to date throughout the system lifecycle

Record-Keeping and Logging (Article 12)
- [ ] Design the system to automatically record events (logs) relevant to identifying risks and enabling post-market monitoring
- [ ] Ensure logging captures: periods of use, reference database against which input data was checked, input data for which the system produced a match, identification of natural persons involved in verification of results
- [ ] Retain logs for at least six months (Article 19), or longer where applicable Union or national law, including sector-specific legislation, requires it
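As a rough illustration of the Article 12 items above, a log record might capture fields like the following. All field names, types, and values here are assumptions for the sketch; the actual logging scope depends on the system and any sector-specific rules.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditLogEntry:
    """One automatically recorded event. Field names are illustrative,
    mirroring the Article 12 checklist items above."""
    session_start: datetime          # period of use: start
    session_end: datetime            # period of use: end
    reference_database: str          # database input data was checked against
    matched_input_ref: str           # pointer to the input that produced a match
    verified_by: list[str] = field(default_factory=list)  # persons verifying results

# Hypothetical entry (the storage path and reviewer ID are placeholders):
entry = AuditLogEntry(
    session_start=datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc),
    session_end=datetime(2026, 1, 5, 9, 30, tzinfo=timezone.utc),
    reference_database="watchlist-v12",
    matched_input_ref="inputs/abc123",
    verified_by=["reviewer-041"],
)
```

Storing a pointer to the matched input, rather than the raw input itself, keeps the audit log small while still letting reviewers reconstruct the event; data protection rules may require it anyway.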

Transparency and Information to Deployers (Article 13)
- [ ] Design the system to be sufficiently transparent for deployers to interpret output and use it appropriately
- [ ] Provide instructions for use including: provider identity, system characteristics/capabilities/limitations, intended purpose, accuracy/robustness/cybersecurity levels, known or foreseeable circumstances of misuse, human oversight measures, computational and hardware resource specifications, expected lifetime and maintenance requirements

Human Oversight (Article 14)
- [ ] Design the system to enable effective human oversight during the period of use
- [ ] Enable overseers to: fully understand system capabilities and limitations, properly monitor operation, remain aware of automation bias, correctly interpret output, decide not to use the system or override/reverse output, intervene in or halt operation
- [ ] Where the high-risk system performs biometric identification of a person, ensure verification by at least two qualified natural persons before any action is taken on the result (subject to narrow exceptions)

Accuracy, Robustness, and Cybersecurity (Article 15)
- [ ] Achieve appropriate levels of accuracy for the intended purpose; declare accuracy levels in instructions for use
- [ ] Design the system to be resilient against errors, faults, and inconsistencies in the operating environment
- [ ] Implement technical redundancy solutions (backup plans, fail-safe mechanisms) where appropriate
- [ ] Protect against unauthorized third-party manipulation of training data, inputs, or model architecture
- [ ] Implement cybersecurity measures proportionate to the risk

Conformity Assessment (Article 43)
- [ ] Determine the applicable conformity assessment procedure (self-assessment or third-party assessment)
- [ ] For Annex III systems (stand-alone high-risk): self-assessment is generally permitted, except for biometric identification systems (which require third-party assessment by a notified body)
- [ ] For product-embedded AI: follow the conformity assessment procedure required by the relevant product safety legislation
- [ ] Prepare the EU Declaration of Conformity
- [ ] Affix the CE marking upon successful conformity assessment
- [ ] Register the system in the EU database (Article 71)

Post-Market Monitoring (Article 72)
- [ ] Establish a post-market monitoring system proportionate to the nature and risk of the AI system
- [ ] Actively and systematically collect, document, and analyze relevant data on performance throughout the system's lifetime
- [ ] Use post-market monitoring findings to update the risk management system and compliance documentation
- [ ] Report serious incidents to market surveillance authorities (Article 73)

F.2.3 General-Purpose AI Model Obligations

| Obligation | All GPAI Providers | Systemic Risk GPAI Providers |
|---|---|---|
| Technical documentation (model capabilities, limitations, intended/foreseeable uses) | Required | Required |
| Information and documentation for downstream deployers | Required | Required |
| EU copyright law compliance (training data transparency) | Required | Required |
| Published summary of training data content | Required | Required |
| Model evaluation including adversarial testing | --- | Required |
| Systemic risk assessment and mitigation | --- | Required |
| Serious incident tracking, documentation, and reporting | --- | Required |
| Adequate cybersecurity protections | --- | Required |
| Energy consumption reporting | --- | Required |

Systemic risk threshold: A GPAI model is presumed to have systemic risk if trained with more than 10^25 floating-point operations (FLOPs). The European Commission may also designate models based on other criteria (number of users, market impact, degree of autonomy, etc.).
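To gauge where a model stands relative to the threshold, practitioners often estimate training compute with the common "6 x parameters x training tokens" rule of thumb. This is an engineering approximation, not a legal test, and the model sizes below are hypothetical.

```python
# Presumption threshold for systemic-risk GPAI under the EU AI Act.
SYSTEMIC_RISK_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate using the widely used 6*N*D
    approximation (N = parameter count, D = training tokens)."""
    return 6 * params * tokens

# Hypothetical examples: a 70B-parameter model trained on 15T tokens
# lands around 6.3e24 FLOPs, below the presumption threshold; a
# 175B-parameter model on the same data crosses it (~1.6e25 FLOPs).
assert training_flops(70e9, 15e12) < SYSTEMIC_RISK_FLOPS
assert training_flops(175e9, 15e12) > SYSTEMIC_RISK_FLOPS
```

Because the Commission can also designate models on other criteria (user base, market impact, autonomy), clearing the FLOPs estimate does not by itself rule out systemic-risk status.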

Open-source exceptions: GPAI models released under free and open-source licenses are exempt from some obligations (technical documentation; information and documentation for downstream providers). The copyright-compliance policy and the published training-content summary still apply, and the exemption falls away entirely if the model presents systemic risk.

F.2.4 Implementation Timeline

| Date | Milestone | What It Means for Business |
|---|---|---|
| August 1, 2024 | Entry into force | The clock starts; no immediate compliance obligations |
| February 2, 2025 | Prohibitions apply | All unacceptable-risk AI practices must cease immediately; AI literacy obligations for providers and deployers take effect |
| August 2, 2025 | GPAI provisions apply; governance structure established | GPAI model providers must comply with documentation, transparency, and copyright provisions; systemic risk models face additional obligations; EU AI Office operational; codes of practice finalized |
| August 2, 2026 | Most high-risk AI provisions apply | Full conformity assessment, risk management, data governance, documentation, transparency, human oversight, and accuracy requirements for high-risk AI systems in Annex III categories; post-market monitoring obligations |
| August 2, 2027 | Product-embedded high-risk AI provisions apply | AI systems embedded in products covered by existing EU safety legislation (medical devices, vehicles, machinery, etc.) must comply |

Practical implication: For a high-risk AI system targeted at the EU market, compliance work should begin no later than early 2025 to allow 12--18 months for documentation, risk management implementation, fairness testing, and conformity assessment before the August 2026 deadline.

F.2.5 Practical Compliance Steps for Businesses

Phase 1: Assessment (Months 1--3)
1. Inventory all AI systems within the organization
2. Classify each system under the EU AI Act risk tiers
3. Identify which systems are high-risk and require full conformity assessment
4. Map GPAI model usage (are you a provider or a deployer?)
5. Conduct a gap analysis between current practices and regulatory requirements
6. Estimate compliance budget and timeline

Phase 2: Foundation Building (Months 3--9)
1. Establish or update AI governance structures (AI ethics board, responsible AI team, escalation pathways)
2. Develop risk management system documentation templates
3. Implement data governance procedures for training, validation, and testing data
4. Build technical documentation templates aligned with Article 11 requirements
5. Design human oversight mechanisms for high-risk systems
6. Develop bias detection and mitigation pipelines for high-risk systems
7. Implement logging and record-keeping infrastructure

Phase 3: Compliance Implementation (Months 9--15)
1. Complete risk management documentation for each high-risk system
2. Prepare full technical documentation packages
3. Conduct bias audits and fairness testing
4. Implement transparency and disclosure mechanisms
5. Test human oversight capabilities (can human overseers effectively intervene?)
6. Conduct robustness and cybersecurity testing
7. Prepare conformity assessment materials

Phase 4: Verification and Maintenance (Months 15--18+)
1. Conduct internal conformity assessment or engage a notified body
2. Prepare and sign the EU Declaration of Conformity
3. Register high-risk systems in the EU database
4. Establish post-market monitoring processes
5. Build regulatory monitoring function for ongoing updates
6. Conduct periodic reviews and update documentation
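The Phase 1 inventory and gap analysis above are often easiest to operationalize as one structured record per system. A minimal sketch, with illustrative field names and entirely hypothetical example systems:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Illustrative Phase 1 inventory entry; field names are assumptions."""
    name: str
    risk_tier: str         # unacceptable / high / limited / minimal
    role: str              # "provider" or "deployer"
    conformity_route: str  # "self-assessment", "notified body", or "n/a"
    gaps: list[str]        # gap-analysis findings to drive later phases

# Hypothetical inventory:
inventory = [
    AISystemRecord("resume-screener", "high", "deployer", "self-assessment",
                   ["no Article 11 documentation", "no bias audit"]),
    AISystemRecord("support-chatbot", "limited", "deployer", "n/a",
                   ["missing AI-interaction disclosure"]),
]

# Phases 2-4 then focus on the high-risk subset:
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
assert high_risk == ["resume-screener"]
```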


F.3 US Regulatory Landscape

F.3.1 Federal Agencies with AI Authority

The United States has no single AI regulator. Instead, existing agencies exercise authority over AI within their established mandates.

| Agency | Domain | Key AI-Relevant Actions | Legal Authority |
|---|---|---|---|
| Federal Trade Commission (FTC) | Consumer protection, competition | Enforcement against deceptive AI claims; algorithmic fairness guidance; "algorithmic disgorgement" remedies (requiring destruction of models trained on improperly collected data); Operation AI Comply enforcement sweep | Section 5 of the FTC Act (unfair or deceptive acts); Health Breach Notification Rule; Children's Online Privacy Protection Act |
| Food and Drug Administration (FDA) | Medical devices, pharmaceuticals | Regulatory pathway for AI/ML-based Software as a Medical Device (SaMD); predetermined change control plans; over 900 AI-enabled medical devices authorized by 2025; Good Machine Learning Practice principles | Federal Food, Drug, and Cosmetic Act; 21st Century Cures Act |
| Equal Employment Opportunity Commission (EEOC) | Employment discrimination | Guidance on AI in hiring (May 2023); emphasis on disparate impact liability; employer remains liable regardless of vendor | Title VII of the Civil Rights Act; Americans with Disabilities Act; Age Discrimination in Employment Act |
| Securities and Exchange Commission (SEC) | Securities markets | Proposed rules on predictive data analytics (PDA) in broker-dealer and investment adviser interactions; conflict-of-interest requirements for AI-driven recommendations | Securities Exchange Act; Investment Advisers Act |
| Consumer Financial Protection Bureau (CFPB) | Consumer financial services | Interpretive guidance requiring "specific and accurate" explanations for AI-driven credit denials; adverse action notice requirements apply regardless of model complexity | Equal Credit Opportunity Act (ECOA); Regulation B; Fair Credit Reporting Act |
| Federal Housing Finance Agency (FHFA) | Housing finance | Fair lending scrutiny of AI-driven mortgage underwriting; oversight of Fannie Mae/Freddie Mac AI policies | Federal Housing Enterprises Financial Safety and Soundness Act |
| Department of Transportation / NHTSA | Transportation | Autonomous vehicle safety standards; investigation authority; Standing General Order requiring crash reporting for vehicles with ADS or Level 2 ADAS | National Traffic and Motor Vehicle Safety Act |
| Office of the Comptroller of the Currency (OCC) | Banking | Model risk management guidance (SR 11-7/OCC 2011-12) applied to AI/ML models; third-party risk management guidance | National Bank Act |
| Department of Defense (DoD) | Military and intelligence | AI Ethical Principles (2020); Responsible AI Strategy; Chief Digital and AI Office (CDAO) governance | Various defense authorization acts |
| National Institute of Standards and Technology (NIST) | Standards development | AI Risk Management Framework (AI RMF) v1.0; Generative AI Profile; AI Safety Institute (AISI) for frontier model evaluation; AI standards development | NIST Act; National AI Initiative Act |
| Office of Management and Budget (OMB) | Federal government AI use | Memorandum M-24-10 requiring federal agencies to implement AI governance, designate Chief AI Officers, manage AI risks, and publish AI use case inventories | E-Government Act; Federal Information Security Modernization Act |

F.3.2 Key Executive Orders and Federal Actions

| Action | Date | Key Provisions | Current Status |
|---|---|---|---|
| Executive Order 14110 (Safe, Secure, and Trustworthy AI) | October 30, 2023 | Dual-use foundation model reporting (>10^26 FLOPs); red-team testing requirements; federal agency AI guidelines; AI-generated content watermarking; Chief AI Officer requirements; AI safety and security standards | Revoked by executive order in January 2025; some implementing agency actions continued under successor policy |
| OMB Memorandum M-24-10 | March 28, 2024 | Federal agencies must: implement AI governance, designate Chief AI Officers, manage AI risks for rights-impacting and safety-impacting AI, conduct AI impact assessments, publish AI use case inventories | Issued with agency compliance deadlines of December 1, 2024; superseded by later OMB AI guidance in 2025 |
| NIST AI RMF v1.0 | January 26, 2023 | Voluntary risk management framework organized around Govern, Map, Measure, Manage functions; Generative AI Profile added in 2024 | Released; voluntary but increasingly referenced in procurement and legislation |
| Blueprint for an AI Bill of Rights | October 4, 2022 | Five principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, human alternatives/fallback | Non-binding; aspirational framework from OSTP |
| National AI Initiative Act | January 1, 2021 | Established the National AI Initiative Office; authorized NIST AI standards work; created the National AI Research Resource Task Force | Enacted as part of the NDAA FY2021 |

F.3.3 State-Level Legislation Summary

| State | Legislation | Focus | Key Requirements | Effective Date |
|---|---|---|---|---|
| Colorado | SB 24-205 (Colorado AI Act) | Comprehensive; high-risk AI in consequential decisions | Developer disclosure; deployer risk management, impact assessments, consumer notice, appeal mechanisms | February 1, 2026 |
| New York City | Local Law 144 | Employment (AEDTs) | Annual independent bias audit; publication of results; candidate notice | July 5, 2023 (enforcement) |
| Illinois | AI Video Interview Act (820 ILCS 42) | Employment (video interviews) | Employer notice and consent before AI analysis; applicant may request data destruction | January 1, 2020 |
| Illinois | BIPA | Biometrics | Written consent for biometric data collection; retention/destruction policies; private right of action | 2008 |
| California | SB 942 (California AI Transparency Act) | Generative AI transparency | AI content detection tools; manifest disclosures for GenAI providers | January 1, 2026 |
| California | AB 2013 | GenAI training data | Transparency requirements for GenAI training data | January 1, 2026 |
| California | SB 1047 (vetoed) | AI safety (frontier models) | Would have required safety testing for large models; kill switch; whistleblower protections | Vetoed September 2024 |
| Utah | AI Policy Act (SB 149) | Disclosure; regulatory sandbox | AI interaction disclosure requirements; AI regulatory sandbox program; AI learning laboratory | May 1, 2024 |
| Connecticut | SB 2 (AI Act, proposed) | Comprehensive; high-risk AI | Risk assessments for high-risk AI; transparency; impact assessments | Proposed |
| Texas | HB 2060 | AI advisory; study | Texas AI Advisory Council; AI use in government study | September 1, 2023 |
| Virginia | HB 2094 (High-Risk AI Act, proposed) | Comprehensive; high-risk AI | Risk-based framework; high-impact AI requirements | Vetoed March 2025 |

State landscape caveat: Over 40 states have introduced AI-related legislation. This table covers the most significant enacted or advanced bills. The landscape changes with every legislative session. Organizations operating across multiple US states must implement systematic legislative monitoring.

F.3.4 Sector-Specific AI Regulation in the US

Healthcare

| Regulatory Element | Details |
|---|---|
| FDA AI/ML SaMD framework | Predetermined change control plans allow iterative updates; Total Product Lifecycle (TPLC) approach; over 900 authorized AI devices by 2025 |
| Good Machine Learning Practice (GMLP) | 10 principles jointly developed by the FDA, Health Canada, and the MHRA (UK); covers data management, model design, performance evaluation, and real-world monitoring |
| Clinical decision support (CDS) exemptions | Certain CDS software is exempt from FDA device regulation if it meets four criteria (non-interventional, intended for professional use, displays source basis, allows independent review) |
| HIPAA considerations | AI processing protected health information (PHI) must comply with the HIPAA Privacy, Security, and Breach Notification Rules |
| ONC health IT certification | AI incorporated into certified health IT must meet ONC certification criteria, including algorithmic transparency requirements |

Financial Services

Regulatory Element Details
OCC/Fed model risk management (SR 11-7) Applies to all models including AI/ML; requires model validation, ongoing monitoring, governance, and documentation; heightened scrutiny for complex "black box" models
Fair lending (ECOA, FHA) AI-driven lending decisions subject to fair lending analysis; adverse action notices must provide "specific and accurate" reasons regardless of model complexity; CFPB enforcement
SEC predictive data analytics Proposed rules would require broker-dealers and investment advisers to eliminate conflicts of interest when using PDA/AI in investor interactions
Anti-money laundering (BSA/AML) AI used in transaction monitoring and suspicious activity detection must meet BSA/AML compliance standards; FinCEN innovation program
Insurance State-level regulation of AI in underwriting and pricing (Colorado, Connecticut); NAIC model bulletins on AI use in insurance

Employment

Regulatory Element Details
EEOC guidance on AI in hiring Employers liable for disparate impact of AI hiring tools even when provided by third-party vendors; ADA reasonable accommodation obligations extend to AI-administered assessments
NYC Local Law 144 Annual bias audit; published summary of results; candidate notification (see state-level table)
Illinois AI Video Interview Act Notice, consent, and data destruction requirements for AI-analyzed video interviews
Colorado AI Act High-risk designation for AI in employment decisions; deployer obligations
Federal contractor obligations Executive orders and OFCCP guidance may impose additional AI transparency and fairness requirements on federal contractors

Autonomous Vehicles

Regulatory Element Details
NHTSA Standing General Order Mandatory crash reporting for vehicles with automated driving systems (ADS, Level 3--5) or Level 2 ADAS
State-level AV laws 30+ states have enacted AV legislation; requirements vary (some permit driverless testing, others require a safety driver)
Federal AV legislation SELF DRIVE Act (House, 2017) and AV START Act (Senate, 2017) passed committee but never enacted; no comprehensive federal AV law
Federal Motor Vehicle Safety Standards (FMVSS) NHTSA rulemaking to update FMVSS for vehicles without traditional manual controls

F.4 Data Protection Regulations Relevant to AI

AI systems are fundamentally data-processing systems. Data protection laws impose requirements that directly constrain how AI systems can be built, trained, and deployed.

F.4.1 GDPR Key Provisions Affecting AI

GDPR Provision Relevance to AI Practical Implication
Article 5 --- Data minimization AI systems must process only data that is adequate, relevant, and limited to what is necessary for the stated purpose Cannot collect excessive data "just in case" a future model might need it; training data scope must be justified
Article 6 --- Lawful basis All AI data processing requires a valid legal basis (consent, legitimate interest, contractual necessity, legal obligation, vital interest, public interest) Legitimate interest requires a balancing test; consent must be freely given, specific, informed, and unambiguous; purpose limitation restricts repurposing of data for new AI applications
Article 9 --- Special categories Processing of sensitive data (race, ethnicity, health, biometrics, political opinions, religious beliefs, sexual orientation) is prohibited unless an exception applies AI training on sensitive data requires explicit consent or a specific exception; bias detection using sensitive attributes requires careful legal analysis
Article 13/14 --- Right to information Data subjects must be informed about automated decision-making, including meaningful information about the logic involved, significance, and envisaged consequences AI systems making decisions about individuals must provide accessible explanations; "black box" models create compliance risk
Article 15 --- Right of access Data subjects can request access to their personal data, including information about automated decision-making Organizations must be able to identify and retrieve an individual's data from AI training sets and processing pipelines
Article 17 --- Right to erasure Data subjects can request deletion of their personal data May require retraining AI models to remove the influence of deleted data; "machine unlearning" is technically challenging
Article 22 --- Automated decision-making Data subjects have the right not to be subject to decisions based solely on automated processing (including profiling) that produce legal or similarly significant effects, unless exceptions apply High-stakes AI decisions require human involvement (not merely human rubber-stamping); must provide meaningful human review, right to contest, and right to obtain human intervention
Article 25 --- Data protection by design and by default Data protection safeguards must be integrated into AI system design from the outset Privacy-preserving techniques (differential privacy, federated learning, anonymization) should be considered at the design stage
Article 35 --- Data Protection Impact Assessment (DPIA) DPIAs are required for processing likely to result in high risk to individuals, including systematic evaluation of personal aspects (profiling), large-scale processing of sensitive data, and systematic monitoring of publicly accessible areas Most high-risk AI systems will trigger a mandatory DPIA; the DPIA must assess necessity, proportionality, and risks, and identify mitigation measures

Key interaction between GDPR and the EU AI Act: The EU AI Act does not replace GDPR. Both apply simultaneously. A high-risk AI system must comply with both the AI Act's conformity assessment requirements and GDPR's data protection requirements. The AI Act's data governance provisions (Article 10) complement, but do not substitute for, GDPR compliance.
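The privacy-by-design expectation of Article 25 can be made concrete. The sketch below, assuming a simple counting query over personal records, illustrates one of the privacy-preserving techniques named in the table (differential privacy) by adding calibrated Laplace noise before release; the function names and example data are invented for illustration.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has L1 sensitivity 1 (adding or removing one
    individual changes the result by at most 1), so Laplace noise
    with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical query: how many data subjects are 40 or older?
ages = [34, 29, 41, 52, 38, 27, 45]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)  # true count is 3
```

Smaller epsilon means stronger privacy and noisier answers; a production deployment would also track the cumulative privacy budget spent across queries.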

F.4.2 CCPA/CPRA Provisions Affecting AI

Provision Relevance to AI Practical Implication
Right to know Consumers can request disclosure of personal information collected and the purposes for collection Must disclose use of personal data in AI training and processing; automated decision-making purposes must be described
Right to delete Consumers can request deletion of personal information Similar to GDPR erasure implications for AI models trained on consumer data
Right to opt out of sale/sharing Consumers can opt out of "sale" or "sharing" of personal information; sharing includes cross-context behavioral advertising AI-driven advertising and profiling using consumer data across contexts may constitute "sharing" requiring opt-out mechanisms
Automated decision-making technology (ADMT) CPRA authorizes the California Privacy Protection Agency (CPPA) to issue regulations on ADMT, including access to information about ADMT, opt-out rights, and consumer right to human review CPPA ADMT regulations finalized in 2025 require pre-use notice, opt-out rights for certain ADMT, and access to information about ADMT logic; compliance obligations phase in beginning 2026
Data minimization CPRA requires that personal information collection and use be reasonably necessary and proportionate to the disclosed purpose AI systems cannot collect more data than necessary; purpose limitation applies
Risk assessments CPRA authorizes the CPPA to require businesses to conduct cybersecurity audits and submit risk assessments for processing that presents significant risk AI/ML processing of personal information for profiling, particularly in employment, credit, healthcare, and insurance, likely triggers risk assessment requirements

F.4.3 Other Data Protection Laws with AI Implications

Law Jurisdiction Key AI-Relevant Provisions
LGPD (Lei Geral de Proteção de Dados) Brazil Right to review of automated decisions (Article 20); right to explanation of automated decision criteria; data protection impact assessments for high-risk processing
PIPL (Personal Information Protection Law) China Consent requirements for automated decision-making; prohibition of unreasonable differential treatment in pricing/transactions; right to refuse solely automated decisions; personal information impact assessments required; strict cross-border transfer rules
PIPA (Personal Information Protection Act) South Korea Amended 2023 to include right to explanation of automated decisions; right to refuse solely automated decisions with significant impact; data protection impact assessments
DPDPA (Digital Personal Data Protection Act) India Consent-based framework; data fiduciary obligations; data principal rights (access, correction, erasure); significant penalties; rules and implementation details pending
PIPEDA / Proposed CPPA (Bill C-27) Canada Meaningful consent for AI data processing; Privacy Commissioner guidance on AI and privacy; proposed CPPA (Consumer Privacy Protection Act) would strengthen automated decision-making provisions
Privacy Act 1988 (amended) Australia Australian Privacy Principles governing collection, use, and disclosure of personal information; reform underway to strengthen automated decision-making transparency; proposed targeted rules for high-risk AI

F.5 Industry-Specific AI Regulation

F.5.1 Financial Services

Jurisdiction Regulatory Body Framework / Guidance Key Requirements
US OCC, Federal Reserve, FDIC SR 11-7 / OCC 2011-12 (Model Risk Management) Model validation; ongoing performance monitoring; governance and controls; documentation of model development, testing, and deployment; independent model review
US CFPB ECOA / Regulation B interpretive guidance "Specific and accurate" adverse action reasons for AI-driven credit decisions; model complexity does not excuse non-compliance; creditor bears burden of ensuring AI outputs comply
US SEC Proposed PDA rules Broker-dealers and investment advisers must eliminate or neutralize conflicts of interest arising from use of predictive data analytics/AI in investor interactions
EU European Banking Authority (EBA) EBA Report on Big Data and Advanced Analytics (2020) Model governance; bias and discrimination monitoring; explainability requirements; data quality standards
EU European Securities and Markets Authority (ESMA) MiFID II algorithmic trading obligations Algorithmic trading systems must have effective controls, risk limits, testing; firms must notify regulators of algorithmic trading; market-making obligations; circuit breakers
UK FCA, PRA, Bank of England AI and Machine Learning in Financial Services (DP5/22) Model risk management; governance; fairness; consumer protection; operational resilience; firms expected to apply five AI principles within existing regulatory frameworks
Singapore MAS Principles on Fairness, Ethics, Accountability, and Transparency (FEAT) Financial institutions should promote fairness, ethics, accountability, and transparency in AI use; self-assessment methodology (Veritas toolkit)
International Basel Committee on Banking Supervision Newsletter on AI/ML in banking (2024) Supervisory expectations for model risk management of AI/ML models; emphasis on explainability, data quality, governance

F.5.2 Healthcare

Jurisdiction Regulatory Body Framework / Guidance Key Requirements
US FDA AI/ML-Based Software as a Medical Device (SaMD) Framework Total Product Lifecycle approach; predetermined change control plans (PCCPs) for iterative model updates; Good Machine Learning Practice (GMLP) principles; performance monitoring
US FDA, Health Canada, MHRA (UK) Good Machine Learning Practice (GMLP) --- 10 Guiding Principles Multi-disciplinary expertise; good software engineering; representative clinical study participants; independent test datasets; reference datasets; tailored model design; human-AI team performance focus; deployed model monitoring; manage retraining risks; provide transparency
EU Notified Bodies / Member States EU Medical Device Regulation (MDR 2017/745) + AI Act AI-enabled medical devices are high-risk under the AI Act and subject to MDR conformity assessment; dual compliance required; clinical evaluation must account for AI performance
EU European Medicines Agency (EMA) Reflection paper on AI in drug lifecycle (2023) Guidance on AI use in drug development, manufacturing, and pharmacovigilance; data quality, validation, and regulatory submission standards
UK MHRA Software and AI as a Medical Device Change Programme Roadmap for regulating AI-based medical devices post-Brexit; alignment with GMLP; proportionate regulation based on risk classification
International WHO Ethics and Governance of AI for Health (2021) Six guiding principles: protect autonomy, promote well-being, ensure transparency/explainability, foster responsibility/accountability, ensure inclusiveness/equity, promote responsive/sustainable AI

F.5.3 Employment

Jurisdiction Regulatory Body Framework / Guidance Key Requirements
US (Federal) EEOC Technical Assistance on AI and Title VII (2023) Employers liable for disparate impact of AI hiring tools regardless of vendor; four-fifths rule applies; reasonable accommodation obligations under ADA extend to AI assessments
US (Federal) DOJ Civil Rights Division Guidance on AI and ADA (2022) AI-driven hiring, performance management, and other employment decisions must comply with ADA; reasonable modifications for individuals with disabilities
US (NYC) DCWP Local Law 144 Annual independent bias audit of AEDTs; published audit summary; candidate notice of AEDT use, data categories, and job qualifications; notice at least 10 business days before use
US (Illinois) AG / Private action AI Video Interview Act Employer notice before AI analysis of video interview; applicant consent required; data destruction on request; disclosure of AI characteristics evaluated
US (Colorado) AG Colorado AI Act AI systems making or substantially contributing to consequential employment decisions classified as high-risk; deployer risk management, impact assessment, consumer notice
EU National authorities EU AI Act (Annex III) AI in employment (resume screening, interview evaluation, promotion decisions, task allocation, performance monitoring, termination decisions) classified as high-risk; full conformity assessment required

F.5.4 Autonomous Vehicles

Jurisdiction Key Developments
US (Federal) NHTSA Standing General Order on crash reporting; FMVSS updates for ADS vehicles; no comprehensive federal AV law; proposed regulations for ADS safety frameworks
US (State) 30+ states with AV laws; California DMV autonomous vehicle testing permits; Arizona, Texas, Nevada permit commercial deployment of driverless vehicles; varying insurance and liability requirements
EU UNECE WP.29 regulations on automated lane-keeping systems (ALKS); EU type-approval framework for automated vehicles; AI Act applies to AI components in vehicles (high-risk, effective 2027)
UK Automated Vehicles Act 2024; creates legal framework for self-driving vehicles; establishes authorized self-driving entity (ASDE) liability; user-in-charge concept
China National standards for intelligent connected vehicles; pilot programs in major cities; road testing regulations; data security requirements for smart vehicles (CAC)

F.5.5 Defense and Intelligence

Jurisdiction Key Developments
US (DoD) AI Ethical Principles (2020): responsible, equitable, traceable, reliable, governable; Responsible AI Strategy and Implementation Pathway (2022); Chief Digital and AI Office (CDAO) governance; Directive 3000.09 on autonomous weapons (updated 2023): requires "appropriate levels of human judgment"
US (Intelligence Community) IC AI Ethics Principles (2020); AI ethics framework for intelligence activities; transparency requirements within classified constraints
NATO Principles of Responsible Use of AI in Defence (2021): lawfulness, responsibility/accountability, explainability/traceability, reliability, governability, bias mitigation; AI strategy emphasizing interoperability
EU AI Act explicitly excludes military and national security AI from scope; separate defense AI governance under development through European Defence Agency
International UN Convention on Certain Conventional Weapons (CCW) discussions on lethal autonomous weapons systems (LAWS); no binding international treaty as of 2026; Group of Governmental Experts continuing deliberations

F.6 Compliance Checklist

Use this checklist as a practical starting point for organizations deploying AI systems across multiple jurisdictions. It is not a substitute for qualified legal counsel but provides a structured framework for identifying and addressing compliance obligations.

Phase 1: Regulatory Mapping

  • [ ] Inventory AI systems. Create a complete inventory of all AI systems developed, deployed, or procured by the organization, including third-party AI tools and embedded AI components
  • [ ] Map jurisdictions. For each AI system, identify every jurisdiction where: (a) the organization operates, (b) the system is deployed, (c) the system's outputs affect individuals, (d) training data originates, (e) data is processed or stored
  • [ ] Classify by risk. For each AI system, determine the risk classification under: EU AI Act risk tiers; US sector-specific requirements; any other applicable jurisdictional framework
  • [ ] Identify applicable laws. For each AI system and jurisdiction, list all applicable laws, regulations, and binding guidance (use the comparison table in Section F.1 as a starting point)
  • [ ] Assess GPAI obligations. If the organization develops, distributes, or deploys general-purpose AI models, identify provider vs. deployer obligations under the EU AI Act
  • [ ] Document data flows. Map personal data flows for each AI system, including cross-border transfers, to identify data protection obligations (GDPR, CCPA/CPRA, PIPL, etc.)
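The inventory, jurisdiction-mapping, and risk-classification steps above can be seeded with a simple structured record per system. The sketch below is a first-pass triage helper only: the category sets, field names, and class names are invented for illustration, and actual EU AI Act classification requires legal analysis of the prohibited-practices provisions and Annex III.

```python
from dataclasses import dataclass, field

# Illustrative category sets -- not an authoritative reading of the Act.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
ANNEX_III_AREAS = {"employment", "credit_scoring", "education",
                   "law_enforcement", "critical_infrastructure"}
TRANSPARENCY_ONLY = {"chatbot", "synthetic_media_generation"}

@dataclass
class AISystem:
    name: str
    use_case: str                          # e.g. "employment"
    jurisdictions: set = field(default_factory=set)
    third_party: bool = False              # procured vs. built in-house

def eu_ai_act_tier(system: AISystem) -> str:
    """Rough first-pass mapping of a system onto the Act's four tiers."""
    if system.use_case in PROHIBITED_USES:
        return "unacceptable"
    if system.use_case in ANNEX_III_AREAS:
        return "high"
    if system.use_case in TRANSPARENCY_ONLY:
        return "limited"
    return "minimal"

screener = AISystem("resume-screener", "employment", {"EU", "US-NY"})
tier = eu_ai_act_tier(screener)  # "high": employment is an Annex III area
```

An inventory of such records, one per system, gives the gap analysis in Phase 2 a concrete starting point.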

Phase 2: Gap Analysis

  • [ ] Assess current state. For each compliance requirement identified in Phase 1, evaluate the organization's current level of compliance (fully compliant, partially compliant, non-compliant, not assessed)
  • [ ] Prioritize gaps. Rank gaps by: (a) regulatory enforcement risk, (b) financial penalty exposure, (c) reputational risk, (d) remediation complexity and cost
  • [ ] Estimate remediation costs. Develop cost estimates for closing each compliance gap, including internal resources, external counsel, technology investment, and ongoing maintenance
  • [ ] Develop timeline. Align remediation timelines with regulatory effective dates (see Section F.2.4 for EU AI Act timeline)

Phase 3: Governance and Infrastructure

  • [ ] Designate AI governance ownership. Appoint a responsible executive (Chief AI Officer, Chief Ethics Officer, or equivalent) with authority over AI compliance
  • [ ] Establish cross-functional team. Build an AI governance team spanning legal, engineering, data science, product management, risk, and ethics
  • [ ] Adopt a risk management framework. Implement the NIST AI RMF (or equivalent) as the organizational standard for AI risk management
  • [ ] Create documentation standards. Develop templates for: technical documentation (EU AI Act Article 11), model cards, datasheets for datasets, risk assessment reports, impact assessments, conformity assessment materials
  • [ ] Implement logging and monitoring. Deploy infrastructure for automatic event logging, performance monitoring, bias/drift detection, and incident tracking
  • [ ] Establish incident response procedures. Create documented procedures for responding to AI-related incidents, regulatory inquiries, and enforcement actions
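The logging and incident-tracking items above can start from a very small core. Below is a minimal sketch of an append-only structured audit event, assuming JSON Lines as the storage format; the schema and field names are illustrative, since regulations that mandate automatic logging (e.g., EU AI Act Article 12 for high-risk systems) do not prescribe one.

```python
import json
import time
import uuid

def log_ai_event(system_id: str, event_type: str, payload: dict,
                 sink=print) -> dict:
    """Emit one structured AI audit event to an append-only sink."""
    event = {
        "event_id": str(uuid.uuid4()),   # unique, for later correlation
        "timestamp": time.time(),
        "system_id": system_id,
        "event_type": event_type,        # e.g. "prediction", "override", "incident"
        "payload": payload,
    }
    sink(json.dumps(event, sort_keys=True))
    return event

# Hypothetical usage: record a model decision and a human override.
log_ai_event("resume-screener-v3", "prediction", {"candidate_ref": "c-1042", "score": 0.81})
log_ai_event("resume-screener-v3", "override", {"reviewer": "hr-214", "reason": "manual review"})
```

In practice the sink would be a durable, tamper-evident store, with records retained for whatever period the applicable regulation requires.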

Phase 4: System-Level Compliance

For each high-risk or regulated AI system:

  • [ ] Complete risk management documentation. Document risks, mitigation measures, residual risks, and ongoing monitoring plans
  • [ ] Verify data governance. Confirm that training, validation, and testing data meet representativeness, quality, and bias-detection requirements
  • [ ] Prepare technical documentation. Complete documentation covering system purpose, architecture, development methodology, performance metrics, and known limitations
  • [ ] Implement transparency mechanisms. Ensure required disclosures (AI nature of system, automated decision-making, data usage) are provided to affected individuals
  • [ ] Validate human oversight. Confirm that human oversight mechanisms allow effective intervention, override, and system halt
  • [ ] Conduct bias and fairness testing. Test for discriminatory outcomes across protected characteristics; document results and remediation actions
  • [ ] Conduct robustness and security testing. Verify system resilience against errors, adversarial inputs, and cybersecurity threats
  • [ ] Complete conformity assessment. Conduct self-assessment or engage notified body as required by applicable regulation
  • [ ] Complete data protection impact assessment (DPIA). Where required by GDPR or equivalent laws, conduct and document a full DPIA
  • [ ] Complete sector-specific requirements. Address any additional sector-specific obligations (FDA pre-market review, bias audit under NYC LL144, EEOC disparate impact analysis, etc.)
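For the bias and fairness testing item above, the EEOC four-fifths rule referenced in Section F.5.3 reduces to a short calculation. The sketch below uses invented audit numbers; a ratio under 0.8 is a preliminary indicator calling for further statistical and legal analysis, not a finding of discrimination.

```python
def adverse_impact_ratios(groups: dict) -> dict:
    """Selection rate of each group relative to the highest-rate group.

    `groups` maps group name -> (selected, applicants).
    """
    rates = {g: s / a for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical bias-audit data.
audit = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(audit)
flagged = [g for g, r in ratios.items() if r < 0.8]
# group_b: (30/100) / (48/100) = 0.625 < 0.8, so it is flagged
```

NYC Local Law 144's published audit summaries report essentially this ratio (the "impact ratio") per demographic category.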

Phase 5: Ongoing Compliance

  • [ ] Establish post-market monitoring. Continuously monitor system performance, accuracy, fairness, and safety after deployment
  • [ ] Implement regulatory monitoring. Track regulatory developments across all relevant jurisdictions; assign responsibility for legislative monitoring
  • [ ] Conduct periodic reviews. Review and update risk assessments, documentation, and compliance materials at least annually, or upon material changes to the AI system or regulatory landscape
  • [ ] Train employees. Provide regular AI governance and compliance training to all employees involved in AI development, deployment, and oversight
  • [ ] Maintain audit trail. Preserve records of all compliance activities, assessments, audit results, and remediation actions
  • [ ] Engage with regulators. Participate in public consultations, industry forums, standard-setting processes, and regulatory sandbox programs where available
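Post-market monitoring needs a quantitative drift signal. One common industry convention (not a requirement of any regulation cited here) is the Population Stability Index computed over a model input or score distribution; the thresholds in the docstring are rules of thumb. A minimal sketch:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Rule of thumb: < 0.1 stable; 0.1-0.25 moderate shift;
    > 0.25 significant shift warranting investigation.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def proportions(values):
        counts = [0] * bins
        for v in values:
            i = max(0, min(int((v - lo) / width), bins - 1))  # clamp outliers
            counts[i] += 1
        # Floor empty buckets at one count so the log term is defined.
        return [max(c, 1) / len(values) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [float(v) for v in range(100)]  # e.g. scores at validation time
live = [v + 50.0 for v in baseline]        # hypothetical shifted production scores
drifted = psi(baseline, live) > 0.25       # True: the distribution has moved
```

A drift alert of this kind would feed the periodic-review and incident-response items above; it signals that the original validation evidence may no longer describe the deployed system.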

F.7 Regulatory Resources

Government Regulators and AI Bodies

Organization URL Focus
EU AI Office digital-strategy.ec.europa.eu/en/policies/ai-office EU AI Act implementation, GPAI oversight, codes of practice
European Commission --- AI Policy digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence EU AI strategy, regulation, and investment
NIST AI ai.nist.gov AI Risk Management Framework, AI Safety Institute, standards
US AI Safety Institute (at NIST) nist.gov/aisi Frontier AI evaluation, safety research, standards
UK AI Safety Institute aisi.gov.uk Frontier model evaluation, safety research
FTC AI Resources ftc.gov/technology/artificial-intelligence Consumer protection enforcement, AI guidance
FDA AI/ML Medical Devices fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-software-medical-device SaMD regulatory pathway, authorized AI device list
EEOC AI Guidance eeoc.gov/ai AI in hiring, Title VII and ADA compliance
IMDA (Singapore) imda.gov.sg/how-we-can-help/model-ai-governance-framework Model AI Governance Framework, AI Verify
Cyberspace Administration of China cac.gov.cn China AI regulations, algorithm registration
Canadian Office of the Privacy Commissioner priv.gc.ca/en Privacy and AI guidance for Canada
Australian Department of Industry industry.gov.au/science-technology-and-innovation/technology/artificial-intelligence Australia AI Ethics Framework, policy

Key Guidance Documents

Document Issuer Year Summary
EU AI Act (full text) European Parliament and Council 2024 Regulation 2024/1689; the world's first comprehensive AI law
NIST AI Risk Management Framework v1.0 NIST 2023 Voluntary framework: Govern, Map, Measure, Manage functions for AI risk
NIST Generative AI Profile (AI 600-1) NIST 2024 Companion to AI RMF addressing generative AI risks
OECD AI Principles OECD 2019 (updated 2024) Five principles for responsible AI; adopted by 46+ countries
Hiroshima AI Process International Guiding Principles G7 2023 11 guiding principles for organizations developing advanced AI
Hiroshima AI Process International Code of Conduct G7 2023 11 voluntary actions for organizations developing advanced AI
Blueprint for an AI Bill of Rights White House OSTP 2022 Five principles: safe systems, discrimination protections, privacy, notice, human alternatives
Good Machine Learning Practice (GMLP) FDA, Health Canada, MHRA 2021 10 guiding principles for AI/ML medical device development
SR 11-7 / OCC 2011-12 (Model Risk Management) Federal Reserve / OCC 2011 Foundational guidance for financial model risk management, applied to AI/ML
WHO Ethics and Governance of AI for Health WHO 2021 Six principles for responsible health AI
Singapore Model AI Governance Framework (2nd ed.) IMDA 2020 Practical governance guidance with implementation examples
IEEE 7000-2021 (Model Process for Addressing Ethical Concerns) IEEE 2021 Standard for integrating ethical considerations into system design
ISO/IEC 42001:2023 (AI Management System) ISO/IEC 2023 International standard for establishing, implementing, and maintaining AI management systems

Industry Groups and Standards Organizations

Organization Focus Website
Partnership on AI Multi-stakeholder responsible AI best practices partnershiponai.org
Frontier Model Forum Frontier AI safety research and best practices frontiermodelforum.org
OECD.AI Policy Observatory AI policy analysis and data across countries oecd.ai
Global Partnership on AI (GPAI) International AI governance collaboration (29 member countries) gpai.ai
IEEE Standards Association --- AI/AS AI and autonomous systems ethics and technical standards standards.ieee.org
ISO/IEC JTC 1/SC 42 (AI) International AI standards development iso.org/committee/6794475.html
World Economic Forum --- AI Governance Global AI governance frameworks and multi-stakeholder dialogue weforum.org/topics/artificial-intelligence-and-robotics
AI Now Institute Research on social implications of AI ainowinstitute.org
Center for AI Safety AI safety research and policy safe.ai
Alan Turing Institute --- AI Ethics AI ethics and governance research (UK) turing.ac.uk
Montreal AI Ethics Institute AI ethics research, education, and public engagement montrealethics.ai

A note on currency: AI regulation is among the fastest-moving areas of technology policy. Laws are being enacted, amended, and interpreted continuously. The information in this appendix was current as of early 2026 but may have been superseded by the time you read it. Treat this appendix as a starting framework for compliance research, not as a final authority. For binding legal obligations, always consult the primary legal texts and qualified legal counsel in the relevant jurisdiction.

For ongoing updates, the companion website for this textbook maintains a curated regulatory tracker with links to primary sources, updated quarterly.

See also: Chapter 27 (AI Governance Frameworks) for organizational governance structures; Chapter 28 (AI Regulation --- Global Landscape) for narrative analysis and strategic context; Chapter 29 (Privacy, Security, and AI) for data protection technical implementation; Chapter 30 (Responsible AI in Practice) for operationalizing regulatory requirements.