Chapter 13: Governing AI — Policy, Regulation, and Global Approaches

"Technology is neither good nor bad; nor is it neutral." — Melvin Kranzberg, historian of technology (Kranzberg's First Law)

Learning Objectives

  • Compare different national and regional approaches to AI governance
  • Evaluate the EU AI Act's risk-based framework
  • Analyze the challenges of regulating fast-moving technology
  • Assess industry self-regulation vs. government oversight
  • Formulate governance recommendations for a specific AI application

Imagine you are a legislator. A constituent calls your office, alarmed. She applied for a job and was rejected by an AI system before any human saw her resume. She suspects discrimination but cannot prove it — the company says the algorithm is proprietary. She wants to know: What law protects her? Who does she complain to? What are her rights?

You look into it. And you discover something disconcerting: depending on where she lives, what kind of job it was, and which specific AI system was used, the answer might be "no law specifically covers this," or "the FTC might investigate," or "the EU AI Act classifies this as high-risk and requires a human review," or "the company voluntarily committed to responsible AI principles, but those commitments are not legally binding."

Welcome to the state of AI governance in 2026. It is messy, fragmented, fast-moving, and genuinely consequential. This chapter will help you understand it — not because you need to become a policy expert, but because in a democracy, these decisions are ultimately yours. The rules that govern AI are shaped by elected officials, public pressure, industry lobbying, and civic engagement. Understanding how AI is governed is not an abstract policy exercise. It is a civic skill.

In Chapter 9, we explored how AI systems can discriminate and why fairness is harder to define than it sounds. In Chapter 12, we examined how AI amplifies surveillance and how different jurisdictions approach privacy. Now we zoom out to ask the bigger question: How should societies govern AI as a whole? Who gets to decide the rules, and what should those rules look like?


13.1 Why AI Governance Is Hard

Before we look at how different countries are trying to govern AI, we need to understand why this problem is so difficult. It is not that policymakers are lazy or uninformed (well, sometimes). It is that AI governance presents a set of genuinely hard structural challenges that do not have clean solutions.

The Pacing Problem

The single biggest challenge in AI governance is what scholars call the pacing problem: technology evolves faster than the rules that govern it.

Consider this timeline. GPT-3 was released in June 2020. GPT-4 arrived in March 2023. Between those dates — less than three years — the capabilities of large language models leaped from "impressive but clearly limited" to "passing bar exams and writing publishable code." Meanwhile, the EU AI Act, which began its legislative journey in April 2021, was not finalized until March 2024 and will not be fully enforceable until 2027.

By the time a regulation is drafted, debated, amended, passed, and implemented, the technology it was designed to govern may have changed dramatically. The AI system that prompted the regulation might be obsolete. New AI systems with capabilities no one anticipated might be in widespread use.

This is not unique to AI — the same challenge exists with biotechnology, cryptocurrency, and social media. But AI moves especially fast, and its applications span nearly every sector of the economy, making it particularly difficult to regulate through traditional legislative processes.

The Knowledge Gap

Effective regulation requires that regulators understand what they are regulating. This is straightforward for most industries: bank regulators understand banking, food safety inspectors understand food processing, aviation regulators understand aircraft.

AI is different. The people building cutting-edge AI systems work at a handful of companies and research labs. The technical details of model architecture, training data composition, and system capabilities are often proprietary. Many legislators and regulators lack the technical background to evaluate AI systems independently, which means they must rely on the very companies they are trying to regulate to explain what the technology does and what risks it poses.

This information asymmetry creates a risk of regulatory capture — a situation where the regulated industry gains undue influence over the regulators, shaping rules in ways that serve industry interests rather than public interests. We will return to this concept in Section 13.5.

The Jurisdictional Problem

AI systems operate across borders. A model trained in the United States on data scraped from the global internet can be deployed in Europe, make decisions affecting people in Africa, and store its outputs in data centers in Singapore. Which country's laws apply?

There is no global AI governance body. International organizations like the UN and the OECD have issued AI principles and guidelines, but these are non-binding. The result is a patchwork of national and regional regulations that sometimes conflict, sometimes overlap, and sometimes leave large gaps.

The Definition Problem

What counts as "AI"? This sounds like a simple question, but it has enormous regulatory consequences.

If you define AI narrowly — say, as deep learning systems with neural networks — you miss rule-based systems, statistical models, and simpler automated decision-making tools that can still cause significant harm. If you define AI broadly — as any automated system that processes data and makes or supports decisions — you sweep in everything from Excel spreadsheets to spam filters, creating regulatory obligations that may be disproportionate to the risks.

The EU AI Act spent years negotiating its definition. The final text defines an "AI system" as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

That is a wide net. Whether it is the right width is still debated.

💡 Key Insight: AI governance is hard not because policymakers are failing, but because the problem itself is structurally difficult. The technology moves fast (pacing problem), the expertise is concentrated in industry (knowledge gap), AI crosses borders (jurisdictional problem), and the thing being regulated is hard to define (definition problem). Any governance approach must contend with all four challenges simultaneously.


13.2 The EU AI Act: A Risk-Based Approach

The European Union's AI Act, which became law in August 2024, is the world's first comprehensive legal framework for AI. It represents a specific philosophy: that AI systems should be regulated according to the level of risk they pose to people's safety, rights, and well-being.

The Risk Pyramid

The EU AI Act classifies AI systems into four risk categories, forming a pyramid:

Unacceptable Risk (Banned)

At the top of the pyramid are AI applications deemed so dangerous that they are prohibited outright:

  • Social scoring systems used by governments (think China's social credit system)
  • Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions for serious crimes)
  • AI systems that exploit vulnerable groups (children, people with disabilities)
  • Emotion recognition in workplaces and educational institutions
  • Predictive policing based solely on profiling

High Risk (Heavily Regulated)

Below the banned category are AI systems that pose significant risks but are allowed under strict conditions. These include:

  • AI used in hiring and recruitment (relevant to our opening scenario)
  • AI in education (student assessment, admissions)
  • AI in critical infrastructure (energy, water, transport)
  • AI in law enforcement (risk assessment, evidence analysis)
  • AI in migration and border control
  • AI in healthcare and medical devices
  • AI used to evaluate creditworthiness or insurance pricing

High-risk systems must comply with extensive requirements: risk management systems, high-quality training data, documentation, transparency, human oversight, accuracy, and cybersecurity. They must undergo conformity assessments before being placed on the market — essentially, proof that they meet the regulation's requirements.

Limited Risk (Transparency Obligations)

AI systems that interact with people must disclose that they are AI. This includes chatbots (users must know they are talking to a machine), deepfake generators (AI-generated content must be labeled), and emotion recognition systems (people must be told they are being analyzed).

Minimal Risk (Unregulated)

The vast majority of AI systems — spam filters, video game AI, recommendation algorithms for music or movies — are considered minimal risk and face no specific regulation under the AI Act.

General-Purpose AI Models

The AI Act also addresses general-purpose AI models (GPAIs) — systems like GPT-4, Claude, and Gemini that can be used for a wide range of tasks. These models face transparency requirements: providers must publish summaries of training data, comply with copyright law, and provide technical documentation. Models deemed to pose "systemic risk" (roughly, the most powerful models from the largest companies) face additional obligations including adversarial testing, incident monitoring, and cybersecurity requirements.

CityScope Predict Under the EU AI Act

Let us apply this framework to one of our anchor examples. Where would CityScope Predict — the predictive policing system — fall in the EU AI Act's risk classification?

CityScope Predict predicts where crime is likely to occur and directs police resources accordingly. Under the AI Act, predictive policing based solely on profiling or assessing the risk of individuals committing offenses is classified as unacceptable risk — it is banned.

However, the line is drawn carefully. AI systems that support human decision-making in law enforcement — for example, analyzing evidence patterns or optimizing patrol routes based on historical crime data rather than profiling individuals — might be classified as high risk rather than banned. The distinction depends on whether the system profiles individuals or analyzes aggregate patterns, and whether a human makes the final decision.

This kind of fine-grained classification is both the strength and the weakness of the risk-based approach. It provides nuance, but it also creates ambiguity. A company deploying CityScope Predict could argue that it analyzes patterns, not individuals, and therefore falls into the "high risk" rather than "banned" category. Whether regulators agree would depend on the system's actual implementation — and on enforcement capacity.
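To see why the classification is slippery, consider a minimal sketch of how a deployer's internal compliance checklist might encode the distinction discussed above. This is illustrative only: the field names and the triage rules are invented for this example, not the legal test itself, and real classification turns on interpretation of the Act's text, regulatory guidance, and eventual case law.

```python
from dataclasses import dataclass

@dataclass
class PolicingSystemProfile:
    """Hypothetical self-assessment a vendor might fill out about its system."""
    profiles_individuals: bool      # scores named people rather than places
    uses_only_aggregate_data: bool  # e.g., patrol routing from area-level crime stats
    human_makes_final_decision: bool

def eu_ai_act_triage(p: PolicingSystemProfile) -> str:
    """First-pass risk triage mirroring the high-risk vs. banned distinction."""
    if p.profiles_individuals and not p.human_makes_final_decision:
        return "unacceptable risk (likely banned)"
    if p.profiles_individuals:
        return "ambiguous: vendor will argue high risk, critics will argue banned"
    if p.uses_only_aggregate_data:
        return "high risk (law enforcement use): conformity assessment required"
    return "needs legal review"

# A vendor's self-assessment of CityScope Predict might look like this:
profile = PolicingSystemProfile(
    profiles_individuals=False,
    uses_only_aggregate_data=True,
    human_makes_final_decision=True,
)
print(eu_ai_act_triage(profile))
# -> "high risk (law enforcement use): conformity assessment required"
```

The point of the sketch is not that classification can be automated (it cannot), but that each yes/no answer hides a contested judgment about what the system actually does, and that is exactly where a vendor's reading and a regulator's reading can diverge.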

Criticisms of the EU AI Act

The AI Act has been criticized from multiple directions:

From industry: Some companies argue that the compliance costs — especially for high-risk systems — will slow innovation and put European companies at a competitive disadvantage compared to less-regulated competitors in the U.S. and China.

From civil society: Some advocates argue the Act does not go far enough, particularly in its exceptions for national security and law enforcement. The real-time biometric identification ban, for example, includes exceptions for serious crimes, terrorism, and missing persons — exceptions that critics fear will be broadly interpreted.

From technologists: Some AI researchers argue that the Act's risk categories are too static for a fast-moving technology. A system classified as "minimal risk" today might become "high risk" tomorrow as it is applied in new contexts — but the classification does not automatically update.

📊 Comparison Table: EU AI Act Risk Categories

| Risk Level | Examples | Requirements |
| --- | --- | --- |
| Unacceptable | Social scoring, predictive policing by profiling, exploiting vulnerable groups | Banned |
| High | Hiring AI, medical AI, credit scoring, educational assessment, law enforcement support | Risk management, data quality, transparency, human oversight, conformity assessment |
| Limited | Chatbots, deepfake generators, emotion recognition | Must disclose AI use to users |
| Minimal | Spam filters, game AI, music recommendations | No specific requirements |
| GPAI models | GPT-4, Claude, Gemini | Training data transparency, copyright compliance; systemic-risk models face additional testing |

13.3 The U.S. Approach: Sectoral and Voluntary

If the EU AI Act represents one philosophy — comprehensive, prescriptive, risk-classified — the United States represents a fundamentally different one: sector-specific, largely voluntary, and market-oriented.

No Federal AI Law

As of early 2026, the United States has no comprehensive federal AI legislation comparable to the EU AI Act. This is not for lack of proposals — dozens of AI-related bills have been introduced in Congress — but none has achieved the consensus needed to pass.

Instead, the U.S. approach relies on three pillars:

Existing laws applied to AI. Federal agencies use their existing regulatory authority to address AI issues within their domains. The Federal Trade Commission (FTC) uses its consumer protection mandate to take enforcement action against deceptive or unfair AI practices. The Equal Employment Opportunity Commission (EEOC) applies anti-discrimination laws to AI-driven hiring. The Food and Drug Administration (FDA) regulates AI-powered medical devices. The Consumer Financial Protection Bureau (CFPB) addresses algorithmic bias in lending.

The advantage is that action can happen relatively quickly — agencies do not need new legislation. The disadvantage is that coverage is inconsistent. If an AI system falls between agencies' jurisdictions, it may face no regulation at all.

Executive orders and guidance. Presidents have issued executive orders on AI — most notably, President Biden's October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed federal agencies to develop AI safety standards, required AI developers to share safety test results with the government, and established principles for AI use in government. Executive orders can be rescinded by subsequent presidents (the 2023 order was revoked in January 2025), making them a less durable form of governance than legislation.

State-level regulation. In the absence of federal action, states have stepped in. Colorado passed a comprehensive AI discrimination law in 2024. California has proposed multiple AI bills. Illinois' Biometric Information Privacy Act (BIPA) has been used to challenge facial recognition practices. This creates a patchwork: the rules governing AI depend on which state you are in.

ContentGuard Under U.S. Regulation

Consider ContentGuard, our anchor example of a social media content moderation system. In the United States, ContentGuard operates in a distinctive regulatory environment.

Section 230 of the Communications Decency Act provides platforms broad immunity from liability for user-generated content and for their content moderation decisions. This means that ContentGuard — and the platform that uses it — has enormous discretion in deciding what content to allow, promote, or remove.

No federal law specifically requires ContentGuard to be fair, transparent, or accurate in its moderation decisions. No federal regulator conducts regular audits of content moderation algorithms. If ContentGuard's AI systematically suppresses speech from certain communities or viewpoints, there is no dedicated regulatory pathway for affected users to seek redress.

The FTC could potentially intervene if a platform's content moderation practices are deemed deceptive (for example, if a platform claims to treat all viewpoints equally but its algorithm demonstrably does not). But this requires a specific FTC investigation, which depends on the agency's enforcement priorities and resources.

Under the EU AI Act, by contrast, ContentGuard would likely be classified as a system that requires transparency obligations — users interacting with the system would need to know that an AI is making moderation decisions, and the platform would need to provide meaningful information about how the system works.

The Innovation vs. Regulation Debate

The U.S. approach reflects a deliberate policy choice: prioritizing innovation over precaution. Proponents argue that light-touch regulation has helped make the United States the world leader in AI development. Heavy regulation, they contend, would drive AI companies to less regulated jurisdictions and slow the development of beneficial AI applications.

Critics respond that the U.S. approach treats the public as guinea pigs — allowing potentially harmful AI systems to be deployed at scale and addressing problems only after harm has occurred, rather than preventing harm in the first place. They point to the documented cases of AI discrimination in hiring, lending, healthcare, and criminal justice as evidence that voluntary self-regulation is insufficient.

🔵 Debate Framework: Innovation vs. Precaution

The Innovation Principle: Regulation should not restrict AI development unless there is clear evidence of harm. The benefits of AI — in healthcare, education, productivity, and scientific discovery — are enormous, and over-regulation risks slowing progress that could benefit millions.

The Precautionary Principle: AI systems should be proven safe and fair before they are deployed at scale, not after. The burden of proof should be on AI developers to demonstrate their systems do not cause harm, rather than on affected communities to prove that harm has occurred.

Your analysis: Where do you fall on this spectrum? Does your answer depend on the specific AI application (medical AI vs. music recommendations), the population affected (children vs. adults), or the severity of potential harm (job loss vs. physical safety)?


13.4 China's AI Governance: State Control and Innovation

China's approach to AI governance is distinctive and does not fit neatly into Western categories of "pro-regulation" or "anti-regulation." It is simultaneously one of the most ambitious in promoting AI development and one of the most active in regulating specific AI applications.

The Dual Strategy

China's AI governance combines two objectives that many Western observers assume are contradictory:

Promoting AI as a strategic national priority. China's "New Generation AI Development Plan" (2017) set explicit goals for China to become the world leader in AI by 2030. The government has invested billions in AI research, infrastructure, and talent development. Provincial and municipal governments compete to attract AI companies with subsidies, tax breaks, and regulatory sandboxes.

Regulating AI applications that threaten social stability or state control. At the same time, China has enacted some of the world's most specific AI regulations:

  • Algorithmic recommendation regulations (2022): Require platforms to offer users the option to turn off algorithmic recommendations and prohibit algorithms that create "information cocoons" (filter bubbles) or promote addiction.
  • Deep synthesis regulations (2023): Require that AI-generated content (deepfakes, synthetic media) be clearly labeled and that providers verify users' identities.
  • Generative AI regulations (2023): Require generative AI services to reflect "core socialist values," undergo security assessments before public release, and respect intellectual property rights.

What This Reveals

China's approach reveals something important about the nature of AI governance: what a government regulates tells you a lot about what it is most concerned about.

The EU's AI Act prioritizes individual rights and safety — its highest-risk categories involve systems that threaten fundamental human rights.

The U.S. approach prioritizes innovation and market competition — its lightest regulation is in areas where regulation might slow commercial development.

China's regulations prioritize social stability and state authority — its strictest rules target AI applications that could undermine government narratives, spread "harmful" information, or threaten social order.

Each approach reflects not just a policy choice but a set of values about the relationship between technology, individuals, and the state.

🌐 Global Perspective: There is no objectively "correct" approach to AI governance. Each approach reflects a society's values, political system, and priorities. Understanding this is essential for AI literacy because it means the rules governing your data, your rights, and your interactions with AI systems depend on where you live — and on which values your society prioritizes. Being AI-literate means being able to evaluate these different approaches on their merits, not just accepting the one you happen to live under.


13.5 Industry Self-Regulation: Promise and Limitations

When governments are slow to regulate, industry often steps in with self-regulation — voluntary commitments, ethical principles, and internal governance structures. The AI industry has produced a remarkable volume of ethical guidelines, responsible AI principles, and voluntary commitments. The question is whether any of it works.

The Landscape of Self-Regulation

Major AI companies have published extensive responsible AI frameworks:

  • Google has its "AI Principles" (published 2018), which include commitments to fairness, safety, and accountability, along with a list of AI applications Google will not pursue.
  • Microsoft has a "Responsible AI Standard" with detailed implementation requirements for its product teams.
  • OpenAI publishes model cards and safety evaluations for its major releases.
  • Anthropic has articulated a "Responsible Scaling Policy" that ties the deployment of increasingly powerful models to demonstrated safety measures.

Industry groups have also produced collective commitments. In July 2023, seven leading AI companies made voluntary commitments to the White House on AI safety, including promises to conduct safety testing, share information about risks, and invest in cybersecurity.

The Limitations

Self-regulation has three fundamental limitations:

No enforcement mechanism. Voluntary commitments are, by definition, voluntary. If a company violates its own AI principles, there is no external authority that can impose penalties. When Google fired members of its AI ethics team in 2020 and 2021 — including researchers whose work highlighted the risks of large language models — it demonstrated that internal ethics commitments can be overridden by business decisions.

Structural conflicts of interest. Asking companies to regulate themselves creates a fundamental conflict: the practices that might need to be restricted (aggressive data collection, rapid deployment without sufficient testing, optimizing for engagement over well-being) are often the same practices that generate the most revenue. Self-regulation tends to address the problems that are easiest and least costly to fix, while leaving the most profitable harmful practices untouched.

Regulatory capture risk. When industry writes its own rules, there is a risk that those rules will be designed to look protective while actually preserving the status quo. Industry-written standards can also become barriers to entry: large companies with substantial compliance teams can easily meet complex standards, while smaller competitors and startups cannot. This can paradoxically make the industry less competitive and less innovative.

Where Self-Regulation Can Work

Despite these limitations, self-regulation is not entirely useless. It works best as a complement to government regulation, not a substitute for it.

Industry is often better positioned than government to develop technical standards — things like model evaluation benchmarks, safety testing protocols, and documentation formats. The most effective governance models tend to be multi-stakeholder: government sets the legal framework and enforcement mechanisms, industry develops technical standards and implementation practices, civil society provides oversight and represents affected communities, and academia contributes independent research and evaluation.

⚖️ Argument Map: Self-Regulation vs. Government Regulation

| Dimension | Self-Regulation | Government Regulation |
| --- | --- | --- |
| Speed | Can respond quickly to new technologies | Slow legislative process |
| Technical expertise | Industry understands its own technology | Regulators may lack technical depth |
| Enforcement | No binding enforcement mechanism | Legal penalties for violations |
| Conflict of interest | Regulating your own profit centers | External accountability |
| Innovation impact | Designed to minimize business disruption | May slow development |
| Scope | Only covers willing participants | Applies to entire industry |
| Accountability | Voluntary, can be abandoned | Legal obligation, harder to reverse |
| Democratic legitimacy | Not democratically accountable | Enacted by elected officials |

13.6 Standards, Audits, and Accountability Mechanisms

Beyond laws and voluntary commitments, a third layer of AI governance is emerging: technical standards, algorithmic audits, and formal accountability mechanisms. These are the nuts and bolts of making governance work in practice.

Algorithmic Impact Assessments

An algorithmic impact assessment (AIA) is a structured evaluation of an AI system's potential effects on individuals and communities, conducted before the system is deployed. Think of it as an environmental impact assessment, but for algorithms.

A well-designed AIA examines:

  • What decisions the AI system will make or inform
  • What data it uses and where that data comes from
  • Who will be affected by the system's outputs
  • What harms could result from errors or biases
  • What safeguards are in place to prevent or mitigate those harms
  • How the system will be monitored after deployment

Canada became one of the first countries to require algorithmic impact assessments for government AI systems in 2019. New York City's Local Law 144, which took effect in 2023, requires that AI systems used in hiring undergo annual bias audits conducted by independent auditors.
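What does a "bias audit" actually compute? Under Local Law 144, the core metric is an impact ratio: each demographic group's selection rate divided by the selection rate of the most-selected group. The sketch below is a simplified illustration with invented data and column names; a real audit also handles intersectional categories, small sample sizes, and score-based (rather than pass/fail) tools.

```python
import pandas as pd

# Hypothetical audit log: one row per applicant screened by a hiring AI.
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "selected": [ 1,   1,   1,   0,   1,   0,   0,   1,   0,   0 ],
})

# Selection rate per demographic group.
selection_rates = applicants.groupby("group")["selected"].mean()

# Impact ratio: each group's rate relative to the most-selected group.
impact_ratios = selection_rates / selection_rates.max()

print(selection_rates)  # A: 0.75, B: 0.33
print(impact_ratios)    # A: 1.00, B: 0.44
```

In this toy example, group B's impact ratio of roughly 0.44 falls well below the 0.8 "four-fifths" benchmark borrowed from U.S. employment law, which is a common red flag for disparate impact. Under Local Law 144, the resulting ratios must be made public in a summary of the audit results.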

Technical Standards

International standards bodies are developing technical standards for AI:

  • ISO/IEC 42001 (2023) establishes requirements for AI management systems — essentially, a framework for organizations to responsibly develop and deploy AI.
  • The NIST AI Risk Management Framework (2023) provides a voluntary framework for managing AI risks, organized around four core functions: govern, map, measure, and manage.

These standards are not laws, but they increasingly serve as reference points for regulation. The EU AI Act, for example, allows companies to demonstrate compliance partly through adherence to recognized technical standards.

Third-Party Auditing

Just as financial companies undergo independent audits, there is growing momentum for independent AI audits. An AI audit is an external examination of an AI system to assess its performance, fairness, safety, and compliance with relevant regulations.

The challenge is that AI auditing is still an immature field. There is no universally agreed-upon methodology for auditing an AI system. Auditors may not have access to a system's training data or internal architecture (companies often resist sharing proprietary information). And there is no established system for certifying AI auditors — anyone can claim to be one.

Despite these challenges, the direction of travel is clear: toward more external scrutiny, more structured evaluation, and more formal accountability for AI systems that affect people's lives.

CityScope Predict: What Accountability Looks Like

What would robust accountability look like for CityScope Predict? Consider this framework:

  1. Before deployment: A mandatory algorithmic impact assessment identifies potential harms (over-policing of certain neighborhoods, feedback loops, discriminatory patterns). The assessment is published and subject to public comment.

  2. During operation: Independent auditors conduct regular bias audits, examining whether the system's predictions correlate with demographic factors like race and income. Results are publicly reported.

  3. Ongoing oversight: A civilian oversight board with access to the system's data and methodology reviews complaints, monitors outcomes, and can recommend changes or discontinuation.

  4. Remediation: When the system produces harmful outcomes, there is a clear process for affected individuals to seek redress — not just from the city government, but from the AI developer.

No jurisdiction has implemented all of these elements for a single system. But pieces of this framework are emerging in different places, and together they sketch a picture of what meaningful AI accountability could look like.

Check Your Understanding:

  1. What is the "pacing problem," and why does it make AI governance particularly challenging?
  2. How does the EU AI Act classify AI systems by risk level? Give one example of a system in each category.
  3. What are two limitations of industry self-regulation as an AI governance mechanism?


13.7 Chapter Summary

Governing AI is one of the defining policy challenges of our time. This chapter has explored why it is so difficult and how different societies are attempting it.

AI governance faces four structural challenges: the pacing problem (technology moves faster than regulation), the knowledge gap (expertise is concentrated in industry), the jurisdictional problem (AI crosses borders), and the definition problem (what counts as "AI" is contested).

The EU AI Act represents the world's first comprehensive AI law. Its risk-based framework classifies AI systems from unacceptable risk (banned) to minimal risk (unregulated), with extensive requirements for high-risk systems. It also addresses general-purpose AI models with transparency and safety obligations.

The U.S. takes a sector-specific, largely voluntary approach. Existing federal agencies apply their authorities to AI within their domains, executive orders provide guidance, and states are passing their own laws. The result is a patchwork with significant gaps.

China combines aggressive AI promotion with targeted regulation. Its rules focus on algorithmic recommendations, deepfakes, and generative AI, reflecting priorities of social stability and state authority.

Industry self-regulation has significant limitations. Voluntary commitments lack enforcement mechanisms, face structural conflicts of interest, and risk regulatory capture. Self-regulation works best as a complement to, not a substitute for, government oversight.

Emerging accountability mechanisms — algorithmic impact assessments, technical standards, and third-party audits — are developing but still immature. The direction is clear: toward more structured, independent scrutiny of AI systems that affect people's lives.

The overarching lesson of this chapter is that AI governance is not just a technical or legal question. It is a question of values. Different societies are making different choices about the balance between innovation and precaution, individual rights and collective interests, market freedom and government control. Being AI-literate means being equipped to evaluate these choices — and to participate in making them.


🔄 Spaced Review

Before moving on, let's revisit key concepts from earlier chapters:

From Chapter 7 (AI Decision-Making): We learned that AI decisions are probability estimates, not truths. How does this insight change how you think about the EU AI Act's requirement for "human oversight" of high-risk AI systems? What should "human oversight" actually look like in practice?

From Chapter 9 (Bias and Fairness): Chapter 9 introduced the idea that different definitions of fairness are mathematically incompatible. How does this "impossibility result" complicate the task of regulating AI for fairness? Can a law require an AI system to be "fair" without specifying which definition of fairness to use?

From Chapter 12 (Privacy and Surveillance): We examined the GDPR and CCPA as privacy regulations. How do these privacy-specific regulations interact with the EU AI Act's broader framework? Are they complementary, redundant, or potentially conflicting?


📋 Progressive Project Checkpoint: Chapter 13

Task: Research how your AI system is (or is not) regulated, and recommend governance improvements.

Step 1: Regulatory Mapping

Identify the regulatory environment for your chosen AI system:

  • What country or countries does it operate in?
  • What laws or regulations currently apply to it? (Consider both AI-specific regulations and general laws like anti-discrimination statutes, consumer protection laws, or sector-specific regulations.)
  • Are there any regulations that should apply but currently do not?

Step 2: Risk Classification

Using the EU AI Act's risk framework, classify your system:

  • Would it be classified as unacceptable, high, limited, or minimal risk? Why?
  • Do you agree with this classification? Should it be higher or lower?
  • What requirements would the classification impose?

Step 3: Self-Regulation Assessment

Examine the company or organization that operates your AI system:

  • Does it have published AI principles or a responsible AI framework?
  • Has it made any voluntary commitments regarding AI safety, fairness, or transparency?
  • Based on publicly available information, does the company appear to be living up to its commitments?

Step 4: Governance Recommendations

Propose at least three specific governance improvements for your AI system:

  • At least one regulatory recommendation (what law or regulation should apply?)
  • At least one accountability mechanism (what oversight structure would improve the system?)
  • At least one transparency requirement (what information should be publicly available?)

For each recommendation, explain who would be responsible for implementation, how compliance would be verified, and what the likely objections would be.

Add your findings to your AI Audit Report under the heading "Governance and Regulatory Analysis."