
Chapter 6: Introduction to AI Governance


Opening: A Week That Revealed Everything

In March 2019, Google announced the formation of its Advanced Technology External Advisory Council — an eight-member board of external experts charged with advising the company on the ethical development of AI. The announcement was polished, the composition ostensibly diverse, and the mandate appropriately broad. It lasted just over a week.

Employee petitions circulated within hours of the announcement. Critics outside the company joined in. The objections were specific and credible: one board member, the president of the Heritage Foundation, had publicly opposed LGBTQ protections and affirmed the legality of discriminating against transgender service members. Another board member led a drone company whose technology was directly relevant to the military AI work Google employees had already protested in the Project Maven controversy. Within eight days, two members had resigned. Google dissolved the entire council before it had held a single meeting.

The episode was widely reported as a failure of judgment in the board's composition. That reading is not wrong, but it is incomplete. The deeper failure was structural. Google's governance process for constituting the board had not built in meaningful mechanisms for stakeholder input before the announcement. The board had no clear authority — it was advisory to whom, exactly, about what, and with what power to compel action? The criteria for membership were unspecified. There were no defined processes for how the board would handle disagreements with leadership, or how its recommendations would travel through a corporation generating tens of billions of dollars in revenue. The announcement treated governance as a product launch rather than as a system requiring careful design.

This implosion illustrated, with uncomfortable clarity, how hard AI governance actually is — even for a company with world-class technical talent, a dedicated AI research division, substantial resources, and stated ethical commitments. Governance is not a document you publish. It is not a principles statement you release at a conference. It is not a board you announce. It is a system you build, sustain, and hold accountable. It requires structures with genuine authority. It requires processes that surface problems before they become catastrophes. It requires people with the expertise, independence, and institutional protection to say difficult things to powerful decision-makers — and it requires those decision-makers to actually listen.

This chapter builds the conceptual and practical infrastructure for understanding AI governance. We move from first principles — what governance is and why it matters — through the three levels at which AI governance operates, to the specific structures organizations can build and the questions that determine whether those structures are genuine or merely performative. The goal is not to equip you with a governance checklist. It is to equip you with the analytical capacity to distinguish real governance from its imitation.


Learning Objectives

By the end of this chapter, you will be able to:

  1. Define AI governance and explain how it differs from — and depends on — AI ethics.
  2. Describe the three levels at which AI governance operates (organizational, industry, societal) and explain the relationships among them.
  3. Identify the key structural elements of organizational AI governance, evaluate their design, and apply a structured checklist to assess AI projects.
  4. Critically assess industry self-regulation initiatives, distinguishing substantive governance from ethics washing.
  5. Compare the major national regulatory approaches to AI governance (EU, US, China, UK, Canada) and analyze their underlying political and philosophical assumptions.
  6. Explain the key international AI governance frameworks and analyze the coordination challenges that limit their effectiveness.
  7. Diagnose the sources of the AI governance gap and evaluate proposed remedies.
  8. Apply principles of effective governance design to evaluate real organizational governance structures.

Section 6.1: What Is AI Governance?

Defining the Concept

The word "governance" carries an air of bureaucratic abstraction. In practice, it refers to something concrete and consequential: the structures, processes, rules, roles, and accountabilities through which decisions get made, monitored, and enforced.

Applied to AI, governance encompasses everything that shapes how AI systems are conceived, built, deployed, monitored, and when necessary, shut down. It includes the internal corporate policies that govern how an engineering team develops a hiring algorithm. It includes the industry standards that define what "fairness" means for a credit-scoring model. It includes the government regulations that specify what AI systems may not be used for, and the international agreements that attempt to harmonize those requirements across borders. It includes the question of who is in the room when consequential decisions are made, who is not in the room, and what accountability exists for outcomes.

This definition is deliberately broad, because AI governance genuinely operates across multiple scales and institutional forms. A useful working definition: AI governance is the set of institutions, rules, processes, and norms through which AI development and deployment is directed, monitored, and corrected to serve human values and prevent harm.

The Three Levels

AI governance operates at three distinct but interacting levels.

Level 1: Organizational Governance is the internal governance that companies, governments, hospitals, universities, and other institutions exercise over their own AI activities. This is where the rubber meets the road: where engineering decisions are made, where deployment choices are taken, where accountability for outcomes ultimately resides. Organizational governance includes the policies, committees, review processes, documentation requirements, and reporting structures that shape what AI an organization builds and how.

Level 2: Industry Self-Regulation occurs when companies within a sector collectively develop standards, codes of practice, or certification schemes that govern their AI activities, without direct government mandate. Industry self-regulation can move faster than government regulation and benefit from deep technical expertise. It also carries inherent risks of capture and lowest-common-denominator standards. We will examine both the genuine contributions and the structural limitations of major industry initiatives.

Level 3: Societal Governance encompasses the governmental and international frameworks that set binding or semi-binding rules for AI. This includes national legislation (like the EU AI Act), regulatory agency enforcement (like FTC action on deceptive AI practices), and international agreements (like the OECD AI Principles). Societal governance provides the enforcement backstop that industry self-regulation typically lacks, but it operates more slowly and often with less technical precision than the systems it governs.

These three levels are not independent. Organizational governance occurs within the framework set by industry norms and government regulation. Industry standards often become the baseline from which regulation is written. Government regulation shapes what organizational governance must include. Understanding AI governance requires understanding how these levels interact — and where the gaps between them allow harms to fall through.

Governance Is Not Ethics

A critical distinction that business professionals must internalize: ethics and governance are not the same thing, though they are deeply interdependent.

Ethics — as explored in Chapters 1 through 5 — provides normative content. It tells us what we ought to value: fairness, transparency, human dignity, accountability. Ethical frameworks help us reason about what AI should and should not do.

Governance provides the operational machinery for translating ethical commitments into consistent practice. Ethics answers the question "what?" Governance answers "how?" and "who?" and "what happens when it goes wrong?"

An organization can have sophisticated ethical principles and no meaningful governance. Google's stated AI principles, published in 2018, were thoughtful and reasonably comprehensive. They did not prevent the advisory board fiasco. The principles existed; the governance system to implement them did not function. Conversely, organizations can have elaborate governance structures that do not rest on coherent ethical foundations — producing processes that generate the appearance of ethical review without the substance.

Genuine AI governance requires both: ethical substance and institutional machinery. Chapters 1 through 5 built the former. This chapter addresses the latter.

The Governance Gap

AI governance currently faces a structural crisis that researchers and regulators call the governance gap — the widening distance between the speed and scale of AI deployment and the capacity of governance institutions to keep pace.

The governance gap is not merely a matter of regulators being slow. It reflects genuine structural difficulties: the technical complexity of AI systems makes them hard to understand from the outside; the economic incentives to deploy AI quickly are enormous; the harms from AI are often diffuse, delayed, or invisible to those affected; and the institutions that might provide accountability — courts, legislatures, regulatory agencies — were built for different technologies in different eras.

We return to the anatomy of the governance gap in Section 6.6. For now, note that the gap is real, consequential, and not self-correcting.


Vocabulary Builder

  • Governance: The structures, processes, rules, and accountabilities through which decisions are made, monitored, and enforced.
  • Accountability: The obligation to explain and justify one's decisions and actions, and to bear consequences when they cause harm.
  • Legitimacy: The quality of being accepted as valid, proper, and authoritative — in governance, legitimacy derives from both procedural fairness and substantive outcome.
  • Enforcement: The practical capacity to compel compliance with rules and impose consequences for violations.
  • Soft law: Governance instruments (guidelines, principles, codes of practice) that are not legally binding but can shape behavior through reputational, normative, or market pressure.
  • Hard law: Legally binding rules backed by state enforcement authority — statutes, regulations, court orders.


Section 6.2: The Organizational Layer — Internal AI Governance

For most business professionals, organizational AI governance is where theory becomes practice. This is the governance you can actually build, the structures your decisions directly shape. It is also where the gap between stated and genuine governance is most immediately visible.

Effective organizational AI governance is not a single artifact. It is an interlocking system of structures, processes, and cultural conditions. This section examines the key elements of that system.

AI Principles and Policies

Most large technology companies have published AI principles. Microsoft, Google, IBM, Amazon, Meta, Salesforce — the list is extensive. These documents vary enormously in their specificity and operational utility.

Substantive principles are specific enough to rule something out. "We will not build AI systems intended to deceive users about their fundamental nature" is more substantive than "We are committed to trustworthy AI." Substantive principles create friction when someone proposes something that violates them. They generate meaningful review questions. They can be operationalized into checklists, impact assessments, and deployment gates.

Performative principles are aspirational statements that do not constrain any specific decision. They signal values without creating accountability. They are valuable for public relations purposes and occasionally useful for employee recruitment and morale, but they do not constitute governance.

The test of whether your organization's AI principles are substantive: has the principles review process ever blocked a revenue-generating project? If the answer is no, the principles are performing ethics rather than doing ethics.

Beyond principles, organizations need operational policies — specific rules governing concrete situations. What data sources may not be used to train models? What categories of AI application require ethics review? What human oversight is required before deploying a high-stakes automated decision system? What documentation must accompany a model into production? Policies translate principles from aspiration to operational requirement.

Ethics Review Processes

The ethics review process is the central mechanism through which organizational governance exercises judgment on specific AI projects. Like all governance mechanisms, it can be genuine or performative.

Genuine ethics review is embedded early in the project lifecycle — at the design stage, not the pre-launch stage. It is conducted by people with relevant expertise: not just lawyers checking regulatory compliance, but people with technical knowledge of AI systems, domain expertise in the application area, and perspective representing those likely to be affected. It has teeth: the authority to require design changes, impose monitoring requirements, or block deployment.

Performative ethics review occurs late in the development cycle, when the cost of changing course is prohibitive. It is conducted by lawyers focused on legal liability rather than ethical impact. It has no authority to require changes that would cost money or delay launch. It produces a document rather than a decision.

The timing problem is structural. Ethics review works best when it can shape fundamental design choices — what data to use, what the system is optimized for, what population it serves. These decisions are made early. If ethics review happens after the system is built, it can only recommend superficial adjustments.

AI Ethics Boards and Committees

Many organizations have established some form of AI ethics committee — a body charged with providing oversight, guidance, or review on AI governance matters. The design of these bodies varies enormously, and the design choices matter.

Composition determines the range of perspective the body brings. A committee composed entirely of engineers will analyze AI systems very differently than one that includes ethicists, social scientists, domain experts, legal professionals, and representatives of affected communities. Diversity of professional background and personal experience is not decorative — it is the mechanism through which different failure modes become visible.

Authority determines whether the committee's conclusions have any effect. Advisory committees with no formal authority over project approval are easily ignored by business units facing revenue targets. Committees with approval authority over high-risk deployments have structural leverage — though they may also face organizational resistance.

Independence determines whether the committee can deliver unwelcome findings. A committee whose members report to the same leaders whose projects it reviews is structurally compromised. Independence is most meaningful when committee members have employment protections, external credibility, and reporting lines that bypass the business units they oversee.

Mandate determines the scope of the committee's work. Does it review all AI projects? Only high-risk ones? Does it advise on policy, or does it review individual deployments, or both? A clear mandate prevents the committee from being overwhelmed with low-stakes reviews or sidelined from consequential ones.

Reporting line shapes both the committee's independence and its influence. A committee that reports to the Chief Ethics Officer (if one exists), or directly to the board of directors, has a different standing than one embedded within an engineering organization.

Responsible AI Teams

Beyond review processes, leading organizations have established dedicated responsible AI (RAI) teams — internal specialists whose job is to embed ethical consideration across the organization's AI work. These teams are distinct from governance committees in that they are operational rather than advisory: they work directly with engineering and product teams, developing tools, conducting assessments, and building the practical infrastructure of responsible practice.

Effective RAI teams combine multiple competencies: AI/ML technical expertise (to understand what systems actually do), social science and ethics training (to reason about impacts and values), legal and regulatory knowledge (to understand compliance requirements), and communication skills (to translate technical and ethical complexity for business audiences).

The placement of RAI teams in the organizational structure matters. Teams embedded within engineering organizations benefit from proximity and relationships but risk capture — pressure to become enablers of deployment rather than honest critics. Teams that sit within legal or compliance functions may have greater independence but lack the operational relationships to influence early design decisions. Some organizations have experimented with structures that give RAI teams dual reporting lines, or that position them within a dedicated trust and safety function with its own leadership.

Red-Teaming

Red-teaming — adversarial testing of AI systems by teams specifically tasked with finding failure modes, circumventing safety measures, and generating harmful outputs — has become a standard practice in responsible AI development, particularly for large language models and other generative systems.

The logic of red-teaming mirrors its origins in military and cybersecurity practice: if you want to know how your system will be attacked or misused, you need people actively trying to attack and misuse it. Red teams attempt to elicit harmful outputs, identify bias in system behavior, discover ways the system can be manipulated, and surface failure modes that standard testing might miss.

Effective red-teaming requires genuine adversarial intent — teams that are actually trying to break the system, not teams performing the ritual of adversarial testing while pulling their punches. It also requires that findings are taken seriously: red team results that sit in a report without changing deployment decisions provide no governance value.

Incident Response

Even well-governed AI systems cause harms. The question is not whether incidents will occur but whether the organization has the capacity to detect them, respond to them, remediate their effects, and learn from them.

AI incident response parallels cybersecurity incident response in structure: detection mechanisms (how does the organization learn that its AI system is causing harm?), escalation protocols (who is notified, in what order, with what urgency?), response authority (who has the power to modify or shut down a deployed system?), remediation processes (how are harms to affected parties addressed?), and post-incident review (what systemic changes prevent recurrence?).

The detection challenge is distinctive and underappreciated. Harms from AI systems are often diffuse, affect populations who lack voice or access, and manifest gradually rather than in dramatic events. Organizational incident response must be designed with this in mind — building feedback channels from affected communities, monitoring for distributional shifts in outcomes, and not waiting for harms to become news stories before taking them seriously.
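
The monitoring idea above can be sketched in a few lines. This is a hypothetical illustration, not a production monitoring stack: the group labels, threshold, and baseline figures are invented, and real detection would use proper statistical tests rather than a fixed absolute-difference cutoff.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparity_alert(baseline, current, threshold=0.10):
    """Flag groups whose approval rate has drifted from the baseline
    by more than `threshold` (absolute difference)."""
    return sorted(
        g for g in baseline
        if g in current and abs(current[g] - baseline[g]) > threshold
    )

# Hypothetical monitoring window: group B's approval rate has dropped.
baseline = {"A": 0.62, "B": 0.60}
current = approval_rates(
    [("A", True)] * 6 + [("A", False)] * 4 +   # A: 0.60 approval rate
    [("B", True)] * 4 + [("B", False)] * 6     # B: 0.40 approval rate
)
print(disparity_alert(baseline, current))  # -> ['B']
```

The point of the sketch is structural: a feedback channel only functions as incident detection if drift in outcomes is computed continuously and routed to someone with response authority.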

Procurement Standards

Organizations that deploy AI systems built by vendors are not absolved of governance responsibility. The AI system embedded in your hiring platform or your customer-service chatbot or your fraud-detection tool causes harm under your brand, to your customers, and in some jurisdictions, under your legal liability.

Responsible AI procurement standards establish the ethical and technical requirements that AI vendors must meet. They may include: requirements for transparency about training data and model behavior, fairness testing across relevant demographic groups, documentation (model cards, system cards), security standards, incident reporting obligations, and audit rights.

Procurement standards are a mechanism through which organizations can drive governance requirements up the supply chain — creating market incentives for vendors to invest in responsible AI practices. They are also a source of accountability when AI vendors' systems cause harm.

Documentation

You cannot govern what you have not documented. The documentation infrastructure of AI governance includes:

  • Model cards: standardized documents describing what a model is, how it was trained, what it was evaluated on, its known limitations, and its recommended and contra-indicated uses. Introduced by Mitchell et al. (2019), model cards have become a widely adopted tool for transparency.
  • Datasheets for datasets: analogous documentation for training data — what the dataset contains, how it was collected, who is represented and who is not, what biases it may embed.
  • AI impact assessments: structured evaluations of the potential harms a planned AI deployment may cause to different stakeholder groups, conducted before deployment.
  • Audit trails: records of decisions made during model development, including what alternatives were considered and why choices were made.

Documentation serves multiple governance functions: it makes governance review possible by making systems legible; it creates accountability by establishing a record; it enables post-incident analysis by preserving the history of design decisions.
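
As a sketch of how model-card documentation can be made machine-readable — and therefore checkable by governance tooling — the following encodes a minimal card as a Python dataclass. The fields are a small subset of those in published templates, and the example model and all field values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model-card record; real templates carry many more fields."""
    model_name: str
    intended_use: str
    training_data: str
    evaluation: str
    known_limitations: list = field(default_factory=list)
    out_of_scope_uses: list = field(default_factory=list)

    def is_complete(self):
        # A card with no documented limitations is usually a red flag,
        # not a clean bill of health, so an empty list fails the check.
        return bool(self.intended_use and self.training_data
                    and self.evaluation and self.known_limitations)

# Hypothetical card for an illustrative hiring model.
card = ModelCard(
    model_name="resume-screener-v2",
    intended_use="Rank applications for recruiter review; not for auto-rejection.",
    training_data="2019-2023 internal applications; known skew toward US applicants.",
    evaluation="Score disparities measured across gender and age bands.",
    known_limitations=["Untested on non-English resumes"],
    out_of_scope_uses=["Fully automated rejection"],
)
print(card.is_complete())  # -> True
```

Structuring the card as data rather than free text lets review processes enforce completeness automatically — an incomplete card can block a deployment gate instead of being discovered after an incident.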


Template Box: AI Project Ethics Checklist

Stage 1 — Project Initiation
- [ ] Is the purpose of this AI system clearly defined and documented?
- [ ] Have potential harms to users and affected communities been identified?
- [ ] Have relevant stakeholder groups been consulted or represented in design?
- [ ] Does the project require formal ethics review under organizational policy?

Stage 2 — Data
- [ ] Is the data sourced legally and with appropriate consent?
- [ ] Is the dataset representative of the populations the system will serve?
- [ ] Have known biases in the dataset been documented?
- [ ] Has a datasheet for the dataset been completed?

Stage 3 — Model Development
- [ ] Has the model been tested for performance disparities across demographic groups?
- [ ] Are model limitations and failure modes documented?
- [ ] Has the model been subject to adversarial/red-team testing?
- [ ] Has a model card been completed?

Stage 4 — Pre-Deployment Review
- [ ] Has an AI impact assessment been completed?
- [ ] Has the responsible AI team reviewed the system?
- [ ] Are human oversight mechanisms defined for high-stakes decisions?
- [ ] Are incident detection and response protocols in place?

Stage 5 — Deployment and Monitoring
- [ ] Is there a mechanism for affected parties to report harms?
- [ ] Are monitoring metrics defined and actively tracked?
- [ ] Is there a documented process for modifying or discontinuing the system?
- [ ] Are vendor/third-party AI components governed by contract to responsible AI standards?
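
A checklist like this can be wired directly into release tooling so that an incomplete review blocks deployment instead of merely producing a report. The sketch below encodes the Stage 4 items as a hypothetical gate; the item names and the review record are invented for illustration.

```python
# Hypothetical pre-deployment gate: Stage 4 of the checklist, encoded so
# that any unchecked item blocks release rather than generating a memo.
STAGE_4_ITEMS = [
    "impact_assessment_completed",
    "rai_team_review_done",
    "human_oversight_defined",
    "incident_response_in_place",
]

def deployment_gate(review):
    """Return (approved, missing_items). Any unchecked item blocks deployment."""
    missing = [item for item in STAGE_4_ITEMS if not review.get(item, False)]
    return (not missing, missing)

review = {
    "impact_assessment_completed": True,
    "rai_team_review_done": True,
    "human_oversight_defined": False,   # oversight plan still undefined
    "incident_response_in_place": True,
}
approved, missing = deployment_gate(review)
print(approved, missing)  # -> False ['human_oversight_defined']
```

Encoding the gate this way makes the governance property auditable: the deployment pipeline either enforces the checklist or it does not, and the answer is visible in the tooling rather than in a policy document.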


Section 6.3: Industry Self-Regulation — Standards and Codes of Practice

Between organizational self-governance and government mandate lies a middle layer: industry self-regulation. Companies within a sector — or across sectors — collectively develop standards, principles, certification schemes, or codes of practice that govern AI development and deployment. Understanding this layer requires engaging honestly with both its genuine contributions and its structural limitations.

The Case For Industry Self-Regulation

Proponents make three arguments for industry self-regulation that deserve serious consideration.

Speed: Government regulation moves through political processes — legislative drafting, public comment periods, implementation timelines — that take years. In sectors where technology evolves in months, waiting for government regulation means governing yesterday's systems while tomorrow's are already deployed. Industry self-regulation can respond more quickly to emerging challenges.

Technical expertise: Effective AI governance requires understanding how AI systems actually work. Regulatory agencies often lack the specialized technical staff to evaluate complex AI systems from the inside. Industry participants, by contrast, have deep technical knowledge. Self-regulatory bodies can leverage that expertise to develop standards that are technically sound in ways that legislative mandates often are not.

Flexibility: Hard law must be specific enough to be enforceable, which often means it is specific to a particular technical configuration that becomes obsolete. Industry standards can be updated more readily, and can be structured as principles rather than prescriptions, allowing organizations to meet them in context-appropriate ways.

The Case Against

The arguments against industry self-regulation are equally serious.

Capture: When an industry regulates itself, the regulated party controls the regulator. The resulting standards tend to reflect the interests and perspectives of the most powerful industry players. New entrants, smaller firms, civil society, and affected communities rarely have equivalent influence over self-regulatory bodies.

No enforcement: Most industry self-regulation lacks meaningful enforcement mechanisms. A company that fails to meet an industry code of conduct faces no fine, no lawsuit, no license revocation. The consequence is reputational — and only if someone is watching. In practice, the reputational cost of ethics washing is often lower than the cost of genuine ethical practice.

Lowest-common-denominator standards: When standards must achieve consensus among a diverse group of industry players, the temptation is to adopt language vague enough that everyone can claim compliance. Standards that everyone meets are often standards that require nothing.

Historical precedent is discouraging. In financial services, industry self-regulation failed to prevent the practices that caused the 2008 global financial crisis. In media, self-regulation of advertising standards has been persistently ineffective. In tobacco, industry self-regulation produced the Tobacco Industry Research Committee (later the Council for Tobacco Research) — a body created explicitly to cast doubt on scientific findings about smoking's harms. The pattern is clear: when commercial interests conflict with the purpose of self-regulation, commercial interests typically prevail.

Key Industry Initiatives

Partnership on AI (PAI) — Founded in 2016 by Amazon, Facebook, Google, IBM, Microsoft, and Apple (joined shortly after), PAI has grown to include academic institutions, civil society organizations, and companies. Its eight tenets address safety, fairness, transparency, and human-AI collaboration. PAI has produced useful research outputs and serves as a multi-stakeholder conversation forum. It has no enforcement mechanism and its outputs are non-binding.

IEEE Standards Association — IEEE's Ethically Aligned Design framework (now in its second edition) provides a comprehensive normative vision for human-centered AI. More operationally, the IEEE P7000-series standards address specific topics including algorithm bias (P7003), data privacy (P7002), and transparency (P7001). IEEE standards carry significant technical credibility and can be incorporated by reference into procurement contracts and regulations.

ISO/IEC JTC 1/SC 42 — The joint technical committee responsible for international AI standardization has produced standards covering AI concepts and terminology (ISO/IEC 22989), AI risk management (ISO/IEC 23894), and AI bias in datasets (ISO/IEC TR 24027), among others. ISO standards are developed through national standards bodies and carry quasi-regulatory weight in many jurisdictions.

NIST AI Risk Management Framework (AI RMF) — Published in January 2023, the NIST AI RMF is the most practically oriented organizational governance framework currently available. It structures AI risk management around four core functions: Govern (establishing the organizational culture, policies, and accountability structures for responsible AI), Map (understanding and categorizing AI risks in context), Measure (analyzing and assessing those risks), and Manage (prioritizing and responding to risks). The AI RMF is voluntary but has been widely adopted as an organizational baseline, and is increasingly referenced in government contracts and regulatory guidance.
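
A rough way to see the four functions as an operational loop is to pass a single risk record through them in order. The sketch below is purely illustrative: the function bodies, the risk item, and every recorded value are invented, and the framework itself specifies categories and subcategories in far more detail than a four-step pipeline suggests.

```python
# Illustrative skeleton of the AI RMF's four functions applied to one risk.
def govern(risk):
    # Govern: assign ownership under organizational policy.
    risk["owner"] = "responsible-AI lead"
    return risk

def map_risk(risk):
    # Map: place the risk in its deployment context.
    risk["context"] = "credit-scoring model, consumer lending"
    return risk

def measure(risk):
    # Measure: assess the risk with defined metrics.
    risk["severity"] = "high"
    return risk

def manage(risk):
    # Manage: decide and record the response.
    risk["response"] = "add human review before adverse decisions"
    return risk

risk = {"description": "disparate error rates across age groups"}
for step in (govern, map_risk, measure, manage):
    risk = step(risk)
print(sorted(risk))  # record now carries entries from all four functions
```

The structural point is that the four functions are sequential and cumulative: a risk that is measured but has no owner, or managed but never mapped to its context, has fallen out of the loop.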

Responsible AI Institute — Offers a certification scheme for organizations, providing structured assessments of responsible AI practice against defined criteria. Unlike frameworks that merely recommend, certification schemes create external accountability — though the rigor of that accountability depends on the independence and methodology of the certifying body.

Connection to Ethics Washing

The gap between industry self-regulation's substance and its presentation is a central site of what we call ethics washing — the performance of ethical commitment without the substance. When an organization publishes AI principles, joins a multi-stakeholder initiative, and adopts a voluntary framework, but none of these activities constrains any specific decision or creates any accountability for harmful outcomes, the ethics machinery is decorative. Section 6.8 returns to the cultural conditions that distinguish genuine from performative governance.


Section 6.4: National and Regional Governance Frameworks

AI governance looks very different depending on which side of a border you stand on. National and regional governance frameworks reflect distinct political philosophies, institutional structures, and assessments of where the balance between innovation and harm prevention should fall. For business professionals operating across jurisdictions — or advising organizations that do — understanding this landscape is operationally essential.

The European Union: The EU AI Act

The European Union's AI Act, adopted in 2024, is the world's first comprehensive legal framework specifically governing AI systems. It represents the EU's characteristically precautionary, rights-based approach to technology governance — the same philosophy that produced the GDPR for data privacy.

The AI Act organizes AI systems into four risk tiers:

Unacceptable risk — systems that pose such fundamental threats to human rights and democratic values that they are prohibited outright. Prohibited applications include: AI systems that use subliminal techniques to manipulate behavior in ways that cause harm; systems that exploit vulnerabilities of specific groups; social scoring systems used by public authorities to evaluate citizens based on social behavior; and (with limited law-enforcement exceptions) real-time remote biometric identification in public spaces.

High risk — systems used in contexts where errors could have serious consequences for health, safety, or fundamental rights. This includes AI in critical infrastructure, education, employment (especially CV screening and candidate ranking), essential services (credit scoring, insurance), law enforcement, migration and border management, and administration of justice. High-risk systems face the most demanding requirements: pre-market conformity assessment, technical documentation, data governance requirements, transparency obligations, human oversight measures, accuracy and robustness requirements, and registration in an EU database.

Limited risk — systems with specific transparency obligations. AI systems that interact directly with humans (chatbots) must inform users that they are interacting with an AI. AI-generated content must be labeled as such.

Minimal risk — AI systems that pose no significant risks and face no specific requirements beyond existing law.
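The tiering logic lends itself to a simple illustration. The sketch below shows how an organization might encode a first-pass risk triage for an internal AI inventory. The use-case keys and the mapping are illustrative only: actual classification under the Act requires legal analysis of the prohibited practices and Annex III categories, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, simplified for internal triage."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # pre-market conformity assessment, etc.
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no AI-specific requirements

# Illustrative mapping only -- real classification requires legal analysis,
# not a lookup table. Keys are hypothetical internal use-case labels.
_TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "subliminal_manipulation": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "border_management": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """First-pass tier for an internal AI inventory. Unknown use cases
    default to HIGH so they get reviewed rather than waved through."""
    return _TIER_BY_USE_CASE.get(use_case, RiskTier.HIGH)
```

Note the conservative default: a triage process that defaults unknown systems to the minimal tier would reproduce, in miniature, the definitional gaming problem discussed later in this chapter.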

The EU AI Act also establishes a comprehensive governance architecture: national competent authorities in each member state, an EU AI Office for oversight of general-purpose AI models, a European AI Board for coordination, and scientific advisory panels. Enforcement penalties reach up to 35 million euros or 7% of global annual turnover for the most serious violations.

For business professionals, the EU AI Act is significant not only for EU operations but as a global standard-setter: companies seeking EU market access must comply, regardless of where they are incorporated.

The United States: A Fragmented Landscape

In contrast to the EU's comprehensive approach, US AI governance is a patchwork of executive action, sector-specific regulation, and state-level law — without, as of 2026, any comprehensive federal AI legislation.

Executive Order on AI (October 2023): President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directed federal agencies to take a range of actions: developing safety standards, addressing biosecurity and cybersecurity risks, promoting privacy-preserving research, advancing equity in federal AI use, and strengthening the workforce. The Order had significant scope — particularly in requiring that developers of the most powerful AI models report safety test results to the government — but its reach was limited to federal agencies and government contractors; it did not directly regulate private-sector AI development. Its rescission by the incoming administration in January 2025 illustrated a structural weakness of executive-action governance: what one president orders, the next can undo.

FTC Enforcement: The Federal Trade Commission has used its existing authority under Section 5 of the FTC Act — which prohibits unfair or deceptive practices — to pursue AI-related harms. The FTC has brought actions against companies for deceptive claims about AI capabilities, for algorithmic systems that discriminate, and for AI-facilitated privacy violations. This enforcement approach is nimble but reactive: it addresses harm after it occurs rather than preventing it prospectively.

EEOC Guidance: The Equal Employment Opportunity Commission has issued guidance on AI in employment, clarifying that existing employment discrimination law applies to algorithmic hiring tools. Employers using AI-based screening systems are responsible for discriminatory outcomes, regardless of whether the discriminatory mechanism is human or algorithmic.

State-level regulation: In the absence of federal legislation, states have moved to fill gaps. Illinois' Biometric Information Privacy Act (BIPA) regulates the collection and use of biometric data with strict consent requirements and a private right of action. New York City's Local Law 144 requires bias audits of AI tools used in employment decisions. Colorado's SB 21-169 regulates the use of external consumer data and algorithms in insurance decisions. The result is a growing compliance patchwork that imposes particular burdens on companies operating nationally.

China: State-Led Governance

China's approach to AI governance reflects its broader model of state-directed technology development: the government sets strategic direction, promotes national champions, and regulates to prevent social instability — while actively using AI for state surveillance and control in ways that would be prohibited in liberal democratic contexts.

China's AI Ethics Principles (2021) established national-level commitments to human-centeredness, fairness, and transparency. More concretely, China has enacted regulations targeting specific AI applications: regulations on recommendation algorithms (2022) require transparency about algorithm logic and prohibit the use of personal information to charge different users different prices; regulations on deepfake technology (2022) require labeling of synthetic media and prohibit uses that could endanger national security or harm the public interest; regulations on generative AI (2023) require security assessments before public release and prohibit content that contradicts Party leadership or socialist values.

China's approach is characterized by specificity, speed, and enforceability in some domains — but its governance is fundamentally shaped by the state's interest in maintaining political control, which means that the most consequential AI applications (mass surveillance, social credit, predictive policing) are governed not through restraint but through state sanction.

The United Kingdom: Pro-Innovation Post-Brexit

Post-Brexit, the UK has positioned itself as a jurisdiction that can move faster than the EU and be more accommodating to AI innovation. The UK's approach is sector-by-sector rather than horizontal: existing regulators (the Financial Conduct Authority, the Information Commissioner's Office, the Care Quality Commission) apply their existing frameworks to AI within their domains, guided by cross-cutting AI principles.

The UK established an AI Safety Institute in 2023 — the world's first national body specifically dedicated to evaluating the safety of advanced AI models. The Institute conducts evaluations of frontier AI systems and publishes findings, positioning the UK as a credible technical voice in international AI governance.

The UK approach has genuine advantages in flexibility and speed. Its limitation is coordination: sector-by-sector regulation produces inconsistency at the boundaries between sectors, and the absence of a horizontal AI law leaves gaps that cross-sector AI applications can fall through.

Canada: Evolving Frameworks

Canada has taken a dual approach: a voluntary code of conduct (2023) for companies developing and deploying advanced generative AI, alongside proposed comprehensive legislation, the Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27. AIDA would have established requirements for high-impact AI systems, including risk assessments, mitigation measures, monitoring, and transparency. Bill C-27 died on the order paper when Parliament was prorogued in early 2025, leaving Canada without comprehensive AI legislation as of 2026.


Comparative Table: AI Governance Across Jurisdictions

Dimension | European Union | United States | China | United Kingdom
Approach | Comprehensive horizontal legislation | Sectoral + state patchwork | State-directed, application-specific | Sector-by-sector + pro-innovation
Legal basis | EU AI Act (2024) | FTC Act, sector law, state law | National regulations by application | Existing sector regulation
Risk framework | Four-tier risk classification | No unified framework | Application-specific | Context-dependent
Enforcement | Fines up to 7% global turnover | FTC enforcement, civil litigation | Administrative penalties | Sector regulator enforcement
Key values emphasized | Fundamental rights, precaution | Innovation, existing rights law | Social stability, Party leadership | Innovation, safety research
High-risk AI requirements | Extensive pre-market requirements | Voluntary/sector-specific | Registration and assessment | Sector regulator guidance
Prohibited applications | Social scoring, biometric surveillance, manipulation | Relatively few explicit prohibitions | Content against Party values | Limited explicit prohibitions
International posture | Standard-setter, Brussels effect | Influence through industry leadership | Parallel global engagement | Technical safety leadership

Section 6.5: International Governance Frameworks

AI systems do not observe national borders. A model trained in the United States on data from around the world, deployed via a platform accessible globally, producing effects that play out in dozens of jurisdictions simultaneously — this is not an edge case. It is the normal operating condition of contemporary AI. National governance frameworks, operating within territorial boundaries, are structurally limited in their ability to govern systems that operate globally. International governance frameworks attempt to fill this gap, with mixed success.

OECD AI Principles (2019)

The Organisation for Economic Co-operation and Development AI Principles represent the first intergovernmental agreement on AI. Adopted in May 2019 by OECD member states and subsequently endorsed by G20 leaders, the Principles establish five value-based principles (inclusive growth and well-being; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability) and five recommendations for national AI policies.

The OECD Principles are soft law — they are not binding and carry no enforcement mechanism. Their significance lies in establishing normative consensus: by securing formal endorsement from 42 governments (including OECD members and partner countries), they created a shared reference point that has influenced subsequent national regulation and industry frameworks.

UNESCO Recommendation on AI Ethics (2021)

The UNESCO Recommendation on the Ethics of AI, adopted by all 193 member states in November 2021, is the first global normative framework for AI ethics. It is significantly more comprehensive than the OECD Principles: 40 pages of detailed text addressing values and principles, policy areas (including data governance, environment and ecosystems, gender, health, and education), and governance recommendations.

The Recommendation explicitly addresses dimensions often absent from Western-dominated governance frameworks, including environmental sustainability, cultural diversity, and the particular vulnerabilities of lower-income countries without advanced AI industries. Its adoption by all UNESCO member states — including China and Russia (the United States, which had withdrawn from UNESCO in 2018, was not a member at the time of adoption and rejoined in 2023) — gives it unusual geographic reach. Its status as a recommendation rather than a treaty means it lacks enforcement, but member states have committed to self-assessment processes to evaluate implementation.

G7 Hiroshima AI Process (2023)

At the 2023 G7 summit in Hiroshima, leaders launched an international dialogue on the governance of advanced AI — particularly generative AI. This produced the Hiroshima AI Process Guiding Principles and Code of Conduct for organizations developing advanced AI systems, endorsed in October 2023. The code addresses safety, transparency, information-sharing about AI risks, and responsible use of capabilities.

The Hiroshima process is notable for moving quickly — from summit commitment to published code within months — and for explicitly addressing frontier AI development rather than AI in general. Its limitation is its membership: the G7 represents wealthy democracies, not the full diversity of national interests in AI governance.

Global Partnership on AI (GPAI)

The Global Partnership on AI, launched in 2020 under Canadian and French leadership, is a multi-stakeholder forum that includes governments, industry, civil society, and academia from its 29 member states. GPAI conducts working group research on priority topics and provides a forum for international knowledge-sharing. It has produced useful research outputs but lacks the mandate or authority to set binding standards.

UN AI Advisory Body

In 2023, UN Secretary-General António Guterres established a High-level Advisory Body on Artificial Intelligence, tasked with making recommendations on international AI governance. The Body's interim report identified the need for international governance mechanisms that go beyond soft-law frameworks — including proposals for a potential UN-affiliated international AI governance entity. The debate over what such an entity should look like, and who should control it, reflects deep tensions among major AI powers.

The Coordination Challenge

The fragmentation of international AI governance reflects real political constraints. Governance frameworks embody values, and the major AI powers have genuinely different values. The EU prioritizes fundamental rights and democratic accountability. The United States emphasizes market freedom and national security. China pursues state-directed development and social stability. These are not merely rhetorical differences — they reflect different political systems with different visions of the relationship between technology and human society.

International AI governance must navigate these differences while moving at something approaching AI's pace. The historical analogues are not encouraging: international governance of nuclear technology, climate, and digital trade has been slow, contested, and imperfect. The expectation that AI will be governed more effectively by international institutions is not well-supported by historical precedent.

This connects directly to Theme 5: the question of whose values dominate international AI governance standards is fundamentally a question about global power. Western-dominated international bodies produce governance frameworks that reflect Western liberal values. Chinese participation in international AI governance advocacy increasingly reflects Chinese state values. The majority of the world's population, located in the Global South, has relatively limited influence over frameworks that will nonetheless shape their AI future.


Section 6.6: The Governance Gap — Why Governance Struggles to Keep Pace

The governance gap — the widening distance between AI's development and deployment and governance institutions' capacity to address it — is not a temporary lag that will close as regulators catch up. It is a structural condition produced by multiple reinforcing dynamics. Understanding its anatomy is the prerequisite for addressing it.

The Pacing Problem

The most visible dimension of the governance gap is temporal. Legislative and regulatory processes were designed for a world where the objects of regulation change slowly. Chemical safety regulation, securities regulation, food and drug approval — these frameworks operate on timelines that made sense when the regulated technologies evolved over years or decades. AI systems can change meaningfully in months. A regulatory framework designed for one generation of AI may be obsolete before it takes effect.

The pacing problem is acute for generative AI in particular. The GPT series of language models went from GPT-2 (2019, partially withheld due to misuse concerns) to GPT-4 (2023, deployed to hundreds of millions of users) in four years — a period during which no comprehensive AI regulation was enacted anywhere in the world. By the time the EU AI Act's high-risk requirements take full effect, the AI landscape will look substantially different from the landscape that motivated the Act's drafting.

The Expertise Gap

Effective regulation requires understanding the regulated object. AI systems — particularly large neural networks — are technically complex in ways that most legislators and many regulators do not have the background to fully grasp. This expertise gap has consequences: regulations may be technically imprecise, governance requirements may be administratively burdensome in ways that disadvantage smaller players without improving safety, and regulators may be unable to independently evaluate industry claims about what is and is not technically feasible.

The expertise gap also flows in the other direction: technical AI professionals are often unfamiliar with regulatory design, legal frameworks, and the normative dimensions of governance. Building the population of people who are literate in both technical and governance dimensions of AI is a slow process that does not match the speed of deployment.

The Capture Problem

When industries have the resources to shape the processes that govern them — through lobbying, through participation in advisory bodies, through the revolving door between regulatory agencies and industry — the resulting regulations tend to reflect industry preferences. This dynamic, known as regulatory capture, is well-documented across many sectors and is a significant risk in AI governance.

Industry actors have structural advantages in influencing AI governance: they possess technical expertise that regulators lack, they have financial resources to fund lobbying and policy engagement, and they can credibly claim that overly restrictive regulation will drive development to less regulated jurisdictions. The result is governance frameworks that often incorporate industry's preferred definitions, exemptions, and compliance mechanisms.

The Jurisdiction Problem

AI systems operate globally; governance systems are territorial. A facial recognition system deployed by a US company, trained on data from multiple continents, used by clients in multiple countries, produces legal ambiguity at every border. Which jurisdiction's law applies? Which regulator has authority? Who can bring a claim on behalf of an affected person?

These questions are not merely theoretical. They are actively litigated and actively exploited: companies seeking to minimize regulatory burden can structure their operations to take advantage of gaps between jurisdictions. Data protection law and AI governance law face the same challenge: technology has dramatically lowered the cost of cross-border operations while governance frameworks remain territorially bounded.

The Definitional Problem

Defining "AI" for regulatory purposes is harder than it appears. Too narrow a definition — limited to, say, neural networks — fails to capture other algorithmic systems that produce similar harms. Too broad a definition — covering any automated decision-making — captures spreadsheet macros and conventional software in ways that are neither useful nor manageable. The definitional debate is not merely semantic: it determines which systems face governance requirements, and industry participants have strong incentives to advocate for definitions that exclude their products.

The EU AI Act's definition — "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments" — represents one carefully negotiated approach, but its application to specific systems will be subject to ongoing interpretation and litigation.

The Enforcement Problem

Governance frameworks without enforcement are aspirational documents. Enforcement of AI governance is complicated by several structural factors: AI systems are technically opaque, making it difficult to determine from the outside whether they comply with specified requirements; regulatory agencies typically lack the resources to conduct systematic investigations of large numbers of AI deployments; and the harms from AI are often diffuse and difficult to attribute to a specific system or decision.

The Adversarial Problem

The final dimension of the governance gap is perhaps the most difficult: sophisticated actors optimize for the appearance of compliance rather than its substance. Organizations subject to governance requirements have strong incentives to meet the letter of requirements in ways that minimize impact on operations. Ethics washing — discussed throughout this textbook — is precisely this phenomenon: producing the documentation, establishing the committees, running the review processes, while ensuring none of these activities meaningfully constrain revenue-generating decisions.

Governance systems designed without accounting for adversarial optimization will systematically be gamed by the most sophisticated actors — precisely those whose AI systems are often most consequential.


Section 6.7: Effective Governance Design — Principles for Organizations

Understanding why governance fails is a precondition for designing governance that works. This section moves from description to prescription: what principles should guide organizations in building genuine AI governance?

Authority Matters

Governance structures without authority are rituals. The fundamental question is whether the people and bodies responsible for AI governance can actually stop things from happening. Can the ethics review committee require that a high-risk project be redesigned before deployment? Can the responsible AI team flag a model for additional testing, and will that flag delay deployment? Can the Chief Ethics Officer raise concerns directly to the board?

The test of authority is not whether it is claimed in an organizational chart. The test is whether authority is exercised when the cost of exercising it is high — when a revenue-generating project is at stake, when a powerful business unit is pushing for deployment, when the timeline is tight. Organizations should be able to point to specific instances in which governance authority was exercised against economic pressure. If no such instances exist, the authority is notional.

Independence Matters

Ethics review cannot be meaningfully independent if reviewers' employment, compensation, and advancement are controlled by the same leaders whose projects they review. Independence is a structural condition, not a personal virtue. Even principled, courageous individuals will shade their assessments when their job security is at risk.

Genuine independence requires: reporting lines that bypass business units under review; employment protections for ethics function staff; external credibility that gives ethics professionals standing outside the organization; and board-level visibility for AI governance that is not filtered through business unit leadership.

Diversity Matters

AI governance bodies that lack diversity will systematically miss the failure modes that most affect underrepresented populations. A technology ethics committee composed entirely of white men from engineering backgrounds will analyze algorithmic bias differently than a committee that includes people who have experienced discrimination in hiring, credit, or law enforcement. Diversity of professional background, personal experience, and cultural context is not aesthetic — it is a functional requirement for adequate governance.

This connects to Theme 4 (Diversity and Inclusion): the people most likely to be harmed by poorly governed AI systems are often the people least represented in the rooms where governance decisions are made.

Documentation Matters

Governance requires a record. Without documentation, there is no accountability: no ability to verify what was decided, when, and on what basis; no audit trail for post-incident review; no mechanism for external oversight. Documentation should be a first-order governance requirement, not an administrative afterthought.

The NIST AI RMF's governance function makes documentation foundational: policies, procedures, roles, responsibilities, risk tolerances, and governance decisions should all be recorded and accessible to those with legitimate oversight responsibilities.
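What a first-order documentation requirement might look like in practice can be sketched as a minimal, append-only decision record. The field names below are illustrative, not drawn from the NIST AI RMF or any other standard, but they capture the elements an audit trail needs: what was decided, when, by whom, on what basis, and with what recorded dissent.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: records are immutable once written
class GovernanceDecision:
    """One entry in an append-only governance log (illustrative fields)."""
    decided_on: date
    system: str
    decision: str        # e.g. "approved", "approved with conditions", "blocked"
    rationale: str       # the basis for the decision, stated at the time
    decided_by: str      # the accountable body or individual
    dissents: tuple = () # recorded disagreements, preserved verbatim

# A hypothetical log entry for a blocked deployment.
log: list[GovernanceDecision] = []
log.append(GovernanceDecision(
    decided_on=date(2026, 1, 15),
    system="resume-ranker-v2",
    decision="blocked pending bias audit",
    rationale="Disparate selection rates by gender in pre-deployment testing.",
    decided_by="AI Review Committee",
))
```

The design choice worth noting is immutability: an audit trail that can be silently edited after the fact provides the appearance of accountability without its substance.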

Accountability Matters

Accountability without consequences is compliance theater. For AI governance to function, individuals and organizational units must bear real consequences for governance failures — not just for regulatory violations, but for ethical failures that do not (yet) violate any law. Accountability structures should specify: who is responsible for what aspects of AI governance, how that responsibility is tracked, and what consequences follow from governance failure.

Iteration Matters

AI systems are not static; they change through retraining, through changes in use, and through changes in the environments they operate in. Governance must keep pace: regular reviews of deployed systems, updating of policies as technology evolves, and processes for incorporating lessons from incidents and near-misses.

Transparency Matters

Stakeholders cannot hold organizations accountable for decisions they don't know about. Transparency obligations in AI governance serve multiple functions: they enable external scrutiny, they create market incentives for responsible practice, and they empower affected individuals to understand and contest automated decisions that affect them. Transparency must be calibrated — not every implementation detail should be public — but the baseline presumption should favor disclosure.

The NIST AI RMF as a Practical Framework

The NIST AI Risk Management Framework provides the most practically developed organizational governance framework currently available. Its four functions — Govern, Map, Measure, Manage — provide a coherent structure for organizational AI risk management.

Govern establishes the organizational foundation: the culture, policies, roles, and accountability structures that make the other functions possible. Govern asks: does the organization have the institutional capacity to take AI risk seriously?

Map provides situational awareness: understanding what AI systems exist, in what contexts they operate, and what risks they pose. Map asks: do we know what AI we are running and what it might do?

Measure develops the analytical tools to quantify and assess AI risks. Measure asks: how bad is the risk, for whom, and under what conditions?

Manage prioritizes and responds to identified risks. Manage asks: what are we doing about it, and is it working?

The AI RMF is not a compliance checklist — it is a framework for building organizational capability. Organizations at different stages of AI governance maturity will implement it differently. Its value lies in providing a common vocabulary and structure for a discipline that has not historically had either.
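The four functions can be made concrete with a minimal risk-register sketch. The field names and scoring scheme below are illustrative, not drawn from the RMF itself; the point is how Map (system and context), Measure (severity and likelihood), Manage (prioritized response), and Govern (named ownership) appear as distinct concerns even in a toy implementation.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in an AI risk register, loosely organized around the
    NIST AI RMF functions. Fields and scoring are illustrative."""
    system: str          # Map: which system, in what context
    description: str     # Map: what could go wrong, and for whom
    severity: int        # Measure: 1 (negligible) .. 5 (critical)
    likelihood: int      # Measure: 1 (rare) .. 5 (near-certain)
    mitigation: str = "" # Manage: the current response
    owner: str = ""      # Govern: the accountable individual

    @property
    def score(self) -> int:
        # A simple severity x likelihood product, a common toy heuristic.
        return self.severity * self.likelihood

def prioritize(register: list[AIRisk]) -> list[AIRisk]:
    """Manage begins with prioritization: highest-scoring risks first."""
    return sorted(register, key=lambda r: r.score, reverse=True)
```

Even this sketch surfaces a governance question the framework forces organizations to answer explicitly: who sets the severity and likelihood scales, and what score triggers action is a risk-tolerance decision belonging to the Govern function, not to the engineers filling in the register.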


Section 6.8: Governance as Culture, Not Just Structure

The sections above have focused on structural governance: the committees, policies, processes, and frameworks that constitute the formal machinery of AI accountability. Structural governance is necessary. It is not sufficient.

The Limits of Structural Governance

Rules can always be gamed. Any governance requirement specific enough to be meaningful is also specific enough to be met in a way that satisfies its letter while violating its spirit. Documentation requirements produce documentation. Ethics review requirements produce reviews. Neither requirement necessarily produces the thing it was designed to produce: genuine ethical consideration of consequential decisions.

The limitations of structural governance are visible in every major AI controversy. Facebook had elaborate content governance structures — Community Standards, content policy teams, an Oversight Board — that did not prevent documented contributions to genocidal violence. Amazon built an AI hiring tool with demonstrated gender bias despite its internal review capacity — a tool it ultimately scrapped, but only after the bias was discovered. Structural governance that is not backed by a genuine organizational culture of ethical consideration tends to produce the forms of accountability without the substance.

The Culture Dimension

What does a genuine AI governance culture look like? Several markers are observable:

Psychological safety for ethical dissent: Do employees who raise ethical concerns about AI systems feel safe doing so? Do they get heard? Or does raising concerns get you labeled a blocker, reassigned, or managed out? Organizations where ethical concerns are welcomed and taken seriously produce better governance than organizations where raising concerns is career risk.

Leadership modeling: Do senior leaders talk about AI ethics as a genuine constraint on business decisions, or only as a public relations asset? Do they defer to ethics review even when it costs revenue? Do they ask ethics questions in meetings about AI products? Tone at the top is not just metaphor — research consistently shows that employee behavior reflects what they observe in leaders.

Ethics in incentive structures: What gets rewarded? If engineers who ship fast are celebrated and engineers who raise ethics concerns are sidelined, the incentive structure is speaking louder than the ethics policy. Genuine governance culture aligns incentives with governance values.

Genuine diversity of voice: Do the people most likely to be harmed by poorly governed AI have voice in governance processes? This is both a structural and a cultural question: structures can create seats; culture determines whether those seats carry real influence.

Structural and Cultural Governance Must Reinforce Each Other

The relationship between structural and cultural governance is mutual. Structures shape culture: when an organization invests in genuine ethics review, with real authority and real consequences, it signals that ethics is not a decoration. When leaders defer to governance processes even at cost, they model the behavior they want to see. Over time, structures that are genuinely exercised become embedded in organizational culture.

Culture shapes structures: organizations with strong ethical cultures tend to build governance structures that reflect that culture. They staff ethics functions with people who have real expertise and real authority. They design review processes that surface problems rather than paper over them.

The connection to Chapter 22 on whistleblowing is direct: structural governance and cultural governance must reinforce each other to create conditions where employees can surface problems before they become catastrophes. Whistleblowing is what happens when structural governance has failed and cultural governance is insufficient to surface problems through normal channels. The goal is organizations where problems are surfaced through governance before they require whistleblowing.


Section 6.9: The Future of AI Governance

AI governance is a field in rapid evolution. Several trajectories deserve attention.

Adaptive Regulation

Traditional regulation is static: a regulation is written, goes through public comment, and takes effect — after which it may remain unchanged for years. AI's pace of development demands regulatory approaches that can adapt more quickly. Regulatory sandboxes — controlled environments where companies can deploy AI systems under enhanced regulatory supervision and reduced liability — allow regulators to learn alongside industry while limiting exposure. The EU AI Act incorporates regulatory sandboxes explicitly. Iterative rule-making — regulatory processes that build in scheduled review and update cycles — is another adaptive mechanism. Neither is a complete solution, but both represent more realistic approaches to the pacing problem than static regulation.

Algorithmic Auditing

Third-party auditing of AI systems — conducted by independent organizations with the technical expertise to assess system behavior — is an emerging mechanism for creating accountability outside both organizational self-governance and government regulation. Algorithmic auditors can assess systems for bias, accuracy, transparency, and compliance with stated design specifications. The field is nascent: there are no universally accepted audit methodologies, auditor qualifications are inconsistently defined, and the relationship between audit findings and regulatory consequence is unclear. But the trajectory is toward institutionalization — the EU AI Act incorporates third-party conformity assessment for high-risk systems, and several civil society organizations have developed independent audit methodologies.

AI Governance as a Profession

The demand for people who can design, implement, and operate AI governance systems has grown dramatically and shows no signs of declining. The professional roles include: AI Ethics Officers (responsible for organizational ethics strategy and culture), Responsible AI Leads (embedded within engineering teams to provide applied ethics guidance), AI Compliance Officers (responsible for regulatory compliance), and AI Auditors (assessing AI systems for risk and compliance). Universities and professional organizations are beginning to develop formal credentials in these areas. AI governance is becoming a profession with its own knowledge base, standards of practice, and career paths.

International Harmonization

Progress toward common international AI governance standards continues, but slowly. The dynamic follows a familiar pattern from other technology governance domains: industry prefers harmonized global standards to reduce compliance burden; governments prefer standards that reflect their political values; civil society advocates for standards that genuinely protect rights. Progress is possible — the GDPR's extraterritorial reach has effectively made it a global standard for companies serving EU users — but it requires both political will and technical agreement, which have been difficult to achieve simultaneously.

Democratic Legitimacy

Perhaps the most fundamental question in the future of AI governance is who should decide how AI is governed. Current governance frameworks are largely the product of expert and governmental processes, with limited direct participation by the people most affected by AI systems. The question of democratic legitimacy — whether governance decisions reflect the genuine preferences and values of the people they affect — is not resolved by technical excellence or legal sophistication. It requires ongoing engagement with questions of political philosophy: What are the appropriate mechanisms for democratic input into complex technical governance? How do we ensure that those with the least power have a voice in processes where those with the most power have the strongest incentive to dominate? These are not questions with clean answers, but they are questions that genuine AI governance cannot avoid.


Discussion Questions

  1. Google's AI ethics advisory board lasted one week. What structural design changes — to composition criteria, authority, mandate, or process — might have made it more durable? What would Google's leadership have needed to commit to for the board to be genuine rather than performative?

  2. The EU AI Act and US regulatory approach represent genuinely different philosophies about the appropriate relationship between innovation and precaution. If you were advising a global company building AI systems, how would you characterize the practical implications of these differences for product design and governance?

  3. Industry self-regulation has a poor historical track record in sectors from tobacco to finance. What conditions, if any, would be required for AI industry self-regulation to be genuinely effective? Are those conditions currently present?

  4. The NIST AI RMF identifies "Govern" — establishing organizational culture, policies, and accountability structures — as the foundational function that makes the other three (Map, Measure, Manage) possible. What does this imply about the sequencing of AI governance implementation in an organization that is starting from scratch?

  5. Consider an organization where the ethics review process has never blocked a revenue-generating project. Is this evidence that the organization's AI systems are all ethical, or evidence that the governance system is not functioning? How would you determine which interpretation is correct?

  6. The governance gap is not simply a matter of regulators being slow. It reflects structural conditions — the pacing problem, the expertise gap, the capture problem, the jurisdiction problem — that are genuinely difficult to address. Which of these structural conditions do you think is most amenable to near-term solutions, and what would those solutions look like?

  7. Whose values should dominate international AI governance standards? What mechanisms could give lower-income countries and marginalized communities more meaningful voice in international governance processes that currently reflect the priorities of wealthy nations and large corporations?


Chapter 6 connects forward to Chapter 21 (Corporate Governance and AI) for deep treatment of board-level AI oversight, Chapter 22 (Whistleblowing and Ethical Dissent) for the cultural conditions of genuine governance, and Chapter 32 (Global AI Governance) for extended analysis of international coordination challenges.