Case Study 1: Google's AI Ethics Board — 10 Days from Launch to Dissolution


Introduction

On March 26, 2019, Google announced the formation of the Advanced Technology External Advisory Council (ATEAC) --- an eight-member board tasked with providing "guidance and recommendations" on the ethical implementation of Google's AI Principles, which the company had published nine months earlier. By April 4, 2019 --- just ten days after its announcement --- Google dissolved the council.

ATEAC's rapid collapse is perhaps the most prominent example of AI ethics governance done wrong. Its failure did not stem from the concept of an external advisory council, which remains a sound governance mechanism. It stemmed from fundamental errors in design, composition, process, and stakeholder engagement --- errors that offer direct and actionable lessons for any organization seeking to operationalize responsible AI.

This case study examines what went wrong, why, and what business leaders should learn from it.


Background: Google's AI Principles

In June 2018, Google published a set of seven AI Principles, developed partly in response to internal controversy over Project Maven --- a Pentagon contract that used Google AI to analyze drone surveillance footage. Approximately 4,000 Google employees had signed an internal petition opposing the project, and several engineers resigned. Google ultimately declined to renew the Maven contract and published its AI Principles as a framework for future decisions.

The principles committed Google to developing AI that:

  1. Is socially beneficial
  2. Avoids creating or reinforcing unfair bias
  3. Is built and tested for safety
  4. Is accountable to people
  5. Incorporates privacy design principles
  6. Upholds high standards of scientific excellence
  7. Is made available for uses that accord with these principles

The principles also identified four application areas that Google stated it "will not pursue": technologies likely to cause overall harm; weapons or other technologies whose principal purpose is to cause or directly facilitate injury; surveillance technologies that violate internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.

The principles were widely praised as thoughtful and substantive --- more specific than most corporate AI ethics statements. The question was how to operationalize them. ATEAC was Google's answer.


The Council's Composition

Google selected eight members for ATEAC, drawn from academia, industry, policy, and civil society:

| Member | Affiliation | Background |
|---|---|---|
| Dyan Gibbens | Trumbull Unmanned (drone company) | Aerospace engineer, military veteran |
| Joanna Bryson | University of Bath | AI ethics researcher |
| Kay Coles James | Heritage Foundation | Conservative policy leader, former US OPM Director |
| Luciano Floridi | Oxford Internet Institute | Philosopher of information |
| Alessandro Acquisti | Carnegie Mellon University | Privacy and behavioral economics researcher |
| Bubacarr Bah | African Institute for Mathematical Sciences | Machine learning researcher |
| William J. Burns | Carnegie Endowment for International Peace | Former US Deputy Secretary of State; foreign policy and national security |
| De Kai | HKUST / ICSI at UC Berkeley | Machine translation and computational linguistics |

The composition immediately drew criticism on multiple fronts.


What Went Wrong

Problem 1: A Politically Divisive Appointment

The inclusion of Kay Coles James, president of the Heritage Foundation, became the flashpoint. The Heritage Foundation had published positions opposing LGBTQ+ rights, questioning the scientific consensus on climate change, and advocating for restrictive immigration policies. Employees and external observers argued that appointing a leader of an organization with these positions to an AI ethics board was incompatible with Google's stated commitment to building AI that "avoids creating or reinforcing unfair bias."

Within days, a petition organized by Google employees and external AI researchers gathered over 2,300 signatures demanding James's removal. The petition argued: "Such a position is fundamentally incompatible with a commitment to fairness, justice, and the wellbeing of all communities impacted by Google's technology."

Supporters of James's inclusion argued that an ethics board should include diverse political perspectives and that excluding conservative viewpoints would itself be a form of bias. Google's management had reportedly sought to include representatives from across the political spectrum.

Business Insight: The James controversy illustrates a fundamental governance design question: what does "diversity" mean on an ethics board? Cognitive diversity --- different perspectives, different analytical frameworks, different lived experiences --- strengthens ethical deliberation. But diversity of perspective does not require including individuals whose organizations' stated positions contradict the values the ethics board is meant to uphold. The line between "productive disagreement" and "structural contradiction" requires careful judgment --- and that judgment should be made before appointments are announced, not after.

Problem 2: Lack of Stakeholder Consultation

Google designed ATEAC internally, selected members without broad consultation, and announced the council's existence and composition simultaneously. Affected stakeholders --- including Google employees, AI ethics researchers, civil rights organizations, and the communities most affected by AI --- had no input into the council's design, mandate, or composition.

This top-down approach violated a core principle of responsible AI governance: those affected by AI systems should have a voice in how those systems are governed. Google's AI Principles state that AI should be "accountable to people." ATEAC was accountable to Google's leadership --- not to the people affected by Google's AI.

The contrast with other governance models is instructive. Microsoft's Aether Committee (AI, Ethics, and Effects in Engineering and Research) was developed through an iterative process involving internal stakeholders over several years. The Partnership on AI, a multi-stakeholder organization co-founded by Google, was designed collaboratively among companies, civil society organizations, and academic institutions. ATEAC had none of this process.

Problem 3: Unclear Mandate and Authority

ATEAC's mandate was described as providing "guidance and recommendations" on Google's AI Principles. But critical details were undefined:

  • Scope: Which of Google's AI projects would the council review? All of them? Only the controversial ones? Who would decide?
  • Authority: Were the council's recommendations binding? Advisory only? What happened if Google disagreed with a recommendation?
  • Transparency: Would the council's deliberations be public? Would its recommendations be published? Would Google be required to explain why it accepted or rejected specific recommendations?
  • Access: Would council members have access to Google's internal AI projects, data, and technical documentation? Or would they be limited to reviewing information that Google chose to share?
  • Resources: Would council members have dedicated staff support, independent research budgets, and the ability to commission external reviews?

Without clarity on these questions, ATEAC risked becoming what critics called "ethics washing" --- a high-profile advisory body that lends legitimacy to Google's AI activities without meaningfully influencing them.
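One way to make these gaps concrete is to treat the charter itself as a data structure that must be fully specified before launch. The Python sketch below is purely illustrative: the class, field names, and authority levels are hypothetical constructs for this case study, not drawn from any real governance framework.

```python
from dataclasses import dataclass
from enum import Enum


class Authority(Enum):
    # How much weight the council's recommendations carry (hypothetical levels).
    ADVISORY = "advisory"                # company may ignore recommendations silently
    ADVISORY_WITH_RESPONSE = "respond"   # rejections must be publicly explained
    BINDING = "binding"                  # recommendations must be implemented


@dataclass
class CouncilCharter:
    """Hypothetical charter for an external AI advisory council.

    Each field corresponds to one of the questions ATEAC left open; the
    charter is not launch-ready until every field has a deliberate value.
    """
    scope: list[str]                # which projects are reviewed, and who decides
    authority: Authority            # advisory, advisory-with-response, or binding
    deliberations_public: bool      # are meeting minutes published?
    recommendations_public: bool    # are recommendations and responses published?
    internal_access: bool           # access to projects, data, technical docs?
    dedicated_staff: int            # full-time support staff assigned
    research_budget_usd: int        # independent budget, incl. external reviews

    def open_questions(self) -> list[str]:
        """Return the charter dimensions still at an empty or zero value."""
        gaps = []
        if not self.scope:
            gaps.append("scope")
        if self.dedicated_staff == 0:
            gaps.append("institutional support")
        if self.research_budget_usd == 0:
            gaps.append("independent resources")
        return gaps
```

On this sketch, a council with an empty scope, zero staff, and zero budget (roughly ATEAC's position at launch) reports multiple open questions before it has held a single meeting.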

Caution

An external advisory council without clear authority, access, and transparency provisions is worse than no council at all. It creates the appearance of governance without the substance, which can make the organization more vulnerable to criticism (for hypocrisy) rather than less.

Problem 4: Inadequate Vetting and Preparation

The controversy around ATEAC's composition suggested that Google had not adequately vetted members' backgrounds, public positions, and potential conflicts of interest --- or had vetted them and underestimated the reaction. In either case, the appointment process failed.

Effective governance body design requires:

  • Background review of all potential members, including their organizations' stated positions, public statements, and potential conflicts
  • Stakeholder testing --- consulting with key stakeholders (employees, civil society, academic partners) before finalizing appointments, not after
  • Scenario planning --- anticipating potential controversies, preparing responses, and establishing decision protocols before the public announcement
  • Onboarding --- ensuring that all members understand the mandate, the expectations, and the commitment before their appointment is announced

ATEAC appears to have skipped most of these steps, or to have conducted them too quickly to be effective.
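Treated as hard gates rather than aspirations, the steps above can be expressed as a checklist that blocks a public announcement until every item is complete. A minimal sketch follows, in the same illustrative spirit as the charter example; all field names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AppointmentReadiness:
    """Hypothetical pre-announcement gate for one advisory appointment.

    Mirrors the four design steps above; these flags are assumptions for
    illustration, not a documented process at any company.
    """
    background_reviewed: bool       # record, org positions, public statements examined
    conflicts_checked: bool         # financial and positional conflicts assessed
    stakeholders_consulted: bool    # employees, civil society, academic partners heard
    scenarios_planned: bool         # likely controversies mapped, responses drafted
    member_onboarded: bool          # mandate, expectations, commitment confirmed

    def ready_to_announce(self) -> bool:
        # Announce only when every gate has been passed.
        return all(vars(self).values())
```

On this framing, ATEAC was announced with most of these flags still false.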

Problem 5: Inadequate Institutional Support

Council member Luciano Floridi later described the experience in a published essay. He noted that the council had not yet met --- the first meeting was scheduled for April 2019 --- when the controversy erupted. Members had no communication infrastructure, no staff support, and no mechanism for responding collectively to the public backlash. Individual members made public statements, some defending the council's composition and others expressing concern, further fragmenting the response.

The lack of institutional support meant that when the crisis hit, the council was unable to function as a deliberative body. It could not meet to discuss the controversy, could not issue a collective statement, and could not propose a path forward. The council existed on paper but had no operational capability.


The Dissolution

On April 4, 2019, Google announced that it was dissolving ATEAC. A company spokesperson said: "It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board."

The dissolution came after:

  • Over 2,300 signatures on a petition demanding Kay Coles James's removal
  • Several council members publicly expressing discomfort with the composition
  • Extensive negative media coverage
  • Internal turmoil among Google employees

Alessandro Acquisti, a privacy researcher at Carnegie Mellon, resigned from the council before Google formally dissolved it, citing concerns about "the way in which the council was set up" and the impact of "its membership on Google's reputation."


Aftermath and Lessons

What Google Did Next

Google did not abandon external consultation on AI ethics. Instead, it:

  • Expanded its internal responsible AI team, investing in dedicated staff for fairness testing, model review, and policy development
  • Engaged with existing multi-stakeholder organizations (such as the Partnership on AI) rather than creating a new proprietary body
  • Published research on fairness, bias, and responsible AI through its research division
  • Implemented internal review processes for high-risk AI applications, including a formal review for "sensitive applications" of AI

However, Google's subsequent responsible AI trajectory was also marked by controversy. In December 2020, Google fired Timnit Gebru, co-lead of its Ethical AI team, after a dispute over a research paper on the environmental and social costs of large language models. In February 2021, Google fired Margaret Mitchell, the other co-lead of the team. Both firings generated significant backlash and raised questions about whether Google's commitment to responsible AI extended to tolerating internal criticism.

In 2023, Google restructured its responsible AI function, dissolving the dedicated team and distributing responsible AI responsibilities across product groups. This decision was part of a broader industry pattern of responsible AI team reductions.

Lessons for Business Leaders

Lesson 1: Governance Design Requires the Same Rigor as Product Design

ATEAC failed not because external advisory councils are a bad idea, but because this particular council was poorly designed. The composition was not adequately vetted. The mandate was undefined. The authority was unclear. The process was opaque. If Google had applied the same design rigor to ATEAC that it applies to a new product launch --- user research, testing, iteration, stakeholder feedback --- the outcome would likely have been different.

Lesson 2: Stakeholder Engagement Must Precede Governance Announcements

Announcing a governance body without prior stakeholder consultation is a fundamental error. The people most affected by AI governance --- employees, researchers, civil rights organizations, affected communities --- must have input into how governance structures are designed. Top-down governance in a domain that affects diverse communities is structurally fragile.

Lesson 3: Ethics Washing Is Worse Than No Ethics

An ethics governance body that lacks real authority, transparency, and independence does more harm than good. It creates the appearance of oversight without the substance, which erodes trust more than the absence of oversight. Organizations should either invest in genuine governance --- with real authority, real transparency, and real independence --- or acknowledge that they are not yet ready to do so. The middle ground of performative governance is the worst of all options.

Lesson 4: Diversity on Ethics Boards Must Be Carefully Defined

Cognitive diversity strengthens ethical deliberation. But "diverse perspectives" does not require including individuals whose stated positions are incompatible with the values the board is meant to advance. The composition of an ethics board communicates the organization's values as clearly as any published principles --- and the composition will be scrutinized.

Lesson 5: Institutional Support Is Not Optional

A governance body without communication infrastructure, staff support, meeting protocols, and crisis response capability is not a governance body. It is a mailing list. ATEAC's inability to function as a collective body when controversy erupted was a direct consequence of inadequate institutional support.

Lesson 6: Responsible AI Governance Must Survive Leadership Changes and Market Pressure

The most damning aspect of Google's responsible AI trajectory is not the ATEAC failure. It is the subsequent pattern: the firing of the Ethical AI team co-leads, the dissolution of the dedicated responsible AI team, and the redistribution of responsibilities. These decisions suggest that Google's commitment to responsible AI was contingent on favorable conditions --- sufficient when there was no cost, insufficient when it created friction. Governance that survives only when it is convenient is not governance.


The Broader Context

The ATEAC case is part of a larger pattern. Between 2018 and 2023, many major technology companies established and then reduced or eliminated dedicated responsible AI teams:

| Company | Established | Restructured/Reduced | Pattern |
|---|---|---|---|
| Google | 2018 (Ethical AI team) | 2020-2023 (firings, dissolution) | Team co-leads fired; team dissolved |
| Microsoft | 2019 (Office of Responsible AI) | 2023 (ethics & society team laid off) | Dedicated team reduced; responsibilities distributed |
| Meta | 2021 (Responsible AI team) | 2023 (team disbanded) | Members reassigned to generative AI products |
| Twitter | 2021 (ML ethics team) | 2022 (team eliminated post-acquisition) | Entire team eliminated |

This pattern raises a structural question: is the dedicated responsible AI team a sustainable organizational model? Or does it create a team that is inherently in tension with the product and revenue functions of the organization --- a team that will always be vulnerable to cost cuts, reorganization, and leadership changes?

The hybrid model discussed in this chapter --- centralized standards with embedded practitioners --- may be more resilient precisely because it distributes responsible AI across the organization rather than concentrating it in a single team that can be eliminated in a single decision.


Discussion Questions

  1. Governance Design. If you were tasked with designing a successor to ATEAC --- an external advisory council for a major technology company's AI ethics --- how would you structure it? Address composition, mandate, authority, transparency, and institutional support.

  2. The Diversity Question. The controversy over Kay Coles James raises the question of what "diversity" means on an ethics board. Should an AI ethics board include members who hold views that are widely considered discriminatory? What principles should guide composition decisions?

  3. Ethics Washing. The chapter argues that "ethics washing is worse than no ethics." Do you agree? Are there circumstances where a well-intentioned but flawed ethics initiative is better than no initiative at all?

  4. Structural Vulnerability. The pattern of establishing and then eliminating responsible AI teams suggests a structural tension between responsible AI and business objectives. What organizational structures might make responsible AI more resistant to business cycle pressures?

  5. The Employee Voice. Google employees played a significant role in both the Project Maven controversy and the ATEAC dissolution. What role should employees play in AI ethics governance? How should organizations balance employee voice with management authority?

  6. Applying to Athena. Ravi is establishing Athena's AI Ethics Board (introduced in Chapter 27). Based on the ATEAC case, what three specific design decisions should Ravi make differently? Draft a brief memo from Ravi to Athena's CEO outlining these decisions and their rationale.

  7. The Long View. Google published its AI Principles in 2018. By 2023, the company had dissolved both its external advisory council and its internal ethics team. Does this trajectory mean the principles were meaningless? Or can principles have value even when the governance structures intended to implement them fail?


This case study connects to Chapter 30's discussion of the principles-to-practice gap, responsible AI team design, and the importance of institutional support for AI governance. The broader pattern of responsible AI team reductions is discussed in the context of organizational commitment and competitive pressure. Google's AI Principles and the EU AI Act's governance requirements are examined in Chapter 28.