Case Study: Ethical Review in Tech: Google's AI Ethics Board Controversy

"You cannot build an ethics board in a week, dissolve it in a week, and claim to take ethics seriously." -- Anonymous Google employee, quoted in Vox, April 2019

Overview

In March 2019, Google announced the formation of the Advanced Technology External Advisory Council (ATEAC), an external ethics board intended to guide Google's AI development. The council lasted just over a week. Its rapid formation and dissolution became one of the most discussed episodes in the history of corporate AI ethics -- a case study in how not to build an ethical review mechanism. This case study examines what ATEAC was supposed to do, why it failed, what happened in its aftermath (including the firing of two prominent AI ethics researchers), and what the episode reveals about the structural challenges of corporate ethical review.

Skills Applied:

  • Evaluating institutional design for ethical review bodies
  • Analyzing the relationship between composition, authority, and effectiveness
  • Understanding the structural challenges of corporate ethics review
  • Connecting institutional failures to broader patterns of ethics-washing


What ATEAC Was Supposed to Do

The Announcement

On March 26, 2019, Google's Senior Vice President of Global Affairs, Kent Walker, announced the creation of ATEAC. The council was described as an external advisory body that would "provide recommendations on ethical issues raised by AI and other emerging technology, including facial recognition, recommender systems, and fairness in machine learning."

The announcement came during a period of intense internal and external scrutiny of Google's AI practices. In 2018, approximately 4,000 Google employees had signed a letter protesting Project Maven, a Pentagon contract for AI-assisted drone surveillance. Google had ultimately decided not to renew the Maven contract and published a set of "AI Principles" in June 2018. ATEAC was positioned as the governance mechanism to implement those principles.

The Eight Members

Google appointed eight members to the council:

  1. Dyan Gibbens -- CEO of Trumbull Unmanned, a drone technology company
  2. Alessandro Acquisti -- Professor of Information Technology at Carnegie Mellon, a respected privacy researcher
  3. Bubacarr Bah -- Mathematics professor at the African Institute for Mathematical Sciences
  4. De Kai -- Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology and Distinguished Research Scholar at Berkeley's International Computer Science Institute
  5. Joanna Bryson -- Computer science professor at the University of Bath, an AI ethics researcher
  6. Kay Coles James -- President of the Heritage Foundation, a conservative policy organization
  7. William J. Burns -- Former US Deputy Secretary of State (later CIA Director)
  8. Luciano Floridi -- Professor of Philosophy and Ethics of Information at the Oxford Internet Institute

The list was notable for both its breadth and its contradictions. It included respected scholars (Acquisti, Floridi, Bryson) alongside a drone company CEO (Gibbens) and the head of a policy organization (James) with documented positions against LGBTQ+ rights and immigration -- issues directly relevant to how AI systems affect marginalized communities.


Why It Failed

The Composition Controversy

The appointment of Kay Coles James ignited immediate opposition. The Heritage Foundation, which James led, had:

  • Opposed the Equality Act, which would extend civil rights protections to LGBTQ+ individuals
  • Published reports characterizing transgender identity as a "delusion" and opposing trans-inclusive policies
  • Advocated for restrictive immigration policies

For Google employees working on AI systems that directly affected LGBTQ+ individuals and immigrants -- through content moderation, ad targeting, and government-facing products -- James's appointment was not merely political disagreement. It was a governance design failure. An ethics board member whose organization actively worked against the rights of communities affected by AI could not credibly serve as an ethical voice for those communities.

Within hours of the announcement, Google employees organized opposition. By April 1, over 2,500 employees had signed a petition demanding James's removal. The petition stated: "The explicit anti-trans, anti-LGBTQ stances of one of the Advisory Council's members make clear that the committee cannot effectively serve the needs of the trans and broader LGBTQ communities that are deeply affected by algorithmic bias."

The Missing Architecture

Beyond the composition controversy, ATEAC had fundamental structural problems:

No charter. The council was announced without a published charter defining its scope, responsibilities, decision-making processes, or relationship to Google's product development. Advisory bodies without charters are advisory bodies without purpose.

No process. There was no defined review process -- how would cases reach the council? What criteria would trigger review? How would recommendations be communicated? How would Google respond?

No authority. ATEAC was described as advisory, with no binding power. But even advisory bodies can be designed with escalation mechanisms, response requirements, and transparency obligations. ATEAC had none.

No integration. The council was external to Google's product development pipeline. Even if it had survived and functioned perfectly, its advice would have been structurally disconnected from the engineering decisions that shape AI products. There was no pathway from ATEAC recommendations to product changes.

No internal buy-in. The petition signed by 2,500+ employees demonstrated that the council was formed by Google's senior leadership without adequate consultation with the workforce whose work it would affect.

The Dissolution

On April 4, 2019 -- just over a week after the announcement -- Google dissolved ATEAC. A Google spokesperson said the company "realized that the current format of ATEAC was not working." Some members had resigned; others expressed concern about the controversy. Google stated it would find "different ways of getting outside opinions on these topics."

Joanna Bryson, one of the appointed members, described the dissolution as "a mess" but noted that the council had been poorly designed from the start. Alessandro Acquisti withdrew before the dissolution, citing concerns about the process.


The Aftermath: Gebru and Mitchell

The Firing of Timnit Gebru (December 2020)

Twenty months after ATEAC's dissolution, the question of ethical review at Google took a darker turn. Timnit Gebru, co-lead of Google's Ethical AI team and co-author of the influential "Gender Shades" study on facial recognition bias, was terminated in December 2020.

The immediate trigger was a dispute over a research paper. Gebru had co-authored a paper titled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" which examined the environmental costs and social risks of large language models -- a category that included Google's own products. Google's research leadership requested the paper be retracted or Gebru's name removed, citing concerns about the review process and the paper's conclusions.

Gebru responded with an email to an internal mailing list expressing frustration with what she described as a pattern of suppressing ethics research that conflicted with Google's commercial interests. She had also told management she would work toward a departure date unless certain conditions were met; Google's leadership treated her message as a resignation and accepted it immediately. Gebru and her supporters described it as a firing.

The Firing of Margaret Mitchell (February 2021)

Two months later, Margaret Mitchell, the other co-lead of Google's Ethical AI team and the lead author of the influential "Model Cards for Model Reporting" paper (discussed extensively in Chapter 29), was also terminated. Google cited violations of the company's code of conduct and security policies. Mitchell had been using automated scripts to search her corporate email for evidence of what she described as discriminatory treatment of Gebru.

The Pattern

The ATEAC failure and the Gebru/Mitchell firings, while separated by time, reveal a pattern: Google's ethical review mechanisms -- whether external (ATEAC) or internal (the Ethical AI team) -- were structurally unable to constrain Google's most consequential decisions. When external advisors raised uncomfortable questions, the advisory body was dissolved. When internal researchers produced findings that challenged commercial priorities, the researchers were dismissed.

This pattern does not mean that everyone involved acted in bad faith. It means that the institutional structures were not designed to withstand the pressures they faced. An advisory board without a charter cannot survive its first controversy. An ethics research team that reports to the same leadership making the commercial decisions it scrutinizes cannot maintain independence.


Analysis Through Chapter Frameworks

IRB Comparison

Comparing ATEAC against the academic IRB model from Section 28.3:

  • Legal mandate: Academic IRB -- required for federally funded research; ATEAC -- voluntary.
  • Charter: Academic IRB -- detailed written charter with defined scope; ATEAC -- none.
  • Process: Academic IRB -- defined review protocols; ATEAC -- none.
  • Authority: Academic IRB -- can block research from proceeding; ATEAC -- advisory only, no binding authority.
  • Independence: Academic IRB -- structurally independent from the researchers it reviews; ATEAC -- appointed by Google, serving at Google's pleasure.
  • Transparency: Academic IRB -- protocols publicly registered; ATEAC -- activities were to be confidential.
  • Accountability: Academic IRB -- subject to federal oversight (OHRP); ATEAC -- accountable only to Google's senior leadership.
  • Duration: Academic IRB -- standing body with an ongoing mandate; ATEAC -- dissolved after just over a week.

ATEAC failed on every dimension that makes IRBs functional.

Ethics-Washing Assessment

Was ATEAC ethics-washing? The evidence is mixed but suggestive:

Evidence of ethics-washing: The council was announced with fanfare during a period of regulatory and public scrutiny. It had no charter, no process, and no authority. It appeared designed to generate a press release ("Google appoints ethics board") rather than a functioning governance mechanism.

Counter-evidence: Google appointed several highly respected scholars whose participation suggests genuine intent. The dissolution, while abrupt, was a response to real concerns rather than a pre-planned abandonment.

Assessment: ATEAC was likely formed with genuine intent but without genuine design. The distinction matters: you can sincerely want an ethics board while failing to invest the institutional design work needed to make one functional. The result -- an ethics body that collapsed on contact with its first real challenge -- is functionally equivalent to ethics-washing, regardless of intent.

The Accountability Gap

ATEAC's failure exposed the accountability gap discussed in Section 28.8.2: who reviews the reviewer? When Google's ethics board failed, no external body had the authority to require Google to establish a better one. When Google's ethics researchers were fired, no external mechanism could compel Google to restore the function. The accountability gap means that the organization with the greatest need for ethical oversight -- the one whose products affect billions of people -- is also the one with the most power to resist it.


Discussion Questions

  1. The composition problem. Could ATEAC have survived if Kay Coles James had not been appointed? Or were the structural deficiencies (no charter, no process, no authority) sufficient to guarantee failure regardless of composition?

  2. External vs. internal review. ATEAC was external; the Ethical AI team was internal. Both failed. Is the answer external review, internal review, or some combination? What structural conditions would make either approach functional?

  3. The Gebru/Mitchell connection. How do the firing of AI ethics researchers relate to the failure of the external advisory board? What pattern do they reveal about Google's institutional relationship with ethical oversight?

  4. The charter question. Draft a one-page charter for a technology company's AI ethics advisory council. Specify: scope, composition requirements, review process, authority level, transparency obligations, and term protections.

  5. Reform or regulation? After ATEAC, Google invested in internal responsible AI processes. Is internal self-reform sufficient, or does the ATEAC episode demonstrate that external regulation (like the EU AI Act) is necessary? Make the case for each position.


Your Turn: Mini-Project

Option A: Board Design. If you were hired to design a replacement for ATEAC -- an external AI ethics advisory body for a major technology company -- what would you build? Specify composition, charter, process, authority, integration with product development, transparency, and accountability. Address the specific failure modes that ATEAC exhibited.

Option B: The Research Freedom Question. Research the cases of Timnit Gebru, Margaret Mitchell, and at least one other AI ethics researcher who has faced institutional pressure. Write a 1,200-word analysis examining the structural tensions between corporate-funded AI ethics research and corporate commercial interests.

Option C: Comparative Ethics Governance. Compare Google's post-ATEAC ethics governance approach with that of at least one other technology company (Microsoft, Meta, Amazon, or Salesforce). Which approach is more structurally robust? Which is more likely to survive the pressures that destroyed ATEAC?


References

  • Metz, Cade, and Daisuke Wakabayashi. "Google Scraps Ethics Board Just a Week After Forming It." The New York Times, April 4, 2019.

  • Ghaffary, Shirin. "Google's AI Ethics Board Lasted Just Over a Week. Here's Why." Vox, April 5, 2019.

  • Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT), 610-623.

  • Simonite, Tom. "What Really Happened When Google Ousted Timnit Gebru." Wired, June 8, 2021.

  • Mitchell, Margaret, Simone Wu, Andrew Zaldivar, et al. "Model Cards for Model Reporting." Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAT*), 220-229.

  • Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of the 2018 Conference on Fairness, Accountability and Transparency (FAT*), 77-91.

  • Metcalf, Jacob, Emanuel Moss, and danah boyd. "Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics." Social Research 86, no. 2 (2019): 449-476.

  • Hao, Karen. "We Read the Paper That Forced Timnit Gebru Out of Google. Here's What It Says." MIT Technology Review, December 4, 2020.