Case Study 34-2: Google Project Maven and the Limits of Corporate Ethics Commitments
When Surveillance Capitalism's Infrastructure Meets Military Targeting
Background: What Was Project Maven?
In 2017, the U.S. Department of Defense awarded Google a contract reportedly worth about $9 million under "Project Maven", officially the Algorithmic Warfare Cross-Functional Team. The project tasked Google with applying its machine learning and computer vision capabilities to drone footage analysis: specifically, automating the identification of objects and people in surveillance footage from military drones to assist in targeting decisions.
This was not a weapons contract in the narrow sense — Google was not building the weapons themselves. It was building the artificial intelligence tools that would make drone surveillance footage more useful for military analysis, including analysis that would inform decisions about lethal strikes.
The contract was not publicly announced. Google employees learned about it through internal communications and quickly began organizing opposition.
The Employee Revolt
In April 2018, approximately 3,000 Google employees signed an open letter to CEO Sundar Pichai. The letter began:
"We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
In the weeks that followed, about a dozen employees resigned over the project, further letters circulated, internal organizing continued, and media coverage was extensive. The employee activism was notable for its directness: employees were not asking for better ethics review processes or more transparency. They were demanding cancellation.
The arguments made by employee organizers included:
The "don't be evil" standard: Google's informal founding motto — "don't be evil" — had been dropped from the corporate code of conduct in 2018 and replaced with "do the right thing." Employees argued that building AI for military targeting failed either standard.
The accountability gap: Google's AI capabilities were being applied to military targeting without public disclosure, without ethical review visible to employees, and without any accountability process. Employees who discovered the project had not been consulted and had no formal mechanism for raising concerns.
The dual-use risk: The machine vision and behavioral analysis capabilities Google was developing for Maven could be repurposed for many applications — including surveillance applications that Google had publicly committed to avoiding. Once the technical capabilities existed and had been demonstrated in a military context, containing their use would be difficult.
The trust relationship with users: Google's behavioral data collection depends on users trusting the company with their data. A contract to build military targeting AI might, if known, undermine that trust — raising the question of whether user data might be incorporated into military intelligence applications.
Google's Response and the Outcome
Google initially defended the contract. Executives, including Google Cloud CEO Diane Greene, argued that the company was providing only existing technology for "non-offensive" uses and that Google was applying ethical review to its AI development.
In June 2018 — following the employee letter, multiple resignations, and sustained media coverage — Google announced it would not renew the Maven contract when it expired in 2019.
Google also announced, in June 2018, its "AI Principles" — a set of commitments governing AI development:
- Google would not design or deploy AI in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people"
- Google would not pursue technologies "that cause or are likely to cause overall harm"
- Google would not build "technologies that gather or use information for surveillance violating internationally accepted norms"
- Google would not build AI "whose purpose contravenes widely accepted principles of international law and human rights"
- AI applications would be assessed across dimensions including safety, fairness, accountability, and privacy
The AI Principles appeared to be a direct response to the Maven revolt and represented a significant corporate policy commitment.
After Maven: The Limits of Ethical Commitments
The story after Maven illustrates the gap between corporate ethics commitments and corporate behavior.
JEDI contract controversy: In October 2018, Google withdrew from bidding on JEDI (Joint Enterprise Defense Infrastructure), a $10 billion DoD cloud computing contract, citing its AI Principles. Its competitors, Microsoft and Amazon, proceeded, and Microsoft won the contract in 2019. This appeared to validate the AI Principles as a genuine constraint.
Project Dragonfly: Even as the Maven controversy unfolded and the AI Principles were being announced, Google was developing "Project Dragonfly," a censored search engine for China that would reportedly link users' searches to their phone numbers, making it possible to identify people who searched for politically sensitive terms. The project, revealed by The Intercept in August 2018, directly violated the AI Principles' commitment against surveillance violating internationally accepted norms. Google ended Dragonfly after employee and media pressure.
Israeli cloud contract (2021): Google, jointly with Amazon, won Project Nimbus, a $1.2 billion cloud computing contract with the Israeli government, including its military. Google employees again organized, arguing that the contract violated the AI Principles and that cloud infrastructure supplied to a military customer could be used for surveillance and targeting. More than 1,000 Google employees signed opposition letters. In April 2024, Google fired approximately 50 employees who had participated in protests against Project Nimbus.
The pattern: Google's AI Principles are real ethical commitments that sometimes constrain behavior. They are also interpreted narrowly when constraining behavior would be expensive. The threshold for applying them appears responsive to employee and public pressure, economic stakes, and executive preferences — not to consistent, independently audited ethical standards.
What This Case Reveals About Surveillance Capitalism and Its Ethics
1. Corporate ethics depend on enforcement mechanisms. Voluntary ethics commitments without independent enforcement, binding contract provisions, or regulatory accountability are negotiable. When the economic stakes are high enough, "do the right thing" is interpreted to allow the profitable thing.
2. Surveillance capitalism's infrastructure is dual-use. The behavioral analysis, machine learning, and computer vision capabilities that Google developed for advertising (understanding what users want, predicting their behavior) are directly transferable to military surveillance and targeting applications. The infrastructure built for commercial behavioral modification does not become non-transferable when it is sold to governments.
3. Employee activism is a real but limited accountability mechanism. The Maven revolt produced a genuine corporate policy change and significant public attention. It did not produce a binding constraint: the Project Nimbus episode shows that employees who organized against a subsequent contract faced termination. The legal environment for employee activism in the United States is narrow. The National Labor Relations Act protects concerted activity concerning workplace conditions, but protest of an employer's business decisions or choice of customers often falls outside that protection, and employees can be fired for it.
4. Zuboff's analysis may understate the military dimension. Zuboff treats surveillance capitalism primarily as a commercial and political phenomenon. The Maven case shows that the same technological capabilities are directly applicable to military surveillance and targeting. A complete analysis of surveillance capitalism's harms must include its military applications, not just its advertising and political dimensions.
5. User trust is surveillance capitalism's vulnerability. The concern expressed by employee organizers that military contracts might undermine user trust in Google's data practices points to a real structural vulnerability: surveillance capitalism depends on users continuing to provide behavioral data. Anything that substantially erodes user trust threatens the data collection on which the business model depends.
Analysis Questions
1. Google employees argued that building AI for military targeting violated "don't be evil" or "do the right thing." Is building targeting AI for the U.S. military necessarily evil? What moral framework would lead you to that conclusion, and what framework might lead to the opposite conclusion?
2. Google's AI Principles represent a voluntary ethics commitment without independent enforcement. Design a better governance mechanism: what oversight structure, what commitment types, and what enforcement mechanisms would produce more reliable ethical behavior from a technology company?
3. The dual-use argument — that surveillance capitalism's commercial infrastructure is directly transferable to military applications — has implications for how we think about behavioral data collection generally. If Google's user data could be used for military targeting purposes, does that change your evaluation of Google's commercial data collection practices?
4. Google fired approximately 50 employees for organizing against Project Nimbus. What does this response reveal about the limits of employee activism as an accountability mechanism? What structural conditions would be needed to make employee activism more effective?
5. Zuboff focuses on behavioral modification at commercial scale. The Project Maven case suggests that the same capabilities are applicable to kinetic military operations. How should this dimension — surveillance capitalism as military infrastructure — change Zuboff's analysis? What implications does it have for privacy advocacy that focuses primarily on commercial data practices?
This case study connects to Chapter 34's core themes of surveillance capitalism, ethics, and structural critique. It connects backward to Chapter 9 (mass interception and the national security dimension of data) and forward to Chapter 38 (AI governance) and Chapter 39 (designing for privacy). The Project Maven employee revolt also connects to Chapter 33's examination of activism — but as tech worker organizing rather than civil society activism.