Case Study 37.1: Project Maven and the Tech-Military Complex

When Silicon Valley Goes to War


Overview

In 2017, the United States Department of Defense launched Project Maven — formally the Algorithmic Warfare Cross-Functional Team — with a mandate to rapidly integrate AI into military operations. The initial application was analysis of drone surveillance video: the U.S. military was collecting far more drone footage than its human analysts could review, and AI offered the prospect of automatically identifying objects, people, and activities of interest, directing human analyst attention to the most significant material.

Google agreed to provide AI capabilities for Project Maven, under a contract that used TensorFlow — Google's open-source machine learning framework — along with Google engineering support, to build image classification models for the project. The contract was not publicly announced.

In 2018, an internal Google employee petition opposing the company's Maven involvement attracted more than 3,000 signatures. Several employees resigned. After months of internal controversy, Google announced that it would not renew the Maven contract when it expired. Google subsequently published AI Principles that committed the company not to build AI for weapons.

The episode became one of the most significant moments in the history of technology workers asserting collective agency over their employers' ethical choices. It reshaped the policies of multiple major technology companies on military AI contracting. It created a market opening for a new generation of defense-oriented technology companies explicitly positioned to provide what Google declined. And it raised questions — still unresolved — about how democratic societies should govern the relationship between commercial technology development and military capability.


Background: The Algorithmic Warfare Cross-Functional Team

Project Maven emerged from the recognition that the U.S. military's surveillance collection capacity had dramatically outrun its analysis capacity. The proliferation of unmanned aerial vehicles (UAVs) — military drones — had created the capability to collect continuous surveillance footage over large areas. Analyzing that footage required human analysts who could identify objects (vehicles, weapons, structures), people, and patterns of behavior from video feeds. Human analysts were a bottleneck.

The Algorithmic Warfare Cross-Functional Team was established in April 2017 by a memorandum from Deputy Secretary of Defense Robert Work, who championed the initiative and emphasized the need for speed: China and Russia were developing military AI capabilities, and the U.S. needed to accelerate its own development to maintain its advantage.

The immediate focus of Project Maven was computer vision for drone video analysis — AI that could automatically flag objects and activities of interest in drone footage, reducing the human analyst workload and enabling faster identification of targets. The longer-term vision was broader: AI capabilities that could process intelligence data at machine speed and scale, supporting military decision-making across a range of operations.


Google's Involvement

Google was approached in mid-2017 about providing AI capabilities for Project Maven. The arrangement was structured as a cloud computing and professional services contract under which Google would provide access to TensorFlow and Google engineering support for developing the object classification models.

The contract value was reported at approximately $9 million — modest for Google, significant as a proof of concept. Google's involvement was not publicly announced by either party.

The internal decision to pursue the Maven contract appears to have been made without broad consultation across Google's engineering workforce. Employees who later became aware of the contract described learning about it through media reports and informal internal communications rather than through official announcement. Google leadership subsequently acknowledged that the handling of the Maven decision — the lack of broad internal consultation for a project that many employees would have strong views about — was a communications failure.

Google's public argument for its Maven involvement was that the contract was for non-offensive applications: the AI was analyzing video, not making targeting decisions or directly controlling weapons. Google representatives emphasized that the contract was for object detection in unclassified video, that the data provided to Google was not classified, and that Google was not building autonomous weapons.

Critics of this framing noted that the chain from drone video analysis to targeting decisions is short. Object detection in surveillance footage identifies potential targets. That identification is a step in the targeting process. Characterizing it as "non-offensive" draws a line that is technically defensible but ethically thin.


The Employee Petition and Internal Opposition

In early April 2018, internal opposition to Project Maven began to organize. A group of Google employees drafted an internal letter addressed to CEO Sundar Pichai opposing the company's involvement. The letter's core argument was direct:

"We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

The letter circulated internally and gathered more than 3,000 signatures. This was a significant fraction of Google's workforce — not a fringe dissident group. The signatories included senior engineers, researchers, and employees across multiple teams and locations.

Several employees went further than signing the petition. A number resigned over the contract, publicly citing their opposition to Google's military AI involvement. Their departures included engineers who had significant technical expertise and whose resignations were not merely symbolic — they represented real costs to Google's engineering capacity.

The internal debate was intense and sustained. Employees who opposed the contract argued that Google's mission — "to organize the world's information and make it universally accessible and useful" — was incompatible with building systems that supported lethal military operations. Employees who supported it argued that working with the U.S. military was consistent with values of democracy and national security, that the specific Maven application was defensive rather than offensive, and that the alternative — leaving military AI development to companies with fewer ethical constraints — was worse.


Google's Decision and the AI Principles

After months of internal deliberation and sustained employee pressure, Google announced in June 2018 that it would not renew the Maven contract when it expired in 2019. Shortly afterward, Google published its AI Principles — a set of commitments governing the AI applications the company would and would not pursue.

The Principles stated that Google would not pursue AI applications in several categories:

- Technologies that cause or are likely to cause overall harm
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
- Technologies that gather or use information for surveillance violating internationally accepted norms
- Technologies whose purpose contravenes widely accepted principles of international law and human rights

The Principles were both substantive and carefully hedged. They committed Google to specific categories of non-pursuit while preserving flexibility in areas where Google would continue to engage — including defense and government contracting that did not fit the prohibited categories.

Reception of the Principles

Reception of the AI Principles was mixed. Within Google, employees who had organized against Maven generally welcomed the Principles as a meaningful step, though some expressed concern about their vagueness and enforceability. External observers noted that the Principles left significant room for government and defense contracting, that the "principal purpose" framing allowed dual-use applications, and that the Principles were not legally binding on the company.

Subsequent episodes raised questions about consistency. Google's involvement with Project Nimbus — the cloud computing contract with the Israeli government and military, signed in 2021 — generated controversy among employees who argued it was inconsistent with the AI Principles. Google dismissed employees who participated in workplace protests against Project Nimbus in 2024, a decision that drew criticism from some who argued it violated the spirit of the employee voice that had shaped the Maven outcome.


The Pentagon's Response and the Defense Industrial Pivot

The Maven episode had significant effects on the Department of Defense's approach to commercial technology partnerships.

In the immediate aftermath, DoD proceeded with Project Maven's AI objectives through alternative commercial contractors; Microsoft provided Azure cloud services for subsequent phases of the program. The project continued, demonstrating that Google's withdrawal did not end the Pentagon's access to frontier AI capabilities — it changed which commercial partners provided them.

The broader defense AI ecosystem shifted in the wake of Maven. A new generation of companies positioned themselves explicitly as defense technology companies willing to build what Google and other "traditional" technology companies declined:

Palantir Technologies, founded in 2003 with early backing from In-Q-Tel, the CIA's venture capital arm, has become a primary data and AI platform provider for the DoD. Its Foundry and Gotham platforms handle military intelligence, logistics, and operations data. Palantir has publicly positioned itself as the alternative to reluctant commercial AI providers, with CEO Alex Karp explicitly criticizing Silicon Valley companies that decline military contracts.

Anduril Industries, founded in 2017 by Palmer Luckey (who had sold Oculus VR to Facebook, now Meta), was created explicitly to build defense technology using frontier AI and software. Anduril's products include autonomous surveillance towers, counter-drone systems, and autonomous unmanned underwater vehicles. The company has raised multiple rounds of venture capital at high valuations and has grown rapidly in the defense market. Anduril positions itself as a technology company building defense products — not a traditional defense contractor.

Shield AI, Joby Aviation (with defense applications), and numerous other startups have entered the defense AI market, many with venture capital backing. The category has been called "defense tech" or "dual-use tech," and it has attracted significant investment from venture capital firms that had previously focused on consumer and enterprise software.

The result of Project Maven and its aftermath is a bifurcated technology ecosystem: major platform companies, Google with qualifications among them, maintaining stated limits on certain military AI applications, while a rapidly growing defense tech sector explicitly fills the space those limits create.


JEDI and the Cloud Wars

The Joint Enterprise Defense Infrastructure (JEDI) cloud contract — a single-vendor cloud computing contract valued at up to $10 billion — further illustrated the dynamics of tech-military contracting in the post-Maven era.

Google declined to bid on JEDI, citing conflicts with its AI Principles (Google stated it could not be assured the contract's requirements would align with the Principles) and gaps in the government certifications required for portions of the work.

Microsoft ultimately won the JEDI contract in 2019. Amazon, which lost the contract, filed a lawsuit alleging improper influence by the Trump administration in the award process. The contract was eventually canceled in 2021 amid the legal dispute, replaced by the JWCC (Joint Warfighting Cloud Capability) contract under which multiple vendors — Amazon, Google, Microsoft, Oracle — shared the work.

Google's participation in JWCC, after declining to bid on JEDI, illustrated the complexity of principled stances on military contracting. Providing cloud computing infrastructure — storage, computation, networking — is several steps removed from building weapons, but those services are essential to all military digital operations, including weapons development and deployment.


What Project Maven Reveals About Governance

The Project Maven episode reveals several dimensions of the challenge of governing military AI in democratic societies.

Employee voice as a governance mechanism: The Maven employee revolt demonstrated that organized technology workers can exercise meaningful governance influence over their employers' decisions about military AI. This is an unconventional governance mechanism — corporate governance by employee petition — but it was effective in this case. The question is whether it is reliable and scalable: it requires a sufficiently large and organized employee community, a corporate leadership that is responsive to employee concerns (at least when the concerns carry reputational risk), and a specific decision point around which organizing is possible.

Principled withdrawal has substitution effects: Google's withdrawal from Maven did not prevent the project from proceeding; it changed which companies provided the AI capabilities. If one company declines on principled grounds and others proceed, the net governance effect is limited: the harmful application occurs, and the company with principles has surrendered its ability to shape it. This substitution effect is a genuine challenge to the strategy of principled non-participation in military AI.

Tech company AI principles require meaningful enforcement mechanisms: The AI Principles that Google published following Maven were a positive development, but their effectiveness depends on consistent application. The Project Nimbus controversy suggests that principled commitments can be inconsistently applied when commercial and contractual obligations conflict with stated principles. Principles without enforcement mechanisms are statements of aspiration, not governance.

The absence of a public governance framework creates market-driven decisions: In the absence of public deliberation about what military AI applications are acceptable in a democratic society, decisions about which AI capabilities the military receives are made by commercial companies responding to commercial incentives, with employee activism as an occasional check. This is not an adequate governance framework for decisions of this magnitude. Democratic societies need public deliberation — through legislative processes, regulatory frameworks, and public debate — about what military AI their governments develop and deploy.


Discussion Questions

  1. Google employees argued that working on Project Maven was incompatible with Google's mission and their personal ethics. How should individual technology professionals determine where to draw ethical lines about their work? Is it sufficient to decline to work on specific projects while remaining employed at a company that pursues them?

  2. Google's withdrawal from Maven was followed by Palantir, Anduril, and Microsoft filling the role Google declined. Does this substitution effect mean that principled non-participation in harmful military AI is futile? Or does it serve important functions even if it does not prevent the application?

  3. Google's Project Nimbus contract for cloud services to the Israeli military was structured as "commercial" cloud services not specifically directed at military applications. Is there a meaningful ethical distinction between providing cloud computing infrastructure to a military and building specific AI tools for military use? Where would you draw the line?

  4. The U.S. DoD has argued that it is better for democratic societies to develop military AI capabilities than to cede that territory to authoritarian states that will develop the same capabilities without democratic governance constraints. Evaluate this argument. Does it justify commercial technology company participation in military AI development?

  5. What public governance framework should democracies establish for military AI development — including what applications are permitted, what human control requirements must be met, and what transparency is owed to the public? How should that framework be created and enforced?

  6. Palmer Luckey founded Anduril explicitly to build defense AI that Google and other Silicon Valley companies declined to build. How should we evaluate this decision? Is Anduril filling a governance gap by providing military AI with commercially-sophisticated development norms, or is it weakening governance by eliminating the substitution effect that principled non-participation creates?