Case Study 3.2: Virtue Ethics and Corporate AI Culture — Google's Project Maven
When Stated Values Meet Real Decisions
Estimated reading time: 25–35 minutes
Primary framework: Virtue Ethics
Secondary frameworks: Deontology, Contractualism
Themes: Ethics Washing vs. Genuine Ethics; Power and Accountability; Innovation vs. Harm Prevention
Introduction: The Character Test
In virtue ethics, character is revealed not in comfortable moments but in moments of genuine moral difficulty. Anyone can behave ethically when ethics costs nothing. The virtuous person — and, by extension, the virtuous organization — is distinguished by how they behave when ethics is costly, when stakeholder pressures are in conflict, and when the easy path and the right path diverge.
In April 2018, Google faced precisely this kind of character test.
The company had taken on a Department of Defense contract to provide AI technology for Project Maven — a Pentagon program using machine learning to analyze aerial drone footage and improve targeting accuracy. When employees discovered the arrangement, approximately 4,000 of them signed an open letter to CEO Sundar Pichai demanding that the company cancel the contract and commit to never building AI for weapons. Dozens resigned.
For a company that had built its public culture around the motto "Don't Be Evil," the Project Maven episode was a stress test. What did the company actually value? When its stated commitment to beneficent AI conflicted with a lucrative government contract, which prevailed? And what did the resolution of that conflict reveal about the company's actual character?
The answers were not simple, and the lesson is not moralistic. The Project Maven case is valuable for AI ethics education precisely because it resists easy conclusions. It reveals the complexity of virtue ethics applied to large organizations — the gap between stated values and practiced values, the role of employee voice in organizational ethics, and the difficulty of maintaining genuine ethical culture at scale.
Part 1: What Project Maven Was
Project Maven was formally known as the Algorithmic Warfare Cross-Functional Team. Launched by the Department of Defense in April 2017 under Deputy Secretary of Defense Robert Work, its stated mission was to accelerate the integration of AI and machine learning into DoD operations.
The initial focus was on computer vision — specifically, using AI to analyze the enormous volume of video footage generated by surveillance drones. The volume of drone footage vastly exceeded the human capacity to review it. A single surveillance flight could generate hours of footage; across the theater of operations in the Middle East, the footage generated daily overwhelmed the analytical capacity of military intelligence units. Project Maven was designed to automate the analysis: AI systems would scan footage, identify objects and persons of interest, and flag relevant items for human review.
The Pentagon's contract with Google, reported by Gizmodo in March 2018, was valued at approximately $9 million for 2017, with the potential for much larger future contracts. Google's cloud and AI capabilities — particularly its TensorFlow machine learning framework — were seen as superior to existing DoD alternatives.
The nature of the work was precisely defined in the contract: Google would help the Pentagon use its AI for object detection and classification in drone imagery. The Pentagon was clear that the results would be used to improve the efficiency of military strike operations. This was not a contract for general-purpose AI or cloud computing that happened to be used by the military. It was a contract explicitly linking Google's AI capabilities to the analysis of targets for potential military action.
Part 2: How Employees Found Out and What They Did
The contract was not publicly announced. Employees learned of it through internal communications — a blog post by a member of Google's cloud team promoting the use of TensorFlow for Project Maven. The post described the work in matter-of-fact terms; for many employees, it was the first indication that Google was doing military work of this nature.
The internal reaction was rapid and intense. Employees began raising concerns in internal forums, emailing executives, and meeting informally. The concerns were not primarily about Google doing any government work, or even about working with the military in general. They were specifically about the use of Google's AI for what employees characterized as autonomous weapons development — a characterization the company contested, arguing that the work was analytical and involved human oversight of all targeting decisions.
In April 2018, approximately 4,000 of Google's roughly 70,000 employees at the time signed an open letter addressed to CEO Sundar Pichai. The letter read, in part:
"We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
The letter was remarkable for several reasons. First, its scale: 4,000 signatories represented a significant proportion of the workforce, including senior engineers whose skills were in high demand. Second, its directness: the letter did not ask for dialogue or clarification — it demanded cancellation and a categorical commitment. Third, its framing: it appealed explicitly to Google's corporate identity and self-conception as a beneficent technology company.
Alongside the letter, some employees circulated an internal document arguing that Project Maven was not just ethically problematic but strategically self-defeating: Google's reputation as a trustworthy steward of data and technology was one of its most valuable assets, and that reputation would be damaged by association with military targeting operations.
Over the following weeks, approximately a dozen senior employees resigned specifically over Project Maven. The resignations were notable because they involved people with specialized AI expertise who had other employment options — their departure was economically costly to Google, not merely symbolically significant.
Part 3: Google's Stated AI Principles vs. Its Actions
In June 2018, Google published its "AI Principles" — a statement of the values and guidelines that would govern the company's AI development. The timing was not coincidental: the principles were developed in response to the Project Maven controversy, as a direct attempt to articulate the ethical commitments that employees had been demanding.
The principles identified specific categories of AI applications Google would not pursue:
- "Technologies that cause or are likely to cause overall harm."
- "Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
- "Technologies that gather or use information for surveillance violating internationally accepted norms."
- "Technologies whose purpose contravenes widely accepted principles of international law and human rights."
The principles also committed Google to seven objectives for its AI applications: being socially beneficial, avoiding the creation or reinforcement of unfair bias, being built and tested for safety, being accountable to people, incorporating privacy design principles, upholding high standards of scientific excellence, and being made available only for uses consistent with these principles.
These principles, taken at face value, would clearly have prohibited Project Maven — a program whose purpose was to improve the efficiency of targeting operations, a form of technology "whose principal purpose or implementation is to cause or directly facilitate injury to people."
The question the virtue ethicist must ask is not whether the principles are admirable — they are — but what they reveal about the company's actual character. And the answer is complicated by timing: Google published these principles after the controversy erupted, after employees resigned, and in the context of public pressure. Are principles that emerge from reputational crisis expressions of genuine virtue or strategic reputation management?
The evidence is mixed. Some who were involved in drafting the principles argue they represented genuine ethical deliberation among senior leaders who were already grappling with these questions before Project Maven became public. Others argue that the timeline — principles published two months after news of Maven broke — tells a different story. A company with genuinely embedded AI ethics would have had these principles before taking on the contract, and would have applied them during the contracting process.
Part 4: The Virtue Ethics Lens
What Would a Virtuous Organization Do?
Aristotelian virtue ethics, applied to organizations, asks a specific question: what practices, habits, and dispositions characterize an organization that reliably acts well in situations of genuine moral difficulty?
Applied to Google and Project Maven, several virtuous organizational dispositions are relevant.
Integrity — consistency between stated values and actual behavior — requires that an organization's public commitments reflect its genuine operating priorities. A virtuous organization does not deploy the language of ethics to attract talent and build public trust while making decisions in conflict with that language. The gap between "Don't Be Evil" as a cultural touchstone and the decision to provide AI for military targeting operations raises integrity questions, regardless of how the work is characterized.
Practical wisdom (phronesis) — the capacity for sound judgment in complex situations — requires, at minimum, that ethical questions be raised before decisions are made, not after. A virtuous organization would have had the Project Maven conversation before signing the contract: does this work align with our values? What are the ethical risks? Who should be consulted? What commitments would our employees and users expect us to honor? The fact that these questions were raised publicly by employees rather than internally by leadership suggests a deficit of practical wisdom in the contracting process.
Honesty — including transparency with stakeholders about decisions that affect them — requires that an organization not obscure the nature of its work from the people whose labor makes it possible. The initial blog post that revealed Project Maven to employees was promotional, not informative. The contract was not disclosed proactively to the workforce. This is not consistent with organizational honesty, and it generated precisely the trust breakdown — employees feeling misled about the work they were contributing to — that honesty is designed to prevent.
Courage — the willingness to accept costs for the sake of what is right — is revealed in Google's eventual decision not to renew the Maven contract. This decision came with real costs: the Pentagon contract, potential future DoD relationships, and the competitive pressure from companies less constrained by employee activism. If the decision was driven by genuine ethical conviction rather than purely by reputational calculation, it represents organizational courage.
But courage must be distinguished from prudence. A company that withdraws from a controversial contract primarily because the controversy is bad for business is not displaying courage — it is displaying competent risk management. The virtue ethics question is about motivation and character, not just behavior.
The Ethics Washing Question
The Project Maven case is a canonical example of the ethics washing problem — the deployment of ethical language, principles, and processes in ways that serve reputational and strategic interests without reflecting genuine ethical commitment.
Several features of Google's response raise ethics washing concerns. The AI Principles were published after the crisis, not before. They were written at a level of generality that could not by itself generate clear operational guidance about specific contracts. No enforcement mechanism was specified — there was no independent body charged with evaluating proposed projects against the principles. And subsequent controversies — including the Project Dragonfly episode (a censored search engine prototype for China, revealed in 2018 and terminated in 2019) and ongoing debates about defense contracts at other cloud providers — suggest that the principles did not create durable institutional constraints.
This is not to say that the principles were purely cynical. Many large organizations develop ethical commitments in response to crises; the origin of a commitment does not necessarily determine its sincerity. But ethics washing is diagnosed not by motivation alone but by outcomes: does the ethical framework change actual decisions over time? A principles document that does not change behavior is, by the virtue ethics standard, performance rather than practice.
Part 5: Google's Decision to Withdraw — and Subsequent Controversies
In June 2018, Google announced that it would not renew the Project Maven contract when it expired at the end of that year. Sundar Pichai stated that the decision came "after a review of its AI Principles," and characterized it as consistent with those principles.
The decision was welcomed by the employees who had protested. But the story did not end there.
In October 2018, Google announced that it would not bid on the Pentagon's JEDI (Joint Enterprise Defense Infrastructure) cloud contract — a massive, potentially $10 billion, ten-year contract to provide cloud computing services across DoD operations, including warfighting systems — citing concerns that the work might not comply with its AI Principles. But the fact that Google had pursued the opportunity at all before withdrawing raised questions about the consistency with which the principles were being applied.
Meanwhile, other major AI companies — including Amazon, Microsoft, and Palantir — continued and expanded their defense contracting work. The competitive landscape created pressure: if Google abstained from defense AI markets on ethical grounds and competitors did not, the practical result was that DoD would use AI developed by companies with fewer ethical constraints, not that it would use AI developed by more ethical companies. Some Google employees who had originally protested Project Maven subsequently argued that Google's ethical abstention made defense AI worse, not better — an argument that the consequentialist framework would take seriously.
This subsequent history illustrates a genuine dilemma that virtue ethics must engage: is organizational virtue compatible with competitive markets? Can a single company sustain genuine ethical constraints when competitors are not similarly constrained, without simply ceding the market to less ethical actors?
Part 6: What Organizational Culture Actually Enables or Prevents
The Project Maven case reveals how organizational culture simultaneously enables and prevents ethical AI development.
What Google's culture enabled: The openness of Google's internal culture — the existence of internal forums where dissent could be expressed, the absence of explicit prohibition on collective employee action, the genuine access that employees had to senior leadership — created conditions in which ethical concerns could be raised, heard, and taken seriously. The 4,000-person petition was possible because the organization tolerated and even encouraged employee voice. Many comparable organizations would have suppressed the dissent, dismissed the concerns, or retaliated against organizers.
What Google's culture prevented: The same culture that enabled employee voice after the fact failed to create proactive mechanisms for ethical review before contracts were signed. The contracting decision was made by a relatively small group of business development and leadership personnel, without apparent systematic consultation with employees who would work on the technology, ethicists who might evaluate the implications, or public interest stakeholders who might raise the concerns that eventually emerged organically. The culture of bottom-up ethical correction — reacting to discovered problems — is inferior to a culture of embedded ethical prevention.
The scale problem: Google employs tens of thousands of people across dozens of countries and business lines. Maintaining genuine ethical culture at this scale is a fundamentally different challenge than maintaining it in a 50-person startup. Virtue in large organizations requires institutional structures — ethics review processes, reporting mechanisms, clear accountability lines — because personal virtue among individual employees is insufficient to ensure organizational virtue when the organization is too large for personal relationships and shared norms to govern every decision.
The incentive problem: Google's culture, like most technology company cultures, rewards growth, innovation, and technical excellence. These incentives are not inherently incompatible with ethics, but they create structural pressure against the kind of slow, deliberate, sometimes deal-breaking ethical review that genuine virtue requires. A culture that treats ethical review as a cost center rather than a value center will systematically under-invest in it.
Part 7: Lessons for Leaders
The Project Maven case offers several lessons for leaders responsible for building ethical AI cultures.
Build ethics into the contracting process, not just the product process. The Maven crisis was enabled by a contracting decision that did not include systematic ethical review. By the time employees raised concerns, the contract was signed and the relationship was established. Ethical review must occur before commitments are made, not after.
Create genuine psychological safety for ethical concerns. The employees who signed the Maven letter took real professional risks in doing so. In many organizations, such a public challenge to a leadership decision would result in marginalization or dismissal. That Google took the concerns seriously rather than retaliating reflects well on the organization — but waiting for employees to take those risks is not the right governance model. Leaders should create channels for ethical concerns that do not require employees to organize publicly to be heard.
Distinguish principles from constraints. Google's AI Principles are articulated as values — aspirations that guide development. Genuine ethical constraints are different: they specify categories of activity that the organization will not engage in regardless of business considerations, with enforcement mechanisms to ensure compliance. The Project Maven principles emerged after the fact, at a level of generality that required interpretation to apply, without enforcement mechanisms. Future iterations should be specific, pre-specified, and enforced.
Engage external stakeholders in ethics governance. Internal employee voice, however important, is not sufficient for genuine ethical governance. External stakeholders — civil society organizations, affected communities, independent ethicists, public interest lawyers — bring perspectives that internal employees, however concerned, cannot substitute for. Organizations with genuine ethical commitments create meaningful external engagement in their ethics governance, not just internal deliberation.
Accept the cost of genuine virtue. The lesson of Project Maven is ultimately that genuine virtue has costs. Google paid a cost in declining to renew Maven and in declining to bid for JEDI — both in direct contract revenue and potentially in future DoD relationships. Organizations that are willing to accept such costs have genuine ethical commitments. Organizations that abandon controversial work only when the reputational cost of continuing exceeds the revenue at stake do not.
Part 8: Discussion Questions
- Google's AI Principles were published in the wake of Project Maven, not before it. Does the reactive origin of the principles affect their ethical value — both morally and practically? What would it look like if they had emerged proactively?
- The "competitive market" problem: Google's decision to withdraw from certain defense AI markets means that competitors with fewer ethical constraints win those contracts. A strict consequentialist might argue that Google's ethical abstention makes AI-enabled warfare worse, not better. How should an organization weigh this argument? Does virtue ethics have an answer?
- The Project Maven letter was signed by 4,000 employees out of approximately 70,000 — roughly 6 percent of the workforce. Should employee voice of this magnitude be determinative in corporate ethical decisions? What are the arguments for and against employee democracy in corporate AI ethics?
- Virtue ethics holds that character is revealed in difficult moments. Based on the full arc of the Project Maven case — the contracting decision, the employee response, the AI Principles, the Maven non-renewal, and the subsequent controversies — how would you characterize Google's organizational virtue? What evidence supports your assessment, and what evidence complicates it?
This case study connects to Section 3.4 (Virtue Ethics) and Section 3.10 (Ethics in Organizational Practice) of the main chapter. Primary sources include: Daisuke Wakabayashi and Scott Shane, "Google Will Not Renew Pentagon Contract That Upset Employees," New York Times, June 1, 2018; Shannon Vallor and Bendert Zevenbergen, "How to Think About AI Ethics for the 21st Century," IEEE Spectrum, 2019; and Sundar Pichai, "AI at Google: Our Principles," Google Blog, June 7, 2018.