Case Study: Ethics-Washing: When Corporate Ethics Is Performance

"Ethics-washing is the data age's equivalent of greenwashing: a public relations strategy dressed in the language of moral philosophy." -- Ben Wagner, "Ethics as an Escape from Regulation"

Overview

In 2019, a new term entered the data governance lexicon: "ethics-washing." Borrowed from the concept of greenwashing in environmental policy, ethics-washing describes the practice of using the appearance of ethical commitment — published principles, advisory boards, public pledges — to deflect calls for meaningful regulation or operational change. This case study examines the phenomenon across multiple organizations and industries, analyzing the structural dynamics that produce it, the signals that distinguish genuine ethics from performance, and the consequences for the communities who depend on corporate ethics commitments being real.

Skills Applied:

  • Identifying the indicators of ethics-washing versus genuine ethics practice
  • Analyzing institutional dynamics that produce performative ethics
  • Evaluating the relationship between corporate ethics and regulatory avoidance
  • Connecting ethics-washing to the power asymmetry, consent fiction, and accountability gap themes


The Phenomenon

What Is Ethics-Washing?

Ethics-washing occurs when an organization uses the language, symbols, and structures of ethical governance to create the appearance of responsible practice without making substantive operational changes. The organization performs ethics — publishing principles, appointing officers, forming boards — while continuing business practices that the principles would prohibit if applied rigorously.

The term was coined by Ben Wagner, a professor at the Vienna University of Economics and Business, who observed a pattern in the European technology policy landscape: as regulatory pressure on the tech industry intensified (culminating in the GDPR and early discussions of the AI Act), major technology companies increasingly adopted the language of ethics as a strategy for demonstrating that self-regulation was sufficient and statutory regulation unnecessary.

"The deployment of ethics," Wagner argued, "serves as a strategy to delay, dilute, or substitute for enforceable regulation."

The Ethics-Washing Playbook

Across industries, ethics-washing follows recognizable patterns:

Step 1: Publish principles. When public pressure or regulatory scrutiny mounts, announce a set of ethical principles. Make them aspirational and broad — Fairness, Transparency, Responsibility, Human-Centeredness. Avoid specifics that could be measured or enforced.

Step 2: Appoint visible leadership. Create a Chief Ethics Officer role or form an external advisory board. Announce it with a press release. Ensure the appointees are respected figures whose names lend credibility.

Step 3: Invest in research. Fund academic ethics research at prestigious universities. The research is genuine, but the funding relationship creates an implicit expectation of favorable — or at least non-confrontational — analysis.

Step 4: Participate in multi-stakeholder initiatives. Join industry consortia, sign pledges, participate in conferences. These activities create the impression of a company deeply engaged with ethical questions.

Step 5: Continue business as usual. Underneath the visible ethics infrastructure, the core business practices remain unchanged. The data collection doesn't shrink. The algorithmic systems aren't restructured. The surveillance products are still sold. The ethics program generates reports; the product teams ship features.

The Asymmetry of Evidence

Ethics-washing is difficult to prove definitively because genuine ethics programs and performative ones can look identical from the outside. Both publish principles. Both form committees. Both produce reports. The difference lies in operational impact — and operational impact is largely invisible to external observers.

This asymmetry benefits the ethics-washer. A company can point to its published principles, its advisory board, and its conference participation as evidence of genuine commitment. Critics who argue that these activities produce no meaningful change bear the burden of proving a negative — demonstrating that internal practices haven't changed, which requires access to internal documents and processes that the company controls.


Case Examples

Google's Advanced Technology External Advisory Council (ATEAC)

In March 2019, Google announced the formation of ATEAC, an external advisory council intended to provide ethical guidance on Google's advanced technology projects. The council had eight members drawn from academia, industry, and civil society.

ATEAC lasted just over a week.

Why it failed: The council included Kay Coles James, then president of the Heritage Foundation, whose organization had a record of opposing LGBTQ+ rights and supporting anti-immigration policies. Over 2,500 Google employees signed a petition opposing James's appointment, arguing that her views were incompatible with an ethics board reviewing technologies that directly affect marginalized communities.

Google dissolved the council on April 4, 2019 — just over a week after announcing it.

The ethics-washing analysis: ATEAC exhibited multiple ethics-washing indicators:

  • No charter or process. The council was announced without a published charter, review methodology, or decision-making authority. It was a board without a job.
  • No integration with product development. ATEAC was external and advisory, structurally disconnected from the engineering decisions that determine how Google's AI is built and deployed.
  • Composition reflected political balance, not ethical expertise. Members were selected to represent a "diversity of perspectives" — but diversity of political opinion is not the same as diversity of ethical expertise. The inclusion of someone whose organization worked against the rights of communities affected by AI was not "balance" — it was a governance design failure.
  • Rapid dissolution suggests shallow commitment. A genuine governance mechanism would have been designed with sufficient care to survive its first controversy. ATEAC's immediate dissolution suggests it was never built to be durable.

What happened afterward: Google shifted to internal responsible AI processes — the Responsible AI and Human-Centered Technology team — and published AI Principles. But the subsequent firing of prominent AI ethics researchers Timnit Gebru (December 2020) and Margaret Mitchell (February 2021) raised further questions about whether Google's ethics infrastructure constrains its most consequential decisions.

The European AI Ethics High-Level Expert Group

In June 2018, the European Commission appointed a High-Level Expert Group on AI (AI HLEG) to develop ethical guidelines for trustworthy AI. The group produced the "Ethics Guidelines for Trustworthy AI" in April 2019 — a widely cited document that proposed seven requirements for AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.

The ethics-washing critique: Corporate members of AI HLEG were accused of using the guidelines process to advocate for voluntary self-regulation as an alternative to binding EU legislation. Internal documents reported by Politico showed that industry representatives pushed to weaken specific provisions that would have constrained commercial AI deployment.

The critique was not that the guidelines were bad — many provisions were substantive and influential. The critique was that the process of developing voluntary ethics guidelines served as a strategy for delaying the binding regulatory framework that became the EU AI Act. By the time the AI Act was adopted in 2024, five years had passed — five years during which the industry operated under voluntary guidelines rather than enforceable law.

"Every month spent discussing voluntary ethics principles," one advocacy group argued, "was a month not spent implementing enforceable rules."

Facebook's "Privacy Principles"

In 2018, in the aftermath of the Cambridge Analytica scandal, Facebook published a set of privacy principles emphasizing user control, data minimization, and transparency. The principles were presented as evidence of the company's commitment to responsible data practice.

The gap between principles and practice: Between 2018 and 2022, Facebook (now Meta) continued practices that contradicted its published principles:

  • User control: Facebook's privacy settings remained notoriously complex, with defaults set to maximize data sharing. Research by Consumer Reports found that fully "locking down" a Facebook account required navigating 55 separate privacy settings across multiple menus.
  • Data minimization: Meta continued to collect vast quantities of user data, including off-platform tracking through the Meta Pixel and detailed behavioral profiling.
  • Transparency: The company's advertising data practices remained opaque, with researchers consistently reporting difficulty obtaining clear information about how user data was used for ad targeting.

The Irish Data Protection Commission fined Meta 1.2 billion euros in 2023 for transferring EU user data to the United States in violation of GDPR — the largest GDPR fine in history. The fine was levied during the same period that Meta was publicly promoting its privacy principles.


Structural Dynamics

Why Ethics-Washing Happens

Ethics-washing is not simply corporate cynicism. It emerges from structural dynamics that make performative ethics rational for organizations:

Regulatory pressure. When regulation threatens profitable practices, demonstrating "self-regulation" through voluntary ethics commitments can delay or weaken statutory requirements. If an organization can credibly claim that it is governing itself responsibly, the political argument for external regulation weakens.

Public pressure. Media coverage of data harms creates reputational risk. An ethics program — visible, quotable, deployable in PR responses — helps manage that risk. The program's effectiveness at managing reputation is independent of its effectiveness at changing behavior.

Talent competition. Engineers and data scientists increasingly want to work for ethical organizations. An ethics program helps attract talent, regardless of whether the program constrains the organization's most consequential decisions.

The measurement problem. Genuine ethics practice is difficult to measure externally. How do you verify that a company's internal review process actually changed a product? How do you know that an ethics committee's recommendation was followed rather than overruled? The difficulty of measurement creates space for organizations to claim ethical practice without demonstrating it.

How to Distinguish Genuine Ethics from Performance

The chapter's framework suggests several diagnostic signals:

Signal              | Genuine Ethics                                                          | Ethics-Washing
Authority           | Ethics bodies can block or modify products                              | Ethics bodies advise but cannot constrain
Independence        | External members, separate reporting, budget independence               | Internal members, reports to business leadership
Transparency        | Publishes decisions, disagreements, overrides                           | Publishes only principles and positive outcomes
Resource allocation | Ethics function grows with product deployment                           | Ethics function is static or shrinks during scaling
Consequences        | Products have been changed, delayed, or cancelled for ethical reasons   | No evidence of products being constrained
Discomfort          | Produces findings the organization would rather not discuss             | Produces only affirmative findings

Sofia Reyes put it simply during a panel at the DataRights Alliance annual conference: "If an ethics program has never told the company 'no,' it's not an ethics program. It's a press release."


The Consequences

For Regulation

Ethics-washing can undermine the regulatory process. When organizations point to voluntary ethics commitments as evidence that regulation is unnecessary, policymakers may delay statutory action. This delays the enforceable protections that affected communities need. The five-year gap between the AI HLEG's voluntary guidelines and the binding AI Act represents millions of decisions made under unenforceable standards.

For Genuine Ethics Efforts

Ethics-washing creates cynicism that damages genuine ethics programs. When high-profile ethics initiatives collapse (ATEAC), when ethics teams are laid off during AI scaling (Microsoft), when ethics principles coexist with record GDPR fines (Meta), the public learns to distrust all corporate ethics claims. Organizations with genuine programs find it harder to be believed.

For Affected Communities

Most consequentially, ethics-washing leaves affected communities unprotected. Communities subjected to biased algorithmic policing, patients whose medical data is mishandled, and consumers whose data is exploited receive no benefit from an ethics program that exists on paper but does not constrain the practices that harm them.


Discussion Questions

  1. The identification problem. How can external observers — journalists, regulators, affected communities — distinguish genuine ethics programs from ethics-washing? What information would they need, and how could they obtain it?

  2. The self-regulation question. Is voluntary corporate ethics governance ever sufficient, or should enforceable regulation always be the primary mechanism? Can you construct a scenario where voluntary governance would be superior to regulation?

  3. The funding dilemma. Many AI ethics researchers receive funding from the technology companies they study. Does this create an inherent conflict of interest? How should the ethics research community manage this dependency?

  4. The reform question. If you were advising a company that has been accused of ethics-washing, what specific changes would you recommend to transform its program from performative to genuine? Be concrete.

  5. Connecting themes. How does ethics-washing relate to the power asymmetry? Who benefits from the appearance of ethical governance, and who bears the cost when the appearance is not matched by substance?


Your Turn: Mini-Project

Option A: Ethics-Washing Audit. Select a company that has published data or AI ethics principles. Research the company's actual data practices (through news reports, regulatory actions, privacy policy analysis, and public filings). Write a two-page assessment evaluating whether the company's practices align with its stated principles. Use the genuine-vs-performance signals table from this case study.

Option B: Regulatory Strategy Analysis. Research the lobbying activities of a major technology company during the development of a specific regulation (GDPR, AI Act, CCPA, or a sector-specific rule). Assess whether the company's ethics commitments were deployed as arguments against regulation. Write a one-page analysis.

Option C: Alternative Governance Design. If voluntary ethics governance is insufficient but pure regulation is slow and blunt, what governance mechanisms could fill the gap? Propose a hybrid model that combines elements of corporate ethics programs, regulatory oversight, and community accountability. Ground your proposal in specific lessons from this case study.


References

  • Wagner, Ben. "Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping." In Being Profiled: Cogitas Ergo Sum, edited by Emre Bayamlioglu et al., 84-89. Amsterdam: Amsterdam University Press, 2019.

  • Metcalf, Jacob, Emanuel Moss, and danah boyd. "Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics." Social Research 86, no. 2 (2019): 449-476.

  • Bietti, Elettra. "From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy." In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20), 210-219.

  • Jobin, Anna, Marcello Ienca, and Effy Vayena. "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence 1 (2019): 389-399.

  • Hagendorff, Thilo. "The Ethics of AI Ethics: An Evaluation of Guidelines." Minds and Machines 30 (2020): 99-120.

  • Rességuier, Anaïs, and Rowena Rodrigues. "AI Ethics Should Not Remain Toothless! A Call to Action." Big Data & Society 7, no. 2 (2020).

  • Rességuier, Anaïs, and Rowena Rodrigues. "Ethics as Attention to Context: Recommendations for the Ethics of Artificial Intelligence." Open Research Europe 1, no. 27 (2021).

  • Greene, Daniel, Anna Lauren Hoffmann, and Luke Stark. "Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning." In Proceedings of the 52nd Hawaii International Conference on System Sciences, 2122-2131. 2019.