Case Study: Facebook and Epistemic Power — The 2021 Whistleblower Documents
"The thing I saw over and over again at Facebook was that there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimize for its own interests." — Frances Haugen, testimony before the U.S. Senate Commerce Subcommittee, October 5, 2021
Overview
In September 2021, former Facebook product manager Frances Haugen disclosed thousands of pages of internal company documents to the Wall Street Journal, the U.S. Securities and Exchange Commission, and the U.S. Congress. These documents — which became known as the Facebook Papers — revealed something remarkable: Facebook's own researchers had produced extensive internal studies demonstrating that the platform's products caused measurable harm to users, particularly teenagers, and that the company's algorithms amplified divisive and inflammatory content. More remarkable still, the documents showed that Facebook's leadership was aware of these findings and, in many cases, chose not to act on them or actively suppressed them.
The Haugen disclosures are not simply a story about a company behaving badly. They are a case study in epistemic power — the ability to control what is known, by whom, and on what terms. Facebook possessed knowledge about its own effects that no one else had access to. It used this knowledge asymmetry to shape public discourse, deflect regulatory scrutiny, and maintain the information conditions that protected its business model.
This case study examines the Facebook Papers through the theoretical lenses of Chapter 5: information asymmetry, epistemic power, the transparency paradox, epistemic injustice, and the relationship between knowledge and accountability.
Skills Applied:
- Analyzing epistemic power and information asymmetry in platform governance
- Identifying how knowledge suppression functions as a form of power
- Evaluating whistleblowing as a form of resistance within power/knowledge dynamics
- Applying the data justice framework to platform accountability
The Situation: What Facebook Knew
The Internal Research
Between 2018 and 2021, researchers within Facebook's Integrity team, Core Data Science team, and other internal groups produced dozens of studies examining the platform's effects on users and society. The disclosed documents reveal findings across several domains:
Harm to teenage users. In a 2019 internal presentation titled "Teen Mental Health Deep Dive," Facebook researchers found that Instagram — owned by Facebook, the company that later renamed itself Meta — made body image issues worse for one in three teenage girls. The presentation stated: "We make body image issues worse for one in three teen girls" and "Teens blame Instagram for increases in the rate of anxiety and depression." A 2021 internal study found that among teenagers who reported suicidal ideation, 13% of British users and 6% of American users traced the ideation to Instagram specifically.
Amplification of divisive content. Internal researchers found that Facebook's core ranking algorithm — the system that determines what appears in users' News Feeds — systematically amplified content that provoked outrage and division. A 2018 internal memo noted: "Our algorithms exploit the human brain's attraction to divisiveness." Researchers proposed changes to the algorithm that would reduce the amplification of inflammatory content, but these proposals were rejected or watered down because they would reduce engagement metrics — the numbers that drive advertising revenue.
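To make the amplification mechanism concrete, here is a minimal sketch of engagement-weighted ranking, loosely inspired by the "meaningful social interactions" style of ranking discussed in the Hagey and Horwitz article cited in the references. The `Post` structure, the weights, and the example posts are all invented for illustration; nothing here reproduces Facebook's actual system.

```python
# Minimal sketch of engagement-based feed ranking (illustrative only).
# The Post fields, weights, and example posts are hypothetical, not Facebook's.

from dataclasses import dataclass


@dataclass
class Post:
    title: str
    likes: int
    comments: int
    reshares: int
    angry_reactions: int


# Hypothetical weights: comments and reshares count far more than likes,
# because they predict further engagement and spread.
WEIGHTS = {"likes": 1.0, "comments": 15.0, "reshares": 30.0, "angry_reactions": 5.0}


def engagement_score(post: Post) -> float:
    """Score a post by summing its weighted engagement signals."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["reshares"] * post.reshares
            + WEIGHTS["angry_reactions"] * post.angry_reactions)


posts = [
    Post("Local library extends weekend hours", likes=900, comments=40,
         reshares=10, angry_reactions=2),
    Post("Outrage-bait claim about a rival political group", likes=300,
         comments=400, reshares=250, angry_reactions=500),
]

# Rank the feed purely by engagement: the divisive post scores roughly nine
# times higher than the calmer post despite earning far fewer likes.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>8.0f}  {post.title}")
```

The structural point is the one the internal researchers made: under an objective like this, downweighting inflammatory signals reduces the very metric the business optimizes, which is why proposed fixes met resistance.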
Political polarization. Facebook's researchers documented that the platform's recommendation systems — "Groups You Should Join," "Pages You Might Like" — funneled users toward increasingly extreme political content. A 2016 internal presentation by researcher Monica Lee found that 64% of all extremist group joins were attributable to Facebook's own recommendation tools: the recommendation engine was not merely reflecting existing extremism but actively growing it.
Disinformation and public health. Internal analyses showed that Facebook's systems were ineffective at catching harmful health misinformation, including anti-vaccine content. Researchers estimated that the company detected and acted on only a small fraction of violating content. In some countries, particularly in the Global South, content moderation capacity was negligible, allowing the platform to operate with virtually no safety infrastructure.
Differential global impact. The documents revealed that Facebook allocated the vast majority of its content moderation and safety resources to English-language content and markets in the United States and Western Europe. Countries in Africa, Southeast Asia, the Middle East, and Latin America — where Facebook had hundreds of millions of users — received minimal investment in safety infrastructure. Internal researchers warned that this created conditions for the platform to be weaponized for political violence, ethnic conflict, and authoritarian manipulation.
What Facebook Did With This Knowledge
The documents show a consistent pattern across each domain of harm:
- Internal research identified the problem. Facebook's researchers — many of them genuinely concerned about the platform's effects — produced rigorous analyses documenting specific harms.
- Findings were circulated internally. The research was presented to leadership, including senior executives and, in some cases, CEO Mark Zuckerberg.
- Proposals for reform were weakened or shelved. When researchers proposed changes that would reduce harmful effects but also reduce engagement metrics, leadership chose engagement. The documents contain multiple instances of safety recommendations being overridden by growth teams.
- External communications denied or minimized the harms. While internal researchers documented Instagram's harmful effects on teenagers, the company's public statements, including testimony to Congress, claimed that the platform's effects on teen mental health were neutral or positive. In March 2021, well over a year after the internal "Teen Mental Health Deep Dive" was circulated, CEO Mark Zuckerberg told a congressional hearing that the research he had seen suggested social apps could have positive mental-health benefits; after the documents became public, Facebook's global head of safety, Antigone Davis, testified that the internal research was "not causal" and that Instagram was broadly beneficial for teenagers.
Power Analysis
Epistemic Power as Described in Section 5.3.1
Section 5.3.1 defines epistemic power as the ability to shape what people know and believe. Facebook exercised epistemic power in two distinct registers:
Platform-level epistemic power. Through its algorithms, Facebook shaped what its 2.9 billion users saw, read, and engaged with. The internal documents confirm that this shaping was not neutral: algorithms amplified divisive content, funneled users toward extremism, and prioritized engagement over accuracy. The platform did not merely reflect public discourse — it actively constructed the informational environment in which public discourse occurred.
Meta-level epistemic power. Beyond shaping what users saw on the platform, Facebook shaped what the public knew about Facebook itself. By controlling access to internal research, the company maintained an information monopoly on the most important questions about its own effects. Regulators, journalists, researchers, and users could speculate about the platform's harms, but only Facebook possessed the data necessary to answer the questions definitively. And Facebook chose not to share that data — or, worse, to actively misrepresent what the data showed.
This meta-level epistemic power is what makes the Haugen disclosures so significant. The problem was not merely that Facebook caused harm; it was that Facebook knew about the harm and controlled access to the knowledge that would have allowed others to hold it accountable.
Information Asymmetry at Scale
The information asymmetry between Facebook and the public was not a gap — it was a chasm:
| Facebook Knew | The Public Knew |
|---|---|
| Instagram worsened body image issues for 1 in 3 teen girls | Facebook's public messaging claimed Instagram was broadly beneficial |
| The recommendation algorithm drove 64% of extremist group joins | Users experienced radicalization but could not prove the algorithm's role |
| Proposed safety changes were rejected to protect engagement metrics | The public assumed that safety was a genuine priority |
| Content moderation in the Global South was negligible | Users in affected countries had no basis for comparison |
| Internal researchers warned of specific, documented harms | External researchers had to rely on limited, controlled data releases |
This asymmetry was not accidental. It was maintained through deliberate corporate strategy: restricting researcher access to platform data, managing public communications to minimize harm disclosures, and lobbying against transparency requirements that would have narrowed the gap.
The Transparency Paradox, Inverted
Section 5.2.2 describes how transparency can be formally satisfied while substantively failing. The Facebook case presents an inversion: the company did not even offer formal transparency. It actively prevented transparency by:
- Restricting external researchers' access to platform data through tightly controlled programs (like Social Science One) that provided limited, curated datasets
- Publicly disputing findings from external researchers who identified harms the company's own internal research had already confirmed
- Framing its own internal research as "preliminary" or "non-causal" in public settings, even when the research was presented internally as conclusive
When transparency did arrive — through Haugen's disclosures — it arrived not through the company's governance structures but through an act of resistance: whistleblowing.
Epistemic Injustice: Whose Testimony Was Credible?
For years before the Haugen disclosures, individuals and communities had reported harms from Facebook's platform:
- Teenagers and their parents described Instagram's damaging effects on body image and self-worth.
- Civil rights organizations documented the platform's role in amplifying hate speech and enabling coordinated harassment.
- Researchers in the Global South warned that Facebook was being weaponized for ethnic violence (most devastatingly in Myanmar, where a UN investigation found Facebook played a "determining role" in the genocide of the Rohingya people).
- Media literacy advocates argued that recommendation algorithms were radicalizing users.
In each case, these testimonies were met with skepticism, deflection, or outright denial — from Facebook, from sympathetic commentators, and sometimes from regulators. The consistent response was some variant of: Where is the evidence? The data doesn't support that claim. These are anecdotal concerns. The platform's effects are complex and not well understood.
This response constituted testimonial injustice on a massive scale. The claims of affected communities were systematically devalued — not because the claims were wrong, but because the evidence needed to substantiate them was locked inside the very company whose practices were being challenged. Facebook's control of the data created a catch-22: communities could report harms, but without access to Facebook's internal data, their reports could be dismissed as subjective and unverifiable. Facebook then used its exclusive access to the data to tell a different story.
The Haugen documents shattered this dynamic by making visible what Facebook's researchers had known all along: the community reports were accurate. The platform was causing the harms that affected people said it was causing. The injustice lay not in the final vindication but in the years of dismissal that preceded it — years during which real people experienced real harm while their testimony was treated as less credible than the company's curated public statements.
Hermeneutical Injustice and Platform Effects
Section 5.5.1 describes hermeneutical injustice as the condition of lacking conceptual resources to understand one's own experience. Before the Haugen disclosures, many users experienced the platform's effects — feeling worse after using Instagram, noticing that political content seemed increasingly extreme, sensing that the algorithm was manipulating their attention — without having a clear framework for understanding why.
The vocabulary that the disclosures made available — "engagement-based ranking," "algorithmic amplification," "cross-platform recommendation pathways" — gave users and the public the conceptual tools to name what they had experienced. This shift from vague unease to articulable critique is exactly the transition from hermeneutical injustice to hermeneutical justice that Fricker describes.
Whistleblowing as Resistance
Frances Haugen's disclosures can be understood through the resistance framework in Section 5.3.3. Her action combined elements of multiple resistance strategies:
Sousveillance. Haugen turned the tools of observation back on the institution. By collecting internal documents, she subjected Facebook to the kind of scrutiny it normally reserved for its users — exposing the gap between the company's internal knowledge and its external claims.
Counter-data. The disclosed documents functioned as counter-data — evidence produced from within the institution that contradicted the institution's public narrative. Facebook claimed its effects were benign; Facebook's own research showed otherwise.
Data activism. Haugen did not simply leak documents. She strategically shared them with journalists (the Wall Street Journal's "Facebook Files" series), filed formal complaints with the SEC, testified before Congress, and spoke publicly to maximize the disclosures' policy impact. This combination of legal, media, and legislative strategies is a textbook case of data activism.
The effectiveness of Haugen's resistance depended on a specific condition: she was a former insider with access to the company's internal knowledge. This raises a troubling implication. If accountability for platform power depends on insider whistleblowers, the system is structurally fragile. Whistleblowers face enormous personal and professional risks. Most employees with knowledge of harmful practices do not come forward. A governance system that relies on individual acts of courage rather than institutional mechanisms for accountability is, by definition, inadequate.
Discussion Questions
- Knowledge as power, knowledge as liability. Facebook's internal research made it more knowledgeable about its effects — and also more culpable for failing to act. Does producing internal research about potential harms create an ethical obligation to act on the findings? What if acting on them would reduce profits? Should companies be required to disclose internal safety research?
- The information monopoly. Facebook possessed data about its own effects that no external researcher could replicate. Analyze this situation using the information asymmetry framework from Section 5.2. What structural changes would reduce this asymmetry? Consider: mandatory data sharing with independent researchers, regulatory access to internal datasets, or platform transparency requirements.
- Testimonial injustice and timing. Communities reported Instagram-related harms to teenagers for years before the Haugen disclosures. After the disclosures, these same claims were treated as credible. What changed — the evidence, or the source of the evidence? What does this tell us about whose testimony is valued in technology governance debates?
- Mira's dilemma. Imagine Mira working at VitraMed and discovering internal research showing that the company's predictive health model produces significantly less accurate results for patients from racial minorities — a finding the company has not disclosed publicly. Using the frameworks from Chapter 5, analyze her situation. What forms of power constrain her? What forms of resistance are available? What should she do?
- Platform governance after Haugen. The Haugen disclosures generated significant public outrage and congressional hearings. But as of 2025, no comprehensive federal platform governance legislation has been enacted in the United States. Using the concepts of structural power, the transparency paradox, and the accountability gap, explain why knowledge of harm has not translated into governance reform. What would need to change?
Your Turn: Mini-Project
Option A: Document Analysis. Read one article from the Wall Street Journal's "Facebook Files" series (available online). Identify: (a) what internal finding the article describes, (b) what the company said publicly about the same issue, (c) the information asymmetry between the company and the public, and (d) which form of epistemic injustice (if any) was involved. Write a two-page analysis using at least two frameworks from Chapter 5.
Option B: Comparative Whistleblower Analysis. Compare Frances Haugen's disclosures to another technology whistleblower case — Edward Snowden's NSA revelations (2013), Christopher Wylie's Cambridge Analytica disclosures (2018), or Timnit Gebru's forced departure from Google AI (2020). Write a 1,500-word comparison analyzing: (a) what knowledge was disclosed, (b) what information asymmetry it revealed, (c) how the institution responded, and (d) what governance changes (if any) resulted.
Option C: Policy Proposal. Draft a one-page policy proposal for a "Platform Internal Research Transparency Act" that would require social media companies to disclose safety-relevant internal research to a designated regulatory body. Address: (a) what would be required, (b) how proprietary information would be protected, (c) what enforcement mechanisms would apply, and (d) how the policy would address the epistemic power dynamics identified in this case study.
References
- Haugen, Frances. Testimony before the U.S. Senate Committee on Commerce, Science, and Transportation, Subcommittee on Consumer Protection, Product Safety, and Data Security. October 5, 2021.
- Wells, Georgia, Jeff Horwitz, and Deepa Seetharaman. "Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show." Wall Street Journal, September 14, 2021.
- Hagey, Keach, and Jeff Horwitz. "Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead." Wall Street Journal, September 15, 2021.
- United Nations Human Rights Council. "Report of the Independent International Fact-Finding Mission on Myanmar." A/HRC/39/64, September 12, 2018.
- Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019.
- Fricker, Miranda. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press, 2007.
- Persily, Nathaniel, and Joshua A. Tucker (eds.). Social Media and Democracy: The State of the Field and Prospects for Reform. Cambridge: Cambridge University Press, 2020.