Case Study 1: Frances Haugen and the Facebook Papers

What the Whistleblower Revealed


Background

In the fall of 2021, Frances Haugen walked into the United States Capitol and told Congress what she had spent eighteen months carefully documenting: that Facebook knew its products were causing harm, had the internal research to prove it, and had repeatedly chosen not to act when acting would reduce engagement metrics and, by extension, revenue.

It was not the first time someone had raised concerns about Facebook's practices. Researchers, journalists, and former employees had been raising concerns publicly about algorithmic amplification, misinformation, and mental health effects for years. What made Haugen's disclosure categorically different was the evidence. She had not described what she believed was happening. She had provided tens of thousands of pages of internal documents — research studies, executive memos, product review slides, internal survey data — that showed what Facebook knew, when it knew it, and what it chose to do with that knowledge.

The documents were provided simultaneously to a consortium of news organizations — The Wall Street Journal, The New York Times, The Washington Post, CNN, NPR, and others — and to regulators in the United States and Europe, under the protection of the Securities and Exchange Commission's whistleblower program. The Wall Street Journal had begun publishing its "Facebook Files" series in September 2021; on October 5, Haugen testified before the Senate; and later that month the wider consortium published what became known as the "Facebook Papers." The resulting coverage ran for weeks across dozens of publications and represented the most comprehensive public accounting of a platform company's internal decision-making that had ever been produced.


Who Was Frances Haugen?

Frances Haugen was not a disgruntled junior employee with a narrow grievance. She was 37 years old, with a degree in electrical and computer engineering from Olin College of Engineering and an MBA from Harvard Business School. Before joining Facebook, she had worked at Google, Yelp, Pinterest, and Gigster — a career trajectory that gave her substantial comparative experience with platform company cultures.

She joined Facebook in 2019, specifically requesting assignment to the civic integrity team. Her motivation was personal: a close friend had been radicalized through online communities, drawn into conspiracy theories through a process that Haugen understood involved algorithmic recommendation. She wanted to work on political misinformation because she believed that was where she could do the most good.

Her assignment to civic integrity proved, in retrospect, to be strategically important for what came later. The civic integrity team worked on the intersection of platform design and democratic processes — election integrity, misinformation about voting, radicalization pathways. This placed her in a position to observe and document some of the most sensitive intersections between Facebook's product decisions and their political and social effects.

She spent approximately two years at Facebook. During this time, she began systematically downloading and preserving internal documents — a process she has described as motivated by a growing conviction that the company would not change from within, that the public and regulators needed to see what she was seeing, and that the internal channels for raising concerns were producing no meaningful results.


What the Documents Showed

The Facebook Papers disclosed findings across multiple domains. The core pattern across all of them was the same: Facebook had conducted research showing harm, knew the research's implications, and had not acted on it in ways that would reduce engagement.

The Recommendation Rabbit Hole

Internal research from 2019 documented what researchers described as a "rabbit hole" effect in Facebook's group recommendation systems. Users who joined one politically oriented group were systematically recommended progressively more extreme groups. The research team modeled this as an "engagement trap": the more extreme content generated stronger emotional reactions, stronger reactions generated more engagement (likes, comments, shares), and the algorithm interpreted this engagement as a signal of relevance, recommending still more extreme content.

Facebook's data scientists had proposed interventions that would reduce this effect — modifications to the recommendation algorithm that would weight away from increasingly extreme content. The interventions were tested. They reduced the rabbit hole effect. They were not implemented at scale. Internal documents showed that the projected impact included reduced engagement, which translated to reduced time-on-platform, which translated to reduced advertising revenue.
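The dynamic described here — engagement read as relevance, feeding back into ever more extreme recommendations, and a reweighting intervention that damps the drift at a measurable cost in engagement — can be illustrated with a toy simulation. This is a sketch under invented assumptions (engagement proportional to an item's "extremity" score, a simple moving-average model of the user); the function names, parameters, and numbers are hypothetical and bear no relation to Facebook's actual systems.

```python
import random

def clamp(x):
    return min(1.0, max(0.0, x))

def engagement(x):
    # Toy assumption: reactions, and hence engagement, scale with extremity.
    return x

def simulate(rounds=300, penalty=0.0, seed=7):
    """Run the feedback loop; return final user position and total engagement.

    penalty=0.0 models ranking purely by predicted engagement;
    penalty>0 models the tested-but-unshipped reweighting, a quadratic
    cost on extremity that caps how far the recommendations can drift.
    """
    rng = random.Random(seed)
    position = 0.1  # user's current place on a 0..1 extremity scale
    total = 0.0
    for _ in range(rounds):
        # Candidate items cluster around the recommender's current model
        # of the user, with some spread toward more/less extreme content.
        candidates = [clamp(position + rng.gauss(0.0, 0.15)) for _ in range(8)]
        # Rank by predicted engagement minus the optional extremity penalty.
        chosen = max(candidates, key=lambda x: engagement(x) - penalty * x * x)
        total += engagement(chosen)
        # Feedback: the recommender reads engagement as relevance and shifts
        # its model of the user toward whatever was engaged with.
        position = 0.8 * position + 0.2 * chosen
    return position, total

if __name__ == "__main__":
    drift, eng = simulate(penalty=0.0)
    capped, eng2 = simulate(penalty=1.0)
    print(f"no intervention:   position={drift:.2f}, engagement={eng:.1f}")
    print(f"with intervention: position={capped:.2f}, engagement={eng2:.1f}")
```

Running both variants side by side reproduces the tradeoff the documents describe: the penalized recommender stops drifting toward the extreme end of the scale, and total engagement drops accordingly — which is exactly why, in the internal projections, the intervention looked expensive.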

The deliberateness of this non-action was striking. This was not a case of the company being unaware of the problem. It was a case of the company being aware, having tested a solution, finding that the solution worked, and then deciding not to implement it because of the cost in engagement metrics.

Instagram and Teenage Mental Health

Among the most widely reported findings from the Facebook Papers was a set of internal Instagram research documents dating from 2019 to 2021, studying the platform's effects on teenage users — particularly teenage girls.

One internal presentation, which became an emblematic document of the entire Papers disclosure, included the finding: "We make body image issues worse for one in three teen girls." More specifically, the research found that 32% of teenage girls who reported already feeling bad about their bodies said that Instagram made them feel worse. The research documented social comparison dynamics specific to Instagram's visual and algorithmic design: the platform's emphasis on appearance, lifestyle imagery, and follower counts created comparison pressures that were measurable in their effects on self-perception.

A related internal study found that teenagers themselves reported being aware of the harmful effects but unable to stop using the platform. One document quoted teenage users saying things like "I know this is bad for me but I keep doing it anyway" — testimony that directly echoed the behavioral profiles associated with addictive products.

Instagram had, at the time of this research, approximately one billion users globally. A significant proportion of them were teenagers. The research was complete and internally circulated. It was not published. It was not used to substantially modify the platform's design.

Vaccine Misinformation and Public Health

The Facebook Papers also documented the company's failure to adequately address vaccine misinformation during the critical period of the COVID-19 pandemic. Internal documents showed that Facebook's systems were amplifying vaccine misinformation — posts claiming vaccines were ineffective, dangerous, or connected to various conspiracy theories — even as the company made repeated public commitments to addressing health misinformation.

The gap between public commitment and internal reality was starkly illustrated by one internal document that showed the 10 most-engaged posts on Facebook for a given period in 2021. The list was dominated by posts from sources the company's own internal systems had flagged as misinformation. The document was circulated internally; it did not produce the scale of intervention that might have been expected given the stakes of the ongoing pandemic.

Ethnic Violence and the Global South

Perhaps the most disturbing set of findings in the Papers concerned Facebook's role in facilitating ethnic violence in countries including Ethiopia, India, and Myanmar. The Myanmar case had been extensively reported before the Papers — the company's own commissioned report in 2018 acknowledged that Facebook had played a "contributing role" in the Rohingya genocide. The Papers extended this picture.

Internal documents showed that Facebook had been repeatedly warned — by employees, by researchers, by NGO partners — that its systems were being used to spread hate speech and incitement to violence in countries where the platform was, for many users, effectively the internet. The documents showed that the company's content moderation resources were heavily concentrated on the United States and Western Europe, with minimal resources deployed for non-English languages.

This was not primarily a case of the algorithm behaving unexpectedly. It was a case of resource allocation decisions: Facebook had chosen to invest content moderation resources where its business interests were concentrated, and had underinvested in markets that were demographically large but economically less central to its advertising business. People in Ethiopia and Myanmar faced violence facilitated by a platform that had not assigned adequate resources to their safety because their safety was not connected to sufficient revenue to prioritize it.


Facebook's Response

Facebook's response to the Papers combined several strategies. The company disputed specific characterizations of research findings, argued that individual documents were being taken out of context, pointed to the extensive investment it had made in safety and integrity work, and questioned Haugen's standing and motives.

Mark Zuckerberg responded in October 2021 in a post on his own Facebook page, writing that the coverage was being used to paint "a false picture of our company" and arguing: "At the heart of these accusations is this idea that we prioritize profit over safety and well-being. That's just not true." He noted that Facebook had invested heavily in safety research. He did not dispute the specific findings documented in the internal studies, and the internal documents showing recommendation changes rejected because they would reduce engagement went unaddressed in his statement.

Facebook subsequently hired a public relations firm to organize a coordinated response. The company rebranded as Meta in late October 2021 — a timing that some observers noted had the effect of shifting the news cycle away from the Papers at a critical moment. The rebrand was presented as a strategic business decision; it was also, functionally, a brand management intervention.


Congressional Testimony and Its Aftermath

Haugen's October 5 Senate testimony was an unusual event in the history of technology regulation. Senators from both parties asked substantive questions and expressed genuine outrage, a rarity in the polarized American legislative environment. Haugen's presentation was widely praised for its clarity: she was a technically credible witness who could engage with algorithmic specifics while translating them into terms accessible to a non-technical audience.

She testified: "Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they'll click on less ads, they'll make less money. Facebook has demonstrated they cannot act independently. Facebook, over and over again, has shown it chooses profit over safety."

The testimony did not produce major federal legislation in the United States. Bills were introduced, committees held hearings, and regulatory agencies announced enhanced scrutiny. The Federal Trade Commission and the Department of Justice had existing antitrust investigations. The SEC whistleblower program that had protected Haugen's disclosure was discussed as a model for future cases. But the specific harms documented in the Papers — the recommendation rabbit hole, the Instagram mental health research, the vaccine misinformation amplification — were not addressed by legislation.

European regulators moved more decisively. The Digital Services Act, which entered into force in late 2022 and began applying to very large online platforms in 2023, imposed significant new accountability requirements on those platforms, including transparency about algorithmic recommendation systems, mandatory risk assessments of systemic harms, and independent auditing provisions. Haugen testified before the European Parliament and was widely credited with accelerating the DSA's progress.


What the Case Reveals

The Haugen case is instructive not primarily as a story about one company's bad behavior. It is instructive as a case study in the structural gap between internal knowledge and organizational action.

Facebook had the research. The research was conducted by skilled people who understood what it showed. The research was circulated internally. In some cases, interventions were tested that would have reduced the documented harms. The interventions were not implemented. The non-implementation was not an accident or an oversight; it was, in many cases, a deliberate decision made with full awareness of the tradeoffs.

This pattern — internal knowledge without organizational action — is not unique to Facebook. It is what happens when an organization's incentive structures do not reward acting on knowledge of harm. The company's OKR (objectives and key results) system measured engagement. The advertising business model monetized engagement. The competitive position in the market depended on engagement. Acting on knowledge of harm, when that harm was produced by engagement-maximizing design, would have reduced engagement. The internal logic of the system pointed against action, and the system followed its internal logic.

What the Haugen case adds to this structural story is documentation. Because she preserved and disclosed the internal documents, the specific mechanisms and specific decisions are on the public record in an unusually detailed way. The company cannot claim it did not know. The question is whether "knowing and not acting" produces consequences — legal, regulatory, reputational — sufficient to change the incentive calculation that produced the non-action in the first place.

As of this writing, that question remains incompletely answered. The Facebook Papers produced significant public understanding, some regulatory movement in Europe, and a permanent change in how platform company internal culture is discussed. They did not produce a fundamental restructuring of the incentive systems that generated the decisions they documented.


Discussion Questions

  1. The chapter argues that the Facebook Papers revealed not just corporate wrongdoing but structural failure — a gap between knowledge and action produced by incentive systems, not individual malice. Do you find this structural account more or less convincing than an account that focuses on the decisions of specific executives? What are the implications of each account for how we should respond?

  2. Facebook's response to the Papers included pointing to the investments it had made in safety work. How should we evaluate this response in light of the specific documented cases (recommendation changes not implemented, internal research not published) described in the Papers?

  3. Haugen's disclosure produced more regulatory movement in Europe than in the United States. What factors might explain this difference? What does it suggest about the conditions under which whistleblower disclosures produce structural change?

  4. Consider the finding that Facebook allocated content moderation resources heavily to the United States and Western Europe, with minimal resources for the countries where violence facilitation was most severe. How does this reflect the business model described in the chapter? Is this a failure of ethics, a failure of resource allocation, or a direct expression of the platform's incentive structure?