Case Study 1: The Facebook Papers — What Internal Documents Revealed About Algorithmic Harm


Overview

On October 3, 2021, Frances Haugen appeared on CBS News's 60 Minutes and identified herself as the source who had been providing tens of thousands of internal Facebook documents to the Wall Street Journal, which had been publishing a multi-part investigative series called "The Facebook Files" throughout September of that year.

What followed was one of the most significant corporate accountability moments in the history of the technology industry. Within weeks, the documents — now circulated among a consortium of news organizations under the collective title "The Facebook Papers" — had prompted congressional hearings, renewed calls for federal platform legislation in the United States, accelerated investigations in the European Union, and generated hundreds of news stories about the gap between what Facebook's public communications claimed and what its internal research actually showed.

This case study examines what the Facebook Papers revealed, how Meta responded, and what the episode means for the broader question of platform accountability. It draws on documented reporting and public testimony rather than on any documents not in the public record.


Background: The Meaningful Interactions Pivot

To understand the significance of what the Facebook Papers revealed, it is necessary to understand the context they addressed.

In January 2018, Facebook CEO Mark Zuckerberg announced a major change to the News Feed algorithm. The change, framed publicly as an effort to prioritize "meaningful social interactions," was described by Zuckerberg in a public post as an effort to make time on Facebook "time well spent": to surface content that would lead to genuine social connection rather than passive content consumption.

This framing was broadly well-received by researchers and journalists who had been concerned about the platform's role in amplifying low-quality news and divisive content. The pivot appeared to represent an acknowledgment of the problems that had been raised since the 2016 US presidential election, during which Facebook's algorithm had come under scrutiny for amplifying misinformation.

What the Facebook Papers would later reveal was that the engineers implementing the 2018 changes had, within months, identified a significant problem: the algorithm change was increasing emotional engagement across the board, including engagement with angry, divisive, and outrage-inducing content. One internal study cited in the Journal's reporting found that the algorithm was rewarding content that provoked "outrage" reactions — the angry face emoji response — at a rate that engineers internally flagged as disproportionate.

Internal proposals to reduce the weight given to "outrage reactions" in the algorithm's training signal were developed, tested, and — according to reporting based on internal documents — ultimately not implemented, partly because they would have reduced overall engagement metrics. The engineers who raised the concern documented the outcome and moved on.

This sequence — problem identified internally, fix proposed, fix shelved because of engagement impact, problem continues — became one of the central patterns that the Facebook Papers would establish across multiple domains.


The Teen Body Image Research

Among the most consequential revelations in the Facebook Papers was a set of internal research documents on Instagram's effects on teenage girls.

In 2019, Facebook's internal research team conducted a study examining how Instagram affected the mental health and body image of teenage users, particularly girls. The study, documented in an internal slide deck on teen girls and body image, found that approximately one in three teenage girls who reported feeling bad about their bodies said that Instagram made those feelings worse.

The research also found that Instagram was associated with increases in rates of anxiety, depression, and body image dissatisfaction among teenage girl users who engaged heavily with the platform. Some of the most striking findings were about the self-reinforcing nature of the problem: the research suggested that once the algorithm identified a user who was engaging with content related to body image or diet, it would surface more such content, creating a feedback loop that exacerbated the problem.

This research was not disclosed publicly. It was conducted, completed, presented internally, and — when it was not acted on in proportion to its findings — filed away. When Frances Haugen provided the documents to the Wall Street Journal, the resulting story, published in September 2021, was headlined "Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show."

Meta's initial response was to dispute the characterization of the findings. Company spokespeople argued that the headline was misleading, that the research was more nuanced than the coverage suggested, and that the company had, in fact, been working on features designed to address teen well-being. A subsequent independent review of the full documents confirmed the reporting's essential characterization.

The teen body image research is significant not merely for what it showed about Instagram's effects, but for what it showed about the institutional logic of a large platform. The research was conducted with apparent seriousness. The researchers who produced it were doing their jobs. The findings were real. And the gap between the findings and the action taken was the product of an institution in which user well-being was a consideration, but engagement was the central metric against which all considerations were weighed.


The Political Content Problem

The Facebook Papers also included extensive documentation of internal debate about the platform's role in amplifying political misinformation and divisive political content.

Internal studies cited in reporting on the papers found that the algorithm's optimization for high-engagement content disproportionately amplified content associated with political outrage, conspiracy theories, and misinformation. One internal document, cited in reporting by multiple outlets, described the problem in terms of the algorithm's tendency to reward "sensational, divisive, and low-quality" content because such content reliably generated high engagement metrics.

Internal proposals to address the problem took various forms. Some engineers and researchers proposed reweighting the algorithm to penalize content that, while high-engagement, was identified as low-quality by the company's own content reviewers. Some proposed expanding fact-checking partnerships. Some proposed changes to the reshare mechanics that had allowed misinformation to spread rapidly during the 2016 and 2020 election cycles.

The internal documents showed ongoing tension between the teams that wanted to make these changes and the teams responsible for engagement metrics and business performance. The documents showed, in some cases, that proposed interventions were tested, found to reduce engagement metrics, and shelved. In other cases, interventions were implemented in limited forms or only temporarily.

The pattern was consistent with what the 2018 meaningful interactions story had shown: the company had the research, had internal debates about what to do with it, and made decisions in a context where engagement was the primary optimization target.


Meta's Response: The Institutional Playbook

Meta's response to the Facebook Papers followed a recognizable pattern that historians of corporate accountability document across industries.

Disputing the framing. Meta argued persistently that individual documents, taken out of context, gave a misleading picture of the company's culture and decision-making. This argument is not entirely wrong — internal documents, like any evidence, require context and interpretation. But the argument was deployed selectively to dismiss the significance of the pattern rather than to provide the fuller context that might have changed the interpretation.

Emphasizing internal debate as evidence of healthy culture. Meta argued that the existence of internal debate about harmful features was evidence of a company that took these concerns seriously — that the papers showed engineers and researchers raising concerns, not a company that was ignorant of or indifferent to them. This argument, too, has partial validity. Companies that don't research their impacts are worse than companies that do. But the argument was used to deflect attention from the more specific question: when internal research identified harm, what happened next?

Pointing to changes made. Meta repeatedly noted changes it had made to its platform in response to identified problems — fact-checking partnerships, content moderation investments, teen well-being features, transparency reports. The changes were real and genuinely represented improvements over the counterfactual in which no changes were made. They were also, according to the internal documents themselves, significantly smaller than what the research suggested was warranted.

Attacking the whistleblower. Haugen faced significant personal and professional scrutiny after her disclosure. Questions were raised about her technical qualifications, her motivations, and whether she had fully understood the documents she was disclosing. Some of these questions were legitimate; any whistleblower's testimony requires scrutiny. Others were ad hominem in character and functioned to redirect attention from the substance of the documents to the credibility of the person who disclosed them.


The 2020 Advertiser Boycott

In June 2020, a coalition of civil rights organizations including the Anti-Defamation League, the NAACP, and Color of Change launched the Stop Hate for Profit campaign, calling on advertisers to pause advertising on Facebook for the month of July. More than 1,000 advertisers — including Unilever, Verizon, Coca-Cola, and Ford — participated.

The boycott did not produce the sweeping platform reform that advocates hoped for. Facebook's revenues were minimally affected, in part because large advertisers represent a relatively small fraction of Facebook's revenue base compared to the aggregate of small and medium-sized businesses. Facebook took some meetings with civil rights organizations, announced some policy changes, and the boycott wound down.

What the boycott demonstrated was that advertiser concern about platform content was real and could be mobilized — and that its effect on the company's behavior was limited when the financial impact was limited. It also demonstrated the structural challenge of using consumer and advertiser pressure as a reform mechanism on a platform with near-monopoly market power: the alternative for most advertisers, at the scale they needed, did not exist.


The Ongoing Implications

The Facebook Papers did not produce the legislative response that their revelations seemed to warrant. Hearings were held. Haugen testified before a Senate subcommittee. Draft legislation circulated. As of the publication of this book, the United States has not passed comprehensive federal platform accountability legislation, though state-level laws have proliferated and some have survived initial legal challenges.

In the European Union, the Digital Services Act — which entered into force in late 2022 and began applying to the largest platforms in 2023 — established new obligations for large platforms, including transparency requirements, algorithmic auditing provisions, and new accountability mechanisms for systemic risk assessment. The DSA represents the most significant new platform regulation in any major jurisdiction, and its effects on platform behavior are becoming visible, though its full impact will take years to assess.

In the meantime, Meta has continued to evolve its approach in ways that are consistent with the patterns the Facebook Papers documented. In January 2025, Meta announced the end of its third-party fact-checking program in the United States, framing the decision as a response to concerns about bias and free speech. The company has leaned into AI-generated content at scale, raising new questions about how recommendation algorithms interact with content that was never created by a human at all.

The story the Facebook Papers told — of a company that knew, debated, and did not act proportionately — is not a story that ended with Frances Haugen's testimony. It is a story that continues to develop, with new chapters, new protagonists, and the same underlying structural logic.


What This Case Study Demonstrates

The Facebook Papers case study is valuable not because it tells us something unique about Facebook, but because it tells us something general about the institutional logic of engagement-optimized platforms at scale.

Knowledge is not action. A company can have research demonstrating harm and not act on it proportionately. The decision about whether to act is made in a context shaped by metrics, incentives, competitive pressure, and the organizational power of teams responsible for growth versus teams responsible for safety.

Internal debate is not a proxy for user protection. The existence of internal advocates for better design — the engineers who flagged outrage amplification, the researchers who documented teen body image harm — is important and real. It is also not the same as the structural accountability that comes from external oversight. The fate of those internal advocates' recommendations depended on the same organizational logic that produced the harm.

The public record is a foundation for accountability. The Facebook Papers exist in the public record because Frances Haugen chose to disclose them, because journalists chose to report on them, and because congressional committees chose to hold hearings. The accountability mechanisms those choices produced are imperfect and incomplete. They are also the primary means by which the public has access to information about what these institutions know and do.

Whistleblowers are a structural feature, not a coincidence. The fact that major platform accountability moments have been driven by whistleblowers — not by voluntary disclosure, not by regulatory audit, not by platform initiative — tells us something about the adequacy of current accountability structures. A regulatory environment in which the public's primary window into platform behavior is individual employees willing to risk their careers is a regulatory environment that is relying on personal courage where it should have institutional mechanisms.

The Facebook Papers are not the last chapter of this story. But they are, as of this writing, the most complete public documentation of the gap between what engagement-optimization platforms know and what they disclose — and they belong in the permanent record of what this era of digital technology actually was.


Discussion Questions

  1. Meta argued that the internal debate documented in the Facebook Papers showed a healthy culture of self-scrutiny. Is this argument valid? Under what conditions would internal debate be sufficient to address platform harm, and under what conditions is it not?

  2. The 2020 advertiser boycott produced limited changes. What does this suggest about the effectiveness of market pressure as a reform mechanism for platforms with near-monopoly market power?

  3. Frances Haugen chose to disclose the documents she had access to, at significant personal and professional cost. What obligations, if any, do people who become aware of institutional harm have to disclose it? What protections should exist to enable such disclosure?

  4. The chapter notes that the United States has not passed comprehensive platform accountability legislation in the years since the Facebook Papers. What structural and political factors explain this? What would need to change for such legislation to become possible?

  5. Meta's response to the Facebook Papers followed a playbook described as similar to responses by other industries (tobacco, lead paint, pharmaceuticals) to evidence of product harm. What does this pattern across industries suggest about the general relationship between corporate institutions and evidence of harm? What structural changes most reliably break this pattern?


End of Case Study 1