Case Study 8.1: Facebook's 2021 Internal Research Disclosures — The Frances Haugen Papers
What Facebook's Own Research Showed About Algorithmic Harm and the Ethical Implications
Overview
In October 2021, Frances Haugen — a former Facebook data scientist with product management responsibilities across civic misinformation, counter-espionage, and news integrity teams — made one of the most consequential whistleblower disclosures in the history of Silicon Valley. Before leaving the company, Haugen had gathered and preserved thousands of pages of internal Facebook research documents, communications, and analyses. These documents were provided to the Securities and Exchange Commission (SEC) and to a consortium of news organizations including The Wall Street Journal, which published a series called "The Facebook Files."
The documents revealed what had been widely suspected by outside researchers but never definitively confirmed: Facebook's own internal research showed that its platform caused documented harms across multiple dimensions — youth mental health, political polarization, the spread of misinformation — and that the company had, in many cases, been aware of these harms and had deprioritized fixing them when doing so conflicted with engagement metrics.
This case study examines the specific disclosures, their ethical implications, and the responses they generated from policymakers, the public, and Facebook itself.
Background: Frances Haugen
Frances Haugen joined Facebook in 2019 with a specialization in "algorithmic product management" — the specific discipline of managing products that use machine learning to make decisions at scale. Her prior experience included time at Google, Pinterest, and Yelp. She was specifically recruited to Facebook's civic integrity team, which worked on problems of political misinformation, election integrity, and the platform's effects on democratic processes.
Haugen's position gave her access to internal research repositories, communications, and analyses that are not accessible to outside researchers. Unlike external academic researchers, who must work with data Facebook chooses to share or with data scraped from public feeds, Haugen had direct access to the internal research that informed Facebook's product decisions.
Her stated motivation for disclosing the documents was that she believed Facebook was "choosing profit over safety" in ways that damaged democratic institutions and public health. "I saw a bunch of social networks optimized for engagement and profit at the expense of safety and humanity," she told the Senate Commerce Committee in October 2021.
Section 1: Algorithmic Amplification of Harmful Content
The Angry Emoji Problem
One of the most significant disclosures concerned the "angry" reaction emoji. Facebook added reaction options (Like, Love, Haha, Wow, Sad, Angry) in 2016, allowing users to express a range of emotions in response to posts. Internally, these reactions were weighted differently in the News Feed algorithm: content that generated Angry reactions was given more algorithmic weight than content that generated simple Likes.
The logic was straightforward from an engagement perspective: an Angry reaction indicates a strong emotional response, which is a form of high engagement. Content that makes people angry is content they pay attention to, comment on, and share. From the platform's perspective, this is valuable signal.
Facebook's internal researchers found that this weighting had a specific and problematic consequence: it systematically promoted posts that generated outrage. One internal document reviewed by The Wall Street Journal found that Facebook's own data showed that posts generating Angry emoji reactions were disproportionately misinformation and divisive political content.
Internal communications described this as a known problem as early as 2017. Proposed solutions — including removing Angry reactions from algorithmic weighting or down-weighting them — were discussed internally. They were not implemented at meaningful scale, in part because any change that reduced the algorithmic weight of high-engagement content risked reducing overall engagement metrics.
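The mechanics described above can be sketched in a few lines of code. Everything here is an illustrative assumption, not Facebook's actual ranking system: the weights are hypothetical values loosely inspired by reporting that reaction emoji initially counted several times more than a plain Like, and the function names are invented for this example.

```python
# Hypothetical sketch of engagement-weighted feed ranking.
# All weights are illustrative assumptions, NOT Facebook's actual values.

REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 5.0,
    "haha": 5.0,
    "wow": 5.0,
    "sad": 5.0,
    "angry": 5.0,  # outrage counts as much as any other strong reaction
}

def engagement_score(reactions: dict[str, int], comments: int, shares: int) -> float:
    """Score a post for ranking; higher-scoring posts surface higher in the feed."""
    reaction_points = sum(REACTION_WEIGHTS.get(kind, 0.0) * count
                          for kind, count in reactions.items())
    # Comments and shares are typically weighted even more heavily than
    # reactions (again, hypothetical multipliers).
    return reaction_points + 15.0 * comments + 30.0 * shares

# Under this scheme, a post that provokes outrage outranks one that
# earns far more quiet approval:
outrage_post = engagement_score({"angry": 200, "like": 50}, comments=80, shares=40)
pleasant_post = engagement_score({"like": 400, "love": 30}, comments=10, shares=5)
assert outrage_post > pleasant_post
```

The sketch also shows why the proposed remediations were simple in principle: setting `REACTION_WEIGHTS["angry"]` to a lower value (or zero) would immediately reduce the ranking advantage of outrage-generating posts — and, by the same arithmetic, reduce measured engagement.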
The Divisiveness-Engagement Correlation
Related research revealed that Facebook's algorithm was promoting content that it internally classified as "integrity violating" — misinformation, hate speech, and content that generated toxic engagement — because such content generated high engagement metrics. The algorithm did not distinguish between engagement from positive interactions and engagement from outrage-driven interactions.
One particularly striking internal document described a finding that researchers called the "engagement-integrity tradeoff": Facebook's integrity team had identified categories of changes to the algorithm that would reduce the spread of divisive and integrity-violating content. These changes were estimated to improve user wellbeing and reduce political polarization. They were also estimated to reduce overall engagement — and they were not implemented.
A 2019 internal slide deck reviewed by The Wall Street Journal showed that Facebook's data scientists had documented that content generating "outrage" and "anger" was being promoted by the algorithm specifically because it drove high engagement. The document noted: "Our algorithms exploit the human brain's attraction to divisiveness." Whether verbatim or a paraphrase of the researchers' findings, the statement is consistent with the broader pattern of disclosures.
Section 2: Instagram and Youth Mental Health
The Teenage Body Image Research
Perhaps the most publicly impactful of Haugen's disclosures concerned Instagram's effects on teenage users, particularly girls. Facebook had conducted internal research on this question beginning around 2019, and the findings were striking.
The internal research found that:
- 32% of teenage girls who felt bad about their bodies said that Instagram made them feel worse.
- Instagram's effects on mental health were worse than those of other social media platforms for certain groups, particularly girls.
- The platform created a "social comparison" dynamic in which users, especially young users, compared their real lives to the curated images of others and found their real lives lacking.
- These effects were documented in Facebook's own internal research, not merely inferred from external studies.
One internal presentation slide, reported by The Wall Street Journal, stated: "We make body image issues worse for one in three teen girls."
This research had been conducted internally and was not disclosed to users, parents, policymakers, or the research community. Earlier in 2021, Facebook had announced a plan to develop an "Instagram Kids" product — a version of the platform targeted at users under 13. In the week before Haugen's testimony, following the disclosures, Instagram paused this plan.
Facebook's Public Position vs. Internal Research
The gap between Facebook's public statements and its internal research was a central element of Haugen's testimony. In March 2021, Facebook CEO Mark Zuckerberg had testified before Congress that the research on social media's effects on young people's mental health was not conclusive. Haugen's disclosures showed that Facebook's own internal research was not inconclusive — it documented specific, measurable harms.
This discrepancy is particularly significant from a regulatory and ethical perspective. A corporation that publicly represents to legislators and the public that research is inconclusive, while internally possessing research documenting harm, is in a significantly different ethical and potentially legal position than a company that genuinely has inconclusive evidence.
Section 3: The Civic Integrity Team and Political Misinformation
Post-Election Disbanding
Facebook's "Civic Integrity" team — the team Haugen had joined in 2019 — was expanded ahead of the 2020 US presidential election to address election integrity concerns. The team's mandate included identifying and reducing the spread of political misinformation, election interference content, and inflammatory false claims about voting.
In the period around the election, Facebook implemented significant "emergency measures" — changes to its algorithm and content moderation policies designed to reduce the spread of viral misinformation. These measures were effective: internal data showed a reduction in the spread of political misinformation during the period they were in effect.
Following the election, Facebook significantly reduced these measures and disbanded substantial portions of the Civic Integrity team. Haugen testified that this decision was made before January 6, 2021, when the US Capitol was attacked by a mob that had organized significantly on Facebook and other platforms.
Internal communications reviewed by The Wall Street Journal suggested that some Facebook employees believed the reduction of election-period protections contributed to conditions that enabled the January 6 attack, although this causal claim is contested.
The Profit-Safety Tradeoff in Political Content
Facebook's internal research documented that political content — particularly divisive, tribal, and outrage-generating political content — was among the most effective drivers of engagement on the platform. Political content generates comments, shares, and heated debate in ways that many other content categories do not.
This created a specific tension: political misinformation and divisive political content were both harmful (by internal research metrics) and commercially valuable (by engagement metrics). The documentation shows that when this tradeoff was explicit, engagement considerations consistently prevailed in product decisions.
Haugen summarized this dynamic in her Senate testimony: "Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they'll click on less ads, they'll make less money."
Section 4: Ethical Analysis
Dimensions of the Ethical Problem
The Facebook Papers raise ethical questions at multiple levels:
Platform-level ethics: Facebook faced a documented tradeoff between commercial interests (engagement → advertising revenue) and user welfare. The ethical question is whether a corporation has obligations to users beyond what is legally required, especially when those users include minors, when the product is designed to be habit-forming, and when internal research documents specific harms.
Disclosure ethics: Facebook possessed internal research documenting specific, measurable harms to users, including teenage girls. The company did not disclose this research to the public, to regulators, or to researchers. The ethical question is whether corporations with such evidence have a disclosure obligation — and if not legally required to disclose, whether they have a moral obligation to do so.
Research ethics: Facebook's internal researchers conducted studies on user well-being that produced findings the company considered sensitive. The ethical question is whether researchers who produce such findings, and who are then unable to publish or disclose them through their employment, have obligations to the research community and the public that might justify whistleblowing.
Engineering ethics: Facebook's product engineers designed and deployed the systems (Angry emoji weighting, engagement optimization, teen-targeted features) whose harms were subsequently documented. The ethical question is whether engineers have obligations to consider and respond to the foreseeable social consequences of their designs, and what institutional mechanisms might support such consideration.
The Fiduciary Argument
Legal scholars Jack Balkin and Jonathan Zittrain, among others, have argued that social media platforms should be treated as "information fiduciaries" owing a duty of care to their users — a legal obligation, analogous to the duties of doctors and lawyers, to prioritize user welfare over commercial interests. The Facebook Papers provide strong empirical support for this argument: they demonstrate that without such a duty, the commercial incentive to prioritize engagement consistently prevailed over user welfare.
The fiduciary argument also addresses the disclosure question: fiduciaries are generally required to disclose material information that affects the interests of those they serve. Under a fiduciary duty framework, Facebook's suppression of internal mental health research might constitute a violation of disclosure obligations.
The Corporate Governance Argument
The Facebook Papers also raise questions about corporate governance. Mark Zuckerberg controls Facebook's parent company Meta through a dual-class share structure that gives him approximately 57% of voting power despite holding a much smaller percentage of equity. This structure, common among tech founders, insulates executives from shareholder pressure that might otherwise force disclosure and policy changes.
Several governance scholars argued after the Haugen disclosures that the governance structure of major tech platforms — which concentrates effective decision-making authority in founders who are insulated from outside accountability — is a structural contributor to the pattern the Papers revealed. Corporations with more conventional governance structures, where shareholders can exert meaningful pressure, might have made different tradeoffs.
Section 5: Responses and Consequences
Congressional Response
Haugen's Senate testimony led to renewed Congressional interest in social media regulation. Several legislative proposals were advanced, including:
- Kids Online Safety Act (KOSA): Would require platforms to provide certain safety tools and defaults for minors.
- Algorithmic Accountability Act: Would require large platforms to conduct and publish algorithmic impact assessments.
- Platform Accountability and Consumer Transparency (PACT) Act: Would impose transparency requirements and conditional liability for platform algorithmic amplification.
As of this writing, comprehensive federal social media legislation has not passed in the United States, though state-level measures (particularly in California) have advanced.
Regulatory Response
The Federal Trade Commission (FTC), Securities and Exchange Commission (SEC), and state attorneys general opened or expanded investigations following the disclosures. The SEC investigation focused specifically on whether Facebook's public statements about internal research constituted securities fraud — whether investors had been materially misled about the company's knowledge of its products' harms.
In Europe, the UK Information Commissioner's Office and the Irish Data Protection Commission (Facebook's European data regulator) also expanded scrutiny of Facebook's practices following the disclosures.
Facebook's Response
Meta (the name Facebook adopted in October 2021, weeks after Haugen's testimony) issued extensive responses to the disclosures, generally arguing that the documents had been "mischaracterized" and that the company actively invested in safety research and interventions. The company commissioned independent assessments and published more detailed documentation of its safety research programs.
Haugen and her legal team argued that Facebook's response confirmed rather than refuted their account: the company did not seriously dispute the research findings but argued that it had made good-faith efforts to address them.
Lessons for the Study of Platform Ethics
Lesson 1: Internal Research and Public Accountability
Corporations that conduct internal research on their products' social effects but do not disclose that research to the public create accountability gaps that are difficult to close through external mechanisms. External academic research has limited access to platform data; regulators have limited technical capacity; journalists can investigate but cannot access internal documents without leaks.
The Haugen case demonstrates that one of the most effective accountability mechanisms is internal whistleblowing — but this depends on individuals willing to bear professional and personal costs. Systemic accountability requires structural mechanisms: mandatory disclosure requirements, independent algorithmic auditing, or regulatory data access rights.
Lesson 2: The Engagement-Safety Tradeoff Is Real and Documented
Before the Haugen disclosures, the argument that engagement optimization creates safety tradeoffs rested primarily on theoretical reasoning and external academic evidence. The disclosures provided internal documentation of this tradeoff: Facebook's own data scientists identified specific changes that would improve safety, estimated their engagement costs, and documented that those costs weighed against safety in product decisions.
This documentation changes the nature of the policy argument. The question is no longer whether the tradeoff exists — Facebook's own research confirms it — but who should have authority to make the tradeoff and what weight safety should receive.
Lesson 3: Age-Specific Harms Require Age-Specific Policy
The documentation of Instagram's specific, measurable harms to teenage girls provides a strong empirical basis for age-specific platform regulation. Several policy responses — age verification requirements, specific default settings for minors, restrictions on algorithmic personalization for users under 18 — address the particular vulnerability of younger users without restricting adult access.
Discussion Questions
- Frances Haugen is widely described as a "whistleblower," but she did not reveal illegal activity — she revealed internal research that documented harms that were not, in themselves, illegal under existing law. How should we evaluate the ethics of her disclosure? Does the absence of illegal activity change the moral calculus?
- Facebook's internal researchers documented harms and recommended remediation. Many of their recommendations were not implemented. What obligations did these researchers have? Could they have acted differently? What institutional structures would better support researchers who find evidence of harm within corporations?
- The gap between Facebook's public statements (that teenage mental health research was not conclusive) and its internal research (which was not inconclusive) is ethically striking. Is this misrepresentation, selective disclosure, or spin? Does the distinction matter legally? Does it matter ethically?
- Evaluate the fiduciary duty argument: should social media platforms that are designed to be habit-forming, serve minors, and conduct research on their users' welfare be subject to fiduciary obligations? What would this require in practice?
- If you were a product engineer at Facebook who had access to the internal research Haugen ultimately disclosed, what would your ethical obligations have been? Would you have the same obligations as Haugen, a product manager with broader visibility into the research? Does hierarchy affect ethical responsibility?
This case study is prepared for educational use as part of "Misinformation, Media Literacy, and Critical Thinking in the Digital Age." All facts are drawn from documented public reporting.