Case Study 01: The Facebook Internal Memo and the Outrage Algorithm
"Our Algorithms Exploit the Human Brain's Attraction to Divisiveness"
Background
In September 2021, the Wall Street Journal published the first installments of what would become known as the "Facebook Files" — a series of investigative reports drawing on thousands of pages of internal Facebook research documents shared by former employee Frances Haugen. Among the most explosive revelations was a 2018 internal memo from a Facebook researcher containing a line that became the defining encapsulation of the platform's outrage problem:
"Our algorithms exploit the human brain's attraction to divisiveness."
The memo, and the broader tranche of internal documents, provided the first sustained look at what Facebook's own researchers knew about the effects of the company's recommendation and ranking systems on political discourse, user well-being, and social cohesion — and, critically, what the company chose to do with that knowledge.
This case study examines the internal research that produced the memo, the organizational context in which it was written and received, the specific algorithm dynamics it described, and what the revelations mean for our understanding of how platforms manage the relationship between engagement optimization and social harm.
Timeline
2016-2017: The Foundation of Integrity Research
Following criticism that Russian disinformation operations had exploited Facebook's platform during the 2016 US election, Facebook significantly expands its integrity research team. Researchers are tasked with identifying how the platform's features and algorithms are being misused and what effects they are having on political discourse. This expansion funds the research that would eventually produce the leaked documents.
2017: Early Internal Warnings
Internal documents from 2017 show integrity researchers flagging concerns about the News Feed algorithm's tendency to amplify "sensational" and emotionally activating content. These warnings are shared within the integrity team and with some product leadership but do not produce significant algorithm changes.
2018: The Meaningful Social Interactions Pivot and Its Aftermath
Mark Zuckerberg publicly announces the "meaningful social interactions" (MSI) change to the News Feed algorithm in January 2018. Integrity researchers studying the MSI implementation document that the change amplifies rather than reduces divisive content. The internal memo containing the "exploit the human brain's attraction to divisiveness" line is written during this period.
2019: The Five Percent Proposal
Internal researchers propose capping political content so that political posts constitute no more than five percent of any user's News Feed. The proposal is reportedly debated internally and is ultimately not implemented at the scale researchers recommended, partly due to concerns about engagement metrics.
2020-2021: The Election and Its Aftermath
Facebook implements emergency "civic integrity" measures around the 2020 US election, including reduced amplification of political content and misinformation. After the election, documents show internal debate over whether to maintain these "break glass" measures or return to previous settings. Many are removed in January 2021, shortly before the January 6th Capitol attack.
September 2021: The Facebook Files
Frances Haugen shares documents with the Wall Street Journal, which publishes the Facebook Files series in September 2021. Haugen testifies before the US Senate in October 2021. The documents reveal the gap between the company's internal research knowledge and its external communications about platform harms.
The Algorithm Dynamics Documented
What the Researchers Found
The internal research documented in the Facebook Files described several specific algorithm dynamics related to outrage amplification:
Reaction emoji weighting: When Facebook introduced its expanded reaction emoji suite (including "angry" and "haha" in addition to "like") in 2016, internal testing found that content eliciting "angry" reactions was amplified because the algorithm counted angry reactions as strong engagement signals. A post that generated one hundred angry reactions ranked as highly as one that generated one hundred likes, despite the qualitative difference between the two forms of engagement. Internal researchers proposed downweighting angry reactions in the algorithm. This was eventually implemented, but subsequent research suggested the weighting change may have been insufficient.
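The weighting dynamic can be sketched with a toy scoring function. Everything below is a hypothetical illustration: the weights, post data, and function names are invented for this sketch and are not Facebook's actual ranking parameters.

```python
# Toy sketch of engagement-weighted ranking. All weights and posts
# are hypothetical illustrations, not Facebook's actual parameters.

def engagement_score(post, angry_weight=1.0):
    """Sum weighted reaction counts into a single ranking signal."""
    return (post["likes"] * 1.0
            + post["angry"] * angry_weight
            + post["comments"] * 1.0)

posts = [
    {"name": "friendly", "likes": 100, "angry": 0,   "comments": 10},
    {"name": "outrage",  "likes": 10,  "angry": 100, "comments": 10},
]

# When an angry reaction counts as much as a like, the outrage post's
# total engagement exceeds the friendly post's, so it ranks first.
ranked_equal = sorted(posts, key=engagement_score, reverse=True)

# Downweighting angry reactions (as researchers proposed) flips the order.
ranked_down = sorted(posts,
                     key=lambda p: engagement_score(p, angry_weight=0.2),
                     reverse=True)

print(ranked_equal[0]["name"])  # outrage ranks first under equal weighting
print(ranked_down[0]["name"])   # friendly ranks first after downweighting
```

The point of the sketch is that nothing in the scoring function "knows" why people reacted; anger and approval are interchangeable inputs unless the weights distinguish them.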
Resharing amplification: The Facebook Files documents included research showing that content passed through multiple reshares (a post shared by User A, then User B shares A's share, then User C shares B's share, etc.) became progressively more emotionally extreme and less accurate with each successive share. Despite this finding, the algorithm continued to significantly amplify widely reshared content.
"Borderline" content amplification: The most significant finding for the outrage thesis was research documenting that content that approached but did not clearly violate Facebook's community standards — content that was inflammatory, divisive, and outrage-inducing but technically compliant — was being amplified more than compliant content. The algorithm could not distinguish between content that made people engaged-and-happy and content that made people engaged-and-angry; it simply saw engagement and amplified accordingly.
The meaningful interactions backfire: As noted in the chapter, the MSI algorithm change — intended to prioritize genuine social connection over passive content consumption — increased the weight given to comments and shares relative to passive likes. Because outrage content generates disproportionately more comments and shares than other content types, the change amplified outrage content. Internal researchers documented this effect at the time, and it was later corroborated in external reporting and analysis.
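The backfire mechanism can be illustrated with another hypothetical sketch: under flat weighting a well-liked post outranks an outrage post, but once comments and shares are weighted more heavily (the MSI-style change), the outrage post wins. The weights and engagement counts below are invented for illustration, not Facebook's values.

```python
# Hypothetical illustration of an MSI-style reweighting.
# Weights and engagement counts are invented, not Facebook's values.

def score(post, w_like=1, w_comment=1, w_share=1):
    """Weighted sum of a post's engagement counts."""
    return (post["likes"] * w_like
            + post["comments"] * w_comment
            + post["shares"] * w_share)

calm    = {"likes": 200, "comments": 5,  "shares": 2}   # passively liked
outrage = {"likes": 50,  "comments": 60, "shares": 30}  # heavily commented and shared

# Flat weighting: the calm post wins on sheer like volume.
flat_calm, flat_outrage = score(calm), score(outrage)

# MSI-style weighting: comments and shares count far more than likes,
# so the comment-heavy outrage post now scores higher.
msi_calm = score(calm, w_like=1, w_comment=5, w_share=10)
msi_outrage = score(outrage, w_like=1, w_comment=5, w_share=10)

print(flat_calm > flat_outrage)  # True: calm post wins before the change
print(msi_calm < msi_outrage)    # True: outrage post wins after the change
```

The sketch shows how a change framed as favoring "meaningful interaction" can systematically favor whatever content type happens to generate the most comments and shares.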
What Was Known vs. What Was Done
Perhaps the most significant element of the Facebook Files revelations is not what the research found — the outrage amplification dynamics it documented were broadly consistent with what external researchers had been describing — but the documented gap between what was known internally and what was done.
Internal research recommended:
- Downweighting angry reactions more aggressively
- Reducing amplification of reshared content
- Implementing political content volume caps
- Maintaining election-era integrity measures long-term
- Making the algorithm more transparent to external researchers

What was implemented:
- Modest downweighting of angry reactions
- Incremental changes to resharing amplification
- No implementation of political content volume caps at proposed scale
- Removal of many election-era integrity measures after January 2021
- Continued resistance to external algorithmic research access
The documents show that the gap was not primarily a result of researchers being wrong or their recommendations being impractical. It was, according to reporting and Haugen's testimony, substantially attributable to the concern that reducing outrage amplification would reduce engagement metrics, and that engagement metrics were central to Facebook's advertising revenue and thus to the company's financial performance.
Organizational Analysis
The Facebook Files case illustrates several organizational dynamics relevant to understanding how large technology companies manage conflicts between business interests and social harm.
The two-team problem: Facebook's integrity researchers and its product and business teams were effectively operating on different objective functions. Integrity researchers were measuring harm; product teams were measuring engagement and revenue. When their findings conflicted — integrity research showing that high-engagement features were also high-harm — there was no neutral organizational mechanism for resolving the conflict. In most cases, the product and business logic prevailed.
The evidence-to-action gap: Academic and policy discussions often assume that better evidence about social media harms will lead to better platform behavior. The Facebook Files suggest otherwise: even with detailed, rigorous internal evidence about the harms of outrage amplification, the platform did not take the actions its own research recommended. The limiting factor was not information but incentives. This has significant implications for how policy should be designed: if disclosure obligations and research access do not change incentives, they may not change behavior.
Regulatory threat as a moderating force: Documents show that Facebook's willingness to implement some integrity measures (including the election-era "civic integrity" policies) was influenced by regulatory and reputational risk. When regulatory pressure was high, Facebook was more willing to reduce outrage amplification even at engagement cost. When regulatory attention shifted, restrictions were eased. This suggests that sustained regulatory pressure — not voluntary commitment — may be the most reliable lever for changing platform behavior.
Voices from the Field
"What the Facebook Files showed is not that Facebook was uniquely evil. It's that they were in a structural situation where every incentive pointed toward doing the wrong thing — or at least, not doing the right thing consistently. The researchers who raised these concerns were not ignored because they were wrong. They were insufficiently prioritized because fixing the problem would have cost money."
— Technology policy researcher, speaking in a 2022 academic context
"I saw that Facebook was aware of the harms it was causing, and was taking steps to fix some of them, but the steps were not going fast enough and they kept being reversed when they hit business metrics."
— Frances Haugen, Senate testimony, October 2021
Discussion Questions
- The memo line "Our algorithms exploit the human brain's attraction to divisiveness" was written by a Facebook researcher in an internal document, not a public communication. Does the fact that internal researchers were willing to write this frankly about their own platform's practices change how you evaluate Facebook's organizational responsibility? What does it suggest about the culture at the company?
- The case documents a consistent pattern in which integrity research recommendations were implemented only partially and sometimes reversed when they affected engagement metrics. Is this a failure of individual moral courage on the part of executives, a structural failure of organizational design, or an inevitable result of the business model? Which diagnosis matters most for developing effective policy responses?
- Frances Haugen shared internal documents with journalists and Congress, actions that violated Facebook's confidentiality requirements. Evaluate Haugen's decision using ethical frameworks covered elsewhere in this course: Was she a whistleblower whose actions were justified by the public interest, an employee who violated legitimate confidentiality obligations, or something more complex than either description allows?
- The case shows that Facebook implemented stronger integrity measures when regulatory threat was high (around the 2020 election) and eased them when regulatory attention decreased. What does this pattern imply for effective regulatory design? If regulatory threat is the primary driver of platform behavior improvement, what regulatory mechanisms would create sustained rather than episodic incentives for harm reduction?
- The Facebook Files coverage focused heavily on what Facebook knew and didn't do. But consider the research teams whose work was reported: they documented these harms, recommended changes, and had those recommendations partially ignored. What ethical responsibilities do researchers working inside technology companies have when their findings are not acted upon? What options do they have short of the dramatic whistleblowing step Haugen took?
What This Means for Users
The Facebook Files revelations are significant for users in several concrete ways.
First, they confirm that concerns about outrage amplification are not speculative or based only on external research — they are documented by the platform's own internal research teams, using the platform's own behavioral data. This gives significant weight to the claim that outrage amplification is a real and measurable phenomenon, not merely a popular narrative.
Second, the documents reveal that Facebook had the capacity to implement more aggressive integrity measures (as it demonstrated during the 2020 election period) but chose not to maintain them due to business concerns. This means the outrage amplification problem is not technically intractable — it is commercially constrained. That distinction matters for understanding what regulatory or market interventions could actually achieve.
Third, the case illustrates the limits of expecting platforms to self-regulate toward social benefit when self-regulation conflicts with financial incentives. Users who rely on platforms' public communications about their commitment to user safety and healthy discourse should understand that internal research sometimes presents a substantially different picture. Independent research access and regulatory oversight are not merely academic concerns — they are the mechanisms by which the gap between internal knowledge and external accountability can be closed.
Finally, the case is a reminder of the concrete value of whistleblowers and investigative journalism in surfacing information about platform practices that would not otherwise be publicly known. The outrage machine operates most powerfully in the dark — when its mechanisms are invisible to users, researchers, and regulators. Transparency, however uncomfortable for platforms, is the precondition for any meaningful response.