Case Study 01: Facebook and the 2020 US Election — Preparations, What Happened, and Aftermath
Background
No event has done more to crystallize the debate about social media and political polarization than the 2020 US presidential election cycle and its aftermath. The election occurred in the context of a global pandemic that made voting logistics uniquely contentious, a deeply polarized electorate, a president who had signaled willingness to contest electoral results, and four years of documented disinformation campaigns on social media platforms. Facebook's specific role in this period—the preparations it made, the decisions it took during the election, the events of January 6, 2021, and the subsequent policy reversals—provides a detailed case study in how a major social media platform navigates the intersection of political content, engagement incentives, and democratic integrity.
The case is valuable not just as a historical account but as an illustration of core tensions in platform governance: the gap between a platform's self-interest and the public interest; the difficulty of making and sustaining safety-oriented decisions that reduce engagement; the role of internal research in identifying and failing to prevent harms; and the speed with which emergency safety measures were relaxed after the immediate crisis had passed.
Pre-Election Preparations
In the months before the November 2020 election, Facebook implemented a range of what it internally called "Break Glass" measures—a term from emergency planning referring to extreme, unusual interventions held in reserve for extraordinary circumstances.
Content distribution changes: Facebook substantially reduced the distribution of political content in News Feed through algorithm changes designed to lower the temperature of political discourse. News publishers saw significant drops in referral traffic from Facebook during this period. Content labeled as misinformation by third-party fact-checkers was given reduced distribution.
Voter information initiatives: Facebook added authoritative voter information labels to political content, created a dedicated Voting Information Center that directed users to official election information, and added labels to posts about the election that directed users to official results (rather than candidate claims) once vote counting began.
Election interference responses: Following the documented Russian interference operations in 2016 (which used Facebook extensively for divisive content and targeted advertising), Facebook implemented measures to detect and remove coordinated inauthentic behavior—networks of accounts acting together to spread political content while disguising their coordination.
Delayed news feed: Facebook considered but did not fully implement a "circuit breaker" that would have delayed the spread of viral content to allow more time for fact-checking. The company did implement more aggressive fact-check labeling and reduced viral distribution of posts sharing disputed information.
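The circuit-breaker idea the company weighed can be sketched in a few lines: pause further algorithmic amplification of content that is spreading unusually fast until a fact-check has had time to run. The thresholds and field names below are invented for illustration; Facebook never published how such a mechanism would have been tuned.

```python
from dataclasses import dataclass

# Illustrative thresholds -- not Facebook's actual values, which were never published.
SHARES_PER_HOUR_LIMIT = 1000   # velocity above which content counts as "going viral"
HOLD_HOURS = 4                 # how long amplification is paused pending fact-check

@dataclass
class Post:
    id: str
    shares_last_hour: int
    fact_checked: bool = False

def distribution_decision(post: Post) -> str:
    """Decide whether a post is distributed normally, held, or released.

    A circuit breaker pauses algorithmic amplification of fast-spreading
    content until a review (human or automated) has had time to complete.
    """
    if post.shares_last_hour < SHARES_PER_HOUR_LIMIT:
        return "distribute"          # normal traffic: no intervention
    if post.fact_checked:
        return "distribute"          # reviewed while held: release
    return f"hold_{HOLD_HOURS}h"     # viral and unreviewed: pause amplification
```

Note that a breaker of this kind does not judge content at all; it only buys time, which is why it trades engagement (delayed virality) for safety (fewer unreviewed viral posts).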
These preparations reflected years of learning from the 2016 election cycle and represented a genuine departure from Facebook's prior practice of treating political content like any other user expression. They also represented significant commercial costs: reducing the distribution of political content, which is among the highest-engagement content on the platform, directly reduced advertising revenue.
The Election Period and January 6
The November 2020 election proceeded without the anticipated crisis on election day itself. Vote counting extended over several days because of the volume of mail-in ballots, a process that generated substantial political content on social media, much of it pushing the false claim that the extended counting was itself evidence of fraud.
Facebook's election period measures held through November and into December. The platform applied labels to posts making unsubstantiated claims about election fraud, significantly reduced the distribution of news content that might inflame tensions, and temporarily shifted News Feed toward content from friends and family rather than public pages—a significant algorithmic change that reduced the reach of political publishers and partisan political pages.
On January 6, 2021, supporters of President Trump gathered at the Ellipse for a rally and subsequently marched to the US Capitol, where a portion of the crowd breached the building, disrupted the congressional certification of the Electoral College results, and occupied the Capitol for several hours. Five people died in connection with the events. The day had been organized substantially through social media platforms including Facebook, where the Stop the Steal movement had built a substantial following and where specific calls to gather in Washington on January 6 had spread widely.
Facebook's response after January 6 included suspending President Trump's account (indefinitely at first; after the company's Oversight Board ruled that an indefinite suspension was inappropriate, Facebook set a two-year suspension with a defined reinstatement review) and removing content that called for violence or claimed the election was stolen. The company also kept in place the reduced political content distribution that had been implemented before the election.
The Reversal: Removing the Break Glass Measures
The most controversial decision in Facebook's 2020 election management was not what it did before the election but what it did after. According to internal documents obtained by whistleblower Frances Haugen, Facebook systematically dismantled its election safety measures in the months following the election, at a pace that internal researchers considered dangerously fast.
The internal research team had recommended that many of the "Break Glass" measures be maintained longer or made permanent, on the grounds that they were reducing harmful content and that the political environment remained volatile. These recommendations were not acted upon. The company began restoring algorithmic amplification of political and news content, reduced the aggressiveness of misinformation labeling, and restored the News Feed configuration that had been in place before the election period.
The internal research also documented what the researchers called "MSI" (Meaningful Social Interactions) effects: the algorithm changes implemented in 2018 had substantially increased the prevalence of divisive political content in users' feeds, and researchers recommended changes to address this. These recommendations were also largely not implemented.
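To see why engagement-weighted ranking of this kind favors divisive material, consider a toy score in which active interactions (comments, reshares) are weighted far above passive views. The weights here are hypothetical; the real 2018 MSI weights were internal and changed over time. The point is structural: a post that provokes argument can outrank a post that far more people saw and quietly read.

```python
# Hypothetical MSI-style weights: active interactions count far more than passive
# views. Illustrative numbers only -- Facebook's actual weights were never published.
WEIGHTS = {"view": 0.0, "like": 1.0, "comment": 15.0, "reshare": 30.0}

def msi_score(interactions: dict) -> float:
    """Engagement-weighted ranking score for a post."""
    return sum(WEIGHTS[kind] * count for kind, count in interactions.items())

# A calm informational post: widely seen, lightly engaged.
calm = {"view": 10_000, "like": 300, "comment": 20, "reshare": 10}
# A divisive post: fewer viewers, but it provokes comments and angry reshares.
divisive = {"view": 4_000, "like": 150, "comment": 400, "reshare": 120}

# The divisive post wins the ranking despite reaching fewer people.
assert msi_score(divisive) > msi_score(calm)
```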
According to Haugen's account (and the internal documents she provided to Congress and journalists), the fundamental reason for the reversal of safety measures was commercial: the election safety measures reduced engagement, and engagement drove revenue. The decision to relax the measures was driven by product and business teams over the objections of safety and research teams.
Internal Research That Was Not Acted Upon
The Frances Haugen documents revealed a significant pattern: Facebook's internal research teams had identified specific product decisions and their harmful effects, recommended changes, and been overruled by product and business teams focused on engagement and revenue metrics.
Among the documented findings that were not acted upon:
MSI amplification of divisive content: Researchers documented that the 2018 "Meaningful Social Interactions" algorithm change had dramatically increased the amount of political and partisan content in News Feed, and that this content was disproportionately divisive, sensational, and frequently misleading. Recommendations to adjust the MSI ranking to reduce this content were not implemented.
Misinformation resharing: Research showed that a small number of "superspreader" accounts were responsible for a disproportionate share of misinformation distribution, and that targeted interventions (reduced distribution for known misinformation accounts) would substantially reduce total misinformation exposure with minimal impact on normal users. These interventions were implemented on a small scale but not broadly deployed.
Groups radicalization: Research on Facebook Groups documented that the group recommendation system steered users who had expressed interest in moderate political content toward increasingly extreme groups. The pattern was well documented and resembled the YouTube radicalization pathway that had received public attention. Recommendations to limit the recommendation system's drift toward increasingly extreme content were implemented only in attenuated form.
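The superspreader finding above rests on a simple concentration calculation: rank accounts by the misinformation exposure they generate, then measure what share of the total the top few account for. The numbers below are synthetic, chosen only to illustrate the heavy-tailed shape the researchers described; they are not Facebook's data.

```python
# Synthetic account-level misinformation-exposure counts (illustrative only).
# A heavy-tailed distribution: a handful of accounts drive most exposures.
exposures = [50_000, 30_000, 12_000, 5_000] + [100] * 500

def top_share(counts: list, k: int) -> float:
    """Fraction of total exposures attributable to the k highest-volume accounts."""
    ranked = sorted(counts, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

# In this toy distribution, 4 of 504 accounts (under 1%) produce about two thirds
# of all exposures, so demoting just those accounts removes most of the reach.
```

This is why targeted demotion of known superspreaders can cut total misinformation exposure sharply while leaving the vast majority of accounts untouched.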
In each case, the internal research documentation followed a similar pattern: researchers identifying a product decision that was causing harm, recommending changes, and receiving limited or no response from product teams that were focused on engagement metrics and business outcomes.
The Haugen Testimony and Its Aftermath
Frances Haugen's testimony before the Senate Commerce Committee's Subcommittee on Consumer Protection, Product Safety, and Data Security in October 2021 provided the most detailed public account of Facebook's internal knowledge of harms and its decision-making processes. The testimony was extensively prepared with documentary evidence and was broadly credible because it was supported by tens of thousands of internal documents that she had preserved and provided to regulators and journalists.
The testimony catalyzed legislative action. Multiple bills targeting social media platform accountability were introduced in the months following. The FTC began investigating Facebook's privacy practices with renewed intensity. International regulatory bodies—particularly in the European Union—used the Haugen documents as evidence in their own regulatory proceedings.
Facebook's public response followed a familiar corporate communications playbook: denying that the documents showed what critics claimed they showed, arguing that its own research demonstrated a commitment to safety rather than evidence of knowing harm, and announcing new safety initiatives timed to the news coverage. The company was disadvantaged in these responses because the internal documents were extensive, specific, and in many cases clearly characterized harm in the company's own language.
Mark Zuckerberg's October 2021 response, in which he said the reporting painted a false picture of the company and specifically challenged the claim that it put profits before safety, was widely criticized as inconsistent with the documentary evidence. The documents did not show that Facebook intended to cause harm; the pattern was more subtle and more structural: commercial incentives and engagement metrics systematically prevailing over safety team recommendations, without anyone explicitly deciding to cause harm.
What This Means for Users
Takeaway 1: Platform safety measures are not permanent by default. Facebook's temporary implementation and subsequent rollback of election safety measures illustrates that protective platform policies, when not mandated by regulation, are subject to continuous revision based on commercial considerations. Users and policymakers should not assume that safety measures announced in response to a crisis will be maintained.
Takeaway 2: Internal research identifying harm does not guarantee remediation. The pattern documented in Facebook's internal research—harm identified, recommendation made, recommendation not implemented—is not unique to Facebook. It reflects a structural dynamic in which commercial incentives (engagement, revenue) systematically outweigh safety considerations unless external pressure (regulatory, legal, reputational) makes safety failures more commercially costly than safety investments.
Takeaway 3: The "engagement vs. safety" trade-off is real. Facebook's election safety measures reduced engagement, and the decision to relax them was commercially motivated. This trade-off is genuine and structural, not simply a matter of corporate ethics. Any platform operating under the current incentive structure—where engagement drives advertising revenue, and advertising revenue drives business survival—faces pressure to prioritize engagement over safety when the two conflict.
Takeaway 4: Transparency is essential but insufficient. The Haugen documents provided unprecedented transparency into Facebook's internal decision-making and created significant public, regulatory, and legislative pressure. But the specific harms they documented—MSI amplification of divisive content, groups radicalization pathways—were not fully remediated even after the documents became public. Transparency creates accountability but does not automatically produce change.
Timeline Summary
2018: Facebook implements "Meaningful Social Interactions" algorithm change, boosting comments and reactions over passive viewing. Internal research identifies that this amplifies divisive political content.
2020 (Pre-election): Facebook implements "Break Glass" measures including reduced political content distribution, voter information labels, and increased misinformation labeling.
November 3, 2020: Election day. Facebook's measures remain in place through the counting period.
November-December 2020: Vote counting proceeds; Facebook maintains election safety measures and applies labels to posts claiming widespread fraud.
January 6, 2021: Capitol attack. Facebook suspends Trump's account.
January-February 2021: Facebook begins relaxing election safety measures over internal researchers' objections.
October 2021: Frances Haugen testifies before Congress, presenting internal documents. Facebook is renamed Meta.
2022: Meta implements reduced political content policy, claiming it reflects user preferences.
2023-2024: Congressional investigations continue; litigation by state attorneys general proceeds; Facebook's role in January 6 remains the subject of ongoing political and legal scrutiny.
Discussion Questions
1. Facebook's election safety measures were implemented temporarily and then relaxed. What would make these measures permanent? What combination of regulatory, market, or social pressures would be required to maintain safety measures when they conflict with engagement?
2. The internal research documenting harms at Facebook is extensive and specific. What does the gap between this research and the company's product decisions tell us about the organizational culture and decision-making structures of large technology companies? How might those cultures and structures be changed?
3. January 6 occurred after Facebook had restored its normal algorithmic operation following the election safety measures. Does this create any causal or moral link between Facebook's product decisions and the events of that day? What standard of evidence would be needed to establish such a link, and what would that mean for corporate accountability?
4. Mark Zuckerberg's response to the Haugen disclosures was widely criticized as inconsistent with the documentary evidence. What would a credible, honest corporate response to the internal documents have looked like? How would such a response have differed from what was actually said?
5. Facebook's election safety measures were commercially costly and were relaxed for commercial reasons. Design a regulatory approach that would align commercial incentives with safety, making it more commercially costly to relax safety measures than to maintain them. What would such regulation look like, and what objections would it face?