Case Study 01: The 2016 US Election and Facebook's Algorithm
Facebook's News Feed: A Decade of Optimization Against Users
Background
When historians look back at the 2016 US presidential election, they will examine many contributing factors: candidate characteristics, economic anxieties, demographic shifts, media coverage patterns, and the role of foreign interference. Among these factors, the role of Facebook's News Feed algorithm has attracted particular scholarly and journalistic attention — and has generated considerable controversy about what research does and does not demonstrate.
This case study examines the specific role of Facebook's algorithmic content ranking in the 2016 election information environment: what research shows, what remains contested, what Facebook's internal documents revealed about the company's own understanding of its role, and what decisions Facebook made in response to the evidence it had. It is not a comprehensive account of the election or of social media's political effects. It is a focused examination of the relationship between algorithmic design and the quality of democratic information.
The case study is organized around a central question: did Facebook know that its News Feed algorithm was amplifying content harmful to democratic information quality, and if so, what did the company do about it?
Timeline
2015: The Bakshy Study and the Filter Bubble Debate
In May 2015, Facebook published a study in the journal Science, authored by researchers including Eytan Bakshy, that examined whether Facebook's algorithm created ideological "filter bubbles" — information environments in which users saw only content confirming their existing political views, insulated from exposure to cross-cutting perspectives.
The study's headline finding was reassuring: Facebook's algorithm exposed users to slightly more cross-cutting ideological content than they would have encountered in a purely friend-filtered feed. The algorithm, the study concluded, was not the primary driver of filter bubbles; individual user choice (which links users clicked) was a larger factor.
The study was immediately seized on by Facebook in public discussions of its algorithm's political effects. The company cited it repeatedly in press materials and in meetings with policymakers and journalists concerned about the platform's political influence.
What Facebook did not foreground were the study's important limitations. The study examined a narrow time window, used a particular definition of "cross-cutting" content that was methodologically contested, and looked at exposure rather than influence. More importantly, the study examined whether users saw cross-cutting content — not whether they engaged with it, whether it changed their views, or whether it was algorithmically amplified relative to content that confirmed their priors.
Subsequent research found that the relationship between algorithmic exposure and political belief formation was considerably more complex than the Bakshy study suggested. And internal Facebook research, not publicly disclosed in 2015, was telling a different and more troubling story.
2016: The Election Year
The 2016 election year brought to public attention what researchers had been documenting for some time: the viral spread of political misinformation on Facebook at extraordinary scale.
The BuzzFeed News analysis by Craig Silverman, published in November 2016, provided the most widely cited evidence. Silverman's team analyzed the 20 top-performing fake news stories about the election in the final three months of the campaign and compared their Facebook engagement (shares, reactions, comments) to the 20 top-performing stories from major news organizations including the New York Times, Washington Post, NBC News, and others.
The results were striking: the 20 most-viral fake news stories generated approximately 8.7 million shares, reactions, and comments on Facebook in the final three months of the campaign. The 20 top stories from 19 major news outlets generated approximately 7.4 million. The fake news had outperformed the real news on Facebook's platform by every engagement metric available.
The most-shared fake news stories included fabrications that served clear partisan purposes: a story claiming that Pope Francis had endorsed Donald Trump (which he had not); a story claiming that Hillary Clinton had sold weapons to ISIS (she had not); a story claiming that an FBI agent involved in a Clinton email investigation had been found dead in an apparent murder-suicide (false). Each story was structured to be emotionally compelling, to confirm existing partisan suspicions, and to generate the specific kind of outraged sharing that Facebook's algorithm identified as high-quality engagement.
Late 2016: Facebook's Internal Response
Internal documents disclosed through the Frances Haugen whistleblower release in 2021 revealed that Facebook's own researchers had been studying the political misinformation problem extensively — and had reached conclusions significantly more alarming than the Bakshy study's public-facing findings suggested.
Internal research found that the algorithmic amplification of content from Pages (organizational accounts rather than personal accounts) was a significant driver of political polarization. Pages optimized for engagement — many of them partisan organizations, foreign interference actors, or monetized clickbait operations — were consistently outcompeting authentic individual expression for feed real estate, because their content was engineered to generate the signals (particularly high comment counts and shares) that the algorithm rewarded.
An internal analysis found that 64 percent of all extremist group joins on Facebook came from the platform's own algorithmic recommendations — specifically from the "Groups You Should Join" and "Discover" features. Users joined extremist groups not primarily through search or friend referral but because Facebook's algorithm had recommended the groups based on the user's engagement history.
Internal researchers proposed interventions. The disclosed documents show that several specific changes were modeled: reducing the weight given to reshared content in the ranking algorithm (which would have reduced the spread of viral misinformation), adjusting the weight of political content from Pages relative to content from personal connections, and modifying the recommendation system for Groups to reduce the promotion of extremist communities.
Each intervention was evaluated. Each was found to reduce engagement metrics. According to the disclosed documents, the recommendation to reduce the weight of reshared content — specifically because resharing was a primary mechanism for the spread of misinformation — was blocked by a senior product executive on the grounds that it would harm engagement numbers. A researcher described the blocked intervention in a memo as one that "would have reduced the spread of misinformation at the cost of overall engagement."
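The modeled intervention can be made concrete with a small sketch. The weights and numbers below are hypothetical, purely for illustration; Facebook's actual ranking formula is far more complex and not public. The sketch shows why down-weighting reshares disproportionately reduces the score of content that spreads through viral reshare chains:

```python
# Illustrative sketch with invented weights (not Facebook's actual formula):
# an engagement-weighted ranking score, and the effect of the modeled
# intervention that reduces the weight given to reshared content.

def rank_score(comments, shares, reactions, reshare_weight=1.0):
    """Score a post purely from engagement signals; accuracy plays no role."""
    return 4.0 * comments + reshare_weight * 3.0 * shares + 1.0 * reactions

# Viral misinformation typically spreads through long reshare chains, so its
# score depends heavily on the share term; an ordinary friend post does not.
viral_fake = {"comments": 900, "shares": 5000, "reactions": 2000}
friend_post = {"comments": 40, "shares": 5, "reactions": 120}

before = rank_score(**viral_fake)                     # baseline weighting
after = rank_score(**viral_fake, reshare_weight=0.5)  # modeled intervention

# Halving the reshare weight cuts the viral post's score far more than the
# friend post's, because shares dominate the viral post's signal mix.
print(before, after)
print(rank_score(**friend_post), rank_score(**friend_post, reshare_weight=0.5))
```

The point of the sketch is structural, not numerical: because engagement is the only input, the lever available to integrity teams was to reweight signals, and every such reweighting showed up directly as an engagement cost.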
Post-Election: The Public Response and the Zuckerberg Denial
In the immediate aftermath of the election, Mark Zuckerberg made a public statement that would prove deeply damaging to Facebook's credibility. On November 12, 2016, at the Techonomy conference, Zuckerberg said: "The idea that fake news on Facebook — of which it's a very small amount of content — influenced the election in any way, I think is a pretty crazy idea."
The statement was factually inaccurate (by Facebook's own internal data, the amount of fake news on the platform was not "a very small amount"), methodologically confused (ruling out any influence would have required evidence that did not exist), and — given what internal research had shown — difficult to reconcile with what senior company officials would have known at the time.
Zuckerberg later walked back the statement, acknowledging in the following years that the problem was real. By 2019, he was calling election integrity one of Facebook's top priorities. But the November 2016 statement crystallized a pattern that the Haugen disclosures would later document in detail: the gap between what Facebook's internal research showed and what the company's leadership said publicly.
Analysis
What Research Demonstrates and What Remains Contested
The scholarly literature on Facebook's role in the 2016 election is large and continues to develop. Several claims are well-supported:
Facebook was a significant source of political information. By 2016, a substantial portion of American adults reported getting news from Facebook, making it one of the largest distributors of political information in the country.
Fake news stories spread widely on Facebook. The BuzzFeed analysis, and subsequent academic studies, confirm that false political stories achieved massive reach on the platform. The specific mechanism — algorithmic amplification combined with partisan sharing behavior — is also well-documented.
Facebook's algorithm systematically rewarded content that generated emotional engagement regardless of accuracy. This is documented both by the platform's public descriptions of its ranking factors and by internal research disclosed through the Haugen release.
Facebook's internal research found that algorithmic recommendations contributed to extremist content exposure. The 64 percent figure on extremist group joins from algorithmic recommendations is drawn from internal documents; external researchers cannot fully replicate the analysis, but the directional finding is consistent with independent research on algorithmic amplification of extreme content.
What remains more contested is the magnitude of effect — specifically, what percentage of voters changed their votes as a result of Facebook-mediated misinformation exposure. This is an extremely difficult causal question. Exposure to false information is not the same as believing it; believing false information is not the same as changing one's vote because of it. The research literature does not provide a reliable estimate of how many votes, if any, were changed by Facebook-spread misinformation in 2016.
What Facebook Chose Not to Do
The most important finding in the internal documents for the purposes of this case study is not what Facebook knew but what Facebook chose not to do with that knowledge.
The disclosed documents show that Facebook's integrity researchers identified specific, technically feasible interventions that would have reduced the spread of misinformation and the algorithmic amplification of political extremism. These interventions were declined on the grounds that they would reduce engagement metrics.
This decision structure — where the cost of integrity interventions is measured in engagement metrics and the cost of not implementing them is measured in harm to users and democratic information quality — reveals the architecture of values embedded in Facebook's product decision-making. Engagement was the primary criterion. Integrity was a secondary consideration evaluated against its engagement cost.
The decision was not made by malicious actors. It was made by product managers and executives using the analytical framework their business model had produced: a framework in which attention is the product, engagement is the metric, and anything that reduces engagement is a cost. Within that framework, the decisions were internally consistent. The problem was the framework itself.
The Velocity Media Parallel
At Velocity Media, the equivalent moment came when Dr. Aisha Johnson presented data showing that the recommendation algorithm was systematically surfacing politically divisive content — not because users sought it out, but because the algorithm had learned that divisive content drove higher comment counts, longer sessions, and more return visits.
Marcus Webb's response was to run the same cost-benefit analysis that Facebook had run: what happens to engagement metrics if we reduce the amplification of divisive content? The analysis showed a short-term engagement decline.
Sarah Chen, as CEO, faced the question that Facebook's leadership faced: is the engagement decline acceptable? The structural pressures at Velocity Media are identical to those at Facebook — investor expectations, competitive pressure, advertising revenue tied to engagement volume. The question is whether the structural pressures are determinative or whether the space for ethical choice exists.
What the Facebook story reveals is that structural pressure alone did not force Facebook's choices. Specific humans made specific decisions not to implement specific interventions. The structural pressure made those decisions more likely; it did not make them inevitable. This is the narrow but real space in which corporate ethical responsibility operates.
Discussion Questions
- The BuzzFeed analysis found that fake news outperformed real news in Facebook engagement during the 2016 campaign. Given what you know about EdgeRank and its successors, explain mechanistically why this outcome was predictable. What properties of fake news stories aligned with what the algorithm was designed to reward?
- Facebook cited the Bakshy 2015 study in public discussions of political polarization while withholding internal research that told a different story. How should we evaluate the ethics of this selective disclosure? Is there a meaningful distinction between lying and selective disclosure of truth?
- Mark Zuckerberg's November 2016 statement calling the fake news influence claim "a pretty crazy idea" appears to contradict what Facebook's internal research had found. What are the possible explanations for this statement? How does this gap between public statement and internal knowledge inform your assessment of Facebook's culpability?
- The chapter identifies that 64 percent of extremist group joins came from Facebook's own algorithmic recommendations. What does this finding imply about the relationship between user agency and algorithmic influence? If users chose to engage with the content after the recommendation, does that engagement constitute "choice" in a meaningful sense?
- The structural argument suggests that any company in Facebook's position would have faced the same pressures and likely made similar choices. If this is true, what are the implications for how we should regulate social media platforms? Is regulation of individual companies sufficient, or does the structural argument require structural regulatory responses?
What This Means for Users
The 2016 election case study has several concrete implications for anyone using algorithmic social media platforms to consume political information.
The algorithm does not distinguish between true and false content. News Feed ranking systems measure engagement signals — shares, comments, reactions, time-on-screen. True and false content can produce identical engagement signals. A highly engaging false story will be algorithmically indistinguishable from a highly engaging true story. Users who rely on algorithmic curation as a quality filter are relying on a system that does not perform quality filtering.
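This blindness to accuracy can be shown in a minimal sketch. The weights and field names below are invented for illustration; the point is only that a ranker whose inputs are engagement signals cannot, by construction, tell true content from false:

```python
# Minimal sketch (hypothetical weights): a feed ranker that sees only
# engagement signals. The "is_true" field is carried along for illustration
# but never read, so a false story and a true story with identical
# engagement receive identical scores.

def feed_score(post):
    """Rank a post from engagement signals alone; accuracy never enters."""
    return 3.0 * post["shares"] + 4.0 * post["comments"] + 1.0 * post["reactions"]

fake_story = {"is_true": False, "shares": 800, "comments": 300, "reactions": 1500}
real_story = {"is_true": True,  "shares": 800, "comments": 300, "reactions": 1500}

assert feed_score(fake_story) == feed_score(real_story)
```

Nothing in the scoring function touches `is_true`, so no amount of tuning the weights makes the ranker a quality filter: truth is simply not among its inputs.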
Emotionally compelling content is not the same as accurate content. The content optimized for Facebook engagement — designed to provoke outrage, vindication, fear, or anger — is not optimized for accuracy. These two properties are independent. Users should approach emotionally compelling content, particularly political content, with heightened rather than reduced skepticism: the more viscerally compelling a story feels, the more likely it is that its emotional impact was intentionally engineered.
Cross-platform verification is essential. A story that appears only on Facebook, that arrived in your feed through an algorithmically amplified share rather than from a source you actively follow, should be verified through independent sources before being accepted as true or shared further.
Algorithmic amplification creates false impressions of consensus. When a piece of content appears repeatedly in a feed from multiple different sources, it creates the impression of widespread agreement or factual consensus. This impression may be entirely an artifact of algorithmic amplification rather than of genuine widespread endorsement. The frequency of appearance of a claim in an algorithmic feed is not evidence of the claim's truth or of the breadth of genuine support for it.
The platform's interest and your interest diverge. The platform optimizes for content that keeps you on the platform. Your interest is in accurate information that helps you make good decisions. These interests are not aligned. The platform is not a neutral information intermediary; it is a business whose product is your attention, and its algorithm is designed to capture that attention, not to inform you.