Case Study 02: The 2020 US Election — Dramatically Different Information Environments

Liberal vs. Conservative Facebook Users and the Fragmentation of Shared Reality


Background

Democratic governance depends, at a minimum, on citizens sharing enough factual common ground to participate in the same public deliberative process. They can — must — disagree about values, policies, and priorities. But when citizens inhabit information environments so different that they are operating with substantially different empirical premises about what events have occurred and what facts are true, the shared factual foundation for democratic deliberation is threatened.

The 2020 US presidential election provided the most extensively studied real-world test of whether social media's personalization dynamics had fragmented American political information environments to the point of creating meaningfully different factual realities. The answer that emerged from academic research, journalistic investigation, and platform-provided data is complex: the differences were real, in some cases substantial, but their distribution was uneven and their relationship to electoral outcomes is difficult to establish with confidence.

This case study examines the documented information environment asymmetries between liberal and conservative Facebook users before and after the 2020 election, analyzing their sources, their documented effects, and what they mean for democratic information ecosystems.


Timeline

January - October 2020: The Campaign Period

Researchers at the Election Integrity Partnership (a consortium of research groups including Stanford Internet Observatory, University of Washington, and others) monitor misinformation on social platforms. Multiple academic teams deploy surveys and behavioral tracking to document what political information users with different partisan orientations are exposed to.

October 2020: Pre-Election Research Peak

Several major research reports are released in the weeks before the election documenting information asymmetries. Reports from the New York University Center for Social Media and Politics (CSMaP) document significant differences in the news sources appearing in liberal vs. conservative social media environments. Pew Research Center data shows dramatically different trust levels in news sources across partisan lines.

November 3-4, 2020: Election Night and Its Immediate Aftermath

The race remains too close to call on election night, with several states still counting mail ballots; major news organizations do not call the election for Biden until November 7. Conservative social media environments begin circulating claims about election irregularities and fraud. Liberal social media environments report election results and call for the counting process to continue. The two environments cover the same events in ways that become progressively less factually overlapping.

November - December 2020: The Contested Election Period

Research documents the dramatic divergence in what liberal and conservative Facebook and Twitter users are seeing about the election results, the counting process, and fraud allegations. NYU Ad Observatory data shows that election fraud claims circulate extensively in conservative political Facebook communities and rarely penetrate liberal ones.

January 6, 2021: The Capitol Attack

The attack on the US Capitol is interpreted through dramatically different information environments, with liberal and conservative social media users seeing not just different interpretations but different factual claims about what happened and why.

Post-January 2021: Research and Accountability

Academic research on the information environment differences continues. Multiple studies are published throughout 2021-2022 examining what users with different political profiles saw before and after the election. Facebook provides limited data to academic researchers under the Social Science One arrangement, enabling some platform-specific analysis.


Documenting the Information Environment Asymmetries

The News Source Asymmetry

The best-documented information environment asymmetry between liberal and conservative Facebook users in the 2020 election period was which news sources they encountered. Research by the NYU Center for Social Media and Politics, drawing on data from the Ad Observer browser extension run by NYU's Ad Observatory project (which collected anonymized data on the political content participating Facebook users were shown), documented that:

Conservative users' news environments contained substantially higher proportions of content from hyperpartisan conservative websites, from Facebook Pages associated with conservative political figures and organizations, and from outlets that the Ad Fontes Media bias-reliability chart rated low on reliability and far right on bias.

Liberal users' news environments contained higher proportions of content from mainstream legacy news outlets (New York Times, Washington Post, NPR, CNN) and liberal-oriented outlets, with less representation of hyperpartisan left sources than conservative users saw from hyperpartisan right sources.

This asymmetry in source composition matters because the sources in each environment had different editorial standards, different fact-checking practices, and different relationships to the specific factual claims about election integrity that became central to the post-election period. Conservative users were more likely to encounter election fraud claims from sources that treated them as credible; liberal users were more likely to encounter news from sources that characterized fraud claims as unfounded.

The Election Fraud Information Environment

The specific case of election integrity and fraud claims provides the sharpest illustration of information environment divergence in the 2020 election period.

Research by Pennycook and Rand (2021) found that in the weeks following the election, a large proportion of Trump-voting survey respondents reported believing that the election had involved widespread fraud, while substantially smaller proportions of Biden-voting respondents believed this. This gap concerned a factual question — whether widespread election fraud had occurred — and so was not merely a difference of interpretation but a disagreement about facts.

The information environments each group was inhabiting provide context for why this gap developed. Research using participants from both partisan groups and comparing their media environments showed that:

  • Claims of widespread election fraud were circulated extensively within conservative Facebook networks and received substantial engagement
  • These claims were rarely seen in liberal Facebook networks, where they appeared primarily in the form of fact-checks and denials
  • When the same user received fact-checking content alongside misinformation, engagement with the misinformation was substantially higher for users already predisposed to believe the claims (consistent with motivated reasoning research)

The result was not simply that one group was "wrong" and another was "right" — though in this specific case the factual record is unambiguous — but that the two groups were inhabiting different informational environments, exposed to different claims from different sources, and had limited exposure to the other group's informational environment.

What Research Found About Filter Bubble vs. Echo Chamber Contributions

An important methodological question for this case is how much of the information environment asymmetry is attributable to algorithmic personalization (filter bubbles) vs. social selection (echo chambers). The research consensus is that both contributed, but that their relative magnitudes are difficult to disentangle:

Echo chamber effects: Conservative and liberal users were following different accounts, sharing to and from different networks, and actively choosing different news sources. Social selection — whom you follow and associate with — contributed substantially to information environment divergence.

Filter bubble effects: Within each group's social network, algorithmic personalization further concentrated engagement with like-partisan content. Users who engaged with election fraud claims received more election fraud claims in their feeds. Users who engaged with election integrity fact-checks received more fact-checks. The algorithm amplified the existing social selection tendencies.
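This amplification dynamic can be sketched as a deterministic toy model — a minimal sketch with illustrative numbers, not measured platform parameters. Exposure to each content type is made proportional to its accumulated engagement, so even a modest difference in a user's engagement rates compounds into a heavily skewed feed.

```python
def simulate_feed(p_engage_a, p_engage_b, rounds=200):
    """Toy model of an engagement-ranked feed with two content types.

    Exposure to each type is proportional to its accumulated
    engagement (with a +1 smoothing prior), and engagement accrues
    in expectation at rate p_engage * exposure. All numbers are
    illustrative assumptions, not platform parameters.
    Returns the final fraction of the feed devoted to type A.
    """
    eng_a = eng_b = 1.0  # smoothing prior so both types start visible
    for _ in range(rounds):
        exposure_a = eng_a / (eng_a + eng_b)
        exposure_b = 1.0 - exposure_a
        eng_a += p_engage_a * exposure_a  # engagement feeds back into ranking
        eng_b += p_engage_b * exposure_b
    return eng_a / (eng_a + eng_b)

# A user only moderately more likely to engage with type A (0.6 vs. 0.4)
# ends up with a feed dominated by type A, while a user with no
# preference keeps a balanced feed.
skewed = simulate_feed(0.6, 0.4)
balanced = simulate_feed(0.5, 0.5)
```

The rich-get-richer structure, not the size of the initial preference, does most of the work: once one type leads in accumulated engagement, it earns disproportionate exposure and therefore disproportionate further engagement.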

Cross-cutting exposure and its effects: When users in either group were exposed to cross-cutting content — conservative users seeing mainstream news, liberal users seeing election fraud claims — the results were consistent with Bail et al.'s (2018) finding: cross-cutting exposure tended to activate motivated reasoning and may have reinforced rather than challenged existing beliefs.


The Mechanics of Information Environment Fragmentation

The 2020 case illustrates specific mechanisms through which information environment fragmentation operates that go beyond the simple "algorithm shows you what you already believe" narrative.

The Asymmetric Virality of Misinformation

Research consistently shows that misinformation — particularly emotionally activating political misinformation — spreads faster and farther on social media than factual corrections or nuanced reporting. In the post-election period, election fraud claims were new, emotionally activating, and consistent with existing political suspicions for conservative users — all properties that social sharing research predicts will increase virality. Fact-checks are typically less emotionally activating and spread more slowly.

The virality asymmetry means that even if the algorithm treated both types of content equally, misinformation would spread more within susceptible networks simply because of its emotional activation properties. The algorithm did not treat them equally — it amplified whichever content generated more engagement signals — but even without this amplification, the virality dynamics would have produced information environment asymmetry.
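The compounding effect described above can be illustrated with an expected-value branching sketch (all parameters are hypothetical, chosen only to show the shape of the dynamic): even if the platform treats both content types identically, a higher per-exposure resharing probability compounds across generations of sharing.

```python
def expected_reach(share_prob, fanout=4, generations=10):
    """Expected cumulative audience of a toy sharing cascade.

    Each exposed user reshares with probability share_prob to
    `fanout` followers, so the per-generation branching factor is
    share_prob * fanout. Parameters are illustrative assumptions,
    not measured values.
    """
    reach = 1.0     # the original seed exposure
    frontier = 1.0  # expected newly exposed users this generation
    for _ in range(generations):
        frontier *= share_prob * fanout
        reach += frontier
    return reach

# Emotionally activating claims (higher resharing rate) are
# supercritical and keep growing; less activating fact-checks
# are subcritical and stall after a few generations.
misinfo_reach = expected_reach(share_prob=0.35)   # branching factor 1.4
factcheck_reach = expected_reach(share_prob=0.2)  # branching factor 0.8
```

The key threshold is whether the branching factor exceeds 1: above it, expected reach grows with every generation; below it, the cascade dies out regardless of how long it circulates.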

The Engagement-Based Trust Signal

When users in conservative Facebook networks saw election fraud claims generating thousands of likes, shares, and affirming comments from members of their community, this engagement created a social proof signal — the content must be reliable and important if so many people in my community believe it. This social proof dynamic is separate from and reinforcing of the algorithmic amplification: the algorithm amplified high-engagement content, and the high engagement itself served as a credibility signal for community members.

Research on misinformation in social networks by Vosoughi, Roy, and Aral (2018) is relevant here: false news spreads faster than true news on Twitter, partly because novelty-seeking drives sharing of surprising claims, and partly because false news tends to generate more emotional activation — particularly fear and disgust — than true news. In the 2020 election context, election fraud claims had both properties: they were novel and surprising, and they generated strong emotional activation (outrage, fear) among users predisposed to find them credible.

The Temporal Asymmetry

The 2020 election information environment divergence was also temporally shaped. Election fraud claims emerged and circulated rapidly in conservative networks in the hours and days after election night. Thorough factual analysis of these claims — explaining why recounts, audits, and court decisions were finding no evidence of widespread fraud — took longer to produce and circulate. By the time systematic fact-checking was widely available, many users in conservative networks had already formed beliefs about election fraud that motivated reasoning would then protect from revision.

This temporal asymmetry — misinformation moves fast, careful analysis moves slow — is a structural feature of social media information environments that interacts with personalization dynamics. Filter bubble and echo chamber effects create information environments where misinformation travels quickly within networks of believers; the personalization that served the initial viral claims then continues to serve more claims to the same users.
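The head-start effect can be made concrete with the same kind of toy arithmetic (illustrative numbers only): a claim that begins spreading immediately accumulates most of its audience before a slower, later-starting correction reaches meaningful scale.

```python
def cumulative_reach(start_day, daily_growth, horizon=14):
    """Toy cumulative audience for a message whose daily audience
    grows geometrically from start_day onward. All parameters are
    illustrative assumptions, not measured values."""
    reach = 0.0
    audience = 1.0
    for day in range(horizon):
        if day >= start_day:
            reach += audience
            audience *= daily_growth
    return reach

# A fraud claim spreading from election night vs. a detailed
# fact-check that takes five days to produce and, being less
# emotionally activating, also spreads more slowly.
claim_reach = cumulative_reach(start_day=0, daily_growth=1.5)
factcheck_reach = cumulative_reach(start_day=5, daily_growth=1.2)
```

In this sketch the claim reaches hundreds of (toy) users before the fact-check reaches a few dozen — and those early days are exactly the window in which beliefs form and become resistant to revision.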


Platform Response and Its Limits

In the post-election period, Facebook and Twitter implemented specific content moderation measures targeting election fraud claims. Twitter added labels to tweets sharing contested election fraud claims. Facebook reduced the distribution of content challenging election results, added information panels linking to authoritative election information, and removed some accounts and content circulating fraud claims.

Research on the effectiveness of these measures is mixed. Some studies found that labels reduced sharing of labeled content. Others found minimal effects or even backfire effects among users predisposed to distrust fact-checks. The fundamental challenge is that content moderation operates at the level of individual pieces of content, while information environment asymmetry is a structural property of personalized networks — it requires structural responses, not only content-by-content interventions.


Voices from the Field

"What struck me studying the 2020 election was not just that conservatives and liberals had different views of the election. It was that they had different factual environments. People were asking different questions, checking different facts, and trusting different sources. When you're in a situation where 40 percent of one party's supporters believe something empirically false about the election process, you have to ask: where did they get that information? And the answer, in 2020, runs directly through algorithmically curated social media feeds."

— Political communication researcher, paraphrased from academic conference (2021)

"The 2020 election showed us that we are not just dealing with a filter bubble problem in the abstract — we're dealing with a democracy problem in the concrete. When citizens can't agree on basic facts about how elections work, democratic institutions are under genuine threat. And social media's information architecture was a significant contributor to that condition."

— Digital democracy researcher, paraphrased from policy testimony (2022)


Discussion Questions

  1. The 2020 election case shows that conservative and liberal Facebook users inhabited dramatically different factual information environments about election integrity. Does this information environment asymmetry shift moral responsibility from individual voters (who believed false things) toward platforms (whose personalization architecture created the environment)? Or does individual responsibility remain primary regardless of information environment?

  2. Platform content moderation measures (labels, reduced distribution of contested claims) were implemented in the post-election period. Research on their effectiveness is mixed. What does this mixed evidence suggest about the appropriate role of content moderation as a response to information environment fragmentation? What alternative or complementary responses might be more effective?

  3. The temporal asymmetry described in the case — misinformation spreads fast, thorough fact-checking takes time — is a structural feature of social media information environments. What design choices could platforms make to address this asymmetry? Are there precedents in other information contexts (journalism, public health communication) for slowing the spread of unverified claims while analysis is conducted?

  4. The case shows that cross-cutting exposure (conservative users seeing mainstream news, liberal users seeing fraud claims) did not reliably correct false beliefs and may have reinforced them. If cross-cutting exposure doesn't work as a remedy, what does this imply for how we should address the democratic consequences of information environment fragmentation? What would work?

  5. The 2020 case is a US-specific example. Research has found similar information environment fragmentation dynamics in other countries during contested elections (notably Brazil 2022 and various European elections). What do these cross-national comparisons suggest about whether the 2020 US case reflects platform design problems that are universal or American-specific political culture problems?


What This Means for Users

The 2020 election case has several concrete implications for individual users seeking to navigate politicized information environments.

The limits of platform moderation: Platform labels and reduced content distribution are imperfect and can be ineffective or counterproductive for users who distrust mainstream news sources and fact-checkers. Users who want to maintain accurate political information cannot rely on platform safety systems; they must engage in active epistemic hygiene.

Source literacy over platform trust: Rather than trusting any platform's curation of political news, investing in source literacy — understanding the editorial standards, funding sources, and track records of news organizations you consult — provides more durable protection against misinformation than relying on algorithmic filtering.

Social network effects on belief: The 2020 case demonstrates that your social network's collective engagement with information claims is itself a credibility signal that may be stronger than your critical evaluation of individual claims. Being aware of this dynamic — that widespread belief in your community is not evidence of accuracy — is a precondition for epistemic independence.

Active consultation of primary sources: For specific factual questions about governmental processes, legal decisions, and official statistics — the types of claims contested in election integrity debates — consulting primary sources (official government websites, court documents, audit reports) rather than social media representations of those sources provides access to facts that are not subject to personalized curation.

Communicating across the divide: For users who want to communicate effectively with people in dramatically different information environments, understanding what information the other person has and has not been exposed to is essential for productive conversation. Rather than debating interpretations of events, first establishing shared facts — consulting primary sources together, acknowledging the different media environments each person has been in — provides more productive common ground than argument from within one's own information environment.