Case Study 5.2: Anatomy of a Viral Disinformation Post

Tracing a False Story from Origin to Spread

On the evening of November 4, 2020, the day after the U.S. presidential election and while vote-counting was still underway in several key states, a post began circulating on Twitter and Facebook. It paired a blurry photograph with the claim that it depicted election workers in Philadelphia committing voter fraud, captioned with variants of "LOOK at what they're doing to Trump's votes!!!" The post was shared tens of thousands of times within the first hour.

The photograph had in fact been taken in an entirely different context. The precise origin varied across documented cases, but the pattern was consistent: viral election fraud posts in 2020 repeatedly used photographs that were not from the location or context claimed.

This case study applies the five-part framework to this category of disinformation — not to a single specific post, whose details are contested, but to the documented pattern, which multiple research organizations including the Stanford Internet Observatory, the Election Integrity Partnership, and First Draft have analyzed extensively.


The Pattern: What the Research Shows

Researchers analyzing thousands of social media posts during the 2020 election period documented a consistent pattern:

  • Posts claiming to document election fraud frequently used images from unrelated contexts, different countries, or different years
  • The posts were disproportionately shared during the periods of highest vote-counting uncertainty (election night through the following days)
  • Many originated from accounts with limited prior posting history, suggesting coordinated inauthentic behavior
  • Emotional intensity was maximized through capitalization, exclamation points, direct second-person address ("YOUR vote"), and urgency framing
  • The claims could typically be refuted through a basic reverse image search, yet most users did not perform this check before sharing (a verification step sketched in code after this list)
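
To make that verification step concrete, here is a minimal sketch of how image reuse can be detected programmatically, using the Python Pillow and ImageHash libraries. The file names and the distance threshold are illustrative assumptions, not details from any documented case.

```python
# Sketch: detecting a recycled photograph via perceptual hashing.
# Requires the Pillow and ImageHash packages (pip install Pillow ImageHash).
# File names and the threshold are illustrative assumptions.
from PIL import Image
import imagehash

# Hash the photograph attached to the viral post.
claimed = imagehash.phash(Image.open("viral_post_photo.jpg"))

# Hash candidate originals, e.g. drawn from a wire-photo archive.
archive = {
    "warehouse_2016.jpg": imagehash.phash(Image.open("warehouse_2016.jpg")),
    "overseas_count_2019.jpg": imagehash.phash(Image.open("overseas_count_2019.jpg")),
}

# Subtracting two hashes yields their Hamming distance. Small distances
# mean visually similar images, so a match survives the resizing and
# recompression that would defeat an exact byte-for-byte comparison.
THRESHOLD = 8  # a common working cutoff for the 64-bit pHash

for name, h in archive.items():
    if claimed - h <= THRESHOLD:
        print(f"Likely earlier source: {name} (distance {claimed - h})")
```

Commercial reverse image search engines apply the same principle at web scale, which is why this refutation was available to any smartphone user.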

Five-Part Analysis of the Viral Election Fraud Post Pattern

Source: Anonymous or pseudonymous social media accounts. Many were later identified as automated bots, coordinated networks of inauthentic accounts, or accounts created specifically for high-volume sharing during election periods. Some were organic individual users who had received false content from amplification networks and reshared it in genuine belief.
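
The "limited prior posting history" signal noted in the research pattern above can be expressed as a simple heuristic. The sketch below is a toy illustration with assumed field names and thresholds; it is not a reconstruction of any platform's or research team's actual detection method.

```python
# Toy heuristic for flagging accounts whose history is inconsistent with
# organic behavior: young accounts posting at very high volume.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime   # account creation time (UTC)
    total_posts: int       # lifetime post count
    posts_last_24h: int    # recent burst activity

def inauthenticity_signals(acct: Account, now: datetime) -> list[str]:
    signals = []
    age_days = max((now - acct.created_at).days, 1)
    if age_days < 30:
        signals.append("account created within the last 30 days")
    if acct.total_posts / age_days > 50:
        signals.append("sustained posting rate above 50 per day")
    if acct.posts_last_24h > 200:
        signals.append("burst of more than 200 posts in 24 hours")
    return signals

# Example: an account created two weeks before election night.
acct = Account(datetime(2020, 10, 20, tzinfo=timezone.utc), 4000, 350)
print(inauthenticity_signals(acct, datetime(2020, 11, 4, tzinfo=timezone.utc)))
```

No single heuristic of this kind is conclusive; researchers combine many signals, including content similarity and posting-time correlation across accounts, before inferring coordination.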

Interest: In the documented network cases, the interest was to amplify distrust of the election outcome among audiences already predisposed to doubt it. In organic individual cases, the interest was social: signaling alarm to one's community.

Credibility: The posts were anonymous and carried essentially no institutional accountability. Their apparent credibility came from being shared by people one trusted; social proof substituted for institutional credibility.

Concealment: The actual source (the origin of the photograph, the identity of the accounts that first posted and amplified it) was systematically obscured. Reverse image search could have traced the photographs, but nothing in the posts' design invited that check.

Message content: The explicit claim was usually a variant of "this photograph shows voter fraud in [location]." The implicit claims were: voter fraud is widespread, the outcome of the election is being manipulated, and the viewer should be alarmed and act (by sharing).

Evidence: Zero. The photograph did not document what the caption claimed. In the documented cases, the photograph's actual origin (different context, different country, different year) thoroughly refuted the explicit claim, but the refutation was neither visible to most viewers nor sought by most sharers.

Emotional register: Outrage and urgency, maximized by every available textual technique. Capitalization ("LOOK," "YOUR," "STOP THEM"). Multiple exclamation points. Direct second-person address ("your vote"). Time-pressure framing ("before they delete this"). The emotional design was specifically engineered for immediate sharing before evaluation — a design feature that the research on viral spread (see Chapter 16) confirms is effective.

The claim of imminent deletion ("before they delete this!!!") is particularly significant analytically: it creates urgency that specifically discourages the pause needed for verification. The implied bargain is that if you pause to verify, the content will be gone. The engineering of the emotional register serves the strategic goal of bypassing System 2 processing.
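
The textual techniques described above can be operationalized as measurable features, which is how researchers quantify emotional intensity across thousands of posts. The sketch below is a minimal illustration of that idea; the specific markers and cutoffs are assumptions for demonstration, not a validated instrument.

```python
# Sketch: extracting the emotional-intensity markers described above.
# The marker lists are illustrative assumptions, not a validated lexicon.
import re

URGENCY_PHRASES = ("before they delete", "share now", "act now", "wake up")

def intensity_features(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text)
    caps = [w for w in words if len(w) > 1 and w.isupper()]
    return {
        "caps_ratio": len(caps) / max(len(words), 1),   # "LOOK", "YOUR"
        "exclamations": text.count("!"),                 # "!!!"
        "second_person": sum(w.lower() in ("you", "your") for w in words),
        "urgency_phrases": sum(p in text.lower() for p in URGENCY_PHRASES),
    }

post = "LOOK at what they're doing to YOUR votes!!! Share before they delete this!!!"
print(intensity_features(post))
```

Posts scoring high on all four dimensions at once are exactly the ones the research pattern describes: engineered for sharing before evaluation.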

Implicit audience: The posts were designed for people who already believed election fraud was widespread — specifically, people who had supported one candidate and were primed to interpret ambiguous information as evidence of an unfavorable outcome being stolen.

The posts did not attempt to persuade neutral audiences. They activated existing belief and converted it into high-engagement sharing behavior. The "evidence" (the photograph) was not evaluated because the audience had no motivation to evaluate it — the post confirmed what they already believed and provided social currency for expressing alarm.

Strategic omission: The actual provenance of the photograph. The statistical rarity of documented voter fraud. The existence of multiple independent mechanisms for securing election integrity. The identities of the accounts that originated and amplified the post. The research literature on the security of the specific voting processes depicted.

Perhaps most importantly: the omission of the fact that claims of this type were systematically refutable through simple verification tools available to any smartphone user.


The Spread Mechanism

The Vosoughi et al. finding that false news spreads faster than true news because it is more novel and more emotionally intense is directly illustrated here. The corrective content ("that photograph was taken in a different context") arrived later, traveled shorter distances, and produced less sharing engagement. The original posts were both more emotionally intense and more novel than the corrections: "voter fraud in progress" is news in a way that "this photograph is unrelated" is not.

Researchers at MIT and Carnegie Mellon documented that corrections of viral election fraud claims during 2020 reached approximately 1/10 to 1/30 the audience of the original claim in comparable time windows.


Why the Anatomy Matters

The five-part analysis of this pattern reveals several features that are non-obvious from simple observation:

The emotional engineering is not incidental. Every textual feature of the post design — capitalization, exclamation points, second-person address, imminent deletion framing — is specifically calibrated to minimize the probability of System 2 engagement before sharing. This is intentional design, not accidental.

The target audience was not uninformed. Research on who shared these posts found that sharers were disproportionately people who were highly politically engaged and highly motivated. They were not primarily passive recipients of misinformation; they were active distributors, driven by identity and political urgency. The anatomy's "implicit audience" component explains why: the posts were designed to activate identity-protective motivation to share, not to provide information.

The absence of attribution is a feature, not a bug. Anonymous sourcing is not an oversight — it is a design choice that prevents the audience from applying source credibility evaluation. If the post had a disclosed sponsor, the audience could evaluate that sponsor. Without disclosure, the post substitutes social proof (my network is sharing this) for institutional credibility.


Discussion Questions

  1. The case study identifies the "before they delete this!!!" framing as analytically significant because it creates urgency that discourages verification pauses. Can you identify three other textual or design features of this post category that serve a similar function — specifically engineering for sharing before evaluation?

  2. Many of the people who shared these posts were not lying — they genuinely believed them. Under the working definition from Chapter 1, are these individual sharers propagandists? What does it mean to be a "non-intentional" distributor of a propaganda campaign?

  3. The corrective content reached roughly one-tenth to one-thirtieth the audience of the original false claim. What structural changes (to platform design, to media literacy education, or to the information environment) would most directly address this asymmetry? Be specific.

  4. Apply the framework's "implicit audience" analysis: design a counter-post intended to reduce the sharing of election fraud misinformation among the audience that the original post was designed for. What would it say, how would it be structured, and what would it be careful to include and omit?