Chapter 15: Political Misinformation and Election Integrity

Learning Objectives

By the end of this chapter, students will be able to:

  1. Define political misinformation and distinguish it from other forms of false information, including satire, honest error, and strategic deception.
  2. Describe the major categories of election misinformation and provide documented examples of each type.
  3. Analyze the Internet Research Agency's 2016 operations using primary source materials from congressional investigations.
  4. Explain the "Big Lie" narrative surrounding the 2020 US presidential election, trace its spread across media ecosystems, and evaluate the judicial record.
  5. Assess the potential and documented impacts of synthetic media (deepfakes) on electoral processes.
  6. Identify patterns of voter suppression information operations, including targeting of specific communities.
  7. Compare election interference patterns across multiple democracies, identifying common tactics and national variations.
  8. Distinguish between technical election security and election integrity as competing political narratives.
  9. Evaluate the effectiveness of platform policies, legal responses, and civil society interventions in protecting democratic discourse.

Section 15.1: The Scope of Political Misinformation

Defining the Territory

Political misinformation sits at the intersection of information disorder and democratic theory. It encompasses false or misleading content that pertains to political actors, political processes, or political outcomes — content that, when believed, shapes how citizens understand and participate in governance. Yet the category is contested from multiple directions.

On one side, critics argue that the label "misinformation" is itself weaponized politically: governments and platforms invoke it to suppress legitimate dissent, and the line between false information and contested interpretation is rarely as clean as fact-checkers suggest. On the other side, researchers document real harms: voters who receive false information about polling locations may fail to vote; citizens who believe fabricated stories about candidates may make different choices; populations fed coordinated narratives about electoral fraud may reject legitimate election results.

Scholars have proposed several frameworks for distinguishing types of information disorder. Claire Wardle and Hossein Derakhshan's widely cited model distinguishes mis-information (false content shared without harmful intent), dis-information (false content shared with intent to harm), and mal-information (true content shared with intent to harm). Within political contexts, each type appears:

  • Misinformation: A genuine rumor about a candidate's health record, shared by citizens who believe it.
  • Disinformation: A coordinated state-sponsored operation fabricating documents to damage a political opponent.
  • Malinformation: Publishing accurate information about a politician's private life specifically to suppress their voters.

The boundaries blur in practice. A state-sponsored operation plants a fabricated story; domestic partisans pick it up and share it believing it to be true; a tabloid publishes it for profit; social media algorithms amplify it to engaged audiences. The same false claim flows through channels with varying levels of intent, making causal attribution extremely difficult.

The Partisan Asymmetry Debate

One of the most politically charged questions in misinformation research is whether political misinformation is symmetrically distributed across the ideological spectrum or concentrated more heavily on one side.

A significant body of research, particularly from the United States context, suggests asymmetric distribution. Studies by Andrew Guess, Jonathan Nagler, and Joshua Tucker found that in 2016, sharing of misinformation was concentrated among older, conservative-leaning users. The Oxford Internet Institute's Computational Propaganda Project has consistently found that professional-grade computational propaganda — automated bot accounts, coordinated content, fabricated media — is more heavily deployed by right-wing actors in Western democracies, though not exclusively so.

The Iffy Quotient, developed by the University of Michigan's Center for Social Media Responsibility using source ratings from services such as NewsGuard, measures the share of low-credibility content in users' news diets. Studies using this metric have found that right-leaning media diets contain substantially more low-credibility content than centrist or left-leaning diets in the US context.
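The core computation behind a diet-share metric of this kind is simple to illustrate. The sketch below is not the actual Iffy Quotient methodology — the domain list and names are hypothetical stand-ins for the proprietary source ratings such metrics rely on — but it shows the basic idea: classify each shared link by its domain's credibility rating and take the low-credibility fraction.

```python
# Illustrative sketch (not the real Iffy Quotient methodology): given a set
# of shared links and a hypothetical list of low-credibility domains, compute
# the share of low-credibility content in that media diet.
from urllib.parse import urlparse

# Hypothetical domain ratings; real metrics use ratings from services
# such as NewsGuard rather than a hand-built set.
LOW_CREDIBILITY = {"fakenews.example", "hoaxdaily.example"}

def low_credibility_share(urls):
    """Fraction of shared URLs whose domain is rated low-credibility."""
    if not urls:
        return 0.0
    hits = sum(
        1 for u in urls
        if urlparse(u).netloc.removeprefix("www.") in LOW_CREDIBILITY
    )
    return hits / len(urls)

shared = [
    "https://www.fakenews.example/story1",
    "https://reuters.com/world/article",
    "https://hoaxdaily.example/shock",
    "https://apnews.com/politics/item",
]
print(low_credibility_share(shared))  # 0.5
```

In practice the hard part is not this arithmetic but the rating step feeding it — which is exactly where the methodological critiques discussed below apply.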

However, methodological critiques are significant. Research designs often reflect the assumptions of their creators; what counts as a "credible source" may itself encode liberal mainstream media assumptions. Some research finds that left-leaning actors employ different forms of misleading content — not fabricated stories but selective framing, decontextualized statistics, and misleading visuals. The comparative literature outside the US is more mixed: in Hungary, authoritarian right-wing actors control mainstream media while opposition misinformation operates online; in Venezuela and Nicaragua, state-aligned left-wing governments deploy sophisticated disinformation apparatus.

The asymmetry debate matters because it shapes policy responses. If misinformation is symmetrically distributed, platform interventions should be politically neutral; if it is asymmetric, symmetrical enforcement may amount to false equivalence.

Cross-National Patterns

Studies comparing misinformation across democracies reveal both universal features and important national variations. The Reuters Institute Digital News Report surveys media trust and news consumption across 46 countries annually. Several findings are consistent:

Media trust correlates negatively with exposure to misinformation, but the relationship is complex. In high-trust media environments like Finland and Denmark, citizens report high confidence in their ability to identify misinformation, but this confidence may itself reduce vigilance. In low-trust environments like the United States and United Kingdom, high exposure to partisan media creates distinct information ecosystems that rarely overlap.

Social media platform penetration shapes misinformation dynamics. In countries where WhatsApp is the primary news channel — India, Brazil, Indonesia — misinformation spreads through encrypted private channels that researchers and platforms cannot easily monitor. In countries where Facebook dominates — the Philippines, Cambodia, much of sub-Saharan Africa — public sharing enables both viral spread and easier intervention.

Authoritarian and hybrid-regime states actively export information operations. Russia's Internet Research Agency (discussed in Section 15.3), China's "Fifty Cent Army," and Iran's coordinated networks have all been documented interfering in democracies beyond their borders. But domestic actors in democracies also engage in sophisticated information operations — sometimes learning from foreign models.

Legal and regulatory environments differ dramatically. Germany's NetzDG law requires platforms to remove clearly illegal hate speech within 24 hours and has been both praised and criticized. France has anti-manipulation laws specifically targeting electoral periods. The United States, with its First Amendment tradition, has been more reluctant to regulate speech directly, though Federal Election Commission regulations govern some campaign-related communications.


Section 15.2: Election Misinformation Typology

A Framework for Classification

Election misinformation encompasses a wide range of false claims targeting different aspects of the electoral process. Researchers at the Election Integrity Partnership, a coalition formed before the 2020 US election and involving Stanford Internet Observatory, University of Washington, Graphika, and the Atlantic Council's Digital Forensic Research Lab, developed a typology that has become widely used in both academic research and platform policy.

Voter Suppression Disinformation

Voter suppression disinformation deliberately targets eligible voters with false information designed to prevent or discourage their participation. This category includes:

False information about logistics: Incorrect dates for Election Day, wrong polling locations, fabricated changes to voting hours, false claims that only registered voters who requested mail ballots can use them. In the 2010 California gubernatorial race, robocalls were sent to Democratic-leaning Latino communities with instructions in Spanish that Election Day was November 3rd — one day after the actual election. Similar operations have been documented across multiple election cycles.

False eligibility claims: Fabricated information telling immigrants, felons, people with outstanding warrants, or people who owe money that they cannot vote. In many cases, these false claims target communities that are genuinely uncertain about their eligibility, exploiting existing knowledge gaps.

Intimidation messaging: False claims that law enforcement will be present at polling locations to check immigration status or arrest people with outstanding warrants. These operations have been documented targeting Latino communities in particular.

Vote-by-mail disinformation: In 2020, claims proliferated that mail-in ballots would not be counted if postmarked after a certain date, that voters who had requested mail ballots could not vote in person, or that ballot drop boxes were operated by partisan actors.

The targeting of these operations frequently follows racial and community lines. Research by the Brennan Center for Justice has documented disproportionate targeting of Black, Latino, and Native American voters.

Candidate Misrepresentation

False claims about candidates' positions, statements, records, or personal characteristics. This category includes:

Fabricated quotes: Statements falsely attributed to candidates, including completely fabricated quotations and taken-out-of-context statements edited to reverse their meaning. Selective editing of video and audio — "cheap fakes" or "shallow fakes" — has become particularly prevalent, requiring no sophisticated technology.

False biographical claims: Fabricated criminal records, false military service records, invented educational credentials. The "birther" conspiracy claiming Barack Obama was not born in the United States represents a sustained, years-long campaign of biographical misinformation with documented racial dimensions.

Manufactured controversy: False claims about candidates' health, mental fitness, or behavior. During the 2016 campaign, false stories about Hillary Clinton's health circulated extensively; in 2020, similar false claims were made about Joe Biden.

Policy misrepresentation: Claims that candidates have promised positions they have not taken, or that they secretly hold positions they publicly disavow.

Process Misinformation

False claims about how voting and vote-counting work, including:

Voting technology misinformation: Claims that voting machines are easily hacked, that they have "switches" that flip votes, or that they are connected to the internet and remotely manipulated. While genuine cybersecurity concerns about voting infrastructure exist, these are distinct from fabricated claims about specific machines in specific elections.

Ballot-counting process misinformation: False claims about how ballots are handled, observed, counted, or audited. In 2020, false claims proliferated about ballot harvesting, illegal adjudication, "ballot dumps" in the middle of the night, and the significance of normal vote-counting processes that occur when urban areas count last.

Mail-in voting misinformation: False claims about the security and verification requirements of mail-in voting, including false assertions that signatures are not verified or that ballots can be cast in someone else's name without detection.

Results Misinformation

False claims about election outcomes, including:

Pre-result fraud claims: Asserting fraud before any results are known, effectively pre-emptively delegitimizing any unfavorable outcome.

False fraud allegations: Claiming specific, verifiable fraud occurred when courts and election officials have found no such evidence.

Delegitimization narratives: Broader narratives that an election was "rigged" or "stolen" without specific verifiable allegations, making them difficult to directly refute because there is no specific claim to examine.

The 2020 US election produced an unprecedented volume of results misinformation, discussed in detail in Section 15.4.


Section 15.3: The 2016 US Election — What We Know and What We Don't

The Internet Research Agency Operations

The Internet Research Agency (IRA), a St. Petersburg-based company funded by Yevgeny Prigozhin and connected to Russian state intelligence, conducted a multi-year influence operation targeting US political discourse. The operation was extensively documented in the Senate Intelligence Committee's Report on Russian Interference in the 2016 Election (Volume 2, 2019), the Mueller Report, and independent research commissioned by the Senate from New Knowledge (a report authored by Renée DiResta and colleagues) and the Oxford Internet Institute.

The IRA's operations were sophisticated, sustained, and specifically targeted. Key documented activities include:

Social media account operations: The IRA created thousands of fake American personas across Facebook, Instagram, Twitter, YouTube, Reddit, Tumblr, Pinterest, and Google+. These personas posed as authentic American political activists, community organizers, gun rights advocates, Black Lives Matter supporters, conservative Christians, and other identity groups. The goal was not simply to promote Russian interests but to amplify existing divisions and erode trust in democratic institutions.

The racial dimension: A crucial finding from the Senate Intelligence Committee's independent research was that the IRA's most extensive targeting was of Black American communities. The IRA created large, authentic-feeling Facebook pages and Instagram accounts specifically targeting Black Americans — amplifying legitimate grievances about police violence, racial injustice, and voter suppression. This content was designed to depress Black voter turnout in 2016 by generating frustration with the Democratic Party and encouraging third-party voting or non-participation, rather than by promoting Republican candidates.

Scale: By the time Facebook testified before Congress in late 2017, the company estimated that approximately 126 million Americans may have seen IRA-produced content on Facebook alone. On Instagram, roughly 20 million accounts followed IRA-controlled accounts. These numbers require context: reach does not equal influence, and social media exposure is typically measured in seconds.

Event organization: The IRA organized real-world rallies and protests in the United States, including dueling protests outside a Houston Islamic center in May 2016 — both sides mobilized by IRA-run Facebook pages — and events in New York City and elsewhere. Real Americans attended events organized by people they believed were fellow citizens.

The actual impact on voting behavior: This is where certainty breaks down sharply. Multiple social scientists who examined the IRA data have concluded that the measurable effect on voting behavior was small, though this is not the same as zero. A study by Hunt Allcott, Levi Boxell, Jacob Conway, Matthew Gentzkow, Michael Thaler, and David Yang found that news consumption from fake news sites was concentrated among a small fraction of politically engaged users who were unlikely to change their votes. The IRA's targeting, while sophisticated, paled compared to domestic partisan media in scale and reach.

Cambridge Analytica

Cambridge Analytica, a data analytics firm associated with Republican megadonors Robert Mercer and Steve Bannon, harvested the personal data of approximately 87 million Facebook users without their consent, using a personality quiz app created by researcher Aleksandr Kogan. The company claimed to use this data to develop psychographic profiles that could be used to target political messages to persuadable voters.

The actual efficacy of Cambridge Analytica's methods has been substantially disputed. Multiple data scientists who examined the company's claims concluded they were significantly overstated — in effect, marketing sold to clients as science. The company's techniques were not significantly different from the standard micro-targeting campaigns already performed using voter file data. The breach of Facebook's data policies was real and serious; the claimed psychographic revolution in political persuasion was largely marketing mythology.

Domestic Hyperpartisan Sites

While attention focused on foreign interference, research documented a substantial ecosystem of domestic hyperpartisan websites producing and distributing false or misleading political content. Sites like True Pundit, the Gateway Pundit, and dozens of smaller operations produced fabricated or heavily distorted stories that were amplified by mainstream conservative media figures and shared widely on social media.

A 2018 study by Yochai Benkler, Robert Faris, and Hal Roberts, published as the book "Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics," traced information flows using a dataset of 1.25 million stories. They found that right-wing media had developed a distinct, insular ecosystem with Fox News at its center, in which hyperpartisan sites and mainstream conservative media mutually reinforced each other in a way that had no equivalent on the left. This ecosystem, they argued, was more consequential for misinformation spread than Russian interference.

What the Research Does and Does Not Show

The 2016 election produced an enormous research literature. A careful reading reveals:

  • Russian IRA operations were real, sophisticated, and specifically designed to exploit racial and political divisions.
  • The measurable direct impact on voting behavior was likely small relative to domestic factors.
  • Cambridge Analytica's methods were substantially less effective than claimed.
  • Domestic hyperpartisan media was a larger driver of political misinformation than foreign operations.
  • The experience revealed significant vulnerabilities in social media platform governance and democratic information ecosystems.

Section 15.4: The "Big Lie" and January 6th

The Stolen Election Narrative

In the months before the November 2020 presidential election, Donald Trump and allied media began laying the groundwork for claiming the election would be fraudulent. This pre-emptive delegitimization is documented in hundreds of Trump statements and tweets, as well as in coordinated messaging from allied media figures.

After the election, as it became clear that Joe Biden had won a substantial Electoral College majority and a popular vote margin of over 7 million votes, Trump and allied actors deployed a sequence of fraud claims with specific characteristics:

Geographic specificity with shifting targets: Claims focused first on Nevada, then Pennsylvania, then Georgia, then Arizona, then Michigan, and then multiple states simultaneously. When claims about a specific county or precinct were investigated and disproven, attention shifted rather than the narrative being updated.

Technical complexity as epistemic cover: Many claims were framed in technical language — Dominion Voting Systems' "algorithms," "weighted race features," "adjudication," "fractional votes" — that sounded precise but was either fabricated or represented normal vote-counting processes.

Affidavit-based claims: Thousands of affidavits were collected from poll workers and voters claiming to have observed irregularities. Courts examined many of these claims and consistently found them insufficient — often because affiants misunderstood normal procedures they had observed, or because claims were not supported by direct evidence.

The Judicial Record

The legal record is unusually clear. Over 60 lawsuits challenging the 2020 election results were filed in state and federal courts. The outcomes:

  • Courts dismissed or rejected all but one of these cases, whether on procedural grounds or on the merits.
  • Cases were rejected by judges appointed by Republican presidents, Democratic presidents, and Trump himself.
  • The Supreme Court declined to hear cases brought by Texas and other Republican state attorneys general.
  • Federal judges specifically commented on the lack of evidence in multiple decisions. Judge Stephanos Bibas, appointed by Trump, wrote: "Charges require specific allegations and then proof. We have neither here."
  • Judges appointed by Trump wrote opinions rejecting specific claims that Trump allies were simultaneously promoting publicly.

No court found that fraud sufficient to change the election outcome occurred. Investigations by Republican-led election officials in contested states — Georgia Secretary of State Brad Raffensperger, Arizona Governor Doug Ducey — confirmed Biden's victories.

How the Narrative Spread

The stolen election narrative spread through a specific media ecosystem. Research by Yochai Benkler and colleagues at the Berkman Klein Center traced the narrative's amplification through Fox News, Newsmax, One America News Network, and a network of hyperpartisan websites and social media influencers. The narrative received legitimacy from elected Republican officials — US senators, House members, and state legislators — who amplified it on social media and in public statements.

Sidney Powell, Rudy Giuliani, and other figures made specific claims in press conferences that were not subsequently supported in court filings, because attorneys who file false claims face sanctions but public statements are protected. This created a dual-channel operation: dramatic public claims for media amplification, vague or withdrawn assertions in legal filings.

January 6th and Its Aftermath

The January 6th, 2021 attack on the US Capitol by a crowd of thousands who had gathered for a rally where President Trump spoke is the most documented domestic consequence of sustained political misinformation in American history. The House Select Committee's investigation, documented in a series of televised public hearings and a comprehensive final report, established:

  • The attack was not spontaneous but preceded by weeks of organizing by militia groups and other extremists who explicitly discussed disrupting the certification of electoral votes.
  • Participants cited the stolen election narrative as their motivation.
  • President Trump and his allies were aware of the potential for violence and took no timely action to stop it.
  • The attack interrupted but did not prevent the certification of Biden's Electoral College victory.

Subsequent polling found that a substantial minority of Americans — depending on the poll, between 25% and 40% — continued to believe the 2020 election was stolen. This belief correlates strongly with Republican Party identification and conservative media consumption. The persistence of this belief illustrates one of the central challenges of political misinformation research: comprehensive judicial and administrative rejection does not necessarily update beliefs for audiences embedded in alternative information ecosystems.


Section 15.5: Political Deepfakes and Synthetic Media in Elections

Documented Cases

Synthetic media — audio, video, and images generated or substantially manipulated using artificial intelligence — presents distinct challenges in electoral contexts. It is important to distinguish between:

Shallow fakes (cheap fakes): Low-technology manipulations using standard video editing — slowing footage to suggest intoxication, cutting audio to reverse meaning, splicing different statements together. These have been the most common form of political media manipulation in documented cases.

Deepfakes: AI-generated or AI-manipulated content, primarily using deep learning techniques such as generative adversarial networks (GANs). Truly high-quality deepfakes require significant computational resources and expertise, limiting their current prevalence, but costs are declining rapidly.

Documented cases include:

  • A 2020 video of Belgian Prime Minister Sophie Wilmès, circulated by the climate activist group Extinction Rebellion Belgium, featuring a fabricated speech linking COVID-19 to climate change. The video was labeled as artificial by its creators but was widely shared without that context.
  • A video of Gabon's President Ali Bongo delivering a New Year's address in 2019, alleged to be a deepfake by opponents who claimed he was incapacitated; experts were divided on whether manipulation occurred.
  • Multiple clips of US politicians manipulated to suggest statements or impairment, including a 2019 clip of House Speaker Nancy Pelosi slowed to make her speech appear slurred (a shallow fake, not a deepfake).
  • In the 2024 New Hampshire primary, a deepfake robocall using an AI-generated voice mimicking President Biden told Democrats not to vote in the primary.

Potential Impacts

Scholars have identified several mechanisms through which synthetic media could affect elections:

Direct deception: A convincing deepfake of a candidate making damaging statements, released days before an election, could influence voter decisions before it is debunked.

The liar's dividend: Even when deepfakes are not widely deployed, their existence allows public figures to deny authentic embarrassing recordings or statements as deepfakes. This is arguably the most significant current impact: the concept of deepfakes is weaponized to create doubt about authentic evidence.

Emotional manipulation: Synthetic audio and video are particularly effective at conveying emotional content and may be more persuasive than text-based misinformation.

Cross-platform amplification: A synthetic video clip can be shared on mobile messaging apps before platforms can assess and label it.

Detection Challenges

Detection of deepfakes remains technically challenging. Commercial deepfake detection tools have demonstrated high error rates in adversarial conditions, and the arms race between generation and detection tends to favor generation. Platform-level detection is further complicated by:

  • Compression artifacts from video processing degrade detection algorithms.
  • Videos shared via WhatsApp or other messaging apps are often compressed and passed through multiple hands, making provenance difficult to establish.
  • Detection algorithms trained on existing deepfakes may fail on new generation techniques.

Rather than detection after the fact, partnerships among news organizations, technology companies, and civil society groups have pursued provenance standards — technical metadata embedded in authentic content at the time of creation. The Content Authenticity Initiative (CAI) and the Coalition for Content Provenance and Authenticity (C2PA) are developing open technical standards for this approach.
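The provenance idea can be sketched in a few lines. This is a deliberately minimal illustration, not the C2PA design: the real standard binds certificate-based signatures to rich manifests of edit history, whereas here an HMAC with a hypothetical shared key stands in for the signing machinery. The point it demonstrates is the core property: any alteration of the content after signing breaks verification.

```python
# Minimal sketch of the provenance approach behind standards like C2PA:
# bind a signature to a content hash at creation time, verify it later.
# The HMAC and key below are illustrative stand-ins for the certificate-based
# signatures the real standard specifies.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-device-key"  # stand-in for a camera/app signing key

def attach_provenance(content: bytes, creator: str) -> dict:
    """Create a provenance record binding creator metadata to a content hash."""
    digest = hashlib.sha256(content).hexdigest()
    claim = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check the claim's signature, then check the content still matches its hash."""
    expected = hmac.new(SIGNING_KEY, record["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # claim was tampered with or forged
    return json.loads(record["claim"])["sha256"] == hashlib.sha256(content).hexdigest()

video = b"...original frames..."
rec = attach_provenance(video, creator="NewsOrg Camera 7")
print(verify_provenance(video, rec))                   # True: content intact
print(verify_provenance(b"...edited frames...", rec))  # False: content altered
```

The practical limitation the section goes on to imply also shows up here: provenance only helps for content signed at creation, and metadata is routinely stripped when media passes through messaging apps and re-uploads.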


Section 15.6: Voter Suppression Information Operations

Documented Operations

Voter suppression information operations — disinformation campaigns specifically designed to prevent eligible voters from casting ballots — have a documented history in American elections. They typically share several characteristics:

Community targeting: Operations are targeted at specific demographic communities, typically those perceived as supporters of one party. Black, Latino, Native American, and youth communities have been disproportionately targeted in documented US cases.

Plausibility exploitation: The most effective voter suppression disinformation exploits real barriers and legitimate uncertainties. Because many communities have genuine reasons to be uncertain about their eligibility, feel fear around immigration status, or have experienced genuine disenfranchisement, false claims about voting barriers land on prepared ground.

Multi-channel deployment: Operations deploy through robocalls, text messages, fliers, social media posts, and — increasingly — private messaging apps.

The Maryland 2010 gubernatorial election produced the most legally documented case of deliberate voter suppression disinformation. A Republican campaign consultant, Paul Schurick, authorized robocalls sent to African American Democratic voters on Election Day, telling them that Democratic candidate Martin O'Malley had won and they could "relax" and not vote. Schurick was convicted of election fraud.

The 2020 Documented Cases

In 2020, the Election Integrity Partnership tracked voter suppression disinformation in real time, identifying and reporting more than 700 incidents to platforms. Documented cases included:

  • False claims that COVID-19 restrictions had changed voting hours or closed polling places.
  • False information specifically targeting Native American voters about ID requirements in Arizona.
  • False claims on Spanish-language social media that mail-in ballots requested in California could be dropped at locations other than official drop boxes.
  • Images of fake official-looking fliers posted in predominantly Black neighborhoods with incorrect voting dates.

Research by the Brennan Center for Justice and other organizations has documented the overlap between historically disenfranchised communities and targeted disinformation recipients, suggesting systematic rather than opportunistic targeting.


Section 15.7: Global Election Interference Patterns

Brazil 2018 and 2022

Brazil's elections in 2018 and 2022 demonstrated how WhatsApp-based disinformation can shape electoral outcomes in countries where the platform is the primary news medium. Bolsonaro's 2018 campaign deployed messaging through WhatsApp Business accounts and bulk-messaging services — which can broadcast to far larger audiences than personal accounts — to spread campaign content and, researchers documented, misinformation.

The Brazilian Electoral Court's investigation found that Bolsonaro's campaign financed coordinated messaging operations in violation of campaign finance law. The content included fabricated quotes attributed to Workers' Party (PT) candidate Fernando Haddad, false claims about PT education policy involving gay "kit" materials for schools, and content claiming PT had a secret plan to nationalize churches.

In 2022, after Bolsonaro lost to Luiz Inácio Lula da Silva, he deployed a stolen election narrative with deliberate parallels to the US 2020 experience, including claims about electronic voting machine fraud. Bolsonaro allies stormed government buildings in Brasília on January 8, 2023, with deliberate echoes of the January 6th US attack.

Brexit

The 2016 UK referendum on European Union membership was accompanied by substantial misinformation from both sides, but most documented cases involved Leave campaign content. The infamous "£350 million per week to the EU" claim on the Vote Leave campaign bus — a figure that did not account for the UK's rebate and other adjustments — was widely acknowledged as misleading but highly effective. The associated pledge to redirect that money to the NHS was disavowed by prominent Leave figure Nigel Farage the morning after the referendum.

Russian state media and IRA operations targeted Brexit with content designed to amplify divisions, though the UK's distinct media environment — including strong broadcast impartiality regulation of the BBC, ITV, and Channel 4 — likely limited their reach relative to social media-dominated information environments.

France 2017

In the final hours before the second round of France's 2017 presidential election, a massive document dump — 9 gigabytes of files from Emmanuel Macron's campaign, mixed with fabricated documents — was released on 4chan before spreading to far-right media. The "MacronLeaks" operation bore markers consistent with Russian state actors; several security researchers linked it to the same Fancy Bear (APT28) group responsible for the DNC hack, though French authorities were more cautious in their attribution.

French electoral law mandates a blackout on campaign coverage beginning the eve of the vote, and media compliance limited the dump's impact in mainstream French coverage. Macron won decisively. The case illustrates how legal and institutional frameworks can limit the impact of information operations that would be more damaging in less regulated media environments.

The Philippines

The Philippines under Rodrigo Duterte's presidency (2016-2022) represented a case of incumbent government using coordinated disinformation to suppress opposition and discredit critics. Research by Rappler, the investigative news outlet whose editor Maria Ressa won the Nobel Peace Prize in 2021, documented systematic harassment campaigns against journalists, coordinated inauthentic behavior on Facebook, and government-aligned troll farms producing pro-Duterte content.

The Duterte case illustrated how election misinformation tools can be turned into governance tools once power is secured, and how platforms operating in countries with limited regulatory capacity and high violence risk face difficult content moderation decisions.


Section 15.8: Election Security vs. Election Integrity Narratives

Technical Security

Election security in the technical sense refers to the cybersecurity and physical security of voting infrastructure: voting machines, voter registration databases, election management systems, and the networks connecting them. This is a legitimate and important concern.

The Cybersecurity and Infrastructure Security Agency (CISA), created in 2018, has worked with state and local election officials to assess and improve the security of election infrastructure. A joint statement issued through CISA shortly after the 2020 election concluded: "The November 3rd election was the most secure in American history." The statement was signed by federal cybersecurity officials, state election officials' associations, and election industry representatives, and it specifically addressed the technical security of the election infrastructure.

Legitimate technical security concerns exist. Voting machines in many jurisdictions run outdated operating systems. Some jurisdictions lack voter-verified paper audit trails. Network connections between components of election management systems have been identified as potential vulnerabilities. The Senate Intelligence Committee found that Russian actors had sought to access election infrastructure in all 50 states in 2016, succeeding in accessing some voter registration systems (though there is no evidence they altered any data).

The Integrity Narrative

"Election integrity" as deployed in American political discourse after 2020 operates differently from technical security. The phrase became associated with the stolen election narrative and with policy efforts — stricter voter ID requirements, limitations on mail-in voting, reduction of drop boxes, purging of voter rolls — justified by claims of widespread fraud that courts and election officials found no evidence for.

The distinction matters analytically: claims made under the banner of "election integrity" often cannot be evaluated on technical security grounds because they pertain to the legitimacy of results rather than the security of infrastructure. Courts and election officials who evaluated specific fraud claims found no supporting evidence, but the narrative is structured to treat rejection by courts and officials as itself evidence of a cover-up.

CISA Director Chris Krebs was fired by President Trump shortly after signing the secure election statement. His firing illustrates the political vulnerability of technical security officials who contradict integrity narratives.


Section 15.9: Protecting Democratic Discourse

Platform Policies and Their Limitations

Major social media platforms have developed increasingly elaborate policies for handling election misinformation, driven by congressional pressure, advertiser concerns, and reputational considerations.

Facebook/Meta has deployed:

  • Labels on posts about voting that direct to official information.
  • Removal of networks of fake accounts engaging in coordinated inauthentic behavior.
  • Reduced algorithmic amplification of political content (a policy that critics on both left and right have objected to).
  • Fact-checker partnerships that apply labels to rated content.

Twitter/X has significantly rolled back election-related policies under Elon Musk's ownership, removing many labels, reducing trust and safety staff, and restoring accounts previously banned for election misinformation.

YouTube has restricted election denialism content but faces challenges around content that makes true statements out of context or frames claims as questions rather than assertions.

Research on the effectiveness of platform labels suggests modest but real effects on content sharing. Labels reduce sharing of labeled content but can produce "implied truth effects" — unlabeled false content may be perceived as more reliable because it lacks labels.

Legal Responses

The US legal environment for addressing political misinformation is constrained by First Amendment protections. Direct legal restrictions on false political speech face strict scrutiny and rarely survive. However, several legal tools exist:

Election law: False statements about voting logistics (dates, times, locations) may violate federal or state election laws. The Maryland robocall case resulted in a criminal conviction.

Federal Election Commission: Campaign advertising is regulated; foreign contributions to campaigns are prohibited; coordinated behavior between campaigns and PACs has limits.

Civil litigation: Dominion Voting Systems and Smartmatic brought defamation suits against Fox News, One America News Network, Newsmax, Rudy Giuliani, and Sidney Powell. Dominion's suit against Fox News ended in a $787.5 million settlement, and several related cases remain ongoing. Separately, Georgia election workers Ruby Freeman and Shaye Moss won a $148 million jury verdict against Giuliani for his false claims about them.

Foreign agent registration: Undisclosed foreign nationals engaging in political advertising may violate the Foreign Agents Registration Act, as some IRA-linked operations did.

Civil Society Interventions

Civil society organizations have developed multiple interventions:

Fact-checking networks: The International Fact-Checking Network (IFCN), a project of the Poynter Institute, accredits fact-checking organizations globally. Members must adhere to standards of non-partisanship, transparency, and correction policies. Studies show fact-checking has modest effects on belief correction.

Election monitoring: Civil society organizations including Common Cause and the Brennan Center for Justice, alongside the federal Election Assistance Commission, provide resources to election officials and voters about official voting procedures, countering misinformation with accurate information.

Prebunking campaigns: Building on psychological inoculation theory (see Chapter 16), organizations including the Cambridge Social Decision-Making Lab have developed "prebunking" videos that teach manipulation techniques without exposing viewers to the misinformation itself. These have shown promise in large-scale online experiments conducted with YouTube.

Digital literacy programs: Programs like MediaWise (Poynter) and the News Literacy Project work with young people to develop media evaluation skills, often teaching heuristics such as SIFT (Stop, Investigate the source, Find better coverage, Trace claims to their original context).


Callout Boxes

METHODOLOGICAL NOTE: Measuring Misinformation Exposure

Studies of misinformation reach and influence face a common challenge: the counterfactual is impossible to observe. We can document how many accounts saw a piece of IRA content, but we cannot know what they would have believed without seeing it, or how it interacted with all the other political content they consumed. This is why researchers speak carefully about "exposure" rather than "impact," and why causal claims about misinformation effects require experimental designs or natural experiments.

KEY CASE: The Dominion Defamation Suits

When Dominion Voting Systems sued Fox News, Sidney Powell, and others for defamation, the discovery process produced internal Fox News communications showing hosts and executives privately doubting the claims they broadcast publicly. Dominion alleged that Fox knew the claims were false and broadcast them anyway to retain an audience that wanted to hear them. Fox settled for $787.5 million without admitting liability. The case illustrates how defamation law can function as a corrective mechanism when public figures make specific, verifiable false claims about identifiable persons or companies.


Key Terms

  • Disinformation: False information deliberately created and spread to deceive.
  • Election integrity narrative: Political framing that challenges election legitimacy without specific verifiable evidence of fraud.
  • IRA (Internet Research Agency): Russian state-connected organization that conducted social media influence operations targeting US elections and political discourse.
  • Coordinated inauthentic behavior: Platform policy term for networks of fake accounts that work together to manipulate information ecosystems.
  • Prebunking: Inoculation-based approach to misinformation resilience that teaches manipulation techniques before exposure.
  • Synthetic media: Audio, video, or images artificially generated or substantially manipulated using AI tools.
  • Cheap fake / shallow fake: Low-technology media manipulation using standard editing software rather than AI generation.
  • Liar's dividend: The phenomenon whereby the existence of deepfake technology allows public figures to deny authentic recordings as potentially fake.
  • CISA: Cybersecurity and Infrastructure Security Agency, the US federal agency responsible for election infrastructure security.
  • Voter suppression disinformation: False information deliberately targeting eligible voters to discourage or prevent their participation.

Discussion Questions

  1. If political misinformation is more prevalent on one side of the partisan spectrum, should platform enforcement be asymmetric, or does asymmetric enforcement create its own problems for democratic legitimacy? What evidence would you need to resolve this question?

  2. Courts rejected all meaningful legal challenges to the 2020 US election, including challenges before judges appointed by the losing candidate. Yet a significant minority of Americans continue to believe the election was stolen. What does this tell us about the role of judicial authority in democratic epistemics, and what alternative mechanisms might be more effective?

  3. The IRA's most extensive content targeting was of Black American communities, amplifying legitimate grievances. How should researchers, policymakers, and platform moderators think about the ethics of moderating this kind of content — which was both authentic-feeling and manipulative?

  4. The "liar's dividend" suggests that the threat of deepfakes may be more consequential than their actual deployment. Do you agree? What evidence would support or challenge this claim?

  5. Compare the information environments of the 2020 US election, the 2022 Brazilian election, and the 2017 French election. What institutional and media structure differences seem most important in explaining the different outcomes?

  6. If you were designing a platform policy for election misinformation, what types of content would you restrict, label, or leave unmoderated? How would you handle the problem of defining "misinformation" in contested political contexts?


Summary

Political misinformation represents one of the most consequential forms of information disorder because it directly threatens the legitimacy of democratic governance. This chapter has traced the phenomenon from definitional debates about what counts as political misinformation through specific documented cases — the IRA's 2016 operations, the "Big Lie" narrative, Brazilian WhatsApp disinformation, the Philippines' troll farms — to cross-cutting analysis of typologies, detection challenges, and intervention approaches.

Several themes emerge from this survey. First, the most consequential political misinformation typically combines sophisticated targeting with exploitation of existing social divisions; it does not create new conflicts but amplifies existing ones. Second, the domestic-foreign distinction, while analytically useful, obscures how foreign operations interact with and depend on domestic partisan media ecosystems. Third, platform policies, legal mechanisms, and civil society interventions all show some effectiveness but none is sufficient alone. Fourth, the persistence of false beliefs after comprehensive official rejection illustrates that evidence alone is insufficient to change beliefs embedded in tribal identity.

The next chapter examines scientific misinformation — climate, vaccines, GMOs — where similar dynamics operate but the reference point is scientific consensus rather than election outcomes.