Chapter 32: Election Interference: Case Studies and Countermeasures

Learning Objectives

By the end of this chapter, students will be able to:

  1. Distinguish between the major categories of election interference: infrastructure hacking, voter roll manipulation, influence operations, and domestic vs. foreign actors.
  2. Analyze the documented operations targeting the 2016 and 2020 US elections, distinguishing between what is well-evidenced and what remains contested.
  3. Explain European democracies' experiences with election interference and what lessons they offer.
  4. Describe the patterns of election-related disinformation in the Global South, including WhatsApp-based campaigns, domestic troll armies, and platform-specific dynamics.
  5. Distinguish between election security (the technical integrity of voting systems) and election integrity narratives (political claims about electoral legitimacy).
  6. Evaluate the effectiveness and limitations of platform responses to election disinformation.
  7. Describe the legal and regulatory frameworks governing election-related disinformation in different jurisdictions.
  8. Design evidence-based countermeasures appropriate to specific election interference threats.

Section 32.1: Election Interference Typology

What Is Election Interference?

"Election interference" is used to describe a wide range of activities, from sophisticated state-sponsored cyberoperations to domestic political spin. Analytical precision requires distinguishing between categories that have very different causes, mechanisms, and appropriate countermeasures.

Hacking operations involve unauthorized access to computer systems with the goal of compromising election-related data or processes. Three subcategories matter:

  • Voter registration system hacking: Unauthorized access to the databases that determine who is eligible to vote, what their correct address is, and whether they have voted. Successful compromise can enable targeted voter roll purges, false voter roll information, or data theft.
  • Election management system compromise: Access to the systems that configure voting machines, tally votes, and report results. This category represents the most direct threat to the actual vote count but is also the most difficult to execute given physical security measures.
  • Campaign and party infrastructure hacking: Unauthorized access to campaign email systems, servers, and communications — targeted not to alter votes but to obtain materials for influence operations (hack-and-leak) or to gain intelligence on campaign strategy.

Influence operations targeting elections seek to shape how voters understand the candidates, the issues, and the election process itself. These operations range from traditional advertising (legal if disclosed) to covert state-sponsored manipulation (illegal under various statutes). Subcategories include:

  • Targeted social media disinformation campaigns
  • Fake news website networks producing election-related false content
  • Voter suppression messaging (claiming wrong election dates, false eligibility requirements)
  • Candidate impersonation and fabricated statements
  • Coordinated inauthentic amplification of authentic partisan content

Electoral administration disinformation specifically targets public understanding of how elections work — claiming that specific procedures are fraudulent, that certain polling places will be closed, that voting machines are unreliable, or that election officials are corrupt. This category is distinct from traditional political disinformation in that it attacks confidence in the democratic process rather than merely in specific candidates.

The Foreign-Domestic Distinction

Election interference is often discussed primarily as a foreign threat, but research consistently identifies domestic actors as responsible for the largest volume of election disinformation in most democracies. The foreign-domestic distinction matters for several reasons:

Legally: Most democratic countries have laws specifically prohibiting foreign nationals from spending money to influence elections. The Foreign Agents Registration Act (FARA) in the United States requires disclosure by agents of foreign principals. Equivalent laws exist in most European democracies. Domestic election disinformation, by contrast, is protected political speech in most jurisdictions.

Analytically: Domestic election disinformation is typically far larger in volume than foreign operations, draws on intimate cultural knowledge of target audiences, and exploits genuine domestic political grievances. Research on the 2016 US election found that domestic hyperpartisan media produced more election disinformation by volume than foreign-origin sources.

For countermeasures: Responses to foreign interference (attribution, sanctions, diplomatic pressure, platform enforcement) differ fundamentally from responses to domestic disinformation (media literacy, independent journalism, regulatory requirements for political advertising disclosure).

Callout Box: The "Election Security" vs. "Election Integrity" Semantic Distinction

In US political discourse, "election security" and "election integrity" have acquired different political meanings. "Election security" is primarily used by election officials, cybersecurity professionals, and researchers to describe the technical systems ensuring votes are accurately recorded and counted. "Election integrity" has increasingly been used in political discourse — particularly after 2020 — to frame claims of widespread fraud that lack evidentiary support. Understanding this semantic distinction is important for students analyzing media coverage of elections: the two phrases often signal membership in different epistemic communities with different evidential standards.


Section 32.2: The 2016 US Election

Russian Hacking Operations

The Russian intelligence community's operations targeting the 2016 US election combined cyberoperations with influence operations in an integrated campaign. The hacking component was conducted by the GRU (Russian military intelligence), while the influence operations were conducted by the IRA (covered in Chapter 31) and by GRU's information operations units.

The GRU hacking operation targeted multiple organizations. The most consequential compromises were:

The Democratic National Committee (DNC): Two separate Russian groups penetrated the DNC network — APT29 (Cozy Bear), later attributed to the SVR, beginning in 2015, and the GRU's APT28 (Fancy Bear) in the spring of 2016. The intrusions went undetected for months. Cybersecurity firm CrowdStrike, called in to investigate unusual network activity, identified them in June 2016. Stolen materials — primarily internal communications including emails and opposition research — were provided to WikiLeaks and released beginning in July 2016.

John Podesta: The chairman of Hillary Clinton's presidential campaign was compromised through a targeted spear-phishing email in March 2016. His Gmail account's contents — approximately 50,000 emails — were released by WikiLeaks in October 2016, timed to coincide with the "Access Hollywood" tape revelations in an apparent attempt to manage news cycles.

State election infrastructure: The Senate Intelligence Committee's Volume 1 report (2019) documented that Russian actors scanned election infrastructure in all 50 states and successfully gained access to some state voter registration databases. In Illinois, the state board of elections database was breached, potentially exposing personal data on approximately 200,000 voters. The report found no evidence that actual vote tallies were altered, but documented the breadth of Russian probing of election systems.

The IRA Social Media Operation

The IRA's operations targeted at the 2016 election are covered in detail in Chapter 31. Key points for understanding 2016 specifically:

The IRA's American operations were underway well before any specific election-related targeting began — the operation was designed to influence American political culture broadly, not just the 2016 election specifically. The election-focused phase intensified beginning in 2016 but built on two years of audience development on Facebook, Instagram, and Twitter.

The IRA's election-specific content had several distinct streams:

  • Enthusiastic pro-Trump content targeting conservative audiences
  • Voter suppression content targeting Black and progressive audiences (encouraging votes for Jill Stein, claiming Clinton's record disqualified progressive support, promoting electoral abstention)
  • Anti-Clinton content emphasizing alleged corruption, health concerns, and character issues
  • Amplification of authentic hyperpartisan content from domestic sources

The Mueller Report concluded that these operations constituted a "sweeping and systematic" interference in the 2016 election. What remains contested is the electoral significance of these operations — whether they changed outcomes in states decided by small margins.

Cambridge Analytica and Data Analytics

Cambridge Analytica, a political data analytics firm, became a significant 2016 controversy. The firm obtained Facebook data on approximately 87 million users — most of them American — through a researcher's psychological profiling app, data obtained in violation of Facebook's terms of service. Cambridge Analytica claimed to use this data for highly targeted political advertising on behalf of the Trump campaign.

The Cambridge Analytica story is important for what it revealed about data ecosystem vulnerabilities rather than for its specific electoral effects. Subsequent academic analysis has found limited evidence that Cambridge Analytica's psychographic targeting was uniquely effective — the firm's claims about its capabilities appear to have significantly exceeded its demonstrated performance. However, the revelations about Facebook's data practices and the vulnerability of user data to political manipulation sparked significant regulatory attention — energizing enforcement of Europe's General Data Protection Regulation (GDPR), which had been adopted in 2016 and entered into application shortly after the story broke — and fueled ongoing US Congressional debate about platform data practices.

Domestic Hyperpartisan Media

The 2016 US information environment included a large domestic hyperpartisan media ecosystem that operated independently of (though sometimes in interaction with) foreign influence operations. Researchers Benkler, Faris, and Roberts (Network Propaganda, 2018) documented through systematic content analysis that right-wing media was more insulated from mainstream fact-checking, more likely to share false or misleading content, and less likely to correct errors than left-leaning or centrist media.

This domestic disinformation ecosystem was responsible for far more disinformation by volume than foreign operations. Sites like Breitbart, InfoWars, The Daily Caller, and numerous smaller outlets produced a constant stream of false and misleading content targeting Clinton and Democrats. This domestic production interacted with IRA content in ways that made attribution of specific effects to foreign vs. domestic sources essentially impossible.

What Was Decisive vs. Marginal?

The question of what determined the 2016 election outcome is genuinely contested among political scientists. Multiple factors had larger demonstrable effects than any specifically attributable foreign interference:

  • The FBI Director's letter about emails (released 11 days before the election)
  • The candidates' debate performances
  • Economic conditions in key Midwestern states
  • Clinton campaign resource allocation decisions
  • Structural factors (party fundamentals, candidate favorability)

This does not mean Russian operations were inconsequential — in an election decided by approximately 80,000 votes across three states (Wisconsin, Michigan, Pennsylvania), even marginal effects can be decisive. But responsible analysis requires acknowledging the genuine uncertainty about effects and resisting both the claim that Russian interference "hacked the election" and the claim that it had no significance.


Section 32.3: The 2020 US Election

The "Big Lie" as Election Interference

The 2020 US presidential election presents a distinct and analytically novel form of election interference: a large-scale domestic information operation by a losing candidate and allied political actors claiming, without credible evidence, that the election had been stolen through widespread fraud.

The claims — that voting machines had been manipulated, that mail-in ballots were fraudulently tabulated, that hundreds of thousands of illegal votes had been cast — were litigated in more than 60 lawsuits and rejected in virtually every case, including by judges appointed by the claimants' own party. The Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) issued a statement on November 12, 2020, calling the 2020 election "the most secure in American history." Trump's own Attorney General, William Barr, told the Associated Press he saw no evidence of fraud sufficient to change the election outcome.

The significance of the "Big Lie" for students of election interference is several-fold:

First, it demonstrates that election interference through disinformation does not require foreign actors — the most consequential election disinformation operation of the 2020 cycle was conducted by domestic political actors with significant resources and institutional support.

Second, the "Big Lie" represents a case where electoral disinformation achieved significant political effects on the information environment: majorities of Republican voters came to believe the election had been stolen, and this belief ultimately led to the January 6, 2021, attack on the US Capitol.

Third, the case illustrates the challenge of countering election disinformation when the source is a sitting president and the claims are amplified by significant domestic political infrastructure.

Foreign Interference in 2020

Foreign interference in the 2020 election was documented but less operationally significant than 2016. Key documented operations:

Iranian influence operation: The Intelligence Community Assessment on 2020 election interference found that Iran conducted a "multi-faceted campaign" including sending threatening emails to Democratic voters, creating a fake "Proud Boys" website, and producing a video claiming to show election fraud. The operation was detected and partially countered.

Russian interference: The IC Assessment found Russia conducted a "steady campaign" of influence operations during 2020, primarily focused on undermining confidence in the integrity of the election and denigrating Biden. Russian state media extensively amplified Biden-related disinformation. However, the operation was assessed as less aggressive than 2016.

China: The IC Assessment found China "considered but did not deploy" influence operations targeting the 2020 election, assessing that the costs outweighed the benefits.

CISA and Election Security Infrastructure

The 2020 election saw significantly enhanced election security infrastructure compared to 2016, reflecting the lessons of Russian operations. CISA's Election Infrastructure Information Sharing and Analysis Center (EI-ISAC) provided threat intelligence to all 50 states and 2,500+ local election offices. Paper ballot requirements had expanded significantly. Post-election audit protocols were more robust.

CISA Director Christopher Krebs's statement that the 2020 election was "the most secure in American history" was both technically accurate (given the enhanced security measures) and politically consequential — Krebs was fired by President Trump shortly after making the statement, illustrating the political costs of accurate election security communication in a highly politicized environment.


Section 32.4: Europe's Experiences

The 2017 French Presidential Election

The 2017 French presidential election is analyzed in detail in Case Study 1 of this chapter. Key elements: the GRU-attributed hack of Emmanuel Macron's En Marche campaign, the timed release of documents 44 hours before the election (within France's pre-election media blackout period), and France's relatively successful counter-response. The French case demonstrates that preparation, media coordination, and rapid prebunking can significantly mitigate hack-and-leak operations' effects.

The German Bundestag Hack

German federal parliament (Bundestag) networks were comprehensively penetrated by Russian GRU hackers (attributed to APT28) beginning in May 2015. Approximately 16 gigabytes of data were stolen from multiple Bundestag members' accounts, including members of the parliamentary committee on European affairs and data from Chancellor Angela Merkel's constituency office. Unlike the US DNC hack, the German data was not publicly released through WikiLeaks or other channels — suggesting the primary purpose may have been intelligence collection rather than influence operations.

The German government's response, including a major overhaul of Bundestag network security and eventual public attribution of the attack to Russia, was relatively measured in tone — reflecting Germany's complex economic relationship with Russia (Nord Stream 2 pipeline) and the difficulty of taking strong public positions without triggering escalation.

Brexit Referendum Information Environment

The 2016 UK Brexit referendum preceded the US election by several months and operated in a similarly turbulent information environment. Key elements:

Domestic disinformation: The "£350 million per week to the NHS" claim (the figure painted on the official Vote Leave campaign bus) was extensively fact-checked and found to be false — it represented the UK's gross contribution to the EU rather than net contribution, and the claim that Brexit would redirect this money to the NHS was misleading. Nevertheless, it proved highly effective as a campaign message.

Russian influence operations: The UK Parliament's Intelligence and Security Committee's July 2020 report ("the Russia Report") found that the UK government "actively avoided" investigating Russian interference in the Brexit referendum — neither commissioning nor directing an investigation into potential Russian interference. The report criticized this institutional failure without documenting specific operational details, leaving the question of Russian effects on Brexit genuinely uncertain.

Cambridge Analytica connection: The Vote Leave campaign spent heavily on digital advertising through AggregateIQ, a Canadian firm with documented links to Cambridge Analytica's parent company, SCL. The UK Information Commissioner's Office investigated these connections.

The Brexit case is important because it illustrates how the foreign-domestic distinction breaks down when domestic campaign actors use data practices and targeting techniques that raise the same analytical issues as foreign influence operations.

2019 European Parliament Election

The 2019 European Parliament election occurred against a backdrop of significant concern about coordinated influence operations targeting multiple EU member states simultaneously. The EU East StratCom Task Force documented dozens of specific disinformation narratives targeting the election from pro-Kremlin sources. The election itself proceeded without major documented disruption — suggesting that the combination of elevated awareness, platform enforcement actions, and media literacy efforts had some effect — but the long-term trend of increasing nationalist and Eurosceptic parties in the Parliament raised ongoing questions about the cumulative effects of influence operations on European political culture.


Section 32.5: Global South Patterns

Philippines 2016: Duterte's Troll Army

The 2016 Philippine presidential election produced one of the first extensively documented examples of a domestic political movement deploying troll farm tactics against electoral opponents. Rodrigo Duterte's campaign was supported by a network of paid social media workers — what journalist Maria Ressa and researchers at Rappler documented as a systematic, paid operation to produce and amplify pro-Duterte content and attack opponents online.

The Philippine operation differed from Russian-style troll farms in several analytically significant ways:

  • It was domestically organized and funded, by Duterte's campaign and allied businesspeople
  • It operated on Facebook, which is effectively the primary internet for many Filipinos (with Facebook's "Free Basics" program subsidizing Facebook access specifically)
  • It relied heavily on authentic patriotic enthusiasm mixed with paid coordination — the line between genuine supporters and paid operators was deliberately blurred
  • It introduced tactics that were later widely observed across developing democracies: "keyboard armies," coordinated harassment of critical journalists, and Facebook group-based narrative amplification

Maria Ressa, the Nobel Peace Prize-winning journalist and Rappler founder who documented these operations, subsequently became the target of both legal harassment by the Philippine government and coordinated online attacks. Her case illustrates the connections between electoral influence operations and attacks on journalists who cover them.

Brazil 2018 and 2022: WhatsApp Campaigns

Brazil's electoral cycles of 2018 and 2022 introduced WhatsApp as a major vehicle for electoral disinformation — a pattern with significant implications for democracies with high smartphone penetration and WhatsApp use.

2018: The election of Jair Bolsonaro was preceded by a controversy over what appeared to be coordinated WhatsApp campaigns spreading disinformation about his Workers' Party opponent, Fernando Haddad. Brazilian news outlet Agência Pública documented evidence of large-scale, potentially paid WhatsApp messaging campaigns — funded by businesses allied with Bolsonaro — that spread false information about Haddad through pre-existing WhatsApp group networks. The content included fabricated quotes, false images, and disinformation about Haddad's alleged positions.

The WhatsApp disinformation problem posed specific technical challenges: WhatsApp's end-to-end encryption meant that content could not be monitored by the platform in the same way as public social media posts, making coordinated campaigns far harder to detect and disrupt.
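Because the platform cannot inspect message content, researchers studying Brazil's 2018 campaigns instead joined public, openly advertised WhatsApp groups and tracked how identical media files spread across them. The sketch below illustrates that monitoring approach with invented data; the function names and the exact-hash matching are simplifications (production systems typically use perceptual hashing to catch re-encoded copies of the same image).

```python
import hashlib
from collections import Counter

def media_fingerprint(data: bytes) -> str:
    """Exact-match fingerprint for a media file observed in a public group."""
    return hashlib.sha256(data).hexdigest()

def rank_viral_media(observations):
    """observations: list of (group_id, media_bytes) pairs from monitored groups.

    Returns fingerprints ordered by how many distinct groups shared them --
    a rough proxy for coordinated or viral spread, computed without any
    access to private conversations.
    """
    groups_per_hash = {}
    for group_id, media in observations:
        groups_per_hash.setdefault(media_fingerprint(media), set()).add(group_id)
    counts = Counter({h: len(g) for h, g in groups_per_hash.items()})
    return counts.most_common()

# Invented example: one image appears in three groups, another in only one.
obs = [
    ("g1", b"meme-A"), ("g2", b"meme-A"), ("g3", b"meme-A"),
    ("g1", b"meme-B"), ("g1", b"meme-B"),  # repeated within a single group
]
top = rank_viral_media(obs)
# top[0] is meme-A's fingerprint with a group count of 3
```

The design point is that cross-group reach, not raw message volume, is the signal of interest: a file forwarded once into hundreds of unrelated groups looks far more like coordination than a file posted repeatedly in one.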

2022: Following Bolsonaro's defeat to Lula da Silva, Bolsonaro and allied figures promoted extensive "Big Lie"-style claims about the integrity of Brazil's electronic voting system — an internationally respected system that had operated without credible documented fraud for decades. These claims generated significant social unrest and ultimately the January 8, 2023, riots in which Bolsonaro supporters stormed and vandalized the Brazilian presidential palace, National Congress, and Supreme Court — a set of events with striking parallels to the US January 6, 2021, events.

The Brazil case demonstrates the globalization of "stolen election" narrative templates and the specific vulnerability of closed messaging platforms to coordinated disinformation campaigns that bypass platform content moderation.

India: The BJP IT Cell

India's ruling Bharatiya Janata Party (BJP) operates a sophisticated digital communications operation that includes what critics call "the IT Cell" — a semi-formal network of digital activists, party workers, and allied social media volunteers who coordinate content production and amplification across WhatsApp, Twitter, and Facebook. The IT Cell has been documented by researchers at the Oxford Internet Institute's Computational Propaganda Project and by reporting in The Hindu newspaper.

The Indian case raises particularly complex analytical questions because the IT Cell operates at the boundary between legitimate political communication and coordinated inauthentic behavior: it involves real people with genuine political beliefs, operating in coordination with a political party, producing content that sometimes includes disinformation. The question of when coordinated partisan digital communication becomes election interference has no clean answer in most legal frameworks.

Africa: Kenya and Nigeria

Electoral disinformation in Kenya and Nigeria illustrates patterns common across sub-Saharan Africa:

  • Social media platforms (particularly Facebook and WhatsApp) serving as primary information sources for large urban populations
  • Incitement content — including ethnic and religious disinformation — contributing to real-world violence
  • Domestic political actors as primary producers of election disinformation
  • Significant resource constraints for fact-checking organizations and journalism
  • Platform moderation that is far less robust for local languages than for English

Kenya's 2017 election saw coordinated disinformation campaigns on both sides of the Jubilee-NASA political divide. The election was annulled by the Supreme Court — in part based on irregularities in the electronic transmission of results — and a repeat election boycotted by the opposition produced a technically clean but politically delegitimized outcome.


Section 32.6: Election Security vs. Election Integrity Narratives

The Technical Security System

Contemporary election security in well-resourced democracies involves multiple overlapping layers:

Paper ballots and audit trails: Most US states have returned to or never abandoned paper ballots, which provide a physical record that can be audited independently of any digital system. The paper record itself cannot be altered remotely, so a compromise of digital tallying can in principle be detected by comparison against it.

Post-election audits: Risk-limiting audits (RLAs) use statistical sampling to verify that the reported outcome is consistent with the physical ballot record. Colorado pioneered RLAs in US election administration; multiple other states have adopted them. The 2020 US election saw more extensive post-election auditing than any previous election.
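The statistical logic of a ballot-polling RLA can be shown in a few lines. The sketch below follows the BRAVO approach (Lindeman, Stark, and Yates): each randomly sampled ballot multiplies a likelihood ratio comparing the reported winner's share against a tied election, and the audit confirms the outcome once the ratio exceeds 1/(risk limit). The two-candidate setup, variable names, and fixed seed are simplifications for illustration.

```python
import random

def bravo_audit(ballots, reported_winner_share, risk_limit=0.05, seed=1):
    """Sequential ballot-polling audit (BRAVO-style sketch).

    Draws ballots in random order; each draw updates a likelihood ratio
    for "winner's true share = reported share" vs. "election is tied".
    Confirms the outcome when the ratio reaches 1 / risk_limit.
    """
    rng = random.Random(seed)
    ratio = 1.0
    sampled = 0
    for i in rng.sample(range(len(ballots)), len(ballots)):
        sampled += 1
        if ballots[i] == "winner":
            ratio *= reported_winner_share / 0.5
        else:
            ratio *= (1 - reported_winner_share) / 0.5
        if ratio >= 1 / risk_limit:
            return True, sampled   # outcome confirmed at this risk limit
    return False, sampled          # sample exhausted: full hand count needed

# A 60/40 contest of 10,000 ballots: a wide margin needs only a small sample.
ballots = ["winner"] * 6000 + ["loser"] * 4000
confirmed, n = bravo_audit(ballots, reported_winner_share=0.60)
```

The key property — the "risk limit" — is that if the reported outcome is actually wrong, the audit escalates to a full hand count with probability at least 1 minus the risk limit; wide margins confirm after a handful of ballots, while narrow margins require much larger samples.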

Decentralized administration: In the United States, election administration is extraordinarily decentralized — administered by thousands of county and local jurisdictions, with no nationwide system that could serve as a single point of compromise. This decentralization is a significant security advantage: a successful attack on one jurisdiction would not affect other jurisdictions.

Chain of custody and physical security: Voting equipment is subject to physical security measures, chain of custody documentation, and pre-election testing. Most jurisdictions have detailed procedures for equipment handling that make covert tampering extremely difficult.

CISA and federal support: Following 2016, CISA has provided cybersecurity assessments, threat intelligence, and technical assistance to state and local election officials, significantly raising the baseline security of election infrastructure across all states.

The Political Narrative War

Against this technical security backdrop, a parallel narrative war has developed around election integrity. The "stolen election" narratives propagated after the 2020 US election did not engage with or rebut the technical security evidence — they made claims that were either demonstrably false (voting machines were pre-programmed to switch votes, for which no evidence was found despite extensive forensic examination) or analytically untestable (claims about mass fraud in mail-in ballots, despite the absence of any credible evidence from the jurisdictions that processed those ballots).

Understanding why these narratives were effective despite the absence of supporting evidence requires engaging with the psychology of political belief: motivated reasoning, in-group epistemic authority, and the asymmetric persuasiveness of emotionally resonant claims relative to technical rebuttals. Studies of people who believed the 2020 election was stolen found that the strongest predictor of this belief was not exposure to specific false claims but pre-existing mistrust of Democratic political actors and identification with Trump's political movement. The claims worked as political identity markers as much as empirical claims.


Section 32.7: Platform Responses

Facebook's Ad Transparency Library

Following the 2016 election revelations about IRA advertising, Facebook introduced the Ad Library (initially called "Ad Archive") — a publicly searchable database of all political and issue advertising on Facebook and Instagram, with information about advertiser identity, spending, and targeting parameters. The Ad Library represented a significant transparency improvement over the pre-2017 situation, in which political advertising on social platforms was essentially invisible to researchers, journalists, and regulators.

The Ad Library has significant limitations: it covers only paid advertising, not organic content; it requires human review to identify politically deceptive content; it does not include WhatsApp advertising; and its completeness depends on Facebook's own categorization of content as "political," which has been inconsistent.
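Researchers typically query the Ad Library programmatically through Meta's Ad Library API (the `ads_archive` endpoint of the Graph API), which requires an access token from an identity-verified developer account. The sketch below only assembles a query URL; the version string, field list, and parameter formats are illustrative assumptions and should be checked against Meta's current documentation before use.

```python
from urllib.parse import urlencode

# Endpoint of Meta's Ad Library API; the version segment is an assumption.
BASE = "https://graph.facebook.com/v18.0/ads_archive"

def build_ad_library_query(search_terms, countries, access_token):
    """Assemble a search URL for political/issue ads (illustrative sketch)."""
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",       # restrict to political ads
        "ad_reached_countries": ",".join(countries),
        # Field names below are a plausible subset, not an exhaustive list.
        "fields": "page_name,ad_delivery_start_time,spend,impressions",
        "access_token": access_token,
    }
    return BASE + "?" + urlencode(params)

url = build_ad_library_query("election", ["US"], "TOKEN")
```

Note what such a query cannot surface: organic posts, ads Meta failed to classify as political, and anything on WhatsApp — precisely the coverage gaps described above.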

Twitter's Political Advertising Ban

Twitter adopted an absolute ban on all political advertising in October 2019, announced by CEO Jack Dorsey. The policy addressed concerns about micro-targeted political advertising by removing the entire category rather than attempting to regulate it. Critics noted that the ban had limited impact on organic political disinformation (unpaid content), which represents a far larger volume of election-related activity on the platform, and that it disadvantaged challengers and smaller campaigns relative to incumbents who benefit from greater organic media attention.

Following Elon Musk's acquisition of Twitter (rebranded X) in 2022, many of Twitter's election disinformation policies were reversed or significantly weakened. The effective dissolution of Twitter's Trust and Safety team and its external Trust and Safety Advisory Council removed significant institutional capacity for election-related content moderation.

Google's Election Policies

Google has restricted the targeting of election advertising since late 2019 — prohibiting micro-targeting by political affiliation or voter file segments, and permitting election ads to be targeted only by geography, age, gender, and contextual placement. Google's search and YouTube systems have implemented specific interventions for election-related queries, including information panels and fact-check labels for searches related to candidates and voting.

The effectiveness of these labels in reducing belief in election disinformation is contested in the research literature: some studies find modest positive effects; others find labels can backfire by increasing attention to labeled content or by seeming politically motivated.

Effectiveness Assessments

A comprehensive assessment of platform interventions on election disinformation must acknowledge several findings:

  1. Removals of detected influence operation content reduce circulation but do not affect narratives already in organic circulation.
  2. Labeling false content with fact-check panels has modest, inconsistent effects on belief change.
  3. Advertising transparency requirements significantly improve research access to political advertising but do not directly reduce the circulation of organic disinformation.
  4. Algorithm adjustments that reduce the virality of "borderline" political content produce measurable effects on circulation but are difficult to implement without political controversy about who defines "borderline."
  5. Platform interventions are most effective before narratives achieve organic self-sustaining circulation; once narratives are self-sustaining, platform interventions have limited effects.

The Federal Election Campaign Act (FECA) prohibits foreign nationals from making contributions or expenditures in connection with federal, state, or local elections. The IRA's advertising expenditures violated FECA, and the Mueller investigation indicted the associated individuals; the indictments are practically unenforceable, however, because the defendants remain in Russia, beyond the reach of US courts.

The Foreign Agents Registration Act (FARA) requires individuals acting as agents of foreign principals engaged in political activities to register with the Department of Justice and disclose their activities. FARA enforcement was historically lax but increased significantly after 2016, with several high-profile prosecutions of individuals who had failed to register as foreign agents.

Communications Act Section 315 (Equal Time Rule) and Honest Ads Act proposals: The Honest Ads Act, which would require online political advertising disclosure equivalent to that required of broadcast political advertising, has been repeatedly introduced in Congress without passing. This gap means that online political advertising remains far less transparent than broadcast advertising.

European Union Framework

The EU Political Advertising Regulation (agreed in 2023 and formally adopted in 2024) represents the most comprehensive regulatory framework for digital political advertising globally. Key provisions:

  - Mandatory transparency labels on all political advertising, including online advertising
  - A public repository of political advertising with information about sponsors, spending, targeting parameters, and reach
  - Restrictions on targeting political advertising using sensitive personal data (political opinions, religious beliefs, health data)
  - Specific requirements for advertising from outside the EU targeting EU audiences
  - Enforcement mechanisms including significant fines

The EU also addressed election disinformation through the Digital Services Act (DSA), which requires very large online platforms to assess and mitigate systemic risks to elections and requires transparency about algorithmic recommendation systems.

FARA and Lobbying Disclosure

The Foreign Agents Registration Act has been described as the principal US legal tool for addressing foreign election interference through influence operations. FARA requires registration and disclosure by persons who act as agents of foreign principals and engage in political activities, public relations activities, or information-dissemination activities. The Mueller investigation produced multiple FARA charges and plea agreements (including Paul Manafort and Rick Gates). However, FARA's reach is limited to persons who actually act as agents of foreign principals — it does not reach ordinary social media users who amplify foreign disinformation.


Section 32.9: Electoral Countermeasures

Pre-Bunking Election Misinformation

Research on pre-election inoculation against electoral disinformation has converged on several effective approaches:

Technique-based inoculation: Rather than debunking specific false claims, technique-based inoculation teaches audiences to recognize the manipulative techniques characteristic of election disinformation (false urgency, impersonation of official sources, fabricated statistics, emotional exploitation). Sander van der Linden's inoculation research, and the "Go Viral!" game originally designed for COVID-19 misinformation, have inspired election-specific applications.

Strategic communication by trusted messengers: Research consistently finds that proactive, specific communication by election officials about how election processes work, delivered before disinformation narratives emerge, is more effective than reactive corrections. The Bipartisan Policy Center's "Trusted Messenger" project documents this approach.

Newsroom prebunking protocols: News organizations increasingly employ pre-emptive factual briefings on expected election disinformation narratives, allowing journalists to recognize and appropriately frame false claims when they emerge rather than treating them as novel revelations.

Voter Protection Hotlines

A network of voter protection organizations maintains telephone and digital hotlines during election periods to answer voter questions, report disinformation, and direct voters to accurate official resources. The nonpartisan Election Protection coalition (866-OUR-VOTE) is the largest in the United States, with multilingual support. These hotlines serve a dual function: providing accurate information directly to voters who have encountered false claims and collecting real-time intelligence on what specific disinformation narratives are circulating in specific communities.

Election Official Communication Strategies

The 2020 election produced important lessons about election official communication. In particular, providing pre-planned explanations of election procedures before those procedures became targets of disinformation ("why it takes days to count mail-in ballots," "why results in some counties may change over time") proved effective in giving news organizations factual framing before false framings could fill the information vacuum.

The "Transparency Project" by the Stanford Internet Observatory in collaboration with election officials developed playbooks for this proactive communication approach, including specific language for explaining complex election administration procedures in terms accessible to non-specialist audiences.


Key Terms

Coordinated inauthentic behavior: Platform term for organized efforts in which multiple actors work together to misrepresent their identity or the authentic popularity of content, in order to manipulate political discourse.

Election infrastructure: The physical and digital systems used to administer elections, including voter registration databases, election management systems, voting machines, and tabulation systems.

Hack-and-leak: An influence operation combining unauthorized computer access with strategic public release of stolen materials for maximum political effect.

Risk-limiting audit (RLA): A statistical post-election audit that manually examines a random sample of ballots and caps the chance of certifying an incorrect outcome at a pre-specified risk limit, while requiring only small samples when the reported margin is large.
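The statistical core of an RLA can be made concrete with a minimal sketch of the BRAVO ballot-polling method (Lindeman, Stark, and Yates), a widely used RLA procedure. This is an illustrative simplification for a two-candidate contest, not a production audit tool; the function name and parameters are ours.

```python
def bravo_audit(sampled_ballots, reported_winner_share, risk_limit=0.05):
    """Sequential BRAVO test: does a random ballot sample support the
    reported outcome?

    sampled_ballots: iterable of booleans, True = ballot for the reported
        winner, False = ballot for the reported loser (a two-candidate
        simplification; other ballots are assumed filtered out upstream).
    reported_winner_share: winner's reported share of the two-candidate
        vote; must exceed 0.5.
    risk_limit: maximum chance of confirming a wrong outcome (e.g. 5%).
    Returns (outcome_confirmed, ballots_examined).
    """
    assert reported_winner_share > 0.5
    t = 1.0  # likelihood ratio: reported margin vs. an exact tie
    n = 0
    for n, for_winner in enumerate(sampled_ballots, start=1):
        if for_winner:
            t *= reported_winner_share / 0.5        # evidence for the winner
        else:
            t *= (1 - reported_winner_share) / 0.5  # evidence against
        if t >= 1 / risk_limit:  # evidence strong enough: stop early
            return True, n
    # Sample exhausted without confirmation: escalate the audit, possibly
    # to a full hand count. The procedure never certifies a wrong outcome
    # with probability greater than the risk limit.
    return False, n
```

With a reported 60% winner share and a 5% risk limit, a clean sample confirms the outcome after 17 ballots if every sampled ballot is for the winner; a close reported margin requires far larger samples. That asymmetry is exactly the "limiting the resources required when the margin is large" property the definition above describes.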

Sockpuppet: A fake online identity used to make one person (or organization) appear to be many independent individuals, often to create the false impression of broad support for a political position.

Voter suppression messaging: Content designed to discourage eligible voters from exercising their right to vote, through false claims about eligibility, wrong dates, fear of legal consequences, or other means.


Discussion Questions

  1. The 2020 US "Big Lie" represents a case of domestic political actors engaging in what would be called election interference if conducted by foreign actors. Does the foreign-domestic distinction matter morally and legally? Should the same regulatory responses that apply to foreign election interference apply to domestic actors making demonstrably false claims about election integrity?

  2. WhatsApp's end-to-end encryption makes it extremely difficult to detect coordinated disinformation campaigns but also protects genuinely private political communications. How should democracies balance these competing values?

  3. CISA Director Chris Krebs was fired after accurately stating that the 2020 election was "the most secure in American history." What does this episode reveal about the political constraints on election security communication? How should election security officials navigate these constraints?

  4. Research suggests that platform labeling of election disinformation has modest and inconsistent effects on belief change. Should platforms continue investing in labeling interventions despite mixed evidence of effectiveness? What alternative interventions might be more effective?

  5. The Philippines, Brazil, and India all show patterns of domestic political actors deploying troll farm-style tactics. Does this "domestication" of influence operation tactics represent a fundamental shift in the nature of election interference? What are its implications for countermeasures that focus on foreign actors?

  6. France's relatively successful response to the Macron Leaks suggests that democracies can effectively counter hack-and-leak operations when prepared. What preparations would a democracy need to make in advance to replicate France's counter-response?

  7. Risk-limiting audits provide strong statistical evidence of correct election outcomes but are technically complex and difficult for non-specialists to evaluate. How should election officials communicate the results of RLAs in ways that are both technically accurate and accessible to skeptical lay audiences?


Summary

Election interference has expanded from the relatively narrow Cold War concept of foreign propaganda and covert operations into a complex, multidimensional challenge involving foreign hacking operations, state-sponsored influence campaigns, domestic political actors employing troll farm tactics, platform dynamics that amplify all forms of political disinformation, and the deliberate weaponization of electoral process distrust. The 2016 US election brought unprecedented public attention to foreign election interference; the 2020 US election demonstrated that domestic election disinformation can be more consequential than foreign operations; and the experiences of France, Germany, Brazil, the Philippines, and African democracies demonstrate both the global spread of these challenges and the diversity of local forms they take.

The countermeasures that research supports — prebunking, proactive transparent communication by trusted election officials, platform transparency requirements, FARA enforcement, and sustained media literacy education — address different aspects of a multidimensional problem. No single intervention is sufficient. The most durable protection against election interference is the combination of technically secure election infrastructure, a media ecosystem capable of rapid accurate debunking, a well-informed electorate, and political cultures that treat electoral legitimacy as a shared value rather than a partisan instrument.