Case Study 29-2: Deepfakes in the 2024 Election Cycle
Overview
The 2024 global election cycle — which included major elections in more than 40 countries, including the United States, India, the United Kingdom, Mexico, Indonesia, and the European Parliament — was widely anticipated as the first election cycle to feature large-scale operational deployment of AI-generated synthetic media (deepfakes) as a tool of political manipulation. That anticipation proved accurate. Multiple documented cases of AI-generated audio, video, and imagery appeared in election contexts across jurisdictions, ranging from relatively crude voice clones to sophisticated synthetic video of political candidates. This case examines what actually occurred, what the regulatory and technological responses were, and what the 2024 experience reveals about the trajectory of AI and elections.
Documented Cases of AI-Generated Political Content in 2024
The New Hampshire Biden Robocall
The incident that drew the most regulatory attention in the United States occurred in January 2024, before the New Hampshire Democratic presidential primary. A robocall using AI-generated audio that convincingly mimicked President Biden's voice was sent to approximately 5,000 New Hampshire voters. Echoing Biden's characteristic speech patterns, the message told Democratic voters not to vote in the primary, urging them "to save your vote for the November election." The calls were spoofed to appear to come from the personal number of a former chair of the New Hampshire Democratic Party.
The audio was produced using AI voice cloning technology; analysis by audio forensics experts identified the voice as synthetically generated. Political consultant Steve Kramer, working for minor candidate Dean Phillips, later claimed responsibility for ordering the robocall as a "wake-up call" to draw attention to AI election manipulation risks. Kramer commissioned the audio from a third party, who generated it with publicly available voice cloning tools at a reported cost of under $1 per minute of audio.
The New Hampshire robocall illustrated several features of AI election interference: the low cost of production (voice cloning is accessible through numerous commercial services at minimal cost); the high production quality (the voice was convincing enough that initial press reports characterized it as "Biden-like"); the deliberate targeting of a specific action (suppressing turnout in a specific primary); and the contingency of attribution (in this case the source was identified relatively quickly, but attributing a more sophisticated operation would be much harder).
The Federal Communications Commission issued a declaratory ruling in February 2024 clarifying that AI-generated voices qualify as "artificial" voices under the Telephone Consumer Protection Act, making robocalls that use them illegal without the recipient's prior consent and providing a regulatory hook for enforcement. The Department of Justice opened an investigation into the robocall as potential voter suppression.
Bangladesh Election Deepfakes
In the lead-up to Bangladesh's January 2024 general election, AI-generated video content depicting opposition politicians was circulated on social media. The content included synthetic video depicting opposition party officials making statements they had not made, and fabricated imagery designed to damage reputations. The Bangladesh election was won by Prime Minister Sheikh Hasina's Awami League in circumstances that drew international observer criticism; the role of AI-generated content in the campaign was documented by local and international journalists but its precise electoral impact could not be isolated.
Bangladesh provided an example of sophisticated deepfake deployment in a context with weaker institutional resilience: press freedom restrictions limited investigative journalism's ability to expose and counter AI-generated content; digital literacy was lower than in high-income democracies; and regulatory capacity to address AI election manipulation was minimal.
Slovakia Election Audio
In the Slovak parliamentary election of September 2023, the last significant election before the 2024 cycle, AI-generated audio emerged two days before polling, purporting to capture Progressive Slovakia party leader Michal Šimečka discussing how to rig the election by buying votes. The audio was disseminated on social media during the pre-election "black-out" period in which campaigning is prohibited in Slovakia.
Audio forensics analysis of the recording by multiple experts found evidence of AI generation, but Slovak fact-checkers, working under the black-out period's constraints, had limited ability to publish debunkings while the audio was still spreading. Šimečka denied the content was authentic. The election resulted in a narrow victory for Robert Fico's Smer party over Progressive Slovakia, a margin within the range where the disinformation, if it influenced even a small number of voters, could theoretically have mattered.
The Slovakia case illustrated the particular vulnerability of elections to deepfake content released just before the black-out period, when the political actor cannot respond, the media faces publishing constraints, and voters encounter the content without context for evaluation.
India and the AI-Enabled Disinformation Ecosystem
India's 2024 general election — the world's largest democratic exercise — saw extensive documented use of AI-generated political content, though in a more diffuse pattern than discrete deepfake incidents. AI-generated imagery depicting major political figures (including Prime Minister Modi) in politically convenient scenarios was widely circulated on WhatsApp, which operates largely outside public view and content moderation reach. AI voice cloning was used to create audio content in multiple regional languages depicting political figures making statements in support of campaigns.
The diffuse nature of India's AI-related election disinformation — distributed primarily through encrypted messaging rather than public social media, in multiple regional languages, across a large and varied electorate — made comprehensive documentation difficult. The Election Commission of India reported receiving complaints about AI-generated content and took some enforcement actions, but the scale of the problem exceeded the regulatory capacity available.
The United States 2024 Presidential Election
The US 2024 presidential election was anticipated to be the most significant battleground for AI-generated political content, and it attracted the most advance regulatory attention. State-level disclosure requirements passed in California, Minnesota, and Michigan required disclosure when AI-generated imagery or audio appeared in political advertising. Several major platforms announced election-specific policies: Meta required disclosure for "digitally altered" content in political ads; Google required disclosure for synthetic content in election advertising; YouTube announced enhanced enforcement of its synthetic media disclosure policies.
The documented cases of AI political content in the US 2024 election cycle included: AI-generated imagery distributed through social media without disclosure; AI-generated parody content (where the satirical nature was sometimes unclear); AI-assisted generation of high-volume email and social media content by campaigns (legal but increasingly prevalent); and several instances of synthetic audio circulating through messaging apps.
What did not materialize at the scale predicted was a single decisive deepfake that demonstrably altered mass opinion. This is consistent with the nuanced picture that research on political persuasion suggests: a single piece of content, however sophisticated, rarely moves aggregate opinion dramatically; the effect of AI disinformation operates through cumulative erosion of epistemic authority and the creation of plausible deniability for authenticated content.
Regulatory Responses
Federal Level: The Gap in US Law
As of the 2024 election, the United States had no comprehensive federal law specifically addressing AI-generated content in political advertising or election-related communications. The Federal Election Commission's regulations on political advertising were designed for the era of print, broadcast, and then digital advertising — requiring disclaimers identifying who paid for advertising, but not specifically addressing AI-generated content. A petition to the FEC to require disclosure of AI-generated content in political advertising was pending through the 2024 cycle without resolution.
The absence of a comprehensive federal framework left enforcement to a patchwork of applicable laws: the FCC ruling on AI robocalls; state-level disclosure requirements where they applied; platform policies that applied to advertising but not to organic content; and existing election law provisions that prohibited voter suppression through false information.
State-Level Initiatives
The state-level legislative response to AI deepfakes in elections was more active. By the 2024 election cycle, approximately 15 states had enacted some form of AI disclosure requirement for political advertising or prohibition on deepfake election content. These laws varied in scope and enforcement mechanism:
California's AB 730 (subsequently updated by later legislation) prohibited distribution of materially deceptive audio or video of a candidate within 60 days of an election without disclosure, and companion measures required labeling of AI-generated content in political advertising.
Minnesota's AI election transparency law required disclosure of AI-generated "deepfakes" in political advertising and created a private right of action for political candidates depicted in unauthorized deepfakes.
Texas and Georgia enacted measures criminalizing election-related deepfakes designed to harm candidates when distributed within 30 days of an election.
The state-level patchwork created compliance complexity for national campaigns and raised First Amendment questions about regulations on political speech that the courts had not fully resolved.
International Approaches
The European Union's Digital Services Act, which reached full enforcement in 2024, required very large platforms to conduct systemic risk assessments related to election integrity and to implement reasonable mitigation measures. The EU AI Act's high-risk designation for certain AI applications in democratic contexts and its watermarking requirements for AI-generated content provided additional regulatory infrastructure.
The UK Elections Act 2022 included provisions on digital imprints for online political advertising, though these were not specifically calibrated to AI-generated content. The UK government's AI Safety Institute took an interest in election AI risks and published guidance for political actors.
Detection Technology: The State of the Art
What Works and What Doesn't
Deepfake detection technology improved substantially during 2023-2024, but remained in a challenging arms-race dynamic with generation technology. Major approaches and their limitations:
Neural network classifiers trained to distinguish synthetic media from authentic media achieved high accuracy rates (85-95%) on deepfakes generated by known systems. Their weakness is generalization: classifiers trained on one generation technology's outputs often underperform on content generated by a different or newer system. Each major advance in generation technology — from first-generation face-swap systems to GAN-based synthesis to diffusion model-based generation — created a gap period where classifiers needed retraining.
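The generalization weakness can be illustrated with a toy simulation (a sketch with invented features, not a real detector): a nearest-centroid classifier trained to spot one generator's artifact pattern scores well on more content from that generator but falls toward chance on a generator whose artifacts appear in different feature dimensions.

```python
import random
import statistics

random.seed(0)

def sample(n, artifact_dims, shift=1.5, dim=8):
    """Toy features: authentic media ~ N(0,1) per dimension; fakes carry a
    generator-specific artifact as a mean shift on `artifact_dims`."""
    data = []
    for _ in range(n):
        real = [random.gauss(0, 1) for _ in range(dim)]
        fake = [random.gauss(0, 1) + (shift if i in artifact_dims else 0.0)
                for i in range(dim)]
        data.append((real, 0))
        data.append((fake, 1))
    return data

def centroid(vectors):
    return [statistics.fmean(col) for col in zip(*vectors)]

def train(data):
    """Nearest-centroid 'detector': one centroid per class."""
    return (centroid([x for x, y in data if y == 0]),
            centroid([x for x, y in data if y == 1]))

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def accuracy(model, data):
    c_real, c_fake = model
    return sum((dist2(x, c_fake) < dist2(x, c_real)) == (y == 1)
               for x, y in data) / len(data)

gen_a, gen_b = {0, 1, 2, 3}, {4, 5, 6, 7}  # two generators, disjoint artifacts
model = train(sample(500, gen_a))

acc_a = accuracy(model, sample(500, gen_a))  # in-domain: high accuracy
acc_b = accuracy(model, sample(500, gen_b))  # unseen generator: near chance
```

Retraining on generator B's outputs would restore accuracy, which is exactly the gap period the text describes: the detector lags each new generation technology.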
Forensic artifact detection looks for technical inconsistencies in synthetic media — unnatural blinking patterns, lighting inconsistencies, physiological impossibilities in pulse or breathing simulation. These approaches are effective against current generation systems but are being addressed by generation system developers; the forensic artifacts identifiable today may be eliminated in next-generation systems.
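As an illustration of artifact-based screening, the sketch below flags clips whose blink rate is implausibly low, using a hypothetical per-frame eye-aspect-ratio (EAR) trace as input. Early face-swap systems were notorious for absent blinking; current generators have largely fixed this, so treat the heuristic as a period example of the arms-race dynamic rather than a working detector.

```python
def count_blinks(ear_trace, threshold=0.2):
    """Count blinks in a per-frame eye-aspect-ratio trace: a blink is a dip
    below the threshold followed by a recovery above it."""
    blinks, closed = 0, False
    for ear in ear_trace:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

def looks_synthetic(ear_trace, fps=30, min_blinks_per_min=5.0):
    """Flag clips whose blink rate falls below a plausible human baseline
    (people blink roughly 10-20 times per minute at rest)."""
    minutes = len(ear_trace) / (fps * 60)
    return count_blinks(ear_trace) / minutes < min_blinks_per_min

fps = 30
no_blinks = [0.3] * (fps * 60)              # one minute of video, eyes never close
natural = list(no_blinks)
for frame in range(10, fps * 60, fps * 5):  # a one-frame dip every 5 seconds
    natural[frame] = 0.05
```

A generator that simply learns to insert periodic blinks defeats this check, which is why the text notes that today's forensic artifacts may vanish in next-generation systems.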
Provenance attestation systems using C2PA (Coalition for Content Provenance and Authenticity) standards embed cryptographic metadata at content creation indicating origin. C2PA-compliant content carrying a chain of custody from a verified device can be confirmed as authentic; the absence of provenance data does not confirm content as synthetic, but its presence can confirm authenticity. Adoption across cameras, editing tools, and platforms was growing in 2024 but far from universal.
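The asymmetry described above (presence of valid provenance confirms authenticity; absence proves nothing) can be sketched in miniature. Real C2PA manifests use X.509 certificate chains and COSE signatures; the shared-secret HMAC, key, and field names below are simplifications invented purely for illustration.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"provisioned-device-secret"  # hypothetical capture-device key

def sign_at_capture(content: bytes, origin: str) -> dict:
    """Bind a hash of the content to origin metadata, signed at creation."""
    manifest = {"origin": origin,
                "content_hash": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(DEVICE_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """A valid manifest confirms authenticity; a missing or broken one
    proves nothing either way."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and claimed["content_hash"] == hashlib.sha256(content).hexdigest())

photo = b"raw sensor bytes from a verified device"
manifest = sign_at_capture(photo, "camera:serial-1234")
intact = verify(photo, manifest)            # chain of custody holds
tampered = verify(photo + b"!", manifest)   # any edit breaks the binding
```

Note that a deepfake simply shipped without a manifest fails no check at all, which is why the standard's value depends on broad adoption across cameras, editing tools, and platforms.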
Watermarking and provenance marking by AI generators, including Google DeepMind's SynthID watermark and the C2PA metadata OpenAI attaches to DALL-E outputs, embed signals in or alongside generated content that systems trained to read them can detect. The limitation is that watermarks and metadata can be removed or degraded by routine processing steps (compression, resizing, transcoding) and do not apply to content generated by systems that embed neither.
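The fragility point can be demonstrated with a deliberately naive least-significant-bit watermark: it survives a clean copy but is erased by even crude re-quantization of the kind lossy compression performs. Production schemes such as SynthID are designed to be far more robust; this sketch only shows why robustness is the hard part.

```python
import random

def embed(samples, bits):
    """Naive least-significant-bit watermark: hide one payload bit in the
    LSB of each 8-bit sample."""
    return [(s & ~1) | b for s, b in zip(samples, bits)]

def extract(samples):
    return [s & 1 for s in samples]

def requantize(samples, step=4):
    """Crude stand-in for lossy re-encoding: snap samples to coarser levels,
    which zeroes the low bits carrying the naive watermark."""
    return [round(s / step) * step for s in samples]

random.seed(1)
payload = [random.randint(0, 1) for _ in range(64)]
audio = [random.randint(0, 255) for _ in range(64)]

marked = embed(audio, payload)
survives_copy = extract(marked) == payload   # clean copy: watermark intact

recovered = extract(requantize(marked))
bit_errors = sum(a != b for a, b in zip(recovered, payload))  # most bits lost
```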
The overall picture is that detection technology is useful but not reliable enough for the high-stakes context of election disinformation, where false negatives (failing to detect genuine deepfakes) and false positives (incorrectly classifying authentic content as synthetic) both have significant consequences.
Analysis: What 2024 Teaches About AI and Democratic Resilience
The Liar's Dividend
A significant effect of AI deepfakes on democratic processes that received less attention than the fake-content risk is the "liar's dividend" — the way that the known existence of deepfake technology allows political actors to disclaim authentic content by alleging it is AI-generated. If voters know that deepfakes exist and are sophisticated, a politician caught in a genuine embarrassing audio or video recording has a plausible — even if ultimately unsuccessful — claim that the content is fabricated. The ambient possibility of deepfakes casts doubt on all audiovisual evidence.
This epistemic effect operates even when specific deepfakes are never successfully deployed at scale. The mere knowledge that AI can fabricate convincing political content creates uncertainty about all content. This is potentially more corrosive to democratic epistemics than specific successful disinformation operations, because it is diffuse and cannot be countered by identifying specific fabrications.
Institutional Resilience Matters More Than Technical Detection
The 2024 experience suggested that democratic resilience against AI disinformation depends less on technical detection capabilities than on institutional robustness: a functioning independent press with established credibility; clear procedures for rapid fact-checking and debunking of viral disinformation; election administration institutions that are professionally staffed, politically independent, and publicly trusted; and civic education that gives voters baseline skepticism toward politically convenient viral content.
Countries with strong institutions in these dimensions navigated AI election interference challenges better than countries where these institutions were weak or under political pressure. This suggests that investment in institutional resilience — journalism, civic education, election administration — is a more tractable and durable response to AI election interference than investment in technical detection alone.
The Platform Responsibility Gap
Platform policies on AI election content were more visible in 2024 than in any prior election cycle, but remained inconsistently enforced and insufficient in scope. Policies that applied to paid advertising did not apply to organic content; policies on labeled AI-generated content did not address unlabeled content; and enforcement capacity for content in non-English languages remained substantially lower than for English-language content. The fundamental dynamic documented in the Myanmar case, in which high-revenue markets and high-resource languages receive better moderation, remained operative in 2024 election contexts globally.
Discussion Questions
- The New Hampshire Biden robocall was produced at very low cost using widely available voice cloning technology. What does this accessibility imply for the trajectory of AI election interference, and what regulatory or technical responses could address it?
- The "liar's dividend" — the ability to disclaim authentic content by alleging AI fabrication — may be as corrosive to democratic epistemics as actual deepfakes. What institutional or technical responses could reduce this effect?
- Should AI-generated political content be required to carry a visible disclosure label? What challenges would implementation of such a requirement face?
- Detection technology cannot reliably distinguish AI-generated from authentic political content in the high-stakes, real-time context of election disinformation. If technology cannot solve the problem, what can?
- Platform policies applied inconsistently across languages and regions in 2024. What mechanisms — regulatory, market-based, or voluntary — could produce more consistent protection across global election contexts?