Case Study 39.2: AI and the 2024 Elections — What Happened and What It Portends

Overview

By several measures, 2024 was the most consequential year in the history of democratic elections. Voters in more than sixty countries — representing more than half of the world's population — went to the polls, in national elections including those in the United States, India, Indonesia, Pakistan, Mexico, South Africa, and Bangladesh, as well as in the elections to the European Parliament. It was simultaneously the year that generative AI became widely accessible to political campaigns, influence operations, and ordinary citizens for the first time.

The result was not, as some of the more alarming pre-election predictions suggested, a catastrophic collapse of electoral integrity. Elections in most major democracies proceeded in recognizable ways. But the 2024 cycle produced a documented catalog of AI-enabled incidents — ranging from sophisticated deepfakes to mass-produced campaign content to automated disinformation operations — that collectively point toward challenges that will be substantially more difficult to manage in future electoral cycles. This case examines what actually happened, what governance responses were attempted, and what the 2024 cycle reveals about the challenge of governing AI in democratic contexts.

The Landscape Before the Votes Were Cast

To understand what 2024 represented, it is worth understanding what changed between the 2020 electoral cycle and 2024. The change is primarily technological. In 2020, generating a convincing deepfake video required significant technical expertise and specialized hardware. In 2024, it required a smartphone app and a few minutes. In 2020, producing large volumes of personalized political content required a team of writers and significant resources. In 2024, it required a chatbot subscription and a prompt. In 2020, sophisticated voice synthesis required professional audio equipment and expertise. In 2024, a voice clone convincing enough to fool recipients could be created from a few seconds of sample audio.

This democratization of AI content generation had effects in two directions simultaneously. It lowered the barriers for legitimate political communication — small campaigns and grassroots organizations could produce sophisticated content they previously could not afford. And it lowered the barriers for disinformation, harassment, and manipulation — actors that previously lacked the resources for sophisticated influence operations now had access to them.

The AI tools most relevant to the 2024 electoral cycle included: image generation systems capable of producing photorealistic but fictional images; video synthesis systems capable of producing deepfake videos of real people saying things they never said; voice cloning systems capable of producing synthetic audio in a specific person's voice from a small sample; large language models capable of producing political content at scale; and automated distribution systems capable of reaching specific audiences with targeted content.

Documented Incidents

The 2024 electoral cycle produced documented AI-related incidents across multiple countries and multiple categories.

Voice deepfakes in political disinformation: In January 2024, AI-generated robocalls using a synthetic clone of President Biden's voice were sent to voters in New Hampshire before the state's primary, advising Democratic voters not to vote in the primary and to "save their vote" for the November election. The calls were a clear attempt at voter suppression. The New Hampshire Attorney General opened an investigation, and a Democratic political consultant was subsequently charged in connection with the incident. This was one of the clearest documented cases of AI-generated content being used directly to suppress votes — and notably, it occurred before most of the year's major elections.

Deepfake candidate images in South Asia: In multiple South Asian elections, AI-generated images of candidates were used in political advertising. In some cases, these images were used by campaigns to present candidates in favorable contexts they had not actually been in. In other cases, AI-generated images were used to depict candidates in unflattering or false scenarios. The line between creative political imagery and deceptive deepfake content was contested throughout.

AI-generated campaign content at scale: Multiple campaigns in multiple countries used large language models to produce high volumes of social media content, emails, and advertising copy. In most cases, this content was clearly campaign-produced and its AI origin was either disclosed or not particularly relevant. In some cases, however, campaigns produced content designed to appear grassroots — a practice known as "astroturfing" — at scale that would not have been feasible without AI.

Synthetic audio of political figures: Several instances of synthetic audio purporting to capture political figures making statements they had not made circulated on social media in multiple countries. The clearest precedent came from Slovakia's September 2023 parliamentary election, where deepfake audio that appeared to capture a candidate discussing vote rigging with a journalist circulated two days before the vote — too close to the election for fact-checking to reach the same audiences as the disinformation.

AI-generated video content: In multiple countries, short video clips using AI-generated imagery of political figures were used in campaign advertising. The US Federal Election Commission received petitions to regulate AI in political advertising but did not act before the November 2024 election, leaving the question to a patchwork of state laws and platform policies.

Indian election deepfakes: India's 2024 general election — the world's largest democratic exercise by voter count — saw extensive use of AI-generated content. AI-generated videos of Bollywood stars endorsing political candidates without their knowledge or consent circulated widely. AI-generated videos of deceased political leaders appearing to endorse current candidates were produced. The Election Commission of India issued guidance on AI-generated content but enforcement was limited.

Governance Responses

Electoral authorities, platforms, and governments attempted various responses to AI-generated electoral content. The responses came faster than those to previous technological disruptions of elections, but they were still inadequate to the scale and velocity of the challenge.

Platform policies: Major social media platforms, including Meta, YouTube, and TikTok, adopted or enhanced policies requiring disclosure of AI-generated content in political advertising. Meta's policy, for instance, required advertisers to disclose when political ads used "digitally altered or generated" media. Enforcement of these policies was inconsistent. The gap between policy and enforcement was significant: platforms could identify AI-generated content in some cases but lacked reliable detection methods in many cases, and user-uploaded content — as opposed to paid advertising — was even harder to govern.

Legislative action: Several US states passed laws specifically addressing AI in political advertising during 2024, including requirements for disclosure of AI-generated content and prohibitions on certain uses of deepfake technology in political contexts. Federal legislation was introduced but did not pass before the November election. The EU's AI Act, which came into force during 2024, included provisions relevant to AI-generated political content, but implementation timelines meant these provisions were not fully operational during the election cycle.

Election authority guidance: Multiple national election authorities issued guidance on AI-generated content, though this guidance was generally aspirational rather than enforceable. The Federal Election Commission in the US, for instance, declined to issue new rules before the November 2024 election, citing its deliberative process requirements. International election monitors from organizations like the Carter Center and the International Foundation for Electoral Systems developed frameworks for assessing AI-related electoral risks but lacked enforcement authority.

Technical detection efforts: Research institutions and civil society organizations developed tools for detecting AI-generated content. These tools were useful for post-publication fact-checking but were not widely integrated into platform content moderation pipelines or available to voters at the point of consumption. The arms race between generation and detection consistently favored generation.

What the 2024 Cycle Reveals

The 2024 electoral cycle supports neither the most alarming pre-election predictions nor the most dismissive post-election assessments. The alarming scenario — that AI would make elections unmanageable by flooding the information environment with compelling disinformation — did not materialize at that scale in most major democracies. The dismissive view — that AI was merely another iteration of a long history of political manipulation, and that existing mechanisms were adequate — is equally untenable in light of the documented incidents.

What 2024 actually revealed is a set of structural challenges that will grow more severe as generative AI capabilities improve and as the cost of AI-generated content continues to fall.

The velocity problem: AI-generated disinformation can be produced faster than it can be fact-checked, contextualized, or effectively responded to. The Slovak case — deepfake audio circulating two days before the election — illustrates this problem in acute form. Even where rapid response organizations exist, they cannot reach the same audiences as the original disinformation in the time available.

The detection gap: Reliable, scalable detection of AI-generated content does not currently exist for video, audio, or text. Watermarking approaches (embedding signals in AI-generated content that identify it as AI-generated) are promising but require cooperation from AI developers and are easily stripped from content that passes through lossy compression (as most social media video does). Human detection of AI-generated content is unreliable. The gap between generation and detection is structural, not merely technical.
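The watermarking idea mentioned above can be made concrete. A minimal sketch of one published approach to statistical text watermarking (a "green-list" scheme, in which a cooperating generator biases its sampling toward a pseudo-random half of the vocabulary and a verifier tests whether the green fraction is improbably high) illustrates both why detection works when the generator cooperates and why it fails otherwise. The function names and the hash-based green-list rule here are illustrative assumptions, not any vendor's actual scheme:

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    # Hypothetical green-list rule: hash the (previous token, token) pair and
    # place half of all possible tokens on the "green" list. A cooperating
    # generator would bias its sampling toward green tokens at each step.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(tokens: list[str]) -> float:
    # Under the null hypothesis (no watermark), each token is green with
    # probability 0.5, so the green count is binomial; a large positive
    # z-score indicates the text was likely produced by a cooperating model.
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)
```

The structural point the document makes falls out of the code: the test only works if the generator embedded the bias in the first place, and paraphrasing, translation, or re-synthesis re-rolls the token pairs and erases the signal. Detection without generator cooperation has no such statistical handle.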

The platform governance gap: Platform content moderation at electoral scale is not a solvable problem with current approaches. The volume of content is too large for human review; automated detection is unreliable; appeals processes are too slow; and enforcement is inconsistent across languages, countries, and content types. Platforms in the 2024 cycle were making consequential decisions about what political content to leave up, take down, or label as AI-generated without adequate regulatory frameworks, consistent standards, or democratic accountability.

The international coordination gap: Electoral disinformation operations do not respect national boundaries. Content produced in one country affects elections in another; platforms governed under one jurisdiction's laws serve users in dozens of others; AI tools developed by companies in a few countries are used globally for political manipulation. National responses to AI-generated electoral content are necessarily partial. International coordination on AI and elections is embryonic and inadequate to the challenge.

The inequality of vulnerability: Not all elections face equal risk from AI-generated disinformation. Countries with strong public broadcasting systems, high media literacy, robust electoral administration, and developed civil society fact-checking capacity are less vulnerable than those without these institutions. The populations most vulnerable to AI-driven electoral disinformation are often in countries with the least capacity to govern AI: smaller democracies, countries with less developed media ecosystems, countries with lower levels of digital literacy. This is an equity dimension of the challenge that international governance frameworks must address.

What It Portends

The 2024 cycle should be understood as an early iteration of a challenge that will intensify. Several developments will make future electoral cycles more difficult than 2024:

Improving generation capability: AI-generated video, audio, and images will become more convincing, faster to produce, and cheaper to create. The minimum threshold of resources and expertise required to produce convincing deepfakes will continue to fall.

Personalization at scale: The combination of large language models and detailed behavioral data will enable political content to be personalized to individual voters at scale — not the demographic-segment targeting of 2020 and 2022, but content tailored to an individual's specific concerns, anxieties, and information environment. The persuasive potential of individualized content is substantially greater than that of broadcast content.

Agentic disinformation: The emergence of AI agents capable of autonomous social media engagement — creating accounts, building audiences, and distributing content without human management at each step — will make large-scale coordinated inauthentic behavior substantially easier to execute and harder to detect.

The legitimacy challenge: Perhaps most importantly, the existence of AI-generated content creates a general legitimacy challenge that goes beyond specific incidents. When any piece of content might be AI-generated, the rational response is to trust nothing — to apply skepticism uniformly, including to authentic content. This epistemic corrosion — the erosion of shared epistemic ground — may be more damaging to democratic deliberation than any specific piece of disinformation.

Toward Adequate Governance

The governance challenge posed by AI and elections does not have a clean solution. Several elements of an adequate response are identifiable, however:

Mandatory disclosure and authentication: AI-generated content in political contexts should require clear disclosure, and authenticated provenance systems (enabling users to verify a piece of content's origin and whether it was AI-generated) should be required of AI content producers. Implementation requires international coordination among AI developers and platforms.
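The provenance idea can be sketched in a few lines. Real provenance standards such as C2PA bind metadata to content with public-key certificates and signed manifests; the sketch below substitutes an HMAC with a shared key so it stays within the standard library, and every name in it (the functions, the metadata fields) is an illustrative assumption rather than any standard's actual API:

```python
import hashlib
import hmac
import json

# Illustrative only: production provenance systems use asymmetric signatures
# and certificate chains, not a shared HMAC key.

def attach_provenance(content: bytes, metadata: dict, key: bytes) -> dict:
    # Bind a hash of the content and its declared metadata (e.g. whether it
    # is AI-generated) into a manifest, then authenticate the manifest.
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict, key: bytes) -> bool:
    # Recompute the tag over the manifest minus its tag, then check that the
    # content actually matches the hash the manifest claims.
    claimed = dict(manifest)
    tag = claimed.pop("tag", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

The design point matters for governance: a valid manifest proves what a producer declared about a specific file, but content stripped of its manifest simply verifies as "unknown" — which is why disclosure mandates and provenance infrastructure have to operate together rather than as substitutes.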

Pre-election windows: Several countries have implemented "election silence" periods around voting. Analogous AI-specific restrictions — heightened obligations on platforms and AI developers during defined periods before elections — could limit the velocity problem without requiring year-round content restriction.

Strengthened civil society: Independent fact-checking organizations with the capacity to rapidly identify and contextualize AI-generated electoral disinformation are a crucial element of the response. These organizations require stable funding and institutional independence from both government and platform influence.

Media literacy investment: The most durable response to AI-generated electoral disinformation is a population capable of critically evaluating information — media literacy in the broad sense. This requires sustained investment in education and in accessible public information resources.

International coordination: Bilateral and multilateral agreements on AI-generated electoral content — among AI developers, among platform companies, among governments — are necessary to address the international coordination gap. The OECD, the UN, and regional bodies like the EU have roles to play in convening and facilitating these agreements.

Conclusion

The 2024 electoral cycle confirmed that AI-generated content is a real and growing challenge to democratic governance of elections, while also demonstrating that elections can survive AI-enabled disinformation without collapsing. Neither conclusion is fully reassuring. The challenge will intensify, the governance tools are inadequate, and the populations most vulnerable are also least protected.

What the 2024 cycle provides is a dataset — documented incidents, attempted responses, and partial successes — that should inform governance development before the next major electoral cycle. The window between electoral cycles is the time to develop the frameworks, tools, and institutional capacity that was lacking in 2024. Whether that window is used effectively will depend on political will and institutional capacity that the 2024 cycle itself did not create.

Discussion Questions

  1. Platform companies made consequential decisions about AI-generated political content during the 2024 electoral cycle without democratic accountability for those decisions. What governance structures would be appropriate for platform decisions about political speech?

  2. The detection gap — the inability to reliably identify AI-generated content at scale — appears structural rather than temporary. What does this imply for governance approaches that rely on detection and labeling?

  3. The vulnerability to AI-driven electoral disinformation is unequal: smaller democracies and countries with weaker media ecosystems face greater risks with fewer resources to respond. What do wealthy democracies and international institutions owe to more vulnerable democracies in this context?

  4. The "epistemic corrosion" concern — that the mere existence of AI-generated content undermines trust in authentic content — may be more damaging than specific disinformation incidents. How can governance frameworks address this systemic trust challenge rather than just specific content harms?

  5. What should be the elements of international governance of AI and elections? What institutions are best positioned to convene and implement such governance, and what are the obstacles?