Case Study: Deepfakes and Democracy in the 2024 Elections
"The most dangerous deepfake is not the one that fools everyone. It's the one that makes everyone stop trusting anything." — Nina Jankowicz, researcher on disinformation
Overview
In January 2024, two days before the New Hampshire presidential primary, thousands of registered Democrats in the state received a robocall. The voice on the call was unmistakable — it was President Joe Biden, telling voters: "Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again... Your vote makes a difference in November, not this Tuesday."
The message was clear: stay home. Don't vote in the primary.
The voice was not President Biden's. It was an AI-generated clone — a deepfake created using voice synthesis technology that could replicate any person's voice from publicly available audio samples. The robocall reached an estimated 5,000 to 25,000 voters in a state where primary turnout margins can be thin.
The New Hampshire deepfake robocall was not an isolated incident. The 2024 election cycle, in the United States and globally, became the first major electoral period in which generative AI tools were cheap, accessible, and powerful enough to produce convincing synthetic media at scale. This case study examines the deepfake threats that materialized during the 2024 elections, the responses they provoked, and what they reveal about the epistemic foundations of democratic governance.
Skills Applied:
- Analyzing deepfake harms across political and epistemic categories (Section 18.3)
- Evaluating governance responses to synthetic media (Section 18.6)
- Connecting deepfake threats to the broader information ecosystem
- Assessing the adequacy of technical and institutional defenses
The New Hampshire Robocall
What Happened
The robocall operation was later traced to Steve Kramer, a political consultant who claimed he organized the calls to draw attention to the dangers of AI in elections — a justification that virtually no one found convincing. The audio itself was generated with a commercially available voice-cloning tool. The calls were originated by Life Corporation, a Texas-based calling company, and transmitted by the telecommunications provider Lingo Telecom, which routed them to New Hampshire voters using a spoofed caller ID that displayed the number of a local Democratic official.
The technical chain was notable for its simplicity:
1. Publicly available recordings of Biden's voice (from speeches, press conferences, and interviews)
2. A commercially available voice cloning tool
3. A telecommunications service to deliver the calls
4. A spoofed caller ID to lend credibility
Total estimated cost: under $1,000. Total reach: thousands of voters in a consequential election.
The Response
The response was swift but revealed the limitations of existing governance:
- The New Hampshire Attorney General opened an investigation, and the FCC ultimately fined Kramer $6 million — but the penalty came months after the election. The immediate harm — voter suppression — could not be undone.
- The Federal Communications Commission (FCC) issued a declaratory ruling in February 2024 clarifying that AI-generated voices in robocalls are "artificial" under the Telephone Consumer Protection Act, making them subject to existing robocall regulations. This closed a regulatory gap but did so reactively, after the harm had occurred.
- The FCC pursued a $2 million penalty against Lingo Telecom for transmitting the calls. The FCC's enforcement action established that telecommunications carriers have a responsibility to screen for fraudulent AI-generated content — a precedent with significant implications.
- The provider of the voice-cloning tool was not immediately sanctioned — its terms of service prohibited the use of voice cloning for deception, but the company had not implemented technical safeguards to enforce that prohibition.
What It Revealed
The New Hampshire robocall revealed several structural vulnerabilities:
The cost asymmetry. Creating a convincing deepfake voice cost under $1,000. Investigating, tracing, and sanctioning the operation cost hundreds of thousands of dollars in government resources and took months. The asymmetry between the cost of attack and the cost of response is fundamental to the deepfake governance challenge.
The speed asymmetry. The robocall reached voters in hours. The regulatory response took months. In an election, timing is everything — a deepfake released 48 hours before an election may be debunked eventually, but not before it has influenced turnout or perceptions.
The infrastructure gap. No system existed to detect AI-generated voice calls in real time. Telecommunications carriers had no obligation (until the FCC ruling) to screen for synthetic voice content. The technical infrastructure for detection lagged far behind the technical infrastructure for creation.
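What real-time screening might look like is easy to sketch and hard to productionize. The toy classifier below is a minimal illustration, not a deployable detector: it assumes labeled genuine and synthetic call audio (random arrays stand in here), assumes NumPy and scikit-learn are available, summarizes each clip with coarse log-spectrum statistics, and fits a logistic regression that outputs a probability rather than a verdict.

```python
# Illustrative sketch only: a toy synthetic-voice classifier, NOT a
# production detector. Real systems use far richer features and data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_features(audio: np.ndarray, frame: int = 512) -> np.ndarray:
    """Summarize a mono waveform with coarse log-spectrum statistics."""
    frames = audio[: len(audio) // frame * frame].reshape(-1, frame)
    spectra = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))
    # Mean and variance per frequency bin capture spectral "texture"
    # that synthesis artifacts can disturb.
    return np.concatenate([spectra.mean(axis=0), spectra.var(axis=0)])

# Toy stand-ins for labeled call audio: 0 = genuine voice, 1 = synthetic.
rng = np.random.default_rng(0)
real = [rng.normal(size=16_000) for _ in range(50)]
fake = [rng.normal(size=16_000) * 0.8 + 0.2 for _ in range(50)]
X = np.array([spectral_features(a) for a in real + fake])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Screening a new call yields a probability, not a verdict.
new_call = rng.normal(size=16_000)
score = clf.predict_proba(spectral_features(new_call).reshape(1, -1))[0, 1]
print(f"Probability synthetic: {score:.2f}")
```

Note that even this toy emits a probabilistic score rather than a yes-or-no answer, which foreshadows the detection paradox discussed below.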
The Global Landscape: 2024 as the Deepfake Election Year
The New Hampshire robocall was one incident among many. The 2024 election cycle — which saw major elections in the United States, European Union, India, Indonesia, Mexico, South Korea, Taiwan, and other countries — became the first global electoral cycle in which deepfake threats were pervasive.
Notable Incidents
Slovakia (September 2023). Days before a closely contested parliamentary election, an audio recording surfaced on social media appearing to capture liberal candidate Michal Simecka discussing plans to buy votes and raise the price of beer. The audio was a deepfake. It spread rapidly on Facebook and Telegram during a 48-hour pre-election moratorium (during which campaigning and campaign media coverage are prohibited), making real-time debunking nearly impossible. Simecka's party lost.
Taiwan (January 2024). AI-generated videos and audio clips targeting candidates circulated on LINE (Taiwan's dominant messaging platform) in the weeks before the presidential election. Taiwanese authorities identified multiple deepfake operations linked to influence campaigns, some with suspected foreign state backing.
India (2024). In the world's largest election — with nearly a billion eligible voters — AI tools were used by multiple parties to create synthetic media of politicians, including deepfake videos of deceased leaders appearing to endorse current candidates. The use of AI became so widespread that some campaigns openly embraced it, using AI-generated video of their own candidates to reach voters in multiple languages.
United States — Beyond New Hampshire. Throughout the 2024 U.S. election cycle, AI-generated content proliferated across platforms. Deepfake images, videos, and audio were used to misrepresent candidates' statements, fabricate endorsements, and create misleading "evidence" of events that never occurred. Major platforms implemented AI content labels, but enforcement was inconsistent and often delayed.
Patterns Across Incidents
Several patterns emerged across these global incidents:
- Timing is the weapon. The most effective deepfakes were released at moments when debunking was difficult — during media blackout periods, late on Friday evenings, or in the final 48 hours before voting.
- Messaging platforms are the vector. Deepfakes spread most effectively not on major social media platforms (which have some moderation infrastructure) but on private messaging apps (WhatsApp, Telegram, LINE), where content is difficult to monitor or moderate.
- The "liar's dividend" may be more powerful than the deepfake itself. The "liar's dividend" is the phenomenon in which the mere existence of deepfake technology allows real content to be dismissed as fake. A politician caught on a genuine recording saying something damaging can now claim — with plausibility — that the recording is AI-generated. The deepfake threat degrades trust in all media, not just synthetic media.
- Detection lags creation. AI-generated content detection tools exist but are imperfect, lag behind creation capabilities, and are not deployed at the scale needed for real-time election monitoring.
The Epistemic Crisis
Trust in Everything Declines
Dr. Adeyemi's observation (Section 18.1.4) — that generative AI breaks the centuries-old assumption that photographs depict reality, recordings capture speech, and video shows events — has direct democratic implications.
Democracy depends on a shared epistemic foundation: a common set of facts, or at least a shared standard for determining what is factual. Voters need reliable information about candidates, policies, and events to make informed choices. Journalism, photography, and audiovisual media have historically served as evidence-gathering tools that, while imperfect, provided a baseline of epistemic reliability.
Generative AI undermines this baseline. When any photograph, audio recording, or video can be fabricated with minimal cost and effort, the epistemological status of all media shifts. The question is no longer "is this content real or fake?" but "can we trust any content?" — and the answer, increasingly, is that we cannot be certain.
This uncertainty is itself a form of harm — and it may be more damaging than any individual deepfake. A single deepfake robocall can suppress turnout in one primary. A generalized collapse of trust in media can undermine democratic governance as a whole.
The Paradox of Detection
Deepfake detection tools — AI systems trained to identify AI-generated content — face a structural limitation: they are engaged in an arms race with generation tools. As detection improves, generation adapts to evade detection. The result is a perpetual cat-and-mouse dynamic in which detection never definitively solves the problem.
Moreover, detection tools produce probabilistic assessments ("87% likely to be AI-generated"), not definitive verdicts. In a heated electoral contest, a probabilistic assessment is easy to contest, spin, or ignore. The adversarial dynamic means that technical detection alone cannot restore epistemic reliability.
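A short base-rate calculation shows why. The numbers below are assumptions chosen purely for illustration: even a detector that is 95% accurate on both real and synthetic content, applied to a stream where only 1% of items are synthetic, mostly flags genuine content.

```python
# Base-rate arithmetic with assumed, illustrative numbers.
sensitivity = 0.95   # P(flagged | synthetic) -- assumption
specificity = 0.95   # P(not flagged | real)  -- assumption
prevalence = 0.01    # P(synthetic)           -- assumption

p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_synthetic_given_flag = sensitivity * prevalence / p_flagged
print(f"P(synthetic | flagged) = {p_synthetic_given_flag:.2f}")  # ~0.16
```

Under these assumptions, roughly five of every six flagged items are genuine — exactly the ambiguity a motivated campaign can exploit when contesting a detector's verdict.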
Governance Responses
Legislative
The 2024 election cycle accelerated legislative action on deepfakes:
- United States: Multiple states enacted laws specifically addressing AI-generated election content. These laws generally require disclosure that content is AI-generated and prohibit the distribution of deceptive deepfakes within a specified period before an election (typically 60-90 days). The federal REAL Political Advertisements Act and the Protect Elections from Deceptive AI Act were introduced in Congress.
- European Union: The EU AI Act (adopted 2024) requires that AI-generated content be labeled as such. The Digital Services Act requires platforms to disclose the use of AI in political advertising and to establish rapid-response mechanisms for election-related deepfakes.
- China: The Deep Synthesis Provisions (effective January 2023) require that AI-generated content be clearly labeled and prohibit the use of deepfakes to spread "fake news" or undermine social stability. Enforcement mechanisms are tied to China's broader content regulation framework.
Platform Responses
Major platforms adopted AI content policies during the 2024 cycle:
- Meta required political advertisers to disclose the use of AI in ads and began labeling AI-generated images on Facebook and Instagram using metadata detection.
- Google/YouTube required disclosure of "altered or synthetic" content in election ads and implemented a reporting mechanism for deceptive AI content.
- TikTok required labeling of AI-generated content and prohibited AI-generated political advertising.
These platform responses were significant but limited. Enforcement relied primarily on self-reporting (advertisers disclosing their own use of AI) and metadata detection (which works only when metadata is preserved). Deepfakes shared organically by individual users — not as paid advertisements — often evaded detection and labeling.
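The fragility of metadata-based labeling is easy to demonstrate. In the sketch below (the file names are hypothetical placeholders), a plain re-encode with the Pillow imaging library produces a new JPEG without the original EXIF block — essentially what happens when content is screenshotted or passed through a re-upload pipeline.

```python
# Sketch: why metadata-based AI labels often do not survive re-encoding.
# File names are hypothetical placeholders.
from PIL import Image

im = Image.open("labeled_ai_image.jpg")
print("EXIF bytes before:", len(im.info.get("exif", b"")))

# Pillow does not carry EXIF over on save unless passed explicitly, so the
# re-encoded file loses whatever provenance tags the original carried.
im.save("reencoded.jpg", quality=90)
print("EXIF bytes after:",
      len(Image.open("reencoded.jpg").info.get("exif", b"")))
```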
The Provenance Approach
The C2PA content provenance standard (Section 18.6) represents a structural approach: rather than trying to detect deepfakes after the fact, embed provenance information in content at the moment of creation. If cameras, AI tools, and editing software all record their contributions to a piece of content, and platforms display this provenance information to viewers, the epistemic chain can be partially restored.
But C2PA faces adoption challenges: it requires buy-in from hardware manufacturers, software companies, platforms, and media organizations. Content shared through channels that strip metadata (messaging apps, screenshots, re-uploads) loses its provenance chain. And content without C2PA metadata is not thereby fake — it simply lacks provenance information, which is the default state for the vast majority of existing content.
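To make the provenance idea concrete, the sketch below is a greatly simplified, hypothetical illustration of the principle — not the C2PA format itself, which uses embedded JUMBF manifests and X.509 certificate signatures. The names `make_manifest`, `verify`, and `SIGNING_KEY` are inventions for this example: a signed record binds a content hash to a description of how the content was produced, so any later alteration breaks verification.

```python
# Greatly simplified illustration of content provenance: bind a content
# hash to a signed record of how the content was made. The real C2PA
# format is far more involved; all names here are illustrative.
import hashlib, hmac, json

SIGNING_KEY = b"tool-vendor-secret"  # stand-in for a real certificate key

def make_manifest(content: bytes, tool: str, action: str) -> dict:
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "tool": tool,
        "action": action,  # e.g. "captured", "ai_generated", "edited"
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, manifest: dict) -> bool:
    sig = manifest.pop("signature")
    payload = json.dumps(manifest, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    ok_hash = manifest["content_sha256"] == hashlib.sha256(content).hexdigest()
    manifest["signature"] = sig
    return ok_sig and ok_hash

audio = b"...raw media bytes..."
m = make_manifest(audio, tool="voice-gen-1", action="ai_generated")
print(verify(audio, m))         # True: provenance intact
print(verify(audio + b"x", m))  # False: content altered after signing
```

The second check fails because the content no longer matches the signed hash — and a metadata-stripping channel destroys even this: without the manifest, there is nothing left to verify.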
Discussion Questions
- The timing problem. The most effective election deepfakes are released when debunking is hardest — during media blackouts, late before elections, on encrypted platforms. What governance mechanisms could address this timing vulnerability? Consider both technical (rapid detection, platform response times) and institutional (election monitoring bodies, rapid-response protocols) solutions.
- The liar's dividend. Some analysts argue that the liar's dividend — the ability to dismiss real content as "deepfake" — is a greater threat to democracy than actual deepfakes. Evaluate this argument. How do you govern a threat that operates through the possibility of fabrication rather than fabrication itself?
- Platform responsibility. Should platforms be legally required to detect and remove election-related deepfakes within a specified time period (e.g., within 24 hours of a report)? What are the risks of such a mandate — including the risk of over-removal (censoring legitimate content)?
- Global governance. The 2024 elections demonstrated that deepfake threats are global, but governance responses are national. A deepfake created in one country can target voters in another. What international governance mechanisms could address this cross-border dimension?
Your Turn: Mini-Project
Option A: Deepfake Detection Test. Find three pieces of media online (images or video) — some of which may be AI-generated and some authentic. Without using any detection tools, attempt to determine which are real and which are synthetic. Document your reasoning. Then, if available, use a deepfake detection tool to check your assessments. Write a one-page reflection on the difficulty of human detection and what this implies for epistemic reliability.
Option B: Election Governance Proposal. Design a governance framework for protecting electoral integrity from deepfake threats. Your framework should include: (a) a pre-election component (rules about AI content in the campaign period), (b) a rapid-response component (what happens when a deepfake is detected close to an election), and (c) an accountability component (consequences for those who create and distribute deceptive deepfakes). Address at least one potential objection to your framework.
Option C: Comparative Platform Policy. Compare the AI content policies of three major platforms (e.g., Meta, YouTube, TikTok) as they applied during the 2024 election cycle. For each, evaluate: (a) what was required, (b) how enforcement worked in practice, (c) what gaps existed, and (d) whether the policy was effective. Write a two-page comparative assessment.
References
- Federal Communications Commission. "Declaratory Ruling: Artificial Intelligence-Generated Voice Cloning Technologies." FCC 24-17, February 8, 2024.
- New Hampshire Attorney General. Investigation into the Steve Kramer robocall operation, Case No. 2024-001 (2024).
- Chesney, Robert, and Danielle Citron. "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security." California Law Review 107 (2019): 1753-1819.
- Paris, Britt, and Joan Donovan. "Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence." Data & Society Research Institute, 2019.
- Ajder, Henry, et al. "The State of Deepfakes: Landscape, Threats, and Impact." Deeptrace Labs, 2019.
- European Parliament. "Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)." Official Journal of the European Union, 2024.
- Vaccari, Cristian, and Andrew Chadwick. "Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News." Social Media + Society 6, no. 1 (2020): 1-13.
- MIT Media Lab. "Detect Fakes." Interactive deepfake detection research tool. https://detectfakes.media.mit.edu.
- Coalition for Content Provenance and Authenticity (C2PA). "C2PA Technical Specification." Version 1.3, 2024. https://c2pa.org.