Case Study 2: Deepfake Audio in the 2024 New Hampshire Primary — When AI Enters Elections
Overview
On January 21, 2024, voters in New Hampshire began receiving automated telephone calls featuring a voice that sounded unmistakably like President Joe Biden. The message, delivered in the flat, measured cadence that characterizes Biden's speaking style, urged listeners not to vote in the state's January 23 Democratic primary election. "Republicans have been trying to push nonpartisan voters to participate in their primary," the voice said. "What a bunch of malarkey." The call concluded: "Save your vote for the November election. We'll need your vote in November."
The message was a deepfake — an audio recording synthesized using AI voice cloning technology to imitate Biden's voice without his knowledge or consent, for the explicit purpose of suppressing Democratic primary turnout. It was one of the first documented uses of AI-generated synthetic audio to interfere with an American election, and it triggered a cascade of investigations, legal proceedings, and regulatory responses that established early precedents for how the legal and political system would respond to AI-generated electoral disinformation.
The Production and Distribution of the Robocall
Technical Production
The Biden voice clone was produced using commercially available voice cloning technology. Subsequent investigation identified the AI voice synthesis service used as ElevenLabs, a text-to-speech and voice cloning platform that allows users to clone a voice from sample audio. The cost of producing the voice clone and generating the script was estimated at a few hundred dollars — a demonstration of the dramatic democratization of audio deepfake production.
The voice clone was created from publicly available audio of Biden's speeches and public statements. ElevenLabs and similar services allow voice cloning from relatively short audio samples; a few minutes of source audio is sufficient for high-quality voice synthesis. The resulting clone captured Biden's distinctive vocal characteristics — his rhythm, cadence, timbre, and the verbal tics (including "malarkey") associated with him — with sufficient fidelity to be convincing to listeners who received an unexpected robocall.
The script was written to exploit specific psychological vulnerabilities: it used Biden's own vocabulary, addressed New Hampshire primary voters directly, and framed voter suppression as advice rather than threat ("Save your vote for the November election"). The framing as friendly counsel from Biden himself made the message particularly insidious — it used the authority of the targeted candidate to suppress his own supporters.
Distribution
The calls were distributed through a robocall service, reaching an estimated several thousand New Hampshire voters before the primary. The distribution used a vendor that, like ElevenLabs, provides services that are legitimate in many contexts (telemarketing, political outreach, automated notifications) and whose platform was misused, without authorization, for the deepfake campaign.
The full breadth of the distribution, and how the targeted phone numbers were selected, remained subjects of investigation. Initial reports suggested thousands of calls were placed; subsequent reporting indicated the total may have been larger. The timing — two days before the primary — was designed to maximize impact while minimizing the window for detection and correction.
Discovery and Initial Response
How the Fraud Was Detected
The robocall campaign was identified quickly because of several factors that reduced its effectiveness:
Public reporting: Recipients recognized immediately that the message was suspicious and reported it to news organizations and political campaigns. A voter who received the call forwarded it to a Democratic Party operative who contacted state Democratic Party leadership.
Immediate public attention: The state Democratic Party and national media reported the calls within hours of their distribution, triggering rapid public awareness that a suspicious Biden call was circulating.
Message implausibility: The claim that Biden was personally advising voters not to vote in the primary was implausible on its face — Biden's campaign had chosen not to campaign in the New Hampshire primary for procedural reasons, but actively discouraging supporters from voting was inconsistent with any plausible campaign strategy.
Political context: New Hampshire political observers quickly noted that the calls aligned with voter suppression rather than legitimate political messaging, making the malicious purpose transparent to politically informed recipients.
Despite rapid detection, the window between distribution and public identification may still have influenced a portion of recipients who voted before the story broke.
Platform and Service Response
ElevenLabs, upon being identified as the service whose technology was used, stated that it had a policy prohibiting the use of its platform for political deepfakes and that it was investigating the account responsible. The company subsequently suspended the account used to produce the Biden voice clone.
The robocall distribution vendor's response was more complex, as robocall services are used for a wide range of legitimate purposes and the fraudulent use was not immediately apparent from the surface characteristics of the order.
The Investigation
New Hampshire Attorney General
The New Hampshire Attorney General's office launched an investigation almost immediately, given the direct threat to the state's primary election. The investigation focused on identifying the operators responsible for both the production and distribution of the calls.
The investigation identified Steve Kramer, a political consultant then working for Democratic presidential candidate Dean Phillips, as the person who commissioned the calls. Kramer paid Paul Carpenter, a New Orleans street magician, to create the audio using ElevenLabs' voice cloning service. Kramer claimed his purpose was to demonstrate the threat of AI-generated political misinformation — a claim that investigators and critics found implausible as a good-faith justification for voter suppression.
FCC Action
The Federal Communications Commission (FCC) moved rapidly to address the robocall mechanism. In February 2024, the FCC issued a declaratory ruling clarifying that robocalls using AI-generated voices are subject to the Telephone Consumer Protection Act (TCPA), which governs automated calls. This ruling did not require new legislation; it clarified that existing law's prohibition on automated calls using artificial voices covers AI-generated deepfake audio. The ruling was significant because it established an immediately applicable federal legal framework for AI voice robocalls without waiting for congressional action.
The FCC also worked with state attorneys general to investigate and potentially sanction the entities responsible for distributing the calls.
Legal Proceedings
The New Hampshire Attorney General's investigation resulted in a referral to the FCC and potentially to federal law enforcement. The operators faced potential charges under: the TCPA (for unauthorized automated calls), New Hampshire election law provisions on voter suppression and election interference, and potentially federal statutes on election fraud.
The first financial penalty related to the incident came when the FCC proposed a $6 million fine against Steve Kramer, the consultant who commissioned the calls. The proceeding established early precedents for how regulatory and legal responses would be structured for AI-generated electoral disinformation.
Regulatory and Legislative Response
The FCC Ruling in Detail
The February 2024 FCC declaratory ruling was significant beyond the immediate case. By clarifying that AI-generated voices constitute "artificial voices" under the TCPA, the FCC established that:
- Robocalls using AI voice clones are illegal without proper consent.
- The prohibition applies regardless of which specific AI technology was used to generate the voice.
- Carriers can block AI voice robocalls as illegal traffic.
- Violators face TCPA penalties, which can be substantial on a per-call basis.
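The per-call structure of these penalties is what makes exposure scale so quickly. As an illustration only — using the commonly cited private-action statutory damages of $500 per violating call, trebled for willful or knowing violations, rather than the FCC's separate forfeiture schedule — the arithmetic looks like this:

```python
# Illustrative sketch: TCPA private-action statutory damages are commonly
# cited as $500 per violating call, trebled to $1,500 for willful or
# knowing violations. The call count below is a round figure consistent
# with the "thousands of calls" reported for the New Hampshire incident.

def tcpa_exposure(calls: int, per_call: int = 500, willful: bool = False) -> int:
    """Return total statutory-damage exposure for a robocall campaign."""
    multiplier = 3 if willful else 1
    return calls * per_call * multiplier

# A campaign of 5,000 calls at the base rate, then assuming willfulness:
print(tcpa_exposure(5_000))                # 2500000
print(tcpa_exposure(5_000, willful=True))  # 7500000
```

Even at the base rate, a few thousand calls produce multimillion-dollar exposure — which is why per-call liability is a meaningful deterrent for identifiable domestic operators.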
The ruling addressed the distribution mechanism rather than the content creation — it did not directly regulate AI voice cloning services, but it made the robocall distribution of AI-generated audio illegal without consent.
Congressional Attention
The Biden robocall incident received substantial congressional attention and accelerated multiple legislative efforts:
The DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act), while focused primarily on non-consensual intimate imagery, was part of a broader legislative movement addressing AI-generated content without consent.
The NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe Act) proposed federal protections for individuals' voice and likeness against unauthorized AI replication, directly addressing the voice cloning vulnerability.
The AI Transparency in Elections Act was introduced in both chambers, requiring disclosure of AI-generated content in political communications including robocalls.
State-Level Legislative Response
Several states introduced or enacted legislation specifically addressing AI-generated robocalls in political contexts:
- Minnesota enacted HF 4772 prohibiting the use of deepfakes in campaign materials without disclosure.
- California enacted AB 2839, prohibiting materially deceptive AI-generated election content within 120 days of an election.
- Texas enacted HB 4337 with similar provisions.
- New Hampshire itself considered and advanced legislation addressing AI in elections following the incident.
Media Coverage and Public Impact
Immediacy of Media Attention
The incident received immediate national and international media coverage — a function both of the clear news value of AI-generated presidential voice fraud and of the timing, two days before a significant primary election. Coverage focused on: the technical production of the deepfake, the political context and apparent motivation, the regulatory gap that permitted it, and the broader implications for AI in elections.
Effect on Primary
Assessing the actual effect of the robocall campaign on the New Hampshire primary results is difficult. The calls were identified and publicized quickly enough that many recipients would have been aware of the fraud before voting. At the same time, not all recipients would have received correction before voting, and some may have acted on the false message. The primary results — which Biden won as a write-in candidate despite not being on the ballot — cannot be parsed in a way that isolates the robocall effect.
Public Awareness Effect
Perhaps more consequential than any direct vote effect was the awareness-raising function of the incident. The Biden robocall became a widely cited example in discussions of AI election interference, media literacy education, and policy debates about AI regulation. Survey data from the period showed elevated public concern about AI-generated political content, and the incident informed how news organizations, election officials, and policymakers planned for the remainder of the 2024 election cycle.
Lessons and Broader Implications
Lesson 1: Cost and Accessibility Have Crossed Critical Thresholds
The production of the Biden robocall at a cost of a few hundred dollars, using commercial services accessible to any credit card holder, demonstrates that high-quality audio deepfake production for political disinformation is no longer gated behind specialized expertise or significant resources. This threshold crossing changes the threat model: the relevant question is no longer whether sophisticated state actors could produce AI political disinformation, but whether any political operative or motivated individual can. The answer, demonstrated by this case, is yes.
Lesson 2: Distribution Channels Matter as Much as Production
The choice of robocall as the distribution mechanism was strategically significant. Robocalls reach individuals directly at a high-trust contact point (their phone), without the filtering and skepticism that social media users may apply to unknown digital content. The telephone has an established credibility for official and political communications. The FCC's subsequent ruling clarified that the existing TCPA framework applies, providing a legal response — but the regulatory response came after the incident.
Lesson 3: Rapid Detection is Possible but Not Guaranteed
The New Hampshire deepfake was detected quickly because multiple independent factors made it implausible: political insiders recognized immediately that the message was inconsistent with Biden's actual campaign strategy; the obvious voter suppression goal was transparent; and recipients reported it immediately to political operatives with media contacts. These conditions may not be present in future incidents. A more sophisticated deepfake campaign targeting a lower-profile election with content that is plausible on its face could evade rapid detection.
Lesson 4: The Regulatory Gap Was Real and Significant
Before the FCC's February 2024 ruling, there was no clear federal prohibition specifically applicable to AI-generated voice robocalls in political contexts. The FCC's clarification that existing TCPA provisions cover AI voices was legally sound but required a reactive regulatory interpretation rather than anticipatory legislative design. The experience suggests that explicit anticipatory legislation — rather than waiting to clarify how existing law applies — would better address the AI election interference threat.
Lesson 5: Correction Does Not Fully Undo Impact
Research on misinformation correction consistently finds that corrections reduce but do not eliminate the belief effects of false information. Applied to the New Hampshire deepfake: even for voters who received both the false call and the subsequent correction, the correction may not have fully neutralized the call's effect. The voters who received the call but not the correction before voting represent an irreversible information harm. This temporal dimension — the gap between disinformation distribution and correction — is an inherent advantage for bad actors in time-sensitive electoral contexts.
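The reasoning above can be made concrete with a toy model of the correction gap. All parameters here are hypothetical illustrations, not empirical estimates from this incident: a campaign reaches some number of voters, only a fraction see the correction before voting, and the correction undoes only part of the effect for those who do see it.

```python
# Toy model (hypothetical parameters) of residual harm after a correction:
# some recipients never see the correction before voting, and for those
# who do, research suggests the correction only partially undoes belief.

def residual_impact(reach: int,
                    corrected_share: float,
                    persuasion_rate: float,
                    correction_efficacy: float) -> float:
    """Expected number of voters still acting on the false message."""
    uncorrected = reach * (1 - corrected_share) * persuasion_rate
    corrected = reach * corrected_share * persuasion_rate * (1 - correction_efficacy)
    return uncorrected + corrected

# 5,000 recipients, 80% see the correction in time, 10% initially
# persuaded, and the correction undoes 70% of the effect it reaches:
print(round(residual_impact(5_000, 0.8, 0.1, 0.7)))  # 220
```

Under these assumed numbers, roughly 220 of 5,000 recipients would still act on the false message even with a fast, mostly effective correction — the irreversible-harm point in quantitative form.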
Discussion Questions
- The operator of the Biden deepfake robocall campaign claimed the purpose was to demonstrate the threat of AI-generated political misinformation. Does this claim change your assessment of the ethical and legal culpability of the actors involved? Should intent matter in evaluating AI election interference?
- The FCC's response — clarifying that existing TCPA provisions cover AI voice robocalls — was effective because a relevant statute already existed and clearly applied with interpretation. What areas of AI electoral interference lack equivalent existing legal coverage? What new statutory authority would be needed?
- The Biden robocall was detected quickly partly because its voter suppression purpose was transparent to politically informed recipients. How would you assess the detectability of a more sophisticated AI deepfake campaign — one designed with plausible content, delivered at the right time, through an appropriate channel? What detection mechanisms would remain effective?
- Media coverage of the incident served as both warning and tutorial: it informed the public about the threat, but also publicized the specific techniques used. How should news organizations balance the public's right to information about AI election interference threats against the risk that coverage teaches bad actors about what works?
- Several states responded with AI disclosure legislation. Design an alternative regulatory approach that you believe would be more effective than disclosure requirements in preventing AI-generated voter suppression. What are the tradeoffs of your approach?
Key Facts and Timeline
| Date | Event |
|---|---|
| January 21, 2024 | Biden deepfake robocalls distributed to New Hampshire voters |
| January 22, 2024 | Calls identified and reported to Democratic Party and media |
| January 23, 2024 | New Hampshire Democratic primary — Biden wins as write-in |
| January 24, 2024 | National news coverage of deepfake robocalls |
| February 2024 | FCC declaratory ruling: AI voice robocalls illegal under TCPA |
| February 2024 | ElevenLabs suspends account used to produce voice clone |
| March 2024 | FCC investigation opened; referrals to state attorneys general |
| 2024 | Multiple state legislatures introduce AI election interference bills |
| 2024 | FCC proposes $6 million fine against Steve Kramer |
- Estimated production cost of Biden voice clone: a few hundred dollars
- Technology used: ElevenLabs voice cloning service
- Distribution mechanism: automated robocall service
- Distribution target: New Hampshire Democratic primary voters
- Detection timeline: hours (same day as distribution)