Case Study 40.1: The AI-Generated Robocall — Who Is Responsible?
Background
This case study is based on documented events from the 2024 New Hampshire Democratic primary, with details altered and extended for analytical purposes.
In the days before a significant primary election, voters in multiple New Hampshire districts received automated phone calls that used an AI-generated voice indistinguishable from the Democratic presidential candidate's actual voice. The calls told voters: "What a bunch of malarkey. You know the value of voting Democratic, but I'm asking you not to vote in [today's/tomorrow's] primary... Save your vote for the November election when it really matters."
The calls reached between 5,000 and 25,000 voters, according to subsequent estimates. The technology used was readily available through commercial AI voice synthesis platforms; the voice clone was produced from publicly available audio of the candidate's speeches. The calls were paid for by a political consultant who was subsequently fined by the FCC.
The Chain of Responsibility
Analyzing who bears responsibility requires mapping the chain of actors involved:
- The voice synthesis platform: A commercial AI company that provides voice cloning capabilities through an API. The platform's terms of service prohibited using the technology to impersonate real individuals, but enforcement of those terms was inconsistent, and the platform did not verify that the voice being cloned belonged to the requesting user.
- The orchestrating consultant: The individual who designed the operation, paid for the calls, and directed their targeted deployment. This is clearly the primary responsible party.
- The robocall vendor: A firm that transmitted the calls without verifying either their content or the authorization to use the cloned voice.
- The telecommunications carriers: The networks that carried the calls to recipients.
- The political information context: The calls were designed to be plausible within the existing political environment; the message echoed real debates about primary participation, which left some recipients uncertain whether the call was genuine.
The Regulatory Response
The FCC subsequently announced:
- A fine against the consultant responsible for orchestrating the operation.
- Guidance clarifying that AI-generated audio in robocalls requires disclosure and that existing anti-spoofing regulations apply.
- Initiation of an investigation into whether the robocall vendor had adequate screening processes.
The voice synthesis platform updated its terms of service but was not subject to direct regulatory action.
State election authorities referred the matter to the state attorney general, who is considering whether existing election law covers AI-generated impersonation in campaign communications.
The Extended Consequences
In subsequent interviews with affected voters, researchers found:
- Approximately 12% of voters who received the call reported that it caused them to reconsider whether to participate in the primary, though most ultimately voted.
- A small but potentially significant fraction of lower-propensity voters cited the call as one factor in their decision not to participate.
- The disclosure that the call was AI-generated caused significant confusion: many voters were uncertain whether the call, initially attributed to the candidate's campaign, had in fact been authorized by it.
- Media coverage of the incident amplified the liar's dividend: a subsequent poll found that 23% of respondents were uncertain whether video footage of the candidate they had previously seen was real or AI-generated (the footage in question was real).
Discussion Questions
1. Map the chain of responsibility: Which actors in this scenario bear primary, secondary, and tertiary responsibility for the democratic harm caused? Use specific criteria for each assignment of responsibility.
2. The voice synthesis platform's terms of service prohibited the use that occurred. Does this absolve the platform of responsibility? What additional platform design choices could have made this operation more difficult to execute?
3. Evaluate the FCC's regulatory response. Was it adequate? What elements of the harm were addressed? What elements were not addressed? What regulatory authority would be needed to address the unaddressed elements?
4. The case illustrates the liar's dividend: the disclosure that the call was AI-generated caused confusion about authentic content. Design a 30-day public communication strategy for the candidate's campaign to address this confusion and restore voter confidence in authentic content. What are the limitations of a campaign-level response to a system-level problem?
5. Suppose you are advising a state legislature considering legislation in response to this incident. Draft the key provisions of a state AI-in-political-communications disclosure law that would: (a) require disclosure; (b) define what counts as AI-generated; (c) establish an enforcement mechanism; (d) handle the edge case of satire and political commentary.
6. Apply the dual-use framework from Chapter 38: the same voice synthesis technology used to create this operation could be used by a legitimate campaign to produce audio content at lower cost. How should a legitimate campaign's use of voice synthesis technology be handled in your proposed regulatory framework? Is there a coherent line between legitimate and illegitimate uses?
7. If you were the analytics director for a major party committee, what policies would you recommend the party adopt regarding: (a) its own use of AI voice synthesis in campaign communications; (b) its response when opposition candidates use AI-generated content; (c) its coordination with platform companies on detection and disclosure?