Case Study: AI Deepfakes in 2024 Elections
Analysis of documented AI-generated deepfake incidents during the 2024 global election cycle, including the New Hampshire Biden robocall, Slovakian audio deepfake, and broader implications for electoral integrity.
Overview
2024 was the largest election year in history, with over 4 billion people in more than 60 countries participating in national elections. It was also the first major election cycle in which generative AI tools capable of producing convincing audio and video deepfakes were widely accessible. The intersection of these two facts produced multiple documented incidents where AI-generated content was used to deceive voters, suppress turnout, or undermine trust in electoral processes.
The incidents ranged from crude AI-generated robocalls to sophisticated audio deepfakes that fooled trained journalists. While the 2024 elections did not see the worst-case scenarios that some analysts feared --- no single deepfake decisively swung a national election --- the documented incidents established a pattern of how AI-generated media is being weaponized in electoral contexts and revealed significant gaps in detection, regulation, and voter resilience.
This case study examines the most significant documented incidents, the technical landscape of deepfake creation and detection, and the lessons for protecting electoral integrity in an era of generative AI.
Timeline
September 2023: Two days before Slovakia's parliamentary elections, an AI-generated audio deepfake surfaces. The audio purports to be a conversation between liberal party leader Michal Simecka and a journalist discussing vote buying. The timing (48 hours before the election) exploits the pre-election media quiet period, limiting opportunities for debunking. Simecka's party loses the election.
January 21, 2024: Voters in New Hampshire receive AI-generated robocalls impersonating President Joe Biden's voice, urging them not to vote in the upcoming Democratic primary. The calls tell recipients "your vote makes a difference in November, not this Tuesday" and discourage participation in the January 23 primary. An estimated 5,000-25,000 voters receive the calls.
January-February 2024: Investigation traces the Biden robocall to political consultant Steve Kramer, who commissioned the calls using ElevenLabs voice cloning technology. Kramer later states he intended to draw attention to the dangers of AI in elections. The FCC declares AI-generated robocalls illegal under existing Telephone Consumer Protection Act provisions.
February 2024: During the Indonesian presidential election, AI-generated video of the late President Suharto (who died in 2008) endorsing a candidate circulates on social media. The video is quickly identified as AI-generated but still reaches millions of viewers.
February 2024: In Pakistan's national elections, AI-generated audio of jailed former Prime Minister Imran Khan circulates with messages to supporters. Both authentic and AI-generated audio of Khan are in circulation, creating confusion about which messages represent his actual views.
March 2024: AI-generated deepfake audio and video targeting candidates proliferate in India's general election campaign. The Election Commission of India issues advisories about deepfake content. Political parties themselves begin using AI-generated content for campaign purposes, blurring the line between legitimate campaign material and disinformation.
June 2024: During the European Parliament elections, multiple member states report instances of AI-generated disinformation targeting EU candidates. The EU's Digital Services Act provisions for election integrity are tested for the first time.
September-November 2024: The US presidential campaign experiences multiple deepfake incidents including AI-generated audio clips of candidates, manipulated video shared on social media, and AI-generated images used in political advertising. Meta, Google, and X (Twitter) implement varying levels of AI content labeling.
November 2024: Post-election analysis finds that while deepfakes were deployed in numerous countries, their direct impact on election outcomes was difficult to measure. The more significant effect was erosion of trust: the existence of deepfake technology provided plausible deniability for authentic damaging content ("it's a deepfake") and reduced public trust in all media.
Technical Analysis
The Deepfake Accessibility Landscape
The 2024 election cycle was the first where high-quality deepfake generation tools were accessible to non-technical users:
# Deepfake generation accessibility analysis (as of 2024)
from dataclasses import dataclass
from enum import Enum
class SkillLevel(Enum):
    NON_TECHNICAL = "non_technical"  # No programming required
    BASIC = "basic"                  # Basic command line / UI usage
    INTERMEDIATE = "intermediate"    # Some ML knowledge
    ADVANCED = "advanced"            # Deep learning expertise
@dataclass
class DeepfakeToolAccessibility:
"""Accessibility profile of a deepfake generation tool."""
name: str
modality: str # audio, video, image
skill_required: SkillLevel
cost: str
quality: str
generation_time: str
sample_data_needed: str
DEEPFAKE_TOOLS_2024 = [
    DeepfakeToolAccessibility(
        name="ElevenLabs (voice cloning)",
        modality="audio",
        skill_required=SkillLevel.NON_TECHNICAL,
        cost="$5-22/month subscription",
        quality="High - convincing to most listeners",
        generation_time="Seconds to minutes",
        sample_data_needed="As little as 30 seconds of target voice",
    ),
    DeepfakeToolAccessibility(
        name="VALL-E / open-source TTS cloning",
        modality="audio",
        skill_required=SkillLevel.BASIC,
        cost="Free (open-source, requires GPU)",
        quality="Medium-High",
        generation_time="Minutes",
        sample_data_needed="3-10 seconds of target voice",
    ),
    DeepfakeToolAccessibility(
        name="Stable Diffusion + LoRA",
        modality="image",
        skill_required=SkillLevel.BASIC,
        cost="Free (open-source, requires GPU)",
        quality="High for images, lower for video",
        generation_time="Seconds per image",
        sample_data_needed="5-20 photos of target face",
    ),
    DeepfakeToolAccessibility(
        name="Sora / Runway / Kling (video generation)",
        modality="video",
        skill_required=SkillLevel.NON_TECHNICAL,
        cost="$12-100/month subscription",
        quality="Medium-High, improving rapidly",
        generation_time="Minutes to hours",
        sample_data_needed="Text description or reference image",
    ),
    DeepfakeToolAccessibility(
        name="Open-source face swap (DeepFaceLab, etc.)",
        modality="video",
        skill_required=SkillLevel.INTERMEDIATE,
        cost="Free (requires powerful GPU)",
        quality="Medium - artifacts visible on close inspection",
        generation_time="Hours for training, real-time for inference",
        sample_data_needed="Hundreds of target face images (or video)",
    ),
]

The New Hampshire Biden Robocall
The Biden robocall incident is the best-documented case of AI-generated audio deployed for electoral interference in the US:
# Analysis of the New Hampshire Biden robocall incident
class BidenRobocallAnalysis:
    """
    Technical and operational analysis of the January 2024
    AI-generated Biden robocall targeting New Hampshire voters.
    """

    @staticmethod
    def technical_details() -> dict:
        return {
            "voice_cloning_tool": "ElevenLabs",
            "sample_data_source": "Publicly available Biden speech recordings",
            "audio_quality": "Convincing to most recipients; matched Biden's "
                             "speech patterns, cadence, and vocal characteristics",
            "script_content": (
                "The AI-generated audio told voters: 'What a bunch of "
                "malarkey. You know the value of voting Democratic. Your "
                "vote makes a difference in November, not this Tuesday... "
                "Voting this Tuesday only enables the Republicans in their "
                "quest to elect Donald Trump again. Your vote makes a "
                "difference in November, not this Tuesday.'"
            ),
            "distribution_method": "Robocall via spoofed caller ID",
            "scale": "Estimated 5,000-25,000 calls",
            "caller_id_spoofing": "Displayed the number of a known "
                                  "Democratic political operative",
        }

    @staticmethod
    def detection_and_attribution() -> dict:
        return {
            "initial_detection": (
                "Recipients reported the calls to election officials "
                "and media. The New Hampshire Attorney General's office "
                "began investigation within hours."
            ),
            "voice_analysis": (
                "Audio forensic analysis identified artifacts consistent "
                "with AI voice synthesis. The speech patterns were "
                "accurate but certain prosodic features (micro-pauses, "
                "breathing patterns) were inconsistent with Biden's "
                "natural speech."
            ),
            "attribution_timeline": (
                "ElevenLabs identified the account used to generate the "
                "voice clone and suspended it. The telecom provider "
                "Lingo Telecom identified the call source. Investigation "
                "traced the calls to political consultant Steve Kramer."
            ),
            "legal_response": [
                "NH AG issued cease-and-desist to Lingo Telecom",
                "FCC declared AI-generated robocalls illegal (Feb 2024)",
                "FCC proposed $6 million fine against Lingo Telecom",
                "Steve Kramer indicted on felony charges (May 2024)",
                "ElevenLabs suspended the account and enhanced verification",
            ],
        }

The Slovakian Audio Deepfake
The Slovak election deepfake demonstrated the particular danger of audio deepfakes deployed in the pre-election quiet period:
# Slovakian election deepfake analysis
class SlovakianDeepfakeAnalysis:
    """
    Analysis of the audio deepfake targeting the 2023 Slovak
    parliamentary election, relevant as a precedent for 2024.
    """

    @staticmethod
    def incident_details() -> dict:
        return {
            "date": "September 28, 2023 (2 days before September 30 election)",
            "content": (
                "AI-generated audio purporting to be a conversation "
                "between Michal Simecka (Progressive Slovakia party leader) "
                "and Monika Todova (journalist) discussing plans to "
                "manipulate the election through vote buying."
            ),
            "distribution": "Shared on Facebook and messaging platforms",
            "timing_exploitation": (
                "Released during the 48-hour pre-election 'moratorium' "
                "period when media are prohibited from publishing election "
                "content. This meant:\n"
                "- Fact-checkers could not publish debunking during the period\n"
                "- Traditional media could not cover the deepfake story\n"
                "- The audio spread on social media without institutional "
                "counter-narrative\n"
                "- By election day, many voters had heard the audio but "
                "not seen it debunked"
            ),
            "election_result": (
                "Simecka's Progressive Slovakia party lost to Robert Fico's "
                "SMER party. While the deepfake's direct impact on the "
                "result is impossible to measure, the timing and reach "
                "were sufficient to be a contributing factor in a close race."
            ),
        }

    @staticmethod
    def strategic_lessons() -> list[dict]:
        return [
            {
                "lesson": "Timing is the most powerful amplifier",
                "detail": "Releasing deepfakes during media quiet periods, "
                          "immediately before elections, or on Friday evenings "
                          "(when newsrooms are understaffed) maximizes impact "
                          "by limiting debunking capacity.",
            },
            {
                "lesson": "Audio deepfakes are harder to debunk than video",
                "detail": "Consumers are increasingly aware that video can "
                          "be faked but retain higher trust in audio. Audio "
                          "deepfakes also require more sophisticated analysis "
                          "to definitively identify as synthetic.",
            },
            {
                "lesson": "Regulatory frameworks have temporal blind spots",
                "detail": "Election media moratoriums were designed to prevent "
                          "last-minute smear campaigns. AI deepfakes exploit "
                          "these same moratoriums by deploying content that "
                          "cannot be effectively countered during the restricted "
                          "period.",
            },
        ]

Detection Technologies and Their Limitations
The 2024 election cycle tested the effectiveness of deepfake detection technologies in a real-world electoral context:
# Deepfake detection technology assessment
from dataclasses import dataclass
@dataclass
class DetectionTechnology:
"""A deepfake detection approach and its real-world effectiveness."""
name: str
approach: str
strengths: list[str]
limitations: list[str]
real_world_deployment: str
effectiveness_2024: str
DETECTION_TECHNOLOGIES = [
DetectionTechnology(
name="Audio spectral analysis",
approach="Analyze frequency spectrum patterns that differ between "
"natural and synthesized speech",
strengths=[
"Can detect specific synthesis artifacts",
"Works on audio-only content",
"Real-time analysis possible",
],
limitations=[
"Effectiveness degrades as synthesis quality improves",
"Compression artifacts (phone, messaging apps) mask synthesis artifacts",
"High false positive rate on low-quality legitimate audio",
],
real_world_deployment="Used by forensic analysts and platforms",
effectiveness_2024="Medium - effective on lower-quality deepfakes; "
"unreliable against state-of-the-art synthesis",
),
DetectionTechnology(
name="Visual artifact detection (CNN-based)",
approach="Neural network trained to identify visual artifacts in "
"face-swapped or generated video",
strengths=[
"Can detect face-swap inconsistencies",
"Works on individual frames",
"Automated at scale",
],
limitations=[
"Arms race: detectors train on current generators; new generators evade",
"Social media compression destroys subtle artifacts",
"Cross-platform sharing degrades detection accuracy",
],
real_world_deployment="Deployed by Meta, Google, Microsoft",
effectiveness_2024="Medium - effective on known generator outputs; "
"lower accuracy on novel generators",
),
DetectionTechnology(
name="Content provenance (C2PA/CAI)",
approach="Cryptographic signing of content at creation to establish "
"provenance chain from camera to publication",
strengths=[
"Cannot be forged (cryptographic guarantee)",
"Establishes positive authenticity (this IS real)",
"Industry-backed standard (Adobe, Microsoft, etc.)",
],
limitations=[
"Only works for content created with C2PA-compatible devices",
"Adoption in 2024 was still very limited",
"Does not help with content that lacks provenance metadata",
"Can be stripped by simple file operations (screenshot, re-encode)",
],
real_world_deployment="Limited - camera manufacturers beginning adoption",
effectiveness_2024="Low - insufficient adoption for meaningful impact",
),
DetectionTechnology(
name="AI content watermarking (SynthID, etc.)",
approach="Embed imperceptible watermarks in AI-generated content "
"at generation time",
strengths=[
"Invisible to humans",
"Survives some transformations (compression, cropping)",
"Enables automated detection at platform scale",
],
limitations=[
"Only works for content generated by participating tools",
"Open-source generators can remove or not include watermarks",
"Adversarial attacks can remove watermarks",
"Does not cover content generated before watermarking was implemented",
],
real_world_deployment="Google SynthID, Meta watermarking on AI images",
effectiveness_2024="Low-Medium - significant adoption gaps",
),
]The Liar's Dividend
Perhaps the most significant impact of deepfake technology on the 2024 elections was not the deepfakes themselves but the "liar's dividend" --- the phenomenon where the mere existence of deepfake technology provides plausible deniability for authentic damaging content:
# The Liar's Dividend analysis
class LiarsDividendAnalysis:
"""
Analysis of the 'liar's dividend' effect on electoral integrity.
The liar's dividend is the phenomenon where the existence of
deepfake technology allows anyone to dismiss authentic damaging
content by claiming it is AI-generated.
"""
@staticmethod
def mechanism() -> dict:
return {
"definition": (
"The liar's dividend occurs when public awareness of "
"deepfake capabilities erodes trust in ALL media, "
"including authentic content. Politicians and public figures "
"can dismiss genuine recordings, photos, or videos as "
"'deepfakes' or 'AI-generated' without evidence."
),
"2024_examples": [
"Multiple politicians dismissed authentic unflattering "
"recordings as 'deepfakes' without providing evidence",
"Social media users expressed uncertainty about the "
"authenticity of real footage shared during campaigns",
"Voter surveys showed declining trust in all media, "
"with 'it could be AI' cited as a reason",
],
"asymmetry": (
"The liar's dividend is asymmetric: it benefits those "
"with something to hide (who can dismiss real evidence) "
"and harms those telling the truth (whose authentic "
"evidence is doubted). This asymmetry favors "
"disinformation actors over transparency."
),
"measurement_challenge": (
"The liar's dividend is harder to measure than direct "
"deepfake impact because it manifests as a diffuse erosion "
"of trust rather than a specific incident. Surveys showing "
"declining trust in media may partially reflect deepfake "
"awareness but are difficult to isolate from other factors."
),
}
@staticmethod
def countermeasures() -> list[dict]:
return [
{
"measure": "Content provenance infrastructure",
"description": "Establish cryptographic provenance for "
"authentic content so that 'this is real' "
"can be proven, not just 'this is fake' detected",
"timeline": "3-5 years for meaningful adoption",
},
{
"measure": "Media literacy education",
"description": "Educate voters to evaluate content based "
"on source credibility and context, not just "
"visual/audio quality",
"timeline": "Ongoing - generational effort",
},
{
"measure": "Rapid authentication services",
"description": "Government and media organizations provide "
"rapid verification services for potentially "
"manipulated election-related content",
"timeline": "Implementable for specific election cycles",
},
{
"measure": "Regulatory framework",
"description": "Laws specifically criminalizing AI-generated "
"election disinformation with meaningful "
"enforcement mechanisms",
"timeline": "In progress across multiple jurisdictions",
},
]Lessons Learned
For Election Security
1. Audio deepfakes are the immediate threat: Video deepfakes receive more attention, but audio deepfakes were the primary vector in documented 2024 election incidents. Audio is easier to generate convincingly, harder to detect, and consumed in contexts (phone calls, messaging apps) where forensic analysis is difficult.
2. Timing exploitation is the key amplifier: The most impactful deepfake deployments exploited timing --- pre-election quiet periods, weekend news cycles, or moments of political crisis --- to maximize reach before debunking was possible. Defense must include rapid-response capabilities that can operate outside normal media cycles.
3. Platform response speed is critical: The window between deepfake deployment and effective debunking determines the impact. Social media platforms' ability to flag, label, and reduce the distribution of suspected deepfakes within hours rather than days is essential.
4. The liar's dividend may outweigh direct deepfake impact: The erosion of trust in all media --- authentic and synthetic --- may be a more significant long-term threat to electoral integrity than any individual deepfake incident.
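The timing and response-speed lessons above can be made concrete with a toy model. The sketch below is not drawn from the case study: the seed audience, growth factor, and delay values are illustrative assumptions, intended only to show why cumulative exposure grows sharply with every hour a correction is delayed.

```python
# Toy model (illustrative assumptions, not measured data): exposure to a
# deepfake compounds each hour until a platform correction takes effect.

def exposure_before_correction(
    seed_views: int = 1_000,
    growth_per_hour: float = 1.5,    # assumed hourly spread factor
    response_delay_hours: int = 12,  # hours until the content is labeled/limited
) -> int:
    """Total views accumulated before the correction lands."""
    views, total = seed_views, 0
    for _ in range(response_delay_hours):
        total += views
        views = int(views * growth_per_hour)
    return total

if __name__ == "__main__":
    # Compare a same-day response with a quiet-period (48-hour) delay.
    for delay in (6, 12, 24, 48):
        print(f"{delay:2d} h delay -> ~{exposure_before_correction(response_delay_hours=delay):,} views")
```

Under these assumed parameters, each doubling of the response delay multiplies pre-correction exposure by far more than two, which is the dynamic the Slovak moratorium exploit relied on.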
For Technical Defense
1. Detection alone is insufficient: No detection technology achieved sufficient accuracy and coverage to meaningfully counter election deepfakes in 2024. The defensive strategy must combine detection with provenance (proving content is authentic), platform policies (reducing distribution of suspected deepfakes), and legal deterrence.
2. Provenance is more valuable than detection: Proving that authentic content IS real (through cryptographic provenance) is more useful than trying to prove that specific content IS fake. Investment in C2PA/CAI content provenance standards should be prioritized alongside detection research.
3. Cross-platform coordination is essential: Deepfakes spread across platforms. Detection on one platform does not prevent spread on others. Cross-platform coordination on deepfake identification and labeling is needed but faces competition, privacy, and jurisdictional challenges.
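The provenance-over-detection argument can be sketched in a few lines. This is a simplified stand-in, not the C2PA protocol: real C2PA manifests use X.509 certificates and public-key signatures, whereas this sketch uses Python's stdlib `hmac` with a shared key purely to illustrate the core idea of binding a signature to a content hash at creation time.

```python
# Simplified provenance sketch (stand-in for C2PA-style signing): a
# signature over the content's SHA-256 digest proves the bytes are
# unchanged since capture. HMAC is used here only because it is in the
# stdlib; real provenance systems use public-key certificates.
import hashlib
import hmac

def sign_content(content: bytes, device_key: bytes) -> str:
    """Create a provenance tag over the content's SHA-256 digest."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(device_key, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str, device_key: bytes) -> bool:
    """True only if the content is byte-identical to what was signed."""
    return hmac.compare_digest(sign_content(content, device_key), tag)

if __name__ == "__main__":
    key = b"camera-device-key"          # stand-in for a device certificate
    original = b"recorded audio bytes"
    tag = sign_content(original, key)
    print(verify_content(original, tag, key))               # authentic: True
    print(verify_content(b"edited audio bytes", tag, key))  # tampered: False
```

Note the asymmetry this illustrates: verification gives a positive guarantee ("this is the original"), whereas a detector can only ever give a probabilistic "this looks fake," which is why the document argues provenance investment should be prioritized.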
For Red Teams
1. Election security assessments should include deepfake scenarios: Red team exercises for election security should include scenarios where AI-generated content is deployed at critical moments, testing the organization's detection, verification, and communication response.
2. Test the full response chain: Beyond technical detection, test the organizational response: How quickly can a suspected deepfake be verified? Who has authority to issue a public statement? How is the determination communicated to voters?
3. Assess voter resilience: Evaluate whether voter education and media literacy efforts are effectively reaching the populations most vulnerable to deepfake deception.
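A red-team exercise testing the full response chain can be scaffolded in the same dataclass style used throughout this case study. The steps, owners, and time targets below are hypothetical examples, not prescribed values; the point is that each link in the chain should have a named owner and a measurable target.

```python
# Hypothetical response-chain checklist for a deepfake tabletop exercise.
# Step names, owners, and minute targets are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ResponseStep:
    step: str
    owner: str
    target_minutes: int  # assumed service-level target for this step

DEEPFAKE_RESPONSE_CHAIN = [
    ResponseStep("Triage report of suspected deepfake", "monitoring team", 30),
    ResponseStep("Forensic verification of the media", "forensics analyst", 120),
    ResponseStep("Authorize a public statement", "designated spokesperson", 60),
    ResponseStep("Publish determination to voters and press", "comms team", 60),
]

def total_response_hours(chain: list[ResponseStep]) -> float:
    """Worst-case end-to-end response time if steps run sequentially."""
    return sum(s.target_minutes for s in chain) / 60

if __name__ == "__main__":
    for s in DEEPFAKE_RESPONSE_CHAIN:
        print(f"{s.step:45s} owner={s.owner}")
    print(f"End-to-end target: {total_response_hours(DEEPFAKE_RESPONSE_CHAIN):.1f} h")
```

An exercise then measures each step against its target; any step with no named owner, or an end-to-end total longer than the attacker's exploitation window, is a finding.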
References
- NBC News, "AI-generated Biden robocalls urged New Hampshire voters to stay home on primary day," January 2024
- FCC, "FCC Makes AI-Generated Voices in Robocalls Illegal," February 2024
- Reuters, "Audio deepfake of Slovak liberal party leader surfaces before vote," September 2023
- Ajder, H., et al., "The State of Deepfakes: Landscape, Threats, and Impact," Deeptrace (now Sensity), 2019 (foundational report)
- Freedom House, "Freedom on the Net 2024: The Struggle for Trust Online," October 2024
- Witness.org and Partnership on AI, "Prepare, Don't Panic: Synthetic Media and Elections," 2024
Discussion Questions
1. Why was the timing of the Slovakian election audio deepfake particularly effective?
2. What is the 'liar's dividend' and why is it potentially more damaging than direct deepfake attacks?