Voters Should Prepare for AI Deepfakes Targeting Election Process

Election Day is less than one month away, and this year’s presidential campaign has already delivered a series of unprecedented and unexpected events. Yet despite the ink spilled in anticipation of an artificial intelligence (AI)-generated candidate “deepfake,” false depictions of presidential candidates Vice President Kamala Harris and former President Donald J. Trump have been few and inconsequential. However, voters should remain vigilant regarding deepfakes in the lead-up to Election Day—not for misleading videos about candidates, but rather for deepfakes targeting the election process that could be used to sow confusion and foster distrust in the outcome.

Campaign vs. Election Process Misinformation

While often lumped together under the catchall term “election misinformation,” there are two distinct types of misinformation related to elections: campaign misinformation and election process misinformation. The campaign version focuses on portraying candidates and/or their supporters in an inaccurate light, whether positive or negative. Examples include deepfake images of Taylor Swift and her fans supporting Trump and a false video of a Chicago mayoral candidate making provocative statements about police on the eve of the election.

Meanwhile, election process misinformation focuses on procedures that occur before, during, and after Election Day. While less common so far in 2024, the robocall New Hampshire voters received from President Joe Biden’s manipulated voice urging them to skip the Democratic primary serves as a powerful example of election process misinformation.

Though the New Hampshire robocall was quickly discovered and remedied, election process misinformation can be far more pernicious. Deepfakes of election workers destroying ballots or false statements attributed to a local election official about fraudulent votes could create chaos in the days following the election while being much harder to identify as fake and to correct.

Recognizing the distinction between campaign misinformation and election process misinformation is important for two reasons. First, the type of misinformation informs if and how the government should respond. Second, acknowledging the distinction may help the public better recognize which storylines should be consumed with skepticism and confirmed through reputable sources.

Government Response Should Depend on the Type of Misinformation

The government’s role in dealing with election misinformation depends on the type at issue. State and local election officials should attempt to correct false claims about the election process and preemptively educate the public about the safeguards in place to ensure reliable results. The government is poorly suited to prevent false statements about the process, as content restrictions are heavily disfavored under the First Amendment. Election officials should try to minimize the potential impacts by countering false claims with the truth and establishing credibility with the public as the authoritative source of information about election administration.

Meanwhile, campaign misinformation often warrants no response at all from the government. Because the First Amendment generally protects political speech, the best remedy is for campaigns and supporters to correct the false claims themselves. Yet, state lawmakers are increasingly attempting to regulate this type of political speech.

Twenty states currently have laws on the books to regulate the creation and distribution of AI-generated election misinformation, with 15 of those laws enacted in 2024 alone. The most common approach is to require labeling of the deceptive content, although California, Minnesota, and Texas impose an outright ban. Almost all of the laws prioritize protections for candidates and campaigns but do not address deceptive content related to the election process.

California is the exception, though one of its new laws—which included restrictions on deepfakes related to candidates as well as the election process—was placed on hold by a federal judge over concerns that the law’s speech prohibitions violated the First Amendment. So while lawmakers deserve credit for recognizing that the risk of misinformation extends to the election process, their heavy-handed remedy missed the mark.

Public Awareness Is an Important Countermeasure Against All Types

Raising public awareness around the potential for AI-generated deepfakes and other forms of deceptive election content is an important countermeasure that applies to both types of election misinformation. An important part of this effort is emphasizing that the volume of such content is likely to fluctuate depending on the election calendar.

For example, campaign misinformation is likely to be highest in the weeks leading up to Election Day as candidates compete for support from voters who could be swayed by new information portraying a candidate in a positive or negative light. On the other hand, election process misinformation is relevant throughout the entire election cycle—from early voting through certification. Depending on the date, voters must be on the lookout for suspicious storylines ranging from voter registration controversies in the lead-up to the election to ballot-counting concerns in the post-election period.  

In either case, the remedy is for people to remain suspicious of information that provokes an emotional response and to double-check online information across multiple sources before sharing it with others. People should also consider how a given storyline helps or hinders different candidates’ goals and assess the incentives for advancing a false narrative.

Conclusion

Concerns around the potential impact of AI-generated deepfakes on U.S. elections have driven policymakers and the public to focus on the issue primarily through the lens of campaigns and candidates. However, the tactic can also be used to spread doubt about the election process. As with campaign misinformation, lawmakers should resist the temptation to regulate election process misinformation. Instead, they should encourage public awareness and support local election officials in their efforts to counter false narratives and establish credibility as the trusted source of election process information. This way, the public will be better equipped to handle the inevitable onslaught of false claims seeking to cast doubt on the outcome by questioning the procedures that exist to ensure free and fair elections.
