How Local Election Officials Can Prepare for the Risks of AI

  • Deepfake images and video, now easier to create and more believable, are a rising concern as the general election nears.
  • Using them to spread false information about candidates may not be the most serious damage AI could do.
  • A network of experts has created demonstrations, training and resources to help those running elections respond to AI-generated interference in their work.

    The potential for artificial intelligence (AI) to disrupt elections has been much discussed, but no one knows for sure what might emerge from this Pandora’s box. Experts in AI and election administration are working to increase awareness of the possibilities, from the aggravating to the truly dangerous, and to create resources to help election officials prepare for them.

    Since the beginning of the year, 14 states have enacted legislation intended to prevent the use of misleading AI-generated images and videos (deepfakes) in elections, according to a recent Brennan Center brief. Most of these laws didn’t apply until about 90 days before the election, and it’s too early to know what to expect from enforcement, says Larry Norden, the brief’s lead author and vice president of Brennan’s Elections and Government Program. Norden is part of a network of experts in law, technology and election administration who have developed educational materials, training and demonstrations to give election officials deeper awareness of what readily available AI tools can do.

    Norden says he won’t be surprised to see deepfakes go viral despite these laws, in part because domestic actors won’t be the only ones creating them. Intelligence agencies have warned about intensifying election interference from foreign actors beyond the reach of U.S. law, including Russia, China and Iran.

    Also, experts say, deepfakes are a small part of the AI problem.

    “What I’m worried about the most is direct interference with the time, place and manner of voting,” said Kathy Boockvar, Pennsylvania’s secretary of state during the 2020 election.

    A California nonprofit, CivAI, has created an online “Deepfake Sandbox” to give public officials a chance to get their hands dirty and see firsthand how AI can make such interference possible.

    A video created by The Future US (@seeTheFutureUS, April 7, 2024) portrays an existing possibility: an AI robocall that has the feel of a human conversation.

    Not an “Emerging” Technology

    CivAI has created interactive demonstrations of AI’s capabilities to educate the public about this technology. Without personalized, firsthand exposure, it can be difficult for people trying to regulate AI or respond to it to understand what’s possible, says Lucas Hansen, CivAI’s co-founder. “It’s proof that this technology is realized, not another emerging technology that has no impact and isn’t real,” Hansen says.

    The public page for the sandbox allows any user to make a deepfake image. Public officials can experience a greatly expanded set of demonstrations by requesting access.

    An AI-generated tweet made in a “sandbox” created by CivAI, which gives public officials firsthand experience of the power and ease of use of AI tools, exemplifies the kind of misinformation that could directly influence voter behavior. (CivAI)

    There, they can experiment with such things as cloning a voice to make plausible AI audio, generating fake news stories and tweets to promote them, or using AI to create a mail campaign, harassing email or a social media message saying that a polling place or time has changed.

    Few people realize how much information — biographical data, photos, audio and other personal details — a program like ChatGPT can scrape from the Internet as raw material for believable messages, Hansen says. In the sandbox, a prompt instantly pulled a photo and bio from LinkedIn and generated a fake, scandalous and visually authentic New York Times story about this writer, and a tweet to promote it. Moreover, Hansen wrote the prompt in Russian, and the program created the fake story page in perfect English.

    This language capability concerns Noah Praetz, who managed one of the country’s largest election districts before founding The Elections Group. The Voting Rights Act requires election officials to provide language assistance to certain language-minority groups whose local populations reach set thresholds, but that doesn’t mean officials have the capacity or staff to monitor or respond to misinformation in every language spoken in their communities. (All told, Americans speak more than 350 languages.)

    “There are communication pockets within the United States that aren’t particularly accessible to election officials to combat anything that might be false,” Praetz says.

    AI images of ballot dumping could spawn real-world trouble. (The Future US, @seeTheFutureUS, April 8, 2024)

    Ready for Everything

    The Brennan Center partnered with The Elections Group and the Institute for the Future to create a guide to help election officials recognize and respond to AI threats. It includes a review of possible threats, scenarios in which they might come into play and guidelines for both pre-emptive action and response.

    The guide has been used as a foundation for tabletop exercises with election officials that simulate AI-caused disruptions and give them opportunities to work through the best ways to respond. In some of these exercises, Praetz says, trainers mixed in “instructions” they had created using AI, such as a voicemail from a state official asking participants to do something out of the ordinary. It wasn’t until this happened a couple of times that people began to ask whether it was the official’s voice or AI, he says.

    If an odd request comes in while ballots are being counted, for example, it’s essential to verify that whoever is communicating is who they say they are. “Radar better go up,” Praetz says. “Talk to them out of channel, ask them to go on a video call or tell you something about you that only they would know.”

    Barbara Byrum, clerk for Ingham County, Mich., attended one of these tabletop exercises. Hers included an AI-created video from the state elections director. She knew his speaking cadences and hand mannerisms well enough to see that they were off.

    “But what if that AI comes from a text message or email from the chief legal counsel for the Secretary of State’s Office?” she says. The exercise created an opportunity to consider this possibility and bring simple (and effective) responses into view, such as checking the email address or calling the sender back.

    A seven-step checklist for mitigating AI threats came out of the training, focused less on deepfakes than on how AI could be used to disrupt election administration. The first step is becoming familiar with the AI capabilities that are covered in the guide. “We sent that to every single election official in the country,” Norden says. “There’s a lot they can do, and a lot they are doing.”

    The U.S. Election Assistance Commission has published an AI toolkit for election officials. The Cybersecurity and Infrastructure Security Agency offers training on the use of AI by foreign influence operations. At this point in the election cycle, Norden thinks training and education can do more to help keep things on track than new legislation.

    Byrum has had the good fortune to attend election security training exercises with partners at the local, state and federal levels, she says. She gives credit to Michigan lawmakers for enacting some of the first bills requiring political ads made with AI to include disclaimers and setting penalties for those who don’t comply.

    She’s better prepared than many, perhaps, but Michigan’s status as a must-win swing state makes it a target for interference. Byrum can’t let her guard down. “What I learned in 2020 is that nothing is typical in election administration,” Byrum says. “Election administrators and clerks have to be ready for everything.”

    An AI version of Arizona Secretary of State Adrian Fontes reveals his digital origins. Careful observers, or those who know him personally, might recognize this as a deepfake, but such tells will become harder to spot as the technology advances. AI makes it possible for a subject to speak in virtually any language. (Brennan Center)

