Is this year of elections also the year of deepfakes?

There have been many warnings that in 2024, when millions around the globe head to the polls, we might face unprecedented efforts, powered by artificial intelligence (AI), to mislead voters. So far, we’re mostly seeing satire.

In late 2023, Karen Rebelo from the Indian fact-checking outlet BOOM Live first stumbled upon videos she suspected were made using AI-generated voice clones of politicians. She wasn’t sure what to do. As part of her job as a fact-checker, she “debunks” fakery, but she cannot publish a debunk based on a gut feeling.

The videos looked poorly made, but the audio sounded real, which was confusing. “It sounded uncannily like their voices. It didn’t sound robotic in any way,” Rebelo said.

It took months to find experts who could test and confirm her suspicion. They concluded that the clips were likely made with a popular AI tool that lets users upload voice samples for text-to-speech generation.

Rebelo worried that she would see an overwhelming amount of false AI-generated content in the run-up to the general election in India this spring. But to her surprise, she turned out to be wrong. At least so far.

Deepfakes or just ugly cartoons?

So far in 2024, AI tools have been used for fake election endorsements, bot comments and calls to boycott elections. Nevertheless, experts say we still rarely see so-called deepfakes that are indistinguishable from authentic videos or that could have serious consequences.

Sophie Murphy-Byrne, senior government affairs manager at Logically, an AI-powered company fighting false information, said that their team conducted 224 Indian election-related fact-checks, but only 4% were about AI-generated content. “We predominantly noticed the use of cheapfakes over deepfakes,” Murphy-Byrne said.

The term “deepfake” refers to AI-generated but real-looking videos, images and audio that convincingly mimic real individuals’ likenesses. A “cheapfake”, by contrast, is a piece of media altered with simple, easy-to-access methods, for example by speeding it up, slowing it down or cutting out a part.

If you have uploaded many pictures, videos and audio of yourself online, deepfakes could be used to imitate you too. But the particular concern is that the technology will be used not only to mock experts, politicians and leaders of important institutions, but also to spread conspiracy theories, sow distrust and undermine democracies.

Rest of World, a media outlet focusing on non-Western countries, has been tracking cases where AI has been used in elections. One of the project’s editors, Russell Brandom, has noticed that so far AI is mostly being used to make content he likens to ugly political cartoons used for trolling.

“It’s hard to say that they’re trying to deceive anyone,” Brandom said.

Currently, there is value in using AI for that: it has a novel look and attracts attention. “I think there’s also the thrill of transgression,” he added.

Telling real from fake

Politicians’ faces are often inserted into memes and fragments of popular films that viewers are likely to recognize easily.

In India, a Bollywood movie scene in which a man lets go of the hand of another man hanging from a cliff was used to illustrate political betrayal. Narendra Modi, who was sworn in as India’s prime minister for a third term in June, was swapped in for the American rapper Lil Yachty in footage of the musician walking onstage. Another rapper, Eminem, seemed to endorse a South African opposition party in an altered clip. None of this suggests to Brandom that anyone might believe the content or change their vote because of it.

Jānis Sārts, the director of the NATO Strategic Communications Centre of Excellence, a research institute, has drawn similar conclusions. Just like Rebelo, he was surprised by how little AI has been used, and he has also noted that it’s often used for satire.

But just because something is funny doesn’t mean it is harmless. Some experts use the term “hahaganda” to describe the use of humor in propaganda. “Humor can be very powerful. It can be one of the best ways to overcome communication barriers,” Sārts said.

“A continuous stream of mostly harmless AI-generated content still has the potential to create an infodemic,” Murphy-Byrne said. In an infodemic, false information spreads as easily as a virus does during a pandemic, and it becomes harder to think critically and tell truth from fiction, she said.

Sārts said he’d guess that about 80% of political AI-generated content is not intentionally made to mislead. But there is the other 20%.

Spam and discouraging voters

The Rest of World directory lists cases where the intent, if not the execution, seems more nefarious. Chinese spam campaigns used AI images to cast doubt on the integrity of elections in the United States and to paint one of the candidates, President Joe Biden, in a negative light. In Bangladesh, AI videos were used to fake candidates’ withdrawals from the election. In Pakistan, AI was used to spread calls to boycott elections.

There are other cases in countries that Rest of World doesn’t focus on: robocalls used to discourage people from voting in the U.S. state of New Hampshire, and an AI audio recording in Slovakia in which a candidate appeared to discuss rigging the election, released two days before people cast their votes.

For now, language diversity is a major obstacle for AI propaganda makers, Rebelo thinks. India alone officially recognizes 22 languages, while the technology is most advanced in English. But Rebelo said more authentic-looking content will likely appear in other languages as the tools to produce it become more widely available.

She said India has a large misinformation problem and she doesn’t see why the political actors spreading it would draw the line at AI. 

“They are looking for any technical advantage they can get because this is how they’ve always traditionally acted,” Rebelo argues. “AI technology is getting better. There’s only one way it’s moving — it’s getting more sophisticated.”

Making deepfakes convincing is difficult

Sārts said that although the technology to make deepfake propaganda videos exists, producing them still requires knowledge and resources.

“There are many simpler ways to do it,” Sārts said. “Those who spread disinformation have not learned to use AI well enough yet.” The U.S. elections in November might be a big test, he said.

Murphy-Byrne also thinks that the barriers to using such technology are getting lower. She said that during the 2016 U.S. election campaign, Russia spent millions of dollars to spread propaganda messages. 

“With these new AI technologies, the execution of such a campaign no longer depends on a large, skilled, well-organized team,” she said in a recent webinar. “Those kinds of campaigns can now be created for as little as $400 by an average private citizen in their bedroom using open-source technology.”

Sārts and Murphy-Byrne both emphasize the risk of personalization. AI allows content on a given topic to be generated from different perspectives, tailoring it to different people’s beliefs.

Concerns for the future

Murphy-Byrne also said another risk we could see emerging is “flooding the zone”: generating so much content that information becomes hard to follow, which results in distrust.

There are simpler ways AI might be used to mislead. Rebelo and Brandom warn that advances in AI technology can be used as cover to claim that real but unflattering audio or video is fake. BOOM Live has already reported on such cases.

But Sārts said there are ways AI will help those looking to uncover propaganda campaigns, especially when it comes to measuring whether propaganda had an impact. “The causal relationship is difficult to prove at the moment,” he said. “AI gives hope that we might come to that by, for example, analyzing sentiment change in language.”

Logically has developed a tool that uses AI to quickly surface harmful or fact-check-worthy claims. Another tool in development would help monitor content across different platforms, including small platforms with weak content moderation, where harmful content often originates before reaching more mainstream sites.

Maldita.es, a Spanish fact-checking outlet, found that large online platforms failed to take any visible action on 45% of disinformation posts about the recent EU elections identified by European fact-checkers. Some of this content received millions of views. The proportion was even higher for disinformation about migration and election integrity: 57% and 56% of that content, respectively, was left without action.

The reaction to AI-generated content does not seem much different. X (formerly Twitter) and TikTok, as well as Meta, the parent company of Instagram and Facebook, have said they would label such content. But so far, social media platforms have been slow to react, experts interviewed for this article said.

“I think it’s a good first step for anything else we do,” Brandom said. “We’ve seen some labels but there’s a lot of content that’s been up for a long time, is clearly AI-generated and is not labeled. If we can identify this at a nonprofit journalism group, why are they not able to?”
