Will AI Trickery and Deepfake Crash the US Presidential Election Party?

Will AI Trickery and Deepfake Crash the US Presidential Election Party?

With the US presidential elections less than a month away, widespread concerns have been raised about the potential of AI in the spread of misinformation. 

While the stakes are high in the close fight between former President and Republican candidate Donald Trump and Democrat Kamala Harris, the AI landscape is fraught with challenges and the need to safeguard electoral integrity. 

The internet is swamped with AI-generated deepfakes — including pop star Taylor Swift endorsing Trump, actor Will Smith and Trump eating noodles, suggestive videos of Rep Alexandria Ocasio-Cortez, scam advertisements, and deepfake video of Trump running from police while being arrested, among others.

Deepfake technology, which employs artificial intelligence to imitate a person’s voice or appearance in audio or video, has been around for years. However, its accessibility has now intensified, allowing almost anyone with a computer to easily create convincing deepfakes at little or no cost and share them on social media.

As per a recent report by video content platform Kapwing, 64% of deepfake videos of the ten most “deepfaked” individuals were of politicians and business leaders. Unsurprisingly, Donald Trump and Elon Musk topped the list. 

While deepfakes and election misinformation isn’t new, as AI continues to evolve, its potential to disrupt electoral outcomes and the ability to manipulate public perception through realistic audio and videos become more pronounced. The risk is real. A majority of Americans say they are concerned about the impact of AI on the 2024 presidential campaign.

The deepfake of Swift endorsing Trump, which the latter promoted as a fact, caused the start of an imaginary “Swifties for Trump” movement online, the singer, who initially remained silent, took to social media to refute the claim. She instead supported Trump’s opponent and expressed concerns about AI and misinformation. 

“It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter,” Swift wrote in a long post on her Instagram account. “The simplest way to combat misinformation is with the truth.”

A new Pew Research report finds that 39% of Americans believe AI will be misused with harmful intent during the presidential campaign, while only 5% think it will be primarily for good. 

Besides, the report highlights that 57% of US adults–both Republicans and Democrats–say they are very concerned that people or organisations seeking to influence the election will use AI to create and distribute fake or misleading information about the candidates and campaigns.

In terms of political ads, researchers from NYU’s Center on Technology Policy conducted an experiment with fake ads and discovered that candidates were perceived as “less trustworthy and less appealing” when their ads included AI disclaimers. 

The findings highlight the need to balance the benefits of labelling, such as enhancing trust in political messaging, with the drawbacks of potentially discrediting harmless AI use. The study also identified public preferences for disclosure rules.

Until 2021, Trump used social media posts to expand his reach. However, he was banned across platforms for inciting violence when his supporters stormed the US Capitol that year. In 2023, both Meta and X lifted the ban and Trump resumed posting on social media. Now, with Musk openly endorsing Trump, the Republican nominee has relied on the platform to spread his agenda. 

The AI and election discourse, at large, also entrusts responsibility on part of citizens to consume content with caution. Voters should understand the sensational, and often sceptical undertones of political messaging and discern the right from wrong. 

Early this year, big-tech companies announced requirements for labelling AI-generated content to help users distinguish between machine- and human-created material. They also pledged to make voluntary safety commitments. 

Meanwhile, OpenAI launched image detection tools for users to identify fake content. The company has also partnered with Microsoft to launch a fund to fight deepfakes. And Google said that its AI chatbot, Gemini, will not answer election-related queries on its platform. 

Even new-age startups are joining forces to craft policies to tackle misinformation and deepfakes in the AI era. For instance, Anthropic, an AI safety and research company, created a process combining expert-led ‘Policy Vulnerability Testing’ with automated evaluations to identify risks and better their response. The company also shared these tools online as part of its election integrity efforts. 

Self-regulation and policing by these platforms and big-tech companies like Meta, Google and OpenAI are a great step in the right direction, albeit insufficient. Government intervention and regulation are definitely required to offset the damage.

Regulations in the Midst of Parody & Deepfakes 

As it turns out, we are still in the nascent stage of regulating emerging AI technologies, especially deepfakes. In July, California governor Gavin Newsom targeted political deepfakes after Musk shared an altered video of VP Kamala Harris’ campaign. The thin line between memes, facts, and deepfakes is what makes regulating the entire space even more difficult. 

Newsom signed three Bills intended to limit the use of AI in producing misleading images or videos for political ads in preparation for the 2024 election. However, earlier this month, a federal judge upheld the first amendment law, putting this deepfake law on hold. 

Researchers at UChicago have studied this topic extensively, and hinted that while there are harms, it also presents us with opportunities with AI engagement. The paper noted that political parties and tech platforms should use campaigns, media outlets to leverage generative AI to help voters understand complex policies. 

“Beyond political learning, generative AI could also be used to facilitate communication between citizens and elected officials,” it noted. 


Originally Appeared Here