AI’s limited but noteworthy impact on 2024 Elections: Fact-checkers

During the 2024 Lok Sabha elections, Indian Youth Congress President Srinivas BV shared a video on X (formerly Twitter) of Bharatiya Janata Party (BJP) Member of Parliament (MP) Dinesh Lal Yadav claiming that unemployment in India is rising because of population growth. The spread of the video on social media prompted BJP IT cell head Amit Malviya to claim that it was a deepfake being shared to “mislead people, create unrest and sow divisions in the society.” However, a fact-check by Logically Facts revealed that the video was real and not a deepfake. This was one of many instances in which AI and deepfakes came up during the 2024 parliamentary elections.

Kritika Goel, Head of Editorial Operations (India) at Logically Facts, noted that widespread awareness of AI and deepfakes has made the technology an effective tool for political parties to deny the truth. People can now disown things they actually said by dismissing the evidence as a deepfake, giving them “plausible deniability” or the “liar’s dividend.”

Several organisations, including the World Economic Forum, identified AI-generated misinformation as one of the biggest short-term risks this year; however, panellists at MediaNama’s ‘Fact-checking and Combating Misinformation in Elections’ discussion argued otherwise. Speaking on 3 July 2024, they noted that AI did not pose a major threat to democracy; instead, parties seemed more interested in testing the ways in which AI can be used to spread misinformation.

How was AI used in these elections?

Rajneil Kamath, Founder and Publisher of Newschecker, said, “We didn’t see any democracy destabilizing deepfake the way one may have seen in Slovakia, for example, which happened just two days before an election.” Instead, we saw “manipulated media using AI that was very viral, that was widely spoken about, especially involving celebrities, for example, and mixing celebrity culture with politics or memes and satire.”

Goel noted that AI was used more for political campaigning during these elections than for information manipulation. Abhilash Mallick, Editor at Quint Fact Check, concurred, saying there were cases where Quint had to inform the public that deepfakes were being used for political advertising.

Even without destabilizing democracy, deepfakes and AI-generated misinformation have created new problems. Firstly, Goel noted that AI has led to an “erosion of trust”, wherein people are less likely to trust verified content, suspecting it to be AI-generated. Goel said, “It has led to planting that seed of doubt in your audiences or your reader’s mind, which makes them question even the legitimate information that they’re engaging with.”

Kamath also concurred that with AI, “we don’t just have to tell people what may not be correct. We also have to start telling them what is indeed true. So, it’s never a false alarm because many things that are true also now have a feature where nobody wants to believe that it is true and therefore think it’s misinformation.”

AI-generated misinformation in the future

While AI-generated misinformation may not have been particularly “democracy destabilizing this election”, Singh said, “A lot of the AI and deepfakes will get used for testing out hypotheses because the current narratives have reached a certain amount of saturation point. And that is where the technology and potential for misinformation scares me because it will not be a part of a campaign anymore, it will be a testing route.”

He predicts that political parties will use deepfakes and AI-generated content to push different narratives, using the spread of that misinformation to gauge what resonates with the public.

Jency Jacob, Managing Editor at BOOM Fact Check, agreed that, by his observation, political parties are testing deepfake technology. He said that in many instances, AI and deepfakes were used as memes or satire to “make fun of the person whom they are targeting.” He added, “I feel that while they knew that this is not going to work and people will be able to see it through because some of these were really poor quality, they were testing it, and they are testing it for the future elections. They’re trying to see how this will work, whether the people are receptive to it, whether they will accept it, whether they understand.”

Jacob observed, “A lot of the videos, I know that people understood, are actually not true videos, but they enjoyed it anyway, and they liked it because it subscribed to their point of view or the political ideology they follow.”

Thus he warns, “We can’t take our eyes off because this is a new tool or it’s a new technology that everyone is very excited about. As the tools get better, more and more people… the challenge for all the fact-checkers is going to come.”

What are the challenges fact-checkers can expect to face from AI?

Tarunima Prabhakar, Co-founder of Tattle Civic Technologies, said that as AI technology becomes more common, a conversation is needed to determine what counts as AI-generated misinformation. She noted that platforms now integrate AI tools that allow people to manipulate their images, asking, “To what extent is this important, specifically in the context of misinformation when it comes to campaigning?”

Prabhakar also noted that a challenge fact-checkers face with AI-generated content lies in the nature of AI-detection tools. She said, “No tool gives you a yes or no binary answer, right? They always give you probabilistic answers. We’re also entering a world in which, because different companies are vying for deepfake detection as a business model, these detection models are often developed as proprietary technologies. And that makes it actually harder to even understand what these probabilistic scores mean.”
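To make the point about probabilistic scores concrete, here is a minimal sketch of what a fact-checker typically gets back from such a tool. The `detect_deepfake` function, its score, and the thresholds are hypothetical stand-ins for illustration, not any real vendor’s API.

```python
# Sketch: detection tools return a probability, not a verdict.
# detect_deepfake is a hypothetical stand-in for a proprietary detection API,
# and the thresholds below are invented for illustration.

def detect_deepfake(video_path: str) -> float:
    """Pretend detector: the model's probability that the clip is synthetic."""
    # A real tool would run a closed, proprietary model here.
    return 0.72

def interpret(score: float) -> str:
    # The fact-checker must pick these cut-offs without knowing how the closed
    # model was trained or calibrated, which is why the raw score is hard to read.
    if score >= 0.90:
        return "likely manipulated"
    if score <= 0.10:
        return "likely authentic"
    return "inconclusive: needs manual verification"

print(interpret(detect_deepfake("clip.mp4")))  # inconclusive: needs manual verification
```

A score like 0.72 is exactly the kind of answer Prabhakar describes: meaningful only relative to a training distribution the fact-checker cannot see.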

Thus, “To actually trust the detection side of the game, we probably need more transparent approaches. We need the academic and research community to step in and do some of this work more transparently. Over the last year, 18 months, we have seen less and less research being done in the open on this; a lot of it is now actually being done inside companies,” she said.

Goel also pointed out that deepfake detection tools are often not trained to understand regional languages, and said that more classifiers were needed. Prabhakar suggested “contextual data sets” as a solution to this.

When asked if AI can be deployed to fact-check misinformation, Goel said, “I do think that you can leverage technology, you can rely on technology to make things better, probably for tasks like claim discovery, or for other tasks like [identifying] which other sources you could rely on, like a repository or something.” However, she said, “There’s a lot of local context, there’s cultural context that needs to be kept in mind while we’re writing our fact check,” which may pose a limitation for AI.
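As a rough illustration of the “claim discovery” task Goel refers to, the sketch below matches an incoming claim against a small repository of previously fact-checked claims using TF-IDF similarity. The repository entries and the matching approach are illustrative assumptions; a production pipeline would need multilingual models to handle the local and cultural context she flags as a limitation.

```python
# Sketch: matching a new claim against a repository of fact-checked claims.
# The repository entries are invented examples; real systems would likely use
# multilingual embeddings to cope with regional-language content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

repository = [
    "Video shows BJP MP saying unemployment is rising due to population growth",
    "AI-generated audio shows candidate conceding defeat before counting ended",
]

def find_similar_claims(new_claim: str, top_k: int = 1) -> list[tuple[float, str]]:
    vectorizer = TfidfVectorizer().fit(repository + [new_claim])
    repo_vectors = vectorizer.transform(repository)
    claim_vector = vectorizer.transform([new_claim])
    scores = cosine_similarity(claim_vector, repo_vectors)[0]
    # Rank existing fact-checks by similarity to the incoming claim.
    return sorted(zip(scores.tolist(), repository), reverse=True)[:top_k]

print(find_similar_claims("MP claims population growth is causing unemployment"))
```

Surfacing a likely match is the easy part; as Goel notes, deciding whether the match actually holds in its local context still requires a human fact-checker.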
