Deepfake Analysis Unit report: most media alterations done with basic tools, only 2% deepfakes

Only 2% of the media reviewed by the Deepfake Analysis Unit (DAU) of the Misinformation Combat Alliance (MCA) were actually deepfakes, a quarterly DAU report revealed. Another 3% were tagged as AI-generated (but not deepfakes), while 9% were manipulated images, audio or videos. The DAU also faced a significant spam problem: 55% of all submissions to its tipline were spam. This quarterly report is especially significant because it covers the period during which India held its general elections.

How does it work?

The MCA set up the DAU on March 25 to tackle the “emerging crisis of AI-generated video and audio.” The DAU launched a WhatsApp tipline where members of the public can send it media that appears to be AI-generated. The MCA is a collective of 12 fact-checking organisations in India, including Boom, Factly and the Quint.

When a user submitted a piece of content for review on the tipline, the DAU would investigate it for signs of AI generation. The unit would use a number of AI detection tools, alongside forensic experts, to analyse the content and publish a report, while also informing the user of its findings.

The unit assessed tips in four languages: English, Hindi, Tamil and Telugu. 84% of the reviewed content was in Hindi, while 28% was in English. Tamil and Telugu made up 1% and 3% of the media respectively.

Of the content within its mandate, the DAU found that 15% was not AI-generated, while 9% was manipulated, meaning the videos or audio files had been altered using simple editing software. The DAU marked a small percentage as ‘cheapfakes,’ a category that includes manipulated media with poor production quality.

Nearly a third of the content within the DAU’s purview was related to the 2024 elections. Among political figures featured, Narendra Modi was the most common, followed by Rahul Gandhi, Arvind Kejriwal, Yogi Adityanath, Amit Shah, and Mamata Banerjee, in that order. Additionally, a small percentage, around 1%, of the media items focused on financial scams. Several media items sent to the WhatsApp tipline had content related to Bollywood and Hollywood actors.


Deepfakes not a major factor in elections

The rarity of deepfakes being used to spread misinformation reflects insights from MediaNama’s panel discussion on fact-checking during elections. Rajneil Kamath, Founder and Publisher of Newschecker, told MediaNama that India didn’t witness a “democracy destabilising” deepfake like Slovakia did. Instead, he said, we saw “manipulated media using AI that was very viral, that was widely spoken about, especially involving celebrities, for example, and mixing celebrity culture with politics, memes, and satire.”

Kritika Goel from Logically Facts noted that political parties used AI more for campaigning during these elections than for information manipulation. However, she said that AI led to an “erosion of trust,” making people less likely to believe even verified content, suspecting it to be AI-generated.

Nevertheless, fact-checkers at the event warned that political parties could use deepfakes and AI-generated content to push different narratives and test different campaigning strategies.
