Deepfake videos might be fun until you become a target

Chances are you have seen the hilarious memes and videos about Pakistan’s performance in the ongoing ICC T20 World Cup. While many of them are obviously fake, some deepfake videos look very close to reality. These were humorous in nature, but deepfakes made with ill intent can be seriously misused.

For example, a video featuring actor Ranveer Singh recently made headlines: his likeness was paired with an AI-generated voice clone criticising Prime Minister Narendra Modi on issues like unemployment and inflation. The video falsely suggested that Singh had endorsed the Indian National Congress during the 2024 Lok Sabha elections. With no way of knowing whether the footage was genuine or digitally altered to push a false narrative, many of those who saw or received the video freely shared it with family, friends, and peers. This is how deepfakes thrive.

Nor is this an isolated incident. There have been many other deepfakes targeting actors, sports personalities, and business leaders to spread malicious content.

Inside the World of Deepfakes

Deepfakes aren’t a new phenomenon. The first deepfake videos were created around 2011-2012 using an AI technique called generative adversarial networks (GANs), says Jaspreet Bindra, Founder of Tech Whisperer. Previously, high-quality deepfakes were difficult to make and required specialised knowledge. With advancements in generative AI, however, it has become significantly easier to create them.
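For readers curious about the adversarial idea behind GANs, here is a deliberately tiny sketch (all names and numbers are illustrative, not from any real deepfake tool). A one-parameter "generator" tries to produce numbers that look like they came from the real data distribution, while a logistic-regression "discriminator" tries to tell real from fake; each learns from the other's feedback, which is the core GAN dynamic:

```python
import math
import random

random.seed(0)

REAL_MEAN = 4.0   # the "real data" distribution: N(4, 1)
LR = 0.03         # learning rate for both players
STEPS = 3000

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discriminator: D(x) = sigmoid(w*x + c), trained to output 1 for real, 0 for fake.
w, c = 0.0, 0.0
# Generator: g(z) = theta + z with z ~ N(0, 1); it "wins" when theta nears REAL_MEAN.
theta = 0.0

for _ in range(STEPS):
    real = random.gauss(REAL_MEAN, 1.0)
    fake = theta + random.gauss(0.0, 1.0)

    # Discriminator step: logistic-regression gradient on one real and one fake sample.
    for x, label in ((real, 1.0), (fake, 0.0)):
        d = sigmoid(w * x + c)
        w -= LR * (d - label) * x
        c -= LR * (d - label)

    # Generator step: nudge theta so the discriminator rates its sample as "real".
    fake = theta + random.gauss(0.0, 1.0)
    d = sigmoid(w * fake + c)
    theta += LR * (1.0 - d) * w   # gradient of log D(g(z)) with respect to theta

print(round(theta, 2))
```

Real deepfake systems apply this same tug-of-war to millions of image pixels instead of a single number, which is why they need the heavy compute described below.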

Overall, deepfake technology is becoming more accessible, but how hard a deepfake is to make depends on the level of sophistication one is aiming for. On one hand, there are free online tools and apps that let you do simple face-swapping in a few clicks. These typically generate lower-quality videos with noticeable glitches, such as unconvincing lip movements or blurry faces, and they often add watermarks to the video as well.

On the other hand, creating high-quality deepfakes that are nearly indistinguishable from real videos requires more technical expertise and powerful software, involving complex machine-learning algorithms and processing power that may not be readily available to everyone. As the technology advances and tools become more accessible, however, the barrier to creating sophisticated deepfakes may continue to fall.

Time to Act

It’s important to be aware of the ethical implications of deepfakes and to use the technology responsibly, especially in a country like India, where the potential risk of misinformation and disinformation is very high. To counter deepfake-driven misinformation that affects India’s democratic values and transparent governance, India’s Ministry of Electronics and IT (MeitY) has issued an advisory prohibiting deceptive or misleading information that misguides the recipient about a message’s source, deliberately spreads false information, or is flagged as fake by the Central Government’s fact-check unit. Although these initial steps have been taken, the persistence and widespread nature of deepfakes pose significant challenges, making containment an ongoing struggle.

What can be done? Devroop Dhar, Co-Founder & Managing Director at Primus Partners, says, “Government of India’s recent advisory on curbing misinformation and deepfakes is a step in the right direction. However, to control the rampant misuse of the technology, the government should mandate the watermarking of any AI-generated output. A specialised unit under establishments like CERT-In to detect and tackle such malicious content, fast-tracking of the Digital India Act, which will put guardrails on the technology from a user-harm perspective, and public awareness and sensitisation campaigns as wide as possible should be prioritised.” He adds that a team could be created, preferably under the Ministry of I&B, to monitor deepfakes and alert people about them, and that the government should also promote research and innovation in this space for the right use of AI.

Echoing the sentiment, Amit Jaju, Senior Managing Director at Ankura Consulting Group (India), says corporates need to develop AI tools for detecting deepfakes, collaborate with the government, conduct employee training, and implement internal policies against deepfakes.

As much as it is the responsibility of the government and corporates to put policies and tools in place to stop deepfakes from spreading and causing harm, individuals too must be vigilant. Deepfakes pose significant threats to individuals, enabling malicious actors to fabricate convincing videos or images impersonating them, leading to reputational damage, identity theft, and emotional distress. They also exacerbate privacy concerns.

According to a report issued by cybersecurity company McAfee in April this year, more than 75% of the Indian internet users it surveyed have seen some form of deepfake content over the last 12 months. Even more shocking, at least 38% of the respondents have encountered a deepfake scam during this time.

Jaju says individuals need to become more vigilant and informed about deepfakes. There should be a culture of skepticism in which people verify the authenticity of information before sharing it. Individuals should use the available tools and apps designed to detect deepfakes, which will help in identifying and reporting suspicious content. And lastly, they should keep abreast of the latest developments in deepfake technology and the measures being taken to combat it.

 


Author: Rayne Chancer