Anyone can be deepfaked, but women victims are the majority in this rising digital abuse crisis

When lawyer Stefanie Yuen Thio’s colleague told her about suggestive videos and photos of her circulating on TikTok, an intense dread filled her.

“I have been deepfaked,” the joint managing partner of TSMP Law Corporation and chairperson of SG Her Empowerment (SHE) wrote in an Oct 19 LinkedIn post.

“When I saw those racy videos and photos of me, I felt shocked and confused – the images were fake yet disturbingly real,” she said. 

Seeing that there was no nudity, her shock turned into “strange relief” and then, “a violent, palpable sense of violation”.

Deepfakes are realistic but fabricated videos, audio, or images generated with artificial intelligence (AI), making someone appear to say or do something they didn’t. While they can be used for humour or creative experimentation, most deepfakes today are non-consensual AI pornography.

Fuelled by increasingly accessible AI tools, the number of such deepfake videos online has grown exponentially.

A report by Sumsub, a UK-based tech company specialising in online fraud, showed that the Asia-Pacific region recorded a 1,530 per cent rise in deepfake cases between 2022 and 2023, ranking second globally after North America.

A 2019 report by Sensity AI, a Netherlands-based AI threat detection platform, found that about 96 per cent of deepfakes were non-consensual sexual content, and over a staggering 90 per cent of that featured women. 

Reports of AI-generated child sexual abuse material – including deepfakes of mostly young girls – have also surged. According to the US-based National Center for Missing & Exploited Children, such cases jumped by 1,325 per cent, from 4,700 in 2023 to more than 67,000 in 2024.

In Singapore, SHECARES, a support centre run by SHE in collaboration with the Singapore Council of Women’s Organisations, has handled over 440 cases of online harm since its launch in 2023, including deepfake and AI-generated pornography.

Additionally, a 2023 SHE survey found that young women aged 15 to 34 were twice as likely as men to experience online sexual harassment. In the same survey, more than 70 per cent of women aged 15 to 24 knew a female friend who had faced some form of online harm, such as online sexual harassment.

Yet How Kay Lii, the chief executive officer of SHE, highlighted that figures related to deepfake abuse likely represent a small fraction of actual cases, as many incidents go unreported.

And although the Sensity AI report observed that most deepfake pornography targets female celebrities and women politicians, anyone can be a victim. 

Last year, South Koreans held various public protests after several women were subjected to AI-generated porn by their peers in what was called the country’s “deepfake porn crisis”. In Singapore, male students at the Singapore Sports School created and shared deepfake nude images of their female classmates. The father of one victim told CNA it wasn’t “just one or two” boys, but “a huge group of boys”. 

Sugidha Nithiananthan, AWARE’s advocacy and research director, said: “Online sexual harms and deepfake abuse are growing more insidious and widespread, and women are more vulnerable than ever – both the law and society need to do more to keep up.”

THE CONTENT IS FAKE, BUT THE VIOLATION IS REAL

When Yuen Thio first saw the photos and videos of herself online, she wasn’t sure what to believe – they looked so real. Despite having worked with SHE for over three years and knowing that online sexual violence is never the victim’s fault, she still felt some self-blame and found herself wondering if she had invited it by being visible on social media.
