In this op-ed, Hugging Face’s Margaret Mitchell, chief ethics scientist; Sasha Luccioni, AI research scientist; Elizabeth Allendorf, backend engineer; Emily Witko, head of culture and DEIB; and Bruna Trevelin, legal counsel, explore how to stop deepfake porn, and what to do if you see it. Hugging Face is an open science and community-driven platform for AI builders, with dedicated ethics, society, and legal teams working towards responsible AI.
The moment we heard that fake images of Taylor Swift were being passed around online, we knew what had happened. Swift, like many women and teens, was a target of “deepfake porn,” the massively harmful practice of creating nonconsensual fake sexualized images. As women working in AI, we’ve all experienced inappropriate sexualization and know first-hand how tech companies can do a better job at protecting women and teens. So let’s talk about what “deepfakes” are and what can be done to stop their proliferation.
What is a “deepfake”?
About 10 years ago, a technique known as “deep learning” began working well for labeling images – like automatically labeling dog pictures without being told that a dog has four paws and a tail. Deep learning is a type of Artificial Intelligence (AI), and specifically machine learning, in which systems learn based on example inputs (like a dog photo) and desired outputs (like the label ‘dog’). Recently, computing breakthroughs have made it possible to use AI to do everything from generating videos to writing book summaries (whether these summaries are good is another matter). When an AI-generated image, video, or audio clip is difficult to distinguish from real content, it’s called a “deepfake.”
Deepfakes can take many forms, spanning everything from silly memes mixing one person’s face with the body of another, to seriously harmful audio of a public figure saying something they never said. Deepfake technology can be used to create realistic, yet entirely fictional, characters or scenarios, making it a powerful tool for entertainment, such as in movies or video games. On the darker side, it can be used for malicious purposes, including intentionally spreading false information (called “disinformation”) to manipulate public opinion, and creating nonconsensual photos or videos.
Around 2017, deepfake porn images began to emerge. Deepfakes can be created from images people post online, and over 90% of deepfakes are nonconsensual porn, the vast majority of which depicts women. Their distribution can cause emotional distress and damage reputations, making it critical for everyone to step in and do something to stop them. There is no single solution, but we can all make it harder to create and proliferate nonconsensual deepfakes.
What can I do?
One of the most important ways to combat deepfakes is by raising awareness. Parents and educators can help teens understand why this exploitation is not OK by discussing consent, responsibility, and how nonconsensual porn can be objectifying, demeaning, and traumatic — being a victim of deepfake porn can mean being hurt for the rest of your life. Bring the topic up with your parents and teachers to let them know this technology exists — adults can help with the larger picture once they know this is happening. Teens can help their peers learn too, by speaking up. One of the most powerful ways to disrupt the normalization of harmful technology is to call it out for what it is: unacceptable. And creating nonconsensual deepfakes can carry serious consequences.