Non-consensual intimate images continue to pop up on the internet. Some are real; others are AI-generated. In both cases, there are real victims.
The content raises the question: how are explicit images policed, and is the law keeping up?
WSBT 22 Evening Anchor Julianna Furfari spoke to a woman who was a victim of non-consensual intimate image abuse. We have concealed her identity for her safety and privacy. She shared with us that she discovered images of herself online. In her case, it was her husband posting the images and pretending to be her. She says the battle to get the images off the internet is nearly impossible. Now, she is looking to help other victims and advocate for change.
“You immediately freak out, and you start looking on all the web pages, and you’re like, how does, how did this get here? What? Who did this to me? How do I get these down?” she said.
Those questions become reality for many when they realize non-consensual images of them have been posted online.
“Not only will you be suffering from sexual abuse, because that’s what it is classified as, but you will have serious psychological and emotional distress. Not only did you have your most intimate images, whether they were real images or generated deepfake porn images, displayed on the internet for the world to view, almost always it is someone that you know that has spread the images,” said the victim.
She detailed how difficult it is to get images off the internet.
“For every post that was reposted or shared, you have to go after each single one to report them. So, when you think of this being done behind your back for years and years and years, you’re talking thousands and thousands of things. So, it’s defeating. It feels like you’re, like, David versus Goliath.”
In her case, one web page got thousands of views, and every time the images were reposted on other social media outlets, it was like starting the battle over again.
“In my case, I will be forever worried about my pictures being resurfaced over and over again, and I will never be able to unsee what I saw, and you live with that stress daily,” she said.
She turned to lawyers and therapy and quickly realized resources were limited for victims of these crimes.
“It is like you’ve spread glitter to the world, and trying to go back and find the glitter is impossible. You are up against never winning, honestly. So, I do think that will be a big thing: going after the big tech companies, making it a federal crime,” she said.
The Take It Down Act is aimed at doing just that. The bill would make it a federal crime to post or threaten to publish intimate imagery online without an individual’s consent. Social media companies would have 48 hours to remove images and delete duplicates after a victim’s request. This applies to real images and those that are computer-generated.
“The algorithm that we typically call deepfakes came out in 2017, and it was released publicly to the internet specifically to create pornographic videos,” said Walter Scheirer, a professor at Notre Dame. “The creator of it was interested in creating fake pornography involving, like, prominent actresses, and this unfortunately became very, very popular.”
Scheirer has been involved in media forensics for years. He said the prevalence of these crimes has grown.
“When the algorithm was first released, it required a bit of technical knowledge to get up and running, but that is largely falling by the wayside as the user interfaces have gotten much simpler,” Scheirer said.
Scheirer says while the images are fake, the impact is real.
“We’ve seen this in middle schools and high schools, you know, specifically targeting young women, which is appalling, and I think there’s a pretty strong legal case when you think about laws related to child pornography,” he said.
As the Take It Down Act moves through Congress, it is important to know that there are state laws that target this content.
FBI Special Agent Herbert Stapleton in Indianapolis told us this is an emerging trend and that they have seen some cases.
“Just because it’s a computer-generated image of someone, a victim, it does not mean that it’s harmless and it doesn’t mean that you haven’t violated the law,” Stapleton said.
Stapleton says they are keeping an eye on AI that is used for harassment, bullying or extortion. He said it is a violation of the law.
“Sometimes the law is slow to keep up with technology. It’s just part of the way our system works. But in this particular instance, you know, someone creates an AI image of someone else that’s explicit in some way, especially if that’s used to harass or extort the other person. Then that is criminal activity,” he said.
As the law catches up to developing technology, victims are continuing to fight back against the dark side of deepfakes.
The Take It Down Act passed the Senate in February and will now move through the House.
There are resources available if you or someone you know has been impacted. They include StopNCII, the Cyber Civil Rights Initiative and RAINN.