Bill Would Address the 96 Percent of Online AI “Deep Fake” Videos Featuring Non-consensual Pornographic Content
U.S. Senator John Hickenlooper announced today that he has joined a bipartisan group of Senate colleagues to introduce the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act.
The bill would criminalize the publication of non-consensual intimate imagery (NCII), including AI-generated “deepfake pornography,” on social media and other online sites, and would require social media companies to have procedures in place to remove such content upon notification from a victim.
“AI innovation is going to change so much about our world, but it can’t come at the cost of our children’s privacy and safety,” said Hickenlooper. “We have a narrow window to get out in front of this technology. We can’t miss it.”
New generative artificial intelligence tools can create lifelike but fake imagery depicting real people, known as deepfakes. A 2019 report by Sensity found that non-consensual deepfake pornography accounted for around 96 percent of all deepfake videos online.
Deepfakes have recently been used to target minors, including incidents in which students used AI tools to create sexually explicit but fake images of classmates and then shared them on social media.
The TAKE IT DOWN Act would protect Americans by making it unlawful for a person to knowingly publish sexually explicit deepfake images of an identifiable individual, and require social media companies and websites to remove the images. The act would:
- Criminalize the publication of NCII: The bill makes it unlawful for a person to knowingly publish NCII on social media and other online platforms. NCII is defined to include realistic, computer-generated pornographic images and videos that depict identifiable, real people. The bill also clarifies that a victim consenting to the creation of an authentic image does not mean that the victim has consented to its publication.
- Protect good-faith efforts to assist victims: The bill permits the good-faith disclosure of NCII, such as to law enforcement, in narrow cases.
- Require websites to take down NCII upon notice from the victim: Social media and other websites would be required to have procedures in place to remove NCII within 48 hours, pursuant to a valid request from a victim. Websites must also make reasonable efforts to remove copies of the images. The FTC is charged with enforcing this section.
- Protect lawful speech: Hickenlooper notes that the bill is narrowly tailored to criminalize knowingly publishing NCII without barring lawful speech. The bill respects First Amendment protections by requiring that computer-generated NCII meet a “reasonable person” test, meaning it must appear to realistically depict an identifiable individual.
Editor’s note: AI deepfakes, deployed on social media platforms and weaponized as disinformation, are popular with other extremist organizations as well, as our Ark Valley Voice reporting will begin to point out next week.