
Women sue 3 Phoenix men for turning their online photos into AI porn
Experts say this use of AI to sexualize without consent is going mainstream, and the law and technology to stop it are just starting to catch up.
- Lawmakers and courts are struggling to keep pace with the rapid evolution of AI-generated deepfakes.
- New state and federal laws criminalize non-consensual deepfakes, but prosecutions have been slow and face legal hurdles.
- Victims and attorneys are pursuing civil lawsuits to hold AI platform websites accountable for the content they generate.
As concerns over non-consensual deepfakes spread online, lawmakers, courts and researchers are grappling with how to protect everyday people who post their photos.
A recent case in Arizona, where three women filed suit against a group of men and their AI platform, gives a glimpse into how the justice system is navigating the fallout. The case also shows how easily someone can become a victim.
Limited awareness of the new laws designed for protection, along with legislation that stops short of holding websites and social media platforms accountable, has posed hurdles to preventing the harm.
Only one known case has been referred to, and charged by, the Maricopa County Attorney’s Office, the state’s largest prosecutorial agency aside from the attorney general’s office.
Where cases have been prosecuted in other states, the judicial system has moved slowly and deliberately. At the same time, the “move fast and break things” mantra associated with the tech industry has kept the technology evolving faster than the law.
Tech industry experts say it will take time for the courts to decide how non-consensual deepfakes are punished under existing laws. In the meantime, some want legislators to pass laws that provide more extensive relief and, ideally, prevention.
But crafting such legislation has proven difficult. Already, civil liberties groups warn that some regulations infringe on free speech, and tech trade groups have challenged those regulations in court.
Prosecutions over AI deepfakes have yet to take hold
Lawmakers at the state and federal levels have passed laws that criminalize non-consensual AI-generated images.
Nationally, the “Take It Down Act” makes it a crime to post a non-consensual deepfake, with a possible 2-to-3-year prison sentence if convicted.
No prosecutions have been widely reported since it went into effect in May 2025.
Arizona’s so-called “Revenge Porn Law,” which made it illegal to post sexually explicit or nude photos without consent, was amended in 2025 to include AI-generated images, dubbed “realistic pictorial representation.”
That law has led to at least one criminal charge by Maricopa County Attorney Rachel Mitchell’s office. The ongoing case involves multiple charges of sexual exploitation of a minor and includes one charge related to the Arizona law. The defendant is accused of disclosing explicit non-consensual AI images of a social media influencer in Texas.
Mitchell said the lack of cases doesn’t mean non-consensual deepfakes aren’t spreading. Victims may not be aware, she said, and she hopes going forward “anyone who has been victimized in this way will report it to the police.”
In the civil lawsuit filed in Maricopa County Superior Court, three women accused a group of men of scraping their photos to create AI-generated sexual images without their consent and selling them for profit.
When asked why the office didn’t investigate the claims in that lawsuit, Erin Pellet, a spokesperson for the Maricopa County Attorney’s Office, said it’s “uncommon” for prosecutorial agencies to pursue cases that weren’t referred to them by law enforcement and that it could “complicate prosecution.” She did not elaborate.
At least two clauses in Arizona’s existing law could pose a challenge for prosecutors.
First, the law exempts “any disclosure that is made with the consent of the person who is depicted in the image.”
Mary Anne Franks, a free speech scholar at George Washington University Law School, advised Arizona lawmakers on revenge porn policies in the early 2010s. She said the language could be read to mean that if a photo showed multiple nude individuals, only one of them would need to consent to its disclosure.
“If I’m a defense lawyer, that’s the first thing I’d point to,” Franks said.
The other hurdle is that disclosing nude images of others without their consent is only illegal if it was done with “the intent to harm, harass, intimidate, threaten or coerce the depicted person.”
The Take It Down Act, by comparison, prohibits the publication of non-consensual deepfakes that are either intended to cause harm or simply do cause harm, regardless of intent.
“We had been saying for years, ‘Can you please fix this? Because you are screwing so many victims with this,’” Franks said. “The one thing they could do to actually fix this (law) would be to take that out.”
The intent requirement could provide a defense to anyone who says they created the images to make money rather than to harm the person depicted. Profit was a key component of the Arizona lawsuit.
Mitchell also pointed to that clause when asked about pursuing charges related to the lawsuit and whether existing law was sufficient. But taking that language out, she said, could prompt First Amendment concerns about whether the law was narrowly tailored.
“The Constitution requires that any time the law limits expression, the person drafting the statute has an obligation to make sure the state has a compelling interest that is narrowly tailored so that protected speech remains protected,” Mitchell said.
Franks rejected that argument, pointing to five state supreme court rulings that say otherwise.
In People v. Austin, the Illinois Supreme Court held that a law banning revenge porn did not need to specify an intent to harm. When the defendant petitioned the U.S. Supreme Court in 2020, the court declined to review the case.
“That is as close as you’re ever going to get to a final answer on a question, which is, ‘We are not interested in this issue. We think the Illinois court got it right,’” Franks said.
How are AI-image generation websites held accountable?
While the Arizona law criminalizes the people who create the images, it does not explicitly criminalize the websites used to generate them.
That leaves AI-generation platforms in a legal gray area. Experts say their liability may depend on whether courts view them as passive tools or as active participants in creating the content.
In the meantime, those platforms face increasing challenges through civil lawsuits, like the one in Maricopa County, and through government actions in other places, including a recent crackdown in San Francisco targeting several high-traffic deepfake websites.
San Francisco city attorneys sued operators of the websites, arguing they violated existing laws regarding revenge porn and child exploitation.
The suit shut down at least 10 of the sites, and one operator agreed to a permanent injunction and financial penalties.
In the Arizona lawsuit, plaintiffs are asking the court to hold the AI platform website financially and legally responsible, and either prevent the website from creating more non-consensual images or have it shut down.
The plaintiffs’ attorney Nick Brand believes going after the websites will pressure companies into restricting non-consensual nudification.
“Why not deal with the larger issue … of generative AI being able to produce this type of material? Why give the people the tools?” he said.
His case is among the first to challenge liability in an Arizona court, but he believes more will come.
What about platforms that don’t generate AI images?
Social media platforms are treated differently in court than AI-generation websites.
Section 230 of the Communications Decency Act typically shields them from liability for content posted by users.
The federal Take It Down Act, however, does provide some penalties for platforms.
It requires platforms to remove non-consensual deepfakes from their sites within 48 hours of being notified by a user. Those that fail to comply could face investigation by the Federal Trade Commission and penalties of $53,000 per violation, per day.
The hosting platforms have until May 2026 before the law applies to them.
The full impact won’t be seen until then, but so far, attorneys for victims say the law hasn’t led to deterrence and that the burden remains on victims.
In the Arizona case, Brand said defendants created a “take it down” request page after the federal law passed, but they didn’t stop generating non-consensual photos.
And, Brand said, the “take it down” page only comes into use once an image has been shared widely enough for a person to become aware of it, after the harm has already occurred.
Marchant pointed out that even when people can find the non-consensual deepfake images, they have to continuously chase them down, going from one platform to the next.
The seemingly endless saga for victims has fueled long-running debates over Section 230. Federal officials have repeatedly sought to reform, and in some cases repeal, the law.
But for the last 30 years, it has largely gone unchanged.
In that time, state lawmakers have often included Section 230 language in their bills, which some experts say has led states to over-comply with Section 230, to their detriment and the detriment of future victims.
States, including Arizona, began injecting language into their laws providing blanket exemptions for “interactive computer services,” a term pulled from Section 230, Franks said. She said they did so at the urging of the tech industry.
“And that’s a crazy thing to do because that’s not even what Section 230 does,” Franks said. Section 230 allows liability “under certain circumstances,” she said.
If Congress were to ever reform Section 230 or sunset it, Franks said, “this provision puts in for eternity that nonetheless an interactive computer service would not ever get sued or could be prosecuted.”
Franks believes states could pass laws placing liability on social media platforms if the platforms were shown to be content collaborators and not just content facilitators.
A law might be able to say the platforms “knowingly colluded with a content creator to produce this material and then … sought to advertise it,” Franks said.
She pointed to the social media platform X and its AI chatbot, Grok. The chatbot has faced accusations of generating non-consensual intimate imagery after being prompted by users.
The public could soon see Franks’ theory tested. In March, Baltimore became the first city to sue xAI, claiming the company violated the law by generating the harmful deepfakes.
But Marchant said these sorts of legal cases can be long and drawn out as the courts test how liable platforms are under current laws.
In Pennsylvania, a mother sued TikTok in 2022 after her daughter died trying to film a challenge video. She claimed her daughter was influenced by videos that TikTok’s algorithm showed her.
The court dismissed the case, citing Section 230 protections, but the mother appealed. In 2024, the appeals court sided with her and sent the case back to the lower court, where it remains in litigation.
Marchant said the courts’ intentionally slow appeals process creates a “pacing problem,” where the justice system is not designed to keep up with the fast evolution of technology.
New remedies could provide relief but burden free speech
A new effort at the Arizona Legislature with bipartisan support could lead to harsher penalties for perpetrators of non-consensual sexually explicit deepfakes.
House Bill 2133 from Rep. Nick Kupper, R-Surprise, would enact $10,000-per-day penalties on those convicted of sharing AI-generated, sexually explicit deepfakes without the depicted person’s consent. It would exempt internet service providers, search engines and cloud services, but not social media platforms or websites.
If passed, the proposed law would also allow Arizona prosecutors to request injunctive relief against social media companies, an option that doesn’t currently exist. That would let judges ban or require certain actions from a perpetrator, such as removing the photos or barring them from being posted again.
But the bill and others like it have provoked a series of constitutional concerns.
Kupper’s bill would require social media companies to implement systems that detect when users are uploading content that could be “harmful to minors.” If the user doesn’t upload age and consent verification for every individual depicted in the material, platforms are supposed to block the upload.
While the law defines “harmful to minors” in a way that’s backed by Supreme Court precedent, free speech and civil liberties experts remain concerned that a vast amount of protected speech could be swept up, particularly content dealing with the LGBTQ community, depending on who’s prosecuting.
They worry it could lead to a chilling effect where platforms over-comply, blocking more material than necessary, and users self-censor to avoid penalty.
These concerns could portend legal challenges if Kupper’s bill is signed into law.
People exploited by non-consensual deepfakes should not wait for the laws to catch up, Brand said; they should seek support.
“I think the biggest mistake would be sitting on your hands and not doing anything,” he said. “(What) the girls in this case have said to me more than once was feeling that support and knowing that somebody is going to try for you has meant a lot for them.”
Miguel Torres covers the criminal justice system for The Arizona Republic.
Taylor Seely is a First Amendment Reporting Fellow at The Arizona Republic / azcentral.com. Do you have a story about the government infringing on your First Amendment rights? Reach her at tseely@arizonarepublic.com or by phone at 480-476-6116.
Seely’s role is funded through a collaboration between the Freedom Forum and Journalism Funding Partners. Funders do not provide editorial input.