What started with a spoof of a campaign ad has led to a critical question: Do new California laws meant to crack down on election-related deepfakes also ban memes and satire?
Measures signed into law in September by Gov. Gavin Newsom were designed to limit the use of artificial intelligence in political ads and help keep voters from being duped by manipulated media.
But they also led some – including billionaire Elon Musk – to claim that the laws make parody illegal.
With Election Day weeks away, the issue remains unsettled. It’s working its way through the legal system in the form of a lawsuit filed against state officials by a content creator who posted a parody of an election ad for Democratic presidential nominee Kamala Harris – a parody Musk amplified on his X (formerly Twitter) platform. The video uses altered audio to make it seem as if Harris calls herself “the ultimate diversity hire.”
Targeting election deepfakes with legislation isn’t new. Lawmakers in roughly half of states have passed or at least considered laws addressing them. But the new ones in California are considered some of the nation’s toughest. Supporters characterize them as a necessary way to address an emerging threat to democracy, while critics counter they go too far and violate constitutional free speech protections.
“It’s a concern that people are going to be manipulated to believe something’s true when it’s not, and that false belief might actually change the outcome of an election,” said Tiffany Li, an associate professor at the University of San Francisco School of Law and an expert on AI law.
One of the laws took effect the day it was signed and is described as a “safeguard” for the 2024 election. But that measure has since been paused by a federal judge who issued a preliminary injunction and said it likely violates the First Amendment’s free speech protections. That means by the time voters cast their ballots Nov. 5, they may not have a permanent resolution.
The eventual outcome – whatever it is – is certain to have a significant impact on the misinformation landscape, not the least of which is the precedent it sets for how future AI-related issues are handled, said Nate Persily, a professor at Stanford Law School and contributor to a series of essays on AI and democracy.
“Even though we’ve only seen the beginning of the use of AI in this election, the rules that we establish based on these early uses could have more profound implications for the use of AI when it becomes a regular part of campaigns,” he said.
‘You’ve got Newsom pointing a gun at you’
The timeline from parody to injunction covers fewer than three months.
The video that started it all came from content creator Christopher Kohls, who on July 26 posted the video lampooning Harris to YouTube and to X. It uses footage from the first ad of her presidential campaign, but it replaces the audio with what sounds like the vice president’s voice calling herself “the ultimate diversity hire” and mocking President Joe Biden.
What’s not up for debate is its authenticity – or lack thereof. It carries labels on both platforms identifying it as parody.
“It’s clearly manipulated,” Li said. “It’s clearly not her.”
But Musk reposted it to his 200 million followers without labeling it as misleading – a potential violation of the rules of the platform he owns. Newsom responded by saying in a July 28 X post that “manipulating a voice in an ‘ad’ like this one should be illegal.”
Shortly after that, attorney Theodore Frank reached out to Kohls and told him, “You’ve got Newsom pointing a gun at you” and offered to represent him pro bono, Frank said.
The governor signed three measures into law on Sept. 17:
- A.B. 2655, the Defending Democracy from Deepfake Deception Act, takes effect in January 2025 and requires platforms to label or remove election-related content during specific periods before and after the election if it is “deceptive” and digitally altered or created. It allows candidates and other officials to seek an injunction against a platform failing to comply.
- A.B. 2839, which went into effect upon being signed, allows people to sue over election deepfakes and makes it illegal to create and publish them within 120 days before and 60 days after Election Day.
- A.B. 2355, which takes effect in January 2025, also involves labeling, requiring political ads using AI-generated or altered content to be identified that way.
On the day of the bill signing, Frank filed the federal lawsuit that takes aim at A.B. 2655 and A.B. 2839 over the injunctions and lawsuits they permit. It says they restrict Kohls’ speech and allow anyone who dislikes certain political content to sue, and it asks the court to declare both measures unconstitutional.
“My team and I looked at each other and said there isn’t a law (California) could pass that would outlaw this speech,” Frank said.
U.S. District Judge John Mendez granted an injunction blocking A.B. 2839 on Oct. 2, writing that most of the law “acts as a hammer instead of a scalpel.” He described it as “a blunt tool that hinders humorous expression and unconstitutionally stifles” free expression. As of Oct. 24, there had been no rulings on the constitutionality of the other measure in question.
Laws have exemptions for parody, satire – if labeled
For now, uncertainty remains over how memes, parody and satirical content fit within those laws.
Newsom spokesperson Izzy Gardon did not provide a comment to USA TODAY. But in a statement issued in response to the injunction, he said he is “confident the courts will uphold the state’s ability to regulate these types of dangerous and misleading deepfakes.” He added that “satire remains alive and well in California – even for those who miss the punchline.”
That appears to be a reference to the wording of two of the measures. A.B. 2655, for example, states that it “would also exempt content that is satire or parody.” A.B. 2839 also states that satire and parody are exempt – but they must contain a label stating that the image, audio or video was manipulated for those purposes.
While Li characterized the labeling requirement as “pretty reasonable” on the surface, she also pointed out a basic flaw.
“Usually you don’t come in saying, ‘Hi, I’m making a joke right now. Here is my joke,’” she said. “If you’ve got to label every parody, it’s no longer funny.”
Ultimately, the answer may hinge on how judges define those terms.
“The question is, what does satire and parody mean?” Persily said. “Courts are going to have to determine that. And a lot of it could be related to what we think the reasonable person will perceive a synthetic image to be. Will they consider it to be possibly realistic? Or will it be seen as a satire?”
USA TODAY is a verified signatory of the International Fact-Checking Network, which requires a demonstrated commitment to nonpartisanship, fairness and transparency. Our fact-check work is supported in part by a grant from Meta.