Legal experts say Congress is unlikely to pass a bill regulating deceptive AI content any time soon, even as foreign countries target the U.S. with deepfakes: videos or images that have been fabricated or digitally altered using AI.
In the weeks leading up to the 2024 presidential election, deepfakes targeting Vice President Kamala Harris’ presidential campaign circulated online and were linked to Russia.
“We’ve gotten great different versions of bills … but crossing the finish line seems to be something that is remarkably difficult and it’s because I don’t think there’s this sense of urgency,” Rebecca Delfino, professor of law and associate dean of clinical programs and experiential learning at Loyola Law School Los Angeles, told Legaltech News.
Some members of Congress have been looking to pass legislation regulating AI, including deepfakes, in the period between the presidential election and the end of this session of Congress—a period known as the lame-duck session, according to Politico.
Still, experts say that it’s unlikely that Congress will pass any AI legislation in the near future, let alone during the lame-duck session. Delfino noted that the weaponization of AI will probably get worse before Congress enacts policy around it.
“We see policymakers act with swiftness when we have catastrophic events presented whether it be 9/11, or to a lesser extent, the pandemic … unfortunately, what I truly fear is until a deepfake creates that kind of chaos, there won’t be the ‘wow, we need to move swiftly,'” she said.
While progress on AI legislation has been made in states like California, Utah and Colorado, the laws function differently in each state.
“There’s not consensus on what a federal law would even look like,” Hunton Andrews Kurth partner Lisa Sotto said. “The EU AI Act … that’s going to be the most important underpinning for AI regulation in the world, no question about it, very comprehensive, but we’re not there yet in the states.”
Over the past few months, the 2024 U.S. election has seen a marked rise in deepfakes targeting election officials.
In October, for instance, a video of a man identifying himself as Matthew Metro making a fabricated allegation against Democratic vice-presidential nominee Tim Walz went viral on X.
The video was eventually discovered to be a hoax posted from a fake account. The real Metro denied making the allegation or appearing in the video, and said he had no connection to the account it was posted from.
Disinformation researchers told the Washington Post that the video is probably linked to John Mark Dougan, a former U.S. Marine who had received payments from the Russian military intelligence service that funded his fake news sites.
Although some do not believe this particular video is a deepfake, researchers and experts are concerned that Dougan, who is based in Moscow, will work with Russian intelligence to produce deepfakes and spread disinformation surrounding the election.
However, because many deepfakes are created by foreign actors, some question whether any U.S. law can mitigate the threat.
“None of [the pending bills on deepfakes] will pass in time to be relevant to the 2024 election, and it’s unclear, even if they did pass, how much impact they would have on foreign malicious actors,” Eversheds Sutherland’s congressional investigations practice co-lead Neal Higgins said.
Similar videos are expected to pop up in the days after the election.
On Monday, the FBI warned the public of deepfake election videos falsely claiming to be made by the agency.
“All of those efforts have in common the goal of sowing chaos and confusion and undermining our electoral system during an incredibly close election where there’s real peril that such disinformation could have real-world effects,” Higgins said.