Democracy, California Gov. Gavin Newsom warns, is on the brink. The culprit? A wave of “disinformation powered by generative AI,” poised to “pollute our information ecosystems like never before.” With the 2024 election looming, Newsom and California Democrats argue that artificial intelligence-generated content threatens to warp public perception. In response, the Golden State has swiftly enacted two bold new laws designed to stem the tide of “deceptive” content spreading across the internet.
These laws not only likely violate the First Amendment, which protects even false political speech, but are also rooted in exaggerated fears of AI disinformation.
An obviously deepfaked video of Vice President Kamala Harris, widely shared by Elon Musk, prompted Newsom’s push to regulate online discourse. But, of course, these laws will also ban the many parody AI videos of Donald Trump.
To be sure, disinformation, deepfakes and propaganda can spread widely and have real-world effects. But as researchers have pointed out — mostly to deaf ears — the extent and impact of disinformation have, thus far, typically been much smaller than the alarmist scenarios assume. And a recent study by MIT researchers found that humans can frequently discern deepfakes using both audio and visual cues. That’s why widely shared deepfakes of Harris and Trump failed to convince anyone they were real.
A closer look at 2024 elections around the world likewise shows that fears of AI deepfakes have been largely overblown.
Before this summer’s European parliamentary elections, headline after headline sounded the alarm that “AI could supercharge disinformation” and put the future of democracy at stake. A perfect storm of Russian propaganda and artificial intelligence, the warnings went, threatened to drown an election of 373 million eligible voters across 27 countries in disinformation and deepfakes.
That message was echoed by think tanks, researchers and European Union leaders ahead of the June election. Věra Jourová, the European Commission vice president for values and transparency, said AI deepfakes of politicians could create “an atomic bomb … to change the course of voter preferences.” In response, the European Commission sent alerts to social media platforms and set up crisis units expecting to deal with efforts to cast doubt on the legitimacy of the election’s outcome for weeks after the vote.
So what happened? Despite active disinformation networks on social media platforms, the E.U.-funded and often alarmist European Digital Media Observatory identified no major disinformation-related incident or any deluge of deepfakes. In the U.K. elections, British fact-checking group Full Fact told Politico, “There hasn’t been a [deepfake] which has just dominated a day of the actual election campaign.”
What about the rest of the world? Elections have taken place in many countries, some with less resilient democratic institutions and more vulnerable election procedures than European democracies.
A Washington Post article highlighted India’s 2024 elections as a “preview” of how AI is transforming democracy. Despite an election “awash in deepfakes,” researchers found, AI had little impact, instead proving a net positive by connecting candidates with voters.
In Pakistan and Indonesia, observers reported minimal disinformation, with viral fake news fact-checked on social media. A coalition of civil society groups and government agencies in Taiwan ensured transparency and crowdsourced fact-checking, mitigating China’s interference attempts.
It should be a positive story that democracies around the world have, to this point, proved more resilient than many feared. More importantly, these election results demonstrate that a critical mass of voters can think for themselves and don’t slavishly fall for lies, propaganda and nonsense, even when slickly produced with cutting-edge technology.
As the 2024 U.S. election approaches, we should be vigilant but resist the urge to sacrifice free speech in the name of fighting disinformation. Our democracy is more resilient than fearmongers suggest.
California’s two new laws, on the other hand, are panic-driven and counterproductive, and they open the door to state-sanctioned censorship of lawful speech.
A.B. 2839 prohibits the use of AI deepfakes about political candidates, while A.B. 2655 requires large platforms to block “deceptive” content about politicians, respond to every public complaint within 36 hours and remove “substantially similar” content.
Both laws will chill political speech, infringe on Californians’ ability to criticize politicians, undermine platforms’ rights to moderate content and even prevent people from highlighting “deceptive” content as fake.
While A.B. 2839 exempts political satire and parody, it requires those responsible to disclose that the “materially deceptive” content isn’t real. That requirement will surely undermine the impact of these messages: satire loses its force when commentators must declare they are just joking.
We would also be wise to remember that the very politicians who generate headlines about AI disinformation — and insist that they should be trusted to define this nebulous concept — are frequently the sources of political misinformation.
Instead of succumbing to elite panic, we should face the challenge of disinformation while heeding the words of former Supreme Court Justice Anthony Kennedy, who said, “Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth.” In defending free speech, we must avoid granting the government unprecedented powers to decide what truth is, recognizing that the greatest threat to democracy often comes from those who claim to protect it.
(Disclosure: The Future of Free Speech is a nonpartisan think tank in joint partnership with Vanderbilt University and Denmark-based Justitia. It has received limited financial support from Google for specific projects not related to the subject of this piece. In all cases, The Future of Free Speech retains full independence and final authority for its work.)