Newsom’s Veto of California AI Bill Sparks Debate & Controversies


KEY TAKEAWAYS

  • Governor Gavin Newsom has vetoed proposed legislation aimed at controlling the growth of AI.
  • The legislation, known as SB-1047, was an early effort to regulate a powerful technology that has so far seen little regulatory action.
  • Opponents and proponents of SB-1047 voiced passionate views on the proposed bill, which attracted wide attention across technology, business, Hollywood, and academia.

In a decision that has major implications for the future of artificial intelligence regulation, Governor Gavin Newsom has vetoed proposed California state legislation that would have set guardrails for the explosive growth of AI. The bill, SB-1047, formally the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was the first attempt by any state to comprehensively regulate the development of AI. Because so many AI developers are based in California, it would have had a widespread impact on the industry beyond the state.

The bill proposed the following measures to grant exhaustive regulatory powers over how AI technology is developed and used by large companies:

  • Requiring developers, before beginning to train AI models, to implement “kill switch” capabilities that can promptly enact a full shutdown.
  • Prohibiting developers from using an AI model or its derivative for a purpose not exclusively related to the training or reasonable evaluation of the model.
  • Requiring developers to annually retain third-party auditors to perform independent audits of compliance with those provisions starting in 2026.

“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom said in his official statement. “Instead, the bill applies stringent standards to even the most basic functions – as long as a large system develops them.” Although vetoed, the AI legislation may yet be revised and become law. It had overwhelming support in both houses of the state legislature, and Newsom has directed lawmakers to redraft its provisions for the next session.

To aid in this process, he assembled a cohort of technology experts for advice about regulating generative AI. The advisory group includes Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at UC Berkeley, and Fei-Fei Li, a professor of computer science at Stanford, often referred to as the “Godmother of AI.” Li recently wrote an article opining that SB-1047 “will unduly punish developers and stifle innovation” and “cripple public sector and academic AI research.”

Interest in AI Regulation Is Growing

The need for AI regulation has come sharply to the forefront, as the powerful technology has developed at a breakneck pace even as regulation has yet to progress past calls for action. In drafting the bill, California legislators were responding to a dangerous vacuum: AI has enormous power to influence consumers, business, even politics, yet it is almost completely unregulated.

Congress, despite holding hearings, has passed no legislation. A few states have nibbled at the edges, enacting laws intended to protect against deepfakes and to prohibit discriminatory hiring practices based on AI models. In August 2024, the European Union’s AI Act entered into force, putting the beginnings of guardrails on AI development and requiring AI-generated content to be labeled as such. Yet the larger functionality of artificial intelligence remains a free-for-all, driven by fast-moving, well-funded companies equipped to outrun slow-moving legislative efforts.

Given that AI can already closely mimic human conversation and behavior, reaping vast financial rewards in the process, the plodding pace of government legislation will be hard pressed to keep up, especially since governmental bodies are often divided and lawmakers are not typically technology experts.

“The challenge is how new this technology is and the amount of time needed by governments to put laws in place,” Melissa Ruzzi, Director of AI at AppOmni, told eWEEK. Still, the effort is essential. “We need to be open to law changes as we all learn together about how AI is being used in all different sectors of society,” she said.

AI Makes Strange Bedfellows

Supporters and critics of California’s AI bill voiced passionate opinions—and the alliances cut across culture and business in surprising ways. Predictably, large AI players including Microsoft, Google, OpenAI, and Meta opposed the bill. Jason Kwon, OpenAI’s Chief Strategy Officer, wrote Newsom to say that the bill would “slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.”

On the other hand, leading AI startup Anthropic—seen as a top competitor to OpenAI—offered amendments to the bill and said it could support a revised draft of the legislation.

“AI springs from California,” said former Speaker of the House Nancy Pelosi. “We must have legislation that is a model for the nation and the world. We have the opportunity and responsibility to enable small entrepreneurs and academia—not big tech—to dominate.” However, Pelosi weighed in against SB-1047, calling it “well-intentioned but ill informed.” Her opinion was echoed by a number of California members of Congress, who shared their views with Newsom that the bill was too open-ended and not well drafted for an emerging technology.

Tesla CEO Elon Musk supported the legislation. “This is a tough call and will make some people upset, but all things considered, I think California should probably pass the SB-1047 AI safety bill,” he posted on X, his social media platform. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.” In early 2023, Musk signed an open letter, along with more than a thousand AI experts, calling for a pause on the development of “giant” AI systems until the technology’s risks could be properly managed.

A group of 120 Hollywood celebrities wrote a letter to support the bill—perhaps partially because AI’s ability to generate content poses such a profound threat to their livelihood. Among the signers were Judd Apatow, Shonda Rhimes, Whoopi Goldberg, and Rob Reiner. “This bill is not about protecting artists—it’s about protecting everyone,” they wrote. “Grave threats from AI used to be the stuff of science fiction, but not anymore.”

Similarly, a group of academics wrote Newsom to support the bill. The group included Geoffrey Hinton, a professor at the University of Toronto, widely known as the “godfather of AI.” The academics said that “decisions about whether to release future powerful AI models should not be taken lightly, and they should not be made purely by companies that don’t face any accountability for their actions.”

Read our guide to AI and privacy issues to learn more about challenges, best practices, and responsible use of AI.
