The Gist
- AI legislation impact. California’s veto of SB 1047 raises questions on how AI safety laws will shape marketing strategies by 2026.
- State vs. national AI regulation. The veto shows states taking the lead on AI policies, but a national approach might offer more consistent guidelines.
- Business lessons from the veto. Tracking the debates on AI safety helps businesses align with public sentiment and regulatory trends for better strategic decisions.
After a few weeks of suspense, California Governor Gavin Newsom decided to veto SB 1047, the AI safety bill that garnered media attention for its potential to set tech-industry guidelines for AI usage.
Critics feared that the bill’s regulatory framework would stifle innovation and needlessly burden AI developers while remaining ineffective against the threats it was meant to address.
In a detailed online statement, Gov. Newsom explained that he did not believe that SB 1047 was “the best approach to protecting the public from real threats posed by the technology.” Newsom instead signed other bills covering situations in which AI usage would have a significant impact, ranging from AI use in healthcare services to AI-generated deepfakes and misinformation.
The question marketers should be asking is how these AI legislation debates will influence marketing strategies. AI powers predictive analytics and customer service activities, so AI regulations will directly affect firms that rely on AI-based systems to deliver customer experiences.
How Did Californians Feel About the Bill?
SB 1047 became a standout piece of legislation in the current discussion of state bills addressing AI because it drew on many ideas from the EU AI Act and the Biden Executive Order on AI. It also raised eyebrows because its main author, California State Sen. Scott Wiener (D-San Francisco), represents San Francisco, home to much of the AI industry.
It generated high interest among Californians concerned about the use of AI. The AI Policy Institute polled state residents and found that 65% supported SB 1047, even before Senator Wiener incorporated changes requested by Anthropic, maker of ChatGPT competitor Claude, and other tech firms that later backed the bill. Senator Wiener also appeared in news segments, including on Bloomberg, to bolster support.
Still, many prominent tech and government leaders took sides on SB 1047. Some believed the bill’s requirements would have reduced incentives for innovation, raised development costs for startups and failed to deliver the protections the bill promised. Others felt the bill’s establishment of a public agency and a public cloud framework was a step in the right direction toward guardrails that would advance AI safety and equity.
State Bills Addressing AI Introduce Alternatives to SB 1047
Another factor in the veto decision was the content of some of the 17 AI-related bills Governor Newsom approved. Many of these addressed different issues; three bills, for example, set requirements for removing deepfakes from social media.
But a few bills touched upon measures that SB 1047 had raised or extended existing provisions. For example, SB-896 requires California’s Office of Emergency Services to work with companies hosting large language models on risk analyses of potential threats to critical state infrastructure. This is comparable to the risk assessments described in SB 1047.
Two other comparable bills are AB-1008, which extends existing privacy protections to companies using AI, and AB-2013, which requires companies that use AI to disclose certain specifications, such as details of the training dataset.
Furthermore, Newsom announced a council of AI experts to assist the governor in developing guardrails for deploying generative AI. The council would focus on “developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.” The council, Governor Newsom, and the state legislature are expected to work together in the next session.
Lessons from California’s Veto of the AI Safety Bill
The veto holds several lessons for business leaders and legislators confronting the challenge of regulating influential technology.
Follow All the Bills on AI Safety
Doing so can help in comparing the possible effects of different bills, even those outside your state. Staying aware of such legislation matters more than ever, given the wider reach of media messaging, the faster pace of tech innovation in products and services and the shorter attention spans of the people consuming that media.
Understand the Consensus on AI Safety
Watch the AI bill debates, regardless of whether the policy being debated becomes law. The consensus emerging from these debates indicates public sentiment about the technology, which can guide how companies that are paying attention communicate their commitments to AI safety.
The passage of SB 1047 through the California state legislature indicates that, at some level, the bill galvanized the conversation about what precautionary threshold should exist for AI development and what obligations developers should bear. Newsom’s veto has only expanded that conversation.
Spotlight on AI Safety Discussions
Until now, many discussions about AI safety have played out among industry insiders. That is understandable given that AI still feels nascent, as if it were in an early stage of Gartner’s hype cycle.
Yet good legislative intervention must offer precise, evidence-based remedies and command a solid consensus among key supporters outside the tech community as well as those inside it.
Supporters of SB 1047 gained a moral victory. The bill brought AI issues into the spotlight and raised the quality of engagement on what AI legislation should look like. One hallmark of politics is legislation that never gets signed into law yet still inspires productive discussion. Whatever the debates it raised, SB 1047 was high profile enough to encourage prominent members of society, industry and government to pay more attention to opportunities for collaboration on tech issues.
The Future of AI Safety Legislation
Time will tell whether this collaborative sentiment among leaders proves sustainable and extends to other civic tech topics, such as national regulation of AI usage. States have taken the initiative, but a national policy would better establish consistent building blocks for protecting the public from AI harms and risks while encouraging development that harnesses the best of what AI can offer.
Marketers should watch how governors in each state respond to AI policy bills; each response will differ with local politics, for sure. Doing so will keep marketers abreast of how AI safety policies evolve and how those policies affect the AI they want to use in their customer experiences.