California’s SB 1047, a bill that would impose liability on AI developers, just passed a vote in the state assembly and is likely to head to Governor Gavin Newsom soon for signature or veto. Its enactment would be a grave mistake: the legislation would stifle AI innovation and safety research. The bill is deeply flawed because it makes the fundamental error of regulating a general purpose technology rather than applications of that technology.
AI is useful for many applications. It can be used as part of a medical device, to power a social media feed, to build a helpful chatbot, or to generate deceptive political deepfakes. Unfortunately, it is not within an AI provider’s power to determine how someone will use its technology downstream. While beneficial use cases of AI vastly outnumber the problematic ones, there is no way for a technology provider to guarantee that no one will ever use it for nefarious purposes.
Technologies like AI can be applied in countless ways to solve problems; the applications of a general purpose technology are almost always where the specific benefit (or potential harm) arises. That is why legislators should regulate applications rather than the technology itself.
Consider the electric motor. It can be used to build a blender, an electric vehicle, a dialysis machine, or a guided bomb. It makes more sense to regulate the blender than the motor inside it. Further, there is no way for an electric motor maker to guarantee that no one will ever use its motors to build a bomb. Making the motor manufacturer liable for nefarious downstream uses puts it in an impossible position. A computer manufacturer likewise cannot guarantee that no cybercriminal will use its wares to hack into a bank, and a pencil manufacturer cannot guarantee that its pencils will never be used to write illegal speech. In other words, whether a general purpose technology is safe depends much more on its downstream application than on the technology itself.
There are already successful examples of regulating AI’s problematic applications. For example, non-consensual deepfake porn is a disgusting application that is harming many people, including underage girls. I am encouraged that the U.S. Senate unanimously passed the DEFIANCE Act in July to combat this. AI has also been used to generate fake product reviews; the Federal Trade Commission issued guidance earlier this month to rein this in. Both are welcome developments that will protect Americans.
There are other issues with SB 1047. AI developers already find its requirements ambiguous and confusing, which means companies will end up hiring armies of lawyers and building bureaucracy to try to comply. That will have a stifling effect, particularly on free, open source software. Today’s smartphones, laptops, websites—pretty much all valuable software—were built in part using open source. In the case of AI, some large companies have trained proprietary models. Some of these proprietary model providers would hate to have to compete with free, open source software, and they have lobbied intensely for regulations to kneecap it.
But open versions of models have been critical for AI safety research. They give academic researchers the ability to study cutting-edge models, spot problems, and propose solutions. Open models also—as the FTC points out—encourage competition. Unfortunately, by raising compliance costs for open source efforts, SB 1047 will discourage the release of open models, making AI less competitive and less safe.
SB 1047 has been amended numerous times since it was first proposed. Some of these revisions have made the bill less bad. For example, developers are no longer subject to criminal perjury charges (which could have carried prison time), and dollar thresholds have been added ($10 million or $100 million, depending on the nature of the work) below which developers are exempt from some of the bill’s requirements. But it is still a harmful bill. It doesn’t reflect an understanding of how AI development works, or where it is going in the future.
AI is a nascent technology that will benefit billions of people. Its positive use cases—as well as a smaller number of harmful ones—are still poorly understood. California’s governor should not let irresponsible fear-mongering about AI’s hypothetical harms lead him to take steps that would stifle innovation, kneecap open source, and impede market competition. Rather than passing a law that hinders AI’s technological development, California, and the U.S. at large, should invest in research to better understand harms that remain unidentified, and then target those harmful applications.
Andrew Ng is the founder of DeepLearning.AI, managing general partner at AI Fund, executive chairman of LandingAI, chairman and co-founder of Coursera, and an adjunct professor at Stanford University. In 2023, he was named to the Time100 AI list of the most influential people in AI.