California Gov. Newsom vetoes AI safety bill that divided Silicon Valley
Gov. Gavin Newsom of California on Sunday vetoed a bill that would have enacted the nation’s most far-reaching regulations on the booming artificial intelligence industry.
California legislators overwhelmingly passed the bill, called SB 1047, which was seen as a potential blueprint for national AI legislation.
The measure would have made tech companies legally liable for harms caused by AI models. In addition, the bill would have required tech companies to enable a “kill switch” for AI technology in the event the systems were misused or went rogue.
Newsom described the bill as “well-intentioned,” but noted that its requirements would have called for “stringent” regulations that would have been onerous for the state’s leading artificial intelligence companies.
In his veto message, Newsom said the bill focused too much on the biggest and most powerful AI models, saying smaller upstarts could prove to be just as disruptive.
“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good,” Newsom wrote.
California state Sen. Scott Wiener, a co-author of the bill, criticized Newsom's move, calling the veto a setback for artificial intelligence accountability.
“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way,” Wiener wrote on X.
The now-killed bill would have required the industry to conduct safety tests on its most powerful AI models. Without such requirements, Wiener wrote on Sunday, the industry is left policing itself.
“While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that the voluntary commitments from industry are not enforceable and rarely work out well for the public.”
Many powerful players in Silicon Valley, including venture capital firm Andreessen Horowitz, OpenAI and trade groups representing Google and Meta, lobbied against the bill, arguing it would slow the development of AI and stifle growth for early-stage companies.
“SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere,” OpenAI’s Chief Strategy Officer Jason Kwon wrote in a letter sent last month to Wiener.
Other tech leaders, however, backed the bill, including Elon Musk and pioneering AI scientists like Geoffrey Hinton and Yoshua Bengio, who signed a letter urging Newsom to sign it.
“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure. It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks,” wrote Hinton and dozens of former and current employees of leading AI companies.
On Sunday, in his X post, Wiener called the veto a “setback” for “everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public.”
Other states, like Colorado and Utah, have enacted laws more narrowly tailored to address how AI could perpetuate bias in employment and health-care decisions, as well as other AI-related consumer protection concerns.
Newsom has recently signed other AI bills into law, including one to crack down on the spread of deepfakes during elections. Another protects actors against their likenesses being replicated by AI without their consent.
As billions of dollars pour into the development of AI, and as it permeates more corners of everyday life, lawmakers in Washington still have not proposed a single piece of federal legislation to protect people from its potential harms, nor to provide oversight of its rapid development.
Copyright 2024, NPR