SACRAMENTO, Calif. — California Governor Gavin Newsom vetoed an AI bill that its sponsor said would’ve enacted “common sense, first-in-the-nation safeguards to protect society from AI.”
Newsom vetoed SB 1047 yesterday after the California State Assembly approved the bill 49-15 and the California State Senate approved it 29-9 this summer. The bill was sponsored by state senator Scott Wiener (D-San Francisco) and introduced in February.
The bill drew support from some AI industry figures, including Anthropic and Elon Musk, and opposition from others, including OpenAI, Google and Meta.
Newsom said that “while well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision making or the use of sensitive data.”
“Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it,” Newsom said. “I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Newsom's office shared the governor's veto letter to the California State Senate.
Wiener said the veto is a "setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.”
“This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way," Wiener said.
Wiener's SB 1047 included safeguards against harmful uses of AI, including cyberattacks on critical infrastructure, the development of chemical, nuclear and biological weapons, and automated crime.
The bill would’ve required developers of the most advanced AI systems — those costing over $100 million to train — to test their models for “the ability to cause critical harm,” Wiener’s office said. Developers would’ve been required to establish guardrails to help mitigate those risks.
SB 1047 would’ve created a public cloud computing cluster, CalCompute, for startups, researchers and community groups to participate in the "responsible development" of large-scale AI systems.
It would’ve also created whistleblower protections for employees of frontier AI laboratories.