Explosive AI bill sits on Gov. Gavin Newsom’s desk
Update: CA Gov. Gavin Newsom vetoed this AI legislation on Sept. 29, saying the bill was too broad and would stifle innovation. While an override is technically possible, the CA Assembly has already recessed and is largely focused on the national election, and the state hasn’t overridden a veto in more than four decades. Read on for what the bill would have done to regulate AI.
On August 29, California lawmakers overwhelmingly voted (45-15) in favor of Senate Bill 1047, a first-of-its-kind bill that would hold AI developers liable for catastrophic harm caused by their models if proper safety measures weren’t taken. The bill has exposed deep rifts in the tech world and shaken up party lines. It now sits on the desk of Gov. Gavin Newsom, who has until Sept. 30 to sign or veto it.
SB 1047 would require technology developers of “covered models” – or models that cost at least $100 million to train – to implement cybersecurity features and near-immediate shutdown capabilities in case of an emergency.
SB 1047 aims to protect society from “critical harms” that could be caused by advanced AI systems in the hands of malicious users. The bill defines “critical harms” as severe, large-scale damage to public safety or infrastructure. For example, it specifically cites a scenario where a bad actor might use AI to launch a cyberattack resulting in at least $500 million in damages. For context, the Change Healthcare outage cost its parent company UnitedHealth an estimated $2.5 billion, and cost its service providers roughly $1 billion a day.
Though earlier drafts of the bill allowed for the CA Attorney General to pursue criminal charges against non-compliant parties, the current draft specifies that violations will only result in civil penalties.
This contentious bill has drawn a clear line between those who believe it’s a necessary step to safeguard against potential catastrophic risks posed by advanced AI, and those who argue it could stifle innovation and harm California’s tech leadership.
Over 100 current and former Big Tech employees signed an open letter supporting the bill, breaking with their employers’ opposition to it.
“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure. It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”
The employee letter has 37 signatories who currently work for companies opposing the bill, including 19 from Google DeepMind, 15 from OpenAI, and 3 from Meta. Geoffrey Hinton, a former Google Brain researcher and Turing Award winner; Chris Olah, an Anthropic co-founder; and Jan Leike, a former OpenAI researcher, all signed their names.
Unsurprisingly, major tech companies like Google and Meta are against the bill and are joined by Speaker Emerita Nancy Pelosi, San Francisco Mayor London Breed, and venture capital firm Andreessen Horowitz. Critics argue that the legislation could harm the U.S. AI ecosystem and stifle innovation.
“The view of many of us in Congress is that SB 1047 is well-intentioned but ill informed,” shared Pelosi in a statement. “AI springs from California. We must have legislation that is a model for the nation and the world. We have the opportunity and responsibility to enable small entrepreneurs and academia – not big tech – to dominate.”
Senator Scott Wiener, the bill’s author, responded to Pelosi, arguing that the legislation is crucial for ensuring “California stays in the lead on AI innovation while also protecting the health and safety of our residents from potential catastrophic risks.”
“Innovation and safety are not mutually exclusive, and I reject the false claim that in order to innovate, we must leave safety solely in the hands of technology companies and venture capitalists.”