Image: Gage Skidmore from Peoria, AZ, United States of America, CC BY-SA 2.0, via Wikimedia Commons

The controversial AI safety bill SB 1047, which was passed by the California State Legislature, has been vetoed by Governor Gavin Newsom. The bill, authored by State Senator Scott Wiener, aimed to impose multiple safety protocols on companies developing advanced AI models. SB 1047 drew criticism from several directions, including some Democratic lawmakers and companies such as OpenAI.

“To the Members of the California State Senate: I am returning Senate Bill 1047 without my signature. This bill would require developers of large artificial intelligence (AI) models, and those providing the computing power to train such models, to put certain safeguards and policies in place to prevent catastrophic harm. The bill would also establish the Board of Frontier Models – a state entity – to oversee the development of these models,” Newsom wrote in a statement. “I do not believe this is the best approach to protecting the public from real threats posed by the technology,” he added.

For its part, the bill does place a burden on companies to introduce safety protocols and safeguards, applicable to models costing at least $100 million to train and using more than 10^26 FLOPs (floating-point operations, a measure of computation) during training. These models are known as frontier models, and can be deployed to perform complex tasks in cybersecurity, data management, and similar fields. Going forward, Newsom is seeking the aid of leading AI researchers, including Stanford University’s Fei-Fei Li, to help shape the future AI policies of the state.
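For illustration, the two-pronged coverage test described above (a training cost of at least $100 million and on the order of 10^26 floating-point operations) can be sketched as a simple check. The function and constant names below are hypothetical, and the thresholds are taken from the figures in this article, not from the bill's statutory text:

```python
# Illustrative sketch of the coverage thresholds described above; this is
# a simplification, not the bill's legal definition of a covered model.
TRAINING_COST_THRESHOLD_USD = 100_000_000   # at least $100 million
TRAINING_COMPUTE_THRESHOLD_FLOP = 1e26      # 10^26 floating-point operations

def is_covered_frontier_model(training_cost_usd: float,
                              training_compute_flop: float) -> bool:
    """Return True if a model meets both thresholds described in the article."""
    return (training_cost_usd >= TRAINING_COST_THRESHOLD_USD
            and training_compute_flop >= TRAINING_COMPUTE_THRESHOLD_FLOP)

# Example: a model trained for $150M using 3e26 FLOPs would meet both thresholds.
print(is_covered_frontier_model(150_000_000, 3e26))  # True
print(is_covered_frontier_model(50_000_000, 3e26))   # False: cost below threshold
```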

The bill would have required companies to implement multiple safeguards while working on AI models. These include a “kill switch” to shut down AI systems that pose an imminent threat, as well as protections for whistleblowers who reveal violations related to AI safety. Supporters of the bill argue that it is a necessary first step toward the regulation of AI.
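The “kill switch” requirement is a policy mandate rather than a prescribed mechanism, but the underlying idea, a capability to fully and irreversibly halt a system on demand, can be sketched roughly. Everything below (the class, its methods, and the placeholder inference call) is hypothetical:

```python
import threading

class ModelService:
    """Hypothetical sketch of a service with a full-shutdown capability,
    loosely in the spirit of the bill's 'kill switch' requirement."""

    def __init__(self) -> None:
        self._shutdown = threading.Event()  # set once, never cleared

    def activate_kill_switch(self) -> None:
        """Irreversibly halt all further use of the model."""
        self._shutdown.set()

    def generate(self, prompt: str) -> str:
        """Serve a request, refusing if the kill switch has been activated."""
        if self._shutdown.is_set():
            raise RuntimeError("service has been shut down")
        return f"response to: {prompt}"  # placeholder for real inference

svc = ModelService()
print(svc.generate("hello"))   # normal operation
svc.activate_kill_switch()
# svc.generate("hello") would now raise RuntimeError
```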

Not everyone has been supportive of the bill; in fact, SB 1047 drew stiff opposition from major technology companies. Several Silicon Valley firms opposed it, including OpenAI, social media company Meta, and tech titan Google, as did individuals such as US Congressman Ro Khanna and former House Speaker Nancy Pelosi.

Critics argue that the provisions of SB 1047 are too broad and could stifle innovation. Several lawmakers from both parties, as well as figures such as Yann LeCun, Meta’s Chief AI Scientist, believe it would hinder AI development itself and burden small businesses. “At Mozilla, we believe that open-source software is essential for advancing technology in a transparent and equitable way, and SB 1047 risked undermining this by placing unnecessary burdens on those developing open-source AI models,” said Mozilla’s Senior Public Policy and Government Relations Analyst, Joel Burke.

For his part, Newsom argued that the bill did not account for whether an AI system was used in a high-risk environment or involved sensitive data. Instead, it took a one-size-fits-all approach, focusing only on “the most expensive and large-scale models” and implying that only large AI models require regulation and oversight, thus creating a false sense of security among the public. “Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good,” Newsom said.