In one of the most consequential legislative steps toward AI regulation to date, the California State Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, commonly referred to as SB 1047. The bill marks a significant shift in the regulatory landscape for AI, positioning California at the forefront of efforts to manage the risks associated with advanced AI technologies.

SB 1047 introduces a framework aimed at enhancing the safety and security of AI systems, particularly so-called frontier models: sophisticated systems with the potential to cause significant harm if not properly managed. The bill mandates that AI companies meet several requirements before training such models. Chief among them is the implementation of a reliable “kill switch” mechanism that would allow developers to immediately deactivate a model if it poses a serious risk. This provision is intended to address concerns that AI systems could malfunction or be exploited in ways that lead to widespread damage.

Additionally, the legislation requires that AI models be protected against “unsafe post-training modifications,” ensuring that any changes made to the models after their initial development do not compromise their safety. Companies are also obligated to establish rigorous testing procedures to evaluate whether their models, or any derivatives, are particularly vulnerable to causing or enabling critical harm. Supporters of the bill, including high-profile figures like Elon Musk, argue that SB 1047 represents a necessary step toward ensuring that AI development proceeds in a manner that prioritizes public safety.

Senator Scott Wiener, the principal author of SB 1047, has been a staunch advocate for the bill, framing it as essential to safeguarding public interests in the rapidly evolving field of AI. Wiener emphasizes that the bill is designed not to stifle innovation but to ensure that AI technologies are developed and deployed in a manner that minimizes potential risks. He asserts that the bill aligns with commitments major AI labs have already made to test their models for catastrophic safety risks, and that it represents a balanced approach to addressing foreseeable AI challenges.

The legislation has already undergone several amendments in response to feedback from various stakeholders, including AI companies and open-source advocates. These adjustments include replacing potential criminal penalties with civil ones and narrowing the enforcement powers granted to the California attorney general. As SB 1047 heads back to the State Senate for a final procedural vote, its fate rests with Governor Gavin Newsom, who will have until the end of September to sign or veto it. If enacted, the bill could set a precedent for AI regulation in the US, potentially prompting other states and federal lawmakers to consider similar measures.

Despite the bill’s safety-oriented objectives, SB 1047 has faced substantial opposition from several quarters within the tech industry. Major AI companies, including OpenAI and Anthropic, have expressed concerns that the legislation’s focus on catastrophic harms could disproportionately impact smaller developers and open-source projects. Critics argue that the bill’s requirements could impose significant burdens on these entities, potentially driving them away from California and hindering innovation within the state. Notably, prominent figures such as Representative Nancy Pelosi and other members of Congress have voiced opposition to the bill, echoing concerns from the tech industry about its potential to stifle technological advancement. Critics also suggest that AI regulation would be more appropriately addressed at the federal level, rather than through state-specific legislation.