Governor Gavin Newsom rejects a bill mandating strict safety tests for AI, citing fears it would drive tech companies out of California.
California Governor Gavin Newsom has vetoed a high-profile bill aimed at tightening safety regulations for artificial intelligence, following intense opposition from the tech industry. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) sought to enforce rigorous safety testing on AI systems, but Newsom warned it could hinder innovation rather than protect the public.
In a statement on September 29, Newsom expressed concern that the bill disproportionately targeted established tech giants such as OpenAI and Google while failing to address emerging threats from newer technologies. He described the bill’s stringent requirements, including provisions for mandatory “kill switches” and risk mitigation plans, as overly broad, warning they could stifle even basic AI development.
Senator Scott Wiener, who sponsored the bill, sought to give the state attorney general authority to sue companies over harms caused by their AI systems and to impose strict safety measures to prevent misuse of the technology. Despite the veto, Newsom acknowledged the need for robust AI safety regulation and urged AI experts to collaborate on developing practical frameworks.
While Newsom’s decision was met with relief from many in Silicon Valley, who feared the bill could push companies out of California, some figures, including Elon Musk, voiced support for tighter AI regulations, underscoring the ongoing debate over how best to regulate the fast-growing technology.
Governor Newsom highlighted his administration’s commitment to AI oversight, noting that California has recently enacted more than 18 AI-related laws, reflecting the state’s determination to balance innovation with public safety.