New York Passes Bill for AI Safety and Disaster Prevention

The New York state legislature took a significant step in AI regulation on Thursday by passing the Responsible AI Safety and Education (RAISE) Act. The landmark bill, now awaiting Governor Kathy Hochul's signature, would establish America's first legally mandated transparency standards for frontier AI models. It targets advanced AI systems from leading companies such as OpenAI, Google, and Anthropic, with the aim of preventing them from contributing to disaster scenarios, defined as incidents that kill or injure more than 100 people or cause more than $1 billion in damages.
The passage of the RAISE Act marks a notable victory for the AI safety movement, which has lost ground in recent years as Silicon Valley has prioritized rapid innovation. Prominent safety advocates, including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio, have publicly backed the legislation. Its provisions bear some resemblance to California’s controversial AI safety bill, SB 1047, which was ultimately vetoed. However, New York state Senator Andrew Gounardes, a co-sponsor, emphasized that the RAISE Act was deliberately designed to avoid stifling innovation among startups or academic researchers, a primary criticism leveled against SB 1047. Underscoring the urgency of such guardrails, Gounardes said, “The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving. The people that know [AI] the best say that these risks are incredibly likely… That’s alarming.”
If signed into law, the RAISE Act would impose stringent requirements on the world’s largest AI labs. They would have to publish thorough safety and security reports on their frontier AI models and promptly report safety incidents, such as concerning model behavior or the theft of a model by malicious actors. Failure to meet these standards could carry civil penalties of up to $30 million, which New York’s attorney general would be empowered to levy.
The bill's scope is deliberately narrow: it regulates only the largest companies, those whose AI models were trained using more than $100 million in computing resources and are made available to New York residents. That covers major players worldwide, whether based in California, like OpenAI and Google, or in China, like DeepSeek and Alibaba. Nathan Calvin, vice president of state affairs and general counsel at Encode, who worked on both the RAISE Act and SB 1047, noted that the New York bill intentionally departs from its Californian predecessor in key ways: the RAISE Act does not require a “kill switch” for AI models, nor does it hold companies that post-train AI models accountable for critical harms.
Despite these careful distinctions, the bill has drawn substantial pushback from Silicon Valley. Andreessen Horowitz general partner Anjney Midha criticized it publicly on X, writing, “The NY RAISE Act is yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead.” Andreessen Horowitz and startup incubator Y Combinator were among the most vocal opponents of California’s SB 1047. Assemblymember Alex Bores, another co-sponsor of the RAISE Act, acknowledged the industry resistance but maintained that the bill would not limit tech companies’ ability to innovate.
Anthropic, an AI lab known for its focus on safety and its recent calls for federal transparency standards, has not taken an official stance on the bill. However, Anthropic co-founder Jack Clark has expressed concern that the RAISE Act’s broadness could pose risks to “smaller companies.” Gounardes countered that the bill was specifically written not to apply to small companies. Another common criticism is that AI developers might simply withhold their most advanced models from New York, as some have done in Europe in response to its stringent tech regulations. Bores dismissed this concern, arguing that the regulatory burden is relatively light and that pulling out of the state with the third-largest GDP in the U.S. would make little economic sense for most companies.
Beyond its core aims of preventing disaster scenarios and improving transparency, the RAISE Act also touches on algorithmic accountability, bias mitigation, worker protection, and misinformation control. It would set standards for data collection and model validation to reduce discriminatory outcomes, address job displacement through retraining programs, and curb the spread of AI-generated misinformation, emphasizing robust AI risk management to anticipate and minimize harm. By establishing these standards for AI transparency and accountability, New York's legislation could serve as a precedent for other states and the federal government, much as the California Consumer Privacy Act (CCPA) shaped data privacy regulation nationwide. As the technology evolves, such a proactive approach to regulation aims to keep compliance and responsible AI development ahead of the risks.