New York State Passes RAISE Act to Regulate Frontier AI Models

New York State has taken a significant step toward regulating artificial intelligence by approving the Responsible AI Safety and Education (RAISE) Act on June 15, 2025. This landmark legislation aims to mandate comprehensive transparency and safety measures for advanced, or 'frontier,' AI models, with the explicit goal of preventing disaster scenarios, defined as catastrophic events that cause the death or injury of more than 100 people or financial damages exceeding $1 billion, as reported by ETtech.
The RAISE Act, which now awaits the signature of New York Governor Kathy Hochul, is backed by prominent AI experts including Geoffrey Hinton and Yoshua Bengio. If enacted, it would establish the first legally binding transparency standards for frontier AI laboratories. The current bill is a direct reworking of a previously vetoed AI safety bill, which was criticized for focusing narrowly on large-scale models while failing to address high-risk deployments or potentially dangerous smaller AI systems.
Key provisions of the proposed RAISE Act are designed to enhance accountability and oversight in the development and deployment of frontier AI. It mandates that AI labs release detailed safety and security reports on their models. Labs must also promptly report safety incidents, whether caused by the behavior of their AI models or by malicious actors compromising AI systems. Non-compliance can result in civil penalties of up to $30 million, underscoring the bill's serious commitment to enforcement.
The push for AI governance in New York aligns with a broader global recognition of the need for guardrails in AI development and adoption. In India, for instance, AI adoption rates are notably higher than in many other countries, according to a recent IBM global survey. While India views AI as a significant catalyst for economic growth, there is a growing understanding that robust governance and safety measures are crucial for safe adoption and for building resilience against potential disruptions.
Consequently, demand for specialized AI trust and safety professionals has surged within India's tech multinational corporations and global capability centers (GCCs). Data from Teamlease Digital indicates a 36% year-on-year increase in hiring for these roles, with demand projected to grow a further 25-30% in 2025. This parallel trend underscores the increasing global emphasis on developing and deploying AI technologies responsibly, mitigating risks while harnessing their transformative potential.