New York’s RAISE Act: New Compliance Requirements for Frontier AI Developers

On December 19, 2025, New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act into law, establishing a comprehensive safety and transparency framework for so-called “large developers” of “frontier models” that are developed, deployed, or operating in New York. As discussed in more detail below, covered developers must implement mandatory safety protocols, comply with incident-reporting obligations, and answer to a new state oversight entity established by the Act.

Notably, it is reported that Governor Hochul and the legislature agreed to make certain additional amendments to the Act during the January 2026 legislative session, and the final version of the Act has not been published as of the date of this article.

1. Frontier Models and Large Developers

The RAISE Act applies to large developers of frontier models. “Frontier models” are defined as (1) AI models trained using greater than 10²⁶ computational operations with compute costs that exceed $100 million, or (2) AI models trained through “knowledge distillation” (in which a frontier model or its output is used to train a smaller model with similar or equivalent capabilities) with compute costs that exceed $5 million.

The Act currently defines a “large developer” as a company that has trained one or more frontier models and spent, in the aggregate, over $100 million in compute costs training frontier models.  However, this definition is expected to be replaced by a $500 million revenue requirement in the amendments that lawmakers committed to make in January 2026, which would align the definition of “large developer” with California’s Transparency in Frontier Artificial Intelligence Act (which provided the framework for New York’s RAISE Act). 

Under either the current definitions or the anticipated amendments, the Act captures the country’s largest AI developers, while startups and early-stage companies are likely outside its scope.
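For readers who want a concrete sense of how these thresholds interact, the following is a minimal sketch, in Python, of the coverage tests as described above. The function names and inputs are illustrative assumptions only; the Act prescribes no such test, and the figures reflect the current text rather than the anticipated amendments.

```python
# Illustrative sketch only: function names and inputs are hypothetical.
# Figures reflect the Act's current text, not the anticipated
# January 2026 amendments.

FRONTIER_OPS_THRESHOLD = 10**26          # computational operations
FRONTIER_COST_THRESHOLD = 100_000_000    # USD, prong (1) compute cost
DISTILLATION_COST_THRESHOLD = 5_000_000  # USD, prong (2) compute cost
LARGE_DEVELOPER_AGGREGATE = 100_000_000  # USD, aggregate compute cost

def is_frontier_model(training_ops: int, compute_cost_usd: float,
                      trained_by_distillation: bool) -> bool:
    """Prong (1) is the raw-scale test; prong (2) covers distillation."""
    if training_ops > FRONTIER_OPS_THRESHOLD and compute_cost_usd > FRONTIER_COST_THRESHOLD:
        return True
    return trained_by_distillation and compute_cost_usd > DISTILLATION_COST_THRESHOLD

def is_large_developer(trained_a_frontier_model: bool,
                       aggregate_compute_cost_usd: float) -> bool:
    """Current definition: at least one frontier model trained, plus over
    $100 million in aggregate compute costs training frontier models."""
    return trained_a_frontier_model and aggregate_compute_cost_usd > LARGE_DEVELOPER_AGGREGATE
```

If the anticipated amendments adopt a $500 million revenue threshold instead, the second test would turn on annual revenue rather than aggregate compute spend.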

2. Safety Protocol Requirements for Developers

The RAISE Act requires covered developers to implement, and maintain written documentation of, robust safety and security protocols designed to mitigate the risks of “critical harm” associated with high-capability AI systems.

“Critical harm” is defined as the death or serious injury of at least 100 people, or at least $1 billion in damages to money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model, through either (i) the creation or use of a chemical, biological, radiological, or nuclear weapon, or (ii) a model engaging in criminal conduct without meaningful human intervention, where the crime requires intent, recklessness, or gross negligence, including solicitation or aiding and abetting of such a crime.

Before a frontier model is deployed or made available in New York, a covered developer is required to conduct safety testing to determine if the model creates an “unreasonable risk” of critical harm, or if the model could be used to develop another frontier model in a manner that would increase the risk of critical harm. The Act does not quantify the degree of risk that would be deemed “unreasonable.”

A covered developer must also conspicuously publish a copy of its safety and security protocols, which may be appropriately redacted when necessary to protect the developer’s trade secrets and confidential information, employee or customer privacy, or public safety (among other things). Additionally, a covered developer must review its safety and security protocols annually to address any changes to the model’s capabilities or industry best practices, and must publish any material modifications to those protocols.

3. Mandatory Incident Reporting to the State

In addition to preventive safety measures, the RAISE Act also requires covered developers to notify the New York attorney general and Division of Homeland Security and Emergency Services within 72 hours of the developer learning of a qualifying “safety incident,” or learning facts sufficient to establish a reasonable belief that a safety incident has occurred. 

A qualifying “safety incident” includes any known incidence of critical harm, or an event suggesting that the model could cause or has caused critical harm, such as: 

  • A model autonomously engaging in behavior other than at the request of a user;
  • Unauthorized access to the model weights of the frontier model;
  • Critical failure of any technical or organizational controls, including controls limiting the ability to modify the model; or
  • Any “unauthorized use” of the model.
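As a rough illustration of the 72-hour reporting window described above, the sketch below computes the notification deadline from the moment a developer learns of a qualifying incident. The clock-start rule and function name are illustrative assumptions, not statutory text.

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: the 72-hour clock runs from when the developer
# learns of the incident (or of facts establishing a reasonable belief
# that one occurred); how that moment is fixed in edge cases is a legal
# question this sketch does not resolve.
REPORTING_WINDOW = timedelta(hours=72)

def notification_deadline(learned_at: datetime) -> datetime:
    """Latest time to notify the New York attorney general and the
    Division of Homeland Security and Emergency Services."""
    return learned_at + REPORTING_WINDOW

# Example: a developer confirms unauthorized access to model weights
# at noon UTC on March 2, 2027.
print(notification_deadline(datetime(2027, 3, 2, 12, 0, tzinfo=timezone.utc)))
# 2027-03-05 12:00:00+00:00
```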

Developers that fail to report incidents, or that provide false information, may face civil penalties in an action brought by the New York attorney general. The current text of the Act permits civil penalties of up to $10 million for a first violation and up to $30 million for each subsequent violation. However, Governor Hochul’s press release on the Act indicates that penalties will range from $1 million for a first violation to $3 million for subsequent violations, which suggests that this section of the Act will be subject to the forthcoming amendments.

Only the New York attorney general can enforce the RAISE Act; there is no private right of action.
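Because the statutory text and the Governor’s press release state different penalty amounts, any modeling of potential exposure should treat the caps as parameters until the amended text is published. The sketch below is a hypothetical illustration, not the Act’s formula.

```python
# Illustrative only: parameterized so that either set of figures can be
# modeled; the governing amounts will not be settled until the amended
# text is published.

def civil_penalty_cap(violation_number: int,
                      first_cap: int = 10_000_000,
                      subsequent_cap: int = 30_000_000) -> int:
    """Maximum civil penalty per the Act's current text. Pass
    first_cap=1_000_000, subsequent_cap=3_000_000 to model the figures
    in the Governor's press release instead."""
    return first_cap if violation_number == 1 else subsequent_cap
```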

4. Creation of a New Oversight Office

Governor Hochul’s press release also indicates that the RAISE Act will establish an oversight office within the New York Department of Financial Services, the department also responsible for enforcing the NYDFS Cybersecurity Regulation (23 NYCRR Part 500). This office will evaluate covered developers’ compliance with safety protocols and reporting requirements and will issue annual public reports on its findings. More detail is expected in the forthcoming amendments.

5. Next Steps

While the RAISE Act takes effect on January 1, 2027, the Act may face a federal challenge under President Trump’s December 11, 2025 executive order, which announces a national policy of establishing a “minimally burdensome” national standard for AI and directs the Department of Justice to challenge state laws deemed inconsistent with that goal. In any event, given the scope of the new compliance requirements the Act imposes, large developers of frontier models should familiarize themselves with the Act’s requirements and begin planning for compliance.
