One false start and one year later, California has enacted an artificial intelligence safety and transparency law that is the first of its kind in the United States. Almost exactly one year after vetoing a predecessor AI safety bill, Governor Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA) into law on September 29, 2025. TFAIA represents the United States’ most aggressive attempt yet to impose safety regulations on the largest “frontier” AI labs (e.g., OpenAI, Meta, Anthropic, and Google DeepMind). AI companies that meet the law’s thresholds will be required to make certain disclosures regarding their safety and security practices. TFAIA also creates whistleblower protections for employees of frontier AI developers.
California’s previous attempt at comprehensive AI regulation, SB 1047, was vetoed by Governor Newsom in September 2024 after intense lobbying and backlash from the AI industry, which argued that the bill would unreasonably stifle innovation and threaten California’s (and, by extension, the United States’) position as the world leader in AI development. A primary concern voiced by the AI labs was SB 1047’s imposition of liability on developers for certain potential harms and damages caused by AI models. Attempting to allay those concerns, TFAIA focuses on transparency requirements rather than the imposition of liability.
The primary operative requirements of TFAIA are mandatory public disclosures that must be made when a company deploys an AI model trained with more than a certain amount of computing power – specifically, greater than 10^26 floating-point operations, or “FLOPs.” The AI models currently fielded by the leading frontier AI labs either already satisfy, or will soon exceed, this threshold. The companies that deploy these powerful AI models will be required to publicly disclose a variety of safety and security information, including how they determine when a model may pose “catastrophic” levels of risk (defined as causing death or injury to 50 or more people, or over $1 billion in economic damages), their mitigation plans for such risks, and their cybersecurity practices for securing model weights (the theft of which could enable third parties to replicate powerful models while disabling their safety controls). These disclosures must be updated at least once annually.
TFAIA also establishes whistleblower protections for employees of frontier AI labs who believe the lab is developing models that may pose catastrophic risk. AI companies may not adopt policies that prevent employees from disclosing such information, nor may they retaliate against whistleblowers who go public. TFAIA instead requires the AI labs to create internal processes through which employees may raise such concerns anonymously, to investigate any concerns so raised, and, where appropriate, to implement mitigation measures.
TFAIA does not create a private right of action and instead will be enforced exclusively by the California Attorney General. AI labs that fail to comply with TFAIA will be subject to penalties scaled to the severity of the violation, not to exceed $1 million per violation.
With TFAIA, the California legislature and Governor Newsom have clearly attempted to strike a compromise between placating AI safety advocates (who pushed for the more stringent requirements of the vetoed SB 1047) and avoiding any major inhibition of the massive economic engine that the AI industry has become. It remains to be seen whether TFAIA will succeed in achieving this balance. The AI industry will also be watching closely to see whether TFAIA becomes a blueprint for legislation in other states.
Ben Mishkin is a Member in Cozen O’Connor’s Technology, Privacy, & Data Security practice group.