Gavin Newsom Says He's Signing A Law To Install 'Common-Sense Guardrails' For AI Safety: What This Means For Google, Meta And Nvidia

Benzinga
2025.09.30 02:30

On Monday, Governor Gavin Newsom (D-Calif.) signed a landmark law requiring artificial intelligence giants such as OpenAI, Alphabet Inc.'s (NASDAQ: GOOG) (NASDAQ: GOOGL) Google, Meta Platforms, Inc. (NASDAQ: META) and Nvidia Corporation (NASDAQ: NVDA) to disclose how they plan to prevent their most advanced models from posing catastrophic risks.

California Takes Lead On AI Regulation

Newsom described the new law, SB 53, as a critical step in ensuring that AI innovation thrives while protecting public safety.

"California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," he said in a press release.

"AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation," the statement read.

Newsom's office called the law a potential model for the rest of the U.S. If Congress enacts national standards, California lawmakers are expected to align state rules while maintaining the "high bar established by SB 53," Reuters reported.

What The Law Requires

SB 53 applies to AI companies with annual revenues exceeding $500 million.

These firms must conduct public risk assessments detailing how their technology could spiral out of human control or be misused to create bioweapons.

Violations carry penalties of up to $1 million.

The law comes after Newsom vetoed an earlier bill that sought annual third-party audits of companies investing more than $100 million in AI models.

That proposal faced heavy industry pushback over the potential compliance burden.

Industry Pushes Back On Patchwork Rules

Jack Clark, co-founder of Anthropic, welcomed the move, saying, "Anthropic is proud to have supported this bill."

Sen. Scott Wiener (D-Calif.) supported the bill and took to X, formerly Twitter, to say, "It's an exciting step for responsible scaling of AI innovation."

However, Collin McCune, head of government affairs at Andreessen Horowitz, warned that SB 53 risks creating "a patchwork of 50 compliance regimes that startups don’t have the resources to navigate."

Global Context: AI Rules Take Shape Worldwide

California's law follows similar efforts abroad. The EU's AI Act imposes strict requirements on high-risk systems, from risk assessments to bias controls.

Meanwhile, China has called for a global body to coordinate AI governance, highlighting the fragmented state of international rules.