California Revives Push for AI Safety Reporting with New SB 53 Amendments
California is once again at the forefront of AI regulation as State Senator Scott Wiener introduces new amendments to his AI transparency bill, SB 53. The move comes just months after his previous effort, SB 1047, was vetoed, and the latest proposal aims to strike a more balanced approach to holding the world's top AI companies accountable.

What’s New in SB 53?

SB 53 would require leading AI developers—such as OpenAI, Google, Anthropic, and xAI—to publicly share their safety and security protocols, and to issue reports whenever significant safety incidents occur. If passed, California would become the first state to mandate this level of transparency for AI companies, setting a potential precedent for national and global standards.

  • Transparency Requirements: Companies must regularly publish safety and security documentation.
  • Incident Reporting: Any safety event with substantial societal impact, such as incidents causing mass harm or significant financial damage, must be reported.
  • Whistleblower Protections: Employees who raise concerns about critical risks (e.g., technology contributing to mass injury or major loss) are shielded from retaliation.
  • Support for Startups and Researchers: The proposed CalCompute, a public cloud computing cluster, would offer resources for those developing large-scale AI outside of major tech firms.

The Road to Legislation

This new push follows the recommendations from California’s AI policy group, which emphasized the need for “requirements on industry to publish information about their systems” in order to promote a robust and transparent environment. Senator Wiener’s office confirmed that these expert recommendations heavily influenced the amendments now introduced in SB 53.

Senator Wiener stated, “The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be.” The hope is to encourage transparency without hampering the state’s thriving AI sector.

Industry and Legislative Response

While some AI companies, like Anthropic, have expressed support for increased transparency, other major players such as OpenAI, Google, and Meta have historically resisted such mandates. Notably, leading firms have been inconsistent in publishing safety reports for their advanced models, with Google and OpenAI both delaying or omitting reports for their latest releases.

The bill now moves to the California State Assembly Committee on Privacy and Consumer Protection. If it passes there, it must still navigate several other legislative hurdles before reaching Governor Newsom’s desk for approval.

Federal and State Dynamics

California’s renewed effort comes as New York considers a similar law (the RAISE Act), and after a failed federal proposal that would have imposed a moratorium on new state AI regulations for 10 years. The Senate’s rejection of this moratorium keeps the door open for states to lead on AI policy.

“Ensuring AI is developed safely should not be controversial — it should be foundational,” said Geoff Ralston, former president of Y Combinator. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”

What’s Next?

SB 53 represents a moderated approach compared to previous AI safety bills, but could still require companies to disclose more than they currently do. As the legislative process unfolds, all eyes will be on how the world’s largest AI companies respond—and whether California will once again set the standard for AI governance in the U.S.
