California AI Regulation Bill Advances to Assembly Vote with Key Amendments

California’s "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (Senate Bill 1047), spearheaded by Senator Scott Wiener (D-San Francisco), has cleared the Assembly Appropriations Committee with some significant amendments. The bill, aimed at establishing rigorous safety standards for large-scale artificial intelligence (AI) systems, is set for a vote on the Assembly floor on Aug. 20 and must pass by Aug. 31 to move forward.

SB 1047 was crafted to regulate the development of advanced AI models by setting clear, actionable safety requirements and the regulatory measures to enforce them. It targets AI models that are especially powerful and expensive to develop, with the goal of balancing innovation with public safety.

The bill sets standards for AI models with significant computational power — specifically, models trained using more than 10²⁶ floating-point operations (FLOP) and costing more than $100 million to train. These models are referred to as "frontier" AI systems.
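For a rough sense of what crosses that line, researchers often estimate total training compute with the rule of thumb FLOP ≈ 6 × parameters × training tokens. That heuristic comes from the scaling-law literature, not from the bill itself, so the sketch below is purely illustrative, and the model size and token count are hypothetical.

```python
# Back-of-the-envelope check against SB 1047's compute threshold.
# Assumes the common scaling-law heuristic: training FLOP ~ 6 * N * D,
# where N is parameter count and D is training tokens. The heuristic is
# an illustrative assumption; the bill defines the threshold only as
# 10^26 operations.

SB1047_FLOP_THRESHOLD = 1e26  # 10^26 operations, per the bill's text


def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Estimate total training compute via the 6 * N * D approximation."""
    return 6.0 * parameters * tokens


if __name__ == "__main__":
    # Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
    flop = estimated_training_flop(1e12, 20e12)  # roughly 1.2e26 FLOP
    print(f"Estimated training compute: {flop:.2e} FLOP")
    print(f"Exceeds SB 1047 threshold? {flop > SB1047_FLOP_THRESHOLD}")
```

By this estimate, such a run would land just above the 10²⁶ mark, consistent with a threshold designed to reach only the very largest training efforts.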

Among other provisions, the bill establishes risk assessment, safety, security, and testing requirements that the developer of a covered AI model must fulfill before training the model, using it, or making it available for public or commercial use.

Beginning Jan. 1, 2028, it also requires the developer of a covered model to retain a third-party auditor annually to perform an independent audit of compliance with the bill's requirements.

The bill has undergone substantial revisions based on industry feedback, perhaps most notably from Anthropic, a leading AI research company known for developing advanced AI systems with a focus on safety, alignment, and ethical considerations. The aim of the amendments is to balance innovation and safety, Wiener said in a statement.

"We can advance both innovation and safety; the two are not mutually exclusive," Wiener said. "While the amendments do not reflect 100% of the changes requested by all stakeholders, we've addressed core concerns from industry leaders and made adjustments to accommodate diverse needs, including those of the open source community."

Major Amendments to SB 1047

  • Criminal Penalties for Perjury Removed: The bill now imposes only civil penalties for false statements to authorities, addressing concerns about the potential misuse of criminal penalties.
  • Elimination of the Frontier Model Division (FMD): The proposed new regulatory body has been removed. Enforcement will continue through the Attorney General's office, with some FMD functions transferred to the Government Operations Agency.
  • Adjusted Legal Standards: The standard for developer compliance has shifted from "reasonable assurance" to "reasonable care," a well-established common law standard; compliance can be demonstrated through measures such as adherence to NIST safety standards.
  • New Threshold for Fine-Tuned Models: Models fine-tuned at a cost of less than $10 million are exempt from the bill's requirements, focusing regulatory burden on larger-scale projects.
  • Narrowed Pre-Harm Enforcement: The Attorney General's authority to seek civil penalties is now restricted to situations where actual harm has occurred or imminent threats to public safety exist.

Support and Criticism

SB 1047 has garnered support from prominent AI researchers, including Geoffrey Hinton and Yoshua Bengio, who emphasize the importance of balancing innovation with safety. Hinton praised the bill for its sensible approach, highlighting the need for legislation that addresses the risks of powerful AI systems.

However, the bill has also faced criticism, particularly from startup founders and industry leaders. Critics argue that the bill's thresholds and liability provisions could stifle innovation and disproportionately burden smaller developers. Anjney Midha, General Partner at Silicon Valley-based VC firm Andreessen Horowitz, criticized the bill's focus on model-level regulations rather than specific misuse or malicious applications. He warned that stringent requirements could drive innovation overseas and hinder the open source community.

"It's hard to [overstate] just how blindsided startups, founders, and the investor community feel about this bill," Midha said during an interview posted on his company's website. "When it comes to policy-making, especially in technology at the frontier, our legislators should be sitting down and soliciting the opinions of their constituents — which in this case, includes startup founders."

"If this passes in California, it will set a precedent for other states and have rippling consequences inside and outside of the USA — essentially a huge butterfly effect in regard to the state of innovation," he added.

In an open letter on their website ("A statement in opposition to California SB 1047"), members of the AI Alliance, which describes itself as "a community of technology creators, developers, and adopters collaborating to advance safe, responsible AI rooted in open innovation," voiced their concerns about SB 1047.

"While SB 1047 is not targeting open source development specifically, it will affect the open-source community dramatically. The bill requires developers of AI models of 1026 FLOPS or similar performance (as determined by undefined benchmarks) to implement a full shutdown control that would halt operation of the model and all derivative models. Once a model is open sourced and subsequently downloaded by a third party, by design developers no longer have control over a model. Before such a "shutdown switch" provision is passed, we need to understand how it can be done in the context of open source; the bill does not answer that question. No models at 1026 FLOPS are openly available today, but technology is rapidly advancing, and the open ecosystem could evolve alongside it. However, this legislation seems intended to freeze open source AI development at the 2024 level."

Legislative Context

The bill's advancements come amid a backdrop of federal inaction on AI regulation. With the US Congress largely stagnant on technology legislation, California's initiative seeks to preemptively address the risks posed by rapidly advancing AI technologies while fostering a supportive environment for innovation.

Governor Gavin Newsom's administration has also been proactive on AI. The Governor issued an Executive Order last September to prepare for AI's impacts, and his office released a report on AI's potential benefits and harms.

SB 1047 represents a significant step in California's regulatory approach to AI, with its outcome poised to influence both national and global AI policy. The Assembly's vote on Aug. 20 will be a critical juncture in shaping the future of AI regulation in the state.
