California AI Regulation Bill Advances to Assembly Vote with Key Amendments

California’s "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (Senate Bill 1047), spearheaded by Senator Scott Wiener (D-San Francisco), has cleared the Assembly Appropriations Committee with some significant amendments. The bill, aimed at establishing rigorous safety standards for large-scale artificial intelligence (AI) systems, is set for a vote on the Assembly floor on Aug. 20 and must pass by Aug. 31 to move forward.

SB 1047 was crafted to regulate the development of advanced AI models by setting clear, actionable safety requirements and regulatory oversight measures. It targets AI models that are especially powerful and expensive to develop, aiming to balance innovation with public safety.

The bill sets standards for AI models with significant computational power: specifically, models trained using more than 10^26 floating-point operations (FLOPs) and costing more than $100 million to train. These models are referred to as "frontier" AI systems.
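For a sense of scale, total training compute is often approximated as 6 × parameters × training tokens; this rule of thumb is an assumption for illustration, not a formula from the bill, and the model size and token count below are hypothetical:

```python
# Rough training-compute estimate using the common 6*N*D approximation
# (illustrative assumption; SB 1047 itself specifies only the 10^26 threshold).
def training_flops(params: float, tokens: float) -> float:
    """Approximate total floating-point operations needed to train a model."""
    return 6 * params * tokens

THRESHOLD = 1e26  # the bill's covered-model compute threshold

# Hypothetical model: 175 billion parameters trained on 10 trillion tokens
flops = training_flops(175e9, 10e12)
print(f"{flops:.2e} FLOPs, covered: {flops > THRESHOLD}")
```

Under this back-of-the-envelope math, even a very large present-day training run lands around 10^25 FLOPs, an order of magnitude under the bill's threshold, which is consistent with critics' observation that the rule targets future frontier systems.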

Among other provisions, the bill establishes risk assessment, safety, security, and testing requirements the developer of a covered AI model must fulfill before training the covered model, using the covered model, or making the covered model available for public or commercial use.

It requires, beginning Jan. 1, 2028, the developer of a covered model to annually retain a third-party auditor to perform an independent audit of compliance with the requirements of the bill.

The bill has undergone substantial revisions based on industry feedback, perhaps most notably from Anthropic, an AI research company known for its focus on safety, alignment, and ethical considerations in developing advanced AI systems. The aim of the amendments is to balance innovation and safety, Wiener said in a statement.

"We can advance both innovation and safety; the two are not mutually exclusive," Wiener said. "While the amendments do not reflect 100% of the changes requested by all stakeholders, we've addressed core concerns from industry leaders and made adjustments to accommodate diverse needs, including those of the open source community."

Major Amendments to SB 1047

  • Criminal Penalties for Perjury Removed: The bill now imposes only civil penalties for false statements to authorities, addressing concerns about the potential misuse of criminal penalties.
  • Elimination of the Frontier Model Division (FMD): The proposed new regulatory body has been removed. Enforcement will continue through the Attorney General's office, with some FMD functions transferred to the Government Operations Agency.
  • Adjusted Legal Standards: The standard for developer compliance has shifted from "reasonable assurance" to "reasonable care," a well-established common law standard, including elements like adherence to NIST safety standards.
  • New Threshold for Fine-Tuned Models: Models fine-tuned at a cost of less than $10 million are exempt from the bill's requirements, focusing regulatory burden on larger-scale projects.
  • Narrowed Pre-Harm Enforcement: The Attorney General's authority to seek civil penalties is now restricted to situations where actual harm has occurred or imminent threats to public safety exist.

Support and Criticism

SB 1047 has garnered support from prominent AI researchers, including Geoffrey Hinton and Yoshua Bengio, who emphasize the importance of balancing innovation with safety. Hinton praised the bill for its sensible approach, highlighting the need for legislation that addresses the risks of powerful AI systems.

However, the bill has also faced criticism, particularly from startup founders and industry leaders. Critics argue that the bill's thresholds and liability provisions could stifle innovation and disproportionately burden smaller developers. Anjney Midha, General Partner at Silicon Valley-based VC firm Andreessen Horowitz, criticized the bill's focus on model-level regulations rather than specific misuse or malicious applications. He warned that stringent requirements could drive innovation overseas and hinder the open source community.

"It's hard to [overstate] just how blindsided startups, founders, and the investor community feel about this bill," Midha said during an interview posted on his company's website. "When it comes to policy-making, especially in technology at the frontier, our legislators should be sitting down and soliciting the opinions of their constituents — which in this case, includes startup founders."

"If this passes in California, it will set a precedent for other states and have rippling consequences inside and outside of the USA — essentially a huge butterfly effect in regard to the state of innovation," he added.

In an open letter on their website ("A statement in opposition to California SB 1047"), members of the AI Alliance, which describes itself as "a community of technology creators, developers, and adopters collaborating to advance safe, responsible AI rooted in open innovation," voiced their concerns about SB 1047.

"While SB 1047 is not targeting open source development specifically, it will affect the open-source community dramatically. The bill requires developers of AI models of 10^26 FLOPs or similar performance (as determined by undefined benchmarks) to implement a full shutdown control that would halt operation of the model and all derivative models. Once a model is open sourced and subsequently downloaded by a third party, by design developers no longer have control over a model. Before such a 'shutdown switch' provision is passed, we need to understand how it can be done in the context of open source; the bill does not answer that question. No models at 10^26 FLOPs are openly available today, but technology is rapidly advancing, and the open ecosystem could evolve alongside it. However, this legislation seems intended to freeze open source AI development at the 2024 level."

Legislative Context

The bill's advancements come amid a backdrop of federal inaction on AI regulation. With the US Congress largely stagnant on technology legislation, California's initiative seeks to preemptively address the risks posed by rapidly advancing AI technologies while fostering a supportive environment for innovation.

Governor Gavin Newsom's administration has also been proactive on AI. The Governor issued an Executive Order last September to prepare for AI's impacts, and his office released a report on AI's potential benefits and harms.

SB 1047 represents a significant step in California's regulatory approach to AI, with its outcome poised to influence both national and global AI policy. The Assembly's vote on Aug. 20 will be a critical juncture in shaping the future of AI regulation in the state.