Anthropic Announces Cautious Support for New California AI Regulation Legislation

Anthropic has announced its support for an amended version of California's Senate Bill 1047 (SB 1047), the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," citing revisions to the bill that the company helped shape, though it retains some reservations.

"In our assessment the new SB 1047 is substantially improved to the point where we believe its benefits likely outweigh its costs," Anthropic CEO Dario Amodei said in a letter to California Governor Gavin Newsom on Aug. 21. "However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us."

California's proposed bill on AI regulation, SB 1047, advanced by State Senator Scott Wiener, a Democrat, mandates safety testing for many of the most advanced AI models that cost more than $100 million to develop or those that require a defined amount of computing power. If the bill passes, developers of AI software operating in the state will need to outline methods for turning off the AI models if they go awry, effectively implementing a kill switch. The bill would also give the state attorney general the power to sue if developers are not compliant.

Senator Wiener recently revised the bill to appease tech companies, relying in part on input from Anthropic, a San Francisco-based AI safety and research company backed by Amazon and Alphabet. The revised bill did away with a provision for a government AI oversight committee. (See "California AI Regulation Bill Advances to Assembly Vote with Key Amendments.")

In his letter, Amodei listed what he sees as the pros and cons of SB 1047. His list of pros included:

"Developing SSPs and being honest with the public about them." The bill mandates the adoption of safety and security protocols (SSPs) similar to those used by top AI developers like Anthropic, Google, and OpenAI. Some companies haven't adopted these measures or have been vague about them, and there are no safeguards against misleading claims. "It is a major improvement, with very little downside, that SB 1047 requires companies to adopt some SSP (whose details are up to them) and to be honest with the public about their SSP-related practices and findings."

"Deterrence of downstream harms through clarifying the standard of care." AI systems are more adaptable than most technologies, and SSP-like measures by companies like Anthropic can reduce misuse risks. SB 1047 ties companies' liability to their SSPs, incentivizing the creation of effective protocols to prevent catastrophic risks. "As a company developing foundational models that also invests heavily in safety, Anthropic thinks it is important to systematize and incentivize this attitude across the industry."

"Pushing forward the science of AI risk reduction." AI safety is an emerging field, with best practices still being developed. While early, strict legislation may be premature, it's crucial to push AI companies to invest in safety science. By requiring Safety and Security Protocols and tying them to liability, the bill encourages companies to address foreseeable risks and develop mitigation strategies before their models become societal risks.

His list of concerns included:

"Some concerning aspects of pre-harm enforcement are preserved in auditing and GovOps." One of Anthropic's original concerns about the bill was the Frontier Model Division's (FMD) prescriptive guidance, reinforced by pre-harm enforcement. The company found it too inflexible for AI's early development stage. The amended SB 1047 eliminates the FMD and narrows pre-harm enforcement, though some powers have shifted to GovOps, which can now set binding requirements for private auditors. The relationship between these entities is complex, with GovOps providing non-binding guidance but influencing mandatory audit conditions.

"It is our best understanding that this interplay will not end up causing unnecessary pre-harm enforcement, but the language has enough ambiguity to raise concerns," Amodei wrote. "If implemented well, this could lead to well-defined standards for auditors and a well-functioning audit ecosystem, but if implemented poorly this could cause the audits to not focus on the core safety aspects of the bill."

"The bill's treatment of injunctive relief." Pre-harm enforcement also persists in the Attorney General's broad authority to enforce the entire bill through injunctive relief, including before any harm has occurred. This is substantially narrower than the bill's earlier pre-harm enforcement provisions, but it remains a vector for overreach.

"Miscellaneous other issues." The company also noted remaining concerns that the amendments did not address, including know-your-customer requirements on cloud providers, overly short notice periods for incident reporting, and overly expansive whistleblower protections that are subject to abuse.

"The burdens created by these provisions are likely to be manageable, if the executive branch takes a judicious approach to implementation," Amodei wrote. "If SB 1047 were signed into law, we would urge the government to avoid overreach in these areas in particular, to maintain a laser focus on catastrophic risks, and to resist the temptation to commandeer SB 1047's provisions to accomplish unrelated goals."

Opponents of the bill, which include OpenAI, Meta, Y Combinator, and venture capital firm Andreessen Horowitz, argue that the bill's thresholds and liability provisions could stifle innovation and unfairly burden smaller developers. They criticize the bill for focusing on model-level regulations rather than specific misuse, and warn that strict requirements could drive innovation overseas and harm the open source community.

Anjney Midha, General Partner at Andreessen Horowitz, has expressed concerns that startups, founders, and investors will feel blindsided by the bill and emphasized the need for lawmakers to consult with the tech community.

In an open letter, the AI Alliance, a group focused on safe AI and open innovation, voiced its concerns. The group noted that, although SB 1047 doesn't directly target open-source development, it would significantly impact it. The bill requires developers of AI models trained with 10^26 or more floating-point operations (FLOPs) to implement a shutdown control, but it doesn't address how this would work for open source models. Although no such models exist yet, the bill could freeze open source AI development at its 2024 level.

Several California representatives, including Ro Khanna, Anna Eshoo, and Zoe Lofgren, have opposed the bill, citing concerns about its impact on the state's economy and innovation ecosystem.
