California Passes AI Safety Legislation, Awaits Governor's Signature

California lawmakers have overwhelmingly approved a bill that would impose new restrictions on AI technologies, potentially setting a national precedent for regulating the rapidly evolving field. The legislation, known as S.B. 1047, now heads to Governor Gavin Newsom's desk. He has until the end of September to decide whether to sign it into law.

The bill, which passed the California State Assembly by a 45–15 vote following a 32–1 vote in the state Senate in May, awaits only a final procedural vote in the Senate before reaching the governor. If enacted, it would require large AI companies to test their systems for safety before releasing them to the public. The bill would also grant the state's attorney general the authority to sue companies for damages if their technologies cause significant harm, including death or property damage.

The passage of S.B. 1047 has reignited a contentious debate about how best to regulate artificial intelligence. So far, that debate has centered on generative AI systems, which have raised concerns about misuse in areas such as disinformation campaigns and even the creation of biological weapons.

Senator Scott Wiener, a Democrat and co-author of the bill, celebrated the Assembly's vote as a proactive step in ensuring that AI development aligns with the public interest. "With this vote, the Assembly has taken the truly historic step of working proactively to ensure an exciting new technology protects the public interest as it advances," Wiener said in a statement.

Governor Newsom, who has faced intense lobbying from both sides of the issue, has not yet publicly indicated his stance on the legislation. The tech industry, including companies such as Google, Meta, and OpenAI, has mounted a significant campaign urging the governor to veto the bill, arguing that it could stifle innovation. They contend that the regulation of AI technologies should be handled at the federal level to avoid a patchwork of state laws that could slow the pace of progress.

Opponents also argue that S.B. 1047 targets developers rather than the bad actors who would deploy AI tools for nefarious purposes. Nancy Pelosi, former Speaker of the House, along with other congressional representatives, expressed concern that the bill's requirements are premature and could harm AI development. Pelosi referred to the legislation as "well-intentioned but ill-informed."

On the other side, prominent figures such as Elon Musk, CEO of Tesla and founder of xAI, have voiced support for the legislation. Musk, who has long advocated for AI regulation, called the bill a "tough call," but ultimately expressed approval due to the potential risks AI poses to society.

One of the most significant aspects of S.B. 1047 is its requirement that companies test their AI models for safety. The bill applies to AI models that cost more than $100 million to train, a threshold that few current models meet but one that experts say could become more common as the technology evolves.

To address concerns from the tech industry, Senator Wiener made several amendments to the bill earlier this month. These included removing a proposed new agency dedicated to AI safety and narrowing the bill's liability provisions so that companies could be penalized only for actual harm, not potential harm.

Wiener emphasized that his approach is a "light touch," intended to balance the need for innovation with safety. "Innovation and safety can go hand in hand — and California is leading the way," he said in a statement.

If signed into law, S.B. 1047 could position California as a leader in AI regulation, similar to its role in setting national standards for environmental regulations and consumer privacy. It could also serve as a model for other states and even federal lawmakers, who have yet to pass any comprehensive AI regulations.

The European Union has already enacted the AI Act, which imposes strict regulations on AI, but no comparable law exists in the U.S. In an open letter to Governor Newsom, a coalition of AI experts, including AI "godfathers" Geoffrey Hinton and Yoshua Bengio, warned of AI's catastrophic potential without appropriate safety measures and emphasized that the decisions made now could have far-reaching consequences. Hinton wrote:

"Forty years ago, when I was training the first version of the AI algorithms behind tools like ChatGPT, no one — including myself — would have predicted how far AI would progress. Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously ... I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it's critical that we have legislation with real teeth to address the risks. California is a natural place for that to start, as it is the place this technology has taken off."

Professor Bengio published an op-ed in Fortune in support of the bill.

The governor's decision on S.B. 1047 will come amid a broader conversation about AI's role in society and how best to harness its potential while mitigating its risks. California is home to more AI companies than any other state, including 35 of the world's top 50 AI firms, and its regulations could influence the global AI landscape for years to come.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].