California Governor Vetoes AI Regulation Bill, Calls for More Targeted Approach

In a decision that has reignited debates over artificial intelligence (AI) regulation, California Governor Gavin Newsom on Sunday vetoed Senate Bill 1047, legislation aimed at safeguarding against misuse of the technology. The bill, which had passed both houses of the state legislature with overwhelming support, was intended to be among the first of its kind in the U.S., setting mandatory safety protocols for AI developers. Newsom's veto has drawn sharp reactions from stakeholders across the tech industry, academia, and political circles.

In his veto message, Newsom expressed concerns over the bill's overly broad approach.

"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Newsom wrote. "Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."

Newsom called for a more targeted approach to AI regulation, one that also supports the technology's potential benefits.

In defending his decision, Newsom also highlighted his ongoing collaboration with AI experts, such as Stanford professor Fei-Fei Li, who is often referred to as the "godmother of AI," to create more science-based, empirical guidelines for regulating AI systems. Newsom stressed the need for a deeper understanding of "frontier models" — the most advanced AI systems — and their potential risks before enacting sweeping legislation.

The Debate Over AI Regulation

The veto has brought a range of reactions, underscoring the divisive nature of AI regulation. On one side, companies like Google and OpenAI welcomed Newsom's decision. In a statement, Google praised the governor for ensuring that California remains at the forefront of developing "responsible AI tools," adding that the tech giant looks forward to working with Newsom's administration and the federal government to create appropriate safeguards. OpenAI applauded Newsom's recognition of California's leadership role in AI innovation and his efforts to engage state lawmakers on issues such as deepfakes, child safety, and AI literacy.

SB 1047 also faced sharp criticism from some organizations over its potential impact on the open source community. The Mozilla Foundation, the nonprofit behind the Firefox browser, had previously called on Newsom to veto the bill.

"We see parallels between the early internet and today's AI ecosystem, which is becoming increasingly closed and controlled by a few large tech companies," the foundation wrote in an earlier blog post. "We are concerned that SB 1047 would accelerate this trend, harming the open source community and making AI less safe, not more."

The veto has disappointed legislators and activists who saw the bill as a necessary first step toward reining in unchecked AI development. State Senator Scott Wiener, who authored the bill, described the veto as a "missed opportunity" for California to lead on tech regulation, as it had done with data privacy and net neutrality. Wiener stressed that without stringent safeguards, the public remains vulnerable to the potential harms posed by rapidly advancing AI systems.

"This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet," Weiner wrote in a statement. "The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing. While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public. This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way."

Similarly, the nonprofit organization Accountable Tech condemned the veto, calling it a "massive giveaway to Big Tech companies" that would allow them to continue deploying AI technologies without adequate oversight. The group warned that AI tools, already posing risks to democracy and civil rights, could cause further harm without proper regulation.

"We're deeply disappointed in Governor Newsom's decision today, which will, once again, put the interests of billionaire tech executives above the well-being of Californians," the group wrote in a statement. "Tech companies have proven time and time again that they can't be trusted to regulate themselves — and yet when given the opportunity to sign common sense, bipartisan AI guardrails into law, Governor Newsom caved to industry pressure."

The Political Landscape

The veto also highlights the broader political and regulatory challenges surrounding AI governance. With federal efforts to regulate AI stalled in Congress, individual states like California have increasingly taken the lead on the issue. SB 1047 would have placed California at the forefront of AI regulation, potentially setting a standard for other states to follow. Some lawmakers, however, including U.S. Representative Ro Khanna and former House Speaker Nancy Pelosi, who represent Silicon Valley, had voiced opposition to the bill, citing concerns about its impact on innovation.

In her response to the veto, Pelosi thanked Newsom for recognizing the need to balance innovation with responsible regulation. She echoed the governor's call for AI policies that enable small entrepreneurs and academic institutions, rather than large tech companies, to thrive.

The Big Picture

The controversy surrounding SB 1047 underscores the complexities of regulating a technology that is evolving at a breakneck pace. Although some advocate for urgent legislative action to mitigate the risks of AI, others caution that overly rigid regulations could stifle innovation and harm smaller developers. In the meantime, as the federal government remains slow to act, states like California are grappling with how best to navigate the opportunities and dangers posed by AI.

As the debate continues, Newsom's veto serves as a reminder that balancing innovation with public safety is no easy task. The question of how to regulate AI effectively, without hindering its potential benefits, will likely remain a hot-button issue for the foreseeable future.

What's Next?

Despite the veto, Newsom has signaled that AI regulation remains a priority for his administration. In his statement, he reiterated his commitment to working with researchers and lawmakers to develop "responsible guardrails" for AI, particularly for generative AI technologies that have recently exploded in popularity. Newsom plans to continue these efforts during the Legislature's next session, focusing on creating regulatory frameworks that are informed by empirical, science-based analyses.

In the absence of SB 1047, California's approach to AI regulation will likely be shaped by ongoing collaboration between state leaders, academic experts, and industry stakeholders. This iterative process may result in a more refined version of the bill, or a new regulatory framework altogether, aimed at addressing the concerns raised by both proponents and opponents of the current legislation.
