California AI Watermarking Bill Garners OpenAI Support

ChatGPT creator OpenAI is backing a California bill that would require tech companies to label AI-generated content in the form of a digital "watermark." Microsoft, Adobe, and other tech companies have also expressed their support.

The proposed legislation, known as the "California Digital Content Provenance Standards" (AB 3211), aims to ensure transparency in digital media by identifying content created through artificial intelligence. This requirement would apply to a broad range of AI-generated material, from harmless memes to deepfakes that could be used to spread misinformation about political candidates.

Watermarking is a technique used to embed additional information into images, audio, video, and documents, often invisibly, to establish their provenance and authenticity.

In a letter sent to California State Assembly member Buffy Wicks, who authored the bill, OpenAI Chief Strategy Officer Jason Kwon emphasized the importance of transparency in AI content, especially during election years. "New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content," Kwon wrote. (The letter was reviewed by Reuters.)

The bill has been overshadowed by another piece of California legislation, SB 1047, which would require AI developers to conduct safety testing on some of their own models. That measure has faced a backlash from the tech industry, including Microsoft-backed OpenAI.

California state lawmakers introduced 65 bills addressing artificial intelligence during this legislative session, according to the state's legislative database. These proposed measures include ensuring algorithmic decisions are unbiased and protecting the intellectual property of deceased individuals from AI exploitation. However, many of these bills have already stalled.

With elections taking place in countries representing a third of the world's population this year, experts are increasingly concerned about the impact of AI-generated content, which has already played a significant role in some elections, including in Indonesia.

AB 3211 passed the state Assembly with a unanimous 62-0 vote and recently cleared the Senate Appropriations Committee, setting it up for a full Senate vote. If approved by Aug. 31, the bill will move to Governor Gavin Newsom, who must sign or veto it by Sept. 30.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
