Tech Giants Agree to Put Limits on Gen AI Systems

Sixteen generative AI leaders — including OpenAI, Microsoft, Google, and Anthropic — have agreed to pull the plug on their own AI technologies if they're deemed too dangerous.

The companies are signatories of the "Frontier AI Safety Commitments" document unveiled last week at the AI Seoul Summit. The document, which lays out guidelines for limiting AI misuse, was dubbed a "world first" by the U.K. government, which co-hosted the summit alongside the Republic of Korea.

The full list of signatories is:

  • Amazon 
  • Anthropic 
  • Cohere 
  • Google/Google DeepMind 
  • G42 
  • IBM 
  • Inflection AI 
  • Meta 
  • Microsoft 
  • Mistral AI 
  • Naver 
  • OpenAI 
  • Samsung Electronics 
  • Technology Innovation Institute 
  • xAI 
  • Zhipu.ai

Under the document's first and broadest goal, organizations are asked to "effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems."

Many of the signatories already have internal requirements meant to ensure the safety of their AI technologies. OpenAI, for example, unveiled an AI "preparedness framework" last year, though it's still in beta. It also recently formed a new Safety and Security Committee, albeit shortly after disbanding its previous internal AI safety team.

Microsoft, meanwhile, abides by its Responsible AI Standard developed in 2016. Meta and others are also independently exploring ways to "watermark" content created by their AI systems to limit misinformation, especially in light of this year's elections.

Critically, however, a key tenet of this first commitment is that organizations must agree to halt development of AI systems whose risks cannot be adequately mitigated.

Specifically, they must define "thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable," and "commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds."

The companies are tasked with defining these risk thresholds over the coming months, with the goal of publishing a formal safety framework in time for the AI Action Summit, set for February 2025 in France.

The document's two other goals are:

  • Organisations are accountable for safely developing and deploying their frontier AI models and systems.
  • Organisations' approaches to frontier AI safety are appropriately transparent to external actors, including governments.

The document also lists several AI safety best practices that the signatories pledge to apply, if they haven't already. These include red-teaming, watermarking, incentivizing third-party testing, creating safeguards against insider threats, and more.

Said U.K. Prime Minister Rishi Sunak, "These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI." The pledges laid out in the document do not carry legal weight, however; they're described as "voluntary commitments."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
