Tech Giants Agree to Put Limits on Gen AI Systems

Sixteen generative AI leaders — including OpenAI, Microsoft, Google, and Anthropic — have agreed to pull the plug on their own AI technologies if they're deemed too dangerous.

The companies are signatories of the "Frontier AI Safety Commitments" document unveiled last week at the AI Seoul Summit. The document, which lays out guidelines for limiting AI misuse, was dubbed a "world first" by the U.K. government, which co-hosted the summit alongside the Republic of Korea.

The full list of signatories is:

  • Amazon 
  • Anthropic 
  • Cohere 
  • Google/Google DeepMind 
  • G42 
  • IBM 
  • Inflection AI 
  • Meta 
  • Microsoft 
  • Mistral AI 
  • Naver 
  • OpenAI 
  • Samsung Electronics 
  • Technology Innovation Institute 
  • xAI 
  • Zhipu.ai

The first of the document's three goals asks organizations to "effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems."

Many of the signatories already have internal requirements meant to ensure the safety of their AI technologies. OpenAI, for example, unveiled an AI "preparedness framework" last year, though it's still in beta. It also recently formed a new AI Safety and Security Committee, albeit after disbanding its previous AI safety team.

Microsoft, meanwhile, abides by its Responsible AI Standard developed in 2016. Meta and others are also independently exploring ways to "watermark" content created by their AI systems to limit misinformation, especially in light of this year's elections.

Critically, however, a tenet of this first commitment is that organizations must agree to halt development of AI systems whose risks cannot be adequately mitigated.

Specifically, they must define "thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable," and "commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds."

The companies are tasked with defining these thresholds over the coming months, with the goal of publishing a formal safety framework in time for the AI Action Summit, scheduled for February 2025 in France.

The two other goals outlined in the document are:

  • Organisations are accountable for safely developing and deploying their frontier AI models and systems.
  • Organisations' approaches to frontier AI safety are appropriately transparent to external actors, including governments.

The document also lists several AI safety best practices that the signatories pledge to apply, if they haven't already. These include red-teaming, watermarking, incentivizing third-party testing, creating safeguards against insider threats, and more.

Said U.K. Prime Minister Rishi Sunak, "These commitments ensure the world's leading AI companies will provide transparency and accountability on their plans to develop safe AI." The pledges laid out in the document do not carry legal weight, however; they're described as "voluntary commitments."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
