U.S. and U.K. to Collaborate on AI Safety Testing Frameworks

The United States and United Kingdom governments have announced a joint effort to establish AI safety testing standards and protocols.

The two countries have signed a Memorandum of Understanding, the U.S. Department of Commerce announced on Monday. The memorandum, signed by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, formalizes a nascent effort by the two countries to "work together to develop tests for the most advanced AI models."

The news comes one week after Anthropic, a leading generative AI firm and maker of the Claude large language model family, published a lengthy blog post advocating for an industrywide effort to create a standardized process for testing the safety of AI systems. In that post, Anthropic stressed the importance of creating a robust AI testing paradigm, one that's verified and administered by reputable third parties, to "avoid societal harm" caused by AI.

Anthropic also appealed specifically to governments to begin setting up AI testing programs immediately to address the imminent danger of AI-driven cybersecurity attacks.

In a prepared statement, the U.K.'s Donelan indicated that AI — and the regulation of it — is "the defining technology challenge of our generation."

"[T]he safe development of AI is a shared global issue," she said. "Only by working together can we address the technology's risks head on and harness its enormous potential to help us all live easier and healthier lives."

The U.S.-U.K. partnership, which is effective immediately, has several aspects. The two countries are committing to developing "a common approach to AI safety testing," and to sharing resources and capabilities in pursuit of that goal. That includes "personnel exchanges," as well as information and research sharing.

They also "intend to perform at least one joint testing exercise on a publicly accessible model."

Down the line, the two countries plan to forge similar partnerships with other countries "to promote AI safety across the globe."

Both governments acknowledge the need to lay the groundwork for AI safety standards immediately, given how rapidly AI technology evolves — another point Anthropic made in its blog post.

"This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren't running away from these concerns — we're running at them," said Raimondo. 

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
