U.S. and U.K. to Collaborate on AI Safety Testing Frameworks

The United States and United Kingdom governments have announced a joint effort to establish AI safety testing standards and protocols.

The two countries have signed a Memorandum of Understanding, the U.S. Department of Commerce announced on Monday. The memorandum, signed by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, formalizes a nascent effort by the two countries to "work together to develop tests for the most advanced AI models."

The news comes one week after Anthropic, a leading generative AI firm and maker of the Claude large language model family, published a lengthy blog post advocating for an industrywide effort to create a standardized process for testing the safety of AI systems. In that post, Anthropic stressed the importance of creating a robust AI testing paradigm, one that's verified and administered by reputable third parties, to "avoid societal harm" caused by AI.

Anthropic also appealed specifically to governments to begin setting up AI testing programs immediately to address the imminent threat of AI-driven cybersecurity attacks.

In a prepared statement, the U.K.'s Donelan indicated that AI — and the regulation of it — is "the defining technology challenge of our generation."

"[T]he safe development of AI is a shared global issue," she said. "Only by working together can we address the technology's risks head on and harness its enormous potential to help us all live easier and healthier lives."

The U.S.-U.K. partnership, which is effective immediately, has several aspects. The two countries are committing to developing "a common approach to AI safety testing," and to sharing resources and capabilities in pursuit of that goal. That includes "personnel exchanges," as well as information and research sharing.

They also "intend to perform at least one joint testing exercise on a publicly accessible model."

Down the line, the two countries plan to forge similar partnerships with other countries "to promote AI safety across the globe."

Both governments acknowledge the need to lay the groundwork for AI safety standards immediately, given how rapidly AI technology evolves — a point Anthropic also made in its blog post.

"This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren't running away from these concerns — we're running at them," said Raimondo. 

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
