U.S. and U.K. to Collaborate on AI Safety Testing Frameworks

The United States and United Kingdom governments have announced a joint effort to establish AI safety testing standards and protocols.

The two countries have signed a Memorandum of Understanding, the U.S. Department of Commerce announced on Monday. The memorandum, signed by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, formalizes a nascent effort by the two countries to "work together to develop tests for the most advanced AI models."

The news comes one week after Anthropic, a leading generative AI firm and maker of the Claude large language model family, published a lengthy blog post advocating an industrywide effort to create a standardized process for testing the safety of AI systems. In that post, Anthropic stressed the importance of creating a robust AI testing paradigm, one that's verified and administered by reputable third parties, to "avoid societal harm" caused by AI.

Anthropic also appealed specifically to governments to begin setting up AI testing programs immediately to address the clear and present danger of AI-driven cybersecurity attacks.

In a prepared statement, the U.K.'s Donelan indicated that AI — and the regulation of it — is "the defining technology challenge of our generation."

"[T]he safe development of AI is a shared global issue," she said. "Only by working together can we address the technology's risks head on and harness its enormous potential to help us all live easier and healthier lives."

The U.S.-U.K. partnership, which is effective immediately, has several aspects. The two countries are committing to developing "a common approach to AI safety testing," and to sharing resources and capabilities in pursuit of that goal. That includes "personnel exchanges," as well as information and research sharing.

They also "intend to perform at least one joint testing exercise on a publicly accessible model."

Down the line, the two countries plan to forge similar partnerships with other countries "to promote AI safety across the globe."

Both governments acknowledge the need to lay the groundwork for AI safety standards immediately, given how rapidly AI technology evolves — another point Anthropic discussed in its manifesto.

"This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren't running away from these concerns — we're running at them," said Raimondo. 

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
