U.S. and U.K. to Collaborate on AI Safety Testing Frameworks

The United States and United Kingdom governments have announced a joint effort to establish AI safety testing standards and protocols.

The two countries have signed a Memorandum of Understanding, the U.S. Department of Commerce announced on Monday. The memorandum, signed by U.S. Commerce Secretary Gina Raimondo and U.K. Technology Secretary Michelle Donelan, formalizes a nascent effort by the two countries to "work together to develop tests for the most advanced AI models."

The news comes one week after Anthropic, a leading generative AI firm and maker of the Claude large language model family, published a lengthy blog post advocating for an industrywide effort to create a standardized process for testing the safety of AI systems. In that post, Anthropic stressed the importance of creating a robust AI testing paradigm, one that's verified and administered by reputable third parties, to "avoid societal harm" caused by AI.

Anthropic also appealed specifically to governments to begin setting up AI testing programs immediately to address the near-term danger of AI-driven cybersecurity attacks.

In a prepared statement, the U.K.'s Donelan indicated that AI — and the regulation of it — is "the defining technology challenge of our generation."

"[T]he safe development of AI is a shared global issue," she said. "Only by working together can we address the technology's risks head on and harness its enormous potential to help us all live easier and healthier lives."

The U.S.-U.K. partnership, which is effective immediately, has several aspects. The two countries are committing to developing "a common approach to AI safety testing," and to sharing resources and capabilities in pursuit of that goal. That includes "personnel exchanges," as well as information and research sharing.

They also "intend to perform at least one joint testing exercise on a publicly accessible model."

Down the line, the two countries plan to forge similar partnerships with other countries "to promote AI safety across the globe."

Both governments acknowledge the need to lay the groundwork for AI safety standards immediately, given how rapidly AI technology evolves — a point Anthropic also made in its blog post.

"This partnership is going to accelerate both of our Institutes' work across the full spectrum of risks, whether to our national security or to our broader society. Our partnership makes clear that we aren't running away from these concerns — we're running at them," said Raimondo. 

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
