NIST's U.S. AI Safety Institute Announces Research Collaboration with Anthropic and OpenAI

The U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST), has formalized agreements with AI companies Anthropic and OpenAI to collaborate on AI safety research, testing, and evaluation.

Under the Memoranda of Understanding, the U.S. AI Safety Institute will gain access to new AI models from both companies before and after their public release. This collaboration aims to assess the capabilities and risks of these models and develop methods to mitigate potential safety concerns.

"Safety is essential to fueling breakthrough technological innovation," said Elizabeth Kelly, director of the U.S. AI Safety Institute, in a statement. "With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety."

"These agreements are just the start," she added, "but they are an important milestone as we work to help responsibly steward the future of AI."

The U.S. AI Safety Institute also intends to work closely with its partners at the U.K. AI Safety Institute to offer feedback to Anthropic and OpenAI on potential safety enhancements to their models.

"Safe, trustworthy AI is crucial for the technology's positive impact," said Anthropic co-founder and head of policy Jack Clark, in a statement. "Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment."

The agreements come at a time of increasing regulatory scrutiny over the safe and ethical use of AI technologies. California legislators are also poised to vote on SB 1047, a bill regulating AI development and deployment in the state.

The initiative builds on NIST's long history of advancing measurement science and standards, with the aim of fostering the safe, secure, and trustworthy development and use of AI, as outlined in the Biden-Harris administration's Executive Order on AI.

"We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said OpenAI chief strategy officer Jason Kwon.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
