NIST's U.S. AI Safety Institute Announces Research Collaboration with Anthropic and OpenAI

The U.S. AI Safety Institute, part of the National Institute of Standards and Technology (NIST), has formalized agreements with AI companies Anthropic and OpenAI to collaborate on AI safety research, testing, and evaluation.

Under the Memoranda of Understanding, the U.S. AI Safety Institute will gain access to new AI models from both companies before and after their public release. This collaboration aims to assess the capabilities and risks of these models and develop methods to mitigate potential safety concerns.

"Safety is essential to fueling breakthrough technological innovation," said Elizabeth Kelly, director of the U.S. AI Safety Institute, in a statement. "With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety."

"These agreements are just the start," she added, "but they are an important milestone as we work to help responsibly steward the future of AI."

The U.S. AI Safety Institute also intends to work closely with its partners at the U.K. AI Safety Institute to offer feedback to Anthropic and OpenAI on potential safety enhancements to their models.

"Safe, trustworthy AI is crucial for the technology's positive impact," said Anthropic co-founder and head of policy Jack Clark, in a statement. "Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment."

The agreements come at a time of increasing regulatory scrutiny over the safe and ethical use of AI technologies. California legislators are also poised to vote on a bill regulating AI development and deployment.

The initiative builds on NIST’s longstanding legacy in advancing measurement science and standards, with the aim of fostering the safe, secure, and trustworthy development and use of AI, as outlined in the Biden-Harris administration’s Executive Order on AI.

"We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said OpenAI chief strategy officer Jason Kwon.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
