Tech Companies Announce Frontier Model Forum to Address Safe, Responsible AI Development

Anthropic, Google, Microsoft, and OpenAI have partnered to launch the Frontier Model Forum, a body that will draw on member companies' expertise to promote the safe and responsible development of frontier AI models. The founders are calling on other companies to join them in this collective effort.

"Frontier" AI models are those which are being developed and will exceed existing model capabilities, the companies said. Now is the time for a slate of "best practices" and "guardrails" to be developed to address issues arising with the continued development of AI.

The Forum's goals are to:

  • Promote AI safety research and standards, as well as independent evaluations of frontier models;
  • Generate best practices for developing frontier models responsibly, and educate the public about their capabilities and limitations;
  • Collaborate with government, academia, business, and the public to share knowledge about AI;
  • Support the development of useful models to help address real-world challenges in areas such as the environment, health, and cybersecurity; and
  • Develop a public library of resources about research, standards, safety, and evaluation of AI technology.

The Forum is calling for organizations to join that are:

  • Developing and deploying frontier models;
  • Dedicated to frontier model safety;
  • Willing to participate in joint initiatives to address the issues arising with frontier models; and
  • Willing to share research and information with others.

The Forum will create an advisory board, working group, and executive board, and will establish a charter, governance, and funding sources. The Forum is also committed to cooperating with existing AI safety and responsibility initiatives, such as the G7 Hiroshima Process and the Partnership on AI, to assess the risks, benefits, standards, and social impact of AI technology.

In making this announcement, representatives of the Forum's founding companies stressed their excitement about the new venture and the importance of its work.

"We're all going to need to work together to make sure AI benefits everyone," said Kent Walker, president of global affairs for Google and Alphabet.

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," said Brad Smith, vice chair and president of Microsoft.

"The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety," said Dario Amodei, Anthropic CEO.

"It is vital that AI companies — especially those working on the most powerful models — align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible," said Anna Makanju, vice president of global affairs at OpenAI. "This is urgent work and this forum is well positioned to act quickly to advance the state of AI safety."

To learn more, visit OpenAI's Frontier Model Forum page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
