IBM and Meta Launch Global AI Alliance with Dozens of Partners and Collaborators

IBM and Meta have announced the launch of an international AI Alliance, a community of developers, researchers, and adopters from across industry and around the world working to advance AI in an open, safe, and responsible manner. More than 50 members and collaborators worldwide have joined the effort.

According to IBM's release, "While there are many individual companies, startups, researchers, governments, and others who are committed to open science and open technologies and want to participate in the new wave of AI innovation, more collaboration and information sharing will help the community innovate faster and more inclusively, and identify specific risks and mitigate those risks before putting a product into the world."

The Alliance says its goal is to ensure "scientific rigor, trust, safety, security, diversity, and economic competitiveness" by pooling resources to "address safety concerns while providing a platform for sharing and developing solutions that fit the needs of researchers, developers, and adopters around the world."

The website's home page notes that members and collaborators represent $80 billion in research and development, 400,000 students supported by academic institutions, and 100,000 staff members.

The Alliance will begin by forming member-driven working groups, establishing a governing board and technical oversight committee, and working with existing initiatives from government, nonprofit, and civil society organizations.

The Alliance's objectives are to:

  • Develop and deploy benchmarks and evaluation standards, tools, and other resources for responsible development of AI systems;
  • Promote capable and open multilingual, multimodal, and scientific foundation models to address societal challenges such as climate and education;
  • Assist development of an AI hardware accelerator ecosystem by "boosting contributions and adoption of essential enabling software technology";
  • Support educator and student research efforts to contribute to AI model and tool projects;
  • Educate the public about the benefits, risks, solutions, and regulations concerning AI use; and
  • Develop initiatives and host events showcasing how members use AI responsibly and beneficially.

"The progress we continue to witness in AI is a testament to open innovation and collaboration across communities of creators, scientists, academics and business leaders," said Arvind Krishna, IBM Chairman and CEO. "This is a pivotal moment in defining the future of AI."

"The AI Alliance brings together researchers, developers and companies to share tools and knowledge that can help us all make progress whether models are shared openly or not," said Nick Clegg, Meta's president of global affairs. "We're looking forward to working with partners to advance the state of the art in AI and help everyone build responsibly."

Participating U.S. higher education institutions include Cornell University, Dartmouth College, New York University, Rensselaer Polytechnic Institute, the UC Berkeley College of Computing, Data Science, and Society, the University of Illinois Urbana-Champaign, the University of Notre Dame, the University of Texas at Austin, and Yale University.

To learn more, visit the Alliance's Learn and FAQ page and Focus Areas page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.

Featured

  • From the Kuali Days 2025 Conference: A CEO's View of Planning for AI

    How can a company serving higher education navigate the changes AI brings to ed tech? What will customers expect? CT talks with Kuali CEO Joel Dehlin, who shared his company's AI strategies with attendees at Kuali Days 2025 in Anaheim.

  •
    Cloud Security Alliance Offers Playbook for Red Teaming Agentic AI Systems

    The Cloud Security Alliance has introduced a guide for red teaming Agentic AI systems, targeting the security and testing challenges posed by increasingly autonomous artificial intelligence.

  • Training the Next Generation of Space Cybersecurity Experts

    CT asked Scott Shackelford, Indiana University professor of law and director of the Ostrom Workshop Program on Cybersecurity and Internet Governance, about the possible emergence of space cybersecurity as a separate field that would support changing practices and foster future space cybersecurity leaders.

  •
    OpenAI Report Identifies Malicious Use of AI in Cloud-Based Cyber Threats

    A report from OpenAI identifies the misuse of artificial intelligence in cybercrime, social engineering, and influence operations, particularly those targeting or operating through cloud infrastructure. In "Disrupting Malicious Uses of AI: June 2025," the company outlines how threat actors are weaponizing large language models for malicious ends — and how OpenAI is pushing back.