Carnegie Mellon Debuts Initiative to Combine Disparate AI Research

The School of Computer Science (SCS) at Carnegie Mellon University (CMU) has launched CMU AI, a new initiative designed to connect work on artificial intelligence across seven departments within the school.

"For AI to reach greater levels of sophistication, experts in each aspect of AI, such as how computers understand the way people talk or how computers can learn and improve with experience, will increasingly need to work in close collaboration," said Andrew Moore, dean of SCS, in a prepared statement. "CMU AI provides a framework for our ongoing AI research and education."

The initiative comprises more than 100 faculty involved in AI research and education, along with nearly 1,000 students. Moore will direct the initiative along with Jaime Carbonell, Newell University Professor of Computer Science and director of the Language Technologies Institute; Martial Hebert, director of the Robotics Institute; Tuomas Sandholm, professor of computer science; and Manuela Veloso, the Herbert A. Simon University Professor of Computer Science and head of the Machine Learning Department.

"Carnegie Mellon has been on the forefront of AI since creating the first AI computer program, Logic Theorist, in 1956," according to information released by the university. "It created the first and only machine learning department, studying how software can make discoveries and learn with experience. CMU scientists pioneered research into how machines can understand and translate human languages, and how computers and humans can interact with each other. Carnegie Mellon's Robotics Institute has been a leader in enabling machines to perceive, decide and act in the world, including a renowned computer vision group that explores how computers can understand images."

The university has been involved in the development of AI technologies including autonomous vehicles, IBM's Watson, 3D sports replay and soccer- and poker-playing robots.

The new initiative aims to unite strands of AI research that have so far advanced separately, with the goal of creating powerful new technologies.

"AI is no longer something that a lone genius invents in the garage," Moore added. "It requires a team of people, each of whom brings a special expertise or perspective. CMU researchers have always excelled at collaboration across disciplines, and CMU AI will enable all of us to work together in unprecedented ways."

"Students who study AI at CMU have an opportunity to work on projects that unite multiple disciplines — to study AI in its depth and multidisciplinary, integrative aspects. They generally leave CMU for positions of great leadership, and they lead global AI efforts both in terms of starting new ventures and joining innovative companies that tremendously value our education and research," Veloso said in a CMU report about the initiative. "CMU students at all levels have a big impact on what AI is doing for society."

About the Author

Joshua Bolkan is a contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
