Carnegie Mellon Software Engineering Institute Forms AI Security Incident Response Team

The Software Engineering Institute (SEI) at Carnegie Mellon University has created an Artificial Intelligence Security Incident Response Team (AISIRT) to analyze and respond to threats and security incidents involving the use of AI and machine learning (ML). The team will focus on threats involving a wide range of AI and ML systems, including those used in commerce, lifestyle applications, and critical infrastructure such as defense and national security, the SEI said. The team will also lead research into AI and ML incident analysis, response, and vulnerability mitigation.

The SEI noted that the rapid expansion of AI and ML platforms and software has created serious safety risks stemming from improper use or deliberate misuse. Preventing and mitigating these threats require cooperation among academia, industry, and government, it said.

AISIRT will draw upon university cybersecurity, AI, and ML experts and work on furthering the recommendations made by the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released in October 2023.

"AI and cybersecurity experts at the SEI are currently at work on AI- and ML-related vulnerabilities that, if left unaddressed, may be exploited by adversaries against national assets with potentially disastrous consequences," said SEI Director and CEO Paul Nielsen. "Our research in this rapidly emerging discipline reinforces the need for a coordination center in the AI ecosystem to help engender trust and to support advancing the safe and responsible development and adoption of AI."

This is not the SEI's first foray into cybersecurity, the institute said. Its CERT Coordination Center has been operating since 1988 to address vulnerabilities in computer systems. SEI also heads the National AI Engineering Initiative, and its experts are working on practices that support secure and human-centered AI.

Those who have experienced or are experiencing AI vulnerabilities or attacks may report them to AISIRT here.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
