Carnegie Mellon Software Engineering Institute Forms AI Security Incident Response Team

The Software Engineering Institute (SEI) at Carnegie Mellon University has created an Artificial Intelligence Security Incident Response Team (AISIRT) to analyze and respond to threats and security incidents involving the use of AI and machine learning (ML). The team will focus on threats to a wide range of AI and ML systems, spanning commerce, lifestyle, and critical infrastructure such as defense and national security, the SEI said. The team will also lead research into AI and ML incident analysis, response, and vulnerability mitigation.

The SEI noted that the rapid expansion of AI and ML platforms and software has introduced serious safety risks from improper use or deliberate misuse. Preventing and mitigating these threats requires cooperation among academia, industry, and government, it said.

AISIRT will draw upon university cybersecurity, AI, and ML experts and work on furthering the recommendations made by the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released in October 2023.

"AI and cybersecurity experts at the SEI are currently at work on AI- and ML-related vulnerabilities that, if left unaddressed, may be exploited by adversaries against national assets with potentially disastrous consequences," said SEI Director and CEO Paul Nielsen. "Our research in this rapidly emerging discipline reinforces the need for a coordination center in the AI ecosystem to help engender trust and to support advancing the safe and responsible development and adoption of AI."

This is not the SEI's first foray into cybersecurity, the institute said. Its CERT Coordination Center has been operating since 1988 to address vulnerabilities in computer systems. SEI also heads the National AI Engineering Initiative, and its experts are working on practices that support secure and human-centered AI.

Those who have experienced or are experiencing AI vulnerabilities or attacks may report them to AISIRT.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
