Carnegie Mellon Software Engineering Institute Forms AI Security Incident Response Team

The Software Engineering Institute (SEI) at Carnegie Mellon University has created an Artificial Intelligence Security Incident Response Team (AISIRT) to analyze and respond to threats and security incidents involving the use of AI and machine learning (ML). The team will focus on threats to AI and ML systems across many domains, including commerce, lifestyle applications, and critical infrastructure such as defense and national security, the SEI said. The team will also lead research into AI and ML incident analysis, response, and vulnerability mitigation.

The SEI noted that the rapid expansion of AI and ML platforms and software has presented serious safety risks from improper use or deliberate misuse. Prevention and mitigation of these threats require cooperation among academia, industry, and government, it said.

AISIRT will draw upon university cybersecurity, AI, and ML experts and work on furthering the recommendations made by the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released in October 2023.

"AI and cybersecurity experts at the SEI are currently at work on AI- and ML-related vulnerabilities that, if left unaddressed, may be exploited by adversaries against national assets with potentially disastrous consequences," said SEI Director and CEO Paul Nielsen. "Our research in this rapidly emerging discipline reinforces the need for a coordination center in the AI ecosystem to help engender trust and to support advancing the safe and responsible development and adoption of AI."

This is not the SEI's first foray into cybersecurity, the institute said. Its CERT Coordination Center has been operating since 1988 to address vulnerabilities in computer systems. SEI also heads the National AI Engineering Initiative, and its experts are working on practices that support secure and human-centered AI.

Those who have experienced or are experiencing AI vulnerabilities or attacks may report them to AISIRT.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
