Carnegie Mellon Software Engineering Institute Forms AI Security Incident Response Team

The Software Engineering Institute (SEI) at Carnegie Mellon University has created an Artificial Intelligence Security Incident Response Team (AISIRT) to analyze and respond to threats and security incidents involving the use of AI and machine learning (ML). The team will focus on threats to AI and ML systems across many domains, including commerce, lifestyle applications, and critical infrastructure such as defense and national security, the SEI said. The team will also lead research into AI and ML incident analysis, response, and vulnerability mitigation.

The SEI noted that the rapid expansion of AI and ML platforms and software has presented serious safety risks from improper use or deliberate misuse. Prevention and mitigation of threats requires cooperation among academia, industry, and government, it said.

AISIRT will draw upon university cybersecurity, AI, and ML experts and work on furthering the recommendations made by the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released in October 2023.

"AI and cybersecurity experts at the SEI are currently at work on AI- and ML-related vulnerabilities that, if left unaddressed, may be exploited by adversaries against national assets with potentially disastrous consequences," said SEI Director and CEO Paul Nielsen. "Our research in this rapidly emerging discipline reinforces the need for a coordination center in the AI ecosystem to help engender trust and to support advancing the safe and responsible development and adoption of AI."

This is not the SEI's first foray into cybersecurity, the institute said. Its CERT Coordination Center has been operating since 1988 to address vulnerabilities in computer systems. SEI also heads the National AI Engineering Initiative, and its experts are working on practices that support secure and human-centered AI.

Those who have experienced or are experiencing AI vulnerabilities or attacks may report them to AISIRT.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
