Carnegie Mellon Software Engineering Institute Forms AI Security Incident Response Team

The Software Engineering Institute (SEI) at Carnegie Mellon University has created an Artificial Intelligence Security Incident Response Team (AISIRT) to analyze and respond to threats and security incidents involving the use of AI and machine learning (ML). The team will focus on threats from AI and ML systems in use across many domains, including commerce, lifestyle, and critical infrastructure such as defense and national security, the SEI said. The team will also lead research into AI and ML incident analysis, response, and vulnerability mitigation.

The SEI noted that the rapid expansion of AI and ML platforms and software has introduced serious safety risks stemming from improper use or deliberate misuse. Prevention and mitigation of these threats require cooperation among academia, industry, and government, it said.

AISIRT will draw upon the university's cybersecurity, AI, and ML experts and will work to further the recommendations of the White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released in October 2023.

"AI and cybersecurity experts at the SEI are currently at work on AI- and ML-related vulnerabilities that, if left unaddressed, may be exploited by adversaries against national assets with potentially disastrous consequences," said SEI Director and CEO Paul Nielsen. "Our research in this rapidly emerging discipline reinforces the need for a coordination center in the AI ecosystem to help engender trust and to support advancing the safe and responsible development and adoption of AI."

This is not the SEI's first foray into cybersecurity, the institute said. Its CERT Coordination Center has been operating since 1988 to address vulnerabilities in computer systems. SEI also heads the National AI Engineering Initiative, and its experts are working on practices that support secure and human-centered AI.

Those who have experienced or are experiencing AI vulnerabilities or attacks may report them directly to AISIRT.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
