MIT Researchers Develop AI Cybersecurity Platform

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a cybersecurity system that combines human expertise with machine learning to detect more cyber attacks while generating fewer false positives.

Named AI2 to signify that it merges artificial intelligence with "analyst intuition," the system was developed by Kalyan Veeramachaneni, a research scientist at CSAIL, and Ignacio Arnaldo, a former postdoctoral researcher at CSAIL who is now chief data scientist at PatternEx. In tests, the researchers demonstrated that "AI2 can detect 85 percent of attacks, which is roughly three times better than previous benchmarks, while also reducing the number of false positives by a factor of five," according to a news release from CSAIL.

Most modern cybersecurity systems use either analyst-driven solutions or machine-learning approaches. Analyst-driven systems rely on rules created by people and consequently can't detect attacks that don't adhere to those rules, whereas machine-learning systems rely on anomaly detection, which tends to generate false positives that have to be investigated by people. The AI2 system merges both approaches to improve cybersecurity efforts.

The AI2 system begins by using unsupervised machine learning to flag suspicious activity in the data, then presents that activity to a human analyst, who confirms which events are actual cyber attacks. AI2 incorporates that feedback into its models when analyzing the next set of data, combining a supervised model with the unsupervised one. As the system collects more feedback from the analyst, it continually updates its models.
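The loop described above can be sketched in miniature. This is not MIT's actual AI2 code; it is an illustrative toy, assuming a simple z-score outlier detector for the unsupervised step, simulated analyst labels, and a threshold model that shifts its decision boundary based on that feedback.

```python
# Illustrative sketch (not the AI2 implementation): an unsupervised
# outlier scorer flags suspicious events, an analyst labels the
# top-scoring ones, and a supervised model learns from that feedback.
import statistics

def outlier_scores(values):
    """Unsupervised step: score each event by distance from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values) or 1.0
    return [abs(v - mean) / stdev for v in values]

class FeedbackModel:
    """Supervised step: a decision threshold refined by analyst labels."""
    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def update(self, labeled):
        """labeled: list of (score, is_attack) pairs from the analyst."""
        attacks = [s for s, is_attack in labeled if is_attack]
        benign = [s for s, is_attack in labeled if not is_attack]
        if attacks and benign:
            # Move the boundary between confirmed attacks and benign events.
            self.threshold = (min(attacks) + max(benign)) / 2
        elif attacks:
            self.threshold = min(attacks)

    def predict(self, score):
        return score >= self.threshold

# One batch of events (e.g., login counts per user); most are normal.
events = [4, 5, 3, 6, 5, 4, 40, 5, 3, 55]
scores = outlier_scores(events)

# The analyst reviews only the top-scoring events and labels them.
top = sorted(range(len(events)), key=lambda i: -scores[i])[:3]
feedback = [(scores[i], events[i] > 30) for i in top]  # simulated labels

model = FeedbackModel()
model.update(feedback)           # supervised model absorbs the feedback
flags = [model.predict(s) for s in scores]
```

In a real deployment the outlier scorer and the supervised model would be far richer, and the loop would repeat on each new batch of data, with the analyst's workload shrinking as the model improves.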

"You can think about the system as a virtual analyst," said Veeramachaneni in a prepared statement. "It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly."

Veeramachaneni presented a paper about the system, "AI2: Training a Big Data Machine to Defend," at the 2nd IEEE International Conference on Big Data Security on Cloud, which was held in New York City April 8-10, 2016.

About the Author

Leila Meyer is a technology writer based in British Columbia. She can be reached at [email protected].
