MIT Researchers Develop AI Cybersecurity Platform

Researchers at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) have developed a cybersecurity system that combines human and machine-learning approaches to detect cyber attacks while reducing false positives.

Named AI2 to signify that it merges artificial intelligence with "analyst intuition," the system was developed by Kalyan Veeramachaneni, a research scientist at CSAIL, and Ignacio Arnaldo, a former postdoctoral researcher at CSAIL who is now chief data scientist at PatternEx. In tests, the researchers demonstrated that "AI2 can detect 85 percent of attacks, which is roughly three times better than previous benchmarks, while also reducing the number of false positives by a factor of five," according to a news release from CSAIL.

Most modern cybersecurity systems use either analyst-driven solutions or machine-learning approaches. Analyst-driven systems rely on rules created by people and consequently can't detect attacks that don't adhere to those rules, whereas machine-learning systems rely on anomaly detection, which tends to generate false positives that have to be investigated by people. The AI2 system merges the two approaches to offset the weaknesses of each.

The AI2 system first uses unsupervised machine learning to flag suspicious activity in the data, then presents that activity to a human analyst, who confirms which events are actual cyber attacks. AI2 incorporates that human feedback into its models when analyzing the next set of data, pairing a supervised model with the unsupervised one. As the system collects additional feedback from the analyst, it continually updates its models.
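As a rough illustration of that feedback loop, the sketch below runs one cycle of a human-in-the-loop pipeline, assuming generic scikit-learn components (IsolationForest for the unsupervised pass, RandomForestClassifier for the supervised one) and a hypothetical ask_analyst callback; these stand-ins are not the models the CSAIL team actually used.

```python
# Rough sketch of one AI2-style feedback cycle. IsolationForest,
# RandomForestClassifier, and the ask_analyst callback are illustrative
# assumptions; the paper's actual detectors and ensembles differ.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def detection_cycle(events, ask_analyst, labeled_X, labeled_y, k=10):
    """Flag the k most anomalous events, collect analyst labels for them,
    and retrain the supervised model on the accumulated label set."""
    # Unsupervised pass: score every event; higher means more anomalous.
    detector = IsolationForest(random_state=0).fit(events)
    scores = -detector.score_samples(events)

    # Present the top-k outliers to the analyst, who returns 1 for a
    # confirmed attack and 0 for a false alarm.
    top = np.argsort(scores)[-k:]
    labels = np.array([ask_analyst(events[i]) for i in top])

    # Fold the feedback into the growing labeled set and retrain the
    # supervised model, which ranks the next batch alongside the detector.
    labeled_X = np.vstack([labeled_X, events[top]])
    labeled_y = np.concatenate([labeled_y, labels])
    classifier = RandomForestClassifier(random_state=0).fit(labeled_X, labeled_y)
    return classifier, labeled_X, labeled_y
```

Each repetition of the cycle shrinks the analyst's workload: as the supervised model improves, fewer of the flagged events turn out to be false alarms.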

"You can think about the system as a virtual analyst," said Veeramachaneni in a prepared statement. "It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly."

Veeramachaneni presented a paper about the system, "AI2: Training a Big Data Machine to Defend," at the 2nd IEEE International Conference on Big Data Security on Cloud, which was held in New York City April 8-10, 2016.

About the Author

Leila Meyer is a technology writer based in British Columbia. She can be reached at [email protected].
