Research Hub to Explore Safety and Equity in AI

A new research hub at Northwestern University will explore the impact of artificial intelligence systems and seek ways to better incorporate safety and equity into the technology. The Center for Advancing Safety of Machine Intelligence (CASMI) is supported by the Digital Intelligence Safety Research Institute (DISRI) at Underwriters Laboratories, which has committed $7 million over three years to the effort and will jointly lead the hub's research and operations in partnership with the university. The goal: to "bring together and coordinate a wide-ranging research network focused on maximizing machine learning's benefits while recognizing and averting potential negative effects," according to a news announcement.

Northwestern and Underwriters Laboratories have collaborated since 2020 to study machine learning's current and potential impacts on human health and safety, the organizations said. The CASMI research hub will build on that work and "refine a new framework to evaluate the impact of artificial intelligence technologies and devise new ways to responsibly design and develop these technologies."

In particular, CASMI and DISRI said they will develop connections and collaborations across multiple institutions and disciplines, in a distributed model designed to foster research in multiple areas related to machine learning and artificial intelligence. In the research hub's first year, the organizations plan to fund an initial set of research projects and start sharing results. In years two and three, they expect to expand the research as well as explore opportunities to connect the research network with industry partners.

"Artificial intelligence informed by machine learning is increasingly ubiquitous in our everyday lives," said Christopher J. Cramer, Underwriters Laboratories chief research officer and acting DISRI executive director, in a statement. "It's imperative we get it right. We must develop approaches and tests that will incorporate equity into machine learning and hold it to standards guided by both safety and ethical considerations. I'm terrifically excited about this partnership, which will foster research aimed at integrating safety into machine-learning and artificial intelligence design, development, and testing processes."

"Machine learning is among the most transformational forces in technology today, but we're only beginning as a society to genuinely understand and evaluate how it affects our lives," commented Kristian Hammond, CASMI executive director and Northwestern's Bill and Cathy Osborn professor of computer science. "Our partnership with Underwriters Laboratories will help us establish the clear understanding we need to develop these technologies safely and responsibly. Our goal is to go beyond platitudes and operationalize what it means for these technologies to be safe as they are used in the world."

For more information, visit the CASMI site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
