Research Hub to Explore Safety and Equity in AI

A new research hub at Northwestern University will explore the impact of artificial intelligence systems and seek ways to better incorporate safety and equity into the technology. The Center for Advancing Safety of Machine Intelligence (CASMI) is supported by the Digital Intelligence Safety Research Institute (DISRI) at Underwriters Laboratories, which has committed $7 million over three years to the effort and will jointly lead the hub's research and operations in partnership with the university. The goal: to "bring together and coordinate a wide-ranging research network focused on maximizing machine learning's benefits while recognizing and averting potential negative effects," according to a news announcement.

Northwestern and Underwriters Laboratories have collaborated since 2020 to study machine learning's current and potential impacts on human health and safety, the organizations said. The CASMI research hub will build on that work and "refine a new framework to evaluate the impact of artificial intelligence technologies and devise new ways to responsibly design and develop these technologies."

In particular, CASMI and DISRI said they will develop connections and collaborations across multiple institutions and disciplines, in a distributed model designed to foster research in multiple areas related to machine learning and artificial intelligence. In the research hub's first year, the organizations plan to fund an initial set of research projects and start sharing results. In years two and three, they expect to expand the research as well as explore opportunities to connect the research network with industry partners.

"Artificial intelligence informed by machine learning is increasingly ubiquitous in our everyday lives," said Christopher J. Cramer, Underwriters Laboratories chief research officer and acting DISRI executive director, in a statement. "It's imperative we get it right. We must develop approaches and tests that will incorporate equity into machine learning and hold it to standards guided by both safety and ethical considerations. I'm terrifically excited about this partnership, which will foster research aimed at integrating safety into machine-learning and artificial intelligence design, development, and testing processes."

"Machine learning is among the most transformational forces in technology today, but we're only beginning as a society to genuinely understand and evaluate how it affects our lives," commented Kristian Hammond, CASMI executive director and Northwestern's Bill and Cathy Osborn professor of computer science. "Our partnership with Underwriters Laboratories will help us establish the clear understanding we need to develop these technologies safely and responsibly. Our goal is to go beyond platitudes and operationalize what it means for these technologies to be safe as they are used in the world."

For more information, visit the CASMI site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
