Research Hub to Explore Safety and Equity in AI

A new research hub at Northwestern University will explore the impact of artificial intelligence systems and seek ways to better incorporate safety and equity into the technology. The Center for Advancing Safety of Machine Intelligence (CASMI) is supported by the Digital Intelligence Safety Research Institute (DISRI) at Underwriters Laboratories, which has committed $7 million over three years to the effort and will jointly lead the hub's research and operations in partnership with the university. The goal: to "bring together and coordinate a wide-ranging research network focused on maximizing machine learning's benefits while recognizing and averting potential negative effects," according to a news announcement.

Northwestern and Underwriters Laboratories have collaborated since 2020 to study machine learning's current and potential impacts on human health and safety, the organizations said. The CASMI research hub will build on that work and "refine a new framework to evaluate the impact of artificial intelligence technologies and devise new ways to responsibly design and develop these technologies."

In particular, CASMI and DISRI said they will develop connections and collaborations across multiple institutions and disciplines, in a distributed model designed to foster research in multiple areas related to machine learning and artificial intelligence. In the research hub's first year, the organizations plan to fund an initial set of research projects and start sharing results. In years two and three, they expect to expand the research as well as explore opportunities to connect the research network with industry partners.

"Artificial intelligence informed by machine learning is increasingly ubiquitous in our everyday lives," said Christopher J. Cramer, Underwriters Laboratories chief research officer and acting DISRI executive director, in a statement. "It's imperative we get it right. We must develop approaches and tests that will incorporate equity into machine learning and hold it to standards guided by both safety and ethical considerations. I'm terrifically excited about this partnership, which will foster research aimed at integrating safety into machine-learning and artificial intelligence design, development, and testing processes."

"Machine learning is among the most transformational forces in technology today, but we're only beginning as a society to genuinely understand and evaluate how it affects our lives," commented Kristian Hammond, CASMI executive director and Northwestern's Bill and Cathy Osborn professor of computer science. "Our partnership with Underwriters Laboratories will help us establish the clear understanding we need to develop these technologies safely and responsibly. Our goal is to go beyond platitudes and operationalize what it means for these technologies to be safe as they are used in the world."

For more information, visit the CASMI site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
