Cornell, Carnegie Mellon Researchers Aim to Hold Computers Accountable for Their Decisions


A team of researchers led by professors at Cornell University will launch a project designed to ensure that automated decision-making systems respect privacy and reach their decisions fairly.

"There's a lot of new technology being deployed in a variety of important settings, and we don't fully understand all the ramifications," said Thomas Ristenpart, associate professor of computer science at Cornell Tech and co-principal investigator for the project, in a prepared statement.

Funded by a $3 million, five-year grant from the National Science Foundation, the project is being undertaken by Ristenpart, co-principal investigator Helen Nissenbaum and researchers from Carnegie Mellon University and the International Computer Science Institute (ICSI) in Berkeley, bringing together experts in the fields of machine learning, ethics, philosophy, privacy and security.

One issue the team will look into is whether machine learning systems leak information about the datasets they're trained on through the conclusions they reach.

"Unfortunately, we don't yet understand what machine-learning systems are leaking about privacy-sensitive training data sets," said Ristenpart in a prepared statement. "This project will be a great opportunity to investigate the extent to which having access to the output of machine learning systems reveals sensitive information and, in turn, how to improve machine learning to be more privacy friendly."

Working with relevant domain experts, the researchers will also explore safeguards to prevent real-world biases from being reflected in applications, such as racially biased recidivism prediction software or gender bias in which job listings users are shown.

The researchers hope not just to identify instances of bias but to develop methods by which systems could identify bias on their own and correct it, as in the hypothetical check sketched below.
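
The project's detection methods are not spelled out in the article, but as one hypothetical illustration of what such an automated check could look like, the sketch below computes a demographic parity gap, the difference in favorable-outcome rates between two groups, on invented audit data (the protected attribute, outcome, and tolerance are all assumptions, not the project's):

```python
# Hypothetical automated bias check: compare favorable-outcome rates across a
# protected attribute and flag the system when the gap exceeds a tolerance.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)  # invented 0/1 protected attribute
# Simulate a biased system: group 1 is shown the job listing less often.
shown_listing = rng.random(10_000) < np.where(group == 1, 0.25, 0.40)

def demographic_parity_gap(outcome, group):
    """Absolute difference in favorable-outcome rates between the two groups."""
    return abs(outcome[group == 0].mean() - outcome[group == 1].mean())

gap = demographic_parity_gap(shown_listing, group)
print(f"parity gap: {gap:.3f}")
if gap > 0.05:  # illustrative tolerance, not a legal or project standard
    print("flag: outcome rates differ across groups; audit the model")
```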

"A key innovation of the project is to automatically account for why an automated system with artificial intelligence components exhibits behavior that is problematic for privacy or fairness," said Carnegie Mellon's Anupam Datta, also a principle investigator on the grant, in a prepared statement. "These explanations then inform fixes to the system to avoid future violations."

"Although science cannot decide moral questions, given a standard from ethics, science can shed light on how to enforce it, its consequences and how it compares to other standards," said Michael Tschantz, a principle investigator from ICSI, in a news release.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
