

Cornell, Carnegie Researchers Aim to Hold Computers Accountable for Their Decisions


A team of researchers led by professors at Cornell University will launch a project designed to ensure that automated decision-making systems respect privacy and come to their decisions fairly.

"There's a lot of new technology being deployed in a variety of important settings, and we don't fully understand all the ramifications," said Thomas Ristenpart, associate professor of computer science at Cornell Tech and co-principal investigator for the project, in a prepared statement.

Funded by a $3 million, five-year grant from the National Science Foundation, the project is being undertaken by Ristenpart, co-principal investigator Helen Nissenbaum and researchers from Carnegie Mellon University and the International Computer Science Institute (ICSI) in Berkeley, bringing together experts in machine learning, ethics, philosophy, privacy and security.

One issue the team will look into is whether machine learning systems leak information about the data sets they're trained on through the outputs they produce.

"Unfortunately, we don't yet understand what machine-learning systems are leaking about privacy-sensitive training data sets," said Ristenpart in a prepared statement. "This project will be a great opportunity to investigate the extent to which having access to the output of machine learning systems reveals sensitive information and, in turn, how to improve machine learning to be more privacy friendly."

The researchers will also work with relevant domain experts to explore safeguards that keep real-world biases from being reflected in applications, such as racially biased recidivism prediction software or gender bias in which job listings users are shown.

The researchers hope not just to identify instances of bias but to develop methods by which systems could identify bias on their own and correct it, as in the sketch below.
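One common way to surface bias automatically is to compute a fairness metric over a system's decisions, such as the difference in positive-decision rates between demographic groups. The snippet below is a generic illustration under assumed, hypothetical inputs; it is not a description of the project's own methods.

```python
# Illustrative bias check (generic, not the project's method):
# compare a system's positive-decision rate across two groups.
import numpy as np

def demographic_parity_difference(decisions, group):
    """Absolute difference in positive-decision rates between two groups.

    decisions: array of 0/1 outcomes (e.g., "flagged as high risk")
    group:     array of group labels, assumed binary here (0 or 1)
    """
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical decisions and group labels, purely for illustration.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
groups    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
gap = demographic_parity_difference(decisions, groups)
print("demographic parity gap: %.2f" % gap)  # large gaps may signal bias
```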

"A key innovation of the project is to automatically account for why an automated system with artificial intelligence components exhibits behavior that is problematic for privacy or fairness," said Carnegie Mellon's Anupam Datta, also a principle investigator on the grant, in a prepared statement. "These explanations then inform fixes to the system to avoid future violations."

"Although science cannot decide moral questions, given a standard from ethics, science can shed light on how to enforce it, its consequences and how it compares to other standards," said Michael Tschantz, a principle investigator from ICSI, in a news release.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
