Cornell, Carnegie Researchers Aim to Hold Computers Accountable for Their Decisions


A team of researchers led by professors at Cornell University will launch a project designed to ensure that automated decision-making systems respect privacy and come to their decisions fairly.

"There's a lot of new technology being deployed in a variety of important settings, and we don't fully understand all the ramifications," said Thomas Ristenpart, associate professor of computer science at Cornell Tech and co-principal investigator for the project, in a prepared statement.

Funded by a $3 million, five-year grant from the National Science Foundation, the project is being undertaken by Ristenpart, co-principal investigator Helen Nissenbaum and researchers from Carnegie Mellon University and the International Computer Science Institute (ICSI) in Berkeley, bringing together experts in the fields of machine learning, ethics, philosophy, privacy and security.

One issue the team will look into is whether machine learning systems leak information about the datasets they're trained on through the conclusions that they come to.

"Unfortunately, we don't yet understand what machine-learning systems are leaking about privacy-sensitive training data sets," said Ristenpart in a prepared statement. "This project will be a great opportunity to investigate the extent to which having access to the output of machine learning systems reveals sensitive information and, in turn, how to improve machine learning to be more privacy friendly."
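To make the leakage concern concrete, here is a toy illustration (not from the project itself; all names and numbers are invented for this sketch). Overfit models tend to report higher confidence on examples they were trained on than on unseen ones, so an observer who sees only the model's outputs can sometimes guess whether a record was in the training set:

```python
# Toy sketch of a confidence-based membership guess. The "model" below is
# a stand-in that memorizes its training points, mimicking an overfit
# classifier; a real attack would query an actual trained model.

train_set = {(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)}  # hypothetical training data

def model_confidence(x):
    # Overfit behavior: near-certainty on memorized points, lower elsewhere.
    return 0.99 if x in train_set else 0.60

def guess_membership(x, threshold=0.9):
    # The attacker sees only the output confidence, never the data itself.
    return model_confidence(x) > threshold

print(guess_membership((1.0, 2.0)))  # training point: confidence leaks membership
print(guess_membership((7.0, 8.0)))  # unseen point
```

The gap between in-training and out-of-training confidence is exactly the kind of output-level signal the researchers want to measure and then design training methods to suppress.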

Working with relevant domain experts, the researchers will also explore safeguards to keep real-world biases from being reflected in applications, such as racially biased recidivism prediction software or gender bias in which job listings users are shown.

The researchers hope not just to identify instances of bias but to develop methods by which systems could detect bias on their own and correct the issue.

"A key innovation of the project is to automatically account for why an automated system with artificial intelligence components exhibits behavior that is problematic for privacy or fairness," said Carnegie Mellon's Anupam Datta, also a principal investigator on the grant, in a prepared statement. "These explanations then inform fixes to the system to avoid future violations."

"Although science cannot decide moral questions, given a standard from ethics, science can shed light on how to enforce it, its consequences and how it compares to other standards," said Michael Tschantz, a principal investigator from ICSI, in a news release.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
