Cornell, Carnegie Researchers Aim to Hold Computers Accountable for Their Decisions

A team of researchers led by professors at Cornell University will launch a project designed to ensure that automated decision-making systems respect privacy and come to their decisions fairly.

"There's a lot of new technology being deployed in a variety of important settings, and we don't fully understand all the ramifications," said Thomas Ristenpart, associate professor of computer science at Cornell Tech and co-principal investigator for the project, in a prepared statement.

Funded by a $3 million, five-year grant from the National Science Foundation, the project is being undertaken by Ristenpart, co-principal investigator Helen Nissenbaum and researchers from Carnegie Mellon University and the International Computer Science Institute (ICSI) in Berkeley, bringing together experts in machine learning, ethics, philosophy, privacy and security.

One issue the team will look into is whether machine learning systems leak information about the datasets they're trained on through the conclusions that they come to.

"Unfortunately, we don't yet understand what machine-learning systems are leaking about privacy-sensitive training data sets," said Ristenpart in a prepared statement. "This project will be a great opportunity to investigate the extent to which having access to the output of machine learning systems reveals sensitive information and, in turn, how to improve machine learning to be more privacy friendly."

The researchers will also work with relevant domain experts to explore safeguards that prevent real-world biases from being reflected in applications, such as racially biased recidivism prediction software or gender bias in which job listings users are shown.

The researchers hope not just to identify instances of bias but to develop methods by which systems could detect bias on their own and correct it, as in the sketch below.
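
As a rough illustration of what automated bias detection could look like (a hypothetical sketch, not the project's actual method), a system can compare favorable-outcome rates across groups and flag itself when the gap is large. The decisions, group labels and threshold below are invented for the example.

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Difference in favorable-outcome rates between two groups (labeled 0 and 1)."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model decisions (1 = favorable outcome) and group membership.
decisions = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
groups    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:  # threshold chosen arbitrarily for illustration
    print("flag: outcomes differ substantially across groups; review the model")
```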

"A key innovation of the project is to automatically account for why an automated system with artificial intelligence components exhibits behavior that is problematic for privacy or fairness," said Carnegie Mellon's Anupam Datta, also a principle investigator on the grant, in a prepared statement. "These explanations then inform fixes to the system to avoid future violations."

"Although science cannot decide moral questions, given a standard from ethics, science can shed light on how to enforce it, its consequences and how it compares to other standards," said Michael Tschantz, a principle investigator from ICSI, in a news release.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
