Cornell, Carnegie Researchers Aim to Hold Computers Accountable for Their Decisions


A team of researchers led by professors at Cornell University will launch a project designed to ensure that automated decision-making systems respect privacy and come to their decisions fairly.

"There's a lot of new technology being deployed in a variety of important settings, and we don't fully understand all the ramifications," said Thomas Ristenpart, associate professor of computer science at Cornell Tech and co-principal investigator for the project, in a prepared statement.

Funded by a $3 million, five-year grant from the National Science Foundation, the project is being undertaken by Ristenpart, co-principal investigator Helen Nissenbaum and researchers from Carnegie Mellon University and the International Computer Science Institute (ICSI) in Berkeley, bringing together experts in the fields of machine learning, ethics, philosophy, privacy and security.

One issue the team will look into is whether machine learning systems leak information about the datasets they're trained on through the conclusions that they come to.

"Unfortunately, we don't yet understand what machine-learning systems are leaking about privacy-sensitive training data sets," said Ristenpart in a prepared statement. "This project will be a great opportunity to investigate the extent to which having access to the output of machine learning systems reveals sensitive information and, in turn, how to improve machine learning to be more privacy friendly."

The researchers will also work with relevant domain experts to explore safeguards that prevent real-world biases from being reflected in applications, such as racially biased recidivism prediction software or gender bias in which job listings users are shown.

The researchers hope not just to identify instances of bias but to develop methods by which systems could identify bias on their own and correct the issue.
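The article doesn't describe the team's techniques, but a minimal sketch of what "identifying bias on its own" could look like is a monitor that compares a system's favorable-outcome rates across demographic groups and raises an alert when the gap exceeds a tolerance. The data, group labels, and threshold below are all illustrative assumptions, not the project's design.

```python
# Illustrative fairness monitor: flag a model whose favorable-outcome rates
# differ too much across demographic groups (a demographic-parity check).
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = favorable outcome) and group membership.
rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=1000)
predictions = np.where(groups == "A",
                       rng.random(1000) < 0.6,   # group A favored ~60% of the time
                       rng.random(1000) < 0.4).astype(int)

gap, rates = demographic_parity_gap(predictions, groups)
print("positive-outcome rates by group:", rates)
if gap > 0.1:  # tolerance chosen purely for illustration
    print(f"fairness alert: demographic parity gap of {gap:.2f} exceeds 0.1")
```

A monitor like this only detects a disparity; deciding which fairness standard to enforce and how to correct the underlying system is the harder question the project pairs with ethicists to address.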

"A key innovation of the project is to automatically account for why an automated system with artificial intelligence components exhibits behavior that is problematic for privacy or fairness," said Carnegie Mellon's Anupam Datta, also a principle investigator on the grant, in a prepared statement. "These explanations then inform fixes to the system to avoid future violations."

"Although science cannot decide moral questions, given a standard from ethics, science can shed light on how to enforce it, its consequences and how it compares to other standards," said Michael Tschantz, a principle investigator from ICSI, in a news release.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
