Paper Offers Framework for Reducing Algorithm Bias

A recent paper on algorithm development offered guidance on how to reduce the bias inherent in AI algorithms, as well as the harm those biases can cause underprivileged groups. "A Harm-Reduction Framework for Algorithmic Fairness" argued that artificial intelligence and machine learning are "increasingly" being applied to decision-making and "affect the lives of individuals in ways large and small." The report was produced by the Center for Research on Equitable and Open Scholarship at MIT Libraries and the Berkman Klein Center for Internet & Society at Harvard University, and published in IEEE Security & Privacy.

The issue of algorithm bias cropped up recently when freshman Congresswoman Alexandria Ocasio-Cortez (D-N.Y.) told a gathering during a Martin Luther King Jr. Day event in New York City that "Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions." During the public interview with writer Ta-Nehisi Coates, Ocasio-Cortez said, "[Algorithms are] just automated assumptions. And if you don't fix the bias, then you're automating the bias."

As two of the paper's authors explained in an opinion piece on The Hill, the standard approaches for reducing bias in algorithms "do little to address inequality." As an example, they pointed to Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a risk assessment application used by the criminal justice system to help make decisions on sentencing, bail, probation and parole. While race and ethnicity data are "intentionally excluded from COMPAS," discriminatory results still surface because the training data (such as income) used to build the model "are sourced from a criminal justice system plagued by racial disparities" and can act as proxies for race.

But rather than giving up on algorithms altogether, the authors suggested a greater understanding of the "costs and benefits" involved in adopting them. "In our paper, we argue that society has an ethical responsibility not to make vulnerable groups pay a disproportionate share of the costs resulting from our growing use of algorithms," they wrote.

This "balancing act," as they called it, necessitates a more holistic analysis that explores the impact of "four key decision points":

  • How the algorithm was designed;
  • What data was used to train the algorithm;
  • How the formula is applied to each person's data; and
  • How the result or output is used to make the decision.

Right now, the authors stated, too many algorithms are proprietary, and there is too little pressure on designers to share how their algorithms work. That sharing should include explaining their design choices, publishing data on the consequences of those choices, and continually monitoring how the algorithms affect various groups of people — especially vulnerable groups, they said. "We should not trust an algorithm unless it can be reviewed and audited in meaningful ways."

That level of sharing won't happen without pressure "from policymakers, consumers and the companies that purchase and use algorithmic decision-making tools," they stated.

The article explaining the paper is openly available on The Hill. The paper itself is also openly available through the Berkman Klein Center.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
