Paper Offers Framework for Reducing Algorithm Bias


A recent paper on algorithm development offered guidance on how to reduce the bias inherent in AI algorithms, as well as the harm those biases can cause for underprivileged groups. "A Harm-Reduction Framework for Algorithmic Fairness" argued that artificial intelligence and machine learning are "increasingly" being applied to decision-making and "affect the lives of individuals in ways large and small." The report was produced by the Center for Research on Equitable and Open Scholarship at MIT Libraries and the Berkman Klein Center for Internet & Society at Harvard University and published in IEEE Security & Privacy.

The issue of algorithm bias cropped up recently when freshman Congresswoman Alexandria Ocasio-Cortez (D-N.Y.) told a gathering during a Martin Luther King Jr. Day event in New York City that "Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions." During the public interview with writer Ta-Nehisi Coates, Ocasio-Cortez said, "[Algorithms are] just automated assumptions. And if you don't fix the bias, then you're automating the bias."

As two of the paper's authors explained in an opinion piece on The Hill, the standard approaches for reducing bias in algorithms "do little to address inequality." As an example, they pointed to Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a risk assessment application used by the criminal justice system to help make decisions on sentencing, bail, probation and parole. While race and ethnicity data are "intentionally excluded from COMPAS," discriminatory results still surface because the training data (such as income) used to build the model "are sourced from a criminal justice system plagued by racial disparities" and can act as proxies for race.
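The proxy effect the authors describe is easy to reproduce in miniature. The sketch below is not drawn from COMPAS or from the paper; it uses synthetic data and scikit-learn to show how a model that never sees the protected attribute can still produce skewed outcomes when a correlated stand-in feature (think income or neighborhood) carries the same signal.

```python
# Illustrative sketch with synthetic data: a model trained WITHOUT the
# protected attribute still yields disparate outcomes because a correlated
# proxy feature is part of the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# Proxy feature correlated with group membership (e.g., income or ZIP code).
proxy = rng.normal(loc=np.where(group == 1, -1.0, 1.0), scale=1.0)

# Historical labels reflect past disparities: the proxy drives the outcome,
# so the "ground truth" itself encodes bias from the source system.
label = (proxy + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Train on the proxy alone -- the protected attribute is intentionally excluded.
model = LogisticRegression().fit(proxy.reshape(-1, 1), label)
pred = model.predict(proxy.reshape(-1, 1))

# The disparity reappears anyway, because the proxy carries the group signal.
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: favorable prediction rate = {rate:.2f}")
```

Running this prints sharply different favorable-prediction rates for the two groups, even though group membership was never a model input.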

But rather than giving up on algorithms altogether, the authors suggested a greater understanding of the "costs and benefits" involved in adopting them. "In our paper, we argue that society has an ethical responsibility not to make vulnerable groups pay a disproportionate share of the costs resulting from our growing use of algorithms," they wrote.

This "balancing act," as they called it, necessitates a more holistic analysis that explores the impact of "four key decision points":

  • How the algorithm was designed;
  • What data was used to train the algorithm;
  • How the formula is applied to each person's data; and
  • How the result or output is used to make the decision.

Right now, the authors stated, too many algorithms are proprietary, and there is too little pressure on designers to share how their algorithms work. That sharing should include explanations of design choices, data on the consequences of those choices, and continual monitoring of how the algorithms affect various groups of people, especially vulnerable groups, they said. "We should not trust an algorithm unless it can be reviewed and audited in meaningful ways."
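What a "meaningful" audit looks like in practice is left open, but one common ingredient is routine, group-level monitoring of a system's outputs. The sketch below is a minimal, hypothetical example of that idea, not taken from the paper: it uses synthetic scores and a single metric (the false positive rate by group) to check how a score-plus-threshold decision lands on different populations.

```python
# Minimal sketch of one form of ongoing audit: monitoring how a deployed
# score-plus-threshold decision affects different groups. All data is
# synthetic and the metric choice is only one example; a real review would
# also cover design, training data, application, and use of the output.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

group = rng.integers(0, 2, size=n)            # protected attribute, used for auditing only
true_outcome = rng.integers(0, 2, size=n)     # what actually happened later

# Hypothetical risk scores from the audited system; here they skew higher
# for group 1 independent of the true outcome.
score = rng.normal(loc=0.4 * group + 0.5 * true_outcome, scale=0.3)
decision = score > 0.6                        # the operational threshold

for g in (0, 1):
    mask = (group == g) & (true_outcome == 0)  # people with no adverse outcome
    fpr = decision[mask].mean()                # yet flagged high-risk anyway
    print(f"group {g}: false positive rate = {fpr:.2f}")
```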

That level of sharing won't happen without pressure "from policymakers, consumers and the companies that purchase and use algorithmic decision-making tools," they stated.

The article explaining the paper is openly available on The Hill. The paper itself is also openly available through the Berkman Klein Center.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
