
Paper Offers Framework for Reducing Algorithm Bias


A recent paper on algorithm development offered guidance on how to reduce the bias inherent in AI algorithms, as well as the harm those biases can cause to underprivileged groups. "A Harm-Reduction Framework for Algorithmic Fairness" argued that artificial intelligence and machine learning are "increasingly" being applied to decision-making and "affect the lives of individuals in ways large and small." The report was produced by the Center for Research on Equitable and Open Scholarship at MIT Libraries and the Berkman Klein Center for Internet & Society at Harvard University and published in IEEE Security & Privacy.

The issue of algorithm bias cropped up recently when freshman Congresswoman Alexandria Ocasio-Cortez (D-N.Y.) told a gathering during a Martin Luther King Jr. Day event in New York City that "Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions." During the public interview with writer Ta-Nehisi Coates, Ocasio-Cortez said, "[Algorithms are] just automated assumptions. And if you don't fix the bias, then you're automating the bias."

As two of the paper's authors explained in an opinion piece on The Hill, the standard approaches for reducing bias in algorithms "do little to address inequality." As an example, they pointed to Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a risk assessment application used by the criminal justice system to help make decisions on sentencing, bail, probation and parole. While race and ethnicity data is "intentionally excluded from COMPAS," discriminatory results still surface because the training data (such as income) used to build the model "are sourced from a criminal justice system plagued by racial disparities" and can act as proxies for race.
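The proxy effect the authors describe is easy to reproduce in miniature. The sketch below is not drawn from COMPAS or from the paper; it uses entirely synthetic data, an assumed historical income gap between two groups, and an arbitrary scoring rule, purely to illustrate how a model that never sees a protected attribute can still produce disparate outcomes through a correlated feature.

```python
# Minimal illustration (synthetic data, assumed income gap): a feature
# correlated with group membership acts as a proxy, so excluding the
# group label does not prevent disparate outcomes.
import random

random.seed(0)

def make_person(group: str) -> dict:
    # Assumed historical disparity: group "B" has lower recorded income on average.
    mean_income = 50_000 if group == "A" else 35_000
    return {"group": group, "income": random.gauss(mean_income, 10_000)}

people = [make_person("A") for _ in range(5_000)] + \
         [make_person("B") for _ in range(5_000)]

def risk_score(person: dict) -> str:
    # The scoring rule never sees the group label -- only income.
    return "high risk" if person["income"] < 40_000 else "low risk"

for group in ("A", "B"):
    members = [p for p in people if p["group"] == group]
    flagged = sum(risk_score(p) == "high risk" for p in members) / len(members)
    print(f"group {group}: {flagged:.0%} flagged high risk")
```

Running the sketch shows a far higher share of group B flagged as high risk, even though group membership is never an input; the income feature carries the disparity into the output.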

But rather than giving up on algorithms altogether, the authors suggested a greater understanding of the "costs and benefits" involved in adopting them. "In our paper, we argue that society has an ethical responsibility not to make vulnerable groups pay a disproportionate share of the costs resulting from our growing use of algorithms," they wrote.

This "balancing act," as they called it, necessitates a more holistic analysis that explores the impact of "four key decision points":

  • How the algorithm was designed;
  • What data was used to train the algorithm;
  • How the formula is applied to each person's data; and
  • How the result or output is used to make the decision.

Right now, the authors stated, too many algorithms are proprietary, and there is too little pressure on designers to share how their algorithms work. Such sharing, they said, should include explaining design choices, publishing data on the consequences of those choices, and continually monitoring how the algorithms affect various groups of people, especially vulnerable groups. "We should not trust an algorithm unless it can be reviewed and audited in meaningful ways."
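What such continual monitoring could look like in practice is sketched below. This is not the paper's method; it is a minimal, hypothetical audit that compares favorable-outcome rates across groups from a decision log, using the common "four-fifths" rule of thumb as an assumed threshold for flagging gaps worth reviewing.

```python
# A minimal sketch of an outcome audit: compare favorable-outcome rates
# across groups and flag large gaps. The 0.8 threshold echoes the common
# "four-fifths" rule of thumb and is an assumption, not a prescription
# from the paper.
from collections import defaultdict

def audit_outcomes(decisions, groups, favorable="approved", threshold=0.8):
    """Print the favorable-outcome rate per group and flag large gaps."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for decision, group in zip(decisions, groups):
        counts[group][1] += 1
        if decision == favorable:
            counts[group][0] += 1
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    best = max(rates.values())
    for g, rate in sorted(rates.items()):
        flag = "" if best == 0 or rate / best >= threshold else "  <-- review"
        print(f"group {g}: {rate:.0%} favorable{flag}")
    return rates

# Hypothetical decision log, e.g. exported from a deployed tool.
audit_outcomes(
    decisions=["approved", "denied", "approved", "denied", "denied", "approved"],
    groups=["A", "A", "A", "B", "B", "B"],
)
```

An audit like this only covers the last of the four decision points (how outputs are used); meaningful review of design choices and training data would require the kind of disclosure the authors call for.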

That level of sharing won't happen without pressure "from policymakers, consumers and the companies that purchase and use algorithmic decision-making tools," they stated.

The article explaining the paper is openly available on The Hill. The paper itself is also openly available through the Berkman Klein Center.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
