Paper Offers Framework for Reducing Algorithm Bias


A recent paper on algorithm development offered guidance on how to reduce the bias inherent in AI algorithms and the harm those biases can cause to underprivileged groups. "A Harm-Reduction Framework for Algorithmic Fairness" argued that artificial intelligence and machine learning are "increasingly" being applied to decision-making and "affect the lives of individuals in ways large and small." The report was produced by the Center for Research on Equitable and Open Scholarship at MIT Libraries and the Berkman Klein Center for Internet & Society at Harvard University and published in IEEE Security & Privacy.

The issue of algorithm bias cropped up recently when freshman Congresswoman Alexandria Ocasio-Cortez (D-N.Y.) told a gathering during a Martin Luther King Jr. Day event in New York City that "Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions." During the public interview with writer Ta-Nehisi Coates, Ocasio-Cortez said, "[Algorithms are] just automated assumptions. And if you don't fix the bias, then you're automating the bias."

As two of the paper's authors explained in an opinion piece on The Hill, the standard approaches for reducing bias in algorithms "do little to address inequality." As an example, they pointed to Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a risk assessment application used by the criminal justice system to help make decisions on sentencing, bail, probation and parole. While race and ethnicity data is "intentionally excluded from COMPAS," discriminatory results still surface because the training data (such as income) used to build the model "are sourced from a criminal justice system plagued by racial disparities" and can act as proxies for race.
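To make that proxy effect concrete, here is a minimal, hypothetical simulation (our own sketch, not drawn from the paper or from COMPAS): the group attribute is never shown to the "model," yet a correlated income feature, shaped by assumed historical inequity, reproduces the disparity.

```python
import random

random.seed(0)

# Hypothetical toy data: group membership is never used by the model,
# but it shapes the income feature through historical inequity.
def make_person(group):
    income = random.gauss(60 if group == "A" else 40, 10)
    return {"group": group, "income": income}

people = [make_person(random.choice("AB")) for _ in range(10_000)]

# "Model": a single threshold on the proxy feature (income) only;
# the group attribute is deliberately excluded, as with COMPAS.
threshold = sum(p["income"] for p in people) / len(people)
for p in people:
    p["favorable"] = p["income"] > threshold

# The excluded attribute still predicts the outcome via its proxy.
for g in "AB":
    grp = [p for p in people if p["group"] == g]
    rate = sum(p["favorable"] for p in grp) / len(grp)
    print(f"group {g}: favorable-decision rate = {rate:.2f}")
```

In this toy setup, group B receives favorable decisions far less often than group A, even though group membership never enters the decision rule.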

But rather than giving up on algorithms altogether, the authors suggested a greater understanding of the "costs and benefits" involved in adopting them. "In our paper, we argue that society has an ethical responsibility not to make vulnerable groups pay a disproportionate share of the costs resulting from our growing use of algorithms," they wrote.

This "balancing act," as they called it, necessitates a more holistic analysis that explores the impact of "four key decision points":

  • How the algorithm was designed;
  • What data was used to train the algorithm;
  • How the formula is applied to each person's data; and
  • How the result or output is used to make the decision.

Right now, the authors stated, too many algorithms are proprietary, and there is too little pressure on designers to share how their algorithms work. That sharing, they said, should include explaining design choices, publishing data on the consequences of those choices, and continually monitoring how the algorithms affect various groups of people, especially vulnerable groups. "We should not trust an algorithm unless it can be reviewed and audited in meaningful ways."
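As one illustration of what such a review might look at, the following sketch (our own, not the authors' method) computes two common disparity measures, the selection rate and the false-positive rate per group, from a system's decision log.

```python
from collections import defaultdict

def audit(records):
    """Report selection rate and false-positive rate for each group."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "fp": 0, "negatives": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["selected"] += r["decision"]
        if r["outcome"] == 0:          # person did not reoffend / default / etc.
            s["negatives"] += 1
            s["fp"] += r["decision"]   # flagged anyway: a false positive
    for group, s in sorted(stats.items()):
        sel = s["selected"] / s["n"]
        fpr = s["fp"] / s["negatives"] if s["negatives"] else float("nan")
        print(f"group {group}: selection rate {sel:.2f}, false-positive rate {fpr:.2f}")

# Toy decision log; a real audit would use the deployed system's full records.
audit([
    {"group": "A", "decision": 1, "outcome": 1},
    {"group": "A", "decision": 0, "outcome": 0},
    {"group": "B", "decision": 1, "outcome": 0},
    {"group": "B", "decision": 1, "outcome": 1},
])
```

Large gaps in either measure between groups are the kind of finding the authors argue should be disclosed and monitored over time.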

That level of sharing won't happen without pressure "from policymakers, consumers and the companies that purchase and use algorithmic decision-making tools," they stated.

The authors' opinion piece is openly available on The Hill, and the paper itself is openly available through the Berkman Klein Center.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
