Paper Offers Framework for Reducing Algorithm Bias

A recent paper on algorithm development offered guidance on how to reduce the bias inherent in AI algorithms, as well as the harm those biases can cause to underprivileged groups. "A Harm-Reduction Framework for Algorithmic Fairness" argued that artificial intelligence and machine learning are "increasingly" being applied to decision-making and "affect the lives of individuals in ways large and small." The report was produced by the Center for Research on Equitable and Open Scholarship at MIT Libraries and the Berkman Klein Center for Internet & Society at Harvard University, and published in IEEE Security & Privacy.

The issue of algorithm bias cropped up recently when freshman Congresswoman Alexandria Ocasio-Cortez (D-N.Y.) told a gathering during a Martin Luther King Jr. Day event in New York City that "Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions." During the public interview with writer Ta-Nehisi Coates, Ocasio-Cortez said, "[Algorithms are] just automated assumptions. And if you don't fix the bias, then you're automating the bias."

As two of the paper's authors explained in an opinion piece in The Hill, the standard approaches to reducing bias in algorithms "do little to address inequality." As an example, they pointed to Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a risk assessment application used by the criminal justice system to help make decisions on sentencing, bail, probation and parole. While race and ethnicity data are "intentionally excluded from COMPAS," discriminatory results still surface because the training data (such as income) used to build the model "are sourced from a criminal justice system plagued by racial disparities" and can act as proxies for race.
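
The proxy effect the authors describe can be illustrated with a small, hypothetical simulation; the group labels, income figures and threshold below are invented for demonstration and are not drawn from COMPAS or from the paper. Even though the scoring rule never reads the group label, it flags the lower-income group far more often because income carries group information.

```python
# Hypothetical proxy-variable sketch (invented data, not COMPAS).
import random

random.seed(0)

def make_person(group):
    # Assume, for illustration only, that historical inequities have left
    # group "B" with a lower average income than group "A".
    mean_income = 60_000 if group == "A" else 40_000
    return {"group": group, "income": random.gauss(mean_income, 10_000)}

people = [make_person("A") for _ in range(5000)] + \
         [make_person("B") for _ in range(5000)]

def flagged_high_risk(person):
    # A "group-blind" rule: it looks only at income, never at the group label.
    return person["income"] < 45_000

for group in ("A", "B"):
    members = [p for p in people if p["group"] == group]
    rate = sum(flagged_high_risk(p) for p in members) / len(members)
    print(f"Group {group}: flagged high-risk {rate:.1%} of the time")
```

This is the kind of disparity the authors argue should be weighed explicitly rather than assumed away once the protected attribute is removed.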

But rather than giving up on algorithms altogether, the authors suggested a greater understanding of the "costs and benefits" involved in adopting them. "In our paper, we argue that society has an ethical responsibility not to make vulnerable groups pay a disproportionate share of the costs resulting from our growing use of algorithms," they wrote.

This "balancing act," as they called it, necessitates a more holistic analysis that explores the impact of "four key decision points":

  • How the algorithm was designed;
  • What data was used to train the algorithm;
  • How the formula is applied to each person's data; and
  • How the result or output is used to make the decision.

Right now, the authors stated, too many algorithms are proprietary, and there is too little pressure on their designers to share how they work. That sharing, they said, should include explanations of design choices, data on the consequences of those choices and continual monitoring of how the algorithms affect various groups of people, especially vulnerable groups. "We should not trust an algorithm unless it can be reviewed and audited in meaningful ways."
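
The article does not spell out what a meaningful audit looks like, but one common starting point is to compare a model's decision rates and error rates across groups. The sketch below assumes access to logged decisions and observed outcomes; the field names and records are hypothetical placeholders.

```python
# Hypothetical audit sketch: per-group flag rates and false-positive rates.
from collections import defaultdict

# (group, model_decision, actual_outcome) -- placeholder records; a real
# audit would pull these from production logs over time.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 1), ("B", 1, 0), ("B", 0, 0),
]

stats = defaultdict(lambda: {"n": 0, "flagged": 0, "false_pos": 0, "negatives": 0})
for group, decision, outcome in records:
    s = stats[group]
    s["n"] += 1
    s["flagged"] += decision
    if outcome == 0:                 # person's actual outcome was negative
        s["negatives"] += 1
        s["false_pos"] += decision   # flagged despite a negative outcome

for group, s in sorted(stats.items()):
    flag_rate = s["flagged"] / s["n"]
    fpr = s["false_pos"] / s["negatives"] if s["negatives"] else float("nan")
    print(f"Group {group}: flag rate {flag_rate:.0%}, false-positive rate {fpr:.0%}")
```

Run repeatedly as data accumulates, a check like this is one way to address the continual-monitoring gap the authors describe, though it is no substitute for reviewing the design choices and training data themselves.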

That level of sharing won't happen without pressure "from policymakers, consumers and the companies that purchase and use algorithmic decision-making tools," they stated.

The article explaining the paper is openly available on The Hill. The paper itself is also openly available through the Berkman Klein Center.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
