U-M Research Center to Explore Ethics of AI

The need for ethics, standards and policies for the ever-increasing use of artificial intelligence and other emerging tech is the impetus behind a new research center at the University of Michigan. The Center for Ethics, Society and Computing (or ESC — "Escape" — for short) is "dedicated to intervening when digital media and computing technologies reproduce inequality, exclusion, corruption, deception, racism, or sexism," according to its mission statement.
 
"[AI] is a topic that used to be on the fringes but more recently has gotten broader attention as we have experienced many unintended consequences of technology," said center Associate Director Silvia Lindtner, assistant professor of information and art and design, in a statement. For instance, the increasing use of AI and data-based algorithms can lead to gender and racial stereotyping.

Beyond AI and data usage, the interdisciplinary center will also focus on issues of privacy, augmented and virtual reality, open data and identity. Current research projects include:

  • Embodisuit, wearable technology that allows the user to map Internet of Things signals onto different places on the body;
  • Big-DIG (Big Data, Innovation and Governance), working to "generate data-driven knowledge on innovation diffusion, impact and governance in the world-system";
  • The Ethics of Emotion Recognition, examining emotion recognition algorithms and emotional data;
  • Culturally Situated Design Tools, which help students learn STEM principles and dispel misconceptions about race and gender in STEM; and
  • Auditing Algorithms, research into making the consequences of algorithmic bias visible from the outside.

For more information, visit the ESC site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
