AI Ethics Advisory Board Offers Guidance on How to Develop and Deploy AI Responsibly
Northeastern University's Institute for Experiential AI is launching an artificial intelligence ethics advisory board that will provide hands-on, independent guidance to help organizations, institutions, government bodies and others develop and deploy AI responsibly. The board is part of a suite of Responsible AI services offered by the institute, including AI ethics training, analysis, strategy development and other consultative services.
The board comprises more than 40 researchers and practitioners from academic institutions, companies, and organizations around the world, including a core group of Northeastern University faculty members as well as representatives from institutions such as Carnegie Mellon, Harvard, MIT, Mayo Clinic, Kaiser Permanente, and Honeywell. It is co-chaired by Ricardo Baeza-Yates, director of research at the Institute for Experiential AI, and Cansu Canca, research associate professor and ethics lead at the institute.
When an organization submits a request for AI ethics guidance, the board assembles a small, multidisciplinary team of experts with relevant experience to address it. The board chairs may decline a request if it presents a conflict of interest or raises ethical concerns. Organizations pay a consulting fee to cover the experts' time.
"The use of AI-enabled tools … requires a deep understanding of the potential consequences," commented board member Tamiko Eto, manager of research compliance, technology risk, privacy, and IRB at Kaiser Permanente, in a statement. "Any implementation must be evaluated in the context of bias, privacy, fairness, diversity, and a variety of other factors, with input from multiple groups with context-specific expertise."
For more information, visit the Institute for Experiential AI site.
About the Author
Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].