AI Ethics Advisory Board Offers Guidance on How to Develop and Deploy AI Responsibly

Northeastern University's Institute for Experiential AI is launching an artificial intelligence ethics advisory board that will provide hands-on, independent guidance to help organizations, institutions, government bodies and others develop and deploy AI responsibly. The board is part of a suite of Responsible AI services offered by the institute, including AI ethics training, analysis, strategy development and other consultative services.

The board is composed of more than 40 researchers and practitioners from academic institutions, companies and organizations all over the world, including a core group of Northeastern University faculty members as well as representatives from institutions such as Carnegie Mellon, Harvard, MIT, Mayo Clinic, Kaiser Permanente, and Honeywell. It is co-chaired by Ricardo Baeza-Yates, director of research at the Institute for Experiential AI, and Cansu Canca, research associate professor and ethics lead at the institute.

When an organization submits a request for AI ethics guidance, the board deploys a small, multidisciplinary team of experts with relevant experience to address the request. The board chairs may decline a request if they identify a conflict of interest or have ethical concerns. Organizations pay a consulting fee to cover the experts' time.

"The use of AI-enabled tools … requires a deep understanding of the potential consequences," commented board member Tamiko Eto, manager of research compliance, technology risk, privacy, and IRB at Kaiser Permanente, in a statement. "Any implementation must be evaluated in the context of bias, privacy, fairness, diversity, and a variety of other factors, with input from multiple groups with context-specific expertise."

For more information, visit the Institute for Experiential AI site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
