WHO Paper Raises Concerns about Multimodal Gen AI Models

Unless developers and governments adjust their practices around generative AI, large multimodal models may be adopted faster than they can be made safe for use, warns a new paper by the World Health Organization (WHO). The publication, "Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models," offers updated guidance for the ethical use of AI to account for large multimodal models. While the paper is aimed at operators and regulators in healthcare, its findings apply to any industry affected by the increasingly widespread use of generative AI technologies.

Large multimodal models, or LMMs, are generative AI models that can ingest and generate information in multiple formats, including text, images, video, and audio. LMMs can also produce outputs in a different format from the one they were given.

This versatility makes them useful for a wider range of tasks than so-called "unimodal" AI models. They are also better able to contextualize the data they are given, and in turn generate more nuanced outputs. As such, the WHO notes, LMMs "have been adopted faster than any consumer application in history."

However, the agency warns that LMMs present a whole new can of worms compared with other AI models that consumers may be more familiar with. The way that "LMMs are accessed and used is new" compared to other types of AI, according to the paper, "with both novel benefits and risks that societies, health systems and end-users may not yet be prepared to address fully."

The risks of widespread LMM use noted in the paper include:

  • The general industrywide lack of transparency around how LMM data is collected, processed and managed can make them — and the organizations that use them — noncompliant with data privacy and consumer protection regulations.
  • That same lack of transparency can impede efforts to curb systemic bias.
  • LMMs can give disproportionate power and influence to the select few companies that have enough compute, data, financial, and talent resources to create them.
  • LMMs consume considerable amounts of energy and water, which can strain community resources and worsen the climate crisis.
  • "[B]y providing plausible responses that are increasingly considered a source of knowledge, LMMs may eventually undermine human epistemic authority, including in the domains of healthcare, science, and medicine."

Existing guidelines around the use of AI need to be updated to consider the risks presented by LMMs, says the WHO. "Regulations and laws written to govern the use of AI may not be fit to address either the challenges or opportunities associated with LMMs."

Overall, however, the WHO says the original six ethical AI principles it laid out in 2021 still apply to LMMs. Those are: Protect autonomy; promote human well-being, human safety, and the public interest; ensure transparency, "explainability," and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote AI that is responsive and sustainable.

"[T]he underlying ethical challenges identified in the guidance and the core ethical principles and recommendations ... remain relevant both for assessing and for effectively and safely using LMMs," the agency said, "even as additional gaps in governance and challenges have and will continue to arise with respect to this new technology."

The full paper is available for download on the WHO site.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
