WHO Paper Raises Concerns about Multimodal Gen AI Models

Unless developers and governments adjust their practices around generative AI, large multimodal models may be adopted faster than they can be made safe for use, warns a new paper by the World Health Organization (WHO). The publication, "Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models," offers updated guidance for the ethical use of AI to account for large multimodal models. While the paper is aimed at operators and regulators in healthcare, its findings apply to any industry affected by the increasingly widespread use of generative AI technologies.

Large multimodal models, or LMMs, are generative AI models that can ingest and generate information in multiple formats, including text, images, video, and audio. LMMs can also generate outputs in a format different from the one they were fed.

This versatility makes them useful for a wider range of tasks than so-called "unimodal" AI models. They are also better able to contextualize data that they are fed, and in turn generate outputs that are more nuanced. As such, the WHO notes, LMMs "have been adopted faster than any consumer application in history."  

However, the agency warns that LMMs are a whole new can of worms compared with the AI models consumers may be more familiar with. The way that "LMMs are accessed and used is new" compared to other types of AI, according to the paper, "with both novel benefits and risks that societies, health systems and end-users may not yet be prepared to address fully."

The risks of widespread LMM use noted in the paper include:

  • The general industrywide lack of transparency around how LMM data is collected, processed, and managed can put LMMs, and the organizations that use them, out of compliance with data privacy and consumer protection regulations.
  • That same lack of transparency can impede efforts to curb systemic bias.
  • LMMs can give disproportionate power and influence to the select few companies that have enough compute, data, financial, and talent resources to create them.
  • LMMs consume considerable amounts of energy and water, which can strain the communities that host them and worsen the climate change crisis.
  • "[B]y providing plausible responses that are increasingly considered a source of knowledge, LMMs may eventually undermine human epistemic authority, including in the domains of healthcare, science, and medicine."

Existing guidelines around the use of AI need to be updated to consider the risks presented by LMMs, says the WHO. "Regulations and laws written to govern the use of AI may not be fit to address either the challenges or opportunities associated with LMMs."

Overall, however, the WHO says the original six ethical AI principles it laid out in 2021 still apply to LMMs. Those are: Protect autonomy; promote human well-being, human safety, and the public interest; ensure transparency, "explainability," and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote AI that is responsive and sustainable.

"[T]he underlying ethical challenges identified in the guidance and the core ethical principles and recommendations ... remain relevant both for assessing and for effectively and safely using LMMs," the agency said, "even as additional gaps in governance and challenges have and will continue to arise with respect to this new technology."

The full paper is available for download on the WHO site.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
