WHO Paper Raises Concerns about Multimodal Gen AI Models

Unless developers and governments adjust their practices around generative AI, large multimodal models may be adopted faster than they can be made safe for use, warns a new paper by the World Health Organization (WHO). The publication, "Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models," offers updated guidance for the ethical use of AI to account for large multimodal models. While the paper is aimed at operators and regulators in healthcare, its findings apply to any industry affected by the increasingly widespread use of generative AI technologies.

Large multimodal models, or LMMs, are generative AI models that can ingest and generate information in multiple formats, including text, images, video, and audio. LMMs can also generate outputs in a format different from the one they were fed.

This versatility makes them useful for a wider range of tasks than so-called "unimodal" AI models. They are also better able to contextualize data that they are fed, and in turn generate outputs that are more nuanced. As such, the WHO notes, LMMs "have been adopted faster than any consumer application in history."  

However, the agency warns that LMMs present a whole new can of worms compared to the AI models that consumers may be more familiar with. The way that "LMMs are accessed and used is new" compared to other types of AI, according to the paper, "with both novel benefits and risks that societies, health systems and end-users may not yet be prepared to address fully."

The risks of widespread LMM use noted in the paper include:

  • The general industrywide lack of transparency around how LMM data is collected, processed and managed can make them — and the organizations that use them — noncompliant with data privacy and consumer protection regulations.
  • That same lack of transparency can impede efforts to curb systemic bias.
  • LMMs can give disproportionate power and influence to the select few companies that have enough compute, data, financial, and talent resources to create them.
  • LMMs consume considerable amounts of carbon-intensive energy and water, which can strain communities and worsen the climate change crisis.
  • "[B]y providing plausible responses that are increasingly considered a source of knowledge, LMMs may eventually undermine human epistemic authority, including in the domains of healthcare, science, and medicine."

Existing guidelines around the use of AI need to be updated to consider the risks presented by LMMs, says the WHO. "Regulations and laws written to govern the use of AI may not be fit to address either the challenges or opportunities associated with LMMs."

Overall, however, the WHO says the original six ethical AI principles it laid out in 2021 still apply to LMMs. Those are: Protect autonomy; promote human well-being, human safety, and the public interest; ensure transparency, "explainability," and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote AI that is responsive and sustainable.

"[T]he underlying ethical challenges identified in the guidance and the core ethical principles and recommendations ... remain relevant both for assessing and for effectively and safely using LMMs," the agency said, "even as additional gaps in governance and challenges have and will continue to arise with respect to this new technology."

The full paper is available for download on the WHO site.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
