News

Multimodal Models Another Can of Worms for Healthcare: WHO

Large multimodal models present "novel" risks to the healthcare landscape compared to other types of AI, according to the World Health Organization.

The agency this month updated its original guidance for the ethical use of AI in healthcare to account for such models, which it says "have been adopted faster than any consumer application in history."

Large multimodal models, or LMMs, are generative AI models that can ingest and generate information in multiple formats, including text, images, video and audio. In healthcare, for example, LMMs can be applied to research papers, genomic data or patient X-rays. They can also generate outputs in a different format from the one they were fed.

This versatility makes them useful for a wider range of tasks than so-called "unimodal" AI models. They are also better able to contextualize the data they are fed and, in turn, generate more nuanced outputs.

The WHO, which originally issued its guidance for AI back in 2021, said in its update that the way "LMMs are accessed and used is new [compared to other types of AI], with both novel benefits and risks that societies, health systems and end-users may not yet be prepared to address fully."

The risks of widespread LMM use are detailed in the organization's paper. They include:

  • The general industrywide lack of transparency around how LMM data is collected, processed and managed can make the models -- and the organizations that use them -- noncompliant with data privacy and consumer protection regulations.
  • That same lack of transparency can impede efforts to curb systemic bias.
  • LMMs can give disproportionate power and influence to the select few companies that have enough compute, data, financial and talent resources to create them.
  • LMMs consume considerable amounts of energy and water, which can strain communities and worsen the climate change crisis.
  • "[B]y providing plausible responses that are increasingly considered a source of knowledge, LMMs may eventually undermine human epistemic authority, including in the domains of health care, science and medicine."

Unless developers and governments take action, according to the WHO, LMMs may be adopted faster than they can be made safe.

"Regulations and laws written to govern the use of AI may not be fit to address either the challenges or opportunities associated with LMMs," the organization wrote in the paper.

Overall, however, the WHO says the six ethical AI principles it laid out in its original 2021 guidance still apply to LMMs. Those are:

  • Protect autonomy.
  • Promote human well-being, human safety and the public interest.
  • Ensure transparency, "explainability" and intelligibility.
  • Foster responsibility and accountability.
  • Ensure inclusiveness and equity.
  • Promote AI that is responsive and sustainable.

"[T]he underlying ethical challenges identified in the guidance and the core ethical principles and recommendations...remain relevant both for assessing and for effectively and safely using LMMs," the agency said, "even as additional gaps in governance and challenges have and will continue to arise with respect to this new technology."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
