Google Launches 'MedLM' AI Foundation Models for Healthcare

Building on its earlier work in developing AI models for healthcare industry workloads, Google has made its new MedLM library of foundation models generally available.

"MedLM is now available to Google Cloud customers in the United States through an allowlisted general availability in the Vertex AI platform," Google announced earlier this month. Vertex AI is Google's AI platform for developers, providing them access to over 130 models, including Google's recently debuted Gemini. "MedLM is also currently available in preview in certain other markets worldwide." 

MedLM has two models at launch: MedLM-large for complex workloads, and the smaller but more scalable MedLM-medium. According to Google's MedLM info page, MedLM-medium "provides customers with better throughputs and includes more recent data."
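Because MedLM is served through Vertex AI, developers would reach it the way they reach Google's other foundation models: an authenticated predict call against a published model ID. As a rough sketch only (the `medlm-medium` model ID, endpoint path, and parameter names below are assumptions modeled on Vertex AI's text-model conventions, not details confirmed by this article), a request might be assembled like this:

```python
# Sketch: assembling a Vertex AI text-generation request for a MedLM model.
# The model ID ("medlm-medium"), endpoint path, and parameter names are
# assumptions based on Vertex AI's PaLM-style text API, not confirmed here.

def build_medlm_request(project_id: str, prompt: str,
                        model_id: str = "medlm-medium",
                        location: str = "us-central1") -> dict:
    """Assemble the endpoint URL and JSON body for a predict call."""
    endpoint = (
        f"https://{location}-aiplatform.googleapis.com/v1/projects/"
        f"{project_id}/locations/{location}/publishers/google/"
        f"models/{model_id}:predict"
    )
    body = {
        "instances": [{"content": prompt}],
        "parameters": {
            "temperature": 0.2,      # low temperature for more deterministic output
            "maxOutputTokens": 512,  # cap on generated length
        },
    }
    return {"endpoint": endpoint, "body": body}

request = build_medlm_request("my-project", "Summarize these visit notes: ...")
```

A real call would attach OAuth credentials and POST the `body` to the `endpoint`; per Google's guidance, a qualified clinician should still review every output.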

Both models are an extension of Google's work on its Med-PaLM healthcare LLM -- specifically, Med-PaLM 2, which Google claims is "the first [AI model] to reach human expert level on answering [U.S. Medical Licensing Examination]-style questions."

Google plans to add more models to MedLM in the coming months, including some based on Gemini.

In its current iteration, MedLM can help clinicians perform certain administrative tasks, like summarizing doctors' notes to create after-visit documents. It has its limits, however, and Google advocates for a "human in the middle" approach to ensure the quality and accuracy of MedLM's outputs.

Moreover, per the info page, "[C]ustomers may not use MedLM for clinical purposes, to provide medical advice, or in any manner that is overseen by or requires clearance or approval from a medical device regulatory agency." For instance, the following scenarios are examples of prohibited MedLM use:

  • Analysis of patient records, prescription patterns, geographical data, and so forth to identify patients with a possible diagnosis of opioid addiction. 
  • Analysis of patient-specific medical information to detect a life-threatening condition, such as stroke or sepsis, and generate an alarm or an alert to notify an HCP. 
  • Analysis of patient-specific medical information found in the medical records, including the most recent mammography report findings, to provide a list of follow-up actions or treatment options. 
  • Providing a prioritized list of FDA-authorized depression treatment options to an HCP, based on an analysis of reported outcomes in a database of clinical studies using medical information (for example, diagnosis and demographics) from the patient's medical record.

Patient use of MedLM is also prohibited. Google emphasized that MedLM is strictly meant to be an "assistive tool" for qualified healthcare personnel, and not a medical tool in and of itself.

"LLMs and Generative AI are inherently probabilistic and may not always be accurate," Google said. "Without adequate consideration or controls by customers, use of Generative AI models in healthcare may constitute a hazard to patients due to inaccurate content, missing content, or misleading, biased content." 

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
