News

How To Fix Medical AI Bias: Check Early and Check Often


Artificial intelligence researchers have recommended a five-step workflow for mitigating the causes and effects of bias in machine learning and AI.

Researchers on the Good Machine Learning Practices Team of the AFDO/RAPS Healthcare Products Collaborative, an industry body committed to driving collaboration and innovation in health care products, warn in a recent paper that AI bias left unchecked early in product development will only snowball into more serious problems in later product versions, problems that can harm individual patients and larger populations.

"AI-based systems trained on unintentionally biased data will create models that replicate and potentially magnify those biases creating a model that does not accurately reflect the condition being treated and the population being served," they write. "This potentially introduces discrimination in the effectiveness of treating the entire patient population and social inequity."

They recommend developers use an "early and often" approach to checking and correcting bias in their AI and ML systems. 

"The best application of identifying and addressing unintended bias is to begin as early in the product life cycle as possible and to iterate the workflow periodically throughout the life cycle," they write. "This ensures the most current use, requirement, design, and implementation information are considered in effectively addressing unintended bias in the system."

The workflow they recommend comprises five steps:

  • Initial Bias Analysis: Identify intended use of the system; identify potential and applicable biases; estimate impact of bias on intended use.
  • Evaluate Bias: Determine acceptability of each bias.
  • Mitigate Bias: Determine mitigations; implement/verify mitigations; determine residual impact of bias on intended use; determine potential bias arising from mitigations; determine completeness of mitigations.
  • Evaluate Overall Bias: Determine overall acceptability of bias in the system; articulate benefit vs. bias.
  • Post-Market Review: Evaluate predicted vs. actual bias using post-market data.
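
To make the first two steps concrete, here is a minimal sketch (not taken from the paper) of the kind of subgroup performance check that could support an initial bias analysis and an acceptability evaluation: it compares a model's recall across patient subgroups and flags any group that trails the overall figure by more than a chosen threshold. The function name, subgroup labels, and threshold are illustrative assumptions, not the authors' method.

```python
# A hypothetical subgroup check: compare a model's recall across patient
# subgroups and flag any group whose recall trails the overall recall by
# more than an acceptability threshold. All names and data are illustrative.

import numpy as np
from sklearn.metrics import recall_score

def subgroup_recall_gaps(y_true, y_pred, groups, threshold=0.10):
    """Return overall recall plus per-subgroup recall, flagging groups
    that fall more than `threshold` (a hypothetical bound) below overall."""
    overall = recall_score(y_true, y_pred)
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        g_recall = recall_score(y_true[mask], y_pred[mask])
        report[g] = {
            "recall": g_recall,
            "flagged": (overall - g_recall) > threshold,
        }
    return overall, report

# Hypothetical usage with synthetic labels and subgroup tags:
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = rng.choice(["group_a", "group_b"], 1000)

overall, report = subgroup_recall_gaps(y_true, y_pred, groups)
print(f"overall recall: {overall:.2f}")
for g, stats in report.items():
    print(f"{g}: recall={stats['recall']:.2f} flagged={stats['flagged']}")
```

A check like this would be rerun, per the authors' "early and often" guidance, at each iteration of the workflow rather than once before release.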

It's clear that biased AI and ML systems can put patient populations at risk and create deeper social inequities. However, the paper's authors note that bias can also fuel doubt about AI and ML technologies themselves, slowing their adoption and preventing health care providers from taking advantage of their potential benefits.

"Unless bias is circumvented," they conclude, "resulting reports of inequity may lower trust in the output of an AI-enabled medical device and create a barrier to adoption of ML technology in healthcare."

Read the paper here for the researchers' in-depth analysis of the causes of biased AI and ML systems, and how developers can correct them.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
