
As the role of artificial intelligence grows in medicine, one of the leading concerns is that algorithmic tools will perpetuate disparities in care. Because AIs are trained on health records reflecting current standards of care, they could end up parroting bias baked into the medical system if not carefully designed. And if algorithms aren't trained and tested on data from diverse populations, they could be less effective when used to guide care for underrepresented subsets of patients.

So some AI development groups are tackling that problem head-on, training and testing their algorithms on diverse patient data to ensure they apply to a wide range of patients — long before the tools are deployed in the wild.
