
As medicine continues to test automated machine learning tools, many hope that low-cost decision-support software will help narrow care gaps in countries with constrained resources. But new research suggests those same countries are the least represented in the data used to design and test most clinical AI, which could make those gaps even wider.
Researchers have repeatedly shown that AI tools can fail to perform as expected when deployed in real-world hospitals. It's the problem of transferability: An algorithm trained on one patient population, with a particular set of characteristics, won't necessarily work well on a different one. Those failures have motivated a growing call for clinical AI to be both trained and validated on diverse patient data, with representation across spectrums of sex, age, race, ethnicity, and more.
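The failure mode is easy to reproduce in miniature. Below is a purely illustrative Python sketch, not drawn from any study cited here: the cohorts, features, and model are all synthetic and hypothetical. A classifier trained at one "hospital" learns to lean on a site-specific proxy feature that carries no signal at a second site, and its discrimination drops when it moves.

```python
# Toy demonstration of the transferability problem: a model trained on one
# patient population degrades on another. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, proxy_is_informative):
    """Generate a toy cohort: a weak lab value plus a site-specific proxy."""
    severity = rng.normal(size=n)
    y = (severity + 0.5 * rng.normal(size=n) > 0).astype(int)
    lab_value = severity + 1.5 * rng.normal(size=n)      # weak but real signal
    if proxy_is_informative:
        proxy = severity + 0.2 * rng.normal(size=n)      # strong local shortcut
    else:
        proxy = rng.normal(size=n)                       # meaningless elsewhere
    return np.column_stack([lab_value, proxy]), y

X_a, y_a = make_cohort(5000, proxy_is_informative=True)   # training population
X_b, y_b = make_cohort(5000, proxy_is_informative=False)  # deployment population

model = LogisticRegression().fit(X_a, y_a)
print(f"AUC at training hospital:   {roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]):.2f}")
print(f"AUC at deployment hospital: {roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]):.2f}")
```

Running the sketch shows near-perfect discrimination on the training cohort and a marked drop on the second one, which is why validating on data from the deployment population matters.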
But the patterns of global research investment mean that even if individual scientists make an effort to represent a range of patients, the field as a whole skews heavily toward just a few nationalities. In a review of more than 7,000 clinical AI papers, all published in 2019, researchers found that more than half of the databases used in the work came from the U.S. and China, and that high-income countries accounted for the majority of the remaining patient datasets.