Health insurers are leaning more heavily on machine learning to predict which patients will miss out on care because they can’t get a ride to an appointment or the reliable internet needed to sign onto a video visit. But with those new technologies come concerns that a lack of standards or checks on their use could propagate biases in health care, leaving some of the neediest patients behind.
Technology, payers say, helps them flag at-risk members faster and rapidly connect those in need with community services, staving off costlier long-term complications. Predictive models, which vary widely by payer, often draw on claims, public records, census data, and other sources to flag the members most likely to lack transportation, nutritious food, stable internet, and other factors known as the “social determinants of health.” Taken in aggregate, these predictions could also give payers a window into community-level needs.
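As an illustration of the kind of model payers describe, here is a minimal sketch on synthetic data; every feature name, coefficient, and the flagging threshold are hypothetical, not any insurer’s actual system. A classifier scores members’ transportation risk from claims- and census-derived features, then flags the highest scorers for outreach:

```python
# Hypothetical sketch of a social-needs risk model. All features, labels,
# and thresholds here are invented for illustration; real payer models
# vary widely in their inputs and design.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins for claims, public-record, and census-derived features.
members = pd.DataFrame({
    "missed_appointments_12mo": rng.poisson(1.5, n),     # from claims
    "distance_to_clinic_miles": rng.gamma(2.0, 5.0, n),  # from public records
    "tract_pct_no_vehicle": rng.uniform(0, 0.4, n),      # from census data
})

# Synthetic label: whether the member reports a transportation barrier.
logit = (0.4 * members["missed_appointments_12mo"]
         + 0.05 * members["distance_to_clinic_miles"]
         + 3.0 * members["tract_pct_no_vehicle"] - 2.5)
members["lacks_transport"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    members.drop(columns="lacks_transport"), members["lacks_transport"],
    test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Flag the highest-risk members for outreach, e.g. a ride-service referral.
risk = model.predict_proba(X_test)[:, 1]
flagged = X_test[risk > 0.5]
print(f"Flagged {len(flagged)} of {len(X_test)} members for outreach")
```

In practice, the flagged list would feed an outreach workflow such as a ride-service or food-assistance referral, and both the cutoff and the feature set would differ from payer to payer.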
But the deployment of these tools also carries risks that some experts say haven’t been fully examined. There’s little standardization in the types of data they pull or in how the assessments are used, said MIT machine learning researcher Irene Chen, who co-authored an analysis published Monday in Health Affairs examining the potential for bias as payers use artificial intelligence.
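One form the bias checks Chen’s analysis calls for might take, in a hedged sketch, is comparing how often a model misses members who truly face a barrier across demographic groups. The group labels and rates below are synthetic stand-ins, not findings from the Health Affairs analysis:

```python
# Hypothetical audit: per-group false negative rates for a risk model.
# Groups, prevalence, and detection rates are all synthetic illustrations.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000

audit = pd.DataFrame({
    "group": rng.choice(["A", "B"], n),          # stand-in demographic label
    "true_barrier": rng.uniform(size=n) < 0.3,   # synthetic ground truth
})

# Simulate a model that under-detects barriers for group B.
hit_rate = np.where(audit["group"] == "A", 0.8, 0.6)
audit["flagged"] = audit["true_barrier"] & (rng.uniform(size=n) < hit_rate)

# Among members who truly face a barrier, how many does the model miss?
for group, rows in audit[audit["true_barrier"]].groupby("group"):
    fnr = 1 - rows["flagged"].mean()
    print(f"group {group}: false negative rate = {fnr:.2f}")
```

A gap in false negative rates like the one simulated here would mean one group’s needs go disproportionately undetected, which is the kind of disparity standardized checks could surface before a model is deployed.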