The algorithms carry out an array of crucial tasks: helping emergency rooms nationwide triage patients, predicting who will develop diabetes, and flagging patients who need more help to manage their medical conditions.

But instead of making health care delivery more objective and precise, a new report finds, these algorithms — some in use for many years — are often making it more biased along racial and economic lines.


Researchers at the University of Chicago found that pervasive algorithmic bias is infecting countless daily decisions about how patients are treated by hospitals, insurers, and other businesses. Their report points to a gaping hole in oversight that allows deeply flawed products to seep into care with little or no vetting, in some cases perpetuating inequitable treatment for more than a decade before being discovered.
