
Although artificial intelligence is entering health care with great promise, clinical AI tools are prone to bias and real-world underperformance at every stage from inception to deployment, including dataset acquisition, labeling and annotation, algorithm training, and validation. These biases can reinforce existing disparities in diagnosis and treatment.

To explore how well bias is being identified in the FDA review process, we looked at virtually every health care AI product approved between 1997 and October 2022. Our audit of data submitted to the FDA to clear clinical AI products for the market reveals major flaws in how this technology is being regulated.


Our analysis

Between 1997 and October 2022, the FDA approved 521 AI products: 500 through the 510(k) pathway, meaning the new algorithm is deemed substantially equivalent to a technology already on the market; 18 through the de novo pathway, meaning the algorithm has no existing counterpart but comes packaged with special controls that make it safe; and three through premarket approval. Because the FDA publishes summaries only for the first two pathways, we analyzed the rigor of the submission data underlying 518 approvals to understand how well the submissions accounted for the ways bias can enter the equation.
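
For readers who want to see how such a tally works, here is a minimal sketch of the grouping step, assuming the FDA's public list of AI-enabled devices has been exported to a CSV. The file name fda_ai_devices.csv and the column names pathway and decision_date are hypothetical placeholders, not the actual schema of any FDA file.

    # Minimal sketch: count FDA AI product decisions by regulatory pathway.
    # Assumes a CSV export with hypothetical columns "pathway" and "decision_date".
    import pandas as pd

    devices = pd.read_csv("fda_ai_devices.csv", parse_dates=["decision_date"])

    # Restrict to the audited window (1997 through October 2022).
    window = devices[devices["decision_date"] <= "2022-10-31"]

    # Tally decisions per pathway: 510(k), De Novo, or PMA.
    print(window["pathway"].value_counts())

    # Only 510(k) and de novo decisions come with public summaries,
    # so the rigor analysis covers just those two groups (518 approvals).
    with_summaries = window[window["pathway"].isin(["510(k)", "De Novo"])]
    print(len(with_summaries))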
