
Artificial intelligence is the fastest-growing frontier in medicine, but it is also among the most lawless. U.S. regulators have approved more than 160 medical AI products in recent years based on widely divergent amounts of clinical data and without requiring manufacturers to publicly document testing on patients of different genders, races, and geographies, a STAT investigation has found.
The result is a dearth of information on whether AI products will improve care or trigger unintended consequences, such as an increase in incorrect diagnoses, unnecessary treatment, or an exacerbation of racial disparities.
STAT examined data reported in hundreds of pages of documents filed with the FDA over the last six years by companies that ultimately gained approval of products that rely on AI. These companies won approval to use AI for everything from detecting heart arrhythmias with smartwatches to analyzing data from imaging devices to flag conditions such as stroke, cancer, and respiratory illnesses.
In some ways, the applications are so different from one another that it makes little sense to group them together simply because they rely on AI.
Still, the FDA should at the very least require disclosure of the amount of validation data behind each approval, even if it does not want to raise the bar for fast-moving new entrants. Most of these products appear to be aimed at providers (since most relate to imaging in some way), who can exercise some judgment about when to use them.