
There are a lot of ways that artificial intelligence can go awry in health and medicine. Epic Systems’ flawed sepsis prediction algorithm triggered false alarms while frequently failing to identify the condition in advance. Unregulated Medicare Advantage algorithms, used to determine how many days of rehabilitation will be covered by insurance, routinely deny patients the care they need.
A new article, published Thursday in the journal Science by a team of researchers, argues that these kinds of problems can be averted only if AI research uses more detailed performance metrics to root out bias and improve accuracy.
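The overhaul the authors describe centers on reporting performance at a finer grain than a single aggregate score. As a rough illustration of that idea, not the paper’s own method, here is a minimal Python sketch comparing an overall recall figure with per-subgroup metrics; the dataset, subgroup labels, and model outputs are all hypothetical.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical evaluation set: true labels, a subgroup tag per patient,
# and predictions from a model that is noisier for group_c.
y_true = rng.integers(0, 2, size=n)
subgroup = rng.choice(["group_a", "group_b", "group_c"], size=n)
hit_rate = np.where(subgroup == "group_c", 0.70, 0.90)  # per-group accuracy
y_pred = np.where(rng.random(n) < hit_rate, y_true, 1 - y_true)

# A single aggregate number hides the disparity ...
print(f"overall recall: {recall_score(y_true, y_pred):.2f}")

# ... while disaggregated reporting surfaces where the model fails.
for g in np.unique(subgroup):
    m = subgroup == g
    print(f"{g}: recall={recall_score(y_true[m], y_pred[m]):.2f} "
          f"precision={precision_score(y_true[m], y_pred[m]):.2f} n={m.sum()}")
```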
STAT spoke about the need for such an overhaul with the paper’s lead author, Ryan Burnell, a postdoc at the Leverhulme Centre for the Future of Intelligence at Cambridge University. Burnell admitted that it may be hard to sell companies on disclosing detailed metrics about their AI’s performance, but said that journals, funding agencies and foundations, and academia can do a lot to create good norms around sharing code, evaluation results, and the datasets used to train and test AI systems.