The Food and Drug Administration announced Tuesday that it is developing a framework for regulating artificial intelligence products used in medicine that continually adapt based on new data.

The agency’s outgoing commissioner, Scott Gottlieb, released a white paper that sets forth the broad outlines of the FDA’s proposed approach to establishing greater oversight over this rapidly evolving segment of AI products.

It is the most forceful step the FDA has taken to assert the need to regulate a category of artificial intelligence systems whose performance constantly changes based on exposure to new patients and data in clinical settings. These machine-learning systems present a particularly thorny problem for the FDA, because the agency is essentially trying to hit a moving target in regulating them.


The white paper describes criteria the agency proposes to use to determine when medical products that rely on artificial intelligence will require FDA review before being commercialized.

The review may examine the underlying performance of a product’s algorithms, a manufacturer’s plan to make modifications, and the manufacturer’s ability to manage the risks associated with any modifications.

“A new approach to these technologies would address the need for the algorithms to learn and adapt when used in the real world,” Gottlieb wrote in a statement accompanying the white paper. “It would be a more tailored fit than our existing regulatory paradigm for software as a medical device.”

The paper is the first step in a monthslong process in which the FDA will collect input from the public and a variety of stakeholders in medicine before finalizing a policy on regulating adaptive AI systems.

Eric Topol, an expert in artificial intelligence at the Scripps Research Institute, said the white paper “demonstrates careful forethought about the field” of artificial intelligence in medicine. He noted that the “document calls for proof in a clinical, real world environment,” which ideally should be prospective, with “no AI algorithm approved only on the basis of retrospective [computerized] dataset analysis.”

He added that the eventual regulatory framework should support the ability of adaptive AI systems to learn and improve over time. “It is important to come up with a means of not shortchanging the auto-didactic power of deep learning nets that will continue to improve, not ‘freeze’ at the time of approval,” Topol wrote in an email to STAT.

The FDA has already approved medical devices that rely on so-called “locked algorithms” — algorithms that do not change each time they are used, but are instead updated by the manufacturer at intervals, using specific training data and a validation process to ensure the system functions properly. Among the devices approved last year were one used to detect diabetic retinopathy, a degenerative eye disease, and another designed to alert providers to a potential stroke in patients.

The proper performance of those locked algorithms, and others like them, is crucial to ensuring that doctors base life-and-death treatment decisions on accurate information. That task is harder for products that learn and evolve on their own, in ways that are difficult even for the manufacturers of such systems to understand. An example of such a system, cited by Gottlieb, is one that uses algorithms to identify breast cancer lesions on mammograms and learns to improve its confidence, or identify subgroups of cancer, based on its exposure to additional real-world data. Such systems are already in development in oncology and other areas of care.

Gottlieb noted that this type of technology also offers huge potential for improving medical care, and he said the agency is seeking to strike a regulatory balance that will allow promising products to get onto the market as soon as possible.

“Artificial intelligence has helped transform industries like finance and manufacturing, and I’m confident that these technologies will have a profound and positive impact on health care,” he wrote in his statement. “I can envision a world where, one day, artificial intelligence can help detect and treat challenging health problems, for example by recognizing the signs of disease well in advance of what we can do today.”


  • While there are risks to rapidly evolving digital technology, they likely will be outweighed by immense benefits to population health and personalized medical care. The public policy challenge is how to encourage innovation without stifling it, while striking an acceptable balance.

    Autonomous learning and self-correction allow for improvements at a much faster pace than could ever be done by human engineering and FDA approval processes. With that said, I think we need ways to know the reasoning behind those changes so we humans learn from the machines.

  • “So our paper also takes on one of the key barriers for AI in clinical practice: the “black box” problem. For most AI systems, it’s very hard to understand exactly why they make a recommendation. That’s a huge issue for clinicians and patients who need to understand the system’s reasoning, not just its output – the why as well as the what.
    Our system takes a novel approach to this problem, combining two different neural networks …”
    https://deepmind.com/blog/moorfields-major-milestone/
