The Food and Drug Administration announced Tuesday that it is developing a framework for regulating artificial intelligence products used in medicine that continually adapt based on new data.
The agency’s outgoing commissioner, Scott Gottlieb, released a white paper that sets forth the broad outlines of the FDA’s proposed approach to establishing greater oversight over this rapidly evolving segment of AI products.
Good article. Because AI systems change continually as they are exposed to new data and outcomes, a locked strategy offers the fastest route to market. But the difficulty with a locked strategy is that many AI software companies validate their algorithms on a specific vendor's data and obtain approval on that basis, so they cannot claim the same algorithm works universally on other vendors' data. Take fundus cameras for diabetic retinopathy (DR) screening: different cameras produce different resolutions and different color patterns. It is important that AI companies check and thoroughly validate against all vendors' data before committing to a locked strategy.
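The vendor-generalization concern above can be made concrete with a simple check: before "locking" an algorithm, report its performance on each camera vendor's held-out data separately rather than as one pooled figure. This is a minimal sketch with hypothetical vendor names and toy labels, not any company's actual validation protocol:

```python
def sensitivity(labels, preds):
    """True-positive rate: the fraction of positive cases the model flags."""
    flagged = [p for label, p in zip(labels, preds) if label == 1]
    return sum(flagged) / len(flagged)

# Toy held-out results keyed by (hypothetical) fundus-camera vendor:
# (ground-truth DR labels, model predictions).
results = {
    "VendorA": ([1, 1, 1, 0, 1], [1, 1, 1, 0, 1]),  # vendor used in training
    "VendorB": ([1, 1, 1, 0, 1], [1, 0, 1, 0, 0]),  # unseen vendor
}

per_vendor = {v: sensitivity(labels, preds)
              for v, (labels, preds) in results.items()}
# A large gap between vendors signals the model should not be locked
# and marketed as vendor-agnostic.
print(per_vendor)
```

In this toy run the model catches every positive on VendorA's images but only half on VendorB's, exactly the kind of gap the comment warns a single pooled approval figure would hide.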
While there are risks to rapidly evolving digital technology, they likely will be outweighed by immense benefits to population health and personalized medical care. The public policy challenge is to encourage innovation without stifling it, and to strike an acceptable balance between risk and benefit.
Autonomous learning and self-correction allow for improvements at a much faster pace than could ever be done by human engineering and FDA approval processes. With that said, I think we need ways to know the reasoning behind those changes so we humans learn from the machines.
“So our paper also takes on one of the key barriers for AI in clinical practice: the ‘black box’ problem. For most AI systems, it’s very hard to understand exactly why they make a recommendation. That’s a huge issue for clinicians and patients who need to understand the system’s reasoning, not just its output – the why as well as the what.
Our system takes a novel approach to this problem, combining two different neural networks …”
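The two-network design alluded to in the quote can be illustrated with a toy two-stage pipeline: a first stage produces a human-inspectable intermediate representation (e.g., a per-pixel tissue map), and a second stage makes its recommendation from that map rather than from raw pixels. This is a minimal stand-in using thresholds in place of trained networks, under assumed function names and a made-up referral rule, purely to show where the interpretable intermediate sits:

```python
import numpy as np

def segment_tissue(scan: np.ndarray) -> np.ndarray:
    """Stage 1 (stand-in for a segmentation network): map raw pixel
    intensities to per-pixel tissue classes 0, 1, or 2 by thresholding."""
    return np.digitize(scan, bins=[0.3, 0.7])

def classify_from_map(tissue_map: np.ndarray) -> dict:
    """Stage 2 (stand-in for a classification network): recommend from
    the interpretable tissue map, not from raw pixels."""
    lesion_fraction = float(np.mean(tissue_map == 2))
    return {"referral": lesion_fraction > 0.25,
            "lesion_fraction": lesion_fraction}

scan = np.random.default_rng(0).random((64, 64))  # fake 64x64 scan
tissue_map = segment_tissue(scan)        # clinicians can inspect this
decision = classify_from_map(tissue_map) # the "what", backed by a "why"
```

Because the recommendation is computed from the intermediate map, a clinician who disagrees with the output can examine the map itself, which is the transparency property the quote is describing.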
When a young patient is worsening while managed with AI-based diagnostics, a physician will need to assume care.
If the AI is a black box, this will be a dangerous “blind handoff.”
The FDA should ensure that acute-care AI provides computational transparency at the bedside.