
The rapid entry of artificial intelligence is stretching the boundaries of medicine. It will also test the limits of the law.
Artificial intelligence (AI) is being used in health care to flag abnormalities in head CT scans, cull actionable information from electronic health records, and help patients understand their symptoms.
At some point, AI is bound to make a mistake that harms a patient. When that happens, who — or what — is liable?
I’ll use radiology to answer this question, because many believe AI will have the largest impact there and some even believe that AI will replace radiologists.
Here’s a hypothetical situation to illustrate some of the legal uncertainties: Innovative Care Hospital, an early adopter of technology, decides to use AI instead of radiologists to interpret chest x-ray images as a way to reduce labor costs and increase efficiency. Its AI performs well, but for unknown reasons misses an obvious pneumonia and the patient dies from septic shock.
Who’ll get sued? The answer is, “It depends.”
If Innovative Care developed the algorithm in house, it will be liable through what’s known as enterprise liability. Though the medical center isn’t legally obliged to have radiologists oversee AI’s interpretation of x-rays, by removing radiologists from the process it assumes the risk of letting AI fly solo.
In this scenario, a suit would likely be settled. The hospital will have factored the cost of lawsuits into its business model. If the case goes to trial, the fact that the hospital used artificial intelligence to increase efficiency likely won’t help it, even if the savings are passed on to its patients. Efficiency can be framed as “cost cutting,” which juries don’t find as enticing as MBA students do.
If Innovative Care had bought the algorithm from an AI vendor, the distribution of liability is more complex.
It’s unlikely that a commercial AI algorithm would be deployed for medical care without the blessing of the Food and Drug Administration (FDA). By going through the agency’s approval process, the vendor sheds some risk because of a legal concept known as preemption.
The rationale behind preemption is that when state and federal laws conflict, federal law prevails. For the manufacturers of medical products, preemption avoids having to meet safety requirements for each state separately. It smooths commerce across the states.
Whether FDA approval shields vendors from liability in state courts, however, is uncertain. The Supreme Court of the United States hasn’t ruled consistently in this realm.
In 1990, Lora Lohr’s pacemaker failed. She and her husband sued Medtronic, the pacemaker’s manufacturer, in Florida. The company claimed it couldn’t be sued because of federal preemption. The Supreme Court ruled that Medtronic wasn’t sheltered from liability in the lower courts because the device had reached the market through a less-rigorous route than full FDA approval: the expedited 510(k) clearance pathway. The Court also emphasized that preemption applies only when specific state requirements conflict with specific federal regulations.
Another Supreme Court decision is also instructive. In 2000, Diana Levine was given an intravenous infusion of Phenergan to stop severe nausea and vomiting. The infusion went awry. She developed gangrene in her right arm, and doctors had to amputate it. She sued the clinic, those who administered the drug, and the drug’s maker, Wyeth. The company claimed that because the FDA was satisfied with its labeling of the drug’s side effects, it was shielded from liability in lower courts for failing to specify one of its rarer side effects, limb loss. The Supreme Court rejected Wyeth’s claim.
That decision, however, may not apply to medical devices. Charles Riegel was undergoing artery-opening angioplasty in 1996 when the balloon-carrying catheter ruptured, requiring emergency surgery. He and his wife sued the catheter’s maker, Medtronic, for negligence in designing, making, and labeling the catheter. Medtronic claimed protection from the suit because it had met the FDA’s rigorous premarket approval process. The Supreme Court agreed with Medtronic.
As these three cases show, preemption might work, but it doesn’t offer litigation-free nirvana.
Can preemption even apply to artificial intelligence? Algorithms aren’t static products — they learn and evolve. The algorithm the FDA approves is different from the one that has been reading x-rays, and learning from them, for a year. The FDA can’t foresee how the algorithm will perform in the future.
How courts deal with AI-based algorithms will also depend in part on whether algorithms are viewed as drugs or devices, and what kind of FDA approval they receive.
Many AI-based algorithms are currently being cleared through the expedited 510(k) pathway. Others aim to enter the more-rigorous premarket approval process. Regardless, AI vendors, many of which are start-ups, could be accruing liability of an unknown scale.
Random errors, in which artificial intelligence misses an obvious abnormality for inexplicable reasons, are uncommon and difficult to predict. But statistically speaking, their occurrence is certain. Such misses could lead to large settlements, particularly if it is argued that a radiologist would have been unlikely to miss an obvious finding, such as pneumonia or cancer. Big payouts or high-profile lawsuits could obliterate the emerging health AI sector, which is still a cottage industry.
On whom the responsibility for continuous improvement of AI will fall — vendors or users — depends on the outcome of the first major litigation in this realm. The choice of whom to sue, of course, may be affected by deep pockets. Plaintiffs may prefer suing a large hospital instead of a small, venture-capital-backed start-up.
Hospitals replacing radiologists with artificial intelligence is, for now, a fanciful and futuristic scenario. The more likely scenario is that AI will be used to help radiologists by flagging abnormalities on images. Radiologists will still be responsible for the final interpretation. AI will be to radiologists what Dr. Watson was to Sherlock Holmes — a trusted, albeit overzealous, assistant.
Though Holmes often ignored Watson’s advice, radiologists won’t find it easy to dismiss artificial intelligence because they’ll incur a new liability: the liability of disagreeing with AI. If AI flags a lung nodule on a chest radiograph that the radiologist doesn’t see and, therefore, doesn’t mention in his or her report — in other words, the radiologist disagrees with AI — and that nodule is cancerous and the patient suffers because of a late diagnosis, the radiologist may be liable not just for missing the cancer but for ignoring AI’s advice.
A string of such lawsuits would make radiologists practice defensively. Eventually, they would stop disagreeing with AI because the legal costs of doing so would be too high. Instead, radiologists would recommend more imaging, such as CT scans, to confirm AI’s findings.
In other words, artificial intelligence won’t necessarily make medical care cheaper.
If evidence shows that radiologists who use artificial intelligence miss fewer serious diagnoses, using AI could become the de facto standard of care. At that point, radiologists who don’t use AI could be exposing themselves to liability. Plaintiff attorneys could use artificial intelligence to “find” missed cancers on patients’ old x-ray images. Professional insurance carriers could even stipulate that radiologists use AI as a condition of coverage.
These conjectures are based on the historical vector of American tort law — the law of civil wrongs that cause claimants to suffer loss or harm. Artificial intelligence fits perfectly in our oversensitive diagnostic culture, in which doctors are petrified of missing potentially fatal diseases.
The adoption of artificial intelligence in radiology will certainly be influenced by science. But it will also be shaped by the courts and defensive medicine. Once a critical mass of radiologists use AI clinically, it could rapidly diffuse and, in a few years, reading chest x-rays, mammograms, head CTs, and other imaging without AI will seem old-fashioned and dangerous.
Ironically, the courts will keep both AI and radiologists tethered to each other, granting neither complete autonomy.
Saurabh Jha is an associate professor of radiology at the University of Pennsylvania and scholar of artificial intelligence in radiology.