If you read high-profile medical journals, the high-end popular press, and magazines like Science or Nature, it is clear that the medicalization of artificial intelligence, machine learning, and big data is in full swing. Speculation abounds about what these can do for medicine. It’s time to put them to the test.
From what I can tell, artificial intelligence, machine learning, and big data are mostly jargon for one of two things. The first is about bigger and bigger computers sifting through mountains of data to detect patterns that might be obscure to even the best trained and most skilled humans. The second is about automating routine and even complex tasks that humans now do. Some of these could be “mechanical,” like adaptive robots in a hospital, and some might be “cognitive,” like making a complex diagnosis. Others might be a combination of the two, as in the almost-around-the-corner self-driving cars.
The idea of computers sorting through data and detecting patterns is of great interest for analyzing images like mammograms and colonoscopies, and for interpreting electrocardiograms. But is this really transformative or novel? An early version of digital image analysis and facial recognition was proposed by the polymath Francis Galton in the late 1800s. Likewise, machine reading of electrocardiograms has been occurring since at least the 1960s. There are, of course, issues with AI and machine learning like overdiagnosis and misreads, but the narrative is that eventually more data and technology will solve such problems.
Perhaps, though, IBM’s overselling of Watson to use artificial intelligence to identify new approaches to cancer care is a cautionary tale and reminds us that many things in medicine lack fixed rules and stereotypical features, and so will be hard for AI to solve.
Another hope is that AI could somehow rehumanize medicine by improving workflows and replacing the current tidal wave of screen time with face time with patients. Although that could happen, all of the data and associated analytics could also lead to an ever more oppressive version of medical Taylorism and a drive for “efficiency.”
It is possible that technology could free physicians and enhance their interactions with patients, but as the recent move to electronic health records shows, that is far from certain, and the economic imperatives of corporate medicine to see more patients, capture more charges, and generate more throughput might just as easily predominate. Regulators will also likely weigh in. And while “Alexa, please refill Mrs. Smith’s statin prescription” seems simple enough, will we — or do we want to — get to “Alexa, please schedule Mrs. Smith with everything she needs for hip replacement”?
I think we need a Turing test for medical artificial intelligence. The original test, proposed by the British mathematician and computer scientist Alan Turing in 1950, asks whether a computer can perform complex functions indistinguishably from a human being. For medicine, the test should be a problem that is currently hard to solve. Here’s one I think would be perfect: create a weight loss plan for patients with severe obesity (a body-mass index of 40 or more) that is as effective as bariatric surgery. This would be a classic non-inferiority trial, one designed to show that a new treatment is no less effective than one already in use.
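The success criterion for such a trial is easy to state precisely. As a minimal sketch (the means, standard error, and margin below are hypothetical illustrations, not figures from any actual trial), the decision rule checks whether the lower bound of the confidence interval for the difference in weight loss stays above a pre-specified non-inferiority margin:

```python
def non_inferior(mean_new, mean_ref, se_diff, margin, z=1.96):
    """Declare non-inferiority if the lower bound of the ~95% CI
    for (new - reference) mean weight loss exceeds -margin.
    Inputs are in kilograms lost; 'margin' is the largest
    acceptable shortfall versus the reference treatment."""
    lower = (mean_new - mean_ref) - z * se_diff
    return lower > -margin

# Hypothetical numbers: the AI plan averages 23 kg lost vs. 25 kg
# for surgery, with a 5 kg non-inferiority margin.
print(non_inferior(23.0, 25.0, se_diff=1.0, margin=5.0))  # True
```

In a real trial the margin would be pre-registered and the standard error estimated from the data; the point is only that the endpoint is simple to measure and the test simple to state.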
Obesity treatment as a test of medical AI has the advantage of an easily measured outcome — all you need is a scale — and a condition that is potentially treatable by one or more interventions. Surgery is effective for sustained weight loss, and there are good data on the most effective surgical approaches. But it isn’t the only option: some people achieve long-term weight loss without surgery. Class 3 obesity is a common condition with plenty of downstream hazards — including increased risk of developing diabetes, heart disease, cancer, and arthritis, as well as trouble with activities of daily living — so recruiting motivated participants for a randomized trial should be relatively easy.
All sorts of data are available that could be fed into “the computers” to generate individualized plans for participants. Beyond simple demographics, the plans could also synthesize genetic data, diet and exercise preferences, and information from wearables. Text messages could be sent to remind people what foods to avoid or when they need to get in more steps for the day. Shopping for food could be automated, and certain foods and portion sizes at restaurants could be made electronically off limits. Even better, customized menus could be constructed on demand. All of this could be linked to financial incentive programs.
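The plan-generation step imagined above is, at its simplest, a rule engine over whatever data a participant shares. A toy sketch (the profile keys here are invented stand-ins for wearable and preference data, not a real schema):

```python
def daily_nudges(profile):
    """Turn a participant's data for the day into reminder messages.
    The 'steps_today', 'step_goal', and 'avoid_foods' keys are
    illustrative, not part of any actual system."""
    steps = profile.get("steps_today", 0)
    goal = profile.get("step_goal", 8000)
    nudges = []
    if steps < goal:
        nudges.append(f"You need {goal - steps} more steps today.")
    for food in profile.get("avoid_foods", []):
        nudges.append(f"Reminder: skip the {food} today.")
    return nudges

print(daily_nudges({"steps_today": 5000, "step_goal": 8000,
                    "avoid_foods": ["soda"]}))
```

Any real version would layer on genetics, menus, and incentives, but the point stands: an “individualized plan” bottoms out in decision rules whose effect on weight is directly testable.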
If you really wanted to stretch the limits, cars could be programmed to make it difficult to stop at fast food restaurants. Or some sort of “pre-eating” aversive stimulus could be applied when the algorithm detected signals or subtle behaviors associated with an increased likelihood of excessive eating — depending, of course, on ethics committee approval.
In short, it’s entirely possible to develop a truly comprehensive weight-loss plan.
The fact that genetic data, diet preferences, wearables, and text messages don’t seem to have much impact on long-term weight loss in controlled trials is only a minor detail. There are also a host of issues with implementing artificial intelligence in the real world. But let’s not get distracted.
Enthusiasts of AI, machine learning, and big data should throw caution to the winds and craft a highly effective alternative to bariatric surgery. Such a demonstration would clearly tip the scales and show the skeptics what medical AI can do.
Or put more simply: It is time for medical artificial intelligence to go big or go home.
Michael J. Joyner, M.D. is an anesthesiologist and physiologist at the Mayo Clinic. The views in this article are his own.
I was a psychotherapist for 50 years and found that most psychiatric conditions could be understood by taking a personal history. All my patients had had unresolved traumas in their lives that were causing symptoms. Symptoms were relieved when they understood the connection.
With psychotic patients the situation was much different.
It was easy to see why they were considered brain damaged, despite research failing to find the brain or genetic pathology.
By total chance one of my psychoanalytic training cases, a young professional man, suddenly became psychotic, and his extreme thought disorder prevented communication.
We both felt doomed.
But, I’d managed to get a gifted supervisor, Dr Donald Winnicott, who suggested I stop talking and listen to the patient. Having no other option, I did so, session after session, until I felt I understood something, and was able to make a positive sound. This led gradually to communication, and he was able to rebuild his life on solid ground.
Dr Winnicott’s thesis was that the patient’s life had been based on a conforming self that had formed during wartime, when his parents were absent.
When psychiatry closed the psychiatric units we were running, because they thought that drugs would be faster and cheaper, I retreated to private practice in Ottawa, where I could get paid to see patients until they became well, and discovered that psychotic patients became the most challenging, hardest-working, and most rewarding patients of my career.
They felt completely dehumanized by life experiences, and also by years of forced hospitalizations and drugs, and greatly appreciated a human hand reaching out to them; helping them to become human.
I reported these treatments to the psychiatric and psychoanalytic societies, but in our dehumanized world, they wouldn’t listen.
Science should be helping us to understand, but I’m afraid it has become so focused on the material world that it has abandoned our psychological beings, which are only secondarily physical.
When we overthrew religion with observations of physical reality, we seemed to have abandoned our souls, or what Dr Winnicott called our True Self.
I’m sure that’s what helped my patients recover: I helped them find themselves.
The horror today is that so many people are going without human help.
Artificial intelligence is all right in its place, but human intelligence is far superior in certain circumstances.