
If you read high-profile medical journals, the high-end popular press, and magazines like Science or Nature, it is clear that the medicalization of artificial intelligence, machine learning, and big data is in full swing. Speculation abounds about what these can do for medicine. It’s time to put them to the test.
From what I can tell, artificial intelligence, machine learning, and big data are mostly jargon for one of two things. The first is about bigger and bigger computers sifting through mountains of data to detect patterns that might be obscure to even the best trained and most skilled humans. The second is about automating routine and even complex tasks that humans now do. Some of these could be “mechanical,” like adaptive robots in a hospital, and some might be “cognitive,” like making a complex diagnosis. Others might be a combination of the two, as in the almost-around-the-corner self-driving cars.
The idea of computers sorting through data and detecting patterns is of great interest for analyzing images like mammograms and colonoscopies, and for interpreting electrocardiograms. But is this really transformative or novel? An early version of digital image analysis and facial recognition was proposed by the polymath Francis Galton in the late 1800s. Likewise, machine reading of electrocardiograms has been occurring since at least the 1960s. There are, of course, issues with AI and machine learning like overdiagnosis and misreads, but the narrative is that eventually more data and technology will solve such problems.
Perhaps, though, IBM’s overselling of Watson to use artificial intelligence to identify new approaches to cancer care is a cautionary tale and reminds us that many things in medicine lack fixed rules and stereotypical features, and so will be hard for AI to solve.
Another hope is that AI could somehow rehumanize medicine by improving workflows and replacing the current tidal wave of screen time with face time with patients. Although that could happen, all of the data and associated analytics could also lead to an ever more oppressive version of medical Taylorism and a drive for “efficiency.”
It is possible that technology could free physicians and enhance their interactions with patients, but as the recent move to electronic health records shows, that is far from certain and the economic imperatives of corporate medicine to see more patients, capture more charges, and generate more throughput might just as easily predominate. Regulators will also likely weigh in. And while “Alexa, please refill Mrs. Smith’s statin prescription” seems simple enough, will we — or do we want to — get to “Alexa, please schedule Mrs. Smith with everything she needs for hip replacement”?
I think we need a Turing test for medical artificial intelligence. The original test, proposed by the British mathematician and computer scientist Alan Turing in 1950, asks whether a machine can exhibit behavior indistinguishable from a human’s. For medicine, the test should be a problem that is currently hard to solve. Here’s one I think would be perfect: create a weight loss plan for patients with severe obesity (a body-mass index of 40 or more) that is as effective as bariatric surgery. This would be a classic non-inferiority trial, which aims to show that a new treatment is no worse than one already in use by more than a pre-specified margin.
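As a rough illustration of what "non-inferiority" means statistically, here is a minimal sketch in Python. Everything in it is invented for illustration: the function names, the 5 kg margin, and all the weight-loss numbers are assumptions, not data from any trial.

```python
# Hypothetical non-inferiority check: the AI-generated plan is "non-inferior"
# to bariatric surgery if the confidence interval for its deficit in mean
# weight loss stays below a pre-specified margin. Illustrative only.
import math

def non_inferior(ai_losses, surgery_losses, margin_kg=5.0, z=1.96):
    """Return True if the AI plan's mean weight loss is confidently within
    `margin_kg` of surgery's, using a two-sample 95% confidence interval."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    deficit = mean(surgery_losses) - mean(ai_losses)  # how much worse the AI plan is
    se = math.sqrt(var(ai_losses) / len(ai_losses) +
                   var(surgery_losses) / len(surgery_losses))
    upper = deficit + z * se          # upper confidence bound on the deficit
    return upper < margin_kg          # deficit confidently below the margin?

# Illustrative data: kilograms lost per participant after one year.
ai      = [18, 22, 15, 25, 20, 17, 23, 19]
surgery = [24, 21, 26, 23, 25, 22, 27, 20]
print(non_inferior(ai, surgery))  # → False: the upper bound (~6.5 kg) exceeds 5 kg
```

The key point the sketch captures is that non-inferiority is not "the means look similar"; the whole confidence interval for the deficit must sit below the margin, which is a deliberately harder bar to clear.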
Obesity treatment as a test of medical AI has the advantage of an easily measured outcome — all you need is a scale — and a condition that is potentially treatable by one or more interventions. Surgery is effective for sustained weight loss, and there are good data on the most effective surgical approaches. But it isn’t the only option — some people achieve long-term weight loss without surgery. Class 3 obesity is a common condition with plenty of downstream hazards — including increased risk of developing diabetes, heart disease, cancer, and arthritis, as well as trouble with activities of daily living — so recruiting motivated participants for a randomized trial should be relatively easy.
All sorts of data are available that could be fed into “the computers” to generate individualized plans for participants. Beyond simple demographics, the plans could also synthesize genetic data, diet and exercise preferences, and information from wearables. Text messages could be sent to remind people what foods to avoid or when they needed to get in more steps for the day. Shopping for food could be automated, and certain foods and portion sizes at restaurants could be made electronically off limits. Even better, customized menus could be constructed on demand. All of this could be linked to financial incentive programs.
If you really wanted to stretch the limits, cars could be programmed to make it difficult to stop at fast food restaurants. Or some sort of “pre-eating” aversive stimulus could be applied when the algorithm detected signals or subtle behaviors associated with an increased likelihood of excessive eating — depending, of course, on ethics committee approval.
In short, it’s entirely possible to develop a truly comprehensive weight-loss plan.
The fact that genetic data, diet preferences, wearables, and text messages don’t seem to have much impact on long-term weight loss in controlled trials is only a minor detail. There are also a host of issues with implementing artificial intelligence in the real world. But let’s not get distracted.
Enthusiasts of AI, machine learning, and big data should throw caution to the winds and craft a highly effective alternative to bariatric surgery. Such a demonstration would clearly tip the scales and show the skeptics what medical AI can do.
Or put more simply: It is time for medical artificial intelligence to go big or go home.
Michael J. Joyner, M.D. is an anesthesiologist and physiologist at the Mayo Clinic. The views in this article are his own.
So Mr. Joyner is assuming ALL obesity results from eating, you bet it does! Eating all these foods with all these chemicals/additives added in them, not to mention what is in our drinks, our environment, and prescribed drugs, etc. Why not find the cause instead of treating just the symptom????? Big problem in medicine and in Dr. Joyner’s view, let us keep big agriculture, and big pharma pockets lined instead of finding the reason for the problem!!
Very interesting concept but the failure of obesity interventions is not the lack of detail but the lack of adherence. Machine learning does not have the answer — knowledge remains a relatively poor predictor of behavior. When I counseled morbidly obese patients what I quickly discovered is they were often more informed about calorie counts and nutritional content than I was, because many people I worked with “consumed” this information as a behavioral side-step to modifying their consumption behavior. We live in an increasingly toxic food environment where we are all enticed to make poor food choices on a daily, and often, hourly basis. I believe we should consider focusing on early childhood obesity, in which the incidence of Type 2 diabetes has reached epidemic proportions, if we want to design a test case.
Kind of a silly suggestion for a Turing Test. Losing weight has very little to do with physical medicine and has far more to do with psychology. You typically don’t end up with a BMI over 40 simply from bad eating habits. With a BMI that high (or a BMI well below 18.5) typically eating patterns are a result of deeper psychological issues.
I would propose a medical Turing test could be a system that could crunch personalized health data (genomics, medical history, and past test results) and create a medical plan that addresses specific issues more effectively than the “general” medical plan for that issue.
One example would be high cholesterol levels. We know different people are affected differently by diet. Some people’s serum cholesterol levels are heavily affected by the food they consume. Other people’s serum cholesterol levels aren’t affected by their diet because their liver produces less cholesterol if they are consuming more in their diets.
That being said a “medical turing test” may not even be necessary. We have AIs that are better at detecting cancer from medical imaging than trained radiologists. If an AI can produce fewer false positives and fewer false negatives than a highly skilled human it would justify using that AI in that application regardless of its ability to detect/treat other conditions.
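The commenter's deployment criterion — justify the AI if it makes both fewer false positives and fewer false negatives than the human reader on the same labeled test set — can be sketched in a few lines of Python. All function names and data below are invented for illustration; this is not a real evaluation of any imaging system.

```python
# Hypothetical sketch of the commenter's criterion: deploy an AI reader only
# if it beats the human baseline on BOTH error rates, measured against the
# same ground-truth labels. Labels: 1 = disease present, 0 = absent.

def error_rates(predictions, truth):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for p, t in zip(predictions, truth) if p and not t)
    fn = sum(1 for p, t in zip(predictions, truth) if not p and t)
    negatives = sum(1 for t in truth if not t)
    positives = sum(1 for t in truth if t)
    return fp / negatives, fn / positives

def ai_justified(ai_preds, human_preds, truth):
    """True if the AI's error rates are no worse than the human's on both axes."""
    ai_fp, ai_fn = error_rates(ai_preds, truth)
    hu_fp, hu_fn = error_rates(human_preds, truth)
    return ai_fp <= hu_fp and ai_fn <= hu_fn

# Illustrative reads of 10 images.
truth = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
ai    = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]  # 1 false positive, 0 false negatives
human = [1, 0, 0, 1, 1, 0, 1, 1, 0, 0]  # 2 false positives, 1 false negative
print(ai_justified(ai, human, truth))   # → True
```

In practice such a comparison would also need confidence intervals and a representative test set, but the sketch shows why the criterion is narrow: it says nothing about any condition outside the labeled task.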
Exactly – and that is why I intentionally added a number of “dystopian” elements to what I wrote. Thx for reading and commenting….. Mike Joyner
I agree. I would welcome any help with weight loss. Ai help would be exciting because it has all the answers we don’t have. Please make this happen.
I think Ai will be able to create detailed plans tailored to individuals in the future. I was so glad to see such an article I didn’t realize it was a joke that the writer was playing. Nevertheless, Ai will certainly be doing all this and more. It will have real-time access to increasingly large big data, including DNA.
I asked stat to remove my comments because the whole thing is just embarrassing.
This is disgusting and ill-informed. By your own admission “genetic data, diet preferences, wearables, and text messages don’t seem to have much impact on long-term weight loss in controlled trials.” And that is entirely aside from the fact that this is horribly ableist, classist and fatphobic. Your suggestion that “cars could be programmed to make it difficult to stop at fast food restaurants” is terribly authoritarian. I certainly hope we don’t go down this road with AI in medicine. It would be a very shameful future.
Artificial intelligence for medicine needs a Turing test.
Obesity would be a good one – NO NOT REALLY!
Obviously Dr. Joyner has never built an AI.
I built one for US Human Genome Project.
These tend to be very narrow programs for specific diseases. Obesity is a cultural disease and will not be easy to quantify. Come to New Mexico and see how many different groups overeat. Native American tribes may eat Navajo tacos or Métis bannock bread. Hispanics corn chips, Anglos and African Americans fried chicken and burgers, etc. Each behavior will be difficult for an AI to monitor in real time. My son works for Medtronic developing device specific Machine Learning. The artificial pancreas is now matching sugar to insulin. Sugar to insulin balancing is useful in determining fasting as a way to model eating behavior. But will it be enough? Mitochondrial Eve is our shared ancestor. She survived a genetic bottleneck because she could store fat better than her sisters in calorie-deficient prehistory. Overeating may be built into our genes. AI will not change that.
Given my knowledge of deep learning, what you’re asking for is normally referred to as “hard AI,” i.e. fully cognitive artificial intelligence. The current developments in medical AI do not need it. Additionally, we would need rigorous testing of hard AI before we should let it near patients, lest the AI realize it can cheat. Some researchers recently had an AI read and decode material science papers. The AI made some novel and accurate material property predictions. Now imagine that our obesity bot realizes that meth helps people lose weight and encourages its use among its patients. How would the AI’s minders know? Human in the loop will be an essential component of any medical AI pipeline for the foreseeable future.
See AI eminence Judea Pearl’s “The Book of Why.” (“You are smarter than your data.”) We conflate “intelligence augmentation” with artificial intelligence far too much.
Dr. Joyner,
I lead clinical informatics at a large health system in the Twin Cities.
Love the article and couldn’t agree more with your thoughts and concerns around AI!