Artificial intelligence is at the forefront of the minds of many pharmaceutical and health care executives. We know this because, as life sciences consultants, our clients frequently ask us for advice on how best to navigate AI.
But along with enthusiasm in areas as diverse as phenotypic screening, drug repositioning, and analysis of CT scans, we are also finding a growing skepticism: What is real and what is hype? An example often cited by skeptical clients is the set of problems surrounding IBM Watson Health, especially in the cancer treatment sphere, where reporting by STAT and the Wall Street Journal, among others, has revealed a chasm between the public relations stories and the reality as experienced by clinicians.
Now is an appropriate time to ask: What is holding back artificial intelligence in health care and the life sciences? And what can organizations do to get the most from AI and minimize the risks?
One issue is the perception that many vendors touting “AI platforms” are simply promoting repackaged business intelligence or traditional analytical tools. Given the marketing hype around artificial intelligence, it is not surprising that many vendors might rebrand their wares as AI. But many customers see through this move, sparking cynicism.
We define artificial intelligence here as covering not just deep neural networks but also the larger family of machine-learning methods. We exclude traditional knowledge-based approaches such as expert systems and conventional decision support capabilities.
Let’s step through some of the challenges that AI is struggling with.
Distrust of black boxes. Clinicians and researchers often have limited insight into why an AI system makes a recommendation. Although vendors may wish to keep their algorithms confidential, it is often the case — especially with some deep-learning techniques — that the algorithm can’t easily explain how an answer was produced. Even their designers may not know. The resulting lack of trust is a fundamental issue in the use of some artificial intelligence systems.
Yet trust can be earned, for example, by devoting time and effort to rigorous testing, and by involving end-users in development and dissemination. Some examples of successful implementations of machine-learning algorithms include digital pathology and molecular diagnostics (such as Genomic Health’s Oncotype DX). Medical imaging is nearly there as well.
Poor data hygiene. “Garbage in, garbage out” shouldn’t be a surprise when it comes to artificial intelligence. Messy, unstructured data don’t magically clean themselves, and no fantasy AI robot will do it. Organizations embarking on an AI program need to deal with their ugly data before they start implementation. Sadly, most health care organizations lack adequate data governance and end up spending extensive financial and political capital setting up an AI pilot only to find that it produces little of value, or they run out of funds before the data cleaning is complete.
Such data governance issues are best addressed at the start, before data collection begins.
Not everyone is a data scientist. Data scientists are in high demand, leaving many organizations without appropriately skilled data professionals. Not surprisingly, many individuals with mediocre or misaligned skills declare themselves to be data scientists. In our experience, the expertise of true data scientists differs markedly from that of many who claim the title. We have seen providers, payers, and life science firms invest considerable funds in establishing data science or advanced analytics competency centers, only to be frustrated by disappointing results because the people hired lacked the capabilities needed to deliver.
Collaboration is key. No single artificial intelligence technique can handle every challenge. That means tailoring the AI development process to the specific application and user. This can be done only with close collaboration among multiple disciplines, including the end user.
A common criticism we’ve heard from clients using AI methods in drug discovery biology is that the answers generated by the algorithms are too generic and nonspecific — that they lack “insight.” To avoid that, development teams must include expertise in the relevant domain, target specific problems (also known as use cases), and engage end users at a deep level with a focus on the user experience.
Double-edged sword of regulatory culture. There is an inescapable conflict between the need to regulate and set standards based on the evidence of what works and is safe, and the desire to innovate, experiment, and produce new ways to advance human health. The health care industry is highly regulated to ensure patient safety and privacy, whereas the information technology industry can advance without the same constraints.
While we are not recommending that the U.S. regulatory framework be dismantled, it is becoming increasingly apparent to us how much technological innovation in U.S. health care is held back by an acute “anti-change culture” on the part of clinicians, management, and funding sources. Firms focusing on health care AI are increasing their research and marketing activities in countries like China, where regulations are less cumbersome and a desire to attain global leadership in artificial intelligence is a national priority.
Getting the most from health care AI
The very fact that our clients express disappointment in AI-related initiatives suggests that there is an appetite in health care for effective solutions. What is needed is a measured and realistic approach to thinking about how to use AI. Our experience seeing the successes and failures of AI projects across pharmaceutical and other life science organizations, as well as among health care providers and payers, leads us to recommend the following key principles for successful implementation of an AI strategy:
- Be focused rather than broad: Identify the two or three most compelling business problems that artificial intelligence can solve for your organization. Lock down these use cases and execute a pilot AI strategy designed to maximize both business outcomes and organizational learning.
- Anticipate the degree of data sourcing and cleaning that will be needed.
- Set the expectations of management carefully — make sure they aren’t expecting a magic bullet. Also make sure that the leadership is aware of the limitations and risks of AI, and how your approach mitigates those risks.
- Communicate success loudly throughout your organization, and build on early wins to generate support.
Regardless of the current challenges, we believe that artificial intelligence will be an essential part of the future of the life sciences. Medical and pharmaceutical research continues to expand at an exponential rate, which creates a profound problem for clinicians and researchers trying to keep on top of the breadth and depth of the literature. The promise of AI to help address such issues, combined with the technical progress that is being made across the domains of AI, provides compelling reasons to believe that AI will, in time, become a significant element of much of our medical and pharmaceutical research and day-to-day health care delivery.
Grant Stephen is the CEO of bPrescient, a life science and health care information management and analytics consulting firm. Michael Jacobson, Ph.D., is the founder and managing partner at Cambridge Biostrategy Associates, a life science and health care management consulting firm.