In health care, two exciting uses of artificial intelligence — in the clinic for patient care and in the laboratory for drug discovery — are remarkably different applications. That perhaps explains why, though it’s still early days for both, they are developing at different rates.
In the clinical setting, AI works with known parameters, typically running through a classification process based on experiences of what works and what doesn’t for different types of patients. The potential of AI here is significant, and the early successes are truly exciting.
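To make the clinical idea concrete — classifying a new patient based on what worked for similar patients in the past — here is a deliberately minimal sketch. The patients, features, and outcomes are invented for illustration, and a nearest-neighbour vote stands in for whatever model a real clinical system would use:

```python
# Toy sketch of classification from prior patient outcomes.
# All data here is invented; a real system would use validated clinical features.
from math import dist

# (age in decades, symptom severity score) -> did the treatment work?
history = [
    ((5.0, 2.0), True),
    ((6.0, 3.0), True),
    ((7.5, 8.0), False),
    ((8.0, 7.0), False),
]

def predict(patient, k=3):
    """Majority vote among the k most similar previously seen patients."""
    neighbours = sorted(history, key=lambda rec: dist(rec[0], patient))[:k]
    votes = sum(outcome for _, outcome in neighbours)
    return votes > k / 2

print(predict((5.5, 2.5)))  # a patient close to the past responders -> True
```

The point of the sketch is only that clinical AI interpolates among known outcomes, which is exactly why the drug-discovery setting — where the answer is not in the training data — is harder.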
The opportunity is equally compelling in drug discovery, particularly in areas of high unmet need such as rare and hard-to-treat cancers and neurodegenerative conditions. Artificial intelligence can ingest and reason over information from the scientific literature and databases, as well as patient-level data, to identify potential approaches to treat diseases by proposing a drug target, designing a molecule, and defining patients in which to test that molecule to drive greater clinical success.
But the questions being asked of artificial intelligence in this sphere are fundamentally different from those asked in clinical applications. AI is in uncharted territory here, searching for the novel, not the known. In any setting, AI requires training with positive — and ideally some negative — examples. This is a particular challenge in predicting new targets for glioblastoma, Parkinson’s, and other conditions for which no treatment has yet been shown to reverse the disease course.
The potential of AI in drug discovery
The pharmaceutical industry is facing a crisis in R&D. About 50% of late-stage clinical trials fail due to ineffective drug targets, resulting in only 15% of drugs advancing from Phase 2 to approval. And researchers tend to coalesce around the same disease areas and targets.
Artificial intelligence can help expand the drug discovery universe by making predictions in more novel areas of biology and chemistry. By extracting text from scientific papers, AI can help identify relevant information faster and make links between biomedical entities, such as medicines and proteins, often with relatively little information.
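A crude way to picture the literature-mining step described above is co-occurrence linking: scan abstracts for known biomedical entities and count which pairs appear together. The abstracts, entity list, and scoring below are invented for illustration — a real pipeline would use trained named-entity recognition and relation extraction, not substring matching:

```python
# Toy sketch: linking biomedical entities by co-mention in abstracts.
# Entities and texts are illustrative, not any company's actual pipeline.
from collections import Counter
from itertools import combinations

abstracts = [
    "TDP-43 aggregation is observed in ALS motor neurons.",
    "Riluzole modulates glutamate signaling in ALS patients.",
    "TDP-43 mislocalization correlates with glutamate excitotoxicity.",
]

# A tiny dictionary of recognizable entities (a real system would use NER).
entities = {"TDP-43", "ALS", "Riluzole", "glutamate"}

def extract_links(texts):
    """Count how often each pair of entities co-occurs in the same abstract."""
    pair_counts = Counter()
    for text in texts:
        found = sorted({e for e in entities if e.lower() in text.lower()})
        for a, b in combinations(found, 2):
            pair_counts[(a, b)] += 1
    return pair_counts

links = extract_links(abstracts)
for (a, b), n in links.most_common():
    print(f"{a} - {b}: {n} co-mention(s)")
```

Even this naive version shows how indirect links emerge: two entities that never share an abstract can still be connected through a common neighbour, which is the kind of inference that lets AI propose links "with relatively little information."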
In amyotrophic lateral sclerosis (ALS), for example, 50 clinical trials over the last two decades have failed to show positive results, leaving just two approved drugs on the market, both offering only modest benefit to patients. This is an area crying out for new approaches.
When I was on the IBM Watson team, I had the opportunity to work with the Barrow Neurological Institute on an ALS project. Watson ingested and analyzed a vast number of scientific abstracts to make predictions about proteins involved in ALS. The Barrow team took a leap of faith and tested one of the top protein predictions with only a handful of published scientific abstracts. They were rewarded with a positive result, showing an alteration of this protein in ALS and opening the door for the development of new therapies.
Building trust between artificial and human intelligence
Despite the potential of artificial intelligence to identify new targets for disease faster, at lower cost, and with lower failure rates, adoption of this technology is still low. Trust has a significant role to play in that.
BenevolentAI, the company I work for, is to date the first drug-discovery company to embed artificial intelligence from early discovery through clinical trials. Yet even we face challenges in the adoption of this technology by our own experts. And not without reason: Sometimes the algorithm gets it wrong.
In target identification, for example, the AI algorithm may have trouble distinguishing between potential positive and negative biological effects on the disease course, or may predict drug targets that scientists know are likely to have significant side effects. We need to help the system here by telling it to filter out specific drug or target classes. While this is a necessary step for the AI system to learn, it can be frustrating for biologists who find it obvious that those drug targets are bad choices. This human refinement process, however, is crucial for helping the AI system learn and for ensuring the best scientific results.
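The human-in-the-loop filtering described above can be pictured as a simple exclusion rule applied to ranked predictions. The target names, classes, and scores below are entirely invented — this is a sketch of the idea, not of any real system's interface:

```python
# Toy sketch of expert filtering of AI-predicted drug targets.
# Names, classes, and scores are hypothetical placeholders.
predicted_targets = [
    {"name": "TARGET_A", "target_class": "ion channel", "score": 0.91},
    {"name": "TARGET_B", "target_class": "kinase", "score": 0.88},
    {"name": "TARGET_C", "target_class": "nuclear receptor", "score": 0.73},
]

# Classes that biologists have flagged as likely to cause side effects.
excluded_classes = {"ion channel"}

def apply_expert_filter(targets, excluded):
    """Drop predictions whose target class an expert has ruled out."""
    return [t for t in targets if t["target_class"] not in excluded]

filtered = apply_expert_filter(predicted_targets, excluded_classes)
print([t["name"] for t in filtered])  # the excluded-class target is removed
```

The value of the loop is that the exclusions, and the reasons behind them, can eventually be fed back into the model as negative signal rather than applied only as a post-hoc filter.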
The role of artificial intelligence, whether it is applied to identifying targets, designing new drugs, or repurposing old ones, is to augment scientists’ abilities, not replace them. Scientists play essential roles in determining the data to use in machine learning and in providing expert evaluation of the results, both for additional accuracy and nuance. Accuracy because scientific data can be contradictory: Some biological facts may be true in animal models but not in humans, for example. And nuance because context matters so much in biology: a protein interaction may take place in the liver but not in the brain. AI systems are often not yet sophisticated enough to pick up on such context.
Conversely, artificial intelligence can generate hypotheses that might otherwise seem unlikely. Traditional drug discovery can assess only a finite set of experiments or evidence at one time, which increases the potential for bias as scientists are only seeing part of the picture, based on data and materials they have selected to review. But AI methodologies can help researchers avoid the implicit bias that arises when only limited, local data is used to draw a conclusion, and can reveal new solutions to old problems.
We have a long way to go to make artificial intelligence explainable, and many of the AI tools out there are still immature. There is a lot of hype and few case studies of success: We have yet to see an approved medicine for which the drug target was discovered using computational approaches.
I strongly believe that the best way for AI to become smarter, to be adopted by scientists, and therefore to make an impact is to have interdisciplinary teams developing this technology even as they test hypotheses in the lab to make the systems better able to learn. Enabling those feedback loops to improve the algorithms through testing their predictions and assumptions will also improve trust in artificial intelligence.
And it really matters that we continue to advance our understanding and use of AI in drug discovery. Late last year, I organized a presentation of the results of BenevolentAI’s target identification process for a specific form of cancer our company is investigating. It was the culmination of a week of intense collaboration, creativity, and productivity across six teams and 26 people, including most of the scientific roles within the company. The presentation carried a special meaning for me: on that day, the father of one of my best friends passed away after a two-year battle with the very form of cancer we had been investigating.
It was an emotional experience to see so many bright people working their hardest to tackle one of the most devastating diseases that exists, and producing insights we could all be proud of.
Alix Lacoste, Ph.D., is the vice president of data science at BenevolentAI.