If insanity is doing the same thing over and over again and expecting different results, then there’s lunacy in the health care industry when it comes to health data and misperceptions about what it is — and is not — delivering. The continual stream of claims about data as the path to transcendence is set against rigid restrictions on its access and use that are not going to change. The constants of patient confidentiality, regulatory compliance, and proprietary instinct cannot be detached from health data.
We launch shiny new facsimiles of data systems one after the other, insisting with each iteration that this time the magic of data can and will solve all our problems. Then it doesn’t.
How do we break free from the madness?
Drilling into data
“Data is the new oil!” is a supposed truism that’s thrown around a lot. It’s an apt metaphor in some ways, but not others.
We use data to power much of the transformative technology in health care, just as we use oil to power machines. And just as collecting and controlling oil reaped unfathomable fortunes for the Rockefellers, Gettys, and Mellons, collecting and controlling data are now doing the same for Zuckerberg, Bezos, and Page.
But oil is a finite, material substance; deposits of it eventually run dry. Using oil once destroys its value. Data, in contrast, collects everywhere and grows in abundance — in fact, we’re drowning in it — and it can be reused many times without destruction.
Here’s another difference: unlike a barrel of oil, a wad of digital information has no inherent worth. Creating value from it requires computation that extracts insights.
Likening data to oil encourages people to hoard it in silos. The result? Inefficiency, increased overhead, drained resources, and repositories of meaningless digitized information that will never be used for practical purposes. The aimless floods of data in health care do nothing to help physicians improve efficiency or quality of care, and even less to help patients take better care of themselves.
Creating value from health data is not a simple matter of refinement and consumption. Data must be wrangled and managed, an inherently complex undertaking in which processing and analysis are conducted with a specific goal in mind.
For greatest effectiveness, data must move — among patients, hospitals, electronic health records, clinics, devices, and countless other channels in the modern health care ecosystem. But motion presents danger. Storing or moving data introduces vulnerabilities. And any time a vulnerability is introduced, the potential for exposure to unauthorized access or theft for nefarious purposes grows exponentially.
The health care industry is more affected by data breaches than any other industry, accounting for 25 percent of the total across industries. The number of compromised patient accounts tripled between 2017 and 2018. While such breaches aren’t often considered directly life threatening, they can have devastating impacts on patients and providers. For example, a 2017 Accenture report indicated that 26 percent of U.S. consumers had personal medical information stolen from health care information systems, and half of those incurred approximately $2,500 in out-of-pocket costs per incident. Other research sets the provider cost of health care data breaches at $408 per record.
Health care professionals find themselves mired in the tension between health data accessibility and data protection — trying to make data hustle and flow while knee-deep in quicksand.
Like others, I dream of reaping the promised benefits of data big and small. But we’ve got to stop thinking about data in 20th century terms to make it work for us. We must try something new if we expect different results.
When I think back to my work in molecular biology research at Harvard Medical School in the 1980s and 1990s, proximity to our data provided my colleagues and me with the illusion of control. The data from our tumor trials and pharmacology efforts was a tangible collection, filed in our labs. But we feared for the physical loss of our in-house records due to fire or theft or some other calamity. The movement of data to the cloud introduced its own ample security and privacy threats, but also opened enormous new avenues for previously unimaginable uses.
We seem stuck at that intersection between risk and potential.
While there is certainly merit in establishing a set of standards and operating procedures for how we deal with data, the problem is not the vehicle, be it a file cabinet or the cloud. The big barrier to progress is our own limited vision for the ultimate uses of data.
Consider the rigidity of old-school clinical trials, which in many cases limit the usefulness of the remedies they create. As the Kafkaesque tale of the so-called orphan drug nitisinone illustrates, a shift toward “trial flexibility” enabled a breakthrough for the treatment of alkaptonuria, a rare and deadly genetic condition. Instead of relying on the usual clinical and anatomical gauges of effectiveness (in this case hip flexibility), those conducting a clinical trial of nitisinone in the United Kingdom used the level of homogentisic acid as a gauge. Less of this tyrosine breakdown product in the body meant the drug was working.
It required creativity to ask a different question of the data and reach a positive outcome. The numbers can be crunched in a variety of ways, but only if we afford ourselves the luxury of calling into question the status quo and setting aside familiar parameters such as cure, clinical worsening, and mortality.
I’m not suggesting we abandon safety guidelines, ethics, or scientific rigor. But data are the gift that keeps on giving, and we must be able to consider their potential from a variety of perspectives: What does the Framingham Heart Study mean for Alzheimer’s research? How does a fertility drug like tamoxifen find its greatest success in breast cancer treatment? We can scale such serendipity by recontextualizing how we acquire, store, and share health data. New opportunities lie in challenging the gospels of data management as written and creating room for variance in approach.
Reimagining the way we approach data is the first step to forging new solutions. Consider recent developments in precision medicine for inspiration. In 2017, the FDA granted accelerated approval to Keytruda, a treatment for patients whose cancers have a specific genetic feature. This was the first time the FDA approved a cancer treatment based on a common biomarker instead of on the location in the body where the tumor originated.
To be clear, this tissue-agnostic approval reflects not only scientific advancement in genetics and immunotherapy, but considerable enlightenment on the part of regulators in grasping that such advancements demand new kinds of measurement. It represents a revolutionary shift in disease diagnosis and treatment methodology, expanding the possibilities for clinical trials industry-wide by enabling a different way to approach a problem.
Similarly, altering our approach to health data could radically transform how we advance patient care. Although our current clinical trial system is largely successful, it is expensive, time intensive, and limited in scope. Different methods, such as targeting treatments to biomarkers as opposed to histology or physiology, exploit modern scientific capabilities and offer new hope.
The health care industry must explore different approaches to data management to unlock its potential. The Trusted Exchange Framework and Common Agreement (TEFCA) is a step in the right direction, but a cultural shift in purpose is required for true data enlightenment. This shift cannot continue to be about acquisition and quantity — it must generate applicable, scalable, quality findings. We have to stop believing that data hold any value at all without a means of motion and application.
In the same way that clinical outcomes must be determined to be reliable before making regulatory and medical decisions, data generated in health care must be high quality before they can be reliably used. Although data quality is a continuum, we need standards that ensure that conclusions and interpretations are derived from an error-free pool of information.
Clinicians currently spend almost half their professional time typing in, clicking through, and editing electronic health records. Instead of acting as liberating, highly functional tools, EHRs have become a daunting part of the physician workflow. That’s a problem, because they are a resource that could be tapped and paired with advanced artificial intelligence systems that pull relevant information for a given patient or diagnosis. Providing physicians with the right data, at the right point in time, is essential for giving patients actionable insights that improve health.
The vortex of data in health care remains one of the industry’s grand illusions. There are innumerable data repositories moldering without purpose and oceans of data sludge mucking up the pipelines. We need to rethink proprietary attitudes in the way we use data, reconsider the reasons we collect it, and accept the necessity of finding better ways to share it. The principal goal and benefit in rethinking the possibilities of data management, like that of radically transforming our approach to clinical trials, is to improve the quality of care patients receive.
We know data can generate greater value — our responsibility is to find better ways to extract it.
Lawrence Cohen, Ph.D., is a biotechnology expert and CEO of Health2047.