Could a blood test detect cancer in healthy people? Grail, a Menlo Park, Calif.-based company, has raised $1.6 billion in venture capital to prove the answer is yes. And at the world’s largest meeting of cancer doctors, the company is unveiling data that seem designed to assuage the concerns and fears of its doubters and critics. But outside experts emphasize there is still a long way to go.
The data, from a pilot study that Grail is using to develop its diagnostic before running it through the gantlet of two much larger clinical trials, are being presented Saturday in several poster sessions at the annual meeting of the American Society of Clinical Oncology. The data show that the company’s test can detect cancer in the blood with relatively few false positives and that it is fairly accurate at identifying where in the body the tumor was found. Another abstract seems to show that the test is more likely to identify tumors if they are more deadly. One big worry with a cancer blood test is that it would lead to large numbers of patients being diagnosed with indolent tumors that would be better left untreated.
“The progress of the technology is impressive,” said Dr. Len Lichtenfeld, the acting chief medical officer of the American Cancer Society. But he also urged caution. “Grail is one organization that is pursuing this goal. We will get there. But we still have to prove the technology, and we still have to learn how to apply the technology.”
Work based on empirical studies is usually exhaustive and still not accurate. This becomes even more difficult in genomics, for many reasons: genetic variation, the complexity of the domain, the technology's inherent error rates, and so on. In effect, this will increase the load on medical services, exhaust resources, and have other unfavorable consequences.
Matt, I think you’ve mixed up specificity and positive predictive value. This is important because in a population with a low true positive rate (as is the case with an asymptomatic screening population), the positive predictive value can be very different from the specificity.
They have a much better chance of developing a test like this than anyone else because they have far more money to spend than anyone ever has had. Hopefully the money is spent wisely.
Epigenetix, a Cambridge spin-off, is also a serious competitor.
“The test was set up so that it would have a 99% specificity — meaning that for every 100 people told they had cancer, 1% would actually not have the disease.”
This is not what specificity means at all. The proportion of positive results that are false depends on the prevalence of the disease in the population tested. For example, if the prevalence of occult cancer were 2%, about half of those with a positive test result would be false positives.
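The arithmetic behind this example can be sketched in a few lines of Python. The 2% prevalence is the hypothetical from the comment above; the 70% sensitivity is an assumption borrowed from the article's own figure, not a number the commenter gave.

```python
# Sketch of the comment's arithmetic. Assumptions: 100,000 people screened,
# 2% prevalence of occult cancer, 99% specificity, ~70% sensitivity.
population = 100_000
prevalence = 0.02
sensitivity = 0.70
specificity = 0.99

with_cancer = population * prevalence                 # 2,000 people
without_cancer = population - with_cancer             # 98,000 people

true_positives = with_cancer * sensitivity            # 1,400 cancers caught
false_positives = without_cancer * (1 - specificity)  # 980 false alarms

ppv = true_positives / (true_positives + false_positives)
print(f"PPV: {ppv:.0%}")  # prints "PPV: 59%" -- so ~41% of positives are false
```

With these assumptions the test is 99% specific, yet roughly two out of every five positive results are wrong, which is the commenter's "about half."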
Specificity is not defined correctly, as Eric says.
Later: “So a test that looked at 100,000 individuals and detected cancer in 70% of them would find 700 cancers. If it had a 99% specificity, it would tell 1,000 people who do not have cancer that they had the disease.”
You cannot calculate this without knowing the prevalence of disease….
It is easy to get tripped up applying these definitions (it happens all the time!), but this is not being pedantic. Specificity (true negatives divided by the number without disease) is important, as is the prevalence of disease in the tested population. Sensitivity and specificity must both be interpreted together with the prevalence of disease to evaluate the value of the test. This is why Dr. Maitra says the following:
>>Anirban Maitra, a pancreatic cancer researcher at MD Anderson Cancer Center, said that if you look just at pancreatic cancer, not all cancers, it’s likely almost 1,000 people who don’t have cancer would be identified for every 15 that are diagnosed. “It may be better to apply tests of this nature in a pre-selected high risk population (mutation carriers, or cohorts being followed for cancer surveillance due to some concurrent high risk features) before going all in on a general population,” he said.<<
An explanation of Grail's reported sensitivity and specificity would naturally segue to an explanation of the importance of disease prevalence . . . and into Dr. Maitra's point. Discussions about screening are a nice opportunity to demonstrate application of these definitions.
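Dr. Maitra's point can be illustrated with a small, hypothetical sketch: holding the article's 70% sensitivity and 99% specificity fixed, the positive predictive value collapses as prevalence falls. The `positive_predictive_value` helper and the prevalence values below are illustrative assumptions, not Grail data.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive test results that are true positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Same test, three hypothetical populations: a 2% prevalence cohort,
# a 0.5% cohort, and a rare cancer in the general population.
for prevalence in (0.02, 0.005, 0.0001):
    ppv = positive_predictive_value(0.70, 0.99, prevalence)
    print(f"prevalence {prevalence:.2%}: PPV {ppv:.1%}")
```

At a prevalence of one in ten thousand, fewer than 1% of positive results are real, which is why applying a test like this to a pre-selected high-risk population is attractive.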