The journal Prostate Cancer and Prostatic Diseases is not widely known for its high drama. So it might come as a surprise that in 2018, it published a letter to the editor that would end up drawing curse words from urologists and making biomedical executives nervous.
Scientists at Memorial Sloan Kettering Cancer Center were writing to point out that a recent study about a diagnostic test had left out data showing how many potentially aggressive tumors might have been missed. Would the authors kindly publish those numbers?
Two of the study authors responded, and refused.
Later in 2018, a colleague of the letter-writers — a biostatistician who’d helped develop a competing product — followed up directly with the test’s manufacturer, Beckman Coulter, asking for the missing data to be released. The company had funded the study, and six of the 12 authors were Beckman employees. The response? Talk to our lawyers.
“It is our practice not to release raw data that could be subject to misinterpretation or misuse or is outside the parameters of the published study,” explained Beckman Coulter spokesperson Roslyn White in an email to STAT.
But when asked to review the exchange in the pages of the journal, more than 10 experts unaffiliated with either camp — including oncologists, epidemiologists, biostatisticians, and bioethicists — said they found the refusal to reveal these data questionable, both scientifically and morally. One clinician called it a “glaring omission.”
Many see it as just one example of a wider phenomenon in which science is spun for the benefit of companies marketing clinical tests. In 2013, a review of 126 diagnostic accuracy studies found that around a third of them misrepresented their findings, making the tests appear more useful than they actually were. External researchers wonder if that is what’s going on in this study, which a former Beckman executive told STAT was in part designed to persuade insurers to pay for the test.
“If they have the data and are unwilling to share it, I don’t trust that,” said Cecile Janssens, an epidemiologist at Emory University, adding, “If you believe you have a good product, you should show the data. You should not just show the data you want to show.”
Jon Deeks, a professor of biostatistics at the University of Birmingham, in England, agrees. “I think there is something being hidden there. … It’s a smokescreen. It’s one way of saying, ‘We’re not going to tell you what the results are because, in this case, they’re not favorable.’”
Ironically, the test in question, called the Prostate Health Index, was first conceived as a way to help defuse controversy.
The prostate is a gland about the size of a walnut that lives underneath a man’s bladder, secreting liquid food for sperm. Be they healthy or sickly, prostate cells release a protein called prostate specific antigen, or PSA, into the blood, and there’s often more of it floating around when the gland is overtaken by cancer.
Who should get PSA screening, though, is a matter of bitter disagreement, because a man’s results can have so many different meanings. An elevated PSA level could mean an aggressive cancer that should be treated, or a tumor so lethargic it would probably have no effect on a patient’s health at all. Or it might simply mean a man’s prostate is inflamed.
The pro-screeners say it’s an important way of catching cancers early. But they acknowledge it comes with downsides. PSA testing causes more men to get biopsied — having a needle jabbed through the rectal wall, which carries some risk of bleeding and infection. When cancer is found, it’s not always clear how lethal it is, leading some patients to undergo unnecessary, invasive treatments for innocuous tumors. And who wants to risk incontinence and erectile dysfunction from a procedure he didn’t need?
So researchers and companies have been racing to develop tests that will allow fewer men to be biopsied.
That’s where tests like the Prostate Health Index come in. Approved by the Food and Drug Administration in 2012, this test combines a few different measurements of blood proteins into one number, which studies showed to be significantly better than the usual PSA results at identifying whether a man did, in fact, have prostate cancer. A rising Prostate Health Index score has been associated with a nearly fivefold increased risk of cancer in general, and a nearly twofold increased risk of aggressive cancer.
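The article doesn’t spell out how the measurements are combined, but the commonly published formula for the score multiplies a ratio of two PSA variants by the square root of total PSA. A minimal sketch, assuming that published formula and the units in which the values are typically reported (the numeric inputs below are made up for illustration, not patient data):

```python
# Sketch of the commonly published Prostate Health Index formula
# (assumed here): phi = (p2PSA / freePSA) * sqrt(totalPSA),
# with p2PSA in pg/mL and free/total PSA in ng/mL as typically reported.
from math import sqrt

def prostate_health_index(p2psa_pg_ml, free_psa_ng_ml, total_psa_ng_ml):
    """Combine the three blood-protein measurements into one PHI-style score."""
    return (p2psa_pg_ml / free_psa_ng_ml) * sqrt(total_psa_ng_ml)

# Illustrative, made-up values only:
score = prostate_health_index(15.0, 0.8, 6.0)
print(round(score, 1))
```

The point of the combination is that the ratio of the cancer-associated PSA variant to free PSA carries more signal than total PSA alone, which is why studies found the composite number more informative than the usual PSA result.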
“A lot of patients are asking for it,” said Dr. Quoc-Dien Trinh, co-director of the Dana-Farber/Brigham and Women’s prostate cancer program. “The clientele we see at Dana-Farber, who are there for second or third opinions, they ask for these tests.”
But while there had been a number of promising studies on the Prostate Health Index in more controlled environments, there was only one looking at the effects in a real-world clinic — and that kind of research can be important for getting both government and private insurers to cover the costs of a test.
“You want to be able to demonstrate with evidence why a test has clinical value,” explained Dr. Michael Samoszuk, who was then chief medical officer at Beckman Coulter, but has since moved to a life sciences startup called CytoVale. “Payers would like to see that kind of information. There’s nothing nefarious about that. People want to know why it’s useful.”
What Beckman did was to contact four big urology practices already using the Prostate Health Index and ask the doctors to participate in a study. Whenever the clinicians came across a potential candidate for Beckman’s test — a man of 50 or older who’d recently had an elevated PSA but whose prostate wasn’t enlarged, tender, or lumpy — they’d ask him if he wanted to take part in this research. The authors ended up comparing the cases of 506 men who got the Prostate Health Index test to those of 683 “similar patients” who’d been seen by the same physicians in the two previous years.
Their results, published online in November 2017, seemed like a triumph. When surveyed, the doctors overwhelmingly reported that Beckman’s test had changed the way they managed patients’ care, and electronic health records provided evidence: Among those evaluated with the Prostate Health Index, 36% had gotten biopsies, while in the historical group, that figure was 60%.
The authors also wrote that among the men who got the Beckman test, doctors ended up finding fewer of the indolent tumors that are best left undetected.
That’s when the team at Memorial Sloan Kettering chimed in with the letter to the journal, pointing out that the study hadn’t disclosed how many patients had been found to have more aggressive cancers. If you did a little arithmetic, the letter-writers said, it seemed that in the Prostate Health Index group, one potentially dangerous cancer went undetected for every three biopsies not done — a miss rate that they called “concerning.” They asked the authors to release data in which the cancers were broken down by grade.
Yet as the study authors responded — in the pages of the journal, in email exchanges, and in phone interviews — their arguments only raised more questions.
The published reply came from two researchers who worked at urology clinics, rather than at Beckman Coulter: Jay White, then of Carolina Urology Partners, and Dr. Ronald Tutrone, of Chesapeake Urology Associates. They said the missing data were collected, but that the study was designed to understand changes in physician behavior, not to make comparisons about high-grade cancers. The research was “not powered” to address that question, they wrote.
In an interview, Tutrone explained that because higher-grade cancers would be quite rare among the population they were studying, there wouldn’t be enough of them to make statistically sound claims about their incidence.
“They’re a bunch of statistician wonks and took numbers out of context,” he said of the letter-writers. To make the kind of comparison they were suggesting, he said, you’d need a randomized trial in which you gave some patients the Prostate Health Index test while not giving it to others at the same time. That wasn’t how this study, with its historical control group, was put together. “They weren’t equivalent groups of patients,” he said.
He also said that the participating physicians probably did fewer biopsies than they would have in real life because they were primed by the aims of the study. (Both White and Dr. Stephen Freedland, the editor-in-chief of the journal, declined requests for comment.)
To independent experts, though, such arguments negated some of the paper’s most important results.
Imagine a trial of a drug designed to look for improvement in how far a patient can walk, said Patrick Bossuyt, professor of clinical epidemiology at the University of Amsterdam: “Wouldn’t decision-makers also like to see the side effects, the number of hospital admissions, etc., even if the primary outcome measure was, say, walking distance?”
“The authors should report on the numbers of high grade tumors in the two groups. This is the most important end point. Without that data, there may be no meaningful conclusions on reporting the reduction in the frequency of biopsies,” Dr. Massimo Loda, head of pathology at Weill Cornell Medicine and NewYork-Presbyterian, said in an email.
And when Deeks, the biostatistician in Birmingham, used the letter-writers’ numbers to calculate the higher-grade cancers himself, he found the difference between the two groups was statistically significant. “The argument the company put out there — that the study wasn’t powered for the comparison — is rubbish,” he said.
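The kind of check Deeks describes can be done with a standard two-proportion z-test. A minimal sketch of that test follows; the high-grade cancer counts below are hypothetical placeholders (the study’s actual counts were never released), though the group sizes of 506 and 683 come from the published paper:

```python
# Two-proportion z-test: is the rate of high-grade cancers detected in
# group 1 significantly different from group 2? The cancer counts used
# in the example call are HYPOTHETICAL; only the group sizes (506, 683)
# come from the published study.
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)               # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal tail, via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 40 high-grade cancers among the 506 men who got
# the index test vs. 90 among the 683 historical controls.
z, p = two_proportion_z_test(40, 506, 90, 683)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Whether the real difference clears the significance bar depends entirely on the withheld grade-by-grade counts, which is precisely why the letter-writers asked for them.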
As if the tale weren’t twisted enough already, there were also less public exchanges with Beckman Coulter over the missing data.
Andrew Vickers, a biostatistician at Memorial Sloan Kettering, had helped to develop a similar prostate cancer test called the 4Kscore, and receives significant royalties. He found White and Tutrone’s reply to his colleagues unconvincing, so he wrote directly to Beckman. “That’s normally what happens in science,” he said. “You do an experiment and release the results. That’s exactly why I wrote to them and said, ‘Hey, what are your data?’”
“There was a flurry of emails and I said, ‘Look we need to escalate this to the legal department,’” recalled Samoszuk, then chief medical officer at Beckman. “There were some questions about the motivation of the people at Sloan Kettering. They had a commercial interest.”
Even so, if Beckman’s concern was about not sharing their data with a competitor, bioethicists say there are ways to avoid that while still being transparent. “Provide it to the journal editors and let them decide,” suggested Holly Fernandez Lynch, biomedical ethicist at the University of Pennsylvania. “Provide the data to some independent middle person.”
To her, the biggest ethical concern was that patients had allowed their medical histories to be mined in the name of science. “They took on risks and burdens to answer scientific questions, they’re not beholden to this company,” she said. Of the data in question, she added, the patients “probably want it used for the public good. Sharing it will facilitate the maximum public value.”
Jonathan Kimmelman, a bioethicist at McGill University, added, “Unless the authors can offer a principled reason for refusing to provide this information, it’s not up to them to declare by fiat that these data are irrelevant.”
All these questions and critiques do not mean that there are hordes of men who’ve gotten the Prostate Health Index test now walking around with voracious, undiagnosed cancers. After all, these tests are often used in conjunction with others, and patients who get them are typically followed by their urologists even when their scores are low.
The chief of the prostate and urologic cancer research group at the National Cancer Institute, Dr. Howard Parnes, explained that if you tighten the criteria for doing biopsies, you may miss some higher-grade cancers, but you might still be reducing harm overall. “People have to understand that there is a bit of a trade-off,” he said.
For the critics, the story of the unreported Beckman data does not come with a moral about which test urologists should use. Instead, they see it as a cautionary tale about money in medicine, and how that might sway the reporting of results. That isn’t always apparent to someone clicking through the scientific literature; many abstracts and data tables are not as clear-cut as they seem.
As Bossuyt, the professor in Amsterdam, put it: “The unwillingness to release data, that happens. But many articles never get a letter to the editor — most, actually.”