Negativity in politics is a drag, a coarsening of the debate that drowns out meaningful discussions of facts and policies. When it comes to science, however, we need more negativity — negative findings, that is.
Critics of the status quo in science have long lamented journals’ tendency not to publish negative findings, meaning studies that fail to support their hypotheses. That doesn’t mean such studies lack useful information. Quite the opposite. For instance, a study that finds a drug doesn’t work against an infection might be important — and actionable — information.
Yet positive results speak louder — they’re more appealing to readers and draw more media coverage. As a result, researchers can’t read about studies that might steer them away from a fruitless path or offer valuable clues about where to look next.
That attitude appears to be changing. The American Journal of Gastroenterology, a well-regarded publication for the specialty, will be devoting its entire November issue to “negative” results.
“There’s a lot of great research out there and sometimes the results are negative,” said Brian Lacy, co-editor in chief of the journal. “So many of these negative studies are more important than positive results.”
The journal began calling for articles for the issue — the idea for which Lacy credits to his co-editor, Brennan Spiegel — in early 2016. It received nearly 100, he said, many of which were “great studies from well-known investigators” grateful for the chance to find a home for papers they didn’t think they could get published elsewhere. Several of the articles “will actually change how people will practice,” Lacy said.
Although the AJG might be the most prominent journal to take such a step, it’s not the only one. The Journal of Negative Results in BioMedicine has been solely publishing “non-confirmatory” data since 2002. As the journal explains: “publishing well documented failures may reveal fundamental flaws and obstacles in commonly used methods, drugs or reagents such as antibodies or cell lines, ultimately leading to improvements in experimental designs and clinical decisions.”
Somewhat newer are the Journal of Negative Results, a biology and ecology publication, which launched in 2004, and the Journal of Pharmaceutical Negative Results, which dates back to 2010.
But these important journals aren’t cited all that often. That speaks to science’s strong positivity bias. That bias exists for many reasons, from the human desire for big, splashy stories to the fact that successful clinical trials sell more reprints. And the bias drives research: when scientists know they need positive results to get into the big journals, which in turn earns them grants, promotions, and tenure, they’ll be pushed in that direction. It means we need serious efforts, and incentives, to publish negative studies, to balance out those that reward positive publications.
As for the gastro journal, for the moment the negative studies’ issue is a pilot with potential but no firm plans for a second issue. However, the editors have discussed publishing at least one negative study a month in the future, Lacy said.
That would be welcome. Though we could use less negativity in our politics, when it comes to science, bring it on.
This article was corrected to indicate that the entire November issue will be devoted to negative findings.
Unfortunately, one additional reason why negative results don’t get much attention is that they are all too easy to get. Even a small mistake can lead to a null result, where a more carefully controlled hypothesis test would have found significance. For null results to be informative, readers must have an exceptionally high degree of trust that the researcher actually did the best possible test. False positives also arise, of course, but they are harder to obtain and easier to check.
One of the worst offenders in this category is Health Affairs, in their coverage of wellness. No doubt they noticed that their long-since-debunked 2010 puff piece has 500+ citations, while a 2013 negative article on wellness has only a handful. So their new strategy is to spin pieces positively, which increases citations and hence their “Impact Factor.”
Two recent examples. First, the state of Connecticut has a program that flouts guidelines (for example, requiring mammograms before age 40) and demands far too many checkups. Its costs increased. The HA article (written, of course, by wellness promoters, because no one is going to pay consultants to prove their product doesn’t work) called this failure a success, overlooking obvious flaws. For example, ER visits went way down because co-pays increased dramatically, yet the authors attributed the decline to the wellness program. The incriminating information was all right there…and yet they chose to put a happy face on it. Here is the critique, which links to the article: https://theysaidwhat.net/2016/04/14/connecticut-state-employee-wellness-program-wins-by-losing/
In another recent example, a research team (which gets money from the wellness industry) used all sorts of incentives and penalties to try to get employees to lose weight. None of them worked. They concluded that incentives probably could get people to lose weight but they just hadn’t found the right incentive yet–like the old joke that ends: “There must be a pony in here somewhere.” https://theysaidwhat.net/2016/03/04/does-the-new-york-times-now-support-corporate-fat-shaming/
One partial solution is ridiculously straightforward: compute the ratio of citations received by positive articles to citations received by negative ones. Then, when calculating the impact factor, weight citations to negative articles by that ratio, so the two kinds of article count equally on average. That removes the bias in favor of positive articles. There are still plenty of other incentives to be positive, but this is a start.
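To make the arithmetic behind that proposal concrete, here is a minimal sketch. The function name and all the numbers are hypothetical, invented purely for illustration; the only idea taken from the comment is weighting citations to negative articles by the positive-to-negative per-article citation ratio before computing an impact-factor-style average.

```python
def adjusted_impact_factor(pos_cites, neg_cites, pos_articles, neg_articles):
    """Impact-factor-style average in which citations to negative articles
    are up-weighted by the positive/negative per-article citation ratio,
    so a negative article counts as much as a positive one on average."""
    avg_pos = pos_cites / pos_articles   # mean citations per positive article
    avg_neg = neg_cites / neg_articles   # mean citations per negative article
    weight = avg_pos / avg_neg           # the "opposite ratio" from the comment
    weighted_cites = pos_cites + weight * neg_cites
    return weighted_cites / (pos_articles + neg_articles)

# Hypothetical journal: positive articles are cited twice as often per
# article (10 vs. 5), so each negative citation is counted twice.
print(adjusted_impact_factor(pos_cites=500, neg_cites=10,
                             pos_articles=50, neg_articles=2))  # → 10.0
```

In this toy example the unweighted figure would be 510 / 52 ≈ 9.8; the weighting lifts it to 10.0, matching the per-article citation rate of the positive papers, which is exactly the equalizing effect the commenter is after.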