In most cases, being called a parasite would be a slur. But for a cheeky group of scientists, it’s now being worn as a badge of honor.
The Parasite is a new award for scientists who replicate the work of others. Started by Casey Greene and colleagues at the University of Pennsylvania Perelman School of Medicine, the Parasites competition is designed to “recognize outstanding contributions to the rigorous secondary analysis of data,” its website says. What it secondarily does, though, is serve as a great example of an incentive for good behavior — which science could use more of.
The name of the prize is a jab at the editors of the New England Journal of Medicine, who used the expression “research parasites” to describe how some scientists feel about those who conduct studies using data other people have generated. Many scientists reacted with disbelief and disdain, and while the journal later clarified that it wasn’t necessarily endorsing the idea, it also didn’t come out and say the term sent the wrong message.
The award — which generated a fair amount of buzz on social media — is unlikely to significantly shift the opinions of the unnamed scientists for whom the NEJM purportedly spoke. But it speaks to the urgent need for incentives to reward those scientists doing what’s best for their field. Unfortunately, our current incentive structure — based almost entirely on publishing in prestigious journals, with large cash rewards in some countries — discourages sharing, replication, and, some might argue, careful science.
Greene’s group isn’t alone in trying to find ways to stimulate replication through friendly competition. Consider a recent proposal by Dr. Michael Rosenblatt, chief medical officer at Merck, one of the world’s biggest drug companies. Writing in Science Translational Medicine, Rosenblatt calls on universities and industry — respectively massive generators and funders of science — to promote replication research. How? Through a “money-back guarantee,” wherein companies that licensed technology from universities would have their money refunded if that technology later turned out to be a dud.
As Rosenblatt sees it, that stick could also be a carrot for universities, as companies “would also be likely to pay a premium over current rates for data backed by such assurance over ‘nonguaranteed’ data, even from the same university. This approach places the incentive squarely with the investigator (including his or her laboratory) and the institution — precisely the leverage points for change.”
As Rosenblatt admits, the scheme is more of a trial balloon than a finished product. It has obvious problems: For starters, it likely incentivizes positive findings, not replication per se, and science already struggles with a bias toward the publication of positive results. What’s more, do we really want to push universities even more into carrying the drug industry’s water? Will they take the money and run away from liberal arts and basic research in areas that aren’t appealing to pharma companies?
It’s easy to pick on details. The important thing is to encourage a frank discussion of why today’s incentives are driving science in the wrong direction, and of potential alternatives. A group of major figures wrote last year in Science that such alternatives should reward scholars “for publishing well rather than often.” Make tenure decisions based on “the importance of a select set of work, instead of using the number of publications or impact rating of a journal as a surrogate for quality,” they argued. While we’re at it, why not reward mentoring?
Economists like to say that there are no bad people, just bad incentives. And it seems safe to say that science suffers from the latter. So send us your ideas. The incentive? Showing up in a future Watchdogs column.