
Science is largely an additive pursuit, with advances building on previous advances. Only rarely does a major breakthrough, such as the ones Thomas Kuhn described in his landmark 1962 book “The Structure of Scientific Revolutions,” occur to shake the foundations of a given field of research.

But a new study shows just how incremental science really is. Researchers at the University of California, Los Angeles, found that about 60 percent of more than 6.4 million papers in biomedicine and chemistry published between 1934 and 2008 reported findings that built on existing knowledge rather than forged novel connections in their specialty.


The authors of the new study say their findings show that the publish-or-perish model is stifling innovation, leading to timid science that eschews high-risk, high-reward ideas in favor of safe bets — the kind that reliably earns citations in future papers.

“Published papers that make a novel connection are rare but more highly rewarded,” said Jacob Foster, a UCLA sociologist and co-author of the paper, which appears in the American Sociological Review, in a press release. “So what accounts for scientists’ disposition to pursue tradition over innovation? Our evidence points to a simple explanation: Innovative research is a gamble whose payoff, on average, does not justify the risk. It’s not a reliable way to accumulate scientific reward.”

In some ways, the phenomenon Foster is describing can be explained by the bad incentives of the “publish-or-perish” model of academic productivity. Counting notches on a CV encourages researchers to balkanize their findings rather than bundle them into a single, more emphatic article — sort of like a delicatessen forcing customers to buy bread, cheese, salami, and condiments and assemble their own sandwich. (Indeed, scientists derisively call the practice of chopping up data into several papers “salami slicing.”)


But there’s another way to look at the findings. The publish-or-perish model may push some to be very conservative in their work, but at the other extreme, it creates an incentive to cut corners and commit misconduct in the service of a home-run article in a sexy journal like Cell, Science, or Nature. Even a single paper in one of those titles can mean a fast track to a job at a prestigious institution. Evidence indicates that these journals are more likely to retract articles than other, less splashy titles – partly because more people scrutinize these journals, but partly, in all likelihood, because their high-stakes appeal is a magnet for cheats.

You could even argue — and we would — that there is already too much emphasis on “big wins” and prizes. The Nobels, after all, don’t reward tiny advances; they reward the much-romanticized “Eureka!” moments that are few and far between. Journalists choose to write about big findings, the ones touted in press releases shouting “cure” and “breakthrough.” And researchers often have trouble publishing confirmatory studies in prestigious journals because editors worry that no one will cite them, driving down their journals’ Impact Factor, an over-worshipped indicator of how often other scientists reference work in a particular title.

Cleaving closely to the past, then, may not be as bad for science as the UCLA researchers suggest — and even they acknowledge in the paper that “not all scientists should pursue risky strategies.” Incremental advances in knowledge can be critically important for future research. Indeed, it should be reassuring to know that most of the time researchers aren’t swinging for the fences but are content to hit for average. Here’s to Slow Science.
