
The impact factor, a decades-old metric that purports to measure the quality of journals, is a bit like a corrupt bureaucrat: overly powerful, largely incompetent, and widely feared.
But the bureau has a new boss: This week Thomson Reuters announced it would sell its intellectual property and science arm, including the impact factor formula, to a pair of international investment houses.
The transaction is still in its early stages, so it’s too soon to say what plans, if any, Onex Corp. and Baring Private Equity have for the impact factor.
But transitions can be good times to consider a change — and there’s plenty of reason to think that the impact factor needs some shaking up.

Simply put, the IF, as it’s sometimes abbreviated, is a way of ranking journals by calculating how often scientists cite the papers that appear in their pages. As such, it’s an approximation of the intellectual heft and rigor of the research they publish. And, as a result, universities and funders of science weigh impact factor when they evaluate academics’ output to make decisions about promotion and grant-making. Publishing in high IF journals has become shorthand for quality research.
The seeds of the impact factor were planted in the 1950s and early 1960s by Eugene Garfield (disclosure: I.O. used to work for The Scientist, which Garfield founded) and his colleagues. The goal was reasonable: devise a way of accounting for a journal’s size when measuring how often, on average, scientists cited the articles each title published in a given year. (Note that many of the most prestigious journals in science — the Lancet, Nature, the New England Journal of Medicine, Science — existed decades, even centuries, before the arrival of the impact factor.)
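For readers who want the mechanics: the calculation Garfield’s idea grew into is, in its standard form, a two-year average. The short sketch below shows that arithmetic in Python; the journal figures are invented for illustration, not drawn from any real title.

```python
# A minimal sketch of the standard two-year impact factor arithmetic.
# The numbers below are invented for illustration.

def impact_factor(citations_this_year, citable_items_prior_two_years):
    """Citations received this year to articles a journal published in the
    previous two years, divided by the number of citable items it published
    in those two years."""
    return citations_this_year / citable_items_prior_two_years

# A journal that published 400 citable items over the prior two years and
# whose articles drew 3,000 citations this year would report an IF of 7.5.
print(impact_factor(3000, 400))  # 7.5
```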
Since then, impact factor has become something of an ungovernable child. To critics, it has become an end in itself: a number treated as a stand-in for the scientific worth of individual papers, which it gauges poorly. Those critics point out that the IF suffers from a major statistical flaw. It relies on mean rather than median citations, which greatly undermines its usefulness for comparing journals, according to a new article on bioRxiv by biologist Stephen Curry and colleagues.
The result of this flaw is that a few highly cited papers can produce a spike in the measure. That’s akin to comparing basketball teams by the heights of their centers rather than the average of all their players. Indeed, Curry — and no, not that Steph Curry — and colleagues found that the vast majority of articles in a given journal receive fewer citations than the impact factor would predict. That phenomenon has prompted a few publishers, but not many, to show readers how citations are actually distributed across their articles. And other critics point out that the IF, like any other metric, can also be gamed.
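A toy example makes the mean-versus-median problem concrete. In the hypothetical citation counts below (invented for illustration, not taken from Curry’s data), a single blockbuster paper drags the average far above what the typical article in the journal actually receives.

```python
from statistics import mean, median

# Hypothetical citation counts for ten papers in one journal: most are cited
# a handful of times, one is a blockbuster.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 120]

print(mean(citations))    # 14.1 -- the impact-factor-style average
print(median(citations))  # 2.5  -- what the typical paper actually gets
```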
Coincidentally, news of the Thomson Reuters division sale broke the same day that the American Society for Microbiology (ASM), which publishes a dozen academic journals, announced that it was scrapping the metric entirely. And even top editors from Nature (which in the past has taken such pride in its IF that it ran subscription promotions priced in dollars to match the figure) and Science were Curry’s coauthors on a paper widely seen as critical of the IF.
“To me, what’s essential is to purge the conversation of the impact factor,” Stefano Bertuzzi, the ASM’s chief executive, told Nature. “We want to make it so tacky that people will be embarrassed just to mention it.”
Of course, scrapping the impact factor would require tenure and grant committees to find another way to judge the merits of a given applicant’s work. One step would be for journals that want to preserve the impact factor to at least publish their citation distributions alongside it. Doing so “should help to refocus attention on individual pieces of work and counter the inappropriate usage of [journal impact factors] during the process of research assessment,” Curry’s group wrote.
But the best way to judge quality, say many skeptics of the IF, is quite old-fashioned. Throw away the CliffsNotes and read the paper. If that’s too difficult for the people making funding decisions, perhaps the wrong people are making those calls.