
They’re not the kind of gangs that smuggle drugs and murder people. But people looking closely at the scientific literature have discovered that a small number of scientists belong to a different kind of cartel — groups that band together to cite each other’s work, gaming the citation system to make their studies appear more important and worthy of attention.

These so-called citation cartels have been around for decades, as the publishing consultant Phil Davis has pointed out. Thomson Reuters, which until recently owned the Impact Factor for ranking journals, has even sanctioned periodicals for evidence of cartel behavior.

Davis, who clearly has an eye for this kind of thing, unearthed a citation cartel a few years back when he came across a 2010 article in Medical Science Monitor with a glaring feature: Of its 490 references, 445 were to articles in an emerging medical journal called Cell Transplantation. Of the rest, 44 were to papers in … Medical Science Monitor. Davis also noticed this: “Three of the four authors of this paper sit on the editorial board of Cell Transplantation. Two are associate editors, one is the founding editor. The fourth is the CEO of a medical communications company.”


That wasn’t a one-off. Davis found similar cases involving the same authors. In 2012, Thomson Reuters sanctioned three of the four publications involved by denying them Impact Factors. The firm did the same to six business journals in 2014.

For authors, the payoff is clear: The more citations your articles generate, the more influential they appear. And journals have similar incentives: Encourage authors to cite papers that appear in your pages and you’ve created the illusion that your journal is highly influential. Indeed, the controversial Impact Factor ranks scientific periodicals on how frequently their articles earn citations. The lure is so strong that editing services have been found to produce papers — citations included — for a charge.
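The Impact Factor itself is simple arithmetic — part of why it is so easy to game. As a rough sketch (the numbers below are hypothetical, not drawn from any real journal):

```python
# Sketch of the standard two-year journal impact factor:
# citations received this year to articles from the previous
# two years, divided by the citable items published in those years.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor for one journal."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal whose 2010-2011 articles drew 450 citations in 2012,
# from 150 citable items, scores 3.0. A cartel paper dumping a few
# hundred extra citations into the numerator moves this dial directly.
print(impact_factor(450, 150))  # 3.0
```

Because the numerator counts citations regardless of where they come from, a single heavily self-referencing review article can noticeably inflate a small journal’s score.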


But sorting out true collusion from innocent network effects has historically been difficult. After all, some routes to generating citations are honest and fair: mentioning the work of frequent collaborators, for example, or working in a small field with few other scientists.

A new paper joins a small band of researchers trying to identify these cartels — before they do too much damage.

The paper, in Frontiers in Physics, comes from a group at the University of Maribor in Slovenia. The researchers apply existing data-analysis tools and demonstrate that they can pick out cartel behavior in an artificial list of publications. The work is preliminary, and with good reason. “Trying to conclude whether articles have been published with the specific intent to increase the citation statistics of a cited journal, and in particular the journal’s impact factor, is perhaps a slippery slope,” wrote a different group of bibliometricians — yes, this field of study has its own name — in June.

The authors of the new paper agree. Declaring that two authors have engaged in inappropriate back-scratching “is very dangerous, because we cannot ever be sure that this indictment really holds in the real-world,” they write. “We can only indicate that there is a high probability of citation cartel existence, but this fact needs to be confirmed using a detailed analysis.”

However large the cartel phenomenon, it’s just one among many illnesses afflicting modern science, which tends to reward quantity of metrics — more citations, more papers, more grant money — over quality.

As seductive as metrics are, however, they’re often fool’s gold. It’s sort of like cutting that Cali cocaine with baking powder — a subject about which we promise we have no knowledge. It’ll work on the street for a little while. But when you’re found out, it won’t be pretty.

  • I long suspected this was the case. I wish I could say it’s a comfort to find evidence that I was correct.

    It’s rather depressing to know much-respected fields of science are falling victim to the Wikipedia paradigm for citations.

  • Here’s an example of a cartel operation in reverse.
    In 1981 P.D.P. Wood published a statistical analysis of the methodology behind the selection of the 7 countries in the Seven Countries Study.

    Seven countries were selected from 21 to demonstrate a positive relationship between national mean consumption of saturated fat, and average mortality from coronary heart disease. The 21 countries fall into two distinct groups, one of 15, the other of 6, with a pooled correlation of -0.037 NS. Among the 15, fewer than 1 per cent of all possible drawings of 7 gave a significant positive correlation. The percentage increased rapidly as one or more members of the 6 were included in the draw. No drawings from the 15, and fewer than 10 per cent of all possible drawings from the 21, yielded a correlation greater than or equal to that obtained from the final selection of 7. It is concluded that the final selection was a biased sample.

    Wood, P. D. P. “A possible selection effect in medical science.” The Statistician (1981): 131-135.

    This would seem to be a conclusive and irrefutable finding. Wood even described the saturated fat hypothesis as obsolescent, because of the failure to confirm it in multiple trials by that date.

    Yet this paper has been cited 8 times since 1981.

    It’s important to note that it was not Keys’ intention to deceive; rather, he seems to have chosen the countries in which teams of researchers were already interested in his ideas and willing to put in the considerable work needed to complete a detailed ecological comparison. That is bias of a different sort.
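Wood’s argument is essentially a resampling one, and it is easy to sketch. The snippet below is an illustration with synthetic data (the real country-level values are not reproduced in the comment): it enumerates every possible draw of 7 points from 21 and counts how often a strong positive correlation appears by chance.

```python
# Hedged sketch of Wood's selection-effect argument, using made-up data:
# 15 points with no real relationship plus 6 that happen to line up.
from itertools import combinations
from math import comb, sqrt
import random

random.seed(0)
data = [(random.random(), random.random()) for _ in range(15)]
data += [(0.5 + 0.08 * i, 0.5 + 0.08 * i + random.gauss(0, 0.02))
         for i in range(6)]

def pearson(pairs):
    """Pearson correlation coefficient of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = sqrt(sum((y - my) ** 2 for _, y in pairs))
    return sxy / (sx * sy)

# Enumerate all C(21, 7) = 116,280 possible selections of 7 "countries"
# and count how often a strong positive correlation shows up anyway.
total = comb(21, 7)
strong = sum(1 for combo in combinations(data, 7) if pearson(combo) > 0.8)
print(f"{strong}/{total} draws exceed r = 0.8")
```

The point, as in Wood’s analysis, is that the fraction of favorable draws depends heavily on whether the handful of “cooperative” points are included in the selection.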

  • I had always understood that this problem was curtailed if not eliminated by the “peer review” process prior to publication. The “peers” doing the reviewing are supposed to be independent of the author. Are you saying that is not actually what happens and the “peer review” system is corrupt, or are these publications not considered peer reviewed and thus of suspect credibility to begin with? Is there in fact a recognized standard for acceptable peer review?

    • Peer reviewers are unpaid, and it’s a chore to check every reference. All that matters is that a reference backs up the claim made in the text, but sometimes (too often) this isn’t the case. You only have to read papers in the nutrition literature and check the references for the “received wisdom” aspects of their discussions to see that sometimes it’s impossible to find good references to back up statements that “everybody knows”, such as the statement that saturated fat causes heart disease.

  • It’s a game that was already being played 25 years ago, when there were far fewer fake journals than there are now. Scientists and doctoral students within the same organization would cite each other’s articles, adding a sentence here and there to justify inserting a citation of a buddy’s paper. I was once surprised to find a woman’s name added to the author list of my paper by my PhD supervisor, even though I had never met her and did not know who she was. Apparently someone high in the hierarchy had asked my supervisor to include her name because she held a managerial role over the instrument used for data collection.

  • another possible descriptive term for this behavior: log·roll·ing
    noun (North American, informal): the practice of exchanging favors, especially in politics by reciprocal voting for each other’s proposed legislation.

  • At the end of the article the “cartel phenomenon” is called an “illness afflicting modern science.” But the article doesn’t discuss how big the problem is. I suspect the “cartels” involve an extremely small proportion of scientists, publishing in low-impact journals. In that case, claiming that science is afflicted by this illness is like saying my body is afflicted with corrupt cells when I burn my tongue sipping my tea – it’s overblown and missing perspective.

    On the other hand, if the cartels are widespread, it’s a real problem that needs to be addressed. Since the article doesn’t clarify, it risks doing little more than tempting readers to click a headline and reinforcing their existing opinions.

    • Brian, in wellness it is absolutely the rule, not the exception. Every legit economist (and, by the way, STATNews) has found that workplace wellness is nonsense, but a few low-impact publications have literally never published a negative “research” article about it.

  • It’s worst of all in wellness. No legitimate high-impact journal has published a favorable article on companies “playing doctor” with their employees since 2010 (and that one, following withering criticism, is all but retracted).

    So low-impact journals like the American Journal of Health Promotion publish pro-wellness nonsense by authors A, B, and C (all of whom make their living defending wellness), peer-reviewed by authors D, E, and F. The next month, the roles are reversed.

    Then they cite each other’s articles and call one another “respected researchers” because they have so many peer-reviewed articles. Just this week, that journal announced a new “Fabricator-in-Chief.” This guy doesn’t just make up data — he brags about it.

    • I agree. In the nutrition epidemiology literature, many papers are clearly peer-reviewed by people too aligned with the authors’ point of view to spot the papers’ faults and call for correction. Nor are they reviewed by people who understand that, when evidence is inconclusive, a controversial question requires a balanced analysis of the controversy.
      See for example Richard Feinman’s comments on this Harvard paper (one from a notorious production line that gets continual media attention for its overstatements).
      “I think that an editor should be expected to recognize when a MS is about a controversial subject and should sensibly solicit reviews from experts on both sides of the controversy. Whatever the cause, failure to do this might be considered de facto bias. I think that this is very common and is, in my view, the major problem in the medical literature, particularly in this field.”

  • Get rid of the impact factor score. Get rid of the publish-or-perish requirement for physicians/scientists. There’s your solution!!
    Also get rid of medical school rankings…

  • Google Scholar is not a cartel. It includes all citations, from top journals and master’s theses alike. Harzing’s Publish or Perish lets you measure the h-index of journals, so the cartels are not as powerful as suggested here, unless your institution is aligned with only one of the metrics.

    • But if your concern is that someone else is enrolled in a deceitful cabal, then I guess you’re powerless to do much about it, except choose an egregious case and shine a very bright light on it.
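The h-index mentioned above is simple to compute by hand — the largest h such that h papers each have at least h citations. A minimal sketch (not Harzing’s actual implementation):

```python
# Compute the h-index from a list of per-paper citation counts:
# the largest h such that h papers each have at least h citations.

def h_index(citations):
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # the rank-th best paper still has >= rank citations
        else:
            break
    return h

# Five papers cited [10, 8, 5, 2, 1] times give an h-index of 3:
# three papers are each cited at least 3 times.
print(h_index([10, 8, 5, 2, 1]))  # 3
```

Unlike a journal-wide average such as the Impact Factor, this metric is harder to move with a single citation-stuffed paper, which is part of the commenter’s point.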

  • I worked for the American Chemical Society’s journal publishing division for two years. I have never seen a culture so steeped in lying and deceit. In that short time I lost a ton of respect for scientists. So it’s (sadly) no surprise to me that such ‘cabals’ exist, but I certainly hope the industry (and yes, it’s certainly an industry) can police itself back to respectability.

    • “In that short time I lost a ton of respect for scientists.”

      Throughout my life, I too have lost respect for long-held societal sacred cows that were supposed to be the standards of integrity, ethics, and morals. No segment of society, however noble it once was, deserves blind faith that it will act with integrity. Too many people do not realize this fact of life and take prestige over truth and substance.
