
Positive news about a potential Covid-19 treatment — a drug that blocks the receptor for the inflammatory protein interleukin-6 (IL-6) — highlights the hazards of sharing research findings via Twitter and other social media.

Researchers with the large REMAP-CAP clinical trial reported through a variety of channels, most notably Twitter, that the IL-6 receptor antagonists tocilizumab and sarilumab significantly reduced deaths among critically ill patients with Covid-19.


In response to what should have been good news, some experts essentially shrugged it off on social media, in part because several earlier studies had yielded disappointing results.

It’s important to note that REMAP-CAP was conducted after the antiviral drug remdesivir and the anti-inflammatory drug dexamethasone were adopted as the standards of care in some countries, while the earlier studies were conducted before these other drugs were widely used.

We include that detail not to delve into the murky world of anti-IL-6-receptor therapy in Covid-19, but to note that the data from trials testing this approach are difficult to interpret. And for good reason: The many positive and negative studies, several of which have still not been released in sufficient detail to allow for in-depth review, studied patients at different stages of disease, in different countries, and at different times during the pandemic.


Randomized clinical trials usually produce valid and interpretable results, unless they are very small, but it is often unclear — even to experts — how to reconcile conflicting results.

Twitter and other social media platforms, which allow rapid dissemination of top-line findings, tend to generate messages lacking in nuance. We are seeing more and more clinicians and other interested parties come to view proposed Covid-19 treatments in black and white terms — not necessarily due to the underlying quality of evidence but because of the manner in which they have been shared on social media and discussed in academic circles.

The typical approach by the Twitterati has been to take a majority or minority position and defend it. Every complex problem has a solution that is clear, simple — and wrong.

The rapid dissemination of unnuanced summary findings has had real-world consequences and has, ironically, delayed progress toward therapies for Covid-19 — potentially harming patients along the way by limiting the speed at which we can collect evidence and translate it into effective treatments.

Since March, the two of us have been conducting a multicenter clinical trial evaluating the utility of adding sarilumab to the current standard of care versus standard of care alone for patients with moderate-to-severe Covid-19. Since we expected the standard of care to change over time as clinicians learned more about treating Covid-19, our study design — similar to some others (including REMAP-CAP) — was not rendered irrelevant by the earliest trials that showed no benefit when given in the absence of medications subsequently found to be effective (remdesivir and dexamethasone).

Early in the pandemic, when no proven effective treatments were available and the standard of care meant no specific medication, it was easy to enroll participants in our trial because both patients and providers sought out anything that might be helpful.

Yet after negative headlines and posts on Twitter and other platforms appeared about studies investigating IL-6 receptor inhibition as monotherapy, providers lost interest in this approach. That trickled down to their patients. Recruitment in our trial — which was already difficult — slowed to a halt. Some on social media even questioned why anyone would continue to study these drugs, even with multiple studies already in progress.

Ironically, some of the people announcing the death of anti-IL-6 receptor therapies on the basis of a small number of studies were the same ones who strongly advocated for acquiring “high-quality evidence” to guide treatment for Covid-19. To us and to others, “high-quality evidence” usually emerges from the results of multiple well-conducted randomized trials, not just one or two. Acting in a manner that effectively slows the acquisition of additional data in novel clinical settings by discouraging provider enthusiasm and patient participation is a bit like saying “stop counting the votes” because you like the results of the sample of votes already counted.

During the pandemic, it has been necessary to convey results rapidly, before peer review and often even before a manuscript could be written: Time is of the essence. Yet medicine by tweet can be dangerous. Thought leaders, especially those with both professional expertise and a strong Twitter presence, have a particular obligation to be circumspect in their opinion bites, just as they would be required to be in their own peer-reviewed papers.

After the announcement of negative results from trials of anti-IL-6-receptor therapies, whether by tweet, press release, preprint, or peer-reviewed paper, the prevailing view quickly devolved from, “I wouldn’t use or study it,” to, “No one should use or study it.” When uttered by influential experts, such sentiments nudge the larger community to say, “Only a stupid or unethical person would continue to use or study this drug” — and providers on the frontlines listened.

The REMAP-CAP trial was stopped early because a review by its Data and Safety Monitoring Board determined that both drugs were effective — an announcement made via Twitter that included summary numbers without additional exhortation. Criteria for stopping a trial early are typically far more stringent than those conventionally used to say a treatment is effective at the scheduled end of a trial.

Yet the idea that these drugs are ineffective was so ingrained that many comments included, “Show me the data,” and even, “Why are people still studying this?” despite the positive results. It’s a bit like people watching a company’s stock price double over the course of a year, shaking their heads, and saying, “Whoever invested in that was an idiot.”

Clinicians treating patients with Covid-19 can’t always afford to wait for peer review of clinical trial results. In principle, social media could provide a type of peer review, extending a discussion to many more people than are traditionally involved in reviewing a manuscript and writing letters to the editor after it is published.

As the adage goes, however, a lie, particularly a simple and short one, can travel halfway around the world while the truth is still putting on its shoes. Time and again, across disciplines and topics, social media platforms have been shown to amplify this problem in politics. It’s no different with medicine and Covid-19.

The story of how easily disseminated Twitter sound bites affect the real-world practice of clinical medicine and the acquisition of new knowledge is far from over. Just look at the question of whether to administer one or two doses of RNA-based vaccines, which is now being debated, sometimes hotly, on social media.

Here’s what we think everyone should do before retweeting something with a snappy comment: Consider whether the stridency of your opinion matches the strength and breadth of the evidence. And pause for a second to consider whether the message itself might impair the scientific community’s ability to gather the high-quality evidence so desperately needed to guide sound decision making.

Your decision to share and influence others has a direct impact on the ability of science to bring you answers.

Paul Monach is chief of rheumatology at the VA Boston Healthcare System, a researcher focused on clinical trials, and a lecturer on medicine at Harvard Medical School. Westyn Branch-Elliman is an infectious disease physician and a clinician investigator at the VA Boston Healthcare System and assistant professor of medicine at Harvard Medical School. During the pandemic, they have led a multicenter trial of sarilumab for patients hospitalized with Covid-19.
