In the lead-up to the 2016 U.S. election, Russian bots and trolls took to Twitter and other social media platforms to try to turn Americans against one another. But in addition to spreading false information and interfering in the election, a new study reports, a significant number of these malevolent actors tried to sow discord over vaccines.
An analysis of Twitter accounts previously identified as having been operated by Russian bots and trolls found they dove into the vaccine debate as early as January 2015, the researchers reported. They did not take one side or the other, but seemed to tweet pro-vaccine and anti-vaccine messages in roughly equal measure.
On a variety of issues, the overall aim of the Russian campaign appeared to be to erode social cohesion and generate confusion by amplifying the number of voices taking part in these debates on social media. But in the case of vaccines, that could have reinforced the misperception that the science on their safety and effectiveness, which is in fact settled, remains open to debate.
“We do have a very strong suspicion that these accounts were attempting to generate discord,” said David Broniatowski, assistant professor in George Washington University’s department of engineering management and systems engineering and lead author of the study.
In the study, published Thursday in the American Journal of Public Health, Broniatowski and his co-authors focused on Twitter, analyzing tweets from accounts that had been identified as having been operated by Russian trolls, bots, and so-called content polluters whose aim is to disseminate spam and malware. The article is titled “Weaponized Health Communications: Twitter Bots and Russian Trolls Amplify the Vaccine Debate.”
The researchers compared tweets from the accounts to a selection of tweets from other users to see if the trolls and bots commented on vaccines more frequently than average Twitter accounts. They did.
“We found that, yeah, indeed, this was something that does seem to be part of the lexicon of what some of these bots and trolls use,” Broniatowski told STAT.
Even though the division of pro- and anti-vaccine tweets was roughly equal, that still skewed the picture of views on vaccines on Twitter, he noted, pointing to Pew data showing that the vast majority of Americans support vaccination.
“We’ve always been a little puzzled why social media looks like there are so many anti-vaxxers,” said Broniatowski. “So even if somebody’s posting 50-50, compared to the Pew data, there are going to be more anti-vaxxers.”
Dr. Amesh Adalja, an infectious diseases physician and senior scholar at the Center for Health Security at Johns Hopkins Bloomberg School of Public Health, said any skepticism about the safety of vaccines risks feeding the concerns of parents who are worried about having their children vaccinated.
“The more the vaccine ‘debate’ … is amplified it gains an undeserved sense of legitimacy and gives vaccine-hesitant individuals a pretense to forgo vaccination for themselves and their children,” said Adalja, who was harshly critical of the use of vaccination as a tool to turn people against each other, calling it “overtly nihilistic.”
A spokesman for Twitter said that malicious accounts “are likely to target virtually any high profile conversation, since that’s where the views are.”
The spokesman, Ian Plunkett, told STAT that Twitter has aggressively ramped up preventive measures to try to keep such content from general users. In May, he noted, the platform identified and challenged nearly 10 million potentially automated accounts. “We put preemptive measures in place to ensure automated content is filtered from discoverable areas of the services — like trends and search. It’s possible that many users did not see this content before it was suspended,” Plunkett said.
Other experts, too, were unsurprised that Russian trolls and bots would delve into vaccine discussions, given the heat the topic can generate.
“Vaccination links to deep values around protection, health, harm, and the social contract,” said Julie Leask, an associate professor at the University of Sydney’s Susan Wakil School of Nursing and Midwifery who researches vaccine refusal. “People become highly invested in the discussion, and highly reactive to the notion that people refuse vaccines. The expression of sentiment at the margins — very pro- and very anti-vaccine — generates emotional energy and clicks.”
Adam Dunn, an associate professor in the Center for Health Informatics at Australia’s Macquarie University, said responding to this type of activity by internet bots and trolls would be challenging for public health authorities and may depend on rooting out the malicious accounts.
“The responsibility of managing the health of online conversations may … fall to Twitter itself, and work like this demonstrating the potential for real harm to human health provides a strong impetus for Twitter to act more often and more quickly to identify, isolate, or remove bots and trolls,” Dunn said.
Many of the accounts Broniatowski’s group studied have since been shut down, he said. But freeing the Twitter platform of bots and trolls is like playing whack-a-mole, he suggested.
For her part, Leask wasn’t convinced that Twitter would have an outsized impact on vaccination decisions.
“When parents decide not to vaccinate, the decision isn’t usually taken lightly and a few tweets from bots are unlikely to change this trajectory,” she said. “The decision process is much more complex and centered on beliefs, experience and notions of what it means to be a ‘good parent’ held within that community. What we still need to establish is the relative role of social media independent of the influence of peer networks.”