Twitter placed a warning label atop a tweet by President Trump last week that contained misinformation about Covid-19: He falsely claimed it is less deadly than seasonal influenza. This week it applied the same label to another tweet by the president containing misinformation about Covid-19 immunity.
The label warns that the tweets violate Twitter’s rules, and the company’s executives assured the New York Times that by placing the label on a tweet, “engagements with the tweet will be significantly limited.”
Since there is currently no way for users to report tweets like this to Twitter as Covid-19 misinformation, we presume the label was added after an internal review by the company, which, like Facebook and other social media platforms, faces increased scrutiny for its role in spreading misinformation about the coronavirus. While the president himself is a significant, high-profile source of online Covid-19 misinformation, he is not the only one creating and spreading it. And though alerting users to misinformation is an important step, Twitter offers few tools for knowledgeable users to combat the abundant and sometimes deadly misinformation spreading on the platform.
This spring, Twitter released information about a special verification process for physicians and public health leaders that aimed to empower credible and credentialed individuals with various types of health expertise to disseminate and evaluate emerging information on Covid-19. Twitter began verifying certain user accounts in 2009 as a means of combating impersonation, saying “An account may be verified if it is determined to be an account of public interest. Typically this includes accounts maintained by users in music, acting, fashion, government, politics, religion, journalism, media, sports, business, and other key interest areas.” A verified Twitter account is marked with a small blue checkmark next to the user’s name.
We hoped this special verification process was the beginning of an effort to combat misinformation about Covid-19 by crowdsourcing expertise to verified health experts. The three of us, along with many other physicians and experts, were verified under this program.
But although Twitter has used verification to elevate credible voices sharing essential information about Covid-19, these efforts have not meaningfully reduced the spread of misinformation. In fact, verified experts who came across misinformation and tried to identify it as such by replying to or quoting the tweet may have unwittingly amplified it. This strategy has essentially amounted to watering the garden without pulling the weeds. The result is that users still cannot easily tell the difference between a tweet that is simply widely shared and one that is credible.
Both Twitter and Facebook have rules against sharing misinformation as well as internal systems to flag or remove tweets that violate their rules. Facebook, relying on artificial intelligence and human fact-checker review, reported this week that it has removed 7 million posts containing Covid-19 misinformation and applied warning labels on another 98 million. These systems, however, can be easily evaded and have not come close to solving the problem on any platform.
On Twitter, which has already made the effort to verify the expertise of some of its users, there is an obvious opportunity to try something new: let verified experts report Covid-19 misinformation. To be sure, this would result in more flagged-content reports for Twitter to sort through.
A key difference between these and reports of other types of abuse — spam, harassment, hate speech, threats of violence, misleading information about an election, and the like — is that these reports would be generated by users already verified as health experts. With the ability to report tweets to Twitter as Covid-19 misinformation, experts can identify them without amplifying the message.
Tweets that have been reported as containing clear misinformation, such as those denying the benefits of social distancing or mask use, or the existence of the virus at all, should be quickly marked as such when discovered. Because existing internal systems have proven to be incompletely effective, Twitter could tap into the verified expertise of some of its users to let other users know they are looking at a tweet that contains misinformation — a weed instead of a flower.
Clearly, the label of “misinformation” must be applied with caution, and only in situations where there is clear scientific consensus. We are not advocating for censorship, just the ability for expert users to identify instances of the 2020 equivalent of falsely yelling “Fire!” in a crowded theater.
Twitter has been a useful platform for discussion in areas of active scientific debate, and it would be a terrible mistake to use misinformation labels to stifle conversation in areas where there is no clear-cut scientific consensus. The platform has previously found ways to exercise judgment in marking other types of rule violations, such as labeling tweets containing misleadingly edited videos as “Manipulated media” and placing fact-checking flags to link users to accurate information about voting. We trust that judgment could be similarly exercised with Covid-19 misinformation in accordance with existing rules and clearly delineated criteria for what constitutes misinformation.
While we appreciate Twitter’s attempt to elevate important public health information during the pandemic, we see a missed opportunity to curb misinformation. As physicians and avid Twitter users, we value the ability to communicate directly with patients and colleagues in a public forum. Educating and learning from the public is an important aspect of our work as physicians and subject-matter experts, and while verification enhances our ability to fulfill that mission, we need tools beyond a blue checkmark to do our part to slow the spread of misinformation.
Christopher M. Worsham is a pulmonologist and critical care physician at Massachusetts General Hospital and a research fellow at Harvard Medical School. Lakshman Swamy is a pulmonologist and critical care physician at Cambridge Health Alliance. Rahul Ganatra is an internal medicine physician at VA Boston Healthcare System and an instructor in medicine at Harvard Medical School. The views expressed here are the authors’ and do not necessarily represent the views of their employers.