
In an attempt to corral false coronavirus claims, Facebook last month launched a new strategy that the social media giant says draws on a string of psychology studies about combating inaccurate posts.
The problem: The researchers behind some of those papers, along with outside experts, say Facebook appears to be interpreting the findings incorrectly, and that the company's approach could run counter to the goal of tamping down runaway misinformation.
Right now, when people interact with certain Covid-19 falsehoods that Facebook has internally flagged as harmful misinformation, they see a generic message in their News Feed that says, “Help friends and family avoid false information about Covid-19.” It includes a link to the World Health Organization’s myth-busting page about coronavirus.
What people are not told, however, is what misinformation they’ve interacted with — or that they’ve interacted with misinformation at all. Facebook, meanwhile, deletes the false post.
A Facebook spokesperson told STAT its messaging is intentionally vague and is designed to avoid what’s known as the backfire effect, a popular theory in psychology circles based on the idea that repeating falsehoods can paradoxically strengthen people’s incorrect views. When those falsehoods are harmful, such as a post that wrongly advises people to drink bleach to protect themselves from Covid-19, there’s even more reason to remove them, the spokesperson said.
In an email to STAT, the spokesperson cited three research papers as the foundation for the company's strategy. But those studies, while they do raise the idea of the backfire effect, suggest that Facebook would do better to identify the misinformation and correct it, helping people understand the facts.
“The bottom line is that you can repeat the misinformation to clarify that’s what it is you’re correcting. There was no backfire effect,” said Stephan Lewandowsky, a professor of cognitive science at the University of Bristol and a co-author of two of the studies Facebook cited. One of his studies found that retractions were more effective in correcting misinformation when they explicitly repeated the original falsehood.
The struggle over Covid-19 falsehoods points to the wider problem the company faces with rampant misinformation on its platform, an issue that has grown all the more urgent amid an unprecedented global pandemic in which false information can have deadly consequences.
Facebook acknowledged that the jury is out on whether its approach is the best tactic — but said it is working hard to combat falsehoods on its site.
“Our goal is to reduce misinformation and reach people who’ve seen it with facts. Whether or not showing people misinformation they’ve previously seen is an effective method of achieving this is subject to debate within the research community,” the Facebook spokesperson said in an emailed statement to STAT.
“We follow this work closely and engage directly with academics to understand the latest findings so we can improve our approach,” the spokesperson said.
But Facebook’s strategy has proven confusing for some users, who said that even after reading the generic message, they did not realize they had interacted with misinformation. That was the case for Paul Young, a 59-year-old from Columbia, Mo. Young recently saw Facebook’s vague “Help friends and family” message at the top of his News Feed and wrote it off as another generic Covid-19 advisory, similar to those he’d seen on other websites. He noted the message, bookmarked the link it contained, and moved on with his evening.
Young did not realize the message had been sent specifically because he had interacted with a false post. He said he would have preferred a clearer, more direct warning that he had engaged with misinformation.
Experts said a better approach would be for Facebook to identify the post as false — not simply delete it — and correct it. That way, people would quickly recognize that the post was wrong and update their beliefs accordingly.
“If you’re presenting someone with inaccurate information, what we found was that it’s OK to repeat it as long as you saliently pair it with a correction saying ‘this is not true,’” said Briony Swire-Thompson, a fellow at Harvard’s Institute for Quantitative Social Science and a co-author of another study Facebook cited.
“That will not hinder belief-updating. It will actually potentially facilitate the correction,” Swire-Thompson said.
That approach is similar to how Facebook handles falsehoods that it does not label harmful. For those posts — such as an image of a shark swimming on a freeway — Facebook adds a warning label identifying the post as false and shades over any images included in the post. (Users can still click below the image to view it.)
Experts said it can be tricky to set policy based on psychology research.
Many studies suffer from problems including poor design and unreliable results. The psychology field as a whole is in the middle of a “reproducibility crisis,” in which the findings of even famous, widely cited studies have failed to replicate.
“You would never make a policy judgment based on one study,” said John Torous, a Harvard psychiatrist who was not involved in the studies Facebook cited. “Most psychology research has been known not to reproduce.”
Lewandowsky said another element of his work suggests that rather than trying to correct misinformation case by case, it could be helpful simply to point people to an alternative source of accurate information. That approach won’t correct the specific falsehoods people have already absorbed, however.
“You will not have corrected that specific misconception, but you will have provided people with an alternative framing, which is: ‘Here’s where I go for accurate information,’” Lewandowsky said.
Young, the Facebook user, did eventually go back to his bookmarks and click the WHO link in the message Facebook sent him. But he still wants to know which of the posts he interacted with were false.
“This doesn’t make me terribly comfortable,” said Young.