
Facebook has overhauled its approach to harmful Covid-19 health misinformation, announcing major changes that will send a much stronger message to users who have interacted with dangerous falsehoods about the virus.
The decision on Tuesday comes after STAT reported in May that the researchers Facebook said it had consulted to design its policy had criticized the social network's handling of falsehoods about Covid-19. Those experts told STAT that Facebook appeared to have misinterpreted their research and that the platform's approach was unlikely to be effective.
Previously, the social media platform had opted to send users generalized messages informing them that they had interacted with misinformation. But experts said it would be more effective to correct false posts immediately, with specific details on why they are inaccurate, an approach that research suggests curbs the spread of misinformation more durably.
Now, Facebook is doing just that. Users who like, share, or comment on dangerous falsehoods about Covid-19 will receive a personalized notification from Facebook that tells them it has removed a post they interacted with because it violated its misinformation policies. When users click on the notice, they will see a thumbnail of the original post, information on where and how they interacted with it, and an explanation of why it was removed.
Facebook previously said that its more generic initial approach was intentional and based on findings from three psychology studies.
But authors of two of those studies told STAT in May that Facebook had seemingly bungled the take-home message of their research, and warned that the misguided approach could undermine the goal of reining in runaway misinformation.
“Facebook is definitely misunderstanding, or perhaps misusing, at least two out of the three studies,” Briony Swire-Thompson, a fellow at Harvard’s Institute for Quantitative Social Science and a co-author of one of the studies Facebook cited, said in May.
Swire-Thompson’s research suggests that simply deleting posts containing misinformation is not enough to combat it. Instead, to help people understand why a specific claim is false and guide them toward the truth, it is better to identify a false post, correct it quickly, explain why it was false, and provide accurate information. Experts said that approach is more likely to discourage people from sharing misinformation in the future.
“As long as you saliently pair the original misconception with a correction or a false tag saying ‘this is not true,’ it will not hinder [people’s] ability to update their beliefs. Instead it will actually potentially facilitate the correction,” said Swire-Thompson.