Media Matters has a new report that makes it clear Facebook is more concerned with looking like it’s acting on misinformation than with actually doing something about it.
The report looks at just over 6,000 posts from President Donald Trump over the course of 2020, and found that the 506 posts Facebook put a fact-check label on generated over 200 million interactions, roughly 400,000 per post. The rest of the posts tended to get much less engagement, averaging 152,000 interactions each for a total of 927 million.
But of course it wasn’t just Trump himself making false claims. There’s a whole network of conservative media outlets that use the former reality TV star for clicks, and whom he used for fodder in turn. In 2020, Trump cited these reliably-wrong-for-political-purposes outlets 868 times, but only 147 of those posts earned labels from Facebook. And as with the larger sample, the posts citing right-wing media that were flagged by Facebook averaged twice as many interactions as the others. More than 86% of the flagged posts, 127 of 147, were about election integrity.
Trump is now banned from basically every social media platform, but Media Matters found dozens of posts from accounts eagerly reposting Trump’s anti-election propaganda, with little apparent response from Facebook. That explicitly anti-democracy disinformation is exactly what Facebook has deemed most deserving of labeling, which is why it’s such a problem that the labels appear to backfire.
Because while the average Trump post got 152,000 interactions, and the unflagged ones citing right-wing media slightly less than that, the posts citing fake news sites that were flagged got 291,000 interactions. Even worse, the posts that cited right-wing sources, included the phrase “stop the steal” (a rallying cry of the January 6 attack on Congress), and were labeled as misleading saw an incredible average of 640,000 interactions.
Now, one might be tempted to defend Facebook here by pointing out that the posts that got flagged were the most egregiously wrong, and that incendiary content of that type always tends to perform better, so the labels may not actually be backfiring.
But debating the relative efficacy of labeling false information, for example by diving into Facebook’s claim that the labels limit the reach of Trump’s posts by a measly 8%, simply distracts from the point: Facebook doesn’t need to label things it knows are misleading and therefore shouldn’t be shared.
If it’s clear a post isn’t true, Facebook can simply remove it. That way no one else can be tempted to share it.
And if it’s clear the person posting it is repeatedly sharing things that aren’t true, their account can be removed, too. As the data here shows, instead of chasing various posts citing those supposed “news” outlets that just so coincidentally seem to repeatedly publish stories that are false but make conservatives feel good, Facebook could decide that such toxic content is not welcome on its platform at all.
It doesn’t seem to want to do that, though. Instead, according to a recent post, the company has decided to double down on a labeling approach that apparently doubles engagement: putting up warnings when users visit pages that have repeatedly shared false information, further reducing how widely serial liars’ posts spread (while still letting them post), and telling users when they share something that’s been fact-checked.
All of which seems like a lot more work, and allows a lot more lies to spread, than simply banning the people who abuse the platform in the first place.