It's difficult to keep track of all the studies and internal documents that suggest Facebook's 2018 "algorithm" change specifically boosted far-right influence and conspiracy peddling, but throw this one on the pile, too. A new study again concludes that Facebook's supposed 2018 emphasis on "meaningful social interactions" disproportionately elevated Republican content over Democratic content, and in a way that wasn't matched on rival platform Twitter.
The NBC News writeup of the conclusions, reached by researchers at Miami University and Wright State University, reports that the roughly equal rates of interaction on posts from local Democratic and Republican party Facebook pages shifted to a more than three-to-one Republican advantage by July 2019 (again, a result not matched on Twitter), and that the results are the first to show that the 2018 Facebook changes "amplified Republican causes at a hyperlocal level." The engagement gap between the two parties has narrowed since then, but slowly.
So here we have yet another study providing evidence that Facebook's algorithm disproportionately inflated radical conservative causes compared to other platforms, and yet more backup for the Facebook whistleblowers who came forward with internal evidence showing Facebook knew full well that its new focus on boosting on-platform "interactions" was rewarding extremist content and conspiracy theories.
Facebook has been blowing smoke about its changes ever since, insisting that, by gum, the rise of QAnon, election hoaxes, white nationalist content, and other froth just happens to have come along independently of its corporate efforts to reward viral content over trustworthy content, and it will keep insisting that as long as a single lawyer remains in the building. The company would be better off trying to pin at least some of the blame on Fox News, which has itself steadily radicalized over the past decade and now freely amplifies Facebook-launched conspiracy content. But the Fox News slide began much earlier ("new voice" Tucker Carlson got his current white nationalism power-hour from Fox in 2016), and it still doesn't explain why only Facebook saw a rise in extremism that exactly coincides with an internal Facebook change highlighting Facebook extremists.
Facebook's stream of vacuous denials has already veered into Cigarette Company Lawyer territory, and doesn't look any more credible with this new research. The theory that Facebook's algorithm changes are not directly responsible for a rise in extremist content on their platform is, at this point, barely more than a conspiracy theory itself.
The central problem remains what it always was: Facebook, forever in pursuit of "engagement," continues to show little to no interest in policing dangerous content on its own platform. It is the world's democracies that will have to suck it up and deal with the social chaos resulting from Zuckerberg and company's obsession with riding the revenue train as far as the tracks can be made to go. Installing the sort of safeguards that would put a true dent in conspiracy peddling and dangerous hoax promotion would lower "engagement," and thus revenue, and thus executive boat sizes.
Or, more to the point: They just do not care.
That would be the only possible conclusion from the news that, despite Facebook's supposed 2016 ban on gun sales on its platform, internal guidelines allow gun sellers to violate those rules ten times before being booted, with The Washington Post reporting that "a separate five-strikes policy extends even to gun sellers and purchasers who actively call for violence or praise a known dangerous organization, according to the documents."
Got that? You can offer a gun for sale on Facebook while advocating that it be used to topple the government or promote white supremacy, and as long as you do it no more than four times, Facebook will look the other way.
Similarly, recent research by Media Matters showed that Facebook's claimed efforts to address climate change denialism and "energy independence"-themed hoaxes boosted by its own algorithms resulted in a whopping two of the top 100 such posts being dinged with a fact-checking label. Everything else sailed right by.
Now, we can all agree that purging any media platform of all misinformation is likely an impossible task. But if an internal Facebook effort can't even tick down the list of the 100 most shared posts violating its standards, that doesn't speak to much of an effort. A single in-house moderator could make it through 100 such posts.
That suggests that whatever Facebook's actual misinformation policies are, there's not even one guy with a laptop inside the company who's been assigned to actually go looking for the stuff.
As for "you can advocate for extremist violence while selling guns up to five times on our platform, but no more than that," that's just a straight-up policy botch. If you're already tracking how many times individual gun sellers are promoting violence, you ... have the necessary information already. I'm surprised the in-house legal teams didn't throw an absolute fit at the implications of that one.
So there you go. We've got more evidence that a single Facebook algorithm change in 2018, one bent on promoting viral content over news content, is responsible for the platform's descent into distinctly Republican-leaning extremism and hoaxes. And we've got yet more reports suggesting that whatever Facebook claims to be doing about its violence and misinformation problems, little to none of it is trickling down into actual action.
Facebook refuses to moderate its platform to curb the conspiracy theorists and recruiters for extremism because it costs money Facebook doesn't want to spend and will reduce revenues it doesn't want to reduce. There's no great mystery here: It’s just another case of near-monopolistic tech power looking to wring money out of the nation while relying on corporate lobbying efforts to pave over evidence of large-scale public damage. Again.