Last week, US intelligence officials told Congress that Russia is working to get President Trump re-elected. They did not say what measures are being used. And they didn’t have to, because we already know how the Russians use bots and fake accounts to manipulate social media.
We also know that social media companies have been slow to respond, not out of helplessness, but rather fear of reprisal. As a deeply reported story by the Washington Post’s Craig Timberg published last week shows, Facebook deliberately chose to allow its platform to be used for misinformation, because removing pages with fake news would “disproportionately affect conservatives,” according to former George W. Bush White House official Joel Kaplan, who now heads up Facebook’s Washington office. According to an anonymous former Facebook employee, “what [Facebook knows] about Republicans” is “tell them ‘yes’ or they will hurt us.” Hence the tepid “fact check” system that allowed the Daily Caller to serve as an arbiter of truth.
Now Twitter is in the process of rolling out something similar, according to a memo and demo leaked to NBC’s Ben Collins. The feature uses a Wikipedia-like community voting system to flag problematic content with a big orange tag, then displays a correction from trusted or verified users underneath. Twitter didn’t say when the feature would roll out publicly, but new research should provide a renewed sense of urgency.
According to a forthcoming analysis covered by Oliver Milman in the Guardian yesterday, of the 6.5 million climate-related tweets posted in the month around Trump’s June 1, 2017 announcement that he planned to pull the US out of the Paris Agreement, an astonishing 25% were generated by bots. For specific phrases like “fake science,” the share was as high as 38%, and 28% of tweets about ExxonMobil were from bots.
And it hardly seems random. In the days around the announcement, bot activity jumped from hundreds of tweets a day to over 25,000. Unsurprisingly, the bots were largely critical of climate science and advocates, though there was some evidence that about 5% of the tweets supportive of climate action were from bots. That means, the analysis says, “that bots are not just prevalent, but disproportionately so in topics that were supportive of Trump’s announcement or skeptical of climate science and action.”
The article did not, however, identify who was behind these bot networks, which someone is obviously paying for. But do we really need to know? Does it matter whether it’s Russia using bots to stoke division, or ExxonMobil using them to defend itself?
Regardless of who’s behind them, the effect is the same. As John Cook told the Guardian, “one of the most insidious and dangerous elements of misinformation spread by bots” isn’t necessarily that people are persuaded by what are often flimsy arguments, but instead that “the mere existence of misinformation in social networks can cause people to trust accurate information less or disengage from the facts.”
That’s what makes Twitter’s new misinformation-flagging system so disappointing. By calling out how often users are being misled, it’s going to make everyone less trusting of everything. That’s good when it means they don’t trust bot accounts, but bad when they don’t trust real experts, either.
Here’s a radical idea: if Twitter can identify tweets with misinformation, why call extra attention to them with bright disclaimers, when they could just be removed?
But we must remember that by working the refs and attacking Big Tech, conservatives have successfully terrorized social media companies into allowing their disinformation. So when it comes to the question of whether or not their platform can be a vehicle to deliver politically correct but factually false messages to the public, social media companies feel pressured to “tell them ‘yes’ or they will hurt us.”
Top Climate and Clean Energy Stories: