It seems like every day brings a dumb new climate denial conspiracy theory, but they all boil down to a handful of key messages: it’s not real, it’s not us, it’s not bad, the solutions are worse, or, regardless of all that, you just can’t trust climate scientists and activists. Which excuse gets deployed depends on what flavor of disinformation the news cycle calls for. While you, dear readers, may not realize it (because we are so fantastically good at keeping things fresh after all these years), it’s all sort of the same thing over and over.
One of the people most attuned to this fact is John Cook, who founded Skeptical Science back in 2007 as a resource for debunking denial. Over the years, he and a team of volunteers assembled a list of the “most popular” myths about climate change, topping out just short of 200 permutations of climate disinformation that serve as an Encyclopedia Debunkica for basically everything deniers say. In 2013, Cook was the lead author of the pivotal “97% consensus paper” that put the “scientists disagree” argument to rest, and from there he pursued a Ph.D. and has been leading the field of climate disinformation research ever since.
Now Cook and a team of co-authors (Travis Coan, Constantine Boussalis, and Mirjam Nanko) have a new study out, in which they fed over 250,000 pieces of climate disinformation from 50 conservative think tanks and climate denial blogs into a machine-learning model, which learned to distinguish between the different types of denial.
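For the technically curious, here’s a rough picture of what that kind of claim classification looks like in code. This is a minimal sketch, not the authors’ actual pipeline (their code is linked in the paper), and the category labels, example sentences, and model choice here are placeholders for illustration only.

```python
# A minimal sketch (NOT the study's actual pipeline) of training a text
# classifier to sort documents into denial-claim categories.
# The labels loosely mirror the key messages above; the training sentences
# are hypothetical placeholders, not data from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled set: (document text, claim category)
train_docs = [
    ("Global temperatures have not risen in years", "its_not_real"),
    ("Climate has always changed naturally, long before SUVs", "its_not_us"),
    ("A warmer world means longer growing seasons", "its_not_bad"),
    ("Wind and solar will wreck the grid and the economy", "solutions_wont_work"),
    ("Climate scientists are alarmists chasing grant money", "cant_trust_scientists"),
]
texts, labels = zip(*train_docs)

# TF-IDF features plus logistic regression: a common baseline for this kind
# of claim classification task.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Classify a new blog post or think-tank article.
print(model.predict(["Solar subsidies are a handout that will bankrupt ratepayers"]))
```

In practice you’d need far more training data per category (the study drew on a corpus of over 250,000 documents), but the basic loop is the same: label examples of each claim, train a classifier, then run it over new text.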
What they found was that over the last 20 years, rightwing think tanks have turned up the volume on attacks on solutions (even as renewables have steadily become cheaper and more widespread), while the blogs have been steadily beating the “science is unreliable” drum; only in the past few years has solutions denial been featured more often there. On most issues, the two camps are aligned, but the organizations have always been much more focused on attacking solutions, whereas blogs have been more ostensibly science-focused. That makes sense, given that the organizations are funded to oppose policy, while most of the blogs were more amateur efforts to play armchair scientist. Both the organizations and the bloggers were united, however, on their second-most-common claim: smearing the climate movement.
The algorithm they created was also able to classify some “sub-claims,” distinguishing, for example, between attacks on climate policies broadly (which ramp up during key legislative fights) and those on clean energy specifically, which have grown steadily as the technologies become more of a threat to fossil fuels. Similarly, comparing the proportion of attacks on the climate movement versus climate science shows that since 2004, climate contrarian think tanks have focused more on attacking the movement than the science.
Going a step further, by looking at who’s funding those think tanks, the study shows that groups that get a high proportion of their money from a “key donor” set of dark money funders also tend to push similar messages. While more multi-purpose rightwing groups like Heritage and the Manhattan Institute have a lot of funders and tend to focus on attacking climate solutions, groups like CFACT and Heartland that rely on a small set of funders are united in peddling outright science denial and attacking climate scientists and activists.
As helpful as it is for Cook’s team to have taught a computer to recognize how climate disinformation has evolved over the past twenty years, future applications hold even greater promise.
“Misinformation spreads so quickly across social networks,” Cook told Cosmos, “we need to be able to identify misinformation claims instantly in order to respond quickly. Our research provides a tool to achieve this.”
And what a helpful tool it is! Because every time a report documents the extent of climate disinformation, Facebook’s response is that there’s not actually that much problematic content on its platform. But since the “research is for public good,” Cook tweeted, “the link to our code is in the paper” and they “would be happy to work with any platforms interested in using our research to reduce the damage caused by climate misinformation.”
Now Facebook has a chance to let the A.I. loose and prove just how much climate denial is out there! Surely, if they’re honest and have an accurate understanding of the content on their platform, Cook’s code won’t find much there at all. And if it does, well, then they’d know exactly who to deplatform, so that Facebook could be telling the truth the next time they claim it’s no big deal!