An article in MIT's Technology Review, "Chatbot Wears Down Proponents of Anti-Science Nonsense," tells us about one frustrated software developer's Charles Bronson-like solution to the canned, repetitive statements of climate change deniers and religious fundamentalists on Twitter. Tired of arguing with them, only to have every argument and fact he offered immediately ignored, he wrote a chatbot that identifies typical phrases in their comments and automatically refutes them.
The result is the Twitter chatbot @AI_AGW. Its operation is fairly simple: Every five minutes, it searches Twitter for several hundred set phrases that tend to correspond to any of the usual tired arguments about how global warming isn't happening or humans aren't responsible for it.
It then spits back at the twitterer who made that argument a canned response culled from a database of hundreds. The responses are matched to the argument in question - tweets about how Neptune is warming just like the earth, for example, are met with the appropriate links to scientific sources explaining why that hardly constitutes evidence that the source of global warming on earth is a warming sun.
The fact is that much of conservative "debate" is already effectively automated - they rote-memorize talking points fed to them by cable news or talk radio, regurgitate them as soon as they're triggered by a keyword, and don't read or consider a single counter-argument unless it triggers an additional canned phrase. So really this chatbot is just a one-up in a robotics arms race between science and corporate power. It's also gut-bustingly hilarious, and comports exactly with personal experience.
I once did something similar, albeit not nearly as technological - I compiled a database of typical right-wing claims and the facts that debunk them, then simply searched the file and cut-pasted the proper response if I decided the person arguing with me was spewing on autopilot. The results were almost mathematically predictable: First they gloss over what you said with a blanket dismissal and hope you go away; then they try to be obtuse and make arguments that boil down to "What is real? Who can know?"; then they get angry and attack your source's motives; then they attack your motives; and finally they abandon the discussion with an ignominious parting slander. If I'd been a programmer, I could have single-handedly destroyed internet wingnuttery six years ago. But alas, sometimes I'm lazy.
This story's software developer, Nigel Leck, could probably generate a lot of page hits if he were to share some transcripts of particularly satisfying automated beatdowns. He could even host competitions (a better word might be "hunts") and reward amateur programmers who manage to write scripts capable of reliably, autonomously identifying and deconstructing wingnut rote-recitation while avoiding false positives (e.g., snark).
The truly glorious thing about this story is that Leck's AI would not pass the Turing Test, so the fact that it not only reliably fools but defeats conservatives in debate suggests that right-wingers may not actually be conscious by the standards of Turing. Mr. Leck may very well have settled a long-standing question in American politics.