In 2016, Oxford Languages’ word of the year was both absolutely definitive of the previous year and, unfortunately, all too appropriate for every year since:
Post-truth: an adjective defined as ‘relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.’
The effects of this “post-truth” were vividly demonstrated at the end of that year, when thousands of Russian pitchbots and millions of conspiracy theorists—along with every media outlet from The New York Times to the Associated Press, wringing their hands over non-existent crimes by Hillary Clinton—handed the White House to Donald Trump, with a big boost from Mr. James “Emotions Over Facts” Comey.
The erosion of objective fact was already scary heading into 2016, well before Kellyanne Conway uttered the immortal phrase “alternative facts.” It became positively deadly during the pandemic, when people indulged in everything from raiding veterinary shelves for worming paste to refusing a vaccine that cut the odds of death by a factor of 14. They’re still doing it. In fact, there now seems to be a compulsive effort to find a set of emotion-backed lies that will ward off any intrusion by facts.
And now the new explosion in AI is here to help. Not help sort out the facts. Help make it even more difficult to see through the lies and distortions.
The explosion of AI over the last year has been centered on something called “large language models.” Rather than systems laboriously taught a series of hand-coded rules for dealing with a situation, these are neural networks fed enormous data sets from which they derive the rules themselves, in the form of a statistical model. They learn how to respond to the idea of a “cute kitten” or a “nuclear threat” in the same way they learn the rules of grammar—by looking at thousands, or millions, or thousands of millions, of examples.
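The idea of deriving rules statistically from examples, rather than being handed them, can be illustrated with a deliberately tiny sketch. This is not how production large language models work—they use neural networks with billions of parameters—but a toy bigram model shows the principle: counts over many examples become a predictive model, with no rules ever written down. The corpus here is made up for illustration.

```python
# Toy illustration of statistical language modeling: a bigram model.
# No grammar rules are coded anywhere; the "rules" emerge as counts
# over the training examples.
from collections import Counter, defaultdict

corpus = [
    "the cute kitten sleeps",
    "the cute kitten plays",
    "the cute puppy plays",
]

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each possible next word, estimated from the data."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("cute"))    # "kitten" twice as likely as "puppy"
print(next_word_probs("kitten"))  # "sleeps" and "plays" equally likely
```

Scale that idea up from three sentences to a sizable fraction of the internet, and from word pairs to long stretches of context, and you have the statistical engine behind the chatbots—one that predicts what text is *likely*, not what is *true*.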
One of the things that these models have learned consistently is how to tell a convincing lie. For example, look at this article from The Scholarly Kitchen, in which ChatGPT was asked about the connection between tweets and scientific citations.
Human: Do tweets increase citations to scientific articles?
ChatGPT: There is evidence to suggest that tweets can increase citations to scientific articles. Studies have shown that social media, specifically Twitter, can be an effective way to disseminate scientific research and increase its visibility among researchers and the general public. A study by the American Association for the Advancement of Science (AAAS) found that articles that were tweeted about had an average of 9% more citations than articles that were not tweeted about.
In this case, the person asking the question happened to be an expert who had done research on exactly this area. He not only knew the answer was incorrect, he was able to use his contacts to establish that there was no such study. But how many people would have known this? How many would have accepted this at face value, given that ChatGPT didn’t just make a claim but backed it with what it claimed was a study in perhaps the most reputable of peer-reviewed publications?
Examples like this have been repeated on topic after topic. In teaching these models from the internet, they’ve learned to make information appear authoritative, even when it’s pulled from a digital arse.
Now, add this article from The Verge, and you get a glimpse at how bad this is going to get.
Right now, if you ask Microsoft’s Bing chatbot if Google’s Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.
This is an AI, citing a lie created by another AI, which was citing an article from another AI, based on something that the last AI was instructed to write based on a joke. If you went onto the internet right now, asked a question, and got an answer that included a citation, that included a citation, that included a citation … how far would you really try to unravel things to determine if you were being told the truth?
Finally, consider that Google, Microsoft, and seemingly every other company on the planet is, at this moment, rushing to add these large-language-model-based AIs to everything. Every search engine. Every word processor. Every email app. Every slide deck, spreadsheet, and note taker. Every one of them is going to offer to put things together, provide a summary, add some context, write a few paragraphs … and at every level, they will wrap another layer of obfuscation, false authority, and just plain-vanilla lies.
The surge of these AIs isn’t just a threat to anyone who works with images, words, or numbers—which is, essentially, everyone—it’s a bomb primed to blow away our last vestige of a shared reality in which there is an agreed-upon set of authoritative facts.
And our grip on that reality was already slipping.