Many decades ago, someone I know (who wants to remain anonymous) wrote a simple BASIC computer program designed to simulate interaction with one of the many idiots that attended our high school.
When the program ran, no matter what you typed, the computer would randomly answer with one of the only two responses that ever emerged from the mouth (and presumably the brain) of this dumbass. There's no need to get more specific; the bottom line is the simulator was dead-on.
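The original BASIC is long gone, but the whole program fits in a few lines of modern Python. This is a hypothetical reconstruction; the stock phrases are placeholders, since the real catchphrases are staying anonymous along with their owner:

```python
import random

# Placeholder stand-ins for the two actual responses.
RESPONSES = ["Response one.", "Response two."]

def simulator(user_input: str) -> str:
    """Ignore whatever was typed and answer with one of two stock lines."""
    return random.choice(RESPONSES)

if __name__ == "__main__":
    print(simulator("What do you think about the Turing Test?"))
```

That's the entire trick: the input is never even read.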
At the time it was good for a laugh, but little did I know that this clever teen programmer had not only accurately simulated an alleged human being, but also debunked an idea I hadn't yet heard of: The Turing Test.
The Turing Test, for those who aren't wired into nerd-dom, is a test for artificial intelligence. A human interacts with another being via computer chat. The human is then asked whether they are chatting with another person or a computer program. If the being on the other end is a computer and the person is fooled into thinking the computer is a person, then the computer can be said to have some measure of intelligence.
I understand the intent of the test, but it's certainly not foolproof. The Turing Test reminds me of when I wanted to be a genius but didn't want to do the work. I reasoned that since geniuses tend to be eccentric, my own eccentricities could be sold as mere side effects of a genius mind, right?
No.
Sure, we sometimes forget, but we all know that the appearance of a trait does not mean that trait is present. The map is not the territory, looks aren't everything, and I can see the emperor's butt. If a chat partner appears to be human, that doesn't mean it is.
So the Turing Test explains what to do when a program is mistaken for a person: we conclude the computer is intelligent. But what are we to deduce if the test reveals a human who can be mistaken for a computer?
Like my programmer pal did, we could easily code a program to simulate the automated humanoids of today. I refer to the political commentators we call "pundits." Pundits are, in theory, human beings, and if you interacted with one via a chat room, you should be able to detect this. However, I don't think it's that simple. If the Turing Test is applied, a pundit can easily be mistaken for a bot.
Let's consider two of the most automated pundits of today, Rush Limbaugh and Sarah Palin. Here is their source code:
X = Whatever President Obama Says. Say exact opposite of X.
If President Obama said, "Sarah Palin should not throw herself into a pile of mud," the next day Sarah Palin would issue a tweet in her unique brand of broken English, "Ya know folks, I've been hopin' Sarah Palin should herself throw a mud pile into." The President could just as easily neutralize Rush Limbaugh by publicly expressing his wish that "Rush Limbaugh should keep talking." The next day Rush Limbaugh would announce, "My friends, I am fully in favor of Rush Limbaugh shutting up."
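For the record, the two-line "source code" above really is about all it takes. Here's one way the pundit-bot might look in Python; the negation logic is a crude placeholder, since genuine knee-jerk contrarianism is harder to automate than it sounds:

```python
def pundit_bot(statement: str) -> str:
    """Parody pundit: whatever the President says, say the opposite.

    This only flips "should" / "should not" -- a stand-in for the
    full opposite-of operation, which is left as an exercise.
    """
    if "should not" in statement:
        return statement.replace("should not", "should")
    if "should" in statement:
        return statement.replace("should", "should not")
    return "The opposite of: " + statement
```

Feed it "Rush Limbaugh should keep talking" and it dutifully returns "Rush Limbaugh should not keep talking." No brain required.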
Childish? Ridiculous? Absolutely. But that's what passes for a lot of political debate. So what should we do with the fact that some people act robotic?
I suggest that, despite the results of the Turing Test, we err on the side of believing they are human.
Personally, I try hard not to over-simplify beings as complex as people, no matter how tempting. Should the momentum of power and hate fall in behind such simplification, after the caricatures comes the dehumanizing and next thing you know, someone is firing up the ovens.
It takes some effort, but whenever I hear the latest bot-speak quotation, I force myself to visualize Limbaugh, Palin and the punditocracy as human. I remind myself that they probably act like automatons because that's what the choir they preach to expects and pays for. I imagine that in their private lives, pundits are complex living beings, capable of honesty and humility. Bot-like personas are a disservice to the political discourse pundits claim to serve. That's yet another case where what the market rewards isn't good for humanity, but that's an entirely different discussion.
So by allowing that bot-pundits might be human, am I giving them credit they don't deserve? Probably, but clinging to that optimism serves me more than them.
The practice reminds me of forgiveness.
For the longest time, I was anti-forgiveness. Some people don't seem to deserve it, so monstrous are their actions. Time after time, I've seen the family of murder victims express a kind of forgiveness for the murderer. It seemed stupidly naive. Forgiveness didn't make sense to me until I thought of it not as a way to excuse the hated, but to free the hater from a lifetime of bitterness.
So even though some pundits fail the reverse Turing Test, I suggest we humanize them anyway, or else our worldview will be corrupted, and then we'll be just like them: indistinguishable from a two-line computer program.
Maybe that is the true nature of the knee-jerk pundit's crime. Spreading falsehoods, promoting fear, and rooting against America whenever failure hurts a rival are all secondary to the sin of allowing something as beautiful as a human being to be reduced to a bot.
Larry Nocella writes the blog ROFL: Random Outbursts From Lar! at LarryNocella.com. He's the author of the novel Where Did This Come From?, the world's first CarbonFree(R) novel according to Carbonfund.org. The book is available as an Amazon Kindle eBook and for reading online. P.S. You don't need a Kindle to read Kindle eBooks: download the FREE Kindle app for PC, Mac and smartphones, then purchase Kindle books or download free ones. Enjoy!