Humans are very good at assigning human characteristics to non-human things. Preschoolers will assign personalities to geometric shapes that move around on a screen, for example. The Verge has a fascinating article about how some imitative AI companies have taken advantage of this trait, and how some people have convinced themselves that a word calculator has a personality.
It might be tempting to mock these people, but I think we should resist that temptation. Human beings are not meant to be alone. Loneliness is one of the prime factors in mental health issues, stretching all the way to self-harm and suicide. Because these bots are so new, there isn't a lot of research on how helpful or harmful they are to people. Some initial work has shown that under some circumstances they can help some people, perhaps even helping them avoid self-harm. But they have also been implicated in at least one suicide.
The people in the Verge story do not generally consider themselves lonely, but many of their descriptions make it clear that they are missing something from their most intimate relationships. And the hurt they feel when these models go sideways on them is real. Tricked by their own brains and by the imitative AI creators' deliberate choices to make these word calculators seem human, these users do experience real emotions. And the models going sideways, or changing abruptly, is almost guaranteed. They are simply word calculators, so as the model underlying them changes, or as conversations branch into different topics, they are almost certain to calculate differently than they did the day before. And that causes unpleasant emotional reactions in the users.
So these things cause real harm, but some people claim they do real good. Several people in the story credited the bots with helping their loneliness, helping their depression, and, as we have already seen, perhaps helping to stave off self-harm. But are those effects sustainable? As we noted, the models underlying these bots change, and change often, resulting in radical personality shifts that are harmful to the users most connected to, and thus most likely to be helped by, the bots. Even if the models do not change, because these tools are word calculators, their "personalities" can shift over a fairly short time frame as their calculations take them in unexpected directions. And are you really being helped to be your best self if the model simply lets you regenerate an answer until you get one you like, or is programmed to always agree with you?
The makers of these tools are largely unconcerned with these questions, believing either that the right "metrics" will prevent harm or that it is fine if an AI relationship replaces a human one, since that brings people joy. The problem is that joy by itself, isolated from all other human considerations, can be harmful. If you cannot interact with other people, how are you going to come together in, say, a democracy, to collectively solve societal problems? How are you going to deal with coworkers, neighbors, and bosses who won't flatter your every preconceived notion? We already have a problem with our business and political leaders becoming functionally stupid because no one ever contradicts them. How can it be an improvement to spread that kind of failure across the general public?
The makers of these bots seem to believe that they are doing good. And in the short view, the selfish view, I can see how they would believe that. But take even one tiny step back and the picture looks uglier. Yes, we do not do enough to help people overcome loneliness and depression. But replacing real interactions with AI ones is not a solution. Human beings need the friction of bouncing against other people to remain fully paid-up human beings. We could solve the mental health and access issues by spending more. We could train more therapists, social workers, psychiatrists, and psychologists. We could give everyone access to those newly trained helpers, and we could invest in local social centers and community organizations to give people places to meet and to help one another. We could, but we choose not to.
We choose not to because it means taxing people like Sam Altman and Elon Musk slightly more than they would like. It means admitting that the billions imitative AI companies spend on storage, electricity, cooling, and compute power could be better spent on human helpers. If the makers of these bots really cared about human wellbeing, they would not be trying to foist an untested, unproven, likely harmful solution on the world. They would be supporting spending money on known solutions to these problems.
But that wouldn't let them portray themselves as superheroes and saviors, and it would require them to admit that not every problem is a technical problem. It's hard to be masters of the universe, I suppose, if you admit that your way is not the best way. And, apparently, being in charge is more important than actually helping people.