Bloomberg has a story about an “AI drive-thru company” whose AI systems leave a lot to be desired. In reality, 70% of customer interactions were handled by people in the Philippines. Which, when you get right down to it, is the purpose of imitative AI.
Imitative AI companies like OpenAI and Google have focused their efforts on essentially three areas: writing, illustrating, and programming. They do this not because those are the areas where AI can be most useful to society, but because those are the areas where they think they can make the most money the quickest. A lot of corporate writing and illustrating does not have to be good, it merely has to be good enough. While imitative AI, with its tendency toward making mistakes and out-and-out lying, is arguably not good enough, if you squint you might be able to convince yourself that it is. The same goes for the weird hands and depressingly similar art styles the image generators produce. Coding is a little more understandable: the sheer amount of boilerplate code modern languages require is mind-numbing, and the models may be less prone to terrible mistakes given the amount of existing code available to copy from. But in all cases, the drive is not to help but to profit. Running these things is not cheap, and no one is apparently making a profit.
And that is where companies like the “AI drive thru” come in.
Humans have a bias toward machines. We tend to assume that if something comes from a computer, it is likely correct. The drive-thru company is taking advantage of that by labeling its system — arguably falsely — as AI. If people think a machine is handling their order, they are more likely to be forgiving. And this allows the company to essentially offshore service jobs. The AI label, even if it is the equivalent of a fake building front on a movie lot, lets the company convince people that having their burger order taken by a person in the Philippines rather than inside the building is high-tech reliability. And thus the restaurants it serves can keep more of your money and spend less of it on employees. (I promise you, you are not seeing lower prices because of these systems. The savings are split almost entirely between the restaurant owners and the “AI” company.) At least until the well-known issues with offshoring begin to leak through the AI facade.
Instead of using machine learning (which is what imitative AI really is at bottom) to do things like help find new drugs or help people with speech difficulties be understood, these companies spend their money and time making crappy imitations of human writing and art, and pretending they are using AI to ask if you want fries with that. None of this is new — AI companies rely upon human beings, often underpaid offshore workers, to do much of the work required to train AI systems. And companies have always tried to pay employees as little as they can get away with.
Deceptions like this are one of the core purposes of the hype surrounding AI. By slapping an AI label on what is largely offshoring, companies can cloak themselves in the aura of machine intelligence and the coming future when they are simply profiting from gutting the local workforce. It is the oldest scam in the book, just with a new high-tech sheen. Much of AI is hype meant to cloak the impoverishment of human beings under the guise of futurism.
How do I know that Skynet will not be coming for us anytime soon? Because not only does AI write crappy emails and insecure code — it needs a person to supersize your fast-food order.