The Washington Post ran two interesting articles about how bad domain-specific imitative AI search can be. In the first case, Amazon’s new search bot was effectively useless: the tester could not tell whether its answers were appropriate to the specific use case and hence could not trust it. More seriously, TurboTax’s and H&R Block’s tax-assistant chatbots consistently gave wrong answers. Being audited is, obviously, much worse than not being able to figure out whether the compost fan Amazon recommends is right for your new hobby.
None of this should come as a surprise. Large language model imitative AI systems do not strike me as especially good search tools. The ability to parse human requests on the front end can be helpful, but the back end is more problematic. Imitative AI systems have no model of the world; they merely calculate what is most likely to come next based on the material they were trained on. That means they cannot really process the questions they are asked. And that is before you get to the lag between the data available when they were trained and the data relevant when a question is actually asked. It is no wonder the Amazon bot cannot figure out the proper context: it has stale information and no model of the world to work from.
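To make that point concrete, here is a minimal toy sketch in Python, my own illustration and not how any production model is actually built: a bigram counter that answers every question by consulting only what tended to follow what in a fixed training corpus. The corpus and the helper name are invented for the example. It has no understanding of anything, and it can never say anything about words that arrived after its training data.

```python
# Toy sketch (hypothetical, not any real model): a bigram "language model"
# that only counts which word follows which in a fixed training corpus.
from collections import Counter, defaultdict

# The entire "world" this model will ever know.
corpus = "the fan cools the bin the fan fits the fan".split()

# For each word, count what came next in the training data.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))      # "fan" -- the most common continuation
print(most_likely_next("compost"))  # None -- never appeared in training
```

Real models are vastly larger and subtler, but the shape of the limitation is the same: the answer is a statistical continuation of old text, not a check against the world as it is now.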
The drive to use imitative AI as a replacement for natural language search tools feels very much like a dead end to me. So far, the results have been underwhelming, as the articles above show. The issues aren’t likely to be corrected by better models, as they seem baked into the process. Without a model of the world, or at least of a specific domain, the systems are limited to relying entirely on their calculations of what is most likely to come next. Sometimes that will fit, sometimes it will not, and sometimes it will be made up. And since the calculations are based on the training data, the further in the past the training, the less likely the results are to be relevant, at least for some domains.
Imitative AI is a fascinating experiment. Unfortunately, these experiments cost so much to run that the companies involved must eventually find some way to make money from them. Otherwise, they cannot continue. But there really aren’t problems for which imitative AI systems provide good solutions. Instead, we get companies trying to conjure solutions out of thin air. And we get worse recommendations and chatbots that get the IRS angry at us. Those are not, to be clear, good things.
Capitalism distorts everything it touches. Instead of being a mildly interesting advance in machine learning, imitative AI has to be, by the logic of capitalism, something it clearly is not: the next big financial thing. Too bad for the investors, the employees, and the people whose livelihoods are going to be upended that it almost certainly cannot be.