A few articles have come out recently that have reinforced the notion that imitative AI is bad at programming. Or, at least, not ready to replace actual programmers. But I am afraid that is precisely what it is doing — not by taking away jobs, but by poisoning the thinking of junior developers.
Imitative AI is not that good at programming. It is not very good at foundational math. It makes up dependencies and thus introduces security flaws. And it is not very good at identifying and correcting its own mistakes (this is the first thing I ever learned about imitative AI that made me feel even a touch sympathetic to it. We all have trouble seeing our own mistakes. Can't find that stupid bug that is obvious to anyone else who looks at your code? Me too, little word calculator. Me too.). None of this is surprising. Imitative AI, well, imitates. It does not reason about a problem or understand context. It merely attempts to fill in the next spot with what its training data says is most likely to go there. And if the thing that goes there isn't real, or has a bug? Well, our little word calculator neither knows nor cares.
This does not, imitative AI proponents will be quick to point out, have to mean that imitative AI is useless. It could, even with the caveat of requiring intense vetting, still produce certain kinds of material fast enough to make a programmer more productive. And in certain cases — say, programming a code generator where you largely bullet-proof it and then leave it alone — that might be true. I doubt it is as true as the hype masters want you to believe. Having to check for its mistakes and your mistakes is not the most efficient use of time, generally. But the important part is the vetting.
Imitative AI is best used by senior developers, people with the knowledge and expertise to catch the subtle bugs that are the most dangerous. Those people are the ones best positioned to make good use of the limited tools that make up the imitative AI coding universe. However, every senior developer started out as a junior developer. And if junior developers are using imitative AI, then how are they to acquire the knowledge and experience that makes them senior developers?
You might argue that the experience of seeing imitative AI fail would provide that experience, but I do not think that is true. Junior developers won't understand why a bug happened, not really. They won't have the context of thinking through the problem and understanding what the buggy line of code was trying to do. Using imitative AI means turning over that thought process to the word calculator. Without actually making the mistake themselves, without the context of the error and the process of building the code, it is much less likely that anyone encountering that bug will really learn anything meaningful from its presence.
This is analogous to learning programming with an IDE. Modern Integrated Development Environments do a lot of the grunt work of tying together dependencies and creating boilerplate code for you. I learned a lot about the details of programming by having to do that work myself, in text files (uphill. In the snow.), without a system to do it for me. I would never program professionally like that, but it was an invaluable learning experience. Given that imitative AI does even more work for you, the lost learning opportunities are even greater.
Yes, okay, I get it. Old man yells at clouds and demands the AI get off his yard. But this is not just supposition — real scientists with fancy degrees and a lot of letters after their names have shown what common sense suggests. Using imitative AI lowers your own cognitive abilities. It seems we are in danger of training up a group of developers who are missing critical skills and are too reliant on word calculators to do their thinking for them. We are doing these people a disservice.
Okay, I can hear the hypers yelling, but imitative AI is here to stay. Is it? None of the big companies are profitable, and scaling does not decrease their costs. The only real route to prosperity for these firms is the wholesale replacement of workers with their tools. But there is no industry where that has proven to be practical. Merely augmenting people, at the prices these firms have to charge to break even, is not going to produce the massive wealth the VCs that back them are expecting. It is much more likely that imitative AI is gone as a commercial industry in five years than that it dominates any industry to the extent needed for it to be viable, much less profitable.
We are, in effect, making programmers worse in service of a future that likely is never going to come. Oh, and we get the added bonus of destroying the environment along the way. What’s not to like?