The UK regulators have come out with proposed regulations, and they are half-hearted, to be charitable:
Companies making foundation models should follow seven principles. That includes making sure developers and businesses that use these models are accountable for the output that consumers are given, ensuring broad access to chips and processors and the training data needed to develop these AI systems, and offering a diversity of business models by including open and closed models. The CMA also said companies should provide a choice for businesses to decide how to use the model, offer flexibility or interoperability to switch to other models or use multiple models at the same time, avoid anti-competitive actions like bundling or self-preferencing, and offer transparency into the risks and limitations of generative AI content.
There is nothing especially wrong with those principles, but there is nothing especially right about them either. They are entirely focused on ensuring market competitiveness, and that, frankly, is the least of the worries around AI.
Algorithms, and that is all AI is at its heart, have been used to deny people jobs, to racially discriminate in policing and sentencing, to skew welfare denials, and to falsely accuse people of crimes they did not commit. Imitative or generative AI has all of those problems, plus a host of others. It has been caught plagiarizing and copying the works it is trained on. There are real questions about the ethics and legality of how most of these companies build their training sets, and, of course, it lies (I'm sorry, "hallucinates" is the industry's chosen term) all the freaking time. Imitative AI is a misinformation superpower.
Those are the real issues we should be concerned about. Who is responsible for the damage these systems do? How can we know that the algorithms in them are or are not biased? Who is to blame when these systems lie? Any regulator that punts on these issues, as the UK has done here, is not regulating AI at all. It doesn’t matter if AI companies can compete effectively in the marketplace if that competition is to see who can best burn down society.
We have to stop treating AI as if it is magic. It is just tech. We made this mistake with the internet, acting as if it were some strange creature we had never encountered before. That attitude was bullshit then and it is bullshit now. If someone came to us today and said, "This toaster is great, it makes the best toast, but it randomly incinerates a house within a fifty-block radius," we would tell them to pound sand. Well, that's all AI is: a fancy toaster. We should stop pretending otherwise and give it precisely the respect a fancy toaster deserves. No more. No less.