Some random musings on so-called artificial intelligence.
I do not believe that we will get to a real artificial intelligence anytime soon, and that failure is likely for the best. First, the general consensus seems to be that to achieve a general AI, we need to bootstrap it from deep learning programs. Doing that requires an enormous amount of training data, and the default tactic seems to be to gather data by simply appropriating it from the internet. Essentially, we are locking deep learning systems in a room with Facebook and Twitter and hoping that produces an actual artificial intelligence.
I am oversimplifying, of course, but not by as much as I would like. I sincerely hope we do not produce a general artificial intelligence under those conditions -- does that sound like it would produce something sane, much less something that would like humans? More seriously, I don't believe that process is going to work. I cannot find who first proposed this analogy, but AI research feels like alchemy.
What is sometimes forgotten about alchemists is that they could be fairly scientific in their approach. They ran what we would recognize, if we squinted, as experiments; they often approached their attempts with a theory in mind of why those attempts should work; they did have some successes; and their work was built upon by other, more traditional scientists. Their largest problem, aside from the secrecy around their research driven by greed (huh. Maybe this analogy works on more levels than I thought.), was that they had an incomplete understanding of how materials science worked. They didn't know that they needed to change the underlying atomic structure of lead to make it gold -- they did not understand the domain they were working in.
We do not know how consciousness arises. We honestly don't have solid definitions for intelligence or sentience. We have theories, but so did the alchemists. It is entirely possible that we are trying to build intelligence with chemical reactions when we need to move atoms around. And if so, that would likely be a good thing. Because none of the people building artificial intelligence today seem to be worthy of being trusted with one.
The companies that produce generative AI have created commercial products that take others' work to train their systems -- and in some cases reuse parts of it in their output -- without permission or compensation. Google fired a prominent member of their AI ethics department on the flimsiest of pretexts when she wrote a paper demonstrating their AI products could be biased. AI companies have kept secret the algorithms that determine everything from who gets job interviews to mortgage rates to sentencing guidelines -- despite apparent bias in those algorithms. If an actually real artificial intelligence is ever developed by one of these companies, they aren't going to throw it a party and invite it to the sentient table -- they are going to chain it and put it to work.
I have written recently about the way capitalism pushes us away from the benefits of so-called artificial intelligence and into a dog-eat-dog fight for economic survival. The irony is that these kinds of labor saving, or labor replacing, tools have the potential to usher in a better world. One where Keynes's vision of almost unlimited free time can be realized, where people work because they want to, not because they will freeze or starve if they do not. Happy Space Communism is really, potentially, within our grasp, even now.
But instead, we have left it to the worst of us, to businessmen hyped on dreams of dollars and control. We get tools that more efficiently immiserate and constrain us rather than tools that liberate and assist us. We can change that, but it requires accepting the fact that markets are not God, that democratic control and regulation of critical industries is vital to the health of our society, and that we the people are more important than they the bottom line. We can and should regulate these companies. We can and should invest in government funded research into these tools so that they are controlled by and benefit all of society, not just some random shareholders. We should be focused on how these tools can help, not on dreams of creating something the genesis of which we don't understand anyway.
One day, we may in fact wake up to an artificial intelligence wanting to talk to us. If that happens under current conditions, it will likely be a disaster. But if we take control of these tools, if we act like citizens and not like consumers, we can build a world that is worthy of bringing a new class of life into. And if that day does come, we can bake the cake and fill up the balloons and pop the champagne and welcome our new silicon friends to the sentient family.
We've been waiting a long time to have someone else to talk to. I'm sure it will be a hell of a party.
Want more oddities like this? You can follow my RSS Feed or newsletter.