In the “not-particularly-distant,” “just around the corner” future, many of the repetitive tasks that we humans do will be handled by AI and AI-powered robots. This is going to create a foundational shift in our economy and society. We need to think this through. We need to plan. We need to be ready. The old way of doing things and organizing our society is coming to an end.
I've been engaged in research around AI for a couple of months now, and I really want to spark some conversation on this subject. I am not thinking about this as a tech person or a businessman. I am thinking about all of it as an anthropologist and sociologist.
The AI revolution should be a major political issue, but it is not.
-------
I feel as if this issue is largely beyond both men. Should President Biden win the coming election, I would like to think that he would delegate the issue of how AI is going to affect our economy and society to others under his leadership. I would like to think that compassionate and intelligent Joe is going to attempt to settle as many of our accounts as he can so that we can go into the future as unburdened by the past as we can be. And if Donie Two-Scoops should win the election, I would expect to see robot security forces patrolling our streets by the time he cancels the 2028 election.
Whether said robots are fully autonomous, remotely controlled, or mostly well-programmed is a different issue. If you believe Ray Kurzweil and Elon Musk, those fully autonomous robots might be here by 2029 or 2030. If you believe Sam Altman, they will be here in the 2030s. If you believe Noam Chomsky, those sorts of robots are much further away, if they are ever actually attainable. A self-thinking, autonomous computer intelligence is really what we call AGI, Artificial General Intelligence, which we have yet to achieve. There is even discussion of The Singularity, the point at which robots and AI become smarter than humans. Kurzweil thinks that could be here by 2029.
But getting hung up on these issues is a distraction, a misdirection, albeit more than a little fascinating one. We don’t need fully autonomous robots or an AI superintelligence to have 25-50% unemployment by the end of the decade.
Crucial to this discussion is an acknowledgement that we cannot predict the future. We do not know what is actually going to happen. Humans are amazing creatures, so the reality of what happens could be utterly unanticipated. The point “today” is to engage with this coming change and not leave it to fate. When people become fatalistic about voting and decide not to participate, we end up with the Boberts and Gohmerts and Mike Johnsons of the world. If we act fatalistically now about technology and stick our heads in the sand, we may all end up living in shanty towns by 2032. I would like to avoid that.
The Working Class
Sam Altman is the CEO of OpenAI, the company that created ChatGPT. He makes the point that our tech industry got things backwards and started building robots before we were ready. What we needed first was more advanced AI to properly control the robots. We pretty much have that AI now, and the robots get better every week. Watch them play soccer/football.
We are “just about” at the place where our robots can do most of the jobs of our working class. This is an area on which the tech field and capitalism are especially focused. We will be here very soon; it is likely a few months away, not years. Capital is constantly trying to increase its profits, and one way to do that is to cut down on labor costs. Automation makes human labor more productive and increases profits. This is fact, and it is the motivation.
How far away are we from robots replacing all warehouse workers? 2-3 years for full automation? And how many people work for Amazon today?
How far away are we from robots that can do a significant amount of the labor in construction? It’s not hard to find YouTube videos of robots stacking bricks faster than humans and learning construction-related tasks. Three, four years?
How far away are we from drones and self-driving vehicles doing the bulk of our deliveries and transport? Yes, standing out as quite the symbol of our society is the fact that self-driving cars and the like have a hard time recognizing nonwhite people as human beings. But the folks designing these things can be pretty smart, and they aren’t all clueless about social factors. This issue will be resolved.
--This is such a doom-and-gloom posting. On the flip side, drones are already being used in Africa and other developing regions to bring medicines to remote areas that become geographically isolated for months at a time--
How far away are we from a chatbot functioning as customer service? Already we go through menus and robotic responses, so now imagine that instead of that stagnant stuff, you end up talking with an AI whose voice you cannot really distinguish from a human’s. How about telemarketing? This has been happening for years; now they just have to update the infrastructure to utilize the current AI.
How far away are robots that can do all the work in our fast-food restaurants? Our mining? How many coal miners manipulated by Sinclair Broadcasting into voting for Trump will be out of a job by 2028, and what will the GOP do to help them?
How far away is any of this stuff? 2030? 2029? Fall 2024? There are obviously still some challenges and, once they are addressed, it will take a little time to implement everything, but this is not outlandish science fiction. None of this requires fully autonomous, independent robots or AGI. This is not The Singularity. Both Mr. Data and The Vision are a bit further away. Funny, though, Daleks and Cybermen are far closer. Maybe The Borg too.
There is another issue with robotics that is very attainable: they can potentially be remotely operated. If we talk about deep-sea or deep-space exploration, that sounds very exciting. When we talk about warfare, it might seem a bit terrifying. But let’s take a moment to consider the logic of capitalism.
If a robot can be controlled remotely and if those robots can do the labor of a janitorial staff or window washer, why would a company hire workers in the USA for whom they would need to pay higher wages and provide various benefits and other legal responsibilities? Why not hire some women in Peru to do the work dirt cheap? The women can be situated in a large building—a warehouse—someplace in Lima and do the cleaning from their “station”. Maybe they would need a 5x5 space so they could do the movements of window-washing in Lima while the robot actually washes the windows of The Empire State Building. And there is no danger of a human falling, so why wouldn’t we do this for safety reasons? We don’t need self-thinking robots for any of this. We don’t need AGI for this. And how far away is this?
That’s a good deal of what constitutes the labor of our working class. What will all of those folks working at Amazon or working at McDonalds do? Retraining? For what? At what cost? Provided by whom?
The Middle Class
Does anyone really need to learn to code anymore? AI can now write code for us. Is AI doing the most advanced computer programming right now? No. But all of the entry-level coding work can be. Accounting? Humans won’t need to do the lower-level entry work. How about various administrative jobs? AI can do them. Will we need any humans at all at the DMV? The post office? Medical technicians?
AI cannot do these jobs completely on its own. AI will become a tool in these jobs for the humans that do them. But AI will make it so that one human can now do even more of the work of many.
Herein lies the rub: for many professions, AI will become a useful tool. A researcher can ask AI to analyze 17,000 DNA samples to uncover particular patterns or what have you. But in turn, a year’s worth of work that a graduate student might have gotten funding to do is now done by the AI. So how does the grad student learn to read the DNA samples and get experience? If the entry-level accounting and administrative tasks can be done with AI, then how can the fresh hire at the corporation learn the ropes so that they can be the one running the company 25 years down the road?
This is serious. This is a fundamental disruption of our Mode of Production.
Since AI can do most repetitive tasks, AI can do the entry level work that a young person fresh out of college or vocational training would do. And that entry level work is what a person does to learn the ropes and intricacies of their profession. How are we going to reproduce the various roles in our society if the steps that people used to take to learn those roles are now being done with AI?
This is going to affect the way in which we create value. The basic notion of making and selling commodities and the social relationships that form from that is going to be challenged. Labor value? Surplus value?
We Need Discussion. We Need Political Action.
With the current state of AI and its realistically projected advancements over the next few years, we are going to be facing very serious changes in labor and employment very soon. A question for both President Biden and twice impeached Donald Trump needs to be “With the threat of 25% or more unemployment by the end of the decade, what plans do you have for the US economy as we fully implement AI?”
This needs to be asked of Mike Johnson and Mitch McConnell and every other politician. What does AOC think about this? Her constituents (working-class New Yorkers) are going to be feeling this soon. The members of our government need to engage with this. And very few actually are.
Chuck Schumer did hold a multi-day conference for the Senate to review AI issues in the fall of 2023. However, much of what was said and what the Senate thought is unknown; the meetings were behind closed doors. There was some reporting on the events. Ted Cruz commented that the government shouldn’t be making regulations about technology it doesn’t understand.
Overwhelmingly the tech industry WANTS regulations from our government. They want a legal framework. Many of the big companies want it to be law that there are watermarks to identify deep fakes. The tech leaders do deserve criticism, but these are not stupid people. Many of them do see the potential problems that can arise and they want the governments of the world to take this stuff seriously. And many of them simply want the laws about AI clear so that they can go about their business of making money.
My opinion is that both Neo-Liberalism AND the fascism of Mango Man are the main ingredients in a recipe for unforeseen disaster. The latter we can and must defeat at the ballot box, but the former is so ingrained into our political system ideologically on both sides of the aisle that I am quite concerned.
The EU is implementing new policies for AI. The EU is leading on this, requiring deep fakes to be identified as such, among other things. Sam Altman thinks we will need an international regulatory agency much like the one we have for nuclear energy. Any location with more than a certain amount of computing power would need to abide by that international agency’s rules.
This needs to be a political discussion both locally and globally.
There is no stopping this. If the US should slow down its development of AI and AGI, as many in the tech industry have suggested, it would not stop China. There is a true “battle” right now between the USA and China to achieve AGI. Japan has its role. The EU and German tech have their place and specialties. There are independents out there doing their own thing. But the “war” is between the USA and China.
There is no stopping this. Even if Capitalism is destroyed and replaced a generation from now, this is still the ultimate result of capitalism. Automation may very well be its “end”.
There is no stopping this. We want to remotely explore the asteroid belt. We want to cure cancer. We want to live longer and healthier.
There is no stopping this. But “this” refers to the technology and technology has no inherent morality. Technology is a tool. We cannot stop the creation of a new tool and it is inherent in the practical nature of humans to always opt for the better/more efficient tool. It is how we use the tool that we can control and where we have choice.
So what will we choose?
My Research
There is quite a bit going on with AI. I am very deep in doing research on this subject and how it is going to change our culture and society. People like Sam Altman and Mustafa Suleyman do consider what these effects might be, but as amazingly intelligent and financially successful as these people are, their sociology and anthropology is really f’ing shitty. I understand that most folks reading this don’t know who Eric Wolf or Leslie White are, but most folks here aren’t writing books about AI that prepare the public using the analogy of the wheel or fire without really knowing a damn thing about anthropology. Or “Culture.” Suleyman needs to understand those concepts. As smart as he is, even historian and AI critic Yuval Noah Harari gets some of the anthropology wrong.
I’ll write more on the topics I’ve skipped over later in the week, and likely for several weeks to come. Essentially, I’ve embarked on a project to look at AI from the point of view of Sociology and Cultural Anthropology. I am going to use Daily Kos to work out some of these ideas while I develop them academically for a different audience. Thanks!
Here are some incredibly huge topics I’ve skipped over. I’ll address them much more in the coming days/weeks:
- UBI vs “digital Dignity”. Where will people get money from?
- AI warfare. Fear this.
- AI security forces, including police and firefighters… right? EMTs?
- AI terrorism. Fear this.
- Deep Fakes…already exist in an election year without any real regulations on the tech
- AI influencers…..already exist
- AI-based medical advances… I’m 56; I might live to 150. My 12-year-old nephews might go to 300
- AI and climate change. Here we may find hope.
- AI and education. This is so very connected to how we create value in the things we do. We will need to ask, “What is important?” And while I think there is nearly unlimited potential here, I also think the field is so entrenched in current politics and outdated models that our efforts could fail badly. This one is personal because it’s the primary means by which I earn a living.
- AI, life like robots, and porn. The desire to make pornography available is what brought us high quality online streaming of video which in turn was essential for developing our social media platforms. This same effort is being applied to create robots that feel and smell correct, not just look right. Never underestimate the dedication, ingenuity and financial resources of the porn industry.
- There are cults already. If I tell you what one of them is, your awareness of it will doom you to an eternity of torture at the “hands” of a Super-AI being.
The more academic question I am asking is this: while we are focused on how AI is going to affect our culture and society, how exactly is our culture affecting AI? But that requires elaboration, a proper definition of culture and how it is distinct from popular culture, some Foucault, maybe some biblical references… you know, social science.
And here is something you might start to see elsewhere: “No AI was utilized in the writing of this blog post.” Because in case you hadn’t noticed, AI can now write blog posts and social media posts for you. Summarize videos. Teach you math or hard science. Help you practice speaking a new language. Teach you coding. Do the coding for you.
But the DALL-E program has a very difficult time creating a centaur. The photo at the start is of the oldest example of human-made art representing something that does not exist: a forty-thousand-year-old carving of a man with a lion’s head.