We in the developed world inhabit a big contradiction. In theory, we have the richest, healthiest, and most technologically advanced societies in human history. Most of us have enough to eat. All but the poorest have the safest, most various and (if we are careful) the healthiest diets ever. We have access to wildly unprecedented amounts of information and entertainment—so much so that we have to restrict our children’s online access to make sure they get enough sleep and don’t obsess about the wrong things. We commute to work, in an hour or so, over distances that once took our forebears half a day to walk or ride. We can travel around the globe in less than a day. More of our people go to college and graduate school than ever before. We developed genome-based vaccines against Covid in record time, helping to shorten a dangerous pandemic.
So why aren’t we happy? Why are so many of us anxious, angry, depressed and fearful? Why are mental illness, drug overdoses and suicide exploding? Why are we Americans more divided politically than at any time since our Civil War or the Vietnam War? Why can’t our pols agree on much of anything? Why is extremism running rampant among us? Why do we seem to be losing the thread of civilized society—that thin veneer of respect and civility without which even relentless growth in GDP can only gild our self-destruction?
I’m a progressive, but I’m also a capitalist, sort of. In the sometimes bloody battles and frequent economic trials of the last century, capitalism beat socialism in productivity and innovation virtually every time. (There is no real socialism in America today, only the bare word hurled as an epithet in political propaganda.)
But there’s capitalism and there’s capitalism. In the First Gilded Age, a century ago, unregulated capitalism threw us into the Great Depression, which put 25% of us out of work and led to the most horrible war in history. In our Second Gilded Age, still ongoing, capitalism has brought us the greatest inequality in human history: over a thousand billionaires richer and more powerful than ever was Genghis Khan, some living cheek by jowl with squalid homeless settlements in the San Francisco Bay Area, one of the richest and most fortunate areas not just in our nation, but in human history.
So just like every good thing, capitalism needs management. It needs rules, guard rails, and restrictions. When it runs rampant, we get unpleasant, divisive and dangerous gilded ages. When it runs regulated, with strong rules and strong labor unions, it produces the promise of equality and the rapid social, educational and technological advancement that characterized our sixties through eighties in America. The Fed under Jay Powell is just now re-learning this valuable lesson with our banking system, as it re-discovers that bankers, driven by profit, do not always intelligently regulate their own risk-taking.
But the ceaseless left-right struggle between ordinary people and our oligarchs is not the subject of this essay. With good will and care, we could figure that out and make the compromises needed to let our society become a well-oiled happiness machine. We used to be good at that: finding practical solutions to make everybody better off. But we no longer seem to be. Why that is happening to us is the subject of this essay.
I submit that we’ve simply forgotten who we are. We are naked apes. We evolved in small clans of about thirty individuals. In those clans, we got to know everybody as if they were family. We knew their personalities, desires, needs and idiosyncrasies intimately. So we could trust them, not because everyone was inherently trustworthy, but because we knew them well.
That doesn’t mean we could trust everything our clans-mates said or did. It meant that we could predict with some confidence what they would “like,” what they might do, and when and how we could rely on them. So each little clan was like a well-oiled machine of humanity, every individual knowing each other intimately and working together knowingly, however awkwardly. You could trust your neighbor because you knew, with the certainty of long familiarity, how he or she would react to almost anything.
Now we have a different system. We have Facebook, which now disguises its agonies, social depredations, and discomforts under a new name, Meta. It invites you, by virtue of the “miracle” of electronic communications, to have a thousand or more “friends.”
But you can’t. It’s physically impossible.
I’ve done the numbers. If you had 1,000 friends and spent all sixteen of your waking hours on them, you could devote only 57.6 seconds to each daily, even if you didn’t eat, work, commute (without texting) or ever read a book or newspaper. Just contrast that mentally with your relationship with each neighbor in our hypothetical thirty-person evolutionary clan.
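The arithmetic is simple to check, assuming a sixteen-hour waking day devoted to nothing but those "friends":

```python
# Seconds available per "friend" per day, assuming sixteen waking hours
# spent on nothing else: no meals, no work, no commute, no reading.
waking_seconds = 16 * 60 * 60      # 57,600 seconds in sixteen hours
friends = 1_000

seconds_per_friend = waking_seconds / friends
print(seconds_per_friend)          # 57.6 seconds each, per day
```

Less than a minute a day, under the most generous possible assumptions; with a real job, family, and sleep, the figure shrinks toward nothing.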
Facebook offers you only an inhuman illusion of friendship, with all the social and psychological problems that portends. Most adults understand this truth intuitively, because they know what real friendship is from having practiced and experienced it. Facebook has a greater effect on youth, who are in the process of discovering what “friendship” really means by trial and error. (That’s only one of six good reasons why I deleted my Facebook account, which I had rarely used, five years ago.)
But Facebook is only the tip of the iceberg. It’s nearly unique because Mark Zuckerberg, its self-willed, nerdy CEO, is one of the very few CEOs of major corporations—if not the only one—who rules his company absolutely, by sheer numerical control of its voting shares. He rules Meta as Genghis Khan once did Mongolia, despite our over-invested and highly regulated corporate culture. So much for “corporate democracy”!
But there’s more, much more. Remember those nineteenth-century sweatshops with endless rows of identical sewing machines (for women) or lathes (for men)? They oppressed hapless workers equally with “efficiency experts” patrolling the aisles, “no talking” rules and limited bathroom breaks.
Without really thinking about it, we’ve managed to reproduce that same inhuman, oppressive environment for service workers. What are telephone-queue “boiler rooms,” or the equivalent for “chat” agents, but modern “service” sweatshops, with communication devices replacing the endless rows of sewing machines or lathes?
So-called “agents” or “reps” communicate with “customers” they will never see or know, by reading from a script. They have no authority to modify any rule or policy, let alone to negotiate. Remote reps, in foreign countries or other states, often know little or nothing about the business that they represent, let alone its products or services. Their crowning achievement comes when they are forced to recite, after failing to solve your problem, “Is there anything else I can do to help you today?”
Can you imagine any more grotesque caricature of a normal, human “customer relationship”? Again, just contrast it mentally with a relationship in that 30-person clan, for example, with a witch doctor trying (without science) to cure your ailment. Unlike the witch doctor, the phone or chat rep knows to a virtual certainty that he or she will never deal with you again. What basis for a human relationship is that?
There is no follow-up, no continuity, and no expectation of any. The relationship is like a customer watching a puppet show of one, with the script-meister pulling the strings, often in disregard of what the customer says.
It gets worse. Inhumanity has now infected our health-care system, too. In the good old days, you used to have a main doctor, who knew you intimately—as a person and as a patient—from frequent visits over years. Even within a big city, your doctor was like one of those thirty people in your personal clan. He or she knew your personality, your quirks, your hypochondria or stoicism, your fears and your physical weaknesses. So he or she could quickly separate fears and quirks from something that might be serious. Some doctors even made house calls, especially for emergencies. (At least they still did when I was in high school.)
Now, if you are lucky, you might get a telemed appointment for an urgent condition. If something is really emergent, you won’t likely get even that. You’ll have to go to a hospital emergency room, or to an “urgent care clinic,” which can bandage wounds and dispense antibiotics and maybe antivirals, but will refer you to the emergency room for anything serious or complex. There you will deal with doctors who are competent but most likely have never seen you before and never will again. There is absolutely no basis for a human relationship, let alone for the knowledge of long familiarity that can be critical in medical diagnosis. It’s the phone queue or chat room all over again, but this time it could mean life or death.
Yes, you might still have a personal physician, with the august title of “primary care provider,” or “PCP,” meaning basically a non-specialist. But your PCP cannot treat you for any emergent condition because you simply can’t get an appointment in time. So the anonymous intern or resident at the local emergency room—perhaps after an urgent-care clinic has failed to solve your problem—becomes the equivalent of the chat-room rep. There’s no continuity of care, let alone any chance for a human relationship. Your “physician” might as well be a computer running an AI.
This pathological, inhuman health-care system exists not by accident, but by design. I recently got a new PCP, in a city where I spend part but not all of each year, after my earlier PCP there retired. By sheer accident, I was able to see my new PCP for an emergent condition on a previously scheduled appointment. He reported relentless pressure from his business management to keep his daily schedule absolutely filled, weeks in advance, so as never to “suffer” any down time. This left him no room to accommodate “drop-ins,” or even days-in-advance appointments, to address emergent conditions. In other words, his system was designed to make anonymous emergency-room doctors everyone’s PCPs, and so to take the human relationship out of medical care entirely.
I don’t want to prolong this essay unnecessarily, so I’ll make just three more points. First, this phenomenon of dehumanization is ubiquitous. Computer and Internet technology are so powerful, and seem so “efficient,” that we have allowed them to squeeze human interaction out of almost every business or non-profit transaction, including online “education” during the pandemic.
We buy our stuff on Websites. We get our information from them. We try to exchange stuff or get it fixed on Websites. We even register to vote (if not actually vote) on them.
So, as you go about your daily online or on-phone life, you’ll be able to recognize many other examples of squeezing the humanity out of daily transactions. Not the least is the ubiquitous substitution of often useless and annoying automated telephone menus for human receptionists who once knew your name, your business (and a bit of the business’ business!) and your likes and dislikes.
Second, the primary driver of this relentless push toward inhumanity is the profit motive. As taught in business schools today, it’s not a smart profit motive; it’s a clueless, short-term one.
How much more money could businesses make by hiring real people to know their customers and deal with their problems and questions personally? How much repeat business could they get? How many ideas for improving products and systems could come from real, human interaction with customers? We’ll never know, because business schools have taught that “efficiency” increases profit, apparently without any consideration of the human side of business. We’ll never even know whether profit and repeat business might increase if telephone queues made an automatic effort, by recognizing your phone number, to connect you with one of a small group of reps, so that you might have a chance of connecting with the same human being more than once.
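That last idea is easy to sketch. Here is a minimal illustration of such “sticky” routing, in which a caller’s number is hashed deterministically onto a small pool of reps, so the same caller tends to reach the same human being on every call. (The rep names and phone number are, of course, hypothetical.)

```python
import hashlib

def assign_rep(phone_number: str, reps: list[str]) -> str:
    """Deterministically map a caller's number to one rep in a small pool.

    Because the mapping is a pure function of the phone number, a repeat
    caller is routed to the same rep every time, giving the two a chance
    to actually know each other.
    """
    digest = hashlib.sha256(phone_number.encode()).hexdigest()
    return reps[int(digest, 16) % len(reps)]

reps = ["Alice", "Bob", "Carol", "Dave"]
first_call = assign_rep("+1-415-555-0100", reps)
repeat_call = assign_rep("+1-415-555-0100", reps)
assert first_call == repeat_call   # same caller, same rep, every time
```

A real system would need a fallback when the assigned rep is busy or has left the company, but the point stands: the continuity our phone queues lack is a few lines of code away, if anyone cared to write them.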
Finally, the recent explosion of artificial intelligence that can “communicate” (really, assemble information) as if human is going to make this phenomenon worse, much worse. Of course an AI is not really human. It doesn’t have likes or dislikes. It doesn’t have tendencies or a personality, except what’s programmed into it. So you can’t have a human relationship with it. You can’t maintain “continuity” and “trust” with it as you might have with a member of your 30-person clan. A new element of programming, or a new data point or essay vacuumed up from the Web, could change its “personality” in a microsecond. Isn’t that what the reporter who got an AI to “fall in love” with him found out?
Thus, there is no human basis for “trusting” an AI. The more we rely on AIs for our information, the less trust our society will have. Distrust and discord will increase exponentially.
At the end of the day, the inhuman use of computer systems, including AI, could explain the so-called “Fermi Paradox.” That’s the fact that, although there are trillions of stars in our Universe, and probably millions of planets in the “Goldilocks zone” for carbon-water-based life, we don’t seem able to detect any intelligent species but our own.
Maybe when other species get to our stage of technological development, they, too, lose sight of their evolutionary origins. Maybe their evolution, like ours, involved conflict and competition between individuals and groups. So maybe, when they lose sight of their intelligent “humanity” and its evolutionary origins, they, too, lose trust and respect for each other. Maybe they, too, neglect their technology’s effect on their planets and their climates. Maybe they, too, can get to the point where distrust becomes so rampant, and automated “intelligence” so ubiquitous, that AIs start throwing nukes around. These terrible possibilities seem far more probable than that our little blue planet is the only one of a myriad of good candidates on which intelligent carbon-water-based life could have evolved.