Morning Open Thread is a daily, copyrighted post from a host of editors and guest writers. We support our community, invite and share ideas, and encourage thoughtful, respectful dialogue in an open forum.
This is a post where you can come to share what’s on your mind and stay for the expansion. The diarist is on California time and gets to take a nap when he needs to, or may just wander off and show up again later. So you know, it's a feature, not a bug.
Grab your supportive indulgence(s) of choice and join us, please. And if you’re brand new to Morning Open Thread, then Hail and Well Met, new Friend.
I know a few things about computers. In fact, I’d be willing to bet a six-pack that I know a great deal more than the average computer user. That is to say, I know how they work, albeit admittedly to a qualified extent. I have a computer science background in terms of education and training, but I fully disclose that I am not a computer scientist. Nor do I have anything more than a smattering of knowledge about computer code and programming languages, i.e. the stuff algorithms are written in. I also know a bit about robots and robotics, again to a qualified extent.

Most of my knowledge of computers and robots comes from my Navy days as an electronics technician, working on radar and related weapons guidance and control systems, and then from twelve years as an application engineer in digital control for industrial-scale heating and air conditioning systems. In the thirty-six years that have passed since I got out of the Navy, a whole lot has changed and advanced in computer technology and robotics, but the principles remain the same. We want computers and robots to accomplish specific tasks, and so we program them to do just that. We don’t want them “thinking” on their own and then doing something wholly unexpected and adverse, or worse, even catastrophic.
But with advances in Artificial Intelligence seemingly arriving at an exponential rate, should we be worried that computers and robots are all going to become “self-aware” and come to the conclusion, Terminator-style, that humans are just a cancer on the planet and need to be exterminated? That’s what the word “Robotpocalypse” refers to.
Maybe so.
But probably not.
For certain, it’s not going to look like this, even disregarding that this is just cartoon CGI:
Over the past year or so I’ve been watching a lot of current documentaries on this subject, like Coded Bias and The Social Dilemma (Netflix), and The Great Leap Forward and The Big Reset 2.0: How Artificial Intelligence is Changing Our Society, along with several other related videos, on YouTube. I’ve also been doing a considerable amount of reading on this subject, notably Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Cathy O’Neil; Crown Random House, 2016), and am currently reading Superintelligence: Paths, Dangers, Strategies (Nick Bostrom; Oxford University Press, 2014). [Update: I’ve finished this book. I ended up just skimming it. It was overly technical for me.]
The conclusion I have come to is that the Robotpocalypse is not coming; rather, it’s already here and happening right now, in that we have become all too trusting of and dependent on computers, and specifically on computer programs, i.e. algorithms, to do our thinking and make our decisions for us. And many of these algorithms are churning out results that are just dead wrong. These algorithms, Artificial Intelligence if you will, are denying people housing, jobs, loans, education, health care, and the freedom to travel. And they are opaque and unaccountable.
“Show me that it’s going to be fair, that it’s legal, before you put it out. That’s what we don’t have yet.” --Cathy O’Neil.
Weapons of Math Destruction:
We live in the age of the algorithm. Increasingly, the decisions that affect our lives—where we go to school, whether we get a car loan, how much we pay for health insurance—are being made not by humans, but by mathematical models. In theory, this should lead to greater fairness: Everyone is judged according to the same rules, and bias is eliminated.
But as Cathy O’Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and uncontestable, even when they’re wrong. Most troubling, they reinforce discrimination: If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data.
Tracing the arc of a person’s life, O’Neil exposes the black box models that shape our future, both as individuals and as a society. These “weapons of math destruction” score teachers and students, sort résumés, grant (or deny) loans, evaluate workers, target voters, set parole, and monitor our health.
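To make that zip-code example concrete, here is a deliberately tiny, made-up sketch (in Python) of how this kind of scoring can work. Every number, zip code, and cutoff below is invented purely for illustration; it isn’t any real lender’s model. The point is only that two financially identical applicants can get different answers just because of where they live.

    # A toy, hypothetical scoring model -- every number and zip code here is invented.
    # "Historical risk" by zip code; in a real system this would come from training
    # data that already reflects decades of unequal treatment of those neighborhoods.
    ZIP_RISK = {"00001": 0.35, "00002": 0.05}

    def loan_score(income, debt, zip_code):
        """Lower is 'safer.' Note how heavily the zip-code term weighs on the score."""
        debt_ratio = debt / income
        return 0.5 * debt_ratio + 0.5 * ZIP_RISK.get(zip_code, 0.20)

    def decide(score, cutoff=0.20):
        return "approved" if score < cutoff else "denied"

    # Two applicants with identical income and debt, but different neighborhoods.
    for zip_code in ("00002", "00001"):
        score = loan_score(income=40_000, debt=8_000, zip_code=zip_code)
        print(zip_code, round(score, 3), decide(score))

    # 00002 0.125 approved
    # 00001 0.275 denied   <- same finances; only the neighborhood differed

The applicant never sees that formula and can’t contest its weightings, and the model’s apparent accuracy simply ratifies the history baked into its data. That, in a nutshell, is O’Neil’s point about opacity and accountability.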
If you believe, as I do, that over-dependence on machine-generated decision making is creating a world rife with irremediable social injustice, and that it’s only going to get worse if we don’t do something about it, then what, if anything, can we do? Many of you, I dare say, are not going to like my advice and may disagree vehemently, but here it is: STOP using (or at least cut way back on) Facebook, Twitter, Instagram, and as much other social media as possible. Your data is being used against you, and only for the benefit of the wealthy and powerful.
I know it’s ironic, perhaps even hypocritical, that I’m using a social media platform to rail against social media, but in my defense allow me to state that it’s not social media per se that I oppose, only the way that all the data it gathers about us is used to exploit us, to manipulate us, and to extract money from us. As much as I love technology and all the ways it can be used for good, it has a terrifying dark side that we need to be aware of and guard against. In short, don’t let a machine do your thinking for you.
And enjoy some music.