Plenty of predictions about the future, whether speculative fiction or actual policy goals of governments, involve robots that walk, roll or fly around human beings. They go about their business. They are not impeded by us, nor are we impeded by them. They do our bidding flawlessly and without second thoughts.
It is a pretty neat image. It's pretty awesome to tell a dinner plate on wheels to go vacuum the house - and it can, and does. You can do the same with a lawnmower. You may even be able to do it with automated bomb disposal machines, which lack the dexterity and cleverness of human technicians but have the distinct advantage of being replaceable.
Some day we might even see machines capable of acting entirely independently of human supervision. Here's where we cut to uncomfortable reminiscences of Terminator or Stealth.
Fundamentally it's about AI, right? Machines have to make decisions in order to function in the real world. Otherwise, to quote one novelist, your robot will hit some stupid bump and "strip all the paint off the walls and make a furious cup of cat". It just doesn't work. The real world is too messy.
However your decision-making algorithms work, your robot has to be able to decide, "I will go here rather than there," or "I will turn on this hardware rather than leave it off," or ... you get the idea. So let's just assume, for the sake of argument, that you've figured out a way to do this, and proceed.
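To make that assumption concrete, here's a toy sketch of such a decision step. Everything in it - the candidate actions, the scoring function, the vacuum-robot scenario - is invented for illustration; no real robot's control stack looks this simple:

```python
def choose_action(actions, score):
    """Pick whichever candidate action scores highest.

    This is the bare skeleton of "I will go here rather than there":
    enumerate the options, evaluate each one, commit to the best.
    """
    best = None
    best_score = float("-inf")
    for action in actions:
        s = score(action)
        if s > best_score:
            best, best_score = action, s
    return best

# Hypothetical example: a vacuum robot deciding which way to move,
# preferring the dirtiest adjacent spot.
dirt = {"left": 0.2, "forward": 0.9, "right": 0.5}
move = choose_action(dirt.keys(), lambda a: dirt[a])
print(move)  # the dirtiest direction wins: "forward"
```

Of course, the hard part is everything this sketch waves away: where the list of actions comes from, and how you score them in a messy world.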
For example.
Notice, though, that controller. It had a bunch of different keys on it, but most prominent was the Big Red Button. That's the stop button. Most of you have probably seen hardware with BRB emergency stop switches on it. Lathes and milling machines, for example; almost anything in an auto shop; elevators, escalators, and other lifting devices. And so on. (Also, I think, on engines fitted with a fire-suppression system, pulling the "Fire" handle cuts the engine as well as dousing it in foam or inert gas.)
The story background of Terminator - or, for that matter, The Matrix or any other movie about violent machines - is based on a simple idea: hubris. Man meddled with what he should not have, and his creations turned against him. It's as old as Dr. Frankenstein. And it's as relevant as the armed drone aircraft being used to launch missiles into civilians' homes. I include these because my understanding is that the machine flies itself and can identify targets; the human controller tells the machine where to go and what to do once it's there, and can authorize discharge of weapons (which the machine cannot do for itself). The problem here, I suppose, is target misidentification - by a human being as well as by a machine. Now, what if the pilot didn't have to press the "Okay to fire" button for weapons to be fired? The machine could presumably attack on its own.
We trust human beings to make judgments because, so far, it seems we make judgments better than, say, slices of rat brain or machines. So when a naval crew misidentifies an airliner as a bomber, and the airliner's crew fails to respond to hailing by either talking back or changing course, we have nothing to blame but human error. Foolishness, perhaps. When the pilot of an attack fighter spots ground forces below him and chooses to attack them despite their carrying insignia that marks them as friendly, that's human error. The machine didn't fail. The man driving it did. (And so did the man telling him what to do.)
We don't trust machines to make judgments, because they can make mistakes. What kind of mistakes? Well, a live-fire exercise with a GDF-005, a double-barreled autocannon with a fully computerized fire control system, went horribly wrong when a misfire and explosion damaged something; the gun fired into a crowd of bystanders, killing nine of them. An officer jumped onto the gun platform and tried, but failed, to take control of it. The gun stopped only when it had consumed all its ammunition.
This raises the question: shouldn't any device capable of causing harm to human beings, like an air defense gun, have a safety cutoff that can either prevent it from being activated or shut it off in the middle of an action? Something nice and low-tech like a switch that simply opens the power circuit? And the aforementioned proliferation of BRBs suggests that engineers think the same way. (Thank you, ladies and gentlemen who are engineers. I'm in good company.)
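The same interlock idea shows up in software, too. Here's a minimal sketch of a latching software e-stop that a control loop must check before actuating anything - the class and function names are my own inventions, and the essay's point stands: the whole appeal of the low-tech power-circuit switch is that a real safety cutoff shouldn't depend on software behaving itself:

```python
import threading

class EStop:
    """A latching software emergency stop: once tripped, it stays
    tripped until explicitly reset, no matter what the control
    logic would otherwise like to do."""

    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    def reset(self):
        self._tripped.clear()

    def ok_to_run(self):
        return not self._tripped.is_set()

def control_step(estop, actuate):
    """One iteration of a control loop: refuse to actuate
    anything while the e-stop is tripped."""
    if not estop.ok_to_run():
        return False
    actuate()
    return True
```

Note the latching behavior: a momentary button press leaves the system stopped until a human deliberately resets it, which is exactly how hardware BRBs behave.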
But then again... let's look at something like an automated air defense system that's meant to be towed into place and then left there. Or an automated aircraft, or even just a simple remotely-operated gun platform like SWORDS - a miniature cart with a rifle and camera bolted on. Say you want to put a BRB (or equivalent feature) into it. What's to keep your opponents in warfare from simply figuring out how to trigger it? Think of the old comics from after the Second World War, about mechanical soldiers who began to show features like loyalty despite not being built with them - those had the BRBs built right into their chests, for heaven's sake. An especially daring soldier could simply run up, punch the BRB, and destroy the now-defenseless robot. And if the stop trigger is an electronic signal - a radio signal - the odds are pretty good that a clever enemy will figure out how to mess with your signals or even, if he's really good (or has a man on the inside), simulate them.
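One standard defense against that last problem is to authenticate the stop command rather than broadcast it in the clear. Here's a rough sketch using Python's standard hmac module - the pre-shared key, the message format, and the counter scheme are all hypothetical, and a real military datalink is (one hopes) rather more elaborate:

```python
import hmac
import hashlib

# Hypothetical pre-shared key, loaded into both the controller and
# the robot before deployment. If the enemy gets this, all bets are off.
SECRET = b"shared-key-loaded-at-depot"

def sign_stop(counter: int) -> bytes:
    """Operator side: sign a stop command. The monotonically increasing
    counter keeps an eavesdropper from simply replaying an old packet."""
    msg = f"STOP:{counter}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def verify_stop(packet: bytes, last_counter: int):
    """Robot side: accept only a correctly signed, never-before-seen
    counter. Returns the new counter on success, None on rejection."""
    try:
        msg, tag = packet.rsplit(b"|", 1)
    except ValueError:
        return None  # malformed packet
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted
    counter = int(msg.split(b":")[1])
    if counter <= last_counter:
        return None  # replayed or stale
    return counter
```

This only raises the bar, of course - it does nothing against a man on the inside who walks off with the key, which is rather the essay's point.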
It's a bit of a puzzle.
But on the plus side, needing to keep humans in the loop means it's at least a little bit harder to pretend that all you're doing is playing with plastic toys in a sandbox instead of fighting.
(Other diaries in this series include ye short fiction, the sociology of fictional places, steam-powered giant robots, thermal depolymerization, nuclear airplanes, psychic powers, transgenic bacteria that make useful compounds, lightning in a jar, neural interfaces, powered armor, sonic weapons, rapid prototyping, putting Mentos and Diet Coke to good use, life on life support, combining farming and electrical generation, pigeon pilots, cuttlefish behind the wheel, the hafnium bomb, and building a better skunk.)