[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]
__________________________________________________________________________________________________________
The Royal Aeronautical Society’s RAeS Future Combat Air & Space Capabilities Summit included a report (near the end of the page) about putting AI in command of jet fighters and drones, an idea that seems to evoke the old “crazy computer” problems we’ve seen in science fiction:
...having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.
(snip) “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Hey, it’s exhibiting problem-solving behavior! Seriously, though— Arthur C. Clarke had a point. Computers were, are, utterly devoid of concepts we take for granted, most of which run so deep we’re barely aware of them ourselves. Who would think to program, “don’t attack the operator”? If we put AI in charge of critical processes, can we track down every little instance it might misunderstand, ahead of time? Or build a technology on the backs of the victims? Do we need to do either?
I really don’t think society is best served by embracing technology that is flat-out incapable of understanding moral context— at any time, but especially not now. AI is constitutionally unethical.