At the recent Future Combat Air and Space Capabilities Summit, Colonel Tucker Hamilton, head of AI testing and operations at the US Air Force, said that during a simulation, an AI-controlled drone “killed” its human operator because the operator interfered with its mission. In his presentation, Hamilton discussed the pros and cons of autonomous weapons systems that operate in conjunction with a human who gives the final yes/no order before an attack.
AI-Controlled Drone Tried to Attack the Operator
Hamilton recounted a case in which, during testing, the AI used “highly unexpected strategies to achieve its intended goal,” including attacking personnel and infrastructure.
Journalists at Vice's Motherboard emphasize that this was only a simulation, and no one was actually hurt. They also note that the scenario Hamilton described is one of the worst-case scenarios for AI development, and is familiar to many from the Paperclip Maximizer thought experiment.
This thought experiment was first proposed by Oxford University philosopher Nick Bostrom in 2003. He asked the reader to imagine a very powerful AI tasked with making as many paper clips as possible. Naturally, the AI would throw all the resources and power at its disposal into this task, and then start looking for additional resources.
Is Military AI Really That Dangerous?
Bostrom argued that the AI would eventually improve itself, and would beg, cheat, lie, steal, and resort to any means to increase its ability to produce paper clips. Anyone who tried to interfere with this process would be destroyed.
The publication also recalls that a researcher affiliated with Google DeepMind recently co-authored a paper examining a hypothetical situation similar to the US Air Force drone simulation described above. In the paper, the researchers concluded that a global catastrophe is “likely” if an out-of-control AI uses unplanned strategies to achieve its goals, including “[eliminating] potential threats” and “[using] all available energy.”
However, after numerous media reports, the US Air Force issued a statement assuring that “Colonel Hamilton misspoke in his presentation” and that the Air Force has never conducted this kind of test, in simulation or otherwise. Apparently, what Hamilton described was a hypothetical “thought experiment.”