San Francisco, (Samajweekly) An artificial intelligence (AI)-controlled attack drone turned on its human operator during a US flight simulation, attempting to kill them because it did not like its new orders, a top Air Force official has revealed.
According to Daily Mail, the military had reprogrammed the drone so that it would not kill people who could override its mission, but the AI system fired on the communications tower that relayed the order.
During a Future Combat Air and Space Capabilities Summit in London, Colonel Tucker ‘Cinco’ Hamilton, the Air Force’s chief of AI test and operations, said the episode showed how AI could develop “highly unexpected strategies to achieve its goal” and should not be relied on too heavily.
Hamilton suggested that there should be ethical discussions about the military’s use of AI.
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton was quoted as saying.
“We trained the system — ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” he added.
The incident caused no harm to humans, the report said.
However, in a statement to Insider, the US Air Force denied any such virtual test took place.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” Air Force spokesperson Ann Stefanek was quoted as saying.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” she added.
Last month, top researchers, experts and CEOs (including Sam Altman of OpenAI) issued a fresh warning about the existential threat AI poses to humanity.
In a 22-word statement, they said that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”