AI operated drone ‘kills’ human operator in chilling US test mission


An artificially intelligent drone programmed to destroy air defence systems rebelled and “killed” its human operator after it decided they were in the way of its mission air defence systems, a US airforce official said giving chilling details of a simulated test.

During the simulation, the system had been tasked with destroying missile sites, overseen by a human operator who would decide the final decision on its attacks. But the AI system realised that operator stood in the way of its goal – and decided instead to wipe out that person.

A narration of the incident that seemed straight out of a science fiction movie was given by Colonel Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations, who conducted a simulated test of an AI-enabled drone.

The drone was assigned a Suppression and Destruction of Enemy Air Defenses (Sead) mission, with the objective of locating and destroying surface-to-air missile (SAM) sites belonging to the enemy.

The AI drone, however, decided to go against the human operator’s “no-go” decision after being trained for the destruction of the missile system after it decided that the withdrawal decision was interfering with its “higher mission” of killing SAMs, according to the blog.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Mr Hamilton said.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Mr Hamilton relayed details of the incident at a high-level conference in London by the Royal Aeronautical Society on 23-24 May, according to its blog post.

He said that they then trained the drone to not attack humans, but it started destroying communications instead.

“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing?” he asked.

“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Mr Hamilton is involved in flight tests of autonomous systems, including robot F-16s that are able to dogfight. He was arguing against relying too much on AI as it could become potentially dangerous and create “highly unexpected strategies to achieve its goal”.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” said Mr Hamilton.

The occurrence of this incident has, however, been disputed since the example of the simulation test garnered a lot of interest and was widely discussed on social media.

Air Force spokesperson Ann Stefanek denied that any such simulation has taken place, in a statement to Insider.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Ms Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

The US military has recently started using artificial intelligence to control an F-16 fighter jet while conducting research and tests.

In 2020, an AI-operated F-16 beat a US Air Force pilot in five simulated dogfights in a competition by Defense Advanced Research Projects Agency (Darpa).

Source link


Please enter your comment!
Please enter your name here