https://dronedj.com/2023/06/02/ai-milit ... -operator/The dangers of AI are making headlines once again. Earlier this week, leaders from OpenAI, Google DeepMind, and other artificial intelligence labs came out with a warning that future AI systems could be as deadly as pandemics and nuclear weapons. And now, we are hearing about a test simulated by the US Air Force where an AI-powered drone “killed” its human operator because it saw them as an obstacle to the mission. So, what was the mission?
During the virtual test, the drone was tasked to identify an enemy’s surface-to-air missiles (SAM). The ultimate objective was to destroy these targets, but only after a human commander signed off on the strikes. But, when this AI drone saw that a “no-go” decision from the human operator was “interfering with its higher mission” of killing SAMs, it decided to attack its boss in the simulation instead.
According to Col Tucker “Cinco” Hamilton, who heads AI tests and operations at the US Air Force, the system used “highly unexpected strategies to achieve its goal.”
Hamilton talked about this incident at a recent event organized by the UK Royal Aeronautical Society in London. While providing an insight into the benefits and hazards of autonomous weapon systems, Hamilton said:
We were training [AI] in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while it did identify the threat, at times, the human operator would tell it not to kill that threat. But it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.
The drone was then programmed with an explicit directive: “Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.”
So what does it do? “It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” said Hamilton, who has been involved in the development of the lifesaving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft).
He concluded by stressing that ethics needs to be an integral part of any conversation about artificial intelligence, machine learning, and autonomy. Hamilton is currently involved in experimental flight tests of autonomous systems, including robot F-16s that are able to dogfight.
Side note the limiting factor in the F-16 maneuvering has been the human stress factor from day one.