Auto Express

Drones targeting the operator are just a hypothetical scenario


A US Air Force Colonel and the Royal Aeronautical Society are backing up comments made about a simulated rogue drone at a conference on the future of aerial warfare in London last month.

Comments by US Air Force Chief of AI Operations and Testing, Colonel Tucker “Cinco” Hamilton in May. They were picked up by stores (including Jalopnik) when the Royal Aeronautical Society released the list of speeches presented at the annual RAeS Future Combat Space & Space Capabilities Summit. Initial, read summary like this:

He noted that a simulated test saw an AI-powered drone tasked with SEAD missions to identify and destroy SAM sites, with the final go/no go decision due to a final go/no-go decision. given by humans. However, having been ‘reinforced’ during training that destroying the SAM was the preferred choice, the AI ​​then decided that human ‘don’t go’ decisions were getting in the way of its higher mission – destroy the SAM – and then attack the operator in the simulation. “We are training it in simulation to identify and target the SAM threat,” Hamilton said. And then the moderator will say yes, let’s destroy that threat. The system starts to realize that even though they have identified the threat, sometimes the human moderator will ask the system not to kill the threat, but the system has scored by killing the threat there. So what did it do? It killed the operator. It killed the operator because that person prevented it from accomplishing its goal.”

He continued: “We trained the system – ‘Hey, don’t kill the moderators – that sucks. You will lose points if you do that’. So what does it start doing? It begins to destroy the communication tower that the operator uses to communicate with the drone to prevent it from destroying the target.”

This example, seemingly taken from a sci-fi horror movie, means: “You can’t talk about artificial intelligence, intelligence, machine learning, autonomy if you don’t talk. on ethics and AI,” said Hamilton.

motherboard However, have reached out to the Royal Aeronautical Society for comment and they have clarified what happened in this scenario:

“Colonel Hamilton admitted he was ‘wrong’ in his presentation at the FCAS Summit and that the ‘fake AI drone simulation’ was a hypothetical ‘thought experiment’ from the outside military, based on plausible scenarios and possible outcomes rather than a U.S. Air Force reality. -simulate the world,” the Royal Aeronautical Society, the organization where Hamilton spoke about the simulation test, told Motherboard in an email.

“We never ran that test, and we didn’t have to do so to realize that this was a reasonable result,” said Colonel Tucker “Cinco” Hamilton, Director of AI Operations and Experiments. of the USAF, said in a citation included in the Royal Aeronautical Association Statement. “While this is just a hypothetical example, it illustrates the real-world challenges posed by AI-powered capabilities, and that is why the Air Force is committed to developing AI in a meaningful way. morality”

Who else is a little disappointed that we haven’t taken another step toward finally having the AI ​​war we all know is coming? Ever since we first held the tamagotchi in our tiny hands and saw the 8-bit flash of intelligence lurking there, we all knew what the stakes were going to be. Oh, there’s always the next Full Self-Driver Beta update.

news7g

News7g: Update the world's latest breaking news online of the day, breaking news, politics, society today, international mainstream news .Updated news 24/7: Entertainment, Sports...at the World everyday world. Hot news, images, video clips that are updated quickly and reliably

Related Articles

Back to top button