AI’s Next Fear: Hollywood’s Killer Robots Become Military Tools


When President Biden announced severe restrictions in October on sales of the most advanced computer chips to China, he sold it in part as a way of giving American industry a chance to restore its competitiveness.

But at the Pentagon and the National Security Council, there is a second agenda: arms control.

In theory, if the Chinese military cannot get the chips, its efforts to develop AI-driven weapons could be slowed. That would give the White House, and the world, time to work out some rules for using artificial intelligence in everything from sensors and missiles to cyberweapons, and ultimately to guard against some of the nightmares conjured by Hollywood: autonomous killer robots and computers that lock out their human creators.

Now, the fog of fear surrounding the popular ChatGPT chatbot and other generative AI software has made limiting chips to Beijing look like only a temporary fix. When Mr. Biden dropped by a White House meeting on Thursday with technology executives grappling with how to limit the risks of the technology, his first comment was that what they are doing has "great potential and great danger."

It was a reflection, his national security aides said, of recent classified briefings on the potential of the new technology to fuel wars and cyberconflicts and, in the most extreme case, to shape decision-making on the use of nuclear weapons.

But even as Mr. Biden issued his warning, Pentagon officials, speaking at technology forums, said they thought the idea of a six-month pause in developing the next generation of ChatGPT and similar software was a bad idea: The Chinese won't wait, and neither will the Russians.

"If we stop, guess who won't: potential adversaries abroad," the Pentagon's chief information officer, John Sherman, said on Wednesday. "We have to keep moving."

His blunt statement underscored the tension running through the defense community today. No one really knows what these new technologies are capable of when it comes to developing and controlling weapons, and no one knows what kind of arms control regime, if any, might work.

The forebodings are vague but deeply disturbing. Could ChatGPT empower bad actors who previously lacked easy access to disruptive technology? Could it accelerate confrontations between the superpowers, leaving little time for diplomacy and negotiation?

"The industry isn't stupid here, and you've seen attempts at self-regulation," said Eric Schmidt, the former Google chairman who served as the first chairman of the advisory Defense Innovation Board from 2016 to 2020.

"So there is a flurry of informal conversations now taking place in the industry — all informal — about what the rules of AI safety would look like," said Mr. Schmidt, who has written, with the former secretary of state Henry Kissinger, a series of articles and books about the potential of artificial intelligence to upend geopolitics.

The initial attempts to put roadblocks into the system are clear to anyone who has tested early iterations of ChatGPT. The bots will not answer questions about how to harm someone with a drug, for example, or how to blow up a dam or cripple nuclear centrifuges, all operations the United States and other nations have engaged in without the benefit of artificial intelligence tools.

But those blacklists of actions will only slow misuse of the systems; few people think they can stop such efforts entirely. There is always a hack to get around safety limits, as anyone who has tried to silence the insistent beeps of a car's seat-belt warning system can attest.

Though the new software has popularized the issue, it is hardly a new problem for the Pentagon. The first rules on developing autonomous weapons were published a decade ago. The Pentagon's Joint Artificial Intelligence Center was established five years ago to explore the use of artificial intelligence in combat.

Some weapons already operate on autopilot. Patriot missiles, which shoot down missiles or aircraft entering protected airspace, have long had an "automatic" mode that lets them fire without human intervention when overwhelmed by incoming targets faster than a human could react. But they are supposed to be supervised by humans who can abort attacks if necessary.

The assassination of Mohsen Fakhrizadeh, Iran's top nuclear scientist, was carried out by Israel's Mossad using an autonomous machine gun assisted by artificial intelligence, though there appears to have been a high degree of remote operation. Russia said recently that it has begun to manufacture — but has not yet deployed — its undersea Poseidon nuclear torpedo. If it lives up to Russia's advertising, the weapon would be able to travel across an ocean autonomously, evading existing missile defenses, to deliver a nuclear weapon days after it is launched.

So far there is no international treaty or agreement that deals with such autonomous weapons. In an era when arms control agreements are being abandoned faster than they are negotiated, there is little prospect of such an accord. But the type of challenge presented by ChatGPT and its peers is different, and in some ways more complicated.

In the military, AI-infused systems can speed up battlefield decision-making to the point that they create entirely new risks of accidental strikes, or of decisions made on misleading or deliberately false alerts of incoming attacks.

"A core problem with AI in the military and in national security is how you defend against attacks that are faster than human decision-making, and I think that issue is unresolved," Mr. Schmidt said. "In other words, the missile is coming in so fast that there has to be an automatic response. What happens if it's the wrong signal?"

The Cold War was filled with stories of false warnings — once because a training tape, meant for practicing nuclear response, was somehow fed into the wrong system and set off an alert of a massive incoming Soviet attack. (Good judgment led everyone to stand down.) Paul Scharre, of the Center for a New American Security, noted in his 2018 book "Army of None" that there were "at least 13 near-use nuclear incidents from 1962 to 2002," which "lends credence to the view that near-miss incidents are normal, if terrifying, conditions of nuclear weapons."

For that reason, when tensions among the superpowers were far lower than they are today, a series of presidents tried to negotiate building more time into nuclear decision-making on all sides, so that no one rushed into conflict. But generative AI threatens to push countries in the other direction, toward faster decision-making.

The good news is that the major powers are likely to be careful — because they know what the response from an adversary would look like. But so far there are no agreed-upon rules.

Anja Manuel, a former State Department official and now a principal in the advisory group Rice, Hadley, Gates and Manuel, wrote recently that even if China and Russia are not ready for arms control talks on AI, meetings on the topic would result in discussions of which uses of AI are seen as "beyond the pale."

Of course, the Pentagon would also worry about agreeing to multiple limits.

"I fought very hard to get a policy that if you have autonomous elements of weapons, you need a way of turning them off," said Danny Hillis, a computer scientist who pioneered parallel computers used for artificial intelligence. Mr. Hillis, who also served on the Defense Innovation Board, said that Pentagon officials pushed back, saying, "If we can turn them off, the enemy can turn them off, too."

The greater risk may come from individuals, terrorists, ransomware groups or smaller countries with advanced cyber skills — like North Korea — that learn how to clone a smaller, less restricted version of ChatGPT. And they may find that generative AI software is perfect for speeding up cyberattacks and targeting disinformation.

Tom Burt, who leads trust and safety operations at Microsoft, which is racing ahead with using the new technology to revamp its search engines, said at a recent forum at George Washington University that he thought AI systems would help defenders detect anomalous behavior faster than they would help attackers. Other experts disagree. But he said he feared artificial intelligence could "drive" the spread of targeted disinformation.

All of this heralds a new era of arms control.

Some experts say that because the spread of ChatGPT and similar software cannot be prevented, the best hope is to limit the specialized chips and other computing power needed to advance the technology. That will doubtless be one of many different arms control plans put forward over the next few years, at a time when the major nuclear powers, at least, seem uninterested in negotiating over old weapons, much less new ones.
