Runaway AI is an extinction risk, experts warn


Leading figures in the development of artificial intelligence systems, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building may someday pose an existential threat to humanity comparable to nuclear war and pandemics.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the one-sentence statement, released today by the Center for AI Safety, a nonprofit organization.

The idea that AI could become unmanageable and accidentally or intentionally destroy humanity has long been debated by philosophers. But in the past six months, following some surprising and unnerving leaps in the performance of AI algorithms, the issue has been discussed far more widely and seriously.

In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, an AI startup focused on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio, two of the three academics awarded the Turing Award for their work on deep learning, the technology underpinning modern advances in machine learning and AI, as well as dozens of entrepreneurs and researchers working on cutting-edge AI problems.

“The statement is a great initiative,” says Max Tegmark, a professor of physics at the Massachusetts Institute of Technology and director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI. In March, Tegmark’s institute published a letter calling for a six-month pause on the development of advanced AI algorithms so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.

Tegmark said he hopes the statement will encourage governments and the public to take the existential risks of AI more seriously. “The ideal outcome is that the AI extinction threat becomes mainstream, so that people can discuss it without fear of being ridiculed,” he added.

Dan Hendrycks, director of the Center for AI Safety, compared the current moment of interest in AI to the debate among scientists about creating nuclear weapons. “We need to have the conversations that nuclear scientists had before the creation of the atomic bomb,” Hendrycks said in a quote released alongside his organization’s statement.

The current tone of warning is tied to recent leaps in the performance of AI algorithms known as large language models. These models consist of a specific type of artificial neural network that is trained on enormous quantities of human-written text to predict which words should follow a given sequence. Given enough data and additional training in the form of human feedback on good and bad answers, these language models can generate text and answer questions with impressive eloquence and apparent knowledge, even if their answers are often riddled with mistakes.
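
To make the next-word-prediction idea concrete, here is a minimal, self-contained Python sketch. It simply counts which word follows which in a tiny made-up corpus and then generates text greedily; actual systems such as GPT-4 use very large transformer neural networks trained on vastly more text, with human feedback layered on top, but the underlying training signal of predicting the next word is the same. The corpus and function names below are illustrative, not taken from any real system.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models train on hundreds of billions of words.
corpus = (
    "ai systems learn patterns from text . "
    "ai systems learn to predict the next word . "
    "models learn to predict the next word from text ."
)

# "Training": count how often each word follows each preceding word.
follower_counts = defaultdict(Counter)
tokens = corpus.split()
for prev_word, next_word in zip(tokens, tokens[1:]):
    follower_counts[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    followers = follower_counts.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

# Generate a short continuation one word at a time, always taking the top guess.
word = "ai"
sequence = [word]
for _ in range(6):
    word = predict_next(word)
    sequence.append(word)
print(" ".join(sequence))  # prints: ai systems learn to predict the next
```

A neural language model replaces the count table with learned parameters, which lets it generalize to word sequences it has never seen before, but the generation loop above mirrors how such systems produce text one token at a time.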

These language models have proven increasingly coherent and capable as they are fed more data and computing power. The most powerful model built to date, OpenAI’s GPT-4, can solve complex problems, including some that appear to require forms of abstraction and common-sense reasoning.
