
Tech giants pledge commitment to AI safety – including ‘kill switch’


Major technology companies, including Microsoft, Amazon and OpenAI, on Tuesday reached a landmark international agreement on artificial intelligence safety at the AI Safety Summit in Seoul.

Under the agreement, companies from countries including the US, China, Canada, the UK, France, South Korea and the United Arab Emirates will make voluntary commitments to ensure the safe development of their most advanced AI models.

Where they have not already done so, each AI model maker will publish a safety framework outlining how it measures the risks of its frontier models, such as the risk of the technology being misused by bad actors.

These frameworks will include “red lines” identifying the kinds of risks associated with frontier AI systems that would be deemed “unacceptable” – including, but not limited to, automated cyberattacks and the threat of biological weapons.

In such extreme cases, the companies say they will deploy a “kill switch” and stop developing their AI models if they cannot guarantee that these risks have been mitigated.

Rishi Sunak, the British Prime Minister, said in a statement on Tuesday: “This is the first time in the world that so many leading AI companies from so many different parts of the world have agreed to the same commitments on AI safety.”

“These commitments ensure the world’s leading AI companies will provide transparency and accountability about their plans for developing secure AI,” he added.

The pact agreed on Tuesday expands on a previous set of commitments made by companies involved in developing generative AI software at the UK’s AI Safety Summit at Bletchley Park, England, in November last year.

The companies have agreed to take input on these thresholds from “trusted actors”, including their home governments where appropriate, before releasing them ahead of the next planned AI summit – the AI Action Summit in France – in early 2025.

The commitments agreed on Tuesday apply only to so-called “frontier” models. The term refers to the technology behind generative AI systems such as OpenAI’s GPT family of large language models, which powers the popular ChatGPT chatbot.

Since ChatGPT was first introduced to the world in November 2022, regulators and technology leaders have grown increasingly worried about the risks posed by advanced AI systems capable of generating text and visual content as good as, or better than, what humans can produce.


The European Union has sought to limit uncontrolled AI development with the creation of the AI Act, which was approved by the EU Council on Tuesday.

The UK, however, has not proposed formal legislation for AI, instead opting for a “soft” approach to regulation that calls on regulators to apply existing laws to the technology.

The government recently said it would consider legislating for frontier models at some point in the future, but has not committed to a timeline for formal legislation.
