Business

OpenAI and Anthropic Agree to Let the US AI Safety Institute Test Models


Jakub Porzycki | Nurphoto | Getty Images

OpenAI and Anthropic, two of the highest-valued artificial intelligence startups, have agreed to let the US AI Safety Institute test their new models before releasing them to the public, following growing industry concerns about safety and ethics in AI.

The institute, housed within the Department of Commerce at the National Institute of Standards and Technology (NIST), said in a press release that it will have “access to major new models from each company before and after public release”.

The institute was created after the Biden-Harris administration issued the US government's first-ever executive order on artificial intelligence in October 2023, mandating new safety assessments, guidance on equity and civil rights, and research on AI's impact on the labor market.

“We are pleased to have reached an agreement with the US AI Safety Institute to test our future models before releasing them,” OpenAI CEO Sam Altman wrote in a post on X. OpenAI also confirmed to CNBC on Thursday that, over the past year, the company has doubled its number of weekly active users to 200 million. Axios was first to report the figure.

The news comes a day after reports that OpenAI is in talks to raise a funding round that values the company at more than $100 billion. Thrive Capital is leading the round and will invest $1 billion, according to a source with knowledge of the matter who asked not to be named because the details are confidential.

Anthropic, founded by former OpenAI executives and researchers, was most recently valued at $18.4 billion. Anthropic counts Amazon as a lead investor, while OpenAI is heavily backed by Microsoft.

The agreements between the government, OpenAI and Anthropic “will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks,” according to Thursday’s announcement.

“We strongly support the mission of the US AI Safety Institute and look forward to working together to develop best practices and standards for the safety of AI models,” Jason Kwon, chief strategy officer at OpenAI, told CNBC.

Jack Clark, co-founder of Anthropic, said the company’s “collaboration with the US AI Safety Institute leverages their deep expertise to rigorously test our models before widespread deployment” and “enhances our ability to identify and mitigate risks, promoting responsible AI development.”

Some AI developers and researchers have expressed concerns about safety and ethics in the increasingly for-profit AI industry. Current and former OpenAI employees published an open letter on June 4 describing potential problems with the rapid advances taking place in AI and the lack of oversight and whistleblower protections.

“AI companies have strong financial incentives to avoid effective oversight, and we do not believe that tailor-made corporate governance structures are sufficient to change this,” they wrote. They added that AI companies “currently have only a weak obligation to share some of this information with governments, and no obligation to civil society,” and that they cannot “be trusted to share voluntarily.”

Days after the letter was published, a source familiar with the matter confirmed to CNBC that the FTC and the Justice Department were preparing to open antitrust investigations into OpenAI, Microsoft and Nvidia. FTC Chair Lina Khan has described her agency’s action as a “market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”

On Wednesday, California lawmakers passed a controversial AI safety bill, sending it to Gov. Gavin Newsom’s desk. Newsom, a Democrat, must decide whether to veto the bill or sign it into law by September 30. The bill, which would require safety testing and other safeguards for AI models above certain cost or computing-power thresholds, has been opposed by some tech companies for its potential to slow innovation.

WATCH: Google, OpenAI and others oppose California’s AI safety bill

