
Amazon, Google, Meta and other tech companies agree to AI protections set by the White House


President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and others to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.

Biden announced that his administration has received voluntary commitments from seven US companies to ensure their AI products are safe before they hit the market. Some of the pledges called for third-party oversight of the operation of commercial AI systems, although they did not detail who would test the technology or hold the companies accountable.

“We have to be discerning and vigilant about the threats emerging technologies can pose,” Biden said, adding that companies have a “fundamental obligation” to make sure their products are safe.

“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”

The surge of commercial investment in generative AI tools that can write convincingly human-like text and produce new images and other media has drawn public fascination as well as concern about their potential to deceive people and spread disinformation, among other dangers.

The four tech giants, along with ChatGPT maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as those to biosecurity and cybersecurity, the White House said in a statement.

That testing will also examine the potential for societal harms, such as bias and discrimination, as well as the more theoretical risk of advanced AI systems gaining control of physical systems or “self-replicating” by making copies of themselves.

The companies have also committed to establishing methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images from AI-generated images known as deepfakes.

The companies will also publicly report flaws and risks in their technology, including effects on fairness and bias, the White House said.

The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass legislation regulating the technology. Company executives planned to gather with Biden at the White House on Friday as they pledged to follow the standards.

Some advocates of AI regulations say Biden’s move is a start, but more needs to be done to hold companies and their products accountable.

“It is not enough to have closed-door deliberations with corporate stakeholders resulting in voluntary safeguards,” said Amba Kak, executive director of the AI Now Institute. “We need a much more wide-ranging public deliberation, and that’s going to bring up issues that companies almost certainly won’t voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models.”

Senate Majority Leader Chuck Schumer, D-N.Y., has said he will introduce legislation to regulate AI. He said in a statement that he would work closely with the Biden administration “and our bipartisan colleagues” to build on the commitments made Friday.

Several tech executives have called for regulation, and some went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.

Microsoft President Brad Smith said in a blog post Friday that his company was making some commitments that go beyond the White House pledge, including support for regulation that would create a “licensing regime for highly capable models.”

But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first movers led by OpenAI, Google and Microsoft, as smaller players are edged out by the high cost of making their AI systems, known as large language models, comply with regulatory strictures.

The White House pledge notes that it mostly applies only to models that are “overall more powerful than the current industry frontier,” set by currently available models such as OpenAI’s GPT-4 and the DALL-E 2 image generator, and similar releases from Anthropic, Google and Amazon.

A number of countries are looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc that could restrict applications deemed to have the highest risks.

United Nations Secretary-General Antonio Guterres recently said the United Nations is the “ideal place” to adopt global standards and appointed a panel that will report back on global AI governance options later this year.

Guterres also said he welcomes calls from several countries to create a new United Nations agency to support global efforts to govern AI, inspired by models like the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

The White House said Friday that it has already consulted on the voluntary commitments with a number of countries.

The pledge is heavily focused on safety risks but does not address other worries about the latest AI technology, including the effects on jobs and market competition, the environmental resources required to build the models, and copyright concerns over the human writing, artwork and other creative work being used to teach AI systems how to produce human-like content.

Last week, OpenAI and The Associated Press announced a deal for the AI company to license AP’s archive of news stories. The amount it will pay for that content was not disclosed.
