Europe’s landmark AI law comes into force: here’s what it means for US tech companies

The European Union’s landmark artificial intelligence law officially comes into effect on Thursday — and it means tough changes for America’s tech giants.

The AI Act, a landmark law aimed at governing how companies develop, use and apply AI, received final approval from EU member states, legislators and the European Commission — the EU’s executive body — in May.

CNBC has everything you need to know about the AI Act — and how it will affect the biggest global tech companies.

What is the AI Act?

The AI Act is a piece of EU legislation governing artificial intelligence. First proposed by the European Commission in 2020, the law aims to address the negative impacts of AI.

The law primarily targets major US tech companies, which are currently the main builders and developers of the most advanced AI systems.

However, many other businesses will also be subject to this regulation, even companies that do not operate in the technology sector.

The regulation sets out a comprehensive and harmonised regulatory framework for AI across the EU, taking a risk-based approach to regulating the technology.

Tanguy Van Overstraeten, head of the technology, media and communications practice at law firm Linklaters in Brussels, said the EU AI Act is “the first of its kind in the world”.

“This could impact many businesses, especially those developing AI systems, as well as those deploying or merely using them in certain circumstances.”

The law takes a risk-based approach to regulating AI, meaning that different applications of the technology will be regulated differently depending on the level of risk they pose to society.

For example, for AI applications that are considered “high risk,” strict obligations will be imposed under the AI Act. These include adequate risk assessment and mitigation systems, high-quality training datasets to minimize the risk of bias, regular logging of activity, and mandatory sharing of detailed documentation of models with regulators to assess compliance.

Examples of high-risk AI systems include autonomous vehicles, medical devices, loan decision-making systems, educational scoring systems, and remote biometric identification systems.

The law also imposes a blanket ban on any AI applications deemed “unacceptable” in terms of risk.

AI applications that pose unacceptable risks include “social scoring” systems that rank citizens based on the aggregation and analysis of their data, predictive policing, and the use of emotion recognition technology in the workplace or school.

What does this mean for US tech companies?

American giants like Microsoft, Google, Amazon, Apple, and Meta have been actively partnering with and investing billions of dollars in companies they believe can lead the way in artificial intelligence, as the technology grows rapidly around the world.

Cloud platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud also play a vital role in supporting AI development, as massive computing infrastructure is required to train and run AI models.

In this regard, big tech companies are likely to be among the most heavily targeted under the new rules.

“The AI Act has implications far beyond the EU. It applies to any organization that has any activity or impact within the EU, which means the AI Act will likely apply to you regardless of where you are,” Charlie Thompson, senior vice president of EMEA and LATAM for enterprise software company Appian, told CNBC via email.

“This will put big tech companies under much closer scrutiny when it comes to their operations in the EU market and their use of EU citizens’ data,” Thompson added.

Meta has limited the availability of its AI model in Europe due to regulatory concerns — though the move isn’t necessarily due to the EU AI Act.

Earlier this month, the Facebook parent said it would not make its multimodal LLaMa models available in the EU, citing uncertainty over whether they would comply with the EU’s General Data Protection Regulation, or GDPR.

The company was previously ordered to stop training models on Facebook and Instagram posts in the EU over concerns this could breach GDPR.

How is generative AI handled?

Generative AI is considered by the EU AI Act to be an example of “general purpose” artificial intelligence.

This label refers to tools that are capable of performing a variety of tasks at a similar level to — if not better than — humans.

General-purpose AI models include but are not limited to OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude.

For these systems, the AI Act imposes strict requirements such as respecting EU copyright law, transparently disclosing how models are trained, conducting regular testing, and ensuring adequate cybersecurity protections.

Not all AI models are treated the same, however. AI developers say the EU needs to ensure that open-source models – which are free to the public and can be used to build custom AI applications – are not overly regulated.

Examples of open source models include Meta’s LLaMa, Stability AI’s Stable Diffusion, and Mistral’s 7B.

The EU makes some exceptions for AI models that are released as open source.

But to qualify for an exemption from the rules, open source vendors must publicly disclose their parameters, including weights, model architecture, and how the model is used, and allow “access, use, modification, and distribution of the model.”

Under the AI Act, open source models that pose “systemic” risks would not be exempt.

“There needs to be a careful assessment of when the rules apply and the roles of the stakeholders,” [who said this?] said.

What happens if a company violates the regulations?

Companies that violate the EU AI Act could face fines ranging from €35 million ($41 million) or 7% of their total global annual turnover — whichever is higher — down to €7.5 million or 1.5% of their total global annual turnover.

The amount of the fine will depend on the violation and the size of the company being fined.

That’s higher than the fines that can be imposed under GDPR, Europe’s strict digital privacy law. Companies face fines of up to €20 million or 4% of their annual global revenue for violating GDPR.
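As an illustration only, here is a minimal sketch of that “whichever is higher” cap in Python, using the €35 million and 7% figures cited above; the turnover value in the example is made up:

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 flat_cap_eur: float = 35_000_000,
                 turnover_pct: float = 0.07) -> float:
    """Upper bound on a fine for the most serious violations:
    the higher of a flat amount and a share of global annual turnover.
    (Figures from the article; lesser violations carry lower caps.)"""
    return max(flat_cap_eur, turnover_pct * global_annual_turnover_eur)

# Hypothetical company with €2 billion in global annual turnover:
# 7% of €2,000,000,000 = €140,000,000, which exceeds the €35,000,000 flat cap.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```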

Oversight of all AI models covered by the Act — including general-purpose AI systems — will fall under the jurisdiction of the European AI Office, a regulatory body set up by the Commission in February 2024.

Jamil Jiva, global head of wealth management at fintech firm Linedata, told CNBC that the EU “understands that they need to impose heavy fines on companies that violate the rules if they want the regulations to have an impact.”

Just as the GDPR demonstrated the EU’s ability to shape data privacy best practices globally, the bloc is once again trying to replicate that regulatory influence with the AI Act, but for AI, Jiva added.

It’s worth noting, however, that while the AI Act has now entered into force, most of its provisions won’t actually apply until at least 2026.

Restrictions on general-purpose AI systems will not take effect until 12 months after the AI Act comes into force.

Generative AI systems currently available on the market — like OpenAI’s ChatGPT and Google’s Gemini — are also given a 36-month “transition period” for their systems to comply with the regulation.
