The EU Takes the Lead in AI Regulation – But the New Rules Will Have Global Implications

The European Union Artificial Intelligence Act is intended to create a balance between regulation and innovation – and its implementation will be felt by partners all over the world.

Martha Buyer

December 26, 2023


This article was originally published in No Jitter.

Early in December, policymakers in the European Union took important and decisive steps toward taking the lead in global AI regulation. This action, though not yet ready for codification, will have a significant impact on the development and evolution of AI regulation worldwide. Interestingly, according to a recent article in MIT Technology Review, one of the key drivers in the adoption and deployment of GDPR was the perceived control and/or ownership of EU citizens’ data by American and Chinese tech entities.

The European Union Artificial Intelligence Act (AIA) has a much wider scope and is likely to have a significantly greater impact on global AI regulation going forward than GDPR did. Although not yet final, the AIA also intends to strike a balance between regulation meant to safeguard both individuals and the data that exists about them, and innovation that is enabled by access to vast quantities of data. The final act is expected to be released by the end of 2024, with an effective date in 2025 or 2026.

According to Thierry Breton, the current EU Commissioner for Internal Market, “the AIA is much more than a rulebook—it’s a launchpad for EU startups and researchers to lead the global AI race.” While it may have originated within the EU, the expectation is that its impact will be global.


The AIA addresses what are known as “foundation models,” which the IBM Research Blog defines as “flexible, reusable AI models that can be applied to just about any domain or industry task.” The definition is sufficiently generic and large-scale to cover a lot of AI conceptual real estate, but the concern is what happens when data collected under that broad definition is then used to solve a very specific use case.

Further, what happens when a decision based on data gathered from a broad array of sources is applied to a specific problem, and the answer is not only incorrect, but results in quantifiable harm, and liability as well?

With these process and legal challenges in mind, the EU negotiators created a risk-based framework, according to the official EU document. It includes rules that will:

• Address risks specifically created by AI applications;
• Propose a list of high-risk applications;
• Set clear requirements for AI systems for high-risk applications;
• Define specific obligations for AI users and providers of high-risk applications;
• Propose a conformity assessment before [an] AI system is put into service or placed on the market;
• Propose enforcement after such an AI system is placed on the market; and
• Propose a governance structure at the European and national levels.


Further, this model defines four levels of risk in AI: minimal risk, limited risk, high risk, and unacceptable risk. Each level is carefully defined, but the takeaway is that the level of regulation is proportional to the risk category into which a system falls.
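For readers who think in code, that proportionality idea can be sketched as a simple mapping from risk tier to obligations. This is a minimal illustration, not the Act’s text: the four tier names come from the framework as described above, while the specific obligations attached to each tier are assumptions made for demonstration purposes.

```python
# Minimal sketch of the AIA's risk-proportional structure. The tier names
# come from the draft Act as reported; the obligations listed here are
# illustrative assumptions, not quotations from the legislative text.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical obligations that grow with the risk tier.
OBLIGATIONS = {
    RiskTier.MINIMAL: ["no additional requirements"],
    RiskTier.LIMITED: ["transparency disclosures to users"],
    RiskTier.HIGH: [
        "pre-market conformity assessment",
        "post-market enforcement and monitoring",
        "defined obligations for providers and users",
    ],
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```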

Although the AIA has yet to receive final approval from the EU, given the speed with which AI applications and products are expected to reach the market, it contains provisions signaling that its rules will be vigorously enforced. As currently defined, those enforcement tools will create much larger compliance obligations than exist under GDPR, including stiff financial penalties for non-compliance calculated as a percentage of annual global revenue (yes, you read that correctly!), far more severe than anything even contemplated under GDPR rules. For multinational corporations, this means serious money for non-compliance, and that’s after the legal fees have been paid (and those won’t be peanuts either).
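To make that scale concrete, here is a back-of-the-envelope sketch. The Act ties maximum fines to a share of annual global revenue; the 7% rate and the revenue figure below are placeholder assumptions used purely to show the order of magnitude, not figures from the Act.

```python
# Hypothetical illustration of a revenue-based penalty. Both the rate and
# the revenue figure are placeholder assumptions, not numbers from the Act.

def max_penalty(annual_global_revenue_eur: float, rate: float = 0.07) -> float:
    """Upper-bound fine as an assumed share of worldwide annual revenue."""
    return annual_global_revenue_eur * rate

# A multinational with EUR 50 billion in annual revenue, at an assumed 7% cap:
print(f"EUR {max_penalty(50e9):,.0f}")  # EUR 3,500,000,000
```

Even at single-digit percentages, a revenue-based fine quickly reaches into the billions for the largest multinationals.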

Included in the high and unacceptable risk categories are issues associated with the collection, maintenance, and use (for good or evil) of biometric information. The EU has taken specific steps to establish that certain uses of AI-gathered biometric data pose an “unacceptable risk” to the private information of EU citizens and residents, and to require that other such data be treated as “high risk,” subject to additional scrutiny and oversight. Specific target areas include the use of AI tools in hiring practices as well as in “predictive policing.”

As a side note, enhanced enforcement of privacy and data security rules is not limited to governmental bodies within the EU. Most recently, the FCC has partnered with four states (Connecticut, Illinois, New York, and Pennsylvania) to step up privacy, data protection, and cybersecurity investigations. This makes sense given that these three prongs of data protection become increasingly important as more personal data is “out there.”

The EU has succeeded in creating much more than a blueprint for AI regulation going forward. Enterprises throughout the world are advised to pay attention, as compliance will be critical and non-compliance very, very costly.
