Pioneering the future: the European Union’s regulatory framework for Artificial Intelligence

As early as October 2020 (1), the European Council set forth an ambitious objective for the EU to become a “leading global player in the development of safe, reliable, and ethical artificial intelligence”.

On April 21, 2021, the European Commission unveiled its draft regulation, the AI Act, establishing a set of harmonized rules on AI, on which the Council adopted its general approach at the end of 2022.

As the legislative journey continued, a provisional agreement was reached between the European Parliament and the Council on December 9, 2023, at the end of three days of intense negotiations.

The legislation was ultimately approved on Wednesday, March 13, 2024, by a sweeping majority of Members of the European Parliament.

But what are the key contributions of this text?

Balancing technology neutrality with asymmetric regulation

The AI Act puts forward a broad definition intended to encompass all AI systems, so that the text does not become outdated amid technological progress. It defines AI as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (2).

While the text aims to be applicable to a broad spectrum of AI systems, it introduces an asymmetric scope of application, qualifying its claimed technological neutrality.

The legislation adopts a risk-based approach: risks are classified into three levels, from unacceptable to minimal, each associated with a specific legal regime.

Low-risk AI systems are only subject to light regulation (3), while “high-risk” AI applications and systems categorized as presenting “unacceptable risks” are much more heavily regulated.

High-risk AI systems under regulation

High-risk AI systems are those posing significant risks to the health, safety, or fundamental rights of individuals. The AI Act dedicates its Title III to these systems.

First, high-risk AI systems are those intended to be used as a safety component of a product, or which are themselves such a product, where that product is subject to an ex-ante third-party conformity assessment.

Additionally, Annex III sets out a limited list of high-risk AI systems, which the European Commission can update to keep pace with technological advancements (4).

These AI systems are, in principle, permitted on the European market, provided they comply with the requirements set out in Chapter 2 of Title III. These requirements cover various aspects such as risk management systems, data governance, documentation and record-keeping, transparency, user information, human oversight, robustness, and (cyber)security of systems.

These obligations are primarily directed at AI system providers. However, users, manufacturers, or other parties are also subject to obligations outlined in Title III.

Unacceptable-risk AI systems are (in principle) prohibited

Title II lists prohibited practices concerning AI systems whose use is deemed unacceptable because it contravenes the Union’s values, especially due to potential violations of fundamental rights.

For example, the following are prohibited (5):

“the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm; (…)”

“the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: (…)”

An exception exists for “real-time” remote biometric identification systems in publicly accessible spaces: although listed among the prohibited practices, their use can be authorized for law enforcement purposes, subject to strict necessity and proportionality conditions.

AI systems with specific manipulation risks are governed by a special regulatory framework

This specific regime applies to certain AI systems, particularly those intended to interact with natural persons or to generate content. It applies irrespective of the risk classification, meaning that even low-risk AI systems may be subject to this transparency regime.

These systems will be subject to specific transparency obligations (6).

For instance, these additional obligations include informing individuals that they are interacting with an AI system or are exposed to an emotion recognition system. The regulation also addresses “deep fakes” (7), for which it is mandatory, subject to limited exceptions, to disclose that the content has been generated by automated means.

Conclusion

The AI Act is a comprehensive text attempting to regulate artificial intelligence systems, aiming to mitigate the associated risks and foster a trustworthy technology ecosystem.

It’s crucial to highlight that the intent is not to impose heavy regulations on businesses or individuals developing or deploying AI systems. Rather, the goal is to limit the requirements to “the minimum necessary” to address AI risks without unduly restricting technological development or disproportionately increasing the cost of bringing AI solutions to market (8).

This is also why the text reduces regulatory burdens on SMEs and startups and establishes regulatory sandboxes, providing a controlled environment to facilitate the development, testing, and validation of AI systems (9).

The text now awaits formal adoption by the Council! The legislation will come into effect 20 days after its publication in the Official Journal and will be fully applicable 24 months later, with some provisions applying earlier and others later.

Let’s not forget, AI is rapidly evolving and still in its infancy! Its everyday applications are as plentiful as the potential risks they may pose. In this context, the EU innovates by seeking to mitigate these risks without hindering the development of artificial intelligence.

The TAoMA Team remains at your disposal for any inquiries on this subject!

Juliette Danjean
Trainee lawyer

Jean-Charles Nicollet
European Trademark & Design Attorney – Partner

(1) Extraordinary meeting of the European Council on the 1st and 2nd of October, 2020.

(2) See Article 3(1), Title I of the European Commission’s Proposed Regulation.

(3) See the single article in Title IX, which encourages providers of AI systems other than high-risk ones to draw up codes of conduct and to voluntarily apply the requirements laid down for high-risk systems in Title III.

(4) The text currently lists specific high-risk AI systems, such as those used for biometric identification and the classification of individuals, along with systems managing critical infrastructure operations (including traffic, water, gas, heating, and electricity supply).

(5) See Article 5, Title II of the Regulation’s latest iteration.

(6) See Title IV of the Regulation’s latest version.

(7) The European Parliament defines “deep fakes” as the result of AI-driven media manipulation, described as hyper-realistic alterations of reality. (Accessible at: https://multimedia.europarl.europa.eu/fr/audio/deepfake-it_EPBL2102202201_EN)

(8) Recitals of the European Commission’s Proposed Regulation (1; 1.1)

(9) See Title V of the Regulation’s latest version.