Artificial intelligence - the European regulatory approach

In its 2021 communication on the European digital strategy, the European Commission announced its plan to invest at least 1 billion euros per year in AI, and 20 billion euros over the decade, through the Member States and private investment. (1)

Before being regulated by legal principles, artificial intelligence has been the subject of numerous studies based on principles of ethics. These studies were then reflected in several reference documents: guidelines, charters, recommendations. These non-binding and evolving rules provided an initial assessment of the impact of AI on our lives.

However, these ethical principles are insufficient. The accelerating deployment of AI systems in our society makes it necessary to improve risk management and to define more clearly the responsibilities of the different parties involved in AI: providers, distributors and users. A second step in the development of AI must therefore include a normative approach.

What you need to know

The European Union is building a legal framework for artificial intelligence. It comprises a proposal for a regulation laying down harmonised rules, published in April 2021, which focuses on the risks posed by AI systems and the liability of their producers, followed by two proposals for directives published on 28 September 2022, which define the framework for liability in relation to AI systems. The regulation should be adopted by the beginning of 2024. This article covers the proposed regulation, known as the AI Act.


1. The initial development of a flexible framework, built on ethics

Although artificial intelligence is not really new, the past few years have seen a rapid acceleration in the deployment and use of AI systems across many areas of the economy and of daily life: increased automation through RPA systems, including in the medical and finance industries, automated and autonomous driving systems, connected objects, automated translation systems, chatbots, etc.

Artificial intelligence generates two types of reactions: on the one hand, the benefits of AI systems are acknowledged (speed of data analysis, of providing answers, of execution, etc.); on the other hand, artificial intelligence worries or even frightens people because of the new types of risks these systems create for the rights of individuals.

    a) Approaching AI through ethics, or soft law

Approaching AI through ethics, or soft law, made it possible to reflect on the positive aspects of AI as well as on its risks. Several working groups and think tanks have been set up in France and internationally.

Among numerous initiatives, we can list the following:

    - The National Strategy for AI: in France, the Government launched a reflection on the development of AI in 2017. In the “National Strategy for AI”, the authors note that “the construction of an ethical model for artificial intelligence is more than ever a key success factor in international competition and for the protection of fundamental rights.” (2)

    - The European Commission Ethics Guidelines for Trustworthy AI, published on 8 April 2019;

    - The OECD Recommendation on Artificial Intelligence, adopted on 22 May 2019 by the OECD Artificial Intelligence Policy Observatory;

    - The Global Partnership on AI (GPAI), launched jointly by France and Canada in June 2020 aiming at guiding a responsible development and use of AI. (3)

These reflections have identified two main focus areas:

    1. A “human” focus, i.e. the necessary preservation of human rights and of society, with the creation of a trustworthy AI protecting fundamental rights.

In the EU Commission Guidelines, the authors define artificial intelligence as trustworthy if it has the following three characteristics, which should be met throughout the system's entire lifecycle: a) it should be lawful, complying with all applicable laws and regulations; b) it should be ethical, ensuring adherence to ethical principles and values; and c) it should be robust, both from a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm.

    2. An “economic” focus, i.e. securing the market and stimulating innovation by creating a supportive economic environment to foster the research and development of AI systems in Europe, in the face of intense international competition.

However, even though ethics is necessary, it is insufficient to regulate artificial intelligence, especially when it comes to the liability of AI system producers and to addressing the damages resulting from the use of AI systems.

    b) Tentative definitions - a cross-cutting approach to AI

Even though to date there is no single definition of AI, several organisations have taken on the difficult challenge of attempting to define precisely what artificial intelligence is.

In 2018, the ACPR (the French regulator of the financial industry) published the following definition: “a set of techniques and applications making it possible to create a machine capable of imitating human intelligence autonomously.” (4)

In 2019, the OECD, in its Recommendation on Artificial Intelligence, defined an AI system as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

And, in its initial version, the proposal for a European regulation included the following definition: “artificial intelligence system (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

This shows that AI does not rely on a single technology. AI is multifaceted, which makes attempts at definition difficult. The purpose of the coming regulation is to provide a generic and technologically neutral definition of artificial intelligence, so that it remains appropriate over time.


2. The development of artificial intelligence law - the normative step


A regulation is being discussed at the European level. This regulatory framework, rooted in the reflections on ethics, lays down the rules of liability in the area of artificial intelligence. It is organised around three texts: the proposal for a regulation of 21 April 2021 concerning artificial intelligence (the “AI Act”), (5) and two proposals for directives of 28 September 2022, focused on liability.

In February 2020, the European Commission published a white paper entitled “Artificial intelligence - A European approach to excellence and trust”. This white paper pointed out the risks posed by certain AI systems, such as violations of fundamental rights or threats to personal safety, and proposed introducing a labelling system for certain AI systems.

The proposal for a regulation on artificial intelligence was developed as a continuation of this document. It follows the double approach set out in the Guidelines, i.e. the human approach and the economic approach.

    a) Classifying AI systems - a risk-based approach

The AI Act classifies AI systems based on risks. This classification evolved during the examination of the regulation by the parliamentary committees (Civil Liberties and Internal Market Committees) in May 2023.

Four categories have been identified, which can be represented as a pyramid based on the level of potential risk to fundamental rights:

   - At the bottom of the pyramid are minimum-risk AI systems;

   - On the second level are limited-risk AI systems, such as chatbots or “deepfakes”. These AI systems can be distributed without any limitation other than an assurance of the safety of these products and an obligation to disclose that content has been generated by automated means;

   - On the third level are high-risk AI systems. These include biometric identification systems and AIs used in critical infrastructures or in essential public services. This category is the subject of most of the proposed regulation;

   - And, at the top of the pyramid are prohibited AIs. These systems present an unacceptable level of risk to the rights or safety of individuals. Prohibited AI systems include systems designed to manipulate behaviour, social scoring, facial recognition in public spaces (subject to exceptions), and the exploitation of vulnerable people (including due to their social or economic situation).

The European Parliament enlarged the list of prohibited AI systems by adding the following: biometric categorisation, predictive policing, facial image databases, and emotion recognition systems used in law enforcement, border management, the workplace and education.

    b) High-risk AI systems

The regulation focuses on high-risk AI systems. These systems are not prohibited, but given the risk of violation of fundamental rights, they must be regulated.

High-risk AI systems must comply with certain requirements and undergo a conformity assessment procedure prior to their commercial distribution (ex-ante conformity control), followed by risk monitoring (ex-post risk control). The list of high-risk AI systems is established and updated by the Commission. Also considered high-risk are AI systems generating a significant risk to the health, safety or fundamental rights of individuals.

Providers must ensure that their systems meet the requirements identified in the Ethics Guidelines for Trustworthy AI, in particular appropriate human oversight, a high level of robustness and safety, and a high level of quality of the data sets used to feed the system. (6) They will also have to implement a risk management system (including risk identification, risk assessment and remedial measures) and a traceability system (automated event logging). A CE marking will be issued for systems that comply with the regulatory requirements prior to their commercial distribution. (7)

The regulation doesn’t specifically cover general-purpose artificial intelligence (GPAI), including generative AI such as ChatGPT. However, companies that incorporate these systems into their high-risk AI systems will have to comply with the obligations applicable to the commercial distribution or use of a high-risk AI system.

The European Parliament strengthened the obligations of the providers of high-risk AI systems and introduced an impact assessment procedure on fundamental rights, to be conducted by the users.

These obligations concern the providers and distributors that distribute AI systems in the EU, whether they are established in or out of the EU.

Non-compliance with these obligations by a provider is subject to an administrative fine of up to 30 million euros or 6% of its total worldwide annual turnover, whichever is higher.
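As an illustration, the fine ceiling described above amounts to a simple maximum of two quantities. The sketch below is illustrative only (the function name and sample turnover figures are hypothetical; the legal text of the AI Act governs):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative ceiling of the administrative fine discussed above:
    up to 30 million euros or 6% of total worldwide annual turnover,
    whichever is higher. Not legal advice."""
    FLAT_CAP_EUR = 30_000_000
    TURNOVER_SHARE = 0.06  # 6% of total worldwide annual turnover
    return max(FLAT_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# For a large provider, the turnover-based ceiling dominates:
print(max_fine_eur(1_000_000_000))  # 60000000.0 (6% of 1 billion euros)

# For a smaller provider, the 30 million euro flat cap applies:
print(max_fine_eur(100_000_000))  # 30000000 (6% would only be 6 million)
```

The "whichever is higher" rule means the ceiling scales with company size once turnover exceeds 500 million euros, the point at which 6% of turnover overtakes the flat cap.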

The proposed regulation also provides for the creation of a European Artificial Intelligence Board and of national authorities in charge of its enforcement.

     c) Next steps

The AI Act is still being discussed.

The initial proposal was amended during the successive presidencies of the Council of the European Union.

The definition of AI system was modified in December 2022 to take into account the latest technological developments, i.e. “a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.” (8)

The latest version of the regulation was adopted on 14 June 2023 with the vote of the amended text by the European Parliament in plenary session.

The text is currently being negotiated in trilogues (trilateral negotiations between the Council of the EU, the Commission and the Parliament). It is not expected to be adopted before the beginning of 2024. The AI Act will then become applicable two years after its adoption, to give the market time to comply with the new requirements.

* * * * * * * * * *

(1) European Commission, A European approach to artificial intelligence, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

(2) National AI strategy: https://www.entreprises.gouv.fr/fr/numerique/enjeux/la-strategie-nationale-pour-l-ia

(3) Global Partnership on AI: https://gpai.ai/ The members are: Australia, Brazil, Canada, European Union, France, Germany, India, Italy, Japan, Mexico, Netherlands, New Zealand, Poland, Republic of Korea, Singapore, Slovenia, Spain, United States

(4) ACPR, “Intelligence artificielle : enjeux pour le secteur financier” (Artificial intelligence: challenges for the financial sector), December 2018

(5) Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence Act) and amending certain Union legislative acts, COM (2021) 206 final 21 April 2021 (“AI Act”)

(6) See the 7 requirements identified in the Ethics Guidelines for Trustworthy AI, i.e. human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability

(7) See AI Act, chapter 2

(8) General orientation of the Council of the EU, art. 3, par. 1


Bénédicte DELEPORTE
Avocat

Deleporte Wentz Avocat
www.dwavocat.com

July 2023