AI Act Compliance: What High-Risk AI Providers Need to Know


Key Takeaways

The European Regulation on Artificial Intelligence (the "AI Act") was adopted on June 13, 2024. It represents a major step forward in the regulation of AI technologies within the EU. Providers of AI systems are subject to specific obligations regarding compliance, governance, safety, and transparency in the development and marketing of AI systems. This first article provides a summary of the obligations applicable to providers of high-risk AI systems.


The European Regulation on Artificial Intelligence (the "AI Act"), adopted on June 13, 2024, marks a decisive advancement in the regulatory framework governing AI technologies within the EU. (1) The regulation seeks to balance technological innovation with the protection of fundamental rights and legal certainty for economic operators.

The regulation is based on a risk-based, tiered approach, distinguishing between prohibited AI systems, high-risk AI systems, limited-risk systems, and minimal-risk systems. General-purpose AI models (or "GPAI") fall into a separate category. (For an overview of the AI Act, see our previous article: Artificial Intelligence - the European Regulatory Approach)

Providers of AI systems are subject to specific obligations related to compliance, governance, safety, and transparency regarding the development and placement on the market of AI systems. As a result, providers must begin preparing for compliance. Some obligations apply to all AI system providers, regardless of the system's risk level, while others are specific to providers of high-risk AI systems and to providers of general-purpose AI models.

This first article provides a summary of the obligations applicable to providers of high-risk AI systems. A second article will be dedicated to the obligations applicable to providers of general-purpose AI models.


1. Obligations Applicable to All AI System Providers

An AI provider is defined as "a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge" (Art. 3 AI Act).

Regardless of the system's risk level, all providers must comply with a common set of baseline requirements, organized around three main pillars: training, transparency, and compliance with the prohibitions laid down by the AI Act.

    1.1 Training the Parties Involved in AI

Providers must ensure that individuals involved in the design, deployment, and operation of AI systems have a sufficient level of AI literacy. This requirement must be proportionate to the context of use, the target audience, and the technical competencies of the relevant teams (Art. 4).

It is therefore recommended that providers formalize a training plan that includes modules covering AI use cases, fundamental rights, risks, and system limitations.

    1.2 Ensuring Transparency in Human-Machine Interactions

Users must be informed that they are interacting with an AI system, unless this is obvious to a reasonably well-informed and reasonably observant person (Art. 50.1). This information obligation does not apply to AI systems that are authorized by law for the prevention, detection, or investigation of criminal offenses.

When AI systems generate synthetic content (text, audio, image, or video), such content must be marked in a machine-readable format and be identifiable as having been artificially generated or manipulated (Art. 50.2).

Accordingly, providers should develop a labeling policy and clear user documentation, particularly for systems embedded in interactive products (chatbots, generative AI, etc.).
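
By way of illustration only, part of such a labeling policy could be implemented as machine-readable provenance metadata attached to each piece of generated content. The minimal Python sketch below writes a hypothetical JSON "sidecar" label next to a generated file; the field names, values, and the label_synthetic_content helper are assumptions made for this example and are not prescribed by the AI Act or by any technical standard.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def label_synthetic_content(content_path: str, system_name: str, system_version: str) -> Path:
    """Write a machine-readable "sidecar" label next to AI-generated content.

    Illustrative only: the field names below are hypothetical, not terms
    defined by the AI Act or by any marking standard.
    """
    label = {
        "ai_generated": True,                        # flags the content as artificially generated
        "generating_system": system_name,            # AI system that produced the content
        "system_version": system_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was generated by an AI system.",
    }
    sidecar = Path(f"{content_path}.ai-label.json")
    sidecar.write_text(json.dumps(label, indent=2), encoding="utf-8")
    return sidecar

# Example (hypothetical file names):
# label_synthetic_content("output/campaign_visual.png", "ExampleGen", "2.1.0")
```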

    1.3 Complying with Prohibitions Set Forth by the AI Act

Article 5 of the regulation sets out a list of prohibited AI practices, regardless of the risk level of the system involved. Among the prohibited activities relating to placing on the market, putting into service, or using AI systems are:

     - The use of subliminal techniques to distort behavior;
     - The exploitation of vulnerabilities linked to age (minors, elderly persons), disability, or social or economic circumstances;
     - Social scoring systems;
     - Biometric categorization based on sensitive characteristics;
     - Profiling aimed at assessing or predicting a natural person's likelihood of committing a criminal offense;
     - Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).

It is recommended that providers conduct compliance audits during the early stages of developing any new AI system to eliminate the risk of engaging in prohibited practices under the regulation.


2. Obligations of Providers of High-Risk AI Systems

An AI system is considered high-risk when both of the following conditions are met: (i) the AI system is intended to be used as a safety component of a product covered by the EU harmonisation legislation listed in Annex I of the regulation, or the AI system itself constitutes such a product; and (ii) that product, or the AI system itself as a product, is required to undergo a third-party conformity assessment before being placed on the market or put into service (Art. 6.1).

In addition, the AI systems listed in Annex III of the regulation are also considered high-risk (Art. 6.2). They fall within areas such as biometrics (where permitted), critical infrastructure, education and vocational training (e.g., assessment of learning outcomes), employment (e.g., recruitment and candidate selection), and the administration of justice, among others.

The regulation imposes strict requirements on the placement on the market or putting into service of such systems, along with rigorous post-market monitoring obligations.

The main obligations that apply to providers of high-risk AI systems are outlined below.

    2.1 A Risk Management System Throughout the AI System Lifecycle

Providers must establish, implement, document, and maintain an up-to-date risk management system (Art. 9). The risk management procedure is an ongoing, iterative process conducted throughout the entire lifecycle of the AI system and must be regularly updated. Risk management covers the following stages:

     - Identifying and evaluating reasonably foreseeable risks to health, safety, or fundamental rights when the AI system is used in accordance with its intended purpose;
     - Evaluating risks in the event of reasonably foreseeable misuse of the AI system;
     - Assessing risks that may emerge based on the analysis of data collected after the AI system has been placed on the market;
     - Implementing appropriate and targeted risk management measures.

It is therefore recommended to establish an iterative risk management procedure for each new high-risk AI system, involving technical, legal, and business teams from the earliest design stages of the system.
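
As a purely illustrative sketch of how such a procedure might be supported in practice, the Python snippet below models a single risk register entry that is re-assessed across the system's lifecycle. The RiskEntry structure, its fields, and the 1-to-5 scoring scale are assumptions made for the example; the AI Act does not prescribe any particular format or methodology.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk, tracked and re-assessed across the AI system lifecycle (hypothetical format)."""
    description: str            # e.g. "biased scoring of candidates from under-represented groups"
    affected_interests: str     # health, safety, or fundamental rights at stake
    source: str                 # "intended use", "foreseeable misuse", or "post-market data"
    severity: int               # assumed 1 (low) to 5 (high) scale
    likelihood: int             # assumed 1 (low) to 5 (high) scale
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def residual_score(self) -> int:
        # Simple severity x likelihood score; a real methodology would be more elaborate.
        return self.severity * self.likelihood

# Example entry, re-evaluated whenever post-market monitoring surfaces new information
risk = RiskEntry(
    description="Recruitment model under-ranks applicants with atypical career paths",
    affected_interests="non-discrimination (fundamental rights)",
    source="intended use",
    severity=4,
    likelihood=3,
    mitigations=["re-balance training data", "human review of rejected applications"],
)
print(risk.residual_score())
```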

    2.2 Data Quality and Data Governance


Providers are responsible for the quality of the datasets used for the training, validation, and testing of AI systems (Art. 10). Datasets must be subject to data governance and management practices appropriate to the system's intended purpose, particularly regarding:

     - Relevant design choices;
     - Traceability of data sources (for personal data, the original purpose of collection must be identified);
     - Relevant data processing operations for data preparation (annotation, labeling, cleaning, updating, etc.);
     - Detection of potential biases that could impact health, safety, or fundamental rights, or lead to discrimination, as well as the prevention or mitigation of such biases.

It is advisable to develop a specific quality assurance plan to ensure compliant data governance and to minimize the risks of unlawful data collection and algorithmic discrimination.
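
As an illustration of what such a quality assurance plan could automate, the sketch below runs two simple checks on a hypothetical tabular training set: completeness of a data-provenance column (traceability) and a disparity test on outcomes across a protected attribute (bias detection). The column names and the idea of flagging ratios below 0.8 are assumptions chosen for the example, not thresholds set by the AI Act.

```python
import pandas as pd

def check_provenance(df: pd.DataFrame, source_col: str = "data_source") -> float:
    """Share of records whose data source is documented (traceability check)."""
    return float(df[source_col].notna().mean())

def disparity_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest positive-outcome rate across groups.

    A low ratio (e.g. below 0.8, an assumed threshold) may signal a bias
    that should be investigated and mitigated.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.min() / rates.max())

# Example with a small hypothetical recruitment dataset
df = pd.DataFrame({
    "data_source": ["ATS export", "ATS export", None, "job board"],
    "gender": ["F", "M", "F", "M"],
    "shortlisted": [1, 1, 0, 1],
})
print(check_provenance(df))                          # 0.75 -> one record lacks a documented source
print(disparity_ratio(df, "gender", "shortlisted"))  # 0.5  -> flagged for review
```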

    2.3 Technical Documentation and Operating Instructions

Before placing an AI system on the market or putting it into service, providers must compile comprehensive technical documentation (Art. 11).

The technical documentation must demonstrate that the AI system complies with regulatory requirements and must include all necessary information enabling national competent authorities and other relevant bodies to assess its conformity.

At a minimum, the technical documentation must include the information listed in Annex IV of the regulation, notably:

     - A general description of the high-risk AI system: intended purpose, provider's name, system version, how the AI system interacts with hardware or software, and the forms in which it is placed on the market or put into service;
     - Operating instructions for deployers (Art. 13);
     - A detailed description of the AI system’s components and development process (technical specifications, overall logic of the AI system and algorithms, main design choices, system architecture, validation and testing procedures used, cybersecurity measures implemented).

Small and medium-sized enterprises (SMEs) and startups are permitted to provide a simplified version of this technical documentation.
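
By way of illustration, a provider could keep the technical documentation file set under version control and check its completeness automatically before any release. In the sketch below, the file names and the REQUIRED_DOCS labels merely paraphrase the categories mentioned above; they are not the official Annex IV wording.

```python
from pathlib import Path

# Hypothetical completeness check for a technical documentation file set.
REQUIRED_DOCS = {
    "general_description.md": "General description of the high-risk AI system",
    "instructions_for_deployers.md": "Operating instructions for deployers (Art. 13)",
    "development_process.md": "Components, design choices, architecture, validation and testing",
    "cybersecurity_measures.md": "Cybersecurity measures implemented",
}

def missing_documentation(doc_dir: str) -> list[str]:
    """Return the human-readable labels of documents that are not yet present."""
    folder = Path(doc_dir)
    return [label for name, label in REQUIRED_DOCS.items() if not (folder / name).exists()]

if __name__ == "__main__":
    for item in missing_documentation("technical_documentation"):
        print(f"Missing: {item}")
```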


3. Monitoring Compliance of High-Risk AI Systems

The compliance of high-risk AI systems, and its confirmation through CE marking, is one of the key means to build user trust in this technology and to ensure that providers uphold a certain standard of ethical responsibility.

    3.1 Conformity Assessment and CE Marking

Providers must submit high-risk AI systems to a conformity assessment procedure prior to placing them on the market or putting them into service (Art. 43).

The conformity of the AI system is assessed against the requirements set out in Articles 8 to 15 of the regulation, covering in particular risk management, data and data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity.

The conformity assessment procedure varies depending on the type of high-risk AI system concerned. Conformity is assessed either internally, by applying the procedure based on internal control defined in Annex VI of the regulation, or by a notified body in accordance with the procedure set out in Annex VII.

In case of non-conformity of a high-risk AI system, the provider must immediately take all necessary corrective actions to bring the system back into conformity (Art. 20). These measures may include withdrawal, deactivation, or recall of the AI system. The provider must inform the relevant economic operators (deployers, importers, etc.).

A CE marking, indicating the system’s compliance with the regulation, must be affixed by the provider to high-risk AI systems (Arts. 16 and 48).

    3.2 Registration of High-Risk AI Systems

Both the provider and the high-risk AI systems themselves (except for those used in critical infrastructure) must be registered in the EU database before the system is placed on the market or put into service (Art. 49).

    3.3 Cooperation with Authorities and Post-Market Obligations

The provider is required to retain the following documents for a period of 10 years from the date the AI system is placed on the market or put into service (Art. 18):

     - The technical documentation of the system;
     - Documentation related to the quality management system;
     - The EU declaration of conformity;
     - Where applicable, documentation related to modifications approved by notified bodies; and
     - Where applicable, decisions and other documents issued by notified bodies.

These documents, along with the event logs, must be made available to the competent authorities (Art. 21).
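
To illustrate the record-keeping side of this obligation, the sketch below appends one machine-readable JSON line per system event to a log file, so that event logs can later be produced to the competent authorities on request. The event types and field names are hypothetical choices made for the example; the AI Act does not impose a specific log format.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative event logger: one JSON line per event, appended to a local file.
logger = logging.getLogger("high_risk_ai_events")
handler = logging.FileHandler("ai_system_events.log", encoding="utf-8")
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(event_type: str, details: dict) -> None:
    """Record a timestamped, machine-readable event (field names are hypothetical)."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "inference", "human_override", "model_update"
        "details": details,
    }))

# Example: record an inference and a human override of the system's output
log_event("inference", {"input_id": "cand-00123", "score": 0.82})
log_event("human_override", {"input_id": "cand-00123", "reason": "reviewer disagreed with ranking"})
```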

As a reminder, providers located outside the European Union must appoint an authorized representative established within the Union (Art. 22). The authorized representative will act as the point of contact and representative of the provider before the competent authorities.


In Summary

Obligations of Providers of High-Risk AI Systems


General Obligations
     - Training of individuals involved in AI
     - Transparency
     - Compliance with prohibitions imposed by the AI Act


Obligations Specific to High-Risk AI Systems
     - Risk management system
     - Quality of data used
     - Technical documentation and user instructions
     - Conformity assessment mechanism and CE marking
     - Logging and human oversight
     - Cooperation with authorities

If you are a provider (developer) of high-risk AI systems, our firm is available to assist you in achieving compliance with the AI Act.


* * * * * * * * * * *


(1) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act)


Bénédicte DELEPORTE
Avocat

Deleporte Wentz Avocat
www.dwavocat.com

April 2025