
AI Act Compliance: What GPAI Providers Need to Know
Key Takeaways
The European Regulation on Artificial Intelligence (the "AI Act") was adopted on June 13, 2024. It represents a major step forward in the regulation of AI technologies within the EU. Providers of AI systems are subject to specific obligations regarding compliance, governance, safety, and transparency in the development and marketing of AI systems. This second article provides a summary of the obligations applicable to providers of general-purpose AI models (GPAI).
The European Regulation on Artificial Intelligence (the "AI Act"), adopted on June 13, 2024, marks a decisive advancement in the regulatory framework governing AI technologies within the EU. (1) The regulation seeks to balance technological innovation with the protection of fundamental rights and legal certainty for economic operators.
The regulation is based on a risk-based, tiered approach, distinguishing between prohibited AI systems, high-risk AI systems, limited-risk systems, and minimal-risk systems. General-purpose AI models (or "GPAI") fall into a separate category. (For an overview of the AI Act, see our previous article: Artificial Intelligence - the European Regulatory Approach)
Providers of AI systems are subject to specific obligations related to compliance, governance, safety, and transparency regarding the development and placement on the market of AI systems. As a result, providers must begin preparing for compliance.
In this article, we focus on the obligations applicable to providers of general-purpose AI models (GPAI). It follows our earlier article on the obligations of providers of high-risk AI systems.
1. Obligations Applicable to All AI System Providers
An AI provider is defined as "a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge" (Art. 3 AI Act).
Regardless of the system's risk level, all providers must comply with a common set of baseline requirements, organized around three main pillars: training, transparency, and adherence to prohibited AI practices.
1.1 Training the Parties Involved in AI
Providers must ensure that individuals involved in the design, deployment, and operation of AI systems have a sufficient level of expertise in AI. This requirement must be proportionate to the context of use, the target audience, and the technical competencies of the relevant teams (Art. 4).
It is therefore recommended that providers formalize a training plan that includes modules covering AI use cases, fundamental rights, risks, and system limitations.
1.2 Ensuring Transparency in Human-Machine Interactions
Users must be informed that they are interacting with an AI system, unless this is obvious to a reasonably well-informed and reasonably observant person (Art. 50.1). This information obligation does not apply to AI systems that are authorized by law for the prevention, detection, or investigation of criminal offenses.
When AI systems generate synthetic content (text, audio, image, or video), such content must be marked in a machine-readable format and be clearly identifiable as having been artificially generated or manipulated (Art. 50.2).
Accordingly, providers should develop a labeling policy and clear user documentation, particularly for systems embedded in interactive products (chatbots, generative AI, etc.).
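By way of illustration, the machine-readable marking required by Art. 50.2 could take the form of structured provenance metadata attached to each generated output. The following Python sketch is purely hypothetical (the regulation does not prescribe any particular format, and the field names and model name below are our own assumptions), but it shows the kind of explicit, machine-readable flag a labeling policy might mandate:

```python
# A minimal, hypothetical sketch of attaching machine-readable provenance
# metadata to AI-generated content, in the spirit of Art. 50.2 of the AI Act.
# The schema and field names are illustrative assumptions, not a prescribed format.
import json
from datetime import datetime, timezone

def label_synthetic_content(content: str, model_name: str) -> dict:
    """Wrap generated content with machine-readable provenance metadata."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,            # explicit machine-readable flag
            "generator": model_name,         # which model produced the output
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_synthetic_content("Example generated paragraph.", "example-model-v1")
print(json.dumps(record, indent=2))
```

In practice, providers may also look to emerging industry standards for content provenance rather than ad hoc schemas.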
1.3 Complying with Prohibitions Set Forth by the AI Act
Article 5 of the regulation sets out a list of prohibited AI practices, regardless of the risk level of the system involved. Among the prohibited activities relating to placing on the market, putting into service, or using AI systems are:
- The use of subliminal techniques to distort behavior;
- The exploitation of psychological vulnerabilities linked to age (minors, elderly persons), disabilities, or social or economic circumstances;
- Social scoring systems;
- Biometric categorization based on sensitive characteristics;
- Profiling aimed at assessing or predicting a natural person's likelihood of committing a criminal offense;
- Biometric identification in public spaces for law enforcement purposes (subject to certain exceptions).
It is recommended that providers conduct compliance audits during the early stages of developing any new AI system to eliminate the risk of engaging in prohibited practices under the regulation.
2. Obligations for Providers of General-Purpose AI Models (GPAI)
A General-Purpose AI model (GPAI) is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market” (Art. 3).
General-purpose AI models are versatile AI systems that can be integrated into various downstream systems or applications. For instance, generative AI models are considered GPAI because they can create a variety of outputs, such as text, images, audio or video content, translations, or software code, which can then be used in a wide range of applications.
GPAI models are typically trained on large datasets collected from the internet. Such data collection may raise legal issues, particularly regarding data protected by copyright or involving personal data.
The AI Act distinguishes between two categories of GPAI models: standard GPAI models, subject to a common set of obligations, and GPAI models with systemic risk, subject to additional, more stringent requirements.
2.1 Common Obligations for All Providers of General-Purpose AI Models
Providers of GPAI models must comply with specific transparency and traceability requirements, namely (Art. 53):
- Provide technical documentation of the model, including its training and testing processes, and the results of its evaluation. This documentation must at a minimum include the information listed in Annex XI of the regulation (such as the tasks the model is intended to perform, the types of AI systems it may be integrated into, applicable acceptable use policies, input and output formats, and the model’s applicable license);
- Provide this documentation to the European AI Office and national competent authorities upon request;
- Provide information and documentation to AI system providers intending to integrate the GPAI model into their systems. This documentation must at a minimum include the information listed in Annex XII;
- Implement a policy to ensure compliance with copyright and related rights regulations;
- Make a public summary available describing the content used to train the GPAI model;
- Cooperate with the European Commission and national competent authorities.
Accordingly, it is recommended that GPAI providers document the entire model development cycle, particularly the sources of data, data cleansing processes, copyright compliance, and key algorithmic design choices.
2.2 Additional Obligations for GPAI Models with Systemic Risk
General-purpose AI models with systemic risk are those whose potential impact is considered particularly significant at the European level due to their computational power, scale of deployment, or capacity to generate high-risk uses (Art. 51).
The criteria for determining whether a GPAI model presents systemic risk are outlined in Annex XIII of the regulation. These criteria include, notably:
- The number of parameters of the model;
- The quality or size of the dataset used;
- The computing power required for model training, measured, for example, in floating-point operations (FLOPs);
- The model’s input and output modalities, such as the use of large language models (LLMs);
- The model’s learning capability, degree of autonomy, scalability, and access to external tools;
- The model’s impact on the European market, for instance if it has been made available to at least 10,000 registered business users in Europe or based on the number of registered end-users.
This classification is decided by the European Commission, either following a scientific panel’s alert or automatically based on the model’s characteristics.
A list of GPAI models with systemic risk will be published by the Commission (Art. 52).
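One of the quantitative criteria above can be made concrete: the AI Act presumes a GPAI model to have high-impact capabilities, and thus systemic risk, when its cumulative training compute exceeds 10^25 floating-point operations (Art. 51.2). The sketch below compares an estimate against that threshold; the "6 × parameters × training tokens" approximation is a common rule of thumb from the machine-learning scaling literature, not a method prescribed by the regulation, and the example model sizes are hypothetical:

```python
# Rough estimate of cumulative training compute, compared against the
# AI Act's 10^25 FLOP presumption threshold for systemic risk (Art. 51.2).
# The 6*N*D approximation (6 FLOPs per parameter per training token) is a
# widely used heuristic, not part of the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51.2 presumption threshold

def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate cumulative training compute as 6 * N * D."""
    return 6 * num_parameters * num_training_tokens

def presumed_systemic_risk(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the compute estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimate_training_flops(num_parameters, num_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))
```

Providers whose models approach this order of magnitude should anticipate the classification procedure and the additional obligations described below, even before any formal designation by the Commission.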
Providers of GPAI models with systemic risk must, in particular (Art. 55):
- Conduct evaluations of their models to identify and mitigate systemic risks;
- Document and report to the European AI Office, and where applicable to national competent authorities, any serious incidents related to the model’s use and any corrective measures taken;
- Ensure an appropriate level of cybersecurity protection for their models.
2.3 Exclusions and Special Cases
Certain types of general-purpose AI models are exempt from all or part of the obligations outlined in Articles 51 et seq.:
- Open-source GPAI models (unless classified as GPAI with systemic risk) are exempt from the documentation requirements set out in Article 53;
- Models used exclusively for research, development, or prototyping purposes before their placement on the market;
- Models placed on the market or put into service exclusively for military, defense, or national security purposes.
It is also important to note that providers of GPAI models established outside the European Union must appoint an authorized representative within the Union (Art. 54).
The authorized representative will serve as the point of contact and representative before the national competent authorities.
The obligations applicable to general-purpose AI models will take effect on August 2, 2025.
If you are a provider (developer) or deployer of AI systems, our firm is available to assist you in achieving compliance with the AI Act.
(1) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
Bénédicte DELEPORTE
Avocat
Deleporte Wentz Avocat
www.dwavocat.com
April 2025