GDPR vs AI: The challenges of protecting personal data in the implementation of AIS
At a time when the first provisions of the artificial intelligence regulation are coming into force, the compliance of AI systems is becoming an essential issue.
Artificial intelligence (AI) systems are defined by Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 as machine-based systems designed to operate with varying levels of autonomy that infer, from the input they receive, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. The regulation distinguishes between artificial intelligence systems (AIS) and general-purpose AI models.
AIS are AI applications designed for specific tasks or areas, such as medical diagnostic support systems. In contrast, general-purpose AI models are versatile systems, capable of being used in a variety of contexts and for a variety of applications. For example, a natural language processing model can be adapted to perform machine translation.
Artificial intelligence raises complex issues, especially in the area of personal data protection. AI systems are trained and operated on large, sometimes massive, quantities of data, which justifies a rigorous framework governing their use and the underlying processing, while ensuring respect for the fundamental rights of individuals, including the right to privacy.
The challenges are multiple: how can we ensure that algorithms do not compromise the privacy of individuals? How can we ensure that the data analysis carried out by AI systems remains ethical and consistent with the principles of transparency, fairness and accountability?
To face these challenges, which differ between the design phase and the deployment phase, data protection authorities, such as the CNIL in France and the EDPB at the European level, must constantly reassess and adjust their doctrines to inform practitioners about the compliance procedures to be carried out, integrating technological developments as they go. Here we provide an overview of recent developments in this doctrinal and regulatory framework relating to AI and the GDPR.

GDPR certification of processors and AI systems
In order to establish a rigorous and homogeneous framework, the CNIL has initiated a public consultation on a draft evaluation framework for the GDPR certification of processors[1]. This approach aims to define an ambitious level of compliance while keeping it accessible, in particular to SMEs.
Setting up an AI system requires the intervention of multiple actors: providers frequently rely on external service providers, which qualify as processors within the meaning of the GDPR, and it is the controller's responsibility to ensure that these providers present sufficient guarantees.
In accordance with Article 28 of the GDPR, data controllers are required to ensure that their processors provide sufficient guarantees in terms of data protection. Certification would make it possible to establish official recognition of good practices and to offer data controllers a reliable and transparent selection criterion.
The framework defined by the CNIL consists of 90 criteria divided into several sections:
⤷ Contractualization, governing the relationship between the data controller and the processor.
⤷ Processing preparation and security measures, concerning the establishment of a compliant processing environment.
⤷ Implementation of the processing, covering the effective execution of processing operations.
⤷ End of processing, defining the procedures for ceasing activity and deleting data.
A specific section is devoted to follow-up actions to be carried out over the period of validity of the certification, which is fixed at three years and is renewable.
The implementation of processor certification meets several fundamental objectives. First of all, it strengthens compliance with the GDPR by framing the obligations of processors, in particular with regard to security, confidentiality and cooperation with data controllers. In addition, this certification promotes the harmonization of requirements, thus avoiding heterogeneous conformity assessments.
It also facilitates the choice of processors for data controllers by offering a recognized reference label, making it possible to avoid time-consuming and expensive individual audits.
Certification also helps highlight the best practices of processors, giving them a significant competitive advantage. It strengthens their credibility on the market and improves the trust of partners and end users by guaranteeing reliable and secure processing of personal data.
Finally, it helps harmonize and professionalize practices by establishing a clear reference framework for processors, thus facilitating their compliance. Certification also encourages the continuous improvement of processes through periodic renewal, ensuring that companies keep pace with technological and regulatory developments.
It is important to note that processors, as part of their collaboration with data controllers, may be required to handle data involved in international transfers. These transfers raise specific compliance and security challenges, requiring particular attention to ensure the protection of personal data at each stage.
[1] GDPR certification of processors: the CNIL consults on a draft evaluation framework.

Personal data transfers between the European Union and the United States: ensuring the compliance of data transfers feeding AI systems
AI systems require vast data sets to learn and make accurate predictions, which makes data transfers an attractive way to feed them. This data comes from a variety of sources, such as internal databases, sensors or third-party services, and must be transferred securely.
The challenge of these transfers is crucial; the confidentiality, integrity and availability of data must be guaranteed during these transfers, in order to avoid any risk of leaks or malicious manipulation.
The GDPR establishes the legal framework for transfers of personal data to third countries or international organizations in its Chapter V. Such a transfer is characterized by the transmission or provision of personal data by a data controller or a processor established in the European Union to a recipient located outside the European Economic Area (EEA).
Transfers of personal data must absolutely guarantee an adequate level of protection of the rights and freedoms of the persons concerned. As such, several regulatory mechanisms exist:
⤷ An adequacy decision of the European Commission (Article 45 GDPR): when a third country provides a level of protection essentially equivalent to that of the EU, the European Commission may adopt a decision allowing data to be transferred without additional measures.
⤷ Standard contractual clauses (SCCs) (Article 46 GDPR): in the absence of an adequacy decision, the parties may frame the transfer with standard clauses adopted by the European Commission or by a national supervisory authority.
⤷ Other appropriate safeguards, such as codes of conduct or certification mechanisms (Article 46 GDPR), or binding corporate rules (Article 47 GDPR).
⤷ Exceptional derogations (Article 49 GDPR), applicable under strict conditions when the other mechanisms cannot be put in place.
On 30 January 2025, the CNIL published the final version of its practical guide on data transfer impact assessments (AITD, after the French acronym) in order to best support exporters of personal data. Exporters, whether they are data controllers or processors, are only required to carry out an AITD when their transfer is based on Article 46 of the GDPR. The aim of the AITD is to assess whether the selected importer offers the required guarantees of protection for the transferred personal data.
This guide offers a methodology detailing the preliminary steps and the essential elements to consider when conducting an AITD, based on the recommendations of the EDPB. However, it does not constitute an evaluation of the laws of third countries, and its use remains optional.
Section 2 of the guide specifies the steps prior to completing an AITD. First, the exporter must verify that a transfer of personal data actually takes place and determine whether it requires an AITD. Next, it must identify the role of each entity involved in the transfer (data controller, joint controller or processor), a classification that defines the distribution of responsibilities and the specific obligations of each party. The exporter must also define the scope of the transfer, including any onward transfers, and ensure that the transfer complies with the principles of the GDPR.
If carrying out an AITD is necessary, section 3 of the guide then describes the six essential steps to follow:
⤷ Identify the data transfer;
⤷ Determine the transfer tool used;
⤷ Evaluate the legislation and practices in force in the country of destination;
⤷ Identify and adopt additional measures;
⤷ Implement these measures;
⤷ Regularly reassess the level of protection in order to anticipate possible changes.
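As a rough illustration, these six steps can be represented as a simple checklist in code. This is only a sketch of the methodology described above: the step wording follows the list, while the function and variable names are ours, not the CNIL's.

```python
# The six AITD steps from the CNIL guide, modelled as an ordered checklist.
# Names and structure are illustrative, not an official artefact.
TIA_STEPS = [
    "Identify the data transfer",
    "Determine the transfer tool used",
    "Evaluate the legislation and practices in force in the country of destination",
    "Identify and adopt additional measures",
    "Implement these measures",
    "Regularly reassess the level of protection",
]

def next_tia_step(completed):
    """Return the first step not yet completed, or None if the AITD is done.

    Steps are ordered: a later step is only reachable once all earlier
    ones are marked complete, mirroring the sequential methodology.
    """
    for step in TIA_STEPS:
        if step not in completed:
            return step
    return None

# Example: an exporter that has identified the transfer and its tool
# still has to evaluate the destination country's legislation.
done = TIA_STEPS[:2]
pending = next_tia_step(done)
```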
This guide is thus a methodological tool intended to support exporters at each stage of the process, while giving them the opportunity to adopt alternative approaches adapted to their specific context.
Many data transfers take place between the European Union and the United States, yet the establishment of a legal regime guaranteeing compliance with European regulations is far from settled. The transfer of personal data to the United States has been the subject of significant litigation before the Court of Justice of the European Union, in particular the Schrems I and Schrems II judgments. The future of the Data Privacy Framework remains uncertain: the recent vacancy of three seats on the Privacy and Civil Liberties Oversight Board (PCLOB), the body that oversees the legality of the Data Privacy Framework, could weaken the control framework in place.
In addition, on January 20, Donald Trump repealed the executive order on artificial intelligence safety signed by Joe Biden. The aim of this order was to establish strict guidelines for AI, with an emphasis on the protection of privacy. It required AI technologies to respect fundamental privacy principles by guaranteeing the security of the sensitive data collected and its use in accordance with current legislation.
It also required algorithm developers to conduct thorough assessments to identify biases, vulnerabilities and flaws in their systems, and to report the results of these tests to government authorities.
Personal data transfers are not only a question of legal compliance; they are also a major economic issue. Major digital platforms, like Facebook, generate substantial advertising revenue through the exploitation of their users' personal data (around 32 euros per European user per year, according to some estimates).
In this context, it is essential to ensure compliance with the requirements of the GDPR, especially in exchanges between the European Union and the United States. These transfers, in particular those feeding artificial intelligence systems, raise not only complex legal issues but also data-security considerations, requiring particular attention to preserve the confidentiality and the rights of the individuals concerned.

Data anonymization: an update of recommendations needed with the development of AI systems
Pseudonymization and anonymization represent two distinct and essential approaches in the protection of personal data.
Pseudonymization consists of replacing directly identifying information with artificial identifiers or aliases, making it more difficult to identify individuals, although identification remains possible with additional information. It should be noted that pseudonymized data continues to be personal data and therefore remains subject to the provisions of the GDPR.
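As a minimal illustration, here is a Python sketch of pseudonymization (the records and field names are invented for the example): direct identifiers are replaced by random tokens, and the separately kept mapping table is precisely what makes re-identification possible, and thus keeps the data "personal" under the GDPR.

```python
import secrets

def pseudonymise(records, key_field, mapping=None):
    """Replace the value of `key_field` in each record with a random token.

    The identity-to-token mapping is returned separately: it must be
    stored apart from the data, under access control, since holding it
    allows re-identification.
    """
    mapping = {} if mapping is None else mapping
    out = []
    for rec in records:
        identity = rec[key_field]
        if identity not in mapping:
            mapping[identity] = secrets.token_hex(8)
        pseudo = dict(rec)
        pseudo[key_field] = mapping[identity]
        out.append(pseudo)
    return out, mapping

# Invented example record: the clinical data stays intact, the name
# becomes a token, and `lookup` allows authorised reversal.
patients = [{"name": "Alice Martin", "diagnosis": "asthma"}]
pseudo_records, lookup = pseudonymise(patients, "name")
```

Because `lookup` exists somewhere, the tokenized records are pseudonymized, not anonymized.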
On 17 January 2025, the EDPB adopted guidelines on pseudonymisation and suggested strengthening collaboration with competition authorities in order to better articulate competition law and the protection of personal data.
According to the guidelines, pseudonymization is considered a relevant measure to mitigate the risks associated with data processing. It may also support the use of legitimate interest as a legal basis for processing, in accordance with Article 6(1)(f) of the GDPR, subject to compliance with the other regulatory requirements.
The guidelines also offer an analysis of the technical measures and guarantees needed to ensure confidentiality and prevent the unauthorized identification of individuals when using pseudonymization. These guidelines are open for public consultation until 28 February 2025, allowing stakeholders to submit their contributions to enrich the final document.
Conversely, anonymization aims to transform data irreversibly, so that it becomes impossible to establish a link between the data and the people concerned, even by cross-checking this information with other sources. Once anonymized, the data is no longer considered personal data and thus escapes the application of the GDPR.
However, with technological advances, especially in the fields of artificial intelligence and big data analysis, re-identification techniques have become considerably refined. Data that was once considered anonymous can now be cross-referenced with other data sets to identify individuals.
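A toy example of such a linkage attack, using invented data: a health dataset stripped of names can be re-identified by joining it with a public register on quasi-identifiers such as postcode and year of birth.

```python
# Hypothetical "anonymised" dataset: names removed, but quasi-identifiers kept.
anonymised_health = [
    {"postcode": "75011", "birth_year": 1984, "diagnosis": "diabetes"},
    {"postcode": "69002", "birth_year": 1990, "diagnosis": "asthma"},
]

# Hypothetical public register (e.g. an electoral roll) with the same
# quasi-identifiers alongside names.
public_register = [
    {"name": "B. Durand", "postcode": "75011", "birth_year": 1984},
]

def link(records, register):
    """Join on (postcode, birth_year); a unique match re-identifies a person."""
    index = {(p["postcode"], p["birth_year"]): p["name"] for p in register}
    return [
        {**r, "name": index[(r["postcode"], r["birth_year"])]}
        for r in records
        if (r["postcode"], r["birth_year"]) in index
    ]

reidentified = link(anonymised_health, public_register)
# reidentified[0] now carries both the diagnosis and the name "B. Durand":
# removing direct identifiers was not enough to anonymize the data.
```

This is why robust anonymization relies on generalization, suppression or noise over quasi-identifiers, not merely on deleting the name column.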
This evolution raises crucial questions about the reliability and effectiveness of anonymization methods in the face of the increasing capabilities of data processing tools. At the same time, the proliferation of connected objects, mobile applications and online services has led to the collection of new forms of data, such as geolocation information, consumer habits or even biometric data. These new categories of data present considerable protection challenges, requiring increased vigilance.
In this context, it is necessary to harmonize anonymization practices. Indeed, the diversity of methods used by organizations leads to disparate levels of protection, which can compromise the overall effectiveness of security measures. The CNIL has already recognized the need to update its guidelines, in the light of technological advances that make certain techniques obsolete or less effective.
During its plenary session on 30 January 2025, the CNIL raised this issue, stressing the importance of adapting anonymization practices to the challenges posed by artificial intelligence and modern data processing methods. Although no specific date has yet been announced for the revision of these recommendations, new guidelines are expected in due course to support organizations in implementing effective anonymization practices that comply with current data protection requirements.
Recently, the EDPB issued an important opinion on AI models, highlighting that such models cannot automatically be considered anonymous. It recommends a careful, case-by-case evaluation, taking into account the specificities of the model, its dissemination context and possible extraction techniques that could make personal data identifiable. This approach highlights the importance of a rigorous analysis of the risks associated with anonymization, in an environment where re-identification techniques are becoming ever more sophisticated.
In this opinion, the EDPB underlined the need to guarantee data protection by design, from the earliest stages of developing these technologies. The opinion also recalls that relying on legitimate interest as a legal basis requires a three-step test.
This involves first identifying the legitimate interest pursued, then assessing the necessity of the processing, before balancing it against the rights and freedoms of the persons concerned. This approach reinforces the need for businesses to prove that their processing is genuinely justified and proportionate. Finally, the opinion discusses the consequences of unlawful data processing.
In the event of non-compliance with the rules, corrective measures can be applied, ranging from limiting processing to deleting the data, or even the model itself. The impact of infringements will be assessed according to the scenario in which the model is used, regardless of whether it is operated by the data controller or by a third party. This precision highlights the responsibility of businesses in managing compliance throughout the lifecycle of AI models.
The EDPB opinion highlights the importance of a rigorous, contextual and well-supervised approach to data processing in the field of AI. It warns against excessive reliance on legitimate interest and reinforces the need for truly effective anonymization. In the event of a violation, the opinion points to clear sanctions adapted to the specific challenges of each case. In short, it recalls the shared responsibility of companies and regulators to ensure strict compliance with the GDPR, in a context where AI technologies are evolving rapidly.
These developments illustrate the continuous commitment of European authorities to adapt and strengthen the protection of personal data in the face of technological and legal developments.