Innovations
AI Act: Protection of rights and artificial intelligence
For several years, the European Union has sought to oversee the development of artificial intelligence in order to reconcile innovation with the protection of fundamental rights. In this context, Regulation (EU) 2024/1689 (the AI Act) was adopted by the European Parliament and the Council on 13 June 2024 and published in the Official Journal of the European Union on 12 July 2024.
This text lays down rules based on a risk-based approach, prohibiting certain practices and imposing strict requirements, especially for high-risk AI systems. The application of this regulation is particularly significant in the field of health, where AI promises major advances while requiring compliance with numerous European laws, such as the GDPR and the Medical Device Regulation (MDR).

AI Act: General and cross-sectoral presentation
Artificial intelligence (AI) is a constantly evolving set of technologies that brings numerous economic, environmental, and social benefits to a wide range of sectors.
However, depending on the context of application, the manner of use and the degree of technological development, AI can also generate risks and threaten public interests or the fundamental rights guaranteed by European Union law.
To oversee the development of AI, the European Union has adopted its first binding text in this area: Regulation (EU) 2024/1689 laying down harmonized rules on artificial intelligence (the AI Act).
This text was definitively adopted by the European Parliament and the Council on 13 June 2024, then published in the Official Journal of the European Union on 12 July 2024. Its provisions become applicable on a staggered timetable, with transition periods ranging from several months to three years depending on the provisions concerned.
Prior to this regulation, numerous ethical declarations on artificial intelligence had been adopted. With the evolution of the technology, however, the European Union changed its approach: it abandoned its soft-law strategy and put in place a binding regulation built on a risk-based approach.
This risk-based approach structures all the provisions of the regulation. First of all, the AI Act prohibits certain practices, the list of which is set out in Article 5. Moreover, given the rapid evolution of the technology, the European Commission is required to assess annually whether this list of prohibited practices needs to be revised, so as to prevent the rules from becoming obsolete in the face of the speed at which artificial intelligence develops.
Indeed, an AI system may not have the purpose or effect of manipulating behavior, carrying out social scoring, performing biometric identification or categorizing individuals. Engaging in these practices is punishable by an administrative fine, or by non-monetary sanctions provided for by the Member States.
Next, the regulation identifies high-risk artificial intelligence, i.e. systems that can have repercussions on the safety or fundamental rights of individuals; such a system may be embedded in a product or tied to a particular domain. Finally, the regulation refers to general-purpose artificial intelligence, designating a system that can be integrated into any product.
The establishment of regulatory sandboxes for artificial intelligence illustrates the European Union's desire to strike a balance between not paralyzing innovation and limiting the occurrence of risks for the persons concerned.
These sandboxes create a controlled environment for experimentation and testing at the development and pre-marketing stages. They allow the reuse of personal data, including sensitive data, thereby facilitating the development of systems of major public interest, such as those improving the health system.
One of the major challenges of the regulation lies in taking into account the role and the multiplicity of actors involved in the chain of responsibility. An important distinction is made between direct operators on the one hand and indirect operators on the other.
1 — Direct operators
First of all, among the direct operators, the supplier (the "provider" in the AI Act's terminology) bears most of the responsibility.
The supplier develops the AI system and brings it to the market; from the moment the system is placed on the market, the supplier can be held responsible. A key phase in preventing the occurrence of risks is the certification procedure, which is combined with other approaches depending on the characteristics of the product concerned, in particular in the field of health.
The deployer is a natural or legal person using an AI system under its own authority, except where the system is used in the course of a personal, non-professional activity. The distributor is a natural or legal person in the supply chain who makes an AI system available on the Union market and may be held responsible on that basis.
2 — Indirect operators
The importer is a natural or legal person located or established in the Union who places on the market an AI system bearing the name or trademark of a natural or legal person established in a third country. The importer may be held responsible if the AI system is high-risk or presents systemic risks.
The authorized representative is a person who has received and accepted a written mandate from a supplier of an AI system or of a general-purpose AI model to perform on its behalf the obligations and procedures established by the Regulation, and must ensure that the AI system complies with its requirements.
In practice, the AI Act is a perfect illustration of the digital values set out in the EU Declaration on Digital Rights and Principles for the Digital Decade, which citizens can welcome. Nevertheless, manufacturers will once again have to reckon with this new regulatory layer. In this respect, the field of health offers a topical example of this phenomenon of legislative stratification.

AI Act: Sector-specific application in health for a digital medical device (DMN) supplier
1 — The various pieces of legislation the DMN supplier must comply with
The use of AI in the field of health fulfills a specific objective: the sharing and opening of health data for research.
This objective contributes to improving the health of all. Opening up health data will make it possible to accelerate and secure access to such data, in full transparency, in particular for the benefit of digital tools integrating artificial intelligence systems.
Artificial intelligence is very promising: it can help meet certain challenges encountered in the field of health, such as the creation of synthetic medical data sets or the simulation of medical scenarios.
However, in accordance with the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679), health data is classified as sensitive data.
For this reason, the processing of such data is in principle prohibited, unless it falls within one of the ten exceptions provided for by Article 9(2) of the GDPR and, where applicable, the prior formalities incumbent on the data controller have been completed.
Artificial intelligence systems operating in the healthcare sector will generally be classified as high-risk AI systems. This includes, for example, AI systems designed to contribute to the diagnosis or prevention of certain diseases.
Such systems can only be placed on the Union market if they meet certain requirements that ensure that they do not present unacceptable risks to the public interests of the Union as recognized and protected by law.
First of all, as part of a conformity assessment procedure involving a notified body, the supplier must meet the transparency, documentation and security requirements imposed by the regulation. It must put in place a risk management system and data governance measures (collection methods, verification of adequacy, control of biases, pseudonymization, etc.).
Compliance with these requirements is demonstrated by the award of a CE mark attesting to the conformity of the product.
Particular attention must be paid to the coordination of the various European regulations that impose distinct obligations on digital medical devices before they can be placed on the market. When a supplier wishes to market a digital medical device integrated into the patient record, it must comply with the requirements of European Union law.
This includes compliance with the principles of the GDPR regarding the processing of personal data, in particular lawfulness, transparency and purpose limitation.
The supplier must also have its device evaluated by a notified body, such as AFNOR Certification in France, to obtain a CE mark attesting to compliance with safety and health requirements. If the device is integrated into the patient record, the 2024 EHDS regulation requires an EU declaration of conformity verifying the interoperability and portability of the system.
If the device uses artificial intelligence classified as high-risk, CE marking is mandatory in order to attest to compliance with the regulation's requirements on risk management and transparency.
Note that, under the MDR and the AI Act, a single CE mark may suffice where the high-risk AI system is also subject to other EU legislation. A guidance note from the European AI Office and the Medical Device Coordination Group, responsible for the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), is expected by 2025 to clarify the relationship between these regulations.
If the supplier wishes to enter the French market, it will also have to comply with specifically French requirements relating to medical remote monitoring.
Indeed, under Articles R. 6316-1 et seq. of the Public Health Code, a health professional carrying out a remote monitoring activity must perform the act under the conditions provided for by law (authentication of the health professionals involved in the act, identification of the patient, etc.).
The health professional must also record in the patient file and, where applicable, in the shared medical record, his or her identity, an account of the performance of the remote monitoring act, and the acts and prescriptions carried out as part of the telemedicine act or telecare activity.
Coverage by the Health Insurance scheme will only apply if the conditions laid down in the Social Security Code are met and the conditions set out in the Interoperability and Security Framework for digital medical remote monitoring devices of July 25, 2022 are complied with.
2 — Responsibility
It should be emphasized that, with the emergence of artificial intelligence, the damage it may cause is likely to multiply, leading to an increase in claims for damages based on liability rules.
However, national law could hinder the effective pursuit of claims for compensation, in particular because of the difficulty of proving the harm suffered by the victim, or because of the grounds for exemption provided for in the Civil Code, such as the state of scientific and technical knowledge at the time the product was put into circulation.
Directives are therefore set to be adopted in the wake of the AI Act. First, the European Union plans to adopt a directive on non-contractual liability specific to AI, the objective of which is to simplify the evidentiary rules for victims of damage caused by AI.
More specifically, while the proposed directive still places the burden of proof on the victim, it intends to adjust the evidentiary regime by establishing measures for the disclosure of evidence and rebuttable presumptions for certain types of AI.
In other words, a court may therefore order the disclosure to victims of relevant evidence concerning high-risk AI systems suspected of having caused their harm, provided that the disclosure is necessary and proportionate to the needs of the action.
Once the injunction is issued, the addressee will be required to provide the required evidence, failing which non-compliance with a relevant duty of care will be presumed. The text also establishes a presumption of causality between the fault and the alleged damage, where several conditions listed in the text are met.
Second, the European Union intends to adopt a directive adapting liability for defective products. The plaintiff will have to prove the harm suffered, demonstrate that the product does not provide the safety the victim was entitled to expect, and establish the causal link between those two elements. The text thus maintains a no-fault liability regime.
In addition, the definition of defect includes new criteria adapted to the specific problems raised by artificial intelligence. Account will therefore be taken of whether the manufacturer retains control over its product and whether the AI has the capacity to continue learning.
Finally, the concept of product, initially limited to movable goods, is expressly extended to software, and therefore to defective AI systems.
