Article 22 of the Belgian Constitution provides for the right to protection of private and family life, and forms the cornerstone of the Belgian laws governing or impacting privacy in general. In addition, Article 8 of the European Convention on Human Rights has direct effect in Belgium and is a cornerstone of the rule of law and of the Belgian law enforcement system.
However, from the point of view of digital technologies and innovation, the most important regulation in Belgium for businesses is the General Data Protection Regulation, also referred to as the GDPR (Regulation (EU) 2016/679), which applies in all EU member states. Alongside this European legislation, the Belgian Law of 30 July 2018 on the protection of natural persons with regard to the processing of personal data also applies. This Belgian legislation adopts a number of principles enshrined in the GDPR in respect of the activities of specific state and public bodies. In respect of businesses, it adds little to, and deviates little from, the standard rules laid down by the GDPR.
Belgium established its supervisory authorities through the Law of 3 December 2017, as required by the GDPR. The main supervisory authority is vested with investigative and corrective powers and is entitled to fine a controller or processor that does not comply with the GDPR or the Belgian Law of 30 July 2018. The fines listed in Article 83 of the GDPR may not, however, be imposed on public authorities and their appointees or agents, unless they are a legal person governed by public law offering goods or services on a market (Article 221, Section 2 of the Belgian Law of 30 July 2018).
In addition to the GDPR and the Belgian Law of 30 July 2018, other laws have been enacted to protect privacy and fundamental rights in various fields, such as consumer protection, electronic communications, electronic commerce, direct marketing and the use of closed-circuit television (CCTV). Indeed, the Code of Economic Law (CEL) contains certain provisions on direct marketing in its Book VI and is supplemented in this respect by the Royal Decree of 4 April 2003, regulating the sending of advertising by e-mail. In addition, the Law of 21 March 2007 on the use of camera surveillance regulates the use of CCTV in public and private areas. The authority responsible for the enforcement of these regulations is the Belgian Data Protection Authority (DPA).
In December 2024, Belgium also enacted a major reform of private investigations, which aims to translate the essential requirements of data protection law into the field of intelligence-gathering activities in the private sector (see 4.3 Employment Privacy Law). The Act on Private Investigations is a matter of public policy, and breaches thereof can lead to the rejection or exclusion of evidence in court, as well as to administrative or criminal sanctions.
At present, no specific legal regime has been enacted with respect to artificial intelligence (AI).
The Belgian DPA consists of:
The DPA has the right to conduct audits.
Furthermore, investigations may be launched on the DPA's own initiative or where a complaint is lodged by a data subject, or by a body, organisation or association that has been properly constituted in accordance with the law of an EU member state, has statutory objectives of public interest and is active in the protection of data subjects’ rights and freedoms.
Alongside the DPA, different regulators and public authorities have a role to play in data sharing, open data and the national implementation of the EU data spaces strategy.
With respect to AI, it is still unclear whether the DPA will be vested with regulatory powers under the EU AI Act and, if so, to what extent. That being said, there is little doubt that the DPA will exercise its powers in relation to automated decision-making, and the impact of AI projects on fundamental rights, as often as it can.
The DPA must comply with the GDPR and the Belgian Law of 30 July 2018. When a complaint is filed or an investigation is launched, there will usually be an initial fact-finding phase during which the authority asks the business to provide factual information. Afterwards, proceedings on the merits can be started before the Litigation Chamber of the DPA, in the course of which the parties can submit their respective arguments in writing and may be heard.
After the proceedings, the Litigation Chamber is entitled to:
In the event that the DPA imposes an administrative fine, such fine must be effective, proportionate and dissuasive, pursuant to Article 83 of the GDPR. Furthermore, specific circumstances must be taken into account when imposing an administrative fine and deciding on its amount.
If the respondent does not agree with the decision handed down by the Litigation Chamber, it may lodge an appeal before the Market Court (Brussels Court of Appeal) within 30 days of notification of the decision. The Market Court can overturn the decision, in whole or in part, and remand the case, or rule on all grounds itself and substitute its own decision.
Since February 2024, any interested third party affected by a decision of the DPA, who was not a party to the proceedings before the Litigation Chamber, may also lodge an appeal before the Market Court, insofar as it suffers personal, direct, certain, current and legitimate harm due to the decision of the Litigation Chamber.
The Litigation Chamber also has the power to propose a transaction. To facilitate a faster resolution, the DPA has recently issued a (non-binding) settlement policy to help companies navigate DPA transactions.
While there is no official calculation method for fines in Belgium, the DPA consistently refers to the European Data Protection Board (EDPB) Guidelines 4/2022.
These Guidelines outline a methodology for determining the amount of the fine, namely determining:
The DPA uses this methodology to determine the extent of administrative fines. In Belgium, fines are transferred to the State Treasury.
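Purely by way of illustration, the sketch below (in Python) mirrors the broad logic of that methodology: a statutory ceiling under Article 83 of the GDPR (a flat amount or a percentage of worldwide annual turnover, whichever is higher), a starting amount expressed as a fraction of that ceiling depending on the seriousness of the infringement, an adjustment for aggravating or mitigating circumstances, and a final cap at the legal maximum. The seriousness fractions and the adjustment factor used here are hypothetical placeholders, not figures taken from the Guidelines or from DPA practice.

```python
# Illustrative sketch of a fine calculation along the lines of the
# EDPB Guidelines 4/2022. The seriousness fractions and the adjustment
# factor are hypothetical placeholders, not figures from the Guidelines.

def legal_maximum(turnover_eur: float, severe_infringement: bool) -> float:
    """Statutory ceiling under Article 83 GDPR: a flat amount or a percentage
    of worldwide annual turnover, whichever is higher."""
    if severe_infringement:  # Article 83(5)-type infringements
        return max(20_000_000, 0.04 * turnover_eur)
    return max(10_000_000, 0.02 * turnover_eur)  # Article 83(4)-type infringements

def indicative_fine(turnover_eur: float, severe_infringement: bool,
                    seriousness: str, adjustment: float) -> float:
    """Starting amount as a fraction of the ceiling, adjusted for aggravating
    or mitigating circumstances and capped at the legal maximum."""
    ceiling = legal_maximum(turnover_eur, severe_infringement)
    starting_fraction = {"low": 0.05, "medium": 0.15, "high": 0.5}[seriousness]
    return min(ceiling, ceiling * starting_fraction * adjustment)

# Example: EUR50 million turnover, Article 83(5)-type infringement of medium
# seriousness, with mitigating circumstances reducing the amount by 20%.
print(indicative_fine(50_000_000, True, "medium", 0.8))  # 2400000.0
```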
Recent Decisions From the DPA in 2024
Security failures result in EUR200,000 fine (Decision No 166/2024)
The DPA fined a hospital EUR200,000 for breaching the GDPR following a cyber-attack in 2021. The attack compromised the personal data of 300,000 individuals and made the hospital’s servers inaccessible. The hospital was found to have failed to conduct a data protection impact assessment (DPIA), establish an effective information security policy or implement essential security measures, such as employee training and system log monitoring.
EUR45,000 fine for GDPR violations at the workplace (Decision No 114/2024)
On 6 September 2024, the DPA imposed a fine of EUR45,000 on a company following a complaint from an individual who had been employed as a temporary worker for approximately one year. The company collected employees’ fingerprints for time registration without offering alternatives, establishing a legal basis, or informing employees about data storage, retention and third-party transfers. The DPA found the company in violation of GDPR principles, including purpose limitation, data minimisation and transparency.
GDPR violations related to dark patterns in cookie consent (Decision No 113/2024)
Following a complaint, the DPA found that Mediahuis had used dark patterns and illicit cookie practices on its websites. The complainant, represented by the European Center for Digital Rights (NOYB), highlighted the absence of a “reject all” button, deceptive button colours and difficulties in withdrawing consent. The DPA ordered Mediahuis to adjust the cookie banners within 45 days to include a refusal button and to avoid deceptive colours, failing which a fine of EUR25,000 per day per website would be imposed. The DPA also reprimanded Mediahuis, stating that only strictly necessary cookies may be used on the basis of legitimate interest.
Delayed access request response leads to EUR100,000 fine (Decision No 207/2024)
The DPA fined an unnamed telecommunications company for failing to respond promptly to a client’s access request. The company made unsolicited changes to the individual’s subscriptions. When the individual submitted an access request under Article 15 of the GDPR, the company took 14 months to respond, thereby violating Articles 12(2), 12(3), and 15 of the GDPR.
EUR172,431 fine for failing to honour data subject rights (Decision No 87/2024)
The DPA fined a company for failing to erase a data subject’s personal data used in direct marketing, and for having an overloaded, part-time data protection officer (DPO) unable to perform their tasks effectively. The initial fine of EUR245,000 was reduced to EUR172,431 due to the company’s financial situation.
Non-compliant cookie banner (Decision No 156/2024)
The Belgian DPA imposed a fine of EUR40,000 per day on RTL Belgium for GDPR violations related to non-compliant cookie banners, following a complaint by NOYB. The complaint highlighted the absence of a “reject all” button and the use of misleading colours in the cookie banner. The DPA required RTL Belgium to:
After RTL Belgium complied with these corrective measures, the DPA acknowledged their compliance, resulting in the dismissal of the case and the waiving of the imposed fines.
To date, Belgium has not adopted any national legislation on AI or machine learning. However, the AI Act has entered into force and will have direct effect in Belgium as it becomes progressively applicable.
However, it is worth noting that:
The AI Act and GDPR should be viewed as complementary frameworks, each with its own rules and obligations. Since many AI systems process personal data, staying compliant with both sets of rules is a must. The following parallels can be identified between the AI Act and the GDPR.
The AI Act outlines general principles for all AI systems and specific obligations to implement these principles, influenced by the OECD AI Principles and the High-Level Expert Group (HLEG)-AI’s seven ethical principles. Recital 27 of the AI Act lists principles such as human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination, fairness and social and environmental wellbeing. These principles are detailed in various articles of the AI Act. For example, Article 10 prescribes data governance for high-risk AI systems, Article 13 addresses transparency, Articles 14 and 26 introduce human oversight and monitoring requirements, and Article 27 mandates fundamental rights impact assessments for certain high-risk AI systems.
The AI Act and the GDPR have different scopes and requirements, which can create challenges for compliance and consistency. Additional guidance from authorities such as the EDPB, the European Commission and/or the AI Office is of great value. It is worth mentioning the following guidelines.
Currently, fines imposed by the DPA are much more common than private litigation concerning data protection infringements. This is most likely due to the high costs of litigation combined with the relatively low number of claims for damages.
In 2024, the CJEU issued several rulings regarding standard damages in relation to data protection, as outlined in Article 82 of the GDPR. Key elements to consider include the following:
Cases C-182/22, C-189/22
The CJEU ruled that, under Article 82(1) of the GDPR, compensation for non-material damage due to personal data theft does not require consideration of the severity of the GDPR infringement. The CJEU clarified that compensation should fully cover the damage, and may be minimal if the damage is not serious. Furthermore, “identity theft” for the purposes of compensation requires actual misuse of the stolen data, but compensation is not limited to cases involving subsequent identity theft or fraud.
Case C-590/22
The CJEU has ruled that a data subject may seek compensation for non-material damages caused by the fear of disclosure of personal data, even if the disclosure itself is not proven, as long as the negative consequences of that fear are proven. Merely proving an infringement, however, is insufficient for compensation; actual damage must be proven.
Case C-741/21
The CJEU clarified the right to compensation for non-material damage under the GDPR:
Case C-687/21
The CJEU held that non-material damages under Article 82 require the claimant to prove a well-founded fear and a real risk of misuse of personal data.
Case C-340/21
The CJEU ruled that the fear of potential misuse of personal data by third parties constitutes non-material damage under Article 82(1) of the GDPR. Controllers must compensate for damages from unauthorised data disclosure or access unless they prove no fault on their part. The CJEU clarified that such incidents alone do not imply inadequate security measures by the controller, who must prove the measures’ appropriateness.
On 31 May 2024, the Law of 21 April 2024, which amends Books I, XV and XVII of the Belgian CEL and transposes Directive (EU) 2020/1828 on representative actions to protect the collective interests of consumers (RAD), was published in the Belgian Official Journal.
The new Belgian law does not introduce a completely new legal system to allow so-called class actions, as collective redress actions have been available in Belgium for consumers since 2014 and for SMEs since 2018.
Nevertheless, the following changes are notable.
So far, few class actions have been initiated: about a dozen such actions have been filed. Class actions are relatively rare, and there are currently no signs that they will become more frequent in the future. It remains to be seen whether the Representative Actions Directive will have any impact on the frequency of class actions now that it has been implemented in Belgian law. At this stage, the authors anticipate that the implementation of the Directive is unlikely to bring any major increase in the number of class actions filed, considering that Belgian law was already substantially in line with the Directive.
Although the Data Act has entered into force, many of its provisions will only become applicable 20 months after 11 January 2024 – ie, starting from 12 September 2025, and there are certain exceptions with longer transition periods:
The Data Act aims to remove barriers to accessing data for both consumers and businesses in a context where the volume of data generated by humans and machines is increasing exponentially. This translates into various specific objectives:
As the Data Act aims to regulate the use of data, and since such data has become omnipresent in contemporary society, the impact of the Data Act cannot be underestimated.
Key obligations relate to, among other things:
The European Commission has published a comprehensive overview of the Data Act on its website, including its objectives and how it works in practice. In addition, it has published frequently asked questions about the Data Act.
While the scope of the GDPR is limited to the processing of so-called personal data, the scope of the Data Act is much broader as it applies to any data. Given the overlap in the definitions of “data” and “personal data”, there is inevitably an overlap between the obligations under the GDPR and those under the Data Act. However, since the GDPR is considered a so-called lex specialis it will, with regard to personal data, prevail over the obligations under the Data Act. Consequently, the provisions of the Data Act are without prejudice to the GDPR regime, privacy rights and the right to confidentiality of communications, all of which must be complied with when adhering to the requirements under the Data Act.
As the GDPR and the Data Act both address the manner in which certain data is used, there are similarities and differences in the handling of such data between the two legal instruments. Some of these differences and similarities are listed in the following.
There are no specific laws in Belgium implementing the Data Act or regulating the use of internet of things (IoT) services and data processing services. Obviously, cybersecurity and cyber-resilience requirements may apply under applicable legislation, but this is beyond the scope of the present chapter.
Member states are required to designate at least one competent authority to deal with the enforcement of the Data Act. It is not known whether Belgium will designate the DPA as the competent authority. In any event, the DPA will remain responsible for monitoring the application of the Data Act insofar as the protection of personal data is concerned.
Belgian legislation implementing the E-Privacy Directive regulates both cookies and any other type of online tracking technology. It imposes (i) transparency requirements (such as posting a cookie notice online) and (ii) an opt-in consent requirement for all non-essential cookies (ie, all cookies that are not strictly necessary to transmit a communication over an electronic communications network or to provide an information society service requested by the user).
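As a practical illustration of the opt-in requirement, the following minimal sketch (in Python, with hypothetical cookie names and categories) shows the gating logic a website operator might apply: strictly necessary cookies are exempt, while all other cookies are set only for categories to which the user has actively consented.

```python
# Minimal sketch of opt-in gating for non-essential cookies.
# Cookie names and categories are hypothetical examples.

STRICTLY_NECESSARY = {"session_id", "csrf_token"}  # exempt from consent
NON_ESSENTIAL = {"analytics_id": "analytics", "ad_profile": "advertising"}

def cookies_to_set(consented_categories: set) -> set:
    """Return only the cookies that may be set: strictly necessary cookies
    plus non-essential cookies for which opt-in consent was given."""
    allowed = set(STRICTLY_NECESSARY)
    for name, category in NON_ESSENTIAL.items():
        if category in consented_categories:
            allowed.add(name)
    return allowed

print(cookies_to_set(set()))          # no consent given: strictly necessary only
print(cookies_to_set({"analytics"}))  # user opted in to analytics only
```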
The DPA has published guidelines, a non-exhaustive checklist and extensive case law on the use of cookies and the applicable transparency and consent requirements – eg, in relation to the Transparency and Consent Framework of Interactive Advertising Bureau Europe (IAB Europe).
To summarise, the DPA states that:
In Belgium, there is no specific legislative code that compiles advertising standards. Commercial advertising is governed by a regulatory framework comprising binding legal provisions and self-regulating, non-binding professional rules.
The Belgian CEL outlines the core principles governing advertising practices in Belgium:
In Belgium, advertising laws and regulations are primarily enforced by the Belgian courts. Specific regulatory authorities are responsible for certain aspects of advertising law, including:
Additionally, the self-regulatory body known as the Jury for Ethical Advertising oversees ethical standards in advertising.
The DPA has adopted specific guidelines regarding direct marketing. Furthermore, it is worth noting that the Digital Services Act introduces two new restrictions concerning targeted advertising on online platforms. First, it bans advertising targeting minors based on profiling. Second, it bans targeted advertising based on profiling using special categories of personal data, such as sexual orientation or religious beliefs.
The employment relationship between employee and employer constitutes a specific domain for the protection of personal data. There are two conflicting principles:
Although the GDPR aims to harmonise the rules on the protection of personal data within the EU, it provides an exception in the field of employment relationships due to the characteristic relationship between employer and employee. It is therefore possible to establish specific rules for the processing of employees’ personal data, both at the sector and company level, for example through collective labour agreements.
It is important to note the following.
Several collective bargaining agreements (CBAs) must be observed, as they have been concluded to provide specific privacy protection for employees. This is the case for camera surveillance (CBA No 68 of 16 June 1998) and for the electronic monitoring of internet use and emails (CBA No 81 of 26 April 2002).
In December 2024, the Act on Private Investigations entered into force. Its practical implications for employers are that they must create an internal policy describing the circumstances and authorised methods of investigations, consult the collective bodies, update their privacy policies and inform employees. In addition, businesses must only use external investigation suppliers that are duly licensed and abide by the new legal provisions, which are intended to guarantee respect for data protection rules in the context of private investigations. Some investigation methods and the collection of certain categories of information are prohibited, in some cases subject to a possible exemption if the individual has given their consent.
In each phase of an asset deal, personal data is collected and processed, requiring compliance with the GDPR. The main points regarding the processing of personal data are summarised as follows.
Confidentiality and/or Data Processing Agreement
In the initial phase, a confidentiality agreement (non-disclosure agreement) is often signed to prevent the spread of information and keep exploratory talks secret. This agreement should include provisions on data protection.
Agreement With the Data Room Provider
An agreement must be concluded between the seller(s) and/or the target company and the data room manager (the processor) that complies with the GDPR, including the mandatory content required by Article 28. If the manager is located outside the European Economic Area (EEA), additional restrictions on cross-border data transfers apply.
Processing Personal Data in the Due Diligence Report
Information in the data room will be analysed by the potential buyer and their advisors. Under the minimisation principle, only personal data that is strictly necessary for the specified purposes can be shared in the data room. Businesses must therefore find ways to assess whether documents should be redacted in part, and to make sure that spreadsheets and tables are obfuscated or the circulation thereof is limited to those who have an actual need to know. Lawyers must keep this information confidential, but it may be that not all other professionals are bound by the same duty. All individuals with access to the data room must commit to keeping the information confidential and not spreading it beyond the intended purpose. Participants often sign a digital confidentiality agreement before accessing the data room, which should include data processing provisions.
Clauses on the Risks of Data Processing
If due diligence reveals potential data protection risks, it is important for the buyer to obtain guarantees from the seller regarding the legality of the initial data collection and processing, and of the lawful transfer of personal data for the asset deal, including confirmation that data subjects have been informed and given the right to object if necessary.
The seller should make clear agreements about their liability and co-operation post-transfer, and ensure the buyer will process the data lawfully and in accordance with applicable legislation.
Change of Data Controller in Asset Deal
Once ownership is transferred, the buyer will be considered the new data controller for personal data related to the business operations. The business information is transferred at the time of ownership transfer – or at a later date, which is often the case – virtually or even physically. The originals of employment contracts, individual accounts, etc, must be transferred to the new employer, and contracts with customers and/or suppliers are transferred, as well as all relevant databases with personal data. Once this transfer has effectively taken place, it is the responsibility of the new data controller to ensure GDPR compliance, provide all data subjects with proper information about their data processing and respect all their rights.
Data Processing in a Transitional Services Agreement
Post-transfer, the buyer and seller may continue to collaborate. For example, the seller might handle payroll until the buyer finds a suitable provider. In such cases, the seller acts as a processor for the buyer. The reverse can also happen, where the buyer handles complaints for the seller. A transitional services agreement should be made addressing data processing and ensuring necessary safeguards on the part of the data controller.
Transfers of personal data from Belgium to a country outside the EEA are regulated by Chapter V of the GDPR. No additional restrictions apply under Belgian law.
General
As the GDPR is a European instrument, all EU countries are subject to the same requirements. However, when personal data is transferred outside the EU, the following rules must be taken into account to ensure that the level of protection of data subjects under the GDPR is not undermined.
The transfer of data outside the EU is subject to:
The European Commission has (so far) recognised Andorra, Argentina, Canada (commercial organisations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Japan, Jersey, New Zealand, the Republic of Korea, Switzerland, the United Kingdom (under both the GDPR and the Law Enforcement Directive (LED)), the United States (commercial organisations participating in the EU–US Data Privacy Framework) and Uruguay as providing adequate protection.
In the absence of an adequacy decision, data transfers outside the EU are still possible if appropriate safeguards have been enforced. These could be binding corporate rules, standard data protection clauses as adopted or approved by the European Commission, etc.
Based on the case law of the CJEU (Schrems I and II), data exporters are required to conduct a data transfer impact assessment. They must identify and implement supplementary measures to ensure that personal data transferred to a third country that has not received an adequacy decision is given an essentially equivalent level of protection.
Derogations
In the absence of an adequacy decision for a specific third country or appropriate safeguards, it is still possible to transfer personal data to a third country or an international organisation, subject to one of the following conditions:
As mentioned in 5.1 Restrictions on International Data Transfers, for a company to transfer data, an adequacy decision or other adequate safeguards are required. As many of these mechanisms have been approved before, no additional government notification or approval is required.
However, in the event that the company invokes binding corporate rules, the latter must have been approved by a supervisory authority before personal data can be transferred to a third country (Article 47(1) of the GDPR).
There are currently no specific data localisation requirements in Belgium. However, the EU will introduce data localisation requirements as part of the European Health Data Space (EHDS) Regulation.
European entities can sometimes face repercussions in relation to the extraterritorial enforcement of unilateral sanctions by third countries. The EU considers that such enforcement is contrary to international law and has adopted Regulation 2271/96 (the blocking statute) as a way of protecting itself. The blocking statute has been given effect in Belgian legislation through the Law of 2 May 2019.
The blocking statute prohibits European entities from complying with the listed extraterritorial sanctions and from co-operating with the relevant third country's authorities in their enforcement.
Since 2018, the blocking statute has applied to US sanctions against Iran and Cuba.
New Standard Contractual Clauses (SCCs)
On 12 September 2024, the European Commission announced its intention to launch a public consultation on the introduction of additional SCCs for international transfers of personal data to non-EU controllers and processors that are directly subject to the GDPR, a situation not yet covered by the existing SCCs. The adoption of these new SCCs will require organisations transferring data internationally to consider whether the data importer is directly subject to the GDPR and whether to apply the new SCCs or the 2021 SCCs. The public consultation was scheduled to take place in the fourth quarter of 2024; however, to date, this additional set of SCCs has not yet been published.
EDPB Guidelines on Article 48 GDPR
The EDPB has launched a public consultation on its guidelines on Article 48 of the GDPR, which addresses requests from third-country authorities for the transfer or disclosure of personal data. The guidelines clarify how EU controllers and processors should handle such requests, emphasising compliance with both Article 6 (legal grounds for processing) and Chapter V of the GDPR (international data transfers). The EDPB offers detailed recommendations to ensure data protection principles are adhered to when responding to third-country requests.
Bastion Tower
Pl du Champ de Mars 5
1050 Bruxelles
Belgium
+32 2 515 93 00
lena.tausend@osborneclark.com www.osborneclarke.com

The involvement of artificial intelligence (AI) in the healthcare sector is particularly noteworthy. AI is, and will continue to be, used for diagnostics, drug development, treatment personalisation, virtual health assistants, health administration and remote patient monitoring, amongst other applications. It opens up new opportunities for organisations, healthcare professionals and clinics, enabling them to improve their offerings, develop new solutions and address various societal challenges. Although AI can generate benefits, it also raises a number of legitimate concerns related to human safety and security, freedom, privacy, integrity, dignity, self-determination and non-discrimination.
This article delves into the implications of the EU’s Artificial Intelligence Act (AIA) for healthcare professionals using AI systems in the context of remote patient monitoring. A wide range of stakeholders are covered under the AIA. These include not only the providers and manufacturers of AI systems but also the users, such as healthcare professionals. Any healthcare professional using an AI system under their authority will be considered a deployer, unless the AI system is used in the course of a personal non-professional activity. As a consequence, healthcare professionals are – as deployers – required to comply with a long list of obligations, which may notably range from compliance with instructions for use to assigning human oversight and ensuring that input data is relevant.
How To Qualify an AI System Used in the Context of Remote Patient Monitoring Under the AIA
High-risk AI comprises two categories:
AI systems used for the purpose of remote patient monitoring may fall under both categories of high-risk AI systems.
Under Article 2, Section 1 of the Medical Devices Regulation (MDR), software can qualify as a medical device where it is intended to be used for specific purposes such as diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease.
Further, the MDR states that medical devices requiring a conformity assessment, and thus classified as high-risk AI, include all class IIa, IIb and III devices. In the context of remote patient monitoring, Annex VIII of the MDR describes which medical devices fall under classes IIa to III, including several devices used for diagnosis and monitoring as well as software intended to monitor physiological processes.
Requirements for Healthcare Professionals as a Deployer Under the AIA
Qualifying remote patient monitoring tools as high-risk AI systems and healthcare professionals as deployers triggers a cascade of compliance requirements. The main requirements, which are listed in Article 26 of the AIA, are explained below.
AI literacy
As of February 2025, healthcare professionals who use AI systems must have sufficient knowledge about AI.
AI literacy is defined in Recital 56 as the skills, knowledge and understanding that allow providers, deployers and affected persons to make informed decisions regarding AI systems. This also includes awareness of the opportunities, risks and potential harm associated with AI. Article 4 of the AIA provides that deployers, in the same way as providers, are obliged to ensure, to the best of their ability, a sufficient level of AI literacy among their employees and anyone else who operates or uses these systems on their behalf.
In the context of healthcare professionals, this means, for example, that physicians will need to properly inform and educate caregivers about AI systems’ risks and limitations. They should inform them about how to use the AI system, and of its limitations, as well as how and when to monitor data from the AI system. This also means that physicians and caregivers must be aware that AI systems used in remote patient monitoring may contain biases or ignore essential information that could lead to false-positive or false-negative results. False positives could lead to unnecessary anxiety for patients and potentially unnecessary medical interventions. In contrast, false negatives can result in missed diagnoses or delayed treatment, potentially worsening patient outcomes.
To give a more practical example, a physician should explain to caregivers and patients the specifics of the environments in which manufacturers or providers specifically state that their AI system will not operate accurately. Suppose that a remote patient monitoring tool is designed to detect skin cancer or follow up on the stages of skin cancer. The provider specifies that the tool requires specific lighting conditions to function correctly. If the physician or caregiver is not aware of the limitations of this AI system and the specific conditions are not met – such as poor lighting when seeking to detect skin lesions – the AI system may fail to provide accurate readings.
Instructions for use
In accordance with Article 13, Section 2 of the AIA, providers must give instructions for use and make them available to deployers. These instructions should include comprehensive information on the system’s characteristics, capabilities and performance limitations. They should also outline potential risks related to the use of high-risk AI systems, including “any actions by the deployer that could influence system behaviour, under which the system might pose risks to health, safety, and fundamental rights, on the changes that have been pre-determined and assessed for conformity by the provider and on the relevant human oversight measures, including the measures to facilitate the interpretation of the outputs of the AI system by the deployers”.
Deployers of high-risk AI systems must take the necessary technical and organisational measures to ensure that the systems are used correctly and in accordance with these instructions.
Human oversight
Providers are responsible for the basic implementation of human oversight tools, and deployers are subsequently obliged to assign human oversight to natural persons who have the necessary competence, training and authorisation. This requirement is intended to prevent or minimise risks to health, safety or fundamental rights that may arise from the use of a high-risk AI system (such as biased output or false negatives).
Input data screening
Data provided to or directly captured by an AI system, on the basis of which the system processes an output – defined as input data – must be relevant and sufficiently representative with respect to the intended purpose of the high-risk AI system. This obligation, however, only applies to the extent the deployer exercises control over such input data. In healthcare, for example, this could mean including diverse patient data to avoid bias. Alternatively, if the AI system is designed for remote monitoring of a specific condition, such as diabetes, the input data should include relevant medical records and diagnostic information.
Post-market surveillance and vigilance
In accordance with Article 26, Section 5 of the AIA, healthcare professionals using AI systems for remote patient monitoring must monitor the operation of any such AI system on the basis of the accompanying instructions for use. If they identify that the use of the AI system may result in a significant risk, or if they identify a serious incident, they may need to inform the provider (and where legally required also the distributor and/or the relevant market surveillance authorities) and suspend the use of the system.
Log keeping
The AIA mandates the automatic recording of logs on high-risk AI systems. This ensures a level of traceability of the AI systems’ functioning throughout their life cycle, and facilitates the monitoring of high-risk AI systems to detect situations that might pose risks to health, safety or fundamental rights, as well as the establishment and proper documentation of a post-market monitoring system. This also allows for the evaluation of continuous compliance of AI systems with the AIA’s requirements. Pursuant to Article 26, Section 6 of the AIA, deployers of high-risk AI systems must retain logs under their control for at least six months, considering the AI system’s intended purpose. When logs are managed by healthcare professionals, it is crucial to ensure that they can be stored long term and that data governance policies are in place to regulate the retention period.
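By way of illustration, the following minimal sketch (in Python) shows the kind of retention check such a governance policy might codify. The one-year organisational retention period is a hypothetical choice; Article 26, Section 6 of the AIA only requires a minimum of six months (approximated here as 183 days).

```python
# Minimal sketch of a retention check for high-risk AI system logs.
# The one-year organisational retention period is hypothetical; the AIA
# only requires a minimum of six months (approximated here as 183 days).

from datetime import datetime, timedelta, timezone

MINIMUM_RETENTION = timedelta(days=183)   # statutory floor of six months
POLICY_RETENTION = timedelta(days=365)    # hypothetical organisational choice

def logs_to_purge(log_timestamps, now):
    """Return log entries old enough to be deleted under the policy,
    which can never be shorter than the statutory minimum."""
    retention = max(POLICY_RETENTION, MINIMUM_RETENTION)
    cutoff = now - retention
    return [ts for ts in log_timestamps if ts < cutoff]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
logs = [datetime(2023, 1, 15, tzinfo=timezone.utc),
        datetime(2025, 5, 1, tzinfo=timezone.utc)]
print(logs_to_purge(logs, now))  # only the 2023 entry is past the cutoff
```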
Transparency
Transparency and explainability are key to foster trust in AI. It is therefore not surprising that the AIA emphasises the importance of these principles several times and imposes various transparency obligations. In the context of remote patient monitoring, the following transparency obligations merit consideration by healthcare professionals:
Data protection impact assessment and co-operation
These obligations are straightforward: deployers must co-operate with the relevant competent authorities in any actions they take concerning a high-risk AI system to implement the AIA. This co-operation may include providing any requested information about the AI system used. In addition, they must comply with their obligation to carry out a data protection impact assessment under Article 35 of the GDPR, for which they can use the instructions for use.
Fundamental rights impact assessment for high-risk AI systems
Article 27 of the AIA provides that, prior to deploying a high-risk AI system as defined in Article 6(2), deployers who are public bodies or private entities providing public services, and those deploying AI systems specified in points 5(b) and (c) of Annex III, must carry out an assessment of the system’s impact on fundamental rights.
The term “public services” is used broadly in the AIA, without clear criteria or further guidance on how to identify such services. This could result in a wider range of organisations being subject to this obligation than expected at first sight. Recital 96 provides some context by listing examples of public services, such as healthcare. As a result, healthcare professionals may find themselves subject to this obligation.
Where applicable, healthcare professionals must thus ensure that a fundamental rights impact assessment is carried out prior to the first use of a high-risk AI system, consisting of the following elements:
The AI Office is responsible for developing a template questionnaire to assist deployers in fulfilling their obligations.
The Interplay Between the AIA and the GDPR
Remote patient monitoring involves the collection and processing of a large amount of personal (health) data. As a result, remaining compliant with both the AIA and the GDPR is a must for healthcare professionals. An extensive discussion of the interplay between the AIA and the GDPR is beyond the scope of this article, so the following is only a general overview of the similarities between the two regulations.
Scope of the GDPR and the AIA
The GDPR applies to:
This contrasts with the scope of the AIA, which is centred on the definition of an AI system and extends to providers, deployers, importers, distributors and authorised representatives. Unlike the GDPR, the AIA includes a detailed risk categorisation framework that imposes different obligations depending on the risk level of the AI system. Most of the obligations under the AIA apply to high-risk AI systems.
Roles under the GDPR and AIA
Healthcare professionals using AI systems must consider their roles under both the GDPR and the AIA. This is crucial, as different obligations under the GDPR and AIA may apply depending on their qualification.
The GDPR makes a distinction between controllers and processors, with controllers being responsible for the strictest levels of GDPR compliance. The AIA distinguishes between different categories of actors, such as providers, deployers, distributors, importers, etc. The provider and the deployer are the most important roles in practice.
Consider a physician in a hospital using an AI system to remotely monitor a patient’s mental health. In this scenario, the physician (or the healthcare organisation employing the physician) is using the AI system in his or her practice, making him or her a deployer under the AIA. The physician is responsible for ensuring that the AI system is used in accordance with the AIA’s requirements. Simultaneously, the physician is collecting, using and managing personal mental health data to provide medical services. This makes him or her a controller under the GDPR, as he or she determines the purposes and means of processing personal data. As a result, he or she must ensure compliance with the obligations under the GDPR.
Principles under the AIA and the GDPR
The GDPR sets out seven data protection principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability.
The AIA outlines general principles that apply to all AI systems and specific obligations to implement these principles in specific ways. These principles are influenced by the OECD AI Principles and the seven ethical principles for AI developed by the High-Level Expert Group on Artificial Intelligence (HLEG-AI). Recital 27 refers to the following principles: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination, fairness and social and environmental wellbeing. These principles are further concretised in various articles of the AIA: Article 10 prescribes data governance practices for high-risk AI systems, Article 13 addresses transparency, Articles 14 and 26 introduce human oversight and monitoring requirements and Article 27 mandates fundamental rights impact assessments for certain high-risk AI systems.
Human oversight and automated decision-making
The provisions related to human oversight in the AIA and automated decision-making in the GDPR are important, especially in the healthcare sector, where the integration of AI systems has significantly transformed decision-making processes. AI systems in healthcare can be categorised as fully automated and partially automated decision-making tools, each with distinct levels of human oversight. Fully automated decision-making systems function independently, making decisions without human intervention. For example, an AI insulin management system autonomously adjusts insulin delivery by analysing data from sensors and fitness trackers, geolocation data from smartphones and hand-gesture sensing data. The system identifies patterns in individual behaviour and regulates insulin levels accordingly. In contrast, partially automated medical decision systems involve AI systems that make initial decisions but require human input in specific situations. For example, an AI system that monitors cardiac patients continuously analyses personalised heart rate data collected from wearable or implantable devices. When it detects arrhythmias, it automatically transmits the relevant information to the patient’s cardiologist, who then decides on the appropriate course of action. The below only provides a general overview of the different provisions.
Article 22 of the GDPR grants data subjects the right not to be subjected to decisions based solely on automated processing, including profiling, which produce legal effects or similarly significant effects. The only situations where such automated decision-making is allowed are those in which it is necessary for entering into or performing a contract, when there is authorisation by European or member state law or when there is explicit consent from the data subject. In any case, measures must be implemented to protect fundamental rights, such as ensuring the data subject's right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision. Similarly, the AIA aims to protect fundamental rights and freedoms by ensuring appropriate human oversight and intervention, known as the “human-in-the-loop” effect. Indeed, Article 14 of the AIA requires that high-risk AI systems be designed and developed to allow for effective human oversight during their use, including appropriate human-machine interface tools. Further, Article 26(1) requires deployers of AI systems to implement technical and organisational measures to ensure that the system is used in accordance with its instructions for use, including with respect to human oversight.
If a certain level of human oversight is lacking, for example because the human decision-makers are not properly trained, the AI system might not be considered partially automated, thus falling under the automated decision-making framework of Article 22 of the GDPR.
Reporting incidents
Reporting obligations relating to serious incidents or the malfunctioning of AI systems may partially overlap with GDPR reporting obligations when personal data is involved. As discussed in the section headed “Post-market surveillance and vigilance”, a healthcare professional using AI systems for remote patient monitoring must inform the provider and, where legally required, also the distributor and/or the relevant market surveillance authorities if they identify a significant risk or a serious incident. If such an incident results in a data breach (ie, compromises the confidentiality, availability or integrity of the data processed by the AI system), healthcare professionals may also need to notify the relevant data protection authority and, in some cases, the affected data subjects. The breach must be notified to the relevant supervisory authority without undue delay and, where feasible, no later than 72 hours after becoming aware of it, unless it is unlikely to result in a risk to the rights and freedoms of data subjects. Affected data subjects must in turn be informed if the breach is likely to result in a high risk to their rights and freedoms.
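As a simple illustration of the timing rule, the sketch below (in Python) computes the 72-hour notification window under Article 33 of the GDPR from a hypothetical moment at which the controller becomes aware of the breach.

```python
# Minimal sketch of the 72-hour notification window under Article 33 GDPR,
# assuming the moment of awareness of the breach is known.

from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at: datetime) -> datetime:
    """Notification to the supervisory authority is due without undue delay
    and, where feasible, no later than 72 hours after awareness."""
    return aware_at + timedelta(hours=72)

aware = datetime(2025, 3, 10, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2025-03-13 09:30:00+00:00
```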
Penalties
Both the GDPR and the AI Act provide for administrative fines, the extent of which depends on the severity of the infringement. Under the GDPR, less serious infringements can result in fines of up to EUR10 million or 2% of the total annual global turnover, whichever is higher. Examples of such infringements include violating the GDPR's requirements on data protection by design and by default. For more serious breaches, fines can escalate to EUR20 million or 4% of the total annual global turnover, for example for breaches of the GDPR's provisions on processing principles and data subjects' rights. With regard to the AIA, penalties are outlined in Article 99 of the AIA. The most serious breaches, namely non-compliance with the prohibited AI practices, can lead to fines of up to EUR35 million or 7% of worldwide annual turnover, whichever is higher. Non-compliance with most other obligations, including the requirements applicable to high-risk AI systems, can lead to fines of up to EUR15 million or 3% of worldwide annual turnover. For providing incorrect, incomplete or misleading information to notified bodies or national competent authorities, the fine is up to EUR7.5 million or 1% of worldwide annual turnover, whichever is higher.
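Purely to illustrate how these ceilings scale with company size, the following minimal sketch (in Python) computes the applicable maximum for each tier, using a hypothetical worldwide annual turnover; it says nothing about how an authority would set an actual fine within those ceilings.

```python
# Minimal sketch of the statutory fine ceilings described above.
# The turnover figure is hypothetical; actual fines are set within these
# ceilings according to the circumstances of each case.

def max_fine(flat_cap_eur: float, turnover_share: float, turnover_eur: float) -> float:
    """The ceiling is the flat amount or the share of worldwide annual
    turnover, whichever is higher."""
    return max(flat_cap_eur, turnover_share * turnover_eur)

turnover = 2_000_000_000  # hypothetical worldwide annual turnover in EUR

print(max_fine(10_000_000, 0.02, turnover))  # GDPR, less serious infringements
print(max_fine(20_000_000, 0.04, turnover))  # GDPR, more serious breaches
print(max_fine(35_000_000, 0.07, turnover))  # AIA, prohibited AI practices
print(max_fine(15_000_000, 0.03, turnover))  # AIA, most other obligations
print(max_fine(7_500_000, 0.01, turnover))   # AIA, incorrect or misleading information
```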
Conclusion
The integration of AI technologies into healthcare offers a potentially transformative opportunity, but also presents complex legal and regulatory challenges, particularly under the AIA, GDPR and MDR. With the coming into force of the AIA, healthcare professionals will need to navigate a rigorous compliance landscape resulting from the broad definition of “deployer” and extensive obligations on those who use high-risk AI systems.
This requires adopting a proactive and strategic approach: assessing AI systems (with a focus on high-risk categories), developing robust compliance frameworks and understanding compliance requirements under the various applicable regulations, and engaging with regulators to stay informed of further guidance and ensure alignment with compliance timelines and obligations.
Bastion Tower
Pl du Champ de Mars 5
1050 Bruxelles
Belgium
+32 2 515 93 00
lena.tausend@osborneclark.com www.osborneclarke.com