Data Protection & Privacy 2025

Last Updated March 11, 2025

Belgium

Law and Practice

Authors



Osborne Clarke is an international legal practice with over 330 partners and more than 1,300 lawyers in 26 locations. In Brussels, the firm’s data, IP and IT experts work together as a team to support high-profile Belgian and international clients on complex regulatory matters, including the implementation of the Digital Services Act, the Digital Markets Act and the Digital Operational Resilience Act (DORA). Osborne Clarke has a strong international client base in a range of industry sectors, including life sciences, retail, financial services and particularly fintech, as well as specialist technology clients and companies in the digital sector. The team intervenes in data privacy matters at different levels, from communicating with the Belgian Data Protection Authority, drafting data protection policies and carrying out data protection audits to assisting clients with disputes before the Belgian Data Protection Authority.

Article 22 of the Belgian constitution provides for the right to protection of private and family life, and forms the cornerstone of the Belgian laws governing or impacting privacy in general. In addition, Article 8 of the European Convention on Human Rights has direct effect in Belgium and is a cornerstone of the rule of law and of the Belgian law enforcement system.

However, from the point of view of digital technologies and innovation, the most important regulation in Belgium for businesses is the General Data Protection Regulation, also referred to as the GDPR (Regulation (EU) 2016/679), which applies to all member states of the EU. Along with the European legislation, the Belgian Law of 30 July 2018 on the protection of natural persons with regard to the processing of personal data also applies. This Belgian legislation adopts a number of principles enshrined in the GDPR in respect of the activities of specific state and public bodies. In respect of businesses, it does not add to, or deviate much from, the standard rules laid down by the GDPR.

Belgium has established its supervisory authorities by implementing the Law of 3 December 2017, as required by the GDPR. The main supervisory authority is vested with investigative and corrective powers and is entitled to fine a controller or processor if they do not comply with the GDPR or the Belgian Law of 30 July 2018. The fines as listed in Article 83 of the GDPR may not, however, be imposed on public authorities and their appointees or agents, unless they are a legal person governed by public law offering goods or services on a market (Article 221, Section 2 of the Belgian Law of 30 July 2018).

In addition to the GDPR and the Belgian Law of 30 July 2018, other laws have been enacted to protect privacy and fundamental rights in different fields, such as consumer protection, electronic communications, electronic commerce, direct marketing and the use of closed-circuit television (CCTV). Indeed, the Code of Economic Law (CEL) contains certain provisions on direct marketing in its Book VI and is supplemented in this respect by the Royal Decree of 4 April 2003, regulating the sending of advertising by e-mail. In addition, the Law of 21 March 2007 on the use of camera surveillance regulates the use of CCTV in public and private areas. The authority responsible for the enforcement of these regulations is the Belgian Data Protection Authority (DPA).

In December 2024, Belgium also enacted a major reform of private investigations that aims to translate the essential requirements of data protection law into the field of intelligence-gathering activities in the private sector (see 4.3 Employment Privacy Law). The Act on Private Investigations is a matter of public policy, and breaches thereof can lead to the rejection or exclusion of evidence in court, as well as to administrative or criminal sanctions.

At present, no specific legal regime has been enacted with respect to artificial intelligence (AI).

The Belgian DPA consists of:

  • an executive committee;
  • a general secretariat;
  • a first-line service;
  • an authorisation and opinion service;
  • an inspection service; and
  • a litigation chamber.

The DPA has the right to conduct audits.

Furthermore, investigations may be launched on the DPA’s own initiative or where a complaint is lodged by a data subject or by a body, organisation or association that has been properly constituted in accordance with the law of an EU member state, has statutory objectives of public interest and is active in the protection of data subjects’ rights and freedoms.

Alongside the DPA, different regulators and public authorities have a role to play in data sharing, open data and the national implementation of the EU data spaces strategy.

With respect to AI, it is still unclear whether the DPA will be vested with regulatory powers under the EU AI Act and, if so, to what extent. That being said, there is little doubt that the DPA will exercise its powers in relation to automated decision-making, and the impact of AI projects on fundamental rights, as often as it can.

The DPA must comply with the GDPR and the Belgian Law of 30 July 2018. When a complaint is filed or an investigation is launched, there will usually be an initial fact-finding phase during which the authority will ask a business to provide factual information. Afterwards, proceedings on the merits can be started before the Litigation Chamber of the DPA, in the course of which the parties can submit their respective arguments in writing and may be heard.

After the proceedings, the Litigation Chamber is entitled to:

  • dismiss the complaint;
  • order the dismissal of the prosecution;
  • order the stay of proceedings;
  • propose a settlement;
  • issue warnings and reprimands;
  • order compliance with the requests brought by the data subject relating to the exercise of their rights;
  • impose periodic penalty payments; or
  • impose administrative fines.

In the event that the DPA imposes an administrative fine, such fine must be effective, proportionate and dissuasive, pursuant to Article 83 of the GDPR. Furthermore, specific circumstances must be taken into account when imposing an administrative fine and deciding on its amount.

If the respondent does not agree with the decision handed down by the Litigation Chamber, the respondent may lodge an appeal before the Market Court (Brussels Court of Appeal) within 30 days of notification of the decision. The Market Court can overturn the decision, in whole or in part, and remand the case, or decide on all grounds and substitute its decision.

Since February 2024, any interested third party affected by a decision of the DPA, who was not a party to the proceedings before the Litigation Chamber, may also lodge an appeal before the Market Court, insofar as it suffers personal, direct, certain, current and legitimate harm due to the decision of the Litigation Chamber.

The Litigation Chamber also has the power to propose a settlement (transaction). To facilitate faster resolution, the DPA has recently issued a (non-binding) settlement policy to help companies navigate settlement negotiations with the DPA.

While there is no official calculation method for fines in Belgium, the DPA consistently refers to the European Data Protection Board (EDPB) Guidelines 4/2022.

These Guidelines outline a five-step methodology for determining the amount of the fine, namely determining:

  • step one – which and how many actions and infringements are under review;
  • step two – what amount serves as the starting point for calculating the fine for the established infringements (starting amount);
  • step three – which mitigating or aggravating circumstances, if any, necessitate an adjustment of the amount from step two;
  • step four – what maximum amounts apply to the infringements and whether any increases from the previous step exceed these amounts; and
  • step five – whether the final amount of the calculated fine meets the requirements of effectiveness, deterrence and proportionality, and adjusting it if necessary.

The DPA uses this methodology to determine the extent of administrative fines. In Belgium, fines are transferred to the State Treasury.
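
By way of illustration only, the sketch below walks through steps two to five of this methodology in code. It is a minimal, simplified rendering of the logic described above: the starting amount, the adjustment factor and the turnover-based cap are hypothetical placeholders and do not reflect any official figures or the DPA's actual discretionary assessment.

```python
# Minimal illustrative sketch of steps two to five of the EDPB Guidelines 4/2022
# fine methodology, as summarised above. All figures are hypothetical placeholders.

def calculate_fine(starting_amount: float,
                   adjustment_factor: float,
                   annual_turnover: float,
                   cap_percentage: float) -> float:
    """Apply steps two to five for a single established infringement (step one,
    identifying the infringements under review, happens before this point)."""
    # Step two: start from an amount reflecting the seriousness of the infringement.
    fine = starting_amount

    # Step three: adjust for mitigating or aggravating circumstances
    # (eg, co-operation with the authority or previous infringements).
    fine *= adjustment_factor

    # Step four: cap the amount at the applicable legal maximum under
    # Article 83 of the GDPR (eg, 2% or 4% of worldwide annual turnover).
    fine = min(fine, annual_turnover * cap_percentage)

    # Step five: the final amount must be effective, dissuasive and proportionate;
    # in practice this is a discretionary review, represented here only by
    # returning the capped amount.
    return fine


# Hypothetical example: EUR50,000 starting amount, an aggravating factor of 1.2,
# EUR10 million worldwide turnover and the 4% cap.
print(calculate_fine(50_000, 1.2, 10_000_000, 0.04))  # 60000.0
```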

Recent Decisions From the DPA in 2024

Security failures result in EUR200,000 fine (Decision No 166/2024)

The DPA fined a hospital EUR200,000 for breaching the GDPR following a cyber-attack in 2021. The attack compromised the personal data of 300,000 individuals and made the hospital’s servers inaccessible. The hospital was found to have failed to conduct a data protection impact assessment (DPIA), establish an effective information security policy or implement essential security measures, such as employee training and system log monitoring.

EUR45,000 fine for GDPR violations at the workplace (Decision No 114/2024)

On 6 September 2024, the DPA imposed a fine of EUR45,000 on a company following a complaint from an individual who had been employed as a temporary worker for approximately one year. The company collected employees’ fingerprints for time registration without offering alternatives, establishing a legal basis, or informing employees about data storage, retention and third-party transfers. The DPA found the company in violation of GDPR principles, including purpose limitation, data minimisation and transparency.

GDPR violations related to dark patterns in cookie consent (Decision No 113/2024)

Following a complaint, the DPA found that Mediahuis had used dark patterns and illicit cookie practices on its websites. The complainant, represented by the European Center for Digital Rights (NOYB), highlighted the absence of a “reject all” button, deceptive button colours and difficulties in withdrawing consent. The DPA ordered Mediahuis to adjust the cookie banners within 45 days to include a refusal button and avoid deceptive colours, failing which a penalty of EUR25,000 per day per website would be imposed. The DPA also reprimanded Mediahuis, stating that only strictly necessary cookies may be used based on legitimate interest.

Delayed access request response leads to EUR100,000 fine (Decision No 207/2024)

The DPA fined an unnamed telecommunications company for failing to respond promptly to a client’s access request. The company made unsolicited changes to the individual’s subscriptions. When the individual submitted an access request under Article 15 of the GDPR, the company took 14 months to respond, thereby violating Articles 12(2), 12(3), and 15 of the GDPR.

EUR172,431 fine for failing to honour data subject rights (Decision No 87/2024)

The DPA fined a company for failing to erase a data subject’s personal data used in direct marketing, and for having an overloaded, part-time data protection officer (DPO) unable to perform their tasks effectively. The initial fine of EUR245,000 was reduced to EUR172,431 due to the company’s financial situation.

Non-compliant cookie banner (Decision No 156/2024)

The Belgian DPA ordered RTL Belgium to remedy GDPR violations related to a non-compliant cookie banner, subject to a penalty of EUR40,000 per day in the event of non-compliance, following a complaint by NOYB. The complaint highlighted the absence of a “reject all” button and the use of misleading colours in the cookie banner. The DPA required RTL Belgium to:

  • add a button to its cookie banner allowing the refusal of all cookies via a single click on every layer where the “accept all” button appears; and
  • use colours in its cookie banner that are not manifestly misleading, ensuring that the “accept all” and “refuse all” buttons are displayed equivalently.

After RTL Belgium implemented these corrective measures, the DPA acknowledged its compliance, dismissed the case and waived the penalty payments.

To date, Belgium has not adopted any national legislation on AI or machine learning. However, the AI Act has entered into force and will have direct effect in Belgium as it becomes progressively applicable.

That said, it is worth noting that:

  • the DPA has issued advice on draft laws covering the use of AI – this advice generally considers the rules applicable to automated decision-making (Article 22 of the GDPR) or the proportionality of using AI systems; and
  • the DPA has issued guidelines on AI and data protection – on 19 September 2024, it released guidelines on AI, detailing the relationship between the GDPR and the AI Act in AI system development.

The AI Act and the GDPR should be viewed as complementary frameworks, each with its own rules and obligations. Since many AI systems process personal data, staying compliant with both sets of rules is a must. The following parallels can be identified between the AI Act and the GDPR.

  • Scope: The GDPR’s material scope covers the processing of personal data by automated means and non-automated means if the data forms part of a filing system. Its territorial scope is based on establishment and target criteria applying to entities established in the EU, and to those outside the EU processing data related to offering goods or services to, or monitoring individuals in, the EU. In contrast, the EU AI Act’s material scope focuses on AI systems and extends to providers, deployers, importers, distributors and authorised representatives. The EU AI Act includes a detailed risk categorisation framework, with most obligations applying to high-risk AI systems. The EU AI Act has a broad geographical scope of application and can catch entities based outside the EU in different respects.
  • Roles: When using AI systems, it is important to consider roles and obligations under both the GDPR and the AI Act, as different requirements may apply based on one’s role. The GDPR distinguishes between controllers and processors, with controllers bearing the strictest compliance responsibilities. The AI Act categorises actors into providers, deployers, distributors, importers, etc, with providers and deployers being the most significant in practice. However, the roles may overlap, completely or in part, with a deployer qualifying as a data controller or a provider qualifying as a data processor, etc.
  • Principles: The GDPR sets out seven data protection principles: lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, and integrity and confidentiality (Article 5 of the GDPR).

The AI Act outlines general principles for all AI systems and specific obligations to implement these principles, influenced by the OECD AI Principles and the High-Level Expert Group (HLEG)-AI’s seven ethical principles. Recital 27 of the AI Act lists principles such as human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination, fairness and social and environmental wellbeing. These principles are detailed in various articles of the AI Act. For example, Article 10 prescribes data governance for high-risk AI systems, Article 13 addresses transparency, Articles 14 and 26 introduce human oversight and monitoring requirements, and Article 27 mandates fundamental rights impact assessments for certain high-risk AI systems.

  • Human oversight and automated decision-making: Article 22 of the GDPR grants data subjects the right not to be subjected to decisions based solely on automated processing unless necessary for a contract, authorised by law or based on explicit consent. It also mandates measures to protect fundamental rights, including human intervention and the ability to contest decisions. Similarly, the AI Act requires high-risk AI systems to allow effective human oversight during use and mandates technical and organisational measures to ensure proper use and oversight. Without adequate human oversight, AI systems may fall under the automated decision-making framework of Article 22 of the GDPR.
  • Reporting incidents: Reporting obligations for serious incidents or malfunctions of AI systems can overlap with GDPR reporting requirements when personal data is involved. For example, deployers using AI must inform the provider, and possibly the distributor or market surveillance authorities, if they identify a significant risk or serious incident. If such an incident results in a data breach compromising the data processed by the AI system, they must also notify the relevant DPA within 72 hours and, if necessary, the affected data subjects. This ensures compliance with both the AI Act and GDPR requirements.
  • Penalties: Both the GDPR and the AI Act impose administrative fines based on the severity of the infringement. Under the GDPR, minor infringements can result in fines up to EUR10 million or 2% of global annual turnover, while serious breaches can lead to fines up to EUR20 million or 4% of global annual turnover. The AI Act outlines penalties in Article 99, with serious breaches, such as non-compliance with prohibited AI practices, resulting in fines up to EUR35 million or 7% of global annual turnover. Minor breaches, like providing incorrect information, can incur fines up to EUR7.5 million or 1% of global annual turnover.

The AI Act and the GDPR have different scopes and requirements, which can create challenges for compliance and consistency. Additional guidance from authorities such as the EDPB, the European Commission and/or the AI Office is of great value. It is worth mentioning the following guidelines.

  • On 19 September 2024, the DPA released guidelines on AI and data protection, detailing the relationship between the GDPR and the AI Act in AI system development.
  • On 18 December 2024, the EDPB adopted Opinion 28/2024 on the use of personal data for AI model development and deployment. The opinion addresses (i) the conditions under which AI models can be considered anonymous, (ii) the use of legitimate interest as a legal basis for AI development and use, and (iii) the implications of developing AI models with unlawfully processed personal data. It also considers the use of both first-party and third-party data.

Currently, fines imposed by the DPA are much more common than private litigation concerning data protection infringements. This is most likely due to the high costs of litigation combined with the relatively low number of claims for damages.

In 2024, the CJEU issued several rulings regarding damages in relation to data protection, as outlined in Article 82 of the GDPR. Key elements to consider include the following:

  • not every breach of the GDPR automatically gives rise to a claim for compensation under Article 82 of the GDPR;
  • “damage” must be interpreted broadly;
  • damage caused by a breach of personal data protection is no less serious than bodily injury;
  • Article 82 of the GDPR does not have a threshold of seriousness or a minimum threshold that the damage must exceed;
  • the fear that personal data will be misused as a result of a cyber-attack can be a compensable non-material damage;
  • excluding liability according to Article 82(3) of the GDPR is only possible within certain limits;
  • the GDPR contains no provisions on how to assess damages, and national courts must therefore apply each member state’s national provisions subject to principles of equivalence and effectiveness under EU law;
  • when determining the amount of compensation, Article 82 of the GDPR does not require taking into account the extent of fault or the number of GDPR violations by the controller against the data subject;
  • when GDPR infringements occur alongside breaches of national law that pertain to personal data protection but do not aim to clarify GDPR requirements, these simultaneous breaches do not need to be considered when determining the amount of damages under Article 82 of the GDPR; and
  • Article 82 of the GDPR serves a compensatory purpose rather than a deterrent or punitive one.

Cases C-182/22 and C-189/22

The CJEU ruled that, under Article 82(1) of the GDPR, compensation for non-material damage due to personal data theft does not require consideration of the severity of the GDPR infringement. The CJEU clarified that compensation should fully cover the damage, and may be minimal if the damage is not serious. Furthermore, “identity theft” for the purposes of compensation requires actual misuse of the stolen data, but compensation is not limited to cases involving subsequent identity theft or fraud.

Case C-590/22

The CJEU has ruled that a data subject may seek compensation for non-material damages caused by the fear of disclosure of personal data, even if the disclosure itself is not proven, as long as the negative consequences of that fear are proven. Merely proving an infringement, however, is insufficient for compensation; actual damage must be proven.

Case C-741/21

The CJEU clarified the right to compensation for non-material damage under the GDPR:

  • an infringement alone does not constitute “non-material damage” – there must be evidence of “suffered damage” and a causal link;
  • a controller cannot claim exemption from liability merely because a person acting under its authority failed to follow its instructions;
  • the assessment of compensation for non-material damage does not need to follow criteria similar to those for administrative fines; and
  • multiple infringements related to the same processing operation should be considered in the compensation assessment.

Case C-687/21

The CJEU held that non-material damages under Article 82 require the claimant to prove a well-founded fear and a real risk of misuse of personal data.

Case C-340/21

The CJEU ruled that the fear of potential misuse of personal data by third parties constitutes non-material damage under Article 82(1) of the GDPR. Controllers must compensate for damages from unauthorised data disclosure or access unless they prove no fault on their part. The CJEU clarified that such incidents alone do not imply inadequate security measures by the controller, who must prove the measures’ appropriateness.

On 31 May 2024, the Law of 21 April 2024, which amends Books I, XV and XVII of the Belgian CEL and transposes Directive (EU) 2020/1828 on representative actions to protect the collective interests of consumers (RAD), was published in the Belgian Official Journal.

The new Belgian law does not introduce a completely new legal system to allow so-called class actions, as collective redress actions have been available in Belgium for consumers since 2014 and for SMEs since 2018.

Nevertheless, the following changes are notable.

  • A generalised opt-in regime: Consumers only need to decide whether to opt in after a decision on the merits is issued, but there is the possibility to enter into collective settlements on an opt-out basis. Previously, the Belgian CEL allowed the judge to choose which opt-in/opt-out system would apply to a particular collective redress action. However, this is not common practice in other jurisdictions, where the legislator typically determines the applicable system. The new law introduces changes in this respect: the mechanism has been revised for the negotiation phase. To reach an agreement, it is necessary to leave as much room for negotiation as possible. Therefore, the parties themselves can decide whether the group will be formed according to an opt-out or an opt-in approach. If no agreement is reached at the end of the negotiation phase, the substantive procedure (“on the merits”) will start. The composition of the group will then be based on an opt-in system. This opt-in phase has been moved to a different stage in the procedure, namely after the decision on liability, which results in an obligation for the defendant to pay compensation.
  • Limited rules on litigation funding: To ensure the independence of the qualified entity, third-party funding will be subject to necessary supervision. One of the conditions for recognition is that the group representative must be independent and not financially influenced by its funders. If this is not the case, the minister may refuse recognition, or the court may declare the collective redress action inadmissible. Furthermore, there is a transparency requirement to state in the request that the collective redress action is funded by a third party, and an obligation on the representative to identify the funding third parties as well as the amounts funded.
  • Group representatives: Besides recognised entities, it is now possible to set up an ad hoc entity specifically for introducing collective redress proceedings.
  • Definition of qualified entities: Qualified entities that are allowed to bring representative actions now benefit from a clear and precise definition. This definition includes entities recognised in another member state. Additionally, the text expressly addresses the question of ad hoc entities and allows them to start actions. A complete list of qualified entities will be published on the website of the Federal Public Service Economy. Pending representative actions must be published by the qualified entities.
  • Cross-border actions: The law permits cross-border collective redress actions, enabling foreign qualified entities to initiate collective redress proceedings in Belgium and allowing Belgian qualified entities to do the same abroad.
  • Material scope extension: The scenarios in which a class action can be initiated have been broadened, and the Belgian CEL now expressly includes the instruments listed in Annex 1 of the RAD, such as the MiFID II Directive and the Prospectus Directive, including the abusive selling of financial products.

So far, few class actions have been initiated: about a dozen such actions have been filed. Class actions remain relatively rare, and there are currently no signs that they will become more frequent in the future. It remains to be seen whether the Representative Actions Directive will have any impact on the frequency of class actions now that it has been implemented in Belgian law. At this stage, the authors anticipate that the implementation of the Directive is unlikely to bring about any major increase in the number of class actions filed, considering that Belgian law was already substantially in line with the Directive.

Although the Data Act has entered into force, many of its provisions will only become applicable 20 months after 11 January 2024 – ie, starting from 12 September 2025, and there are certain exceptions with longer transition periods:

  • the obligation to design or manufacture a connected product/related service in a way that the product and related service data is accessible by default (“access by design”) will become applicable 32 months after entry into force – ie, from 12 September 2026; and
  • the provisions on contractual terms and conditions in private sector data contracts will not be applicable until 12 September 2027 in relation to contracts concluded on or before 12 September 2024, provided that the contract in question is of indefinite duration or is due to expire at least ten years after 11 January 2024.

The Data Act aims to remove barriers to accessing data for both consumers and businesses in a context where the volume of data generated by humans and machines is increasing exponentially. This translates into various specific objectives:

  • empowering users of connected products with respect to the access and use of data;
  • promoting data sharing among businesses for commercial purposes or to foster more innovation;
  • introducing new mechanisms for data reuse by public sector organisations in exceptional circumstances;
  • ensuring greater fluidity in the cloud computing and edge computing markets and increasing trust in these services; and
  • establishing a framework to promote data interoperability.

As the Data Act aims to regulate the use of data, and since such data has become omnipresent in contemporary society, the impact of the Data Act should not be underestimated.

Key obligations relate to, among other things:

  • access to data from connected devices;
  • the scope of data to be shared under the Data Act;
  • obligations for data holders obliged by law to make data available;
  • unfair contractual terms in data sharing contracts between businesses;
  • European data spaces;
  • switching between cloud service providers;
  • the prevention of unlawful access and transfer of cloud-based data;
  • the interoperability of data, data sharing mechanisms and cloud services; and
  • smart contracts for data sharing.

The European Commission has published a comprehensive overview of the Data Act on its website, including its objectives and how it works in practice. In addition, it has published frequently asked questions about the Data Act.

While the scope of the GDPR is limited to the processing of so-called personal data, the scope of the Data Act is much broader as it applies to any data. Given the overlap in the definitions of “data” and “personal data”, there is inevitably an overlap between the obligations under the GDPR and those under the Data Act. However, since the GDPR is considered a so-called lex specialis it will, with regard to personal data, prevail over the obligations under the Data Act. Consequently, the provisions of the Data Act are without prejudice to the GDPR regime, privacy rights and the right to confidentiality of communications, all of which must be complied with when adhering to the requirements under the Data Act.

As the GDPR and the Data Act both address the manner in which certain data is used, there are similarities and differences in the handling of such data between the two legal instruments. Some of these differences and similarities are listed in the following.

  • With regard to connected devices, the Data Act enhances the right to data portability, and users may be able to access and port both personal and non-personal data generated by those objects.
  • Any processing of personal data should also comply with the GDPR, including the requirement of a valid legal basis for processing under Articles 6 and 9 of the GDPR. The Data Act itself does not constitute a legal basis for the collection or generation of personal data by the data holder.
  • Where the user is not the data subject, the Data Act does not create a legal basis for providing access to personal data, or for making personal data available to a third party, and should not be understood as conferring any new right on the data holder to use personal data generated by the use of a connected product or related service. In those cases, the data holder can comply with requests by anonymising personal data or transmitting only personal data relating to the user (in the case of data that includes personal data about multiple people).
  • Technical measures to comply with the principles of data minimisation and data protection, by design and by default, may include pseudonymisation and encryption as well as the use of technology that permits algorithms to be brought to the data and allows valuable insights to be derived while only processing the necessary data.
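
On the last point above, and purely as a hedged illustration, the snippet below sketches keyed pseudonymisation as one possible technical measure. The key handling, the choice of identifier and the use of an HMAC are assumptions made for the example; they are not prescribed by the Data Act or the GDPR.

```python
# Minimal sketch of keyed pseudonymisation as one possible technical measure;
# the key management and field choice here are purely illustrative.

import hashlib
import hmac

SECRET_KEY = b"store-this-key-separately-from-the-dataset"  # hypothetical key handling

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so records can still be
    linked internally, while re-identification requires the separately held key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

print(pseudonymise("user@example.com"))
```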

There are no specific Belgian laws implementing the Data Act or regulating the use of internet of things (IoT) services and data processing services. Obviously, cybersecurity and cyber-resilience requirements may apply under other applicable legislation, but this is beyond the scope of the present chapter.

Member states are required to designate at least one competent authority to deal with the enforcement of the Data Act. It is not known whether Belgium will designate the DPA as the competent authority. In any event, the DPA will remain responsible for monitoring the application of the Data Act insofar as the protection of personal data is concerned.

Belgian legislation implementing the E-Privacy Directive regulates both cookies and any other type of online tracking technology. It imposes (i) transparency requirements (such as posting a cookie notice online) and (ii) an opt-in consent requirement for all non-essential cookies (ie, all cookies that are not strictly necessary to transmit a communication over an electronic communications network or to provide an information society service requested by the user).

The DPA has published guidelines, a non-exhaustive checklist and extensive case law on the use of cookies and the applicable transparency and consent requirements – eg, in relation to the Transparency and Consent Framework of Interactive Advertising Bureau Europe (IAB Europe).

To summarise, the DPA states that:

  • only strictly necessary cookies are exempt from consent requirements (ie, essential technical cookies such as cookies for load balancing and strictly necessary functional cookies such as cookies for temporary storage of language choice, cookie preferences or shopping basket content) – all other categories of cookies may only be placed and read if users have given their prior, free, specific, informed, unambiguous and active consent;
  • cookie walls are not allowed;
  • designs that give more prominence to the “accept” option are prohibited – eg, using a particular colour that may influence the user’s choice;
  • the use of cookies for the controller’s advertising/profiling purposes, and for third-party advertising/profiling, must be considered as separate purposes, meaning that separate consent must be obtained;
  • the user must be given the option, if necessary in a second layer of the cookie banner, to accept (or not) the use of cookies by a “partner” (joint controller);
  • a single cookie should not be used to serve different purposes;
  • consent should be clear and explicit – consent may not be inferred from continued browsing, the browser settings of a visitor or the closing of a banner, pre-ticked boxes may not be used and consent may not be linked to the acceptance of the terms and conditions or privacy policy;
  • the controller should offer an easy way to withdraw consent in one click (eg, via a link or a button) – according to the DPA, however, such withdrawal should go beyond not placing the cookie in the future, and the controller should also ensure “the intended effect” of the withdrawal of consent;
  • cookie consent needs to be “refreshed” every six months – ie, controllers need to present the cookie banner again to the user within six months after they have given their consent to cookies; and
  • controllers must (i) keep information demonstrating how the consent mechanism (eg, the cookie banner) has been adapted over time and (ii) retain previous versions of the cookie policy, which must be dated and include a version number.
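
As a purely illustrative sketch of the guidance summarised above, the snippet below shows how a controller might record per-purpose consent, refuse to rely on inferred consent, support one-click withdrawal and re-prompt after six months. The data structure, field names and storage approach are assumptions for the example, not requirements stated by the DPA.

```python
# Minimal sketch of per-purpose cookie-consent records reflecting the DPA guidance
# summarised above (separate purposes, no inferred consent, easy withdrawal,
# re-prompting after six months). The structure and field names are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

CONSENT_VALIDITY = timedelta(days=182)  # roughly six months, per the guidance above

@dataclass
class ConsentRecord:
    # First-party and third-party advertising/profiling are separate purposes,
    # so each needs its own explicit flag; nothing is pre-ticked by default.
    purposes: dict = field(default_factory=dict)  # eg {"first_party_ads": True}
    given_at: datetime = field(default_factory=datetime.now)
    withdrawn: bool = False

    def allows(self, purpose: str) -> bool:
        """Only place non-essential cookies if consent is active, specific and fresh."""
        if self.withdrawn:
            return False
        if datetime.now() - self.given_at > CONSENT_VALIDITY:
            return False  # the banner must be presented again after six months
        return self.purposes.get(purpose, False)  # absent purpose means no consent

    def withdraw(self) -> None:
        """One-click withdrawal; the controller must also give it its intended effect."""
        self.withdrawn = True


record = ConsentRecord(purposes={"first_party_ads": True, "third_party_ads": False})
print(record.allows("first_party_ads"))   # True
print(record.allows("third_party_ads"))   # False, separate consent is required
record.withdraw()
print(record.allows("first_party_ads"))   # False
```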

In Belgium, there is no specific legislative code that compiles advertising standards. Commercial advertising is governed by a regulatory framework comprising binding legal provisions and self-regulating, non-binding professional rules.

The Belgian CEL outlines the core principles governing advertising practices in Belgium:

  • Article I.8.13° of the CEL defines advertising as any communication with the direct or indirect aim of promoting the sale of products or services, irrespective of the place or means of communication used;
  • Article VI.17 of the CEL lists the conditions under which comparative advertising is legal;
  • Articles VI.93 to 103 of the CEL prohibit unfair commercial practices against consumers, including misleading and aggressive practices;
  • Articles VI.104 to 109 of the CEL prohibit unfair market practices against other enterprises, including misleading and aggressive practices; and
  • Book XII of the CEL contains rules concerning electronic marketing activities.

In Belgium, advertising laws and regulations are primarily enforced by the Belgian courts. Specific regulatory authorities are responsible for certain aspects of advertising law, including:

  • the Belgian DPA for data protection breaches;
  • the Financial Services and Markets Authority (FSMA) for insurance product advertising;
  • the Federal Agency for Medicines and Health Products; and
  • the Federal Public Service (FPS) Economy via its mediation service.

Additionally, the self-regulatory body known as the Jury for Ethical Advertising oversees ethical standards in advertising.

The DPA has adopted specific guidelines regarding direct marketing. Furthermore, it is worth noting that the Digital Services Act introduces two new restrictions concerning targeted advertising on online platforms. First, it bans advertising targeting minors based on profiling. Second, it bans targeted advertising based on profiling using special categories of personal data, such as sexual orientation or religious beliefs.

The employment relationship between employee and employer constitutes a specific domain for the protection of personal data. There are two conflicting principles:

  • on the one hand, the authority of the employer over their employee and the resulting subordination – an employer can therefore give instructions to their employees and monitor their performance; and
  • on the other hand, the right to privacy of the employees, which prohibits the employer from exercising authority over the personal aspects and activities of their employees.

Although the GDPR aims to harmonise the rules on the protection of personal data within the EU, it provides an exception in the field of employment relationships due to the characteristic relationship between employer and employee. It is therefore possible to establish specific rules for the processing of employees’ personal data, both at the sector and company level, for example through collective labour agreements.

It is important to note the following.

  • Consent: It is difficult to obtain valid consent in an employment context due to the imbalance of power between employer and employee, and consent can be withdrawn at any time (preventing further processing). Instead, employers might consider other legal bases such as the performance of a contract, compliance with a legal obligation or legitimate interest.
  • The processing of sensitive data is subject to a stricter regime. There are also special rules around the processing of personal data relating to criminal convictions and offences.
  • Concerning recruitment:
    1. Background checks are legal to some extent but subject to strict conditions: the check must relate to information that is relevant to the position for which the person is applying; the information cannot be obtained through alternative means, such as asking the candidate directly during the interview, for example where there are reasonable doubts about the accuracy of the information provided by the candidate; the candidate must have been informed in advance that background checks will be conducted; and the candidate must receive a privacy notice explaining how and why their personal data will be processed.
    2. Copy of a criminal record – An employer should not ask for a candidate’s criminal record, even with the candidate’s consent, unless this is supervised by public authorities or required by specific legislation for certain professional activities (eg, taxi drivers, private detectives, police officers, security personnel or certain financial and insurance roles). A copy of the criminal record should only be requested where it is legally required for the professional activity. Where these exceptions do not apply and the candidate consents, the employer may ask the candidate to show the extract without taking notes or storing a copy. Merely showing an extract does not constitute personal data processing, and candidates cannot be compelled to show their criminal record.
  • There are specific considerations to take into account in relation to a dismissed employee’s professional mailbox:
    1. access to a terminated employee’s professional mailbox must comply with GDPR rules and be based on a legitimate aim, such as following up on urgent matters;
    2. access should be limited to what is strictly necessary, like setting up an out-of-office message or identifying urgent professional emails;
    3. employers must transparently outline the conditions for accessing the mailbox after employment ends, possibly through an IT policy;
    4. access should also be time-limited; and
    5. the terminated employee should have enough time to delete private data from their professional mailbox, even if personal use is prohibited by the employer.
  • Employee monitoring is always a balancing exercise between the employee’s right to privacy and the employer’s right to take measures that ensure the smooth running of the company. The employer may have an interest in monitoring the use of the company’s email or internet access by its employees.

Several collective bargaining agreements (CBAs) must also be observed, as they were concluded to provide specific privacy protection for employees. This is the case for camera surveillance (CBA No 68 of 16 June 1998) and for the electronic monitoring of internet use and emails (CBA No 81 of 26 April 2002).

In December 2024, the Act on Private Investigations entered into force. Its practical implications for employers are that they must create an internal policy describing the circumstances and authorised methods of investigations, consult the collective bodies, update their privacy policies and inform employees. In addition, businesses must only use external investigation suppliers that are duly licensed and abide by the new legal provisions, which are intended to guarantee respect for data protection rules in the context of private investigations. Some investigation methods and the collection of some categories of information are prohibited, sometimes with a possible exemption if the individual has given their consent.

In each phase of an asset deal, personal data is collected and processed, requiring compliance with the GDPR. The main points regarding the processing of personal data are summarised as follows.

Confidentiality and/or Data Processing Agreement

In the initial phase, a confidentiality agreement (non-disclosure agreement) is often signed to prevent the spread of information and keep exploratory talks secret. This agreement should include provisions on data protection.

Agreement With the Data Room Provider

An agreement must be concluded between the seller(s) and/or the target company and the data room manager (the processor) that complies with the GDPR, including the mandatory provisions of Article 28. If the manager is located outside the European Economic Area (EEA), additional restrictions on cross-border data transfers apply.

Processing Personal Data in the Due Diligence Report

Information in the data room will be analysed by the potential buyer and their advisors. Under the minimisation principle, only personal data that is strictly necessary for the specified purposes can be shared in the data room. Businesses must therefore find ways to assess whether documents should be redacted in part, and to make sure that spreadsheets and tables are obfuscated or the circulation thereof is limited to those who have an actual need to know. Lawyers must keep this information confidential, but it may be that not all other professionals are bound by the same duty. All individuals with access to the data room must commit to keeping the information confidential and not spreading it beyond the intended purpose. Participants often sign a digital confidentiality agreement before accessing the data room, which should include data processing provisions.

Clauses on the Risks of Data Processing

If due diligence reveals potential data protection risks, it is important for the buyer to obtain guarantees from the seller regarding the legality of the initial data collection and processing, and of the lawful transfer of personal data for the asset deal, including confirmation that data subjects have been informed and given the right to object if necessary.

The seller should make clear agreements about their liability and co-operation post-transfer, and ensure the buyer will process the data lawfully and in accordance with applicable legislation.

Change of Data Controller in Asset Deal

Once ownership is transferred, the buyer will be considered the new data controller for personal data related to the business operations. The business information is transferred, virtually or even physically, at the time of the ownership transfer or (as is often the case) at a later date. The originals of employment contracts, individual accounts, etc, must be transferred to the new employer, and contracts with customers and/or suppliers are transferred, as well as all relevant databases containing personal data. Once this transfer has effectively taken place, it is the responsibility of the new data controller to ensure GDPR compliance, provide all data subjects with proper information about the processing of their data and respect all their rights.

Data Processing in a Transitional Services Agreement

Post-transfer, the buyer and seller may continue to collaborate. For example, the seller might handle payroll until the buyer finds a suitable provider. In such cases, the seller acts as a processor for the buyer. The reverse can also happen, where the buyer handles complaints for the seller. A transitional services agreement should be concluded, addressing data processing and ensuring the necessary safeguards on the part of the data controller.

Transfers of personal data from Belgium to a country outside the EEA are regulated by Chapter V of the GDPR. No additional restrictions apply under Belgian law.

General

As the GDPR is a European instrument, all EU countries are subject to the same requirements. However, when personal data is transferred outside the EU, the following rules must be taken into account to ensure that the level of protection of data subjects under the GDPR is not undermined.

The transfer of data outside the EU is subject to:

  • an adequacy decision of the European Commission (Article 45 of the GDPR);
  • appropriate safeguards (Article 46 of the GDPR), such as binding corporate rules; or
  • in the event of a specific situation, one of the derogations set forth in Article 49 of the GDPR.

The European Commission has (so far) recognised Andorra, Argentina, Canada (commercial organisations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Japan, Jersey, New Zealand, the Republic of Korea, Switzerland, the United Kingdom (under both the GDPR and the Law Enforcement Directive (LED)), the United States (commercial organisations participating in the EU–US Data Privacy Framework) and Uruguay as providing adequate protection.

In the absence of an adequacy decision, data transfers outside the EU are still possible if appropriate safeguards have been enforced. These could be binding corporate rules, standard data protection clauses as adopted or approved by the European Commission, etc.

Based on the case law of the CJEU (Schrems I and II), data exporters are required to conduct a data transfer impact assessment. They must identify and implement supplementary measures to ensure that personal data transferred to a third country that has not received an adequacy decision is given an essentially equivalent level of protection.

Derogations

In the absence of an adequacy decision for a specific third country or appropriate safeguards, it is still possible to transfer personal data to a third country or an international organisation, subject to one of the following conditions:

  • the data subject has explicitly consented to the proposed transfer after having been informed of the possible risks;
  • the transfer is necessary for the performance of a contract between the data subject and the controller or the implementation of pre-contractual measures taken at the data subject’s request;
  • the transfer is necessary for the conclusion or performance of a contract concluded in the interest of the data subject;
  • the transfer is necessary for important reasons of public interest;
  • the transfer is necessary for the establishment, exercise or defence of legal claims;
  • the transfer is necessary in order to protect the vital interests of the data subject or other persons, where the data subject is physically or legally incapable of giving consent; or
  • the transfer is made from a register that, according to EU or member state law, is intended to provide information to the public and which is open to consultation.
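
As a simplified, non-authoritative illustration of the assessment order described above, the sketch below checks for an adequacy decision first, then appropriate safeguards combined with a transfer impact assessment, and finally the Article 49 derogations. The country list is deliberately partial and all helper inputs are hypothetical; this is a sketch of the sequence, not an exhaustive rendering of Chapter V or legal advice.

```python
# Minimal sketch of the Chapter V assessment order summarised above.
# The destination list is a partial, illustrative subset and the boolean inputs
# are placeholders for assessments that must be made case by case.

from typing import Optional

ADEQUATE_DESTINATIONS = {"Japan", "Switzerland", "United Kingdom"}  # illustrative subset

def transfer_basis(destination: str,
                   has_appropriate_safeguards: bool,
                   tia_confirms_equivalent_protection: bool,
                   applicable_derogation: Optional[str]) -> str:
    # Article 45: an adequacy decision allows the transfer without further authorisation.
    if destination in ADEQUATE_DESTINATIONS:
        return "adequacy decision (Article 45)"
    # Article 46: appropriate safeguards (eg, SCCs or binding corporate rules),
    # coupled with a transfer impact assessment and supplementary measures (Schrems II).
    if has_appropriate_safeguards and tia_confirms_equivalent_protection:
        return "appropriate safeguards (Article 46)"
    # Article 49: derogations for specific situations, to be applied restrictively.
    if applicable_derogation:
        return "derogation (Article 49): " + applicable_derogation
    return "no lawful transfer mechanism identified"

print(transfer_basis("Brazil", True, True, None))   # appropriate safeguards (Article 46)
print(transfer_basis("Japan", False, False, None))  # adequacy decision (Article 45)
```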

As mentioned in 5.1 Restrictions on International Data Transfers, for a company to transfer data, an adequacy decision or other adequate safeguards are required. As many of these mechanisms have been approved before, no additional government notification or approval is required.

However, in the event that the company invokes binding corporate rules, the latter must have been approved by a supervisory authority before personal data can be transferred to a third country (Article 47(1) of the GDPR).

There are currently no specific data localisation requirements in Belgium. However, the EU will introduce data localisation requirements as part of the European Health Data Space (EHDS) Regulation.

European entities can sometimes face repercussions in relation to the extraterritorial enforcement of unilateral sanctions by third countries. The EU considers that such enforcement is contrary to international law and has implemented Regulation 2271/96 (the blocking statute) as a way of protecting itself. The blocking statute has been transposed into Belgian legislation through the Law of 2 May 2019.

The blocking statute prohibits European entities from complying with the specified sanctions and from co-operating with the relevant third country’s authorities in enforcing them.

Since 2018, the blocking statute has applied to US sanctions against Iran and Cuba.

New Standard Contractual Clauses (SCCs)

On 12 September 2024, the European Commission announced its intention to launch a public consultation on the introduction of additional SCCs for international transfers of personal data to non-EU controllers and processors that are directly subject to the GDPR, a situation not yet covered by the existing SCCs. The adoption of these new SCCs will require organisations to consider whether the data importer is directly subject to the GDPR and whether to apply the new SCCs or the 2021 SCCs. This public consultation was scheduled to take place in the fourth quarter of 2024; however, to date, this additional set of SCCs has not yet been published.

EDPB Guidelines on Article 48 GDPR

The EDPB has launched a public consultation on its guidelines on Article 48 of the GDPR, which addresses requests from third-country authorities for the transfer or disclosure of personal data. The guidelines clarify how EU controllers and processors should handle such requests, emphasising compliance with both Article 6 (legal grounds for processing) and Chapter V (international data transfers) of the GDPR. The EDPB offers detailed recommendations to ensure that data protection principles are adhered to when responding to third-country requests.

Osborne Clarke

Bastion Tower
Pl du Champ de Mars 5
1050 Bruxelles
Belgium

+32 2 515 93 00

lena.tausend@osborneclarke.com
www.osborneclarke.com

Trends and Developments


Authors



Osborne Clarke is an international legal practice with over 330 partners and more than 1,300 lawyers in 26 locations. In Brussels, the firm’s data, IP and IT experts work together as a team to support high-profile Belgian and international clients on complex regulatory matters, including the implementation of the Digital Services Act, the Digital Markets Act and the Digital Operational Resilience Act (DORA). Osborne Clarke has a strong international client base in a range of industry sectors, including life sciences, retail, financial services and particularly fintech, as well as specialist technology clients and companies in the digital sector. The team intervenes in data privacy matters at different levels, from communicating with the Belgian Data Protection Authority, drafting data protection policies and carrying out data protection audits to assisting clients with disputes before the Belgian Data Protection Authority.

The involvement of artificial intelligence (AI) in the healthcare sector is particularly noteworthy. AI is, and will continue to be, used for diagnostics, drug development, treatment personalisation, virtual health assistants, health administration and remote patient monitoring, amongst other applications. It opens up new opportunities for organisations, healthcare professionals and clinics, enabling them to improve their offerings, develop new solutions and address various societal challenges. Although AI can generate benefits, it also raises a number of legitimate concerns related to human safety and security, freedom, privacy, integrity, dignity, self-determination and non-discrimination.

This article delves into the implications of the EU’s Artificial Intelligence Act (AIA) for healthcare professionals using AI systems in the context of remote patient monitoring. A wide range of stakeholders are covered under the AIA. These include not only the providers and manufacturers of AI systems but also the users, such as healthcare professionals. Any healthcare professional using an AI system under their authority will be considered a deployer, unless the AI system is used in the course of a personal non-professional activity. As a consequence, healthcare professionals are – as deployers – required to comply with a long list of obligations, which may notably range from compliance with instructions for use to assigning human oversight and ensuring that input data is relevant.

How To Qualify an AI System Used in the Context of Remote Patient Monitoring Under the AIA

High-risk AI comprises two categories:

  • AI that is a component of, or is itself, a product subject to EU product safety regulations that must undergo a third-party conformity assessment, as required by the regulations listed in Annex I (Article 6, Section 1 of the AIA); and
  • AI that is specifically classified as high-risk, as listed in Annex III (Article 6, Section 2 of the AIA).

AI systems used for the purpose of remote patient monitoring may fall under both categories of high-risk AI systems.

  • AI biometric categorisation systems are listed in Annex III of the AIA. Such categorisation systems may cover certain digitised clinical activities such as:
    1. automated study participant triage or selection tools; and
    2. remote patient monitoring algorithms that collect or analyse biometric data, such as heart rate, blood pressure or temperature.
  • AI systems used in remote patient monitoring may also qualify as high-risk AI systems under Article 6, Section 1 of the AIA if (i) they qualify as medical devices under the Medical Device Regulation (MDR) (listed in Annex I as one of the EU product safety regulations) and (ii) to the extent that they are subject to a third-party conformity assessment. The latter is the case for class IIa, IIb and III medical devices. It is therefore important to understand what is covered by the definition of a medical device and when a medical device falls into the class IIa, IIb and III categories.

Under Article 2, Section 1 of the MDR, software can qualify as a medical device where it is intended to be used for specific purposes such as diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease.

Further, medical devices requiring a third-party conformity assessment under the MDR, and thus potentially classified as high-risk AI, include all class IIa, IIb and III devices. In the context of remote patient monitoring, Annex VIII of the MDR describes which medical devices fall within classes IIa, IIb and III, including several devices used for diagnosis and monitoring as well as software intended to monitor physiological processes.

Requirements for Healthcare Professionals as a Deployer Under the AIA

Qualifying remote patient monitoring tools as high-risk AI systems and healthcare professionals as deployers triggers a cascade of compliance requirements. The main requirements, which are listed in Article 26 of the AIA, are explained below.

AI literacy

Since February 2025, healthcare professionals that use AI systems must have sufficient knowledge about AI.

AI literacy is defined in Recital 56 as the skills, knowledge and understanding that allow providers, deployers and affected persons to make informed decisions regarding AI systems. This also includes awareness about the opportunities, risks and potential harm associated with AI. Article 4 of the AIA provides that deployers, in the same way as providers, are obliged to ensure, to the best of their ability, a sufficient level of AI literacy among their employees and anyone else who operates or uses these systems on their behalf.

In the context of healthcare professionals, this means, for example, that physicians will need to properly inform and educate caregivers about AI systems’ risks and limitations. They should explain how to use the AI system and how and when to monitor the data it produces. This also means that physicians and caregivers must be aware that AI systems used in remote patient monitoring may contain biases or ignore essential information that could lead to false-positive or false-negative results. False positives could lead to unnecessary anxiety for patients and potentially unnecessary medical interventions. In contrast, false negatives can result in missed diagnoses or delayed treatment, potentially worsening patient outcomes.

To give a more practical example, a physician should explain to caregivers and patients the specifics of the environments in which manufacturers or providers specifically state that their AI system will not operate accurately. Suppose that a remote patient monitoring tool is designed to detect skin cancer or follow up on the stages of skin cancer. The provider specifies that the tool requires specific lighting conditions to function correctly. If the physician or caregiver is not aware of the limitations of this AI system and the specific conditions are not met – such as poor lighting when seeking to detect skin lesions – the AI system may fail to provide accurate readings.

Instructions for use

In accordance with Article 13, Section 2 of the AIA, providers must give instructions for use and make them available to deployers. These instructions should include comprehensive information on the system’s characteristics, capabilities and performance limitations. They should also outline potential risks related to the use of high-risk AI systems, including “any actions by the deployer that could influence system behaviour, under which the system might pose risks to health, safety, and fundamental rights, on the changes that have been pre-determined and assessed for conformity by the provider and on the relevant human oversight measures, including the measures to facilitate the interpretation of the outputs of the AI system by the deployers”.

Deployers of high-risk AI systems must take the necessary technical and organisational measures to ensure that the systems are used correctly and in accordance with these instructions.

Human oversight

Providers are responsible for the basic implementation of human oversight tools, and deployers are subsequently obliged to assign human oversight to natural persons who have the necessary competence, training and authorisation. This requirement is intended to prevent or minimise risks to health, safety or fundamental rights that may arise from the use of a high-risk AI system (such as biased output or false negatives).

Input data screening

Data provided to or directly captured by an AI system, on the basis of which the system produces an output – defined as input data – must be relevant and sufficiently representative with respect to the intended purpose of the high-risk AI system. This obligation, however, only applies to the extent the deployer exercises control over such input data. In healthcare, for example, this could mean including diverse patient data to avoid bias. Likewise, if the AI system is designed for remote monitoring of a specific condition, such as diabetes, the input data should include relevant medical records and diagnostic information.
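As a purely illustrative sketch, the screening step described above could take the following form for a hypothetical diabetes monitoring tool. The required fields and the threshold are invented for the example; what counts as "relevant" and "sufficiently representative" input data remains a clinical and case-specific judgement.

```python
# Illustrative sketch only: a minimal pre-submission screen on input data for a
# hypothetical diabetes remote-monitoring tool. Field names and the threshold of
# ten readings are assumptions for the example, not values mandated by the AIA.
REQUIRED_FIELDS = {"patient_id", "age", "sex", "glucose_readings", "medication_history"}


def screen_input_record(record: dict) -> list[str]:
    """Return a list of issues; an empty list means the record passes the screen."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"fields relevant to the intended purpose are missing: {sorted(missing)}")
    if len(record.get("glucose_readings", [])) < 10:
        issues.append("too few readings to be representative of the monitoring period")
    return issues
```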

Post-market surveillance and vigilance

In accordance with Article 26, Section 5 of the AIA, healthcare professionals using AI systems for remote patient monitoring must monitor the operation of any such AI system on the basis of the accompanying instructions for use. If they identify that the use of the AI system may result in a significant risk, or if they identify a serious incident, they may need to inform the provider (and where legally required also the distributor and/or the relevant market surveillance authorities) and suspend the use of the system.

Log keeping

The AIA mandates the automatic recording of logs on high-risk AI systems. This ensures a level of traceability of the AI systems’ functioning throughout their life cycle, and facilitates the monitoring of high-risk AI systems to detect situations that might pose risks to health, safety or fundamental rights, as well as the establishment and proper documentation of a post-market monitoring system. This also allows for the evaluation of continuous compliance of AI systems with the AIA’s requirements. Pursuant to Article 26, Section 6 of the AIA, deployers of high-risk AI systems must retain logs under their control for at least six months, considering the AI system’s intended purpose. When logs are managed by healthcare professionals, it is crucial to ensure that they can be stored long term and that data governance policies are in place to regulate the retention period.
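For illustration, the six-month floor of Article 26, Section 6 of the AIA could be reflected in a retention rule along the following lines. The one-year policy period and the variable names are assumptions for the example, and longer retention may be required by other legislation or by internal governance policies.

```python
# Illustrative sketch only: a retention check reflecting the six-month floor of
# Article 26(6) AIA. The one-year policy period is a hypothetical governance choice.
from datetime import datetime, timedelta, timezone
from typing import Optional

MINIMUM_RETENTION = timedelta(days=183)   # at least six months under Article 26(6) AIA
POLICY_RETENTION = timedelta(days=365)    # assumed internal policy; must not undercut the floor


def may_delete(log_created_at: datetime, now: Optional[datetime] = None) -> bool:
    """A log entry may only be deleted once the longer applicable retention period has
    elapsed. Timestamps are assumed to be timezone-aware."""
    now = now or datetime.now(timezone.utc)
    return now - log_created_at >= max(MINIMUM_RETENTION, POLICY_RETENTION)
```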

Transparency

Transparency and explainability are key to fostering trust in AI. It is therefore not surprising that the AIA emphasises the importance of these principles several times and imposes various transparency obligations. In the context of remote patient monitoring, the following transparency obligations merit consideration by healthcare professionals:

  • deployers of the high-risk AI systems referred to in Annex III, which make decisions or assist in making decisions related to natural persons, must inform the natural persons that they are subject to the use of a high-risk AI system (Article 26, Section 11 of the AIA); and
  • when using an emotion recognition system or a biometric categorisation system, the deployer must inform the natural persons exposed thereto about the operation of the system (Article 50 of the AIA).

Data protection impact assessment and co-operation

These obligations are straightforward: deployers must co-operate with the relevant competent authorities in any actions they take concerning a high-risk AI system to implement the AIA. This co-operation may include providing any requested information about the AI system used. In addition, deployers must comply with their obligation to carry out a data protection impact assessment under Article 35 of the GDPR, for which they may use the information provided in the instructions for use (Article 26, Section 9 of the AIA).

Fundamental rights impact assessment for high-risk AI systems

Article 27 of the AIA provides that, prior to deploying a high-risk AI system as defined in Article 6(2), deployers who are public bodies or private entities providing public services, and those deploying AI systems specified in points 5(b) and (c) of Annex III, must carry out an assessment of the system’s impact on fundamental rights.

The term “public services” is used broadly in the AIA, without clear criteria or further guidance on how to identify such services. This could result in a wider range of organisations being subject to this obligation than expected at first sight. Recital 96 provides some context by listing examples of public services, such as healthcare. As a result, healthcare professionals may find themselves subject to this obligation.

Where applicable, healthcare professionals must thus ensure that a fundamental rights impact assessment is carried out prior to the first use of a high-risk AI system, consisting of the following elements:

  • a description of the deployer’s procedures in which the high-risk AI system will be used, in accordance with its intended purpose;
  • a description of the time period and frequency of the AI system’s intended use;
  • the categories of natural persons and groups who could be affected by its use;
  • the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons, considering the instructions for use given by the provider;
  • a description of the implementation of human oversight measures according to the instructions for use; and
  • the measures to be taken if these risks materialise, including internal governance and complaint mechanisms.

The AI Office is responsible for developing a template questionnaire to assist deployers in fulfilling their obligations.
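Pending that template, a deployer could record the six elements listed above in a simple internal structure along the following lines; the field names below are assumptions made for illustration and do not reflect the AI Office's forthcoming questionnaire.

```python
# Illustrative sketch only: an internal record mirroring the six elements of
# Article 27 AIA listed above. Field names are hypothetical, not the official template.
from dataclasses import dataclass, field


@dataclass
class FundamentalRightsImpactAssessment:
    process_description: str            # the deployer's processes in which the system is used
    period_and_frequency_of_use: str    # intended time period and frequency of use
    affected_persons: list[str]         # categories of natural persons and groups affected
    risks_of_harm: list[str]            # specific risks, in light of the instructions for use
    human_oversight_measures: str       # oversight measures per the instructions for use
    mitigation_and_complaints: str      # measures if risks materialise, incl. governance and complaints
    completed_before_first_use: bool = field(default=False)
```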

The Interplay Between the AIA and the GDPR

Remote patient monitoring involves the collection and processing of a large amount of personal (health) data. As a result, remaining compliant with both the AIA and the GDPR is a must for healthcare professionals. An extensive discussion of the interplay between the AIA and the GDPR is beyond the scope of this article, so the following is only a general overview of how the two regulations relate.

Scope of the GDPR and the AIA

The GDPR applies to:

  • an entity that processes personal data if it is established in the EU, regardless of where the actual data processing takes place; or
  • an entity that is established outside the EU if it processes personal data in connection with offering goods or services to individuals in the EU or monitoring the behaviour of individuals in the EU.

This contrasts with the AIA, whose material scope is centred on the definition of an AI system and whose obligations extend to providers, deployers, importers, distributors and authorised representatives. Unlike the GDPR, the AIA includes a detailed risk categorisation framework that imposes different obligations depending on the risk level of the AI system; most of those obligations apply to high-risk AI systems.

Roles under the GDPR and AIA

Healthcare professionals using AI systems must consider their roles under both the GDPR and the AIA. This is crucial, as different obligations under the GDPR and AIA may apply depending on their qualification.

The GDPR makes a distinction between controllers and processors, with controllers being responsible for the strictest level of GDPR compliance. The AIA distinguishes between different categories of actors, including providers, deployers, distributors and importers. The provider and the deployer are the most important roles in practice.

Consider a physician in a hospital using an AI system to remotely monitor a patient’s mental health. In this scenario, the physician (or the healthcare organisation employing the physician) is using the AI system in his or her practice, making him or her a deployer under the AIA. The physician is responsible for ensuring that the AI system is used in accordance with the AIA’s requirements. Simultaneously, the physician is collecting, using and managing personal mental health data to provide medical services. This makes him or her a controller under the GDPR, as he or she determines the purposes and means of processing personal data. As a result, he or she must ensure compliance with the obligations under the GDPR.

Principles under the AIA and the GDPR

The GDPR sets out seven data protection principles: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability.

The AIA outlines general principles that apply to all AI systems and specific obligations to implement these principles in specific ways. These principles are influenced by the OECD AI Principles and the seven ethical principles for AI developed by the High-Level Expert Group on Artificial Intelligence (HLEG-AI). Recital 27 refers to the following principles: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; and social and environmental wellbeing. These principles are further concretised in various articles of the AIA: Article 10 prescribes data governance practices for high-risk AI systems, Article 13 addresses transparency, Articles 14 and 26 introduce human oversight and monitoring requirements and Article 27 mandates fundamental rights impact assessments for certain high-risk AI systems.

Human oversight and automated decision-making

The provisions related to human oversight in the AIA and automated decision-making in the GDPR are important, especially in the healthcare sector, where the integration of AI systems has significantly transformed decision-making processes. AI systems in healthcare can be categorised as fully automated and partially automated decision-making tools, each with distinct levels of human oversight. Fully automated decision-making systems function independently, making decisions without human intervention. For example, an AI insulin management system autonomously adjusts insulin delivery by analysing data from sensors and fitness trackers, geolocation data from smartphones and hand-gesture sensing data. The system identifies patterns in individual behaviour and regulates insulin levels accordingly. In contrast, partially automated medical decision systems involve AI systems that make initial decisions but require human input in specific situations. For example, an AI system that monitors cardiac patients continuously analyses personalised heart rate data collected from wearable or implantable devices. When it detects arrhythmias, it automatically transmits the relevant information to the patient’s cardiologist, who then decides on the appropriate course of action. The following provides only a general overview of the relevant provisions.

Article 22 of the GDPR grants data subjects the right not to be subjected to decisions based solely on automated processing, including profiling, which produce legal effects or similarly significant effects. The only situations in which such automated decision-making is allowed are where it is necessary for entering into or performing a contract, where it is authorised by European or member state law or where the data subject has given explicit consent. In any case, measures must be implemented to protect fundamental rights, such as the data subject’s right to obtain meaningful human intervention on the part of the controller, to express his or her point of view and to contest the decision. Similarly, the AIA aims to protect fundamental rights and freedoms by ensuring appropriate human oversight and intervention, known as the “human-in-the-loop” approach. Indeed, Article 14 of the AIA requires that high-risk AI systems be designed and developed to allow for effective human oversight during their use, including through appropriate human-machine interface tools. Further, Article 26, Section 1 of the AIA requires deployers of high-risk AI systems to implement technical and organisational measures to ensure that the system is used in accordance with its instructions for use, including with respect to human oversight.

If a certain level of human oversight is lacking, for example because the human decision-makers are not properly trained, the AI system might not be considered partially automated, thus falling under the automated decision-making framework of Article 22 of the GDPR.
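By way of a simplified illustration, the Article 22 analysis described above can be reduced to two questions: is the decision solely automated with legal or similarly significant effects, and if so, does one of the three permitted grounds apply? The following sketch uses hypothetical input names; each of the underlying assessments remains contextual and case-specific.

```python
# Illustrative sketch only: a simplified mapping of the Article 22 GDPR test described
# above. Parameter names are hypothetical; each input is itself a contextual assessment.
def article_22_engaged(solely_automated: bool, legal_or_similarly_significant_effect: bool) -> bool:
    """Article 22 is engaged where a decision is based solely on automated processing
    and produces legal or similarly significant effects."""
    return solely_automated and legal_or_similarly_significant_effect


def automated_decision_permitted(contract_necessity: bool, authorised_by_law: bool,
                                 explicit_consent: bool) -> bool:
    """Even where Article 22 is engaged, the decision may proceed on one of three grounds,
    subject to safeguards such as human intervention and the right to contest the decision."""
    return contract_necessity or authorised_by_law or explicit_consent
```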

Reporting incidents

Reporting obligations relating to serious incidents or the malfunctioning of AI systems may partially overlap with GDPR reporting obligations when personal data is involved. As discussed in the section headed “Post-market surveillance and vigilance”, a healthcare professional using AI systems for remote patient monitoring must inform the provider and, where legally required, also the distributor and/or the relevant market surveillance authorities if they identify a significant risk or a serious incident. If such an incident results in a data breach (ie, compromises the confidentiality, availability or integrity of the data processed by the AI system), healthcare professionals may also need to notify the relevant data protection authority and, in some cases, the affected data subjects. The breach must be reported to the relevant supervisory authority without undue delay and, where feasible, no later than 72 hours after becoming aware of it, unless it is unlikely to result in a risk to data subjects’ rights and freedoms. Affected data subjects must also be informed where the breach is likely to result in a high risk to their rights and freedoms.
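As a simplified illustration of the GDPR side of this reporting overlap, the notification timeline and duties could be captured as follows; the risk labels are placeholders for what remains a case-by-case assessment.

```python
# Illustrative sketch only: the 72-hour window of Article 33 GDPR and the high-risk
# trigger of Article 34 GDPR. Risk labels are placeholders for a case-by-case assessment.
from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)


def authority_notification_deadline(became_aware_at: datetime) -> datetime:
    """Target deadline for notifying the supervisory authority, where notification is required."""
    return became_aware_at + NOTIFICATION_WINDOW


def notification_duties(risk_to_data_subjects: str) -> dict:
    """Map an assessed risk level ('none', 'risk' or 'high') to notification duties."""
    return {
        "notify_supervisory_authority": risk_to_data_subjects in {"risk", "high"},
        "notify_data_subjects": risk_to_data_subjects == "high",
    }
```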

Penalties

Both the GDPR and the AIA provide for administrative fines, the extent of which depends on the severity of the infringement. Under the GDPR, less serious infringements can result in fines of up to EUR10 million or 2% of the total annual global turnover, whichever is higher. Examples of such infringements include violating the GDPR’s requirements on data protection by design and by default. For more serious breaches, fines can escalate to EUR20 million or 4% of the total annual global turnover, for example for breaches of the GDPR’s provisions on processing principles and data subjects’ rights. With regard to the AIA, penalties are outlined in Article 99 of the AIA. The most serious breaches, namely non-compliance with the prohibited AI practices, can lead to fines of up to EUR35 million or 7% of worldwide annual turnover, whichever is higher, while non-compliance with other obligations, including those applicable to high-risk AI systems, can result in fines of up to EUR15 million or 3% of worldwide annual turnover, whichever is higher. For less serious breaches, such as providing incorrect, incomplete or misleading information to notified bodies or national competent authorities, fines can reach EUR7.5 million or 1% of worldwide annual turnover, whichever is higher.
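To illustrate the “whichever is higher” mechanism shared by both regimes, the maximum fine exposure can be computed as follows. The EUR600 million turnover figure is an invented example; actual fines are set case by case and will usually fall well below these caps.

```python
# Illustrative sketch only: the "whichever is higher" cap calculation used by both the
# GDPR and the AIA for their top fine brackets. The turnover figure is a made-up example.
def fine_cap(fixed_cap_eur: float, turnover_share: float, worldwide_turnover_eur: float) -> float:
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)


# Example: a group with EUR600 million in worldwide annual turnover.
gdpr_serious_cap = fine_cap(20_000_000, 0.04, 600_000_000)    # EUR24 million (4% exceeds EUR20 million)
aia_prohibited_cap = fine_cap(35_000_000, 0.07, 600_000_000)  # EUR42 million (7% exceeds EUR35 million)
```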

Conclusion

The integration of AI technologies into healthcare offers a potentially transformative opportunity, but also presents complex legal and regulatory challenges, particularly under the AIA, GDPR and MDR. With the coming into force of the AIA, healthcare professionals will need to navigate a rigorous compliance landscape resulting from the broad definition of “deployer” and extensive obligations on those who use high-risk AI systems.

This requires adopting a proactive and strategic approach: assessing AI systems (with a focus on high-risk categories), developing robust compliance frameworks and understanding compliance requirements under the various applicable regulations, and engaging with regulators to stay informed of further guidance and ensure alignment with compliance timelines and obligations.

Osborne Clarke

Bastion Tower
Pl du Champ de Mars 5
1050 Bruxelles
Belgium

+32 2 515 93 00

lena.tausend@osborneclark.com
www.osborneclarke.com