Data Protection & Privacy 2026

Last Updated March 10, 2026

EU

Law and Practice

Authors



Gerrish Legal is a Paris and Stockholm-based boutique law firm with additional presence in London, specialising in privacy, data protection, AI and technology law. With lawyers qualified in France, England and Wales and Ireland, the firm’s multilingual team advises international clients – from scale-ups to listed multinationals – across sectors such as SaaS, life sciences, fashion, recruitment, security and catering. Its core practice focuses on GDPR compliance, international data transfers, AI regulation, digital platform regulation, privacy-by-design frameworks, data breach management and privacy litigation. The firm also has strong expertise in commercial law, particularly technology contracts (SaaS), cross-border commercial arrangements and intellectual property matters. Gerrish Legal advises both EU-based organisations on privacy compliance and non-EU companies expanding into Europe (particularly France) by adapting their data governance frameworks and commercial practices to EU regulatory requirements, including the GDPR, AI Act, Data Act and sector-specific digital regulations.

EU data protection law is grounded in EU constitutional sources, notably Article 16 of the Treaty on the Functioning of the European Union (TFEU), which provides the legal basis for data protection legislation, and the Charter of Fundamental Rights of the EU, whose Article 7 enshrines respect for private and family life and whose Article 8 directly enshrines the protection of personal data. Together, these provisions establish both EU legislative competence and the constitutional status of data protection as a fundamental right.

The centrepiece of the data privacy framework is the General Data Protection Regulation (Regulation (EU) 2016/679 (GDPR) on the protection of natural persons with regard to the processing of personal data and on the free movement of such data), which was adopted by the EU and became directly applicable across all member states on 25 May 2018 without the need for national transposition.

The GDPR prevails over incompatible national provisions. Its objective is to ensure an equal and harmonised level of protection for personal data across the Union while safeguarding the free movement of such data. It operates as the EU’s general (horizontal) regime for personal data processing.

These sources interact through the primacy and direct applicability of EU law: the GDPR serves as the default framework, while sectoral instruments either displace it for specific scopes or add rules consistent with GDPR principles (lex specialis).

EU-level interpretation is also shaped materially by Court of Justice of the European Union (CJEU) case law. The CJEU’s jurisprudence has been central in defining core concepts of data protection law.

Additionally, EU regulatory convergence is promoted through EDPB soft-law instruments (for example, Guidelines 05/2021). While formally non-binding, these guidelines play a significant harmonising role in enforcement practice.

The GDPR has explicit extraterritorial reach where processing relates to the offering of goods or services to individuals in the EU or monitoring their behaviour in the EU (Article 3(2), GDPR), and its international transfer regime can significantly extend compliance requirements beyond the EU (Chapter V, GDPR), as reflected in leading transfer case law (CJEU, Schrems II, Case C-311/18).

The EU privacy/data protection framework also overlaps with EU regimes on non-personal data, cybersecurity and AI: non-personal and mixed datasets are addressed through the free flow framework and data economy rules (Regulation (EU) 2018/1807 (Free Flow of Non-Personal Data); Regulation (EU) 2022/868 (Data Governance Act); Regulation (EU) 2023/2854 (Data Act)), while security obligations intersect with GDPR security requirements (Directive (EU) 2022/2555 (NIS2 Directive); Article 32, GDPR). AI governance adds further product- and risk-based obligations that must be implemented alongside GDPR where personal data is used (Regulation (EU) 2024/1689 (AI Act)).

Under EU law, the general principles governing the processing of personal data are set out in the GDPR.

Article 5, GDPR establishes core principles which structure all processing activities: lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability (Article 5(1)–(2), GDPR).

Chapter 2 (Articles 5–11), GDPR sets out the substantive conditions for lawful processing. Processing must be based on one of the lawful grounds listed in Article 6, GDPR (consent, contract, legal obligation, vital interests, public task or legitimate interests) (Article 6(1), GDPR), with stricter conditions for special categories of data (Article 9, GDPR).

Chapter 4 (Articles 24–43), GDPR establishes obligations for controllers and processors.

Controllers bear primary responsibility for ensuring and demonstrating compliance (Articles 5(2) and 24), must implement data protection by design and by default (Article 25), conduct data protection impact assessments where processing is likely to result in high risk (Article 35), appoint a data protection officer where required (Articles 37–39) and ensure appropriate technical and organisational security measures proportionate to risk (Article 32).

Processors may process data only on documented instructions from the controller (Article 28), must implement appropriate security measures (Article 32) and assist controllers in fulfilling data subject rights and compliance duties.

Organisations acting as controllers or processors must operationalise the accountability principle (Articles 5(2) and 24, GDPR) through documentation, internal governance structures and demonstrable risk management practices.

Chapter 3 (Articles 12–23) also grants data subjects a broad catalogue of enforceable rights. These include the right to transparent information at the time of collection (Articles 12–13, GDPR); the right of access (Article 15); rectification (Article 16); erasure (“right to be forgotten”) (Article 17); restriction of processing (Article 18); data portability (Article 20); and objection, including an absolute right to object to direct marketing (Article 21). Individuals also have rights relating to automated decision-making and profiling (Article 22). Requests must generally be answered within one month (Article 12(3)), and data subjects may lodge complaints with a supervisory authority (Article 77) and seek judicial remedies (Articles 78–79).

The CJEU has played a central role in interpreting these rights (for example, Google Spain, C-131/12; Schrems II, C-311/18).

Key “to dos” include:

  • maintaining a record of processing activities (Article 30);
  • conducting data protection impact assessments where processing is likely to result in high risk (Article 35), and consulting supervisory authorities where residual high risk remains (Article 36);
  • implementing appropriate technical and organisational security measures (Article 32);
  • establishing procedures for detecting and managing personal data breaches, including notification within 72 hours where required (Articles 33–34);
  • appointing a data protection officer where mandatory (Article 37); and
  • ensuring lawful transfer mechanisms for international data flows (Chapter V, GDPR).

These obligations operate cumulatively with relevant sectoral instruments such as the e-Privacy Directive (Directive 2002/58/EC) and, where applicable, Directive (EU) 2016/680.

As a principle, Article 9(1), GDPR prohibits the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, as well as genetic data, biometric data used for unique identification, health data, and data concerning sex life or sexual orientation.

By exception, such processing is permitted only where one of the exhaustively listed grounds in Article 9(2) applies, namely:

  • explicit consent (Article 9(2)(a));
  • employment, social security and social protection obligations under EU or member state law (Article 9(2)(b));
  • vital interests (Article 9(2)(c));
  • legitimate activities of certain non-profit bodies (Article 9(2)(d));
  • data manifestly made public by the data subject (Article 9(2)(e));
  • legal claims (Article 9(2)(f));
  • substantial public interest (Article 9(2)(g));
  • preventive or occupational medicine and health care management (Article 9(2)(h)–(3));
  • public health (Article 9(2)(i)); and
  • archiving, scientific or historical research and statistical purposes subject to safeguards (Article 9(2)(j) and Article 89(1)).

Automated decision-making based on special categories of data is further restricted by Article 22(4) of the GDPR. Member states may introduce additional conditions for genetic, biometric and health data (Article 9(4), GDPR).

The GDPR also provides a specific regime for children’s data in the context of information society services. Where processing is based on consent (Article 6(1)(a), GDPR) and relates to the direct offer of such services to a child, consent is valid only if the child is at least 16 years old, unless member states lower the age threshold (not below 13) (Article 8(1), GDPR). For children below the applicable age, consent must be given or authorised by the holder of parental responsibility, and controllers must make reasonable efforts to verify this (Article 8(2), GDPR).
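Purely as an illustration, the age-gating logic of Article 8 can be sketched as follows (a simplified model only: the applicable threshold is a national-law input, and the function and parameter names are hypothetical):

```python
# Illustrative sketch of the Article 8, GDPR consent-age rule for information
# society services; the national threshold varies by member state and must be
# verified against local law.

def consent_giver(child_age: int, national_threshold: int = 16) -> str:
    if not 13 <= national_threshold <= 16:
        raise ValueError("member states may not set the threshold below 13 (Article 8(1), GDPR)")
    if child_age >= national_threshold:
        return "the child may consent"
    # Below the threshold, consent must be given or authorised by the holder
    # of parental responsibility, and the controller must make reasonable
    # efforts to verify this (Article 8(2), GDPR).
    return "parental consent or authorisation required"
```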

Data relating to criminal convictions, offences and related security measures are governed separately under Article 10, GDPR. Unlike Article 9 data, such data may be processed only under the control of a public authority or where authorised by EU or member state law providing appropriate safeguards.

Comprehensive registers of criminal convictions may be kept only under the control of a public authority. In the law enforcement context, processing by competent authorities falls under Directive (EU) 2016/680 (Law Enforcement Directive), which establishes a parallel regime.

Patient data is a special category of personal data under Article 9, GDPR. As a principle, processing health data is prohibited unless both an Article 6 legal basis and an Article 9(2) exception apply.

Health data covers information relating to a person’s physical or mental health, including information generated through the provision of healthcare services; genetic and biometric data are related but separately defined special categories.

Only data that is truly anonymised falls outside the GDPR. Pseudonymised data remains personal data. The processing carried out to achieve anonymisation must itself be lawful.

In practice, life sciences companies most commonly rely on:

  • explicit consent (Article 9(2)(a)), given specifically for research or product development purposes, and provided it is explicit, freely given, specific and informed;
  • scientific research purposes (Article 9(2)(j)), provided the processing is based on EU or member state law and subject to appropriate safeguards under Article 89(1), GDPR (eg, pseudonymisation, access controls, data minimisation);
  • public interest in the area of public health (Article 9(2)(i)), such as ensuring high standards of quality and safety of healthcare or medical devices, where supported by EU or member state law; and
  • preventive or occupational medicine, diagnosis or management of healthcare systems (Article 9(2)(h) GDPR), particularly where the company acts on behalf of or in co-operation with healthcare providers subject to professional secrecy.

The European Health Data Space Regulation (EHDS), once applicable, will significantly impact life sciences companies operating in the EU by establishing a harmonised framework for secondary use of electronic health data.

Secondary use for research, innovation, regulatory purposes and public health policy will require a data permit issued by a national health data access body. Permits will specify the authorised purpose, datasets and conditions of use, and data will generally be accessed through secure processing environments, subject to strict purpose limitation, security, governance controls and a prohibition on re-identification. The EHDS expressly prohibits certain uses (eg, advertising, discriminatory decision-making, decisions detrimental to individuals in areas such as employment or insurance).

It further strengthens individuals’ rights (including enhanced access and portability, and in some cases the ability to restrict secondary use) and introduces interoperability obligations for electronic health record systems, reinforcing privacy by design under the GDPR.

Overall, the EHDS aims to facilitate innovation by expanding lawful access to large-scale health datasets while tightening governance and limiting purely commercial uses. Together, the GDPR and EHDS create a high-compliance but innovation-oriented regime: companies may anonymise and use patient data for development or research where valid grounds exist and robust safeguards are in place, while future access will increasingly depend on EHDS governance compliance.

Under EU law, any AI system that processes personal data is subject to the GDPR. Controllers must identify a valid Article 6 legal basis (and Article 9 exception where special categories are involved), comply with the core principles in Article 5 (including fairness, transparency, purpose limitation, data minimisation and accuracy), and implement data protection by design and by default (Article 25). Given the scale and complexity of AI training and inference, Article 32 security obligations are particularly significant. A Data Protection Impact Assessment is required under Article 35 where AI processing is likely to result in a high risk, including large-scale profiling, behavioural inference or use of sensitive data.

Article 22, GDPR also specifically restricts decisions based solely on automated processing that produce legal or similarly significant effects. Such decisions are prohibited unless one of the narrow exceptions applies: necessity for entering into or performing a contract, authorisation by EU or member state law with appropriate safeguards, or explicit consent. In those cases, controllers must ensure meaningful human intervention, enable individuals to express their views and contest the decision, and provide meaningful information about the logic involved and the envisaged consequences.

The Artificial Intelligence Act (AI Act) establishes a harmonised, risk-based framework for the development, placing on the market and deployment of AI systems in the EU. It classifies systems into prohibited, high-risk, limited-risk and minimal-risk categories:

  • prohibited practices (eg, certain manipulative techniques, social scoring and specific biometric identification uses) may not be deployed;
  • high-risk systems, including those used in employment, creditworthiness, law enforcement, migration, education and critical infrastructure, are subject to ex ante conformity assessments and ongoing compliance obligations;
  • limited-risk systems are mainly subject to transparency obligations; and
  • minimal-risk systems are largely unregulated under the AI Act.

For high-risk systems, the Act imposes extensive requirements centred on risk management, technical documentation, record-keeping, robustness and cybersecurity. A core focus is data governance: providers must ensure appropriate data management practices, assess the quality of training, validation and testing datasets, implement bias detection and mitigation measures, and document data sources and preprocessing methods.

Transparency is required across several risk levels. Users must be informed when interacting with an AI system (eg, chatbots) or when content is AI-generated (eg, deepfakes). High-risk systems must also be accompanied by clear instructions for use and sufficient information to enable effective oversight.

Human oversight is a structural requirement for high-risk systems: they must be designed so that natural persons can understand system capabilities and limitations, monitor outputs and intervene or override where necessary.

The regimes are complementary: the GDPR governs the lawfulness of personal data processing and individual rights, while the AI Act regulates AI systems from a product safety and governance perspective. In practice, GDPR compliance is always required where personal data is involved, and additional AI Act obligations apply depending on risk classification, with transparency, data governance and human oversight forming the core pillars for high-risk AI.

Under EU law, a “personal data breach” is defined as a security breach leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data (Article 4(12), GDPR). This covers confidentiality breaches (unauthorised access/disclosure), availability breaches (loss/inaccessibility) and integrity breaches (unauthorised alteration).

Controllers and processors must implement appropriate technical and organisational measures to ensure a level of security appropriate to risk (Articles 5(1)(f) and 32, GDPR), adopting a risk-management approach rather than guaranteeing absolute security (eg, CJEU, Natsionalna agentsia za prihodite, Case C-340/21).

When a breach occurs, controllers must promptly assess whether it is likely to result in a risk to individuals’ rights and freedoms, and comply with the GDPR’s notification and documentation obligations.

Where the breach is likely to result in a risk, the controller must notify the competent supervisory authority without undue delay and, where feasible, within 72 hours after becoming aware of it (Article 33(1), GDPR); any delay must be justified.

Processors must notify the controller without undue delay after becoming aware of a breach (Article 33(2), GDPR). The notification must include, at minimum, a description of the breach, the categories and approximate number of data subjects and records concerned, contact details for the DPO or other point of contact, likely consequences and measures taken or proposed to address and mitigate the breach (Article 33(3), GDPR); information may be provided in phases (Article 33(4), GDPR).

Controllers must document all breaches (facts, effects, remedial action) to enable regulatory verification (Article 33(5), GDPR). Where the breach is likely to result in a high risk, the controller must communicate it to affected individuals without undue delay, using clear and plain language and providing equivalent core information and mitigation steps (Article 34(1)–(2), GDPR).

Communication to individuals may be avoided in limited cases, including where robust technical measures (eg, effective encryption) render the data unintelligible, where subsequent measures eliminate the high risk, or where individual notification would involve disproportionate effort (in which case a public communication may be used) (Article 34(3), GDPR).

Operationally, organisations should have a breach response plan that enables rapid detection, containment and documentation; an initial legal and technical qualification of the event; a structured risk assessment; timely regulator and (if required) individual notifications; and remediation steps (containment, patching, credential resets, access reviews, restoration and enhanced monitoring), followed by a post-incident review (Articles 24, 25 and 32, GDPR).
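The notification logic of Articles 33–34 can be summarised in a simple decision sketch (illustrative only: the boolean inputs stand in for fact-specific legal assessments, and the names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class BreachAssessment:
    risk_to_individuals: bool           # likely to result in a risk (Article 33(1))
    high_risk_to_individuals: bool      # likely to result in a high risk (Article 34(1))
    data_unintelligible: bool = False   # eg, effective encryption (Article 34(3))
    high_risk_eliminated: bool = False  # subsequent mitigating measures (Article 34(3))

def notification_duties(a: BreachAssessment) -> list[str]:
    # Every breach must be documented internally, whatever the outcome.
    duties = ["document the breach (Article 33(5))"]
    if a.risk_to_individuals:
        duties.append("notify the supervisory authority, where feasible within 72 hours (Article 33(1))")
    if a.high_risk_to_individuals and not (a.data_unintelligible or a.high_risk_eliminated):
        duties.append("communicate the breach to affected individuals without undue delay (Article 34(1))")
    return duties
```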

Supervisory authorities have extensive investigative and corrective powers, including requiring information, conducting audits, ordering notifications, restricting processing and imposing administrative fines (Articles 57, 58 and 83, GDPR). However, corrective measures are not automatic and must be necessary and proportionate (see judgment of 26 September 2024, Case C-768/21).

Data breaches also create civil liability exposure and potential mass claims. Data subjects may lodge complaints with supervisory authorities (Article 77, GDPR) and seek judicial remedies and compensation for material or non-material damage (Articles 79 and 82, GDPR). The Court of Justice has confirmed that fear of potential misuse of data following a breach may itself constitute compensable non-material damage (Natsionalna agentsia za prihodite).

At EU level, privacy and data protection oversight is organised around national supervisory authorities, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), with additional roles for sectoral regulators under specific instruments (eg, the ePrivacy Directive).

National Supervisory Authorities (NSAs)

Under Article 51, GDPR, each member state must designate at least one independent authority responsible for monitoring and enforcing the GDPR. NSAs are vested with investigative, corrective and sanctioning powers under Articles 57 and 58, GDPR, including the power to conduct audits and inspections, require information, order compliance, impose processing bans and administrative fines.

Proceedings are typically triggered by (i) complaints lodged by data subjects (Article 77, GDPR), (ii) referrals or co-operation requests from other supervisory authorities in cross-border cases, or (iii) ex officio investigations. In cross-border cases, the “one-stop-shop” mechanism (Articles 56 and 60, GDPR) designates a lead supervisory authority, which co-operates with other “concerned” authorities. Disputes may be escalated to the EDPB for a binding decision under Article 65, GDPR.

NSAs also enforce Directive 2016/680 (law enforcement data processing) and, depending on national law, the ePrivacy Directive (2002/58/EC), sometimes alongside or instead of sector-specific regulators (eg, telecommunications authorities). Their decisions are binding domestically, subject to judicial review.

European Data Protection Board (EDPB)

Established under Articles 68–76, GDPR, the EDPB ensures consistent application of the GDPR across the Union. It issues guidelines, recommendations and best practices (Article 70, GDPR), advises the European Commission (including on adequacy decisions), and adopts binding decisions in dispute resolution under Article 65. Its guidelines are formally non-binding but highly persuasive in practice; Article 65 decisions are legally binding on the national authorities concerned.

European Data Protection Supervisor (EDPS)

The EDPS supervises compliance with EU data protection rules by EU institutions, bodies and agencies (currently under Regulation (EU) 2018/1725). It exercises comparable investigative and corrective powers within the EU institutional framework and may issue reprimands, orders or administrative fines. Its decisions are binding on EU institutions and subject to review by the CJEU.

Sectoral and Related Authorities

Under the ePrivacy Directive, member states may designate specific authorities to enforce the rules on confidentiality of communications, cookies and direct marketing. The GDPR’s one-stop-shop mechanism does not automatically apply to national rules implementing the ePrivacy Directive, which may lead to parallel national proceedings.

In addition, for international data transfers, the European Commission plays a key role through adequacy decisions (Article 45, GDPR), while national authorities retain investigative powers and may refer validity questions to the CJEU (as illustrated by Schrems case law).

Under EU law, investigations and enforcement actions in data protection matters are primarily governed by the GDPR, which establishes a decentralised enforcement system based on independent national supervisory authorities (Articles 51–59, GDPR), combined with a co-operation and consistency mechanism for cross-border processing (Articles 56, 60–66, GDPR).

Initiation of Investigations

Proceedings may be triggered by (i) a complaint lodged by a data subject (Article 77, GDPR), (ii) ex officio action by an NSA, or (iii) a co-operation request from other supervisory authorities in cross-border cases (Articles 60–61, GDPR). In cross-border scenarios, the “lead supervisory authority” (usually that of the controller’s main establishment) conducts the investigation under the one-stop-shop mechanism (Article 56, GDPR), in co-operation with “concerned” authorities. If disagreement persists, the matter may be referred to the EDPB, which can adopt a binding decision under the consistency mechanism (Article 65, GDPR).

Conduct of Investigations and Procedural Guarantees

NSAs have extensive investigative powers, including the power to order the provision of information, carry out audits and inspections, and obtain access to premises and data (Article 58(1), GDPR). They may adopt corrective measures (Article 58(2), GDPR), including warnings, reprimands, orders to comply, temporary or definitive processing bans, suspension of data flows and administrative fines.

The GDPR does not fully harmonise procedural timelines for investigations; these are governed by national administrative law, subject to EU law principles of effectiveness, equivalence and the right to good administration (Article 41, Charter of Fundamental Rights of the EU) and the right to an effective remedy (Article 47, Charter). Data subjects must be informed of the progress or outcome of their complaint and have the right to challenge legally binding decisions or inaction before national courts (Articles 78–79, GDPR).

Controllers must respond to data subject requests within one month, extendable by two further months where necessary (Article 12(3), GDPR). Failure to respond may prompt a complaint and subsequent enforcement. In practice, national laws may provide for hearings, written submissions and settlement-like discussions during investigations, but there is no harmonised EU-level “transaction” mechanism; informal resolution or commitments may nevertheless influence the authority’s choice of corrective measures.
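By way of illustration, the Article 12(3) response window can be computed as follows (a sketch using the third-party python-dateutil package; how national authorities count the “one month” period may differ in practice):

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

def response_deadline(received: date, extended: bool = False) -> date:
    # One month from receipt, or three months in total where the controller
    # invokes the two-month extension (Article 12(3), GDPR).
    return received + relativedelta(months=3 if extended else 1)

print(response_deadline(date(2026, 1, 31)))                 # 2026-02-28 (clamped to month end)
print(response_deadline(date(2026, 1, 31), extended=True))  # 2026-04-30
```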

Sanctions and Remedies

The GDPR provides for a harmonised regime of administrative fines (Article 83, GDPR), structured in two tiers: up to EUR10 million or 2% of total worldwide annual turnover of the preceding financial year (Article 83(4)), and up to EUR20 million or 4% of that turnover (Article 83(5)), in each case whichever is higher. Fines must be effective, proportionate and dissuasive (Article 83(1)). In addition, supervisory authorities may impose non-pecuniary corrective measures (Article 58(2)). Member states may lay down additional penalties, including criminal sanctions, for infringements not subject to administrative fines or to supplement them (Article 84).
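The two-tier ceiling can be illustrated with simple arithmetic (a hypothetical sketch only; actual fines are set case by case under the Article 83(2) criteria and are not mechanical):

```python
def fine_ceiling_eur(annual_turnover_eur: float, tier: int) -> float:
    # Each tier caps fines at the higher of a fixed amount and a percentage
    # of total worldwide annual turnover of the preceding financial year.
    if tier == 1:  # Article 83(4), GDPR
        return max(10_000_000, 0.02 * annual_turnover_eur)
    if tier == 2:  # Article 83(5), GDPR
        return max(20_000_000, 0.04 * annual_turnover_eur)
    raise ValueError("tier must be 1 or 2")

# A group with EUR2 billion worldwide turnover faces a tier-2 ceiling of
# EUR80 million, since 4% of turnover exceeds the EUR20 million floor.
print(fine_ceiling_eur(2_000_000_000, tier=2))  # 80000000.0
```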

Data subjects are entitled to compensation for material or non-material damage resulting from a GDPR infringement (Article 82). Controllers and processors may be held jointly and severally liable, subject to rights of recourse between them.

Criteria for Setting Penalties

Article 83(2), GDPR lists the factors to be taken into account when deciding whether to impose a fine and determining its amount. These include: the nature, gravity and duration of the infringement; the number of data subjects affected and the level of damage; whether the infringement was intentional or negligent; actions taken to mitigate damage; the degree of responsibility, taking into account technical and organisational measures; previous infringements; co-operation with the authority; categories of personal data involved; how the infringement became known; compliance with prior measures; adherence to approved codes of conduct or certification mechanisms; and any financial benefits gained or losses avoided.

Recent CJEU case law (eg, Cases C-683/21 and C-807/21, judgments of 5 December 2023) confirms that administrative fines require a culpable infringement (intentional or negligent conduct) attributable to the controller or processor.

Overall, EU enforcement combines harmonised substantive rules with national procedural frameworks. The system is designed to ensure consistent application across the Union, while preserving judicial oversight by national courts and, ultimately, the CJEU.

In the last 24 months, EU enforcement has centred on the accountability principle. National supervisory authorities and the CJEU increasingly require operational compliance frameworks, not merely formal documentation.

Accountability, Automated Decision-Making and Minimisation

Recent CJEU case law has clarified the fault requirements for administrative fines in Deutsche Wohnen (C-807/21) and Nacionalinis visuomenės sveikatos centras (C-683/21), strictly interpreted the prohibition on automated decision-making in SCHUFA (C-634/21), and reinforced data minimisation and purpose limitation in Schrems v Meta (C-446/21). In that judgment of 4 October 2024, the court held that large-scale, indefinite processing for behavioural advertising cannot be justified simply because some data was made public. Together, these rulings confirm that accountability and privacy by design/default (Articles 24–25, GDPR) are concrete, documented governance obligations.

Practical takeaway

GDPR compliance must function as embedded, risk-based governance infrastructure, evidenced in practice.

International Data Transfers

After the CJEU invalidated Safe Harbor and Privacy Shield in Schrems I (C-362/14) and Schrems II (C-311/18), the Commission adopted the EU–US Data Privacy Framework (Implementing Decision (EU) 2023/1795). In Latombe v Commission (T-553/23, 3 September 2025), the General Court upheld the adequacy decision, accepting that safeguards introduced by Executive Order 14086 and the Data Protection Review Court could ensure essentially equivalent protection.

The EDPB’s Guidelines 05/2021 (adopted 14 February 2023) clarified the cumulative criteria for a “transfer” under Chapter V, GDPR. Concurrently, CJEU case law has reaffirmed that supervisory authorities retain investigative powers even where an adequacy decision exists, consistent with Schrems II.

Practical takeaway

Organisations should map cross-border data flows, rely on adequacy decisions where available, conduct and document transfer impact assessments for non-adequate destinations, and periodically reassess transfers – even where DPF certification applies.

Security, Liability and Regulatory Convergence

In Natsionalna agentsia za prihodite (C-340/21), the CJEU confirmed that fear of misuse after a breach may constitute compensable non-material damage under Article 82 GDPR, increasing litigation exposure. In IAB Europe (C-604/22), the court adopted a broad approach to joint controllership in digital advertising, while in EDPS v SRB (C-413/23 P) it emphasised contextual identifiability in assessing pseudonymised data.

Parallel obligations under the Digital Services Act (Regulation (EU) 2022/2065) and the Data Act (Regulation (EU) 2023/2854, applicable from 12 September 2025) reinforce convergence.

Practical takeaway

Organisations should implement integrated compliance models aligning GDPR governance with platform, advertising and data-sharing obligations.

Overall, enforcement reflects a fundamental-rights-oriented, proportionate and evidence-based standard: demonstrable, risk-based compliance embedded in organisational decision-making.

Mass and Collective Data Privacy Actions

A defining trend is the growth of mass claims, particularly in jurisdictions with procedural mechanisms facilitating collective redress (eg, the Netherlands) and in member states implementing the Representative Actions Directive (Directive (EU) 2020/1828). These mechanisms lower procedural barriers and increase strategic litigation risk for large-scale data processing operations.

Damages

Courts across Europe are seeing a rise in compensation claims under Article 82, GDPR, especially for non-material damage – one of the most debated issues in EU data protection law. A central question is whether “loss of control” over personal data is sufficient to establish damage. The CJEU clarified in UI v Österreichische Post AG (C-300/21, 4 May 2023) that mere infringement of the GDPR is not sufficient: claimants must demonstrate actual damage and a causal link, although no minimum seriousness threshold is required. This leaves national courts discretion in interpreting and quantifying non-material harm.

Security

Beyond classic post-breach litigation, claims increasingly scrutinise the adequacy of technical and organisational measures under Article 32, GDPR, as well as incident detection and notification practices under Articles 33–34. In Natsionalna agentsia za prihodite (C-340/21), the CJEU confirmed that the mere occurrence of a breach does not automatically establish non-compliance; courts must assess whether the security measures implemented were appropriate to the risk.

International Data Transfers

Cross-border data transfers remain a significant source of litigation and regulatory exposure. Organisations must navigate adequacy decisions, standard contractual clauses and politically contested frameworks such as the EU–US Data Privacy Framework, which continues to attract scrutiny from privacy activists and regulators. Transfer impact assessments and ongoing monitoring remain essential in practice.

Technology-Driven Disputes

A growing strand of litigation and regulatory action concerns technology-driven processing, particularly where AI, automated profiling and other high-risk algorithmic systems intersect with data protection rights. Regulatory investigations into AI chatbots generating deepfake or harmful content illustrate heightened scrutiny of system design, risk assessments and compliance with GDPR obligations, especially where sensitive data or children’s rights are implicated.

Disputes also arise from large-scale biometric and tracking practices, underscoring continued regulatory focus on facial recognition and mass surveillance technologies.

At EU level, in the past two years, the CJEU has delivered a series of judgments that have significantly structured privacy litigation in the EU, particularly in relation to Article 82, GDPR compensation claims, security obligations and the scope of data subject rights. Many of these decisions refine and consolidate an increasingly coherent line of case law governing civil liability and procedural standards in GDPR litigation.

  • Compensation under Article 82, GDPR – in Österreichische Post (C-300/21, 4 May 2023), the court clarified that compensation requires three cumulative elements: (i) an infringement of the GDPR, (ii) actual damage (material or non-material), and (iii) a causal link between the infringement and the damage. A mere infringement is insufficient to found liability. At the same time, member states may not impose a minimum seriousness threshold for non-material damage. The judgment firmly established Article 82 as a compensatory (not punitive) mechanism and structured the analytical framework now applied by national courts.
  • Security obligations and breach-related liability (Articles 24 and 32 GDPR) – in Natsionalna agentsia za prihodite (C-340/21, 14 December 2023), the court clarified that the mere occurrence of unauthorised disclosure or access does not automatically establish that the controller failed to implement “appropriate” technical and organisational measures under Articles 24 and 32, GDPR. The appropriateness of security measures must be assessed concretely and in light of the risk, taking into account the nature of the processing and the data involved. The CJEU thus rejected any irrebuttable presumption that a breach equals non-compliance. However, it acknowledged that erroneous disclosure by employees may indicate deficiencies in organisational measures if it reflects inadequate risk assessment or internal governance. The judgment also confirmed that fear of misuse may constitute non-material damage, provided that it is well founded and substantiated. This case therefore reinforces the risk-based logic of the GDPR while maintaining a fact-sensitive approach to liability.
  • Fear, loss of control and hypothetical risk (Article 82) – in MediaMarktSaturn (C-687/21, 25 January 2024), the CJEU further clarified the boundaries of non-material damage. It held that the concept of “non-material damage” may, in principle, encompass well-founded fear of future misuse and even temporary loss of control over personal data. However, a purely hypothetical risk is insufficient. Where it is established that an unauthorised third party did not actually become aware of the personal data, the mere fear of possible future dissemination does not, in itself, constitute compensable damage. The judgment reiterates that the data subject must demonstrate actual damage, however minimal. In this respect, MediaMarktSaturn does not depart from earlier case law but refines it by drawing a clearer distinction between abstract risk and substantiated harm.
  • Liability regime and burden of proof (Article 82(3)) – in juris GmbH (C-741/21, 11 April 2024), the CJEU clarified the operation of Article 82(3), confirming that the GDPR establishes a fault-based liability regime with a reversed burden of proof. A controller cannot avoid liability merely by arguing that the damage resulted from negligence of an employee acting under its authority. Since employees act under the controller’s authority within the meaning of Article 29, GDPR, the controller remains responsible unless it proves that it was not in any way responsible for the event giving rise to the damage. The exemption under Article 82(3) therefore applies only where the controller demonstrates the absence of a causal link.
  • Right of access and scope of the “copy” obligation (Article 15 GDPR) – in FT v DW (C-307/22, 26 October 2023), the CJEU strengthened the effectiveness of the right of access. It held that patients are entitled to obtain a first copy of their medical records free of charge, irrespective of the purpose of the request, including where the data is sought for potential litigation. National law cannot impose systematic fees for the first copy. The CJEU also clarified that the right to obtain a “copy” may require full reproduction of documents, including diagnoses and treatment details, where necessary to ensure intelligibility and effective exercise of rights.

Taken together, these decisions have not revolutionised EU privacy litigation but have consolidated a structured and increasingly predictable framework. This jurisprudence now provides national courts with a coherent template for adjudicating GDPR-based civil claims.

At EU level, collective redress in privacy and data protection matters is structured around Article 80, GDPR and the Representative Actions Directive (RAD) (Directive (EU) 2020/1828).

Article 80, GDPR allows not-for-profit organisations representing data subjects to bring complaints and judicial remedies on their behalf. Member states may also permit such organisations to act without an individual mandate. This mechanism has enabled strategic litigation by consumer and privacy associations, particularly in cases involving large-scale tracking, platform practices and data breaches.

The RAD, applicable since 25 June 2023, requires all member states to ensure the availability of representative actions aimed at protecting the collective interests of consumers. It applies to infringements of a broad list of EU legislation set out in Annex I, including data protection rules.

Only designated “qualified entities” may bring representative actions under the RAD. These must generally be non-profit organisations or public bodies pursuing consumer interests and satisfying independence and transparency requirements, including safeguards regarding third-party funding. For cross-border actions, designation criteria are harmonised and subject to mutual recognition across member states.

The RAD requires member states to provide for both injunctive measures and redress measures (including compensation), but it leaves significant procedural discretion at national level. In particular, member states may choose between opt-in or opt-out participation models, or adopt hybrid approaches, especially for redress actions. As a result, the structure and practical reach of collective compensation vary across jurisdictions.

Participation models (opt-in or opt-out) and the availability of collective compensation therefore differ across member states. In practice, injunctions remain more common and procedurally straightforward than collective damages. Recent developments centre on the domestic implementation of the RAD and the gradual emergence of case law applying these new mechanisms, with some jurisdictions (notably the Netherlands and Germany) becoming more active fora for data-related collective litigation.

At EU level, rules on non-personal data are shaped primarily by a set of “data economy” instruments designed to facilitate data circulation, access and market fairness, rather than solely to protect privacy.

The Free Flow of Non-Personal Data Regulation (Regulation (EU) 2018/1807) addresses barriers to the movement of non-personal data within the internal market by prohibiting member states from enforcing data localisation restrictions for non-personal data, allowing it to be stored or processed anywhere in the EU.

The Data Governance Act (Regulation (EU) 2022/868) complements this framework by creating mechanisms to encourage voluntary data sharing. It regulates data intermediation services, introduces a framework for data altruism organisations, and establishes conditions for the reuse of certain protected public sector data.

The Data Act (Regulation (EU) 2023/2854) goes further and provides a broad, cross-sector regime governing access to and use of data generated by connected products and related services, including in the IoT environment. It imposes obligations on data holders to make data available to users and, in defined circumstances, to third parties. It also introduces business-to-business data sharing obligations subject to fair, reasonable and non-discriminatory (FRAND) conditions, establishes rules to facilitate cloud switching and interoperability, and includes safeguards against unlawful access by third-country authorities.

Taken together, these instruments form a layered regulatory architecture in which non-personal data regulation, competition policy and data protection law operate in parallel and, where relevant, cumulatively.

The EU’s data economy instruments are designed to operate alongside, not instead of, the GDPR. They regulate access to and use of data, but do not alter the fundamental rules governing the processing of personal data.

Where datasets are purely non-personal, instruments such as the Free Flow of Non-Personal Data Regulation and the Data Act apply without triggering GDPR obligations. In the case of mixed datasets, the GDPR continues to govern the personal data component, while the data economy framework regulates access, sharing and portability at the level of the dataset.

The Data Act expressly states that it does not affect the application of Union data protection law. Accordingly, any data-sharing obligation must comply with GDPR requirements, including the existence of a lawful basis, respect for purpose limitation and data minimisation, and appropriate security safeguards.

Non-personal data may also be protected under trade secret law, intellectual property rules or contractual confidentiality. The Data Act seeks to balance broader access rights with the protection of commercially sensitive information through confidentiality safeguards.

In this way, the EU framework is layered: data economy legislation promotes access and re-use, while data protection law continues to constrain the processing of personal data.

Across the EU data economy framework, several recurring principles and obligations emerge, distributed across different instruments (Free Flow Regulation, Data Governance Act, Data Act, etc).

Common Structural Features

Although the relevant rules are spread across several instruments, they reflect a shared policy orientation. The EU data framework seeks to promote data mobility and access, reduce structural imbalances in data-driven markets, and prevent technical or contractual lock-in, while preserving confidentiality and legitimate commercial interests.

A first recurring principle is the free circulation of non-personal data within the internal market. The Free Flow Regulation prohibits unjustified data localisation requirements and reinforces the idea that non-personal data may be stored and processed anywhere in the Union.

A second core feature is the promotion of access to and re-use of data. The Data Act establishes access rights in defined contexts, particularly for data generated by connected products and related services, allowing users to obtain and, in certain cases, direct the sharing of such data with third parties. More broadly, EU legislation aims to ensure that data access is not unreasonably withheld where it is economically and socially valuable.

Fairness and non-discrimination are also central themes. In business-to-business settings, data sharing may be subject to fair, reasonable and non-discriminatory (FRAND) conditions. Switching and interoperability obligations for data processing services aim to prevent lock-in and enhance market contestability.

Finally, all instruments recognise the need to safeguard confidential business information. Trade secrets and commercially sensitive data remain protected, and access obligations must be implemented with appropriate confidentiality safeguards.

Instrument-Specific Rights and Duties

While these principles are common across the framework, certain instruments introduce more specific obligations.

For instance, the Data Act imposes duties on “data holders” to make product-generated data accessible to users and, in defined circumstances, to third parties. It also introduces interoperability and switching obligations for data processing service providers.

Similarly, the Data Governance Act regulates data intermediation services and data altruism organisations, requiring neutrality, transparency and organisational safeguards.

Organisational Compliance Considerations

For organisations, compliance begins with identifying their role under the relevant instruments (data holder, user, intermediary or cloud service provider) and mapping relevant data flows, particularly in IoT environments.

Contractual arrangements must be reviewed to ensure alignment with access and FRAND standards, and technical systems assessed for interoperability and switching readiness. Internal processes should safeguard trade secrets and commercially sensitive information.

Where personal data is involved, GDPR obligations apply in parallel.

Under the Data Governance Act and the Data Act, member states are required to designate one or more competent authorities responsible for supervision and enforcement. The institutional choice is left to national law, and approaches differ across the EU.

Some member states have opted to entrust enforcement, at least in part, to their data protection authority, particularly where Data Act obligations overlap with personal data processing (eg, France, Spain). Others have designated digital, communications or competition regulators as lead authorities, reflecting the market-regulatory dimension of the framework (eg, Germany, Netherlands).

This divergence reflects the hybrid nature of EU data economy legislation, which combines elements of data governance, digital market regulation and, in certain cases, competition oversight. Where personal data is involved, co-ordination with GDPR supervisory authorities is necessary. In parallel, disputes involving dominant platforms or data access conditions may also fall within the remit of competition authorities.

Overall, enforcement remains decentralised but increasingly requires co-operation across regulatory domains, mirroring the integrated structure of the EU’s digital regulatory strategy.

Online tracking technologies (including cookies, SDKs, pixels and similar device identifiers) are governed primarily by Article 5(3) of the ePrivacy Directive (Directive 2002/58/EC), as implemented in national law, in conjunction with the GDPR.

Article 5(3) establishes a general opt-in model: storing or accessing information on a user’s device requires prior informed consent, unless the technology is strictly necessary to provide a service explicitly requested by the user. This rule applies regardless of whether the data accessed is personal data. Where personal data is subsequently processed, the GDPR applies in parallel.

Although the core consent requirement is harmonised, implementation and enforcement vary across member states. In practice, most member states require granular, prior consent for analytics and advertising cookies, typically via consent management platforms. Legitimate interests cannot substitute for consent at the device-access stage under Article 5(3), even if they may be relied upon for subsequent processing under the GDPR in limited contexts.

Personalised and targeted advertising in the EU is regulated primarily under the GDPR and the ePrivacy Directive, supplemented, in the case of large online platforms, by the Digital Services Act (DSA).

Under the GDPR, personalised advertising must rely on a valid lawful basis under Article 6 and comply with transparency, purpose limitation and data minimisation principles. In practice, consent is frequently relied upon in online advertising environments, particularly where advertising is based on user-level data. The use of special categories of data for marketing purposes is generally prohibited unless explicit consent is obtained.

The DSA introduces additional constraints for online platforms, notably prohibiting targeted advertising based on profiling using sensitive data and restricting targeted advertising directed at minors based on profiling.

Rules governing unsolicited electronic marketing (such as email or SMS campaigns) derive from the ePrivacy Directive as implemented in national law. While most member states apply an opt-in model for business-to-consumer (B2C) communications, the treatment of business-to-business (B2B) marketing varies. Some jurisdictions extend consent requirements to communications addressed to corporate contacts, whereas others allow opt-out systems for professional communications, subject to national conditions.

Accordingly, although the core data protection standards applicable to personalised advertising are harmonised, practical compliance in online marketing remains partly dependent on national implementation choices.

In the employment context, the GDPR applies as a baseline, but Article 88 permits member states to adopt more specific workplace rules. As a result, data protection obligations are often complemented by national labour laws, works council rights or collective agreements.

Employee monitoring, whether through time-recording systems, IT usage controls or CCTV, must be justified, proportionate and transparent. Employers commonly rely on legitimate interests, but this requires a careful balancing exercise given the imbalance of power in employment relationships. Consent is generally not considered freely given in this setting.

Remote work and bring-your-own-device (BYOD) arrangements require appropriate technical and organisational measures, particularly to ensure data security and a clear separation between professional and private information.

In recruitment, processing must be limited to what is necessary for the role. Special categories of data require a specific legal basis, and information relating to unsuccessful applicants should not be retained longer than necessary.

In many member states, national labour law imposes additional safeguards, especially regarding workplace surveillance.

During due diligence in an M&A transaction, any personal data disclosed to potential buyers must be limited to what is necessary and proportionate. In practice, secure and access-restricted virtual data rooms (VDRs) are used, with encryption, logging and tiered access controls. Sensitive information should be anonymised or redacted where feasible. Disclosure must rely on a lawful basis, typically legitimate interests, and be supported by appropriate confidentiality arrangements, including NDAs and documented access restrictions.

In asset deals, the transfer of personal data to the purchaser requires a lawful basis and must remain compatible with the original purposes of processing. Where a new controller is introduced, transparency obligations under Articles 13 and 14, GDPR may apply. In share deals, although the legal entity remains unchanged, changes in processing practices or governance structures may trigger updated transparency or internal compliance measures.

Cross-border transactions must comply with the international transfer regime under Chapter V, GDPR.

Following closing, integration requires alignment of privacy notices, retention schedules and technical and organisational measures. Data must be securely transferred, retained or deleted in accordance with the agreed transaction structure and GDPR principles.

Cross-border transfers of personal data from the EU are governed by Chapter V of the GDPR.

A “transfer” occurs where personal data is disclosed or made accessible to a controller or processor in a third country (that is, outside the EU) or to an international organisation. This includes remote access from outside the EU.

Personal data may be transferred to a third country only if Chapter V conditions are met. The primary route is an adequacy decision under Article 45, GDPR. Where the European Commission has determined that a third country ensures an essentially equivalent level of protection, personal data may flow to that country without additional transfer authorisation. In the absence of adequacy, transfers must rely on appropriate safeguards under Article 46, GDPR, most commonly standard contractual clauses or binding corporate rules. Exporters are required to assess whether the legal framework of the recipient country allows the safeguards to be effective in practice.

Derogations under Article 49, GDPR are available only for specific and occasional situations and cannot be used for systematic or large-scale transfers.
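The resulting hierarchy of transfer tools can be sketched as an ordered check (illustrative only; each boolean stands in for a documented legal determination, including any transfer impact assessment for Article 46 tools, and the names are hypothetical):

```python
def transfer_basis(adequacy_decision: bool,
                   effective_safeguards: bool,
                   specific_occasional_situation: bool) -> str:
    if adequacy_decision:
        return "Article 45: transfer may proceed under the adequacy decision"
    if effective_safeguards:
        return "Article 46: rely on SCCs/BCRs plus any supplementary measures"
    if specific_occasional_situation:
        return "Article 49: derogation, for specific and occasional transfers only"
    return "no lawful mechanism: the transfer must not proceed"
```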

Regarding non-personal data, EU law does not impose a comparable transfer regime. However, the Data Act introduces safeguards aimed at preventing unlawful third-country governmental access to data held by EU data processing service providers.

Under the GDPR, international transfers of personal data do not generally require prior registration, notification or approval by a supervisory authority, provided that a recognised transfer mechanism under Chapter V is used.

Sector-specific frameworks (eg, in financial services, telecommunications or export-controlled industries) may impose separate notification or approval requirements under national or EU law.

EU law promotes the free movement of data within the Union. The Free Flow Regulation prohibits member states from imposing unjustified localisation requirements for non-personal data. For personal data, the GDPR does not impose localisation requirements but restricts transfers to third countries unless Chapter V conditions are met. Remote access from a third country is typically considered a transfer and must comply with GDPR transfer mechanisms.

Sector-specific rules may impose localisation or residency requirements in limited contexts (eg, for certain public sector, health or financial data), but these are exceptions rather than the rule at EU level.

EU law contains rules that may restrict compliance with certain foreign disclosure or discovery orders.

Under Article 48, GDPR, judgments or administrative decisions from third-country authorities requiring the transfer or disclosure of personal data are enforceable in the EU only if based on an international agreement, such as a mutual legal assistance treaty (MLAT). In other words, foreign court orders cannot, by themselves, justify a transfer of personal data from the EU.

Where a foreign authority requests access to personal data, the disclosure must still comply with the applicable transfer mechanism (eg, adequacy or appropriate safeguards).

Recent developments at EU level regarding the regulation of international transfers of personal data continue to be shaped by the aftermath of the judgment in Schrems II (C-311/18). The EU–US Data Privacy Framework (DPF) was adopted on 10 July 2023 to restore a formal transatlantic data transfer mechanism by addressing privacy concerns related to US surveillance and redress options.

Although debate persists as to whether the DPF fully achieves its intended level of protection, the General Court, in its judgment of 3 September 2025, Latombe v Commission (T-553/23), upheld the DPF, accepting that safeguards introduced by US Executive Order 14086 (7 October 2022) and the Data Protection Review Court could ensure a level of protection essentially equivalent to that guaranteed by EU law.

In May 2025, the Irish Data Protection Commission (DPC) imposed a substantial EUR530 million fine on TikTok Technology Limited (“TikTok”) for failing to ensure equivalent protection for personal data transferred to China. The DPC found that, despite conducting a transfer impact assessment (TIA), TikTok had not sufficiently assessed Chinese laws and practices affecting the data and therefore failed to demonstrate “essential equivalence” under the GDPR. This decision raises doubts about the sufficiency of standard contractual clauses (SCCs) for transfers to certain third countries and highlights the practical challenges of compliance where risk assessments depend on third-country legal frameworks rather than solely on the physical location of data.

The CJEU and the EDPB have also provided important clarifications on international transfer rules, including the requirement to conduct transfer impact assessments to determine whether supplementary measures are needed in addition to SCCs (see EDPB Guidelines 05/2021 and Recommendations 01/2020; Schrems I (C-362/14); Schrems II (C-311/18); Ministerstvo zdravotnictví (C-710/23); and Bindl v Commission (T-354/22)). Together, this case law and regulatory guidance consolidate a framework that requires exporters to assess third-country laws carefully and implement effective supplementary safeguards where necessary.
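The assessment steps described in this guidance lend themselves to structured documentation. The following dataclass is a hypothetical sketch of a transfer impact assessment record, loosely following EDPB Recommendations 01/2020; the field names and the simplistic effectiveness check are assumptions, not a prescribed format, and a genuine assessment is qualitative rather than boolean.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a transfer impact assessment (TIA) record,
# loosely modelled on EDPB Recommendations 01/2020. Not a prescribed format.

@dataclass
class TransferImpactAssessment:
    destination: str
    transfer_tool: str                        # e.g. "2021 SCCs, Module 2"
    problematic_laws: list[str]               # third-country laws/practices identified
    supplementary_measures: list[str] = field(default_factory=list)

    def tool_effective_in_practice(self) -> bool:
        # If problematic laws undermine the transfer tool, effective
        # supplementary measures are needed; otherwise the transfer should
        # be suspended. (A real assessment is qualitative, not a boolean.)
        return not self.problematic_laws or bool(self.supplementary_measures)

tia = TransferImpactAssessment(
    destination="Exampleland",
    transfer_tool="2021 SCCs, Module 2",
    problematic_laws=["broad governmental access regime"],
    supplementary_measures=["end-to-end encryption with keys held in the EEA"],
)
print(tia.tool_effective_in_practice())  # True
```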

Looking ahead, proposed amendments under the Digital Omnibus Regulation include a more contextual definition of personal data, under which information would not qualify as personal data for a given entity if that entity cannot reasonably identify the data subject. This could reduce the scope of GDPR obligations, including those related to international transfers, by excluding certain data from being classified as personal data. In practice, this may ease burdens on certain cross-border data routing arrangements by limiting the application of Chapter V, GDPR.

In EDPS v SRB (C-413/23 P), the CJEU confirmed that whether information constitutes personal data depends on the circumstances, particularly on who holds the additional information necessary for re-identification and whether re-identification is reasonably likely in relation to a particular recipient. This confirms a contextual (or relational) understanding of personal data: information may constitute personal data in the hands of one actor, yet not for another recipient lacking reasonably available means of re-identification.

Gerrish Legal

15 rue de Surène
75008
Paris
France

Kammakargatan 47
11124
Stockholm
Sweden

+33 6 74 02 45 07

info@gerrishlegal.com www.gerrishlegal.com

Trends and Developments



AI Governance in 2026: Why the GDPR Remains at the Centre of Europe’s Expanding Digital Rulebook

Regulatory layering, constitutional anchoring and procedural recalibration

For US and other non-European SaaS and technology companies operating or expanding in Europe, the regulatory landscape has evolved from fragmentation into structured constitutional layering. In particular, the adoption of the Artificial Intelligence Act (Regulation (EU) 2024/1689) (AI Act) and the Data Act (Regulation (EU) 2023/2854) (Data Act) has significantly expanded the EU’s digital rulebook.

Notwithstanding the breadth of these legislative developments, the General Data Protection Regulation (Regulation (EU) 2016/679) (GDPR) remains the central legal instrument governing AI systems whenever personal data is involved.

This centrality is neither accidental nor transitional. The AI Act expressly preserves European data protection law. The Data Act operates alongside it. More fundamentally, the GDPR’s architecture is anchored in Articles 7 and 8 of the Charter of Fundamental Rights of the European Union, which the Court of Justice of the European Union (CJEU) has repeatedly interpreted as requiring effective, not merely formal, protection (see Schrems I, Case C-362/14; Schrems II, Case C-311/18). AI regulation in Europe therefore sits within an already constitutionalised data protection framework.

That constitutional positioning has concrete jurisdictional consequences. The GDPR’s extraterritorial scope under Article 3(2) ensures that non-EU controllers and processors offering goods or services to individuals in the EU, or monitoring their behaviour, fall within scope irrespective of establishment. For US and other non-European SaaS providers deploying AI-enabled analytics, behavioural optimisation tools, AI agents or foundation models into the European market, this threshold is routinely met in practice.

In practice, regulatory scrutiny of AI systems continues to crystallise through core GDPR provisions: lawful basis (Article 6), purpose limitation and minimisation (Article 5), automated decision-making safeguards (Article 22), security obligations (Article 32) and international transfer restrictions (Articles 44–49). While the AI Act introduces system-level risk categorisation and conformity assessments, those mechanisms do not displace the GDPR’s operative requirements where personal data is processed. The assessment of lawfulness, proportionality and individual rights remains anchored in the GDPR.

The European Commission’s publication of COM(2025) 837 final, referred to as the “Digital Omnibus”, marks the first substantive legislative proposal to revisit aspects of the GDPR since 2018. It reflects an institutional recognition that the cumulative expansion of the Union’s digital regulatory framework – including the AI Act, the Data Act, the Digital Markets Act (DMA), the Digital Services Act (DSA) and cybersecurity instruments – may generate procedural duplication and compliance friction.

However, the proposal does not alter the substantive architecture of the GDPR, nor its rights-based core. Articles 5 (principles), 6 (lawfulness), 22 (automated decision-making), 25 (protection by design), 32 (security) and Chapter V (international transfers) remain intact. Nor does it displace the jurisprudential trajectory established in Schrems II (Case C-311/18), Meta Platforms (Case C-252/21), or SCHUFA (Case C-634/21), each of which reinforces substantive fundamental rights protection over formalistic compliance.

For all technology companies operating or scaling in the EU, the practical conclusion is clear: Europe is refining procedural mechanics, not retreating from constitutional data governance. In 2026, the GDPR continues to constitute the primary normative and interpretative framework through which AI systems involving personal data are assessed.

From generative AI to autonomous agents at enterprise level

The technological evolution since 2023 has shifted enterprise deployment from generative output tools to agentic execution systems. AI agents increasingly perform operational functions: executing transactions, allocating resources, adjusting pricing, scoring performance, approving access and managing workflow orchestration across enterprise environments.

This shift unfolds within an expanding Union regulatory framework. The AI Act subjects certain decision-making systems to enhanced governance where they qualify as “high risk”, including systems used in creditworthiness assessments, employment, access to essential services and other consequential contexts. Such systems are subject to structured obligations relating to risk management, technical documentation, data governance and human oversight.

However, the growing operational autonomy of enterprise AI systems materially heightens exposure under Article 22(1), GDPR, which grants individuals the right not to be subject to a decision based solely on automated processing that produces legal effects concerning them or similarly significantly affects them. The qualification of a system as “high-risk” under the AI Act does not exhaust the legal analysis. Article 22 operates independently and may apply irrespective of AI Act classification.

Article 22 was historically under-litigated. That position shifted with SCHUFA (Case C-634/21), where the CJEU adopted a functional approach, holding that automated credit scoring which plays a decisive role in lending decisions may fall within Article 22. The Court emphasised the real influence of algorithmic outputs, rather than the formal allocation of final decision-making authority. The inquiry thus focused on practical effect.

This reasoning is directly relevant to contemporary enterprise AI agents. Systems determining eligibility thresholds, dynamic pricing parameters, workforce scoring, procurement prioritisation or access controls may, depending on their operational influence, fall within Article 22 even where nominal human validation exists. Supervisory authorities increasingly assess whether human oversight is meaningful in substance. A nominal “human in the loop” who routinely validates automated outputs without critical review may not prevent classification as solely automated processing: the assessment is functional rather than merely formal.

At the same time, the enterprise deployment of AI agents now extends beyond clearly “decisional” systems. In commercial contexts, AI agents increasingly generate tailored proposals, negotiate pricing structures and conclude transactions through conversational interfaces. In some sectors, agent-based interaction is beginning to displace traditional web infrastructure: users interact with agents that access structured datasets and generate executable outputs, with transactional completion occurring within the interface itself. In these models, the website becomes secondary; the dataset and agent architecture become primary.

Many such systems may fall outside the strict scope of Article 22 where outputs are advisory, iterative or optimisation-driven rather than determinative of legal rights. Yet this does not remove them from the GDPR framework. Where personal data is processed, the core principles of Articles 5 and 6 remain fully engaged. The compliance analysis shifts from decisional effect to structural processing risks.

In agent-based architectures, particular attention must be paid to data minimisation in large-scale ingestion models; purpose limitation where datasets are repurposed across agentic workflows; transparency obligations where decision logic is mediated through conversational interfaces rather than static disclosures; and accuracy where systems dynamically update or synthesise data in real time. The increasing autonomy of enterprise AI heightens exposure not solely because Article 22 is triggered, but because processing becomes persistent, embedded and opaque within organisational infrastructure.

The result is a layered regulatory interaction. The AI Act structures governance at the level of system design and risk categorisation. Article 22 GDPR operates as a rights-based constraint where automated outputs materially affect individuals. Beyond Article 22, the broader GDPR principles continue to regulate the scale, architecture and accountability of enterprise AI systems. Compliance with one regime does not displace the other. Enterprise AI governance therefore requires simultaneous attention to decisional effects, structural data flows and architectural autonomy.

Lawfulness, purpose limitation and continuous model optimisation

AI-driven SaaS platforms frequently rely on continuous data ingestion to refine models and optimise outputs. This is even more the case where AI agents are deployed across enterprise-level infrastructure. In practice, optimisation capacity is closely linked to the scale and diversity of data processed. Data collected for one operational purpose is frequently reused for analytics enhancement or behavioural calibration. This dynamic directly engages Article 5(1)(b) (purpose limitation), Article 5(1)(c) (data minimisation) and Article 6 (lawful basis) of the GDPR.

Under Article 6(4), further processing must be assessed for compatibility with the original purpose. The CJEU has consistently emphasised strict adherence to purpose boundaries. In Bara (Case C-201/14), the CJEU held that repurposing personal data without proper notification and legal basis violates transparency principles. In Breyer (Case C-582/14), it confirmed that even dynamic IP addresses may constitute personal data where identification is reasonably possible.
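In engineering terms, the purpose-limitation constraint can be enforced as a guard at the point where an agentic workflow reaches for an existing dataset. The snippet below is a hypothetical sketch: the purpose labels and compatibility map are invented for illustration, and a genuine Article 6(4) compatibility assessment is a legal exercise, not a lookup.

```python
# Hypothetical purpose-limitation guard for agentic pipelines.
# Purpose labels and the compatibility map are illustrative only;
# a real Article 6(4) assessment cannot be reduced to a lookup table.

COMPATIBLE_PURPOSES: dict[str, set[str]] = {
    "order_fulfilment": {"order_fulfilment", "fraud_prevention"},
    "support_chat": {"support_chat"},
}

def check_reuse(collected_for: str, reused_for: str) -> None:
    """Block an agent workflow from silently repurposing personal data."""
    allowed = COMPATIBLE_PURPOSES.get(collected_for, {collected_for})
    if reused_for not in allowed:
        raise PermissionError(
            f"reuse for '{reused_for}' needs an Article 6(4) compatibility "
            f"assessment; data was collected for '{collected_for}'"
        )

check_reuse("order_fulfilment", "fraud_prevention")  # passes
# check_reuse("support_chat", "model_training")      # would raise
```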

Supervisory authorities are increasingly sceptical of broad reliance on legitimate interests under Article 6(1)(f) for large-scale AI optimisation. Generic references to “service improvement” or “innovation” are arguably insufficient where processing materially affects individuals or involves behavioural profiling. Necessity and proportionality must be demonstrated concretely.

For non-European technology companies accustomed to expansive contractual data licences, this represents a structural compliance shift. European data protection law does not treat contractual permission as determinative of lawfulness. As confirmed in Meta Platforms (Case C-252/21), contractual framing cannot artificially broaden processing beyond what is objectively necessary.

Continuous optimisation models therefore sit in inherent tension with the purpose limitation principle under the GDPR. Governance frameworks and product design must anticipate this structural constraint.

Transparency, fairness and the substantive legitimacy of AI processing

As AI systems increasingly mediate access to services, information and economic opportunities, the legitimacy of processing turns not only on lawful basis and purpose limitation, but equally on transparency and fairness – two structural principles of EU data protection law. Transparency under Articles 13–15 GDPR and fairness under Article 5(1)(a) are not ancillary disclosure obligations. They operationalise the fundamental right to data protection and structure the relationship between organisations and individuals whose data is processed.

In AI contexts, transparency operates as a precondition of legitimate deployment. Where individuals cannot meaningfully understand the nature, logic or implications of automated systems affecting them, the substantive lawfulness of processing becomes difficult to sustain. Legitimacy under the GDPR is not secured through formal compliance alone; it depends on whether processing remains aligned with reasonable expectations and fundamental rights protections.

The CJEU’s jurisprudence reinforces this orientation. In Meta Platforms (Case C-252/21), the court clarified that reliance on contractual necessity under Article 6(1)(b) requires objective necessity, not strategic drafting. More broadly, the judgment confirms that fairness functions as a substantive constraint on how processing operations are structured and justified. It is not merely a formal recital-level principle.

These principles have concrete implications for AI systems that infer preferences, assign behavioural risk categories, personalise outputs or structure opportunities in ways that materially influence individuals. Generic disclosures or abstract descriptions of “AI use” may fail to satisfy transparency and fairness requirements. Supervisory authorities increasingly assess whether individuals receive intelligible explanations of the logic involved and the foreseeable consequences of profiling or automated evaluation.

Transparency obligations under Articles 13–15, GDPR require the provision of meaningful information about the logic involved, as well as the significance and envisaged consequences of processing. While sometimes described as a “right to explanation”, this entitlement is legally framed as a right of access and transparency rather than full algorithmic disclosure. The obligation concerns intelligibility and practical comprehensibility – not the disclosure of source code or proprietary model architecture.

In algorithmic management systems, transparency cannot be reduced to a notice-layer exercise. Where automated or algorithmically influenced evaluations shape professional opportunities, remuneration, access or progression, explanations must enable individuals to understand the rationale and, where appropriate, to challenge it. Formal human oversight without informational clarity will not satisfy this standard.

The AI Act reflects this heightened sensitivity. Its transparency provisions – including obligations to inform individuals when interacting with certain AI systems and to disclose specific high-risk techniques such as emotion recognition or biometric categorisation – institutionalise a system-level expectation of intelligibility. However, these obligations supplement rather than displace the GDPR’s broader framework of transparency, fairness and proportionality. Where personal data is processed, legitimacy remains anchored in the GDPR.

Enterprise AI governance therefore requires dual attention: ex ante risk assessment through robust DPIAs under Article 35, and ongoing compliance with transparency, accountability and proportionality principles under Articles 5 and 13–15. In European regulatory practice, transparency is inseparable from legitimacy. It is measured not by the presence of formal disclosures, but by whether individuals can meaningfully understand how automated systems structure decisions, opportunities and outcomes affecting them.

Enterprise AI, algorithmic management and processor exposure

The regulatory exposure of enterprise AI systems does not arise solely at the point of automated decision-making. It emerges equally from the institutional embedding of algorithmic evaluation and AI use across an organisation.

In workforce and enterprise environments, AI systems and agents are not discrete customer-facing tools but embedded components of organisational governance and managerial oversight. They structure recruitment pipelines, allocate tasks, monitor productivity, score performance, prioritise procurement flows and model workforce optimisation. Their function is managerial rather than transactional.

Where personal data is processed in such environments, the GDPR analysis extends beyond Article 22 and automated decision-making (as discussed above). Articles 5 and 6, GDPR govern lawfulness, purpose limitation and proportionality. More structurally, Article 35, GDPR requires controllers to conduct a data protection impact assessment (DPIA) where processing is likely to result in a high risk to the rights and freedoms of natural persons.

Systematic and extensive evaluation of individuals based on automated processing – particularly where combined with large-scale monitoring – is expressly identified as a trigger for DPIA obligations. In workforce contexts, this threshold is frequently met. Continuous monitoring, predictive analytics and performance modelling operate within an environment of institutional power asymmetry, amplifying the potential impact on individuals’ rights.
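A coarse screening helper can flag when these DPIA triggers are engaged. The sketch below is an assumption-laden illustration: the boolean inputs compress questions that are qualitative in practice, and real screening must follow Article 35(3) and the national DPIA lists published by supervisory authorities.

```python
# Illustrative DPIA screening stub. The inputs compress qualitative
# questions under Article 35(3), GDPR; a real screening must also be
# checked against national supervisory authority DPIA lists.

def dpia_required(systematic_extensive_evaluation: bool,
                  significant_effects_on_individuals: bool,
                  large_scale_monitoring: bool) -> bool:
    if systematic_extensive_evaluation and significant_effects_on_individuals:
        return True   # Article 35(3)(a) profile
    if large_scale_monitoring:
        return True   # e.g. continuous workforce monitoring
    return False      # screening against national lists still needed

# Typical workforce-analytics deployment:
print(dpia_required(True, True, True))  # True
```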

However, the shift in enterprise AI governance lies not only in Article 22 exposure, but in the scale, persistence and organisational centrality of algorithmic use – especially as organisations migrate toward agent-based architectures. What was previously episodic assessment becomes continuous evaluation. What was once advisory becomes embedded into managerial infrastructure.

Supervisory authorities have demonstrated particular sensitivity to such deployments. DPIAs in this context must go beyond formal documentation. They require substantive assessment of necessity, proportionality, data minimisation and the effectiveness of safeguards – including meaningful human review mechanisms. Where residual high risk remains, Article 36 GDPR may require prior consultation with supervisory authorities before processing commences.

This governance shift also has material consequences for processors and technology suppliers. While the obligation to conduct a DPIA rests formally with the controller, in practice the burden increasingly extends down the supply chain.

Article 28(3)(f), GDPR requires processors to assist controllers in ensuring compliance with Articles 32–36, including DPIA preparation and prior consultation where required. In the context of enterprise AI systems, this assistance is no longer limited to generic security documentation. Our experience is that European customers increasingly expect detailed technical descriptions of model functionality and training logic; clear articulation of data inputs, outputs and processing flows; documentation of override mechanisms and human review pathways; risk mitigation descriptions aligned with DPIA frameworks; and transparency matrices mapping system features to GDPR safeguards.
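The “transparency matrix” mentioned above can be maintained as a simple feature-to-safeguard mapping, versioned alongside product documentation. The entries below are hypothetical examples of what such a mapping might contain; none of the feature names or references are real.

```python
# Hypothetical transparency matrix: system features mapped to documented
# GDPR safeguards. An empty result exposes a documentation gap.

TRANSPARENCY_MATRIX: dict[str, list[str]] = {
    "workforce_scoring_model": [
        "Article 22: documented human review pathway with override authority",
        "Articles 13-15: plain-language description of the logic involved",
        "Article 35: covered by DPIA reference HR-2026-01 (illustrative)",
    ],
    "dynamic_pricing_agent": [
        "Article 5(1)(c): inputs limited to transaction-level attributes",
        "Article 32: model access restricted, logged and reviewed",
    ],
}

def safeguards_for(feature: str) -> list[str]:
    """Return the documented safeguards for a system feature."""
    return TRANSPARENCY_MATRIX.get(feature, [])

print(safeguards_for("workforce_scoring_model")[0])
```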

This expectation is reinforced by the AI Act’s documentation architecture, which requires high-risk system providers to maintain detailed technical files, risk management systems and data governance documentation. Even where the AI Act does not formally apply, enterprise customers increasingly replicate its structure contractually.

Accordingly, procurement practice has evolved. It is now common for European enterprise customers to require AI-specific contractual appendices sitting alongside data processing agreements. These AI appendices typically address: allocation of controller and processor roles in model deployment; use of customer data for model training or improvement; explainability and transparency commitments; audit rights relating to algorithmic governance; security measures and incident reporting specific to AI systems; and limitations on autonomous system modification or retraining. This development shifts AI governance into the contractual layer, and risk allocation becomes central. Providers must therefore carefully delineate the scope of their processing activities, the boundary between customer-configured decision logic and system architecture, the extent of their assistance obligations under Article 28 and the limits of liability relating to customer-defined deployment contexts.

Without careful structuring, providers risk de facto assumption of controller-level exposure through overly broad contractual commitments.

At the same time, resistance to reasonable assistance is commercially untenable. Enterprise customers – particularly in regulated sectors – increasingly require substantive DPIA support and structured co-operation in supervisory engagement. The provider response must therefore be calibrated: offering documented assistance, transparency and structured governance support, while maintaining clear contractual allocation of responsibility for lawfulness determinations and deployment choices.

For processors and suppliers, this shift additionally requires structured contractual governance, calibrated assistance under Article 28 and careful risk allocation in AI-specific appendices.

In this environment, compliance is no longer confined to the legality of discrete automated decisions. It extends to the architecture of organisational oversight – and to the contractual frameworks through which AI systems are supplied, configured and governed.

Data sovereignty, international transfers and essential equivalence

International data transfers remain one of the most structurally sensitive aspects of EU data protection law.

In Schrems II (Case C-311/18), the CJEU invalidated the EU–US Privacy Shield and clarified that transfer mechanisms under Article 46 must ensure “essentially equivalent” protection in practice. The Court emphasised substantive assessment of third-country legal regimes.

The subsequent adoption of the EU–US Data Privacy Framework (Commission Implementing Decision (EU) 2023/1795) under Article 45 has reintroduced a presumption of adequacy for participating US organisations and thus stabilised transatlantic transfers. However, adequacy does not eliminate accountability obligations under Articles 5 and 32.

Recent enforcement demonstrates the continued sensitivity of transfer risk. In 2025, the Irish Data Protection Commission imposed a EUR530 million fine on TikTok Technology Limited, finding that transfer impact assessments had not sufficiently addressed Chinese legal risks. The decision reinforces that documentation alone cannot compensate for deficiencies in substantive risk assessment or technical safeguards.

For AI systems operating across distributed infrastructure, transfer risk extends beyond hosting location. Remote administrative access, cross-border inference processing and sub-processor chains may constitute transfers under Article 44. In AI ecosystems characterised by continuous optimisation and federated infrastructure, data flows are often dynamic rather than static, complicating transfer mapping and impact assessments.
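This widened notion of a transfer suggests that mapping should capture who can access data and from where, not merely where data is hosted. The sketch below illustrates that idea under stated assumptions: the entities, countries and the partial EEA set are hypothetical, and legal qualification of each flow remains a case-by-case analysis.

```python
from dataclasses import dataclass

# Illustrative transfer-mapping sketch: remote access and sub-processor
# chains are modelled as flows that may constitute Article 44 transfers.
# Entities, countries and the EEA subset below are hypothetical/partial.

@dataclass
class DataFlow:
    actor: str       # sub-processor, support team, inference endpoint...
    country: str
    kind: str        # "hosting", "remote_access", "inference", ...

EEA_SUBSET = {"France", "Ireland", "Sweden"}  # partial, for illustration

def chapter_v_relevant(flows: list[DataFlow]) -> list[DataFlow]:
    """Flows that may need a Chapter V mechanism: access location matters,
    not only hosting location."""
    return [f for f in flows if f.country not in EEA_SUBSET]

flows = [
    DataFlow("eu-hosting-provider", "France", "hosting"),
    DataFlow("platform-support", "India", "remote_access"),
]
print([f.actor for f in chapter_v_relevant(flows)])  # ['platform-support']
```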

Beyond strict legal doctrine, data sovereignty has also evolved into both a regulatory and a commercial expectation. European enterprise customers increasingly request EU-based hosting, regional inference endpoints and contractual restrictions on secondary training uses. These requirements often exceed minimum legal standards, but reflect heightened sensitivity to strategic autonomy and geopolitical risk.

Accordingly, transfer compliance is no longer a discrete legal assessment but an architectural consideration. Data sovereignty increasingly shapes infrastructure design, vendor selection and governance strategy, reinforcing the GDPR’s role as the constitutional benchmark for cross-border data flows.

Special categories and inference risk

Article 9, GDPR prohibits processing of special categories of personal data, subject to limited exceptions. These categories – including data revealing health, political opinions, religious beliefs, trade union membership, genetic and biometric data – reflect heightened constitutional sensitivity within the EU legal system.

AI systems capable of large-scale inference significantly complicate this framework. Supervisory authorities have indicated that outputs revealing health status, political opinions or ethnic origin may fall within Article 9, even where not explicitly collected. In this respect, the legal qualification of processing may turn not only on the nature of the data collected, but on the attributes that can reasonably be derived from them.

The CJEU’s protective jurisprudence in cases such as OT v Vyriausioji tarnybinės etikos komisija (Case C-184/20) supports broad interpretation of sensitive data protections. The court has consistently emphasised that Article 9 must be interpreted in light of its objective of ensuring enhanced protection for data that exposes individuals to particular risks of discrimination or social exclusion.

For AI systems deployed in biometric identification, behavioural analytics, sentiment analysis or predictive profiling contexts, inference capability becomes legally determinative. The assessment must consider both input data and the reasonably foreseeable outputs that a model is capable of generating. A system that derives sensitive attributes through probabilistic modelling may fall within Article 9 even if such attributes were not intentionally targeted.

The expansion of inference capability therefore broadens the potential scope of Article 9 beyond traditional data collection paradigms. In AI-driven environments, sensitive data risk arises not only from what is collected, but from what can be inferred.

Conclusion: procedural recalibration and strategic positioning in 2026

Taken together, the AI Act, the Data Act, the DMA, the DSA and related cybersecurity instruments represent not fragmentation but consolidation within the Union’s digital framework. However, none of these instruments displaces the GDPR’s constitutional status. Where personal data is processed, the assessment of digital systems continues to be anchored in the GDPR’s principles of lawfulness, purpose limitation, fairness, transparency and accountability.

For technology providers, this architecture precludes siloed compliance. AI Act risk classification, data governance obligations under the Data Act, platform responsibilities under the DSA and competition-facing duties under the DMA intersect with, rather than replace, data protection governance. Risk assessment, impact analysis, automated decision-making safeguards and transfer architecture must therefore be evaluated within a unified compliance framework grounded in the GDPR’s substantive requirements.

This convergence of regulatory regimes – spanning data protection, competition, consumer protection and strategic autonomy – elevates digital governance to a matter of enterprise-wide risk management rather than discrete legal compliance. It increasingly demands oversight at the highest organisational levels.

Within this consolidated architecture, the Digital Omnibus signals procedural recalibration rather than substantive retreat. It seeks to streamline supervisory co-ordination without altering the underlying constitutional logic. Supervisory authorities remain guided by CJEU jurisprudence that consistently interprets data protection as a fundamental right requiring effective and practical protection.

For non-European technology companies, the strategic insight is therefore not merely that regulation is increasing in volume. Rather, the enforcement logic of the Union’s digital framework has stabilised around constitutional principles – fairness, proportionality, accountability and sovereignty – which now permeate the expanding rulebook.

The European Union is not downgrading its regulatory model; it is consolidating and systematising it. In this environment, embedding privacy, risk governance and architectural accountability at the design stage is not a defensive compliance strategy but a structural condition for sustainable participation in the European market.

Gerrish Legal

15 rue de Surène
75008
Paris
France

Kammakargatan 47
11124
Stockholm
Sweden

+33 6 74 02 45 07

info@gerrishlegal.com www.gerrishlegal.com
