The enactment of the Artificial Intelligence Act (AIA), adopted by the European Parliament in March 2024 and in force since 1 August 2024, marked the first comprehensive legislative effort towards AI regulation. Its impact is expected to deepen from 2025 onwards as its obligations phase in over the following years.
No specific national laws/guidelines have been enacted yet. As an EU member state, Portugal is subject to the immediate and direct application of European Regulations, the transposition of European Directives, and close regard for European Guidelines. As such, specific legislation (namely on privacy and data protection, IP, product safety and consumer protection) was enacted in close alignment with European standards.
In addition to the direct applicability of the AIA, AI-based systems must abide by the following.
Generative AI creates new data/content that resembles the distribution of its training data; its techniques are used to synthesise novel data samples. Predictive AI methods focus on making accurate predictions or decisions based on existing data and patterns.
Both approaches rely on several types of learning: supervised learning (where algorithms learn from labelled data to predict specific outcomes), unsupervised learning (extracting meaningful patterns from unlabelled data), and reinforcement learning (optimising decision-making policies over time, often through trial and error).
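As a minimal, purely illustrative sketch (using made-up one-dimensional data and no real ML library), the contrast between supervised and unsupervised learning described above can be shown as follows:

```python
# Illustrative sketch only, with invented toy data: supervised learning
# fits a rule to labelled examples; unsupervised learning finds structure
# in unlabelled data.

# --- Supervised learning: learn from labelled examples ---
labelled = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]  # (feature, label)

# "Training": place a decision boundary midway between the class means.
class_0 = [x for x, y in labelled if y == 0]
class_1 = [x for x, y in labelled if y == 1]
boundary = (sum(class_0) / len(class_0) + sum(class_1) / len(class_1)) / 2

def predict(x: float) -> int:
    """Predict the label of a new, unseen data point."""
    return 1 if x > boundary else 0

# --- Unsupervised learning: find structure in unlabelled data ---
unlabelled = [1.1, 1.9, 8.2, 9.1]

# Naive one-pass clustering: split the points around their overall mean.
centre = sum(unlabelled) / len(unlabelled)
cluster_low = [x for x in unlabelled if x <= centre]
cluster_high = [x for x in unlabelled if x > centre]
```

The supervised model needed the labels to learn its boundary, whereas the clustering step discovered the two groups from the raw values alone; this difference in reliance on labelled data underlies the legal distinction drawn between predictive and generative applications.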
AI’s growing presence in day-to-day market solutions makes it hard to distinguish between key and general applications. A notable key application is management: almost all management tasks can be automated with valid input into predictive AI solutions, enabling faster and more efficient preventive responses and reallocation of resources (in scenarios from CRM to airport traffic management). In the case of generative AI, a notable application is automated customer-messaging bots (the most advanced bots already combine elements of generative and predictive AI).
Telecoms companies are already implementing AI systems by designing networks with predictive AI as a tool to improve operational efficiency by balancing the network’s distribution and reducing operational costs.
Similarly, law firms in Portugal are adopting AI – especially generative models – to automate administrative tasks, support legal research, streamline billing, assist in due diligence, and speed up contract review, enhancing productivity and reducing human error.
National investment programmes in AI innovation include:
There are no apparent differences between generative and predictive AI.
Although AI regulations differ globally due to cultural norms and legislative contexts, there is a progressive global involvement in AI innovation and the widespread adoption of national AI strategies.
Several countries are forming multi-stakeholder groups of AI experts outside government to advise on current and future opportunities, risks and challenges of AI use in the public sector, as well as AI observatories to oversee the implementation of AI strategies, indicating a trend that may expand, as other countries progress in their AI strategy implementation.
As of 2025, Portugal has expanded public-private partnerships and integrated AI into healthcare, justice and education, aligned with European Data Space strategies. The national HealthDataHub, launched under the European Health Data Space Regulation, represents a key step toward secure, AI-driven innovation in medical research and public health.
The EU AIA is the first legislation focused entirely on AI. It introduces harmonised rules for placing AI systems on the market, bans certain practices deemed to pose an unacceptable risk (eg, social scoring), and sets transparency, safety and compliance requirements, particularly for high-risk and general-purpose AI. Key provisions began applying in February 2025, with others (eg, for general-purpose models) taking effect in August 2025, and full implementation by August 2026.
The AIA is part of a broader EU framework alongside the Data Governance Act, Data Act, Digital Services Act, Digital Markets Act, NIS2 and the Cyber Resilience Act. Together, these instruments support the EU’s 2030 digital strategy.
Portugal applies the AIA directly and complements it through national measures under AI Portugal 2030, which promotes AI integration across sectors such as healthcare, justice, education and public services. The launch of the HealthDataHub under the European Health Data Space Regulation reflects this commitment to secure, data-driven innovation.
Existing Portuguese and EU laws – including GDPR, IP, consumer protection and cybersecurity – also apply to AI systems, alongside sector-specific regulations. National authorities, particularly CNPD, oversee data governance and ethical use.
Predictive AI, often used in automated decision-making, may face tighter regulation than generative AI, which typically involves content creation unless deployed in high-risk contexts. Portugal may develop further rules tailored to national priorities.
Portugal also supports public-private partnerships and aligns with Common European Data Spaces, promoting responsible AI adoption while fostering innovation, especially among SMEs and start-ups.
Portugal has not yet enacted AI-specific legislation; however, the EU’s AIA, which entered into force on 1 August 2024, is directly applicable across all member states, including Portugal. Certain provisions, such as the prohibition of specific AI practices and AI literacy obligations, became effective on 2 February 2025.
Given Portugal’s past experience, framing legislation will align with EU trends. The Portuguese legislature has traditionally shown strong concerns regarding privacy and personal data (particularly sensitive data such as health data and biometrics) and the context of labour relationships, which may result in stricter obligations or security measures, further conditioning criteria, and closer surveillance of AI systems used in these fields.
Portugal’s primary authorities in the AI field, the CNPD and the National Communications Authority (ANACOM), have begun to engage more actively in AI-related regulatory activities. On 3 January 2025, ANACOM announced the designation of national supervisory authorities under the AIA. However, as of this date, neither the CNPD nor ANACOM has issued specific guidelines or recommendations concerning the regulation or use of AI systems.
Under the EU Digital Action Plan, the strategy promoted by INCoDe.2030 defined various objectives up to 2030. However, these do not include guidelines on the regulation or use of AI systems, rather focusing on programmes encouraging digitalisation of companies and innovation in the sector.
Being an EU Regulation, the AIA is directly applicable in all member states, including Portugal, following its entry into force on 1 August 2024. As such, significant national deviations are not allowed. However, like other EU Regulations, the AIA requires complementary national implementation measures, such as designating supervisory authorities and establishing penalty regimes, to ensure its effective application within the national legal framework.
In contrast, EU Directives, such as the NIS2 Directive, require formal transposition into national law by the parliament or government. These acts incorporate the directive’s provisions into the Portuguese legal system and assign responsibility for enforcement.
As of 2025, Portugal has not enacted standalone, AI-specific legislation, but it is actively aligning with the AIA’s phased implementation. National authorities, including ANACOM and CNPD, are expected to play key roles in monitoring compliance and enforcement. Portugal traditionally adheres closely to EU regulatory frameworks and often adopts strict positions, particularly on data protection, privacy and the use of AI in sensitive areas like healthcare and labour.
Further national measures may be introduced to address sector-specific concerns, in line with Portugal’s broader AI strategy under AI Portugal 2030.
Portugal has not yet enacted AI-specific legislation or guidelines; hence, there are no immediate inconsistencies or contradictions with current EU legislation and its underlying principles, nor are any expected to arise.
Only applicable in the United States.
Portugal still needs to enact AI-specific legislation and guidelines; full implementation of the AIA is expected. Once this occurs, national framing legislation will be revised to accommodate and align with EU regulations.
Local public bodies have not yet issued additional recommendations or directives in this regard, including non-binding ones.
The AIA has been formally enacted at the EU level and is undergoing phased implementation across all member states, including Portugal. While the AIA is directly applicable, Portugal has been required to adopt national measures, such as designating supervisory authorities, setting enforcement mechanisms, and possibly developing complementary sector-specific rules, to ensure the Regulation functions effectively within its domestic legal system.
The AIA could significantly impact predictive AI. If, based on a case-by-case analysis, these systems are deemed high-risk, they could be subject to stricter oversight by national regulatory authorities, potentially slowing down their development and deployment. Generative AI may be less impacted and escape stricter regulatory requirements unless used in high-risk applications, but it will still be affected by data protection and IP regulations.
Even though Portuguese courts have not yet dealt directly with AI-related matters, AI-related litigation is gaining traction in Portugal as of 2025, driven by a rise in third-party-funded consumer class actions against major tech companies, including Google, Apple, Sony, TikTok and Meta, supported by active consumer associations and a favourable legal framework. Although several cases were decided at first instance in 2024, 2025 is expected to bring clarifying rulings on key procedural issues, such as legal standing, funder independence, data protection mandates and class admissibility.
On a different note, a Portuguese appellate court recently sparked controversy when judges were accused of relying on AI-generated text in their rulings, prompting an official inquiry. The case raised concerns over judicial transparency and the appropriate role of AI in legal decision-making.
Internationally, AI case law is also evolving. In the US, federal courts have seen multiple class actions against generative AI developers. However, overall, courts have been reluctant to impose liability on AI developers, demanding more specific, factual and technical details.
The legal landscape of generative AI is still evolving, and the coming months will be pivotal in shaping the future direction of AI litigation.
Under Article 77 of the AIA, EU member states must prepare and publish a list of the authorities designated to supervise compliance with European Union legislation that safeguards fundamental rights and notify this list to the European Commission, a step that Portugal has already completed. The list includes the following entities:
In addition, ANACOM has been assigned the role of co-ordinating the activities of all other designated national authorities.
Portugal’s AI Portugal 2030 strategy, launched in 2019 under the Portugal INCoDe.2030 initiative, provides a non-binding framework to drive AI adoption. It focuses on research and innovation, public administration modernisation, and sectoral specialisation to position Portugal as a global AI leader.
Additionally, the EU AIA, which came into force on 1 August 2024, is directly applicable in Portugal. While some provisions will be phased in, full compliance is expected by 2 August 2026. Portugal is aligning its national framework to ensure compliance.
Currently, there is no specific public enforcement mechanism under Portuguese law directly related to or affecting AI.
Although it does not amount to a formal enforcement mechanism, the CNPD has played an increasingly active role in overseeing international data transfers, marketing communications and the protection of data subject rights. More recently, its attention has extended to the use of AI in biometric technologies.
In parallel, the CNPD has published several guidelines and best practice recommendations, some of which address AI-related issues and align with the positions of the European Data Protection Board (EDPB).
Certain aspects of the AIA set out infringements that are administrative offences punishable with fines. Fines are determined on a case-by-case basis, taking into account the severity, duration and consequences of the infringement, as well as the size of the provider.
The following fines are set out (Articles 71 and 72 of the AIA):
As of April 2025, Portugal has made solid progress in aligning with the EU’s AIA.
To that end, Portugal has started setting up the necessary governance structures. The Ministry for Youth and Digital Modernisation recently published a list of public bodies that will oversee the protection of fundamental rights in relation to high-risk AI systems. These include regulators and inspectorates from across various sectors.
Meanwhile, the AI Portugal 2030 strategy remains the backbone of the country’s national AI policy. Led by the INCoDe.2030 initiative, it continues to push forward efforts in education, research, innovation and the development of AI tools and services.
That said, Portugal still does not have a dedicated national body focused specifically on setting technical standards for AI. For now, the country is sticking closely to EU-level guidance and relying on European standardisation bodies like CEN, CENELEC and ETSI, which are responsible for developing technical standards that support the AI Act.
Portugal has not enacted guidelines on AI, but implementation of the AIA is expected.
The AIA is directly applicable, and Portugal is usually well aligned with EU directives and guidelines, as issued by the EU’s standard-setting bodies, namely:
Some other jurisdictions have chosen a sectoral approach, focusing on non-binding principles or sandboxes (namely the US and the UK), which is not expected to have a determining impact on the Portuguese or wider EU jurisdictions in the near term.
Other international standards may also be relevant, namely (i) ISO/IEC 42001:2023, published by the International Organization for Standardization, which provides guidelines for implementing AI management systems (AIMS) with the aim of increasing AI compliance; and (ii) the Bletchley Declaration of the AI Safety Summit, a collaborative effort signed by 28 countries underscoring the importance of international co-operation in unlocking AI’s potential benefits while ensuring safety. However, these serve as framing guidelines only.
Also in 2025, the European Commission released official guidelines on prohibited AI practices, clarifying key obligations for member states and operators under the AIA.
Companies doing business in Portugal should embrace present and future guidelines issued by European-level bodies, considering the potential impact of enforcement actions by local NRAs.
Portugal enacted the Strategy for the Digital Transformation of Public Administration 2021–2026, which aims to harness the potential of the vast volume of data accessible to the Public Administration (PA) to improve public services, support better decision-making and enhance transparency.
AI is already being used in various public service platforms, such as ePortugal, which features the Sigma chatbot, helping users find information on the portal. In May 2023, the government introduced the “Virtual Assistant,” developed using Azure OpenAI Service, designed to support citizens’ digital interactions with the state through voice and natural language processing.
However, there are still major challenges to the development and use of AI in the public sector, particularly in the areas of skills and training, responsibility and ethics, public participation, and societal perception and trust. Additionally, PA bodies are subject to GDPR, which limits the use of facial recognition and biometric technologies, except in clearly defined security or public interest scenarios, regulated by other complementary legislation (not specific to AI).
In early 2025, the Portuguese government began finalising its National Artificial Intelligence Agenda, which is expected to be officially presented at the end of Q1 2025. This document, part of the Digital National Strategy (approved 12 December 2024), outlines the country’s strategic approach to AI, aiming to build a robust and innovative AI ecosystem in Portugal.
A public consultation process was launched, including public sessions in Lisbon, Évora and Porto in January 2025, led by the Agency for Administrative Modernisation (AMA). Citizens and organisations can also submit contributions online, ensuring transparency and inclusivity.
There are no national decisions nor currently pending cases in Portuguese courts regarding the use of AI systems by the public administration.
The AIA does not apply to AI systems for exclusive military, defence or national security purposes.
Article 4(2) TEU and the specificities of Union defence policy justify the exclusion of AI systems from military and defence activities. Public international law is a more appropriate legal framework for regulating AI systems in these activities, including the use of lethal force.
The National Republican Guard (GNR) already uses AI applied to geographic information systems, specifically terrain risk models, to analyse the risk of criminal phenomena, enabling better decision-making and proactive balancing of the institution’s resources to combat them.
From the publicly available information, it is also possible to conclude that the Ministry of Defence and the Portuguese Armed Forces are involved in projects that include AI. However, more detailed information is not available.
Generative AI introduces several ethical dilemmas, namely related to misinformation, privacy breaches, IP infringement, bias and discrimination. The potential for creating false content (“fake news”), leveraging personal data without consent, perpetuating biases and discrimination and infringing on intellectual property rights/copyright, reinforces the importance of ethics guidelines for a trustworthy AI, supported by an underlying strong regulatory framework.
With the entry into force of the AIA and its gradual implementation throughout 2025, the EU has introduced legally binding obligations to address many of these concerns, especially transparency, accountability and risk classification of AI systems, including general-purpose models like generative AI.
Addressing technical issues such as bias mitigation, transparency and accountability in AI systems requires robust mechanisms for auditing, evaluating and enhancing model performance. Transparency regarding data sources, training methods and tailored parameters is essential for building trust and accountability. The principles already set forth by the AIA are strong guidelines towards this objective.
Protecting IP rights under AI and its assets, including models and training data used, input and output, depends, in addition to a strong and updated legal framework, on implementing strong and clear IP protection strategies by all interested parties, such as copyright registration and robust licensing agreements with tailor-made contractual provisions. The future T&Cs set by AI tool providers will play a significant role in determining the extent of IP rights and potential infringements.
High-level risks include:
Recent US and European case law in 2024 and early 2025 (notably decisions on the copyrightability of AI-generated works and fair use during training) indicate that courts are becoming less tolerant of broad, unrestricted scraping practices without rights-holder consent.
The current legal framework for personal data protection (GDPR, national implementation laws and DSA) also applies to AI systems environments.
Making generative AI systems GDPR-compliant is one of the major roadblocks. This is particularly challenging regarding information duties and the exercise of rights by data subjects.
The GDPR allows data subjects to request the deletion of their data. For generative AI (LLMs), this may require the deletion of the entire AI model if it is impossible to remove individual data points. However, the practical implementation of this right by AI models, which learn from the data but do not necessarily store it, is a complex issue that is still under heavy debate.
The AI system operator must implement mechanisms to correct inaccurate data. However, the technical implications of assuring, with a high level of confidence, the ability to erase and/or rectify specific data as granted by the GDPR from an AI model without deleting the entire learning set are considerable and still unresolved. The challenges around purpose limitation and data minimisation are no smaller: while consent can serve as a legal ground for processing data input by the user, the same does not apply when the user inputs personal data relating to a third party.
Compliance with all relevant GDPR principles presents significant challenges, as AI systems require large amounts of data and may produce outputs that go beyond the original purpose of data collection. Companies must implement strict data governance and management practices, including transparent data collection, use and storage policies and robust mechanisms for obtaining and managing consent.
The EDPB issued Opinion 28/2024, reinforcing the obligation for AI providers to adopt Privacy by Design measures and ensure that data subject rights, including rectification and erasure, are enforceable even in complex AI contexts. The opinion clarified that while full model retraining may not be technically feasible in all cases, providers must demonstrate proportional safeguards and implement mitigation steps such as flagging, output filtering or data versioning mechanisms.
In parallel, AI tool providers have begun integrating user-side controls for data export and consent revocation, particularly in enterprise deployments of large language models.
On the same note, Article 10 of the AIA requires specific mapping regarding data governance:
Data Protection and Privacy
Protection of personal data on AI technology has the following benefits and risks.
Benefits
Key benefits are (i) the ability to provide personalised services and experiences; and (ii) increased efficiency and reduced costs.
AI systems can analyse users’ behaviour, preferences and past interactions to provide relevant content and recommendations, improving user satisfaction and engagement and leading to better business outcomes.
Risks
Enlisting AI’s help in processing large amounts of personal data does not come without its caveats. If not properly designed and managed, such data processing could be deemed illicitly accessed or misused, leading to security breaches. Moreover, using AI for automated decision-making can lead to biased or unfair decisions.
As a rule, fully automated individual decision-making, including profiling that has a legal or similarly significant effect, is forbidden (Article 22 GDPR, as interpreted by the Article 29 Data Protection Working Party in its Guidelines on Automated Individual Decision-Making and Profiling for the purposes of Regulation 2016/679).
The same principles are applicable to processing machine-generated data without direct human oversight (Article 14(2) of the AIA), which can lead to similar risks.
Human involvement cannot be bypassed by controllers. While automated processing can increase efficiency and reduce costs, it can also lead to errors, discrimination or biases if the AI system is incorrectly designed or monitored. These risks can be mitigated through proper data governance, including transparency and regular performance audits.
Data security
Safekeeping of the processed information is another critical aspect of AI systems. Given the often sensitive nature of the data handled, implementing robust security measures to prevent unauthorised access and data breaches is paramount. This includes practices such as encryption, access controls and regular security testing.
While AI technology has the potential to provide significant benefits, it is essential to carefully manage the associated risks, especially when it comes to personal data. This requires a comprehensive approach prior to implementation, envisioning specific objectives, robust data governance, transparency, IP protection and strong data security measures.
The Portuguese Bar Association addressed AI at its last congress (July 2023) but did not release any guidelines.
However, AI use in legal practice has grown significantly. Top-tier firms in Portugal are investing in tools for contract review, due diligence, legal research and billing automation, often through internal LLMs trained on proprietary databases.
Generative AI is increasingly used for drafting documents, translating, summarising case law and internal legal queries. Tools for extracting and categorising information from legal texts now support faster, more structured access to legal content. AI also plays a role in contract management, identifying key clauses and suggesting edits.
In 2025, a partnership between the Ministry of Justice, Microsoft and Legislation Studio launched a practical AI-powered legal guide for citizens covering processes like marriage, divorce and business incorporation.
Despite the lack of binding national regulation, law firms align their AI practices with the GDPR, consumer protection laws and the Code of Ethics for European Lawyers, emphasising confidentiality, human oversight and accountability. The AIA adds further obligations, namely transparency (Articles 13 and 52), user information (Article 53), and bias mitigation (Articles 15–17), especially for high-risk systems.
Portugal currently has no specific liability regime for AI. General civil and contractual liability rules apply, with liability typically falling on the person (natural or legal) directly responsible for the damage. Under most EU legal systems, this requires proving an act or omission – whether negligent or intentional – linked to the resulting harm.
In the context of AI, however, this framework faces significant challenges. AI systems may operate autonomously, exhibit unpredictable behaviour, or function as “black boxes”, making it difficult to identify a clear causal link between human action and damage. This can hinder injured parties from fulfilling the burden of proof under traditional liability rules.
Current product liability rules – based on Directive 85/374/EEC – apply to producers for harm caused by defective products. While classifying AI systems as “products” remains debatable, a similar approach may be applied based on updated consumer protection principles, such as those in Directive 2011/83/EU.
In the absence of AI-specific liability laws, companies are increasingly adopting contractual frameworks that define obligations and responsibilities across the AI value chain. These include technical mechanisms (logging and traceability), organisational safeguards (monitoring and auditing), and contractual clauses that allocate liability and support accountability in case of bias or system errors.
Moreover, given the risks of autonomous AI behaviour, mandatory insurance requirements for AI providers are also being considered as a way to strengthen user protection and ensure financial accountability.
There are currently no legislative initiatives envisaged in Portugal regarding specific liability for AI. As a member of the EU, liability for AI will involve the transposition of future European directives, namely:
Matters relating to defectiveness, fault assessment and evidence disclosure persist. Technical work, in addition to legislative work, needs to be done to ensure that these regulations are effective in responding to the nuances of generative and predictive AI.
While various regulatory bodies worldwide have acknowledged the importance of collectively addressing algorithmic bias, there is a notable absence of specific legislation or regulations dedicated to this issue. Bias in this context refers to situations where AI systems’ outcomes disproportionately favour or discriminate against specific ideas, groups or individuals, potentially leading to unlawful discrimination. This may seriously affect certain categories of individuals who are more vulnerable to discrimination (regarding sex, sexual orientation, race, religion, political stance, minors, etc).
Within the EU context, the AIA primarily addresses mitigating risks associated with discrimination linked with biased algorithms, even though it does not explicitly mention bias prevention. The AIA includes mandatory requirements for high-risk AI systems concerning risk management, data governance, technical documentation, oversight, conformity assessment and accuracy, all of which play a crucial role in safeguarding against bias and discrimination.
The AIA also emphasises the need for explainability in AI systems, requiring organisations to clarify how data, models or algorithms were used to reach specific outcomes and justify their methods.
Companies using AI must prioritise transparency and explainability, especially when decisions risk discrimination. Detailed records support fairness claims. While the ECHR and GDPR offer frameworks, their sufficiency against future algorithmic bias is uncertain. In the absence of specific laws, aligning the AIA principles with existing EU legislation remains the best approach.
Under the GDPR (Article 9), facial recognition and biometrics are deemed sensitive data; thus, general processing is forbidden and only possible in specific and justifiable circumstances. In Portugal, Law 58/2019 (complementing the GDPR) provides stricter guidelines, determining that biometrics in the labour context (fingerprints, facial recognition) can only be used for purposes of attendance and access control to the employer’s premises.
The CNPD has adopted a conservative approach and maintains a very stringent viewpoint. In Opinion 2021/143 it expressed very restrictive views on the use of video-surveillance images and the personal data resulting therefrom (even within the scope of public safety and crime prevention), and also raised concerns about potential future uses, such as drones, AI, the capture of biometric data and the overall use of cameras in public spaces, all within the context of privacy protection and the restricted use of personal data.
There are exceptions for criminal investigation (fingerprints), but no advanced facial recognition programs for live video surveillance are in use.
Non-compliance with the applicable rules constitutes an administrative offence punishable with fines under the GDPR.
Technology within Automated Decision-Making (ADM) in AI systems involves the use of ML algorithms and other models to make decisions without human intervention (eg, credit scoring, disease diagnosis, personalised advertising). These systems may rely on neural networks, decision trees or natural language processing.
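As a purely hypothetical sketch (the function name and thresholds are invented for illustration and taken from no real system), a decision-tree-style ADM rule of the kind used in credit scoring might look like this:

```python
# Hypothetical illustration of a decision-tree-style ADM rule for credit
# scoring; the thresholds are invented and carry no regulatory meaning.
def credit_decision(annual_income: float, existing_debt: float) -> str:
    """Return a provisional outcome for a loan application."""
    if annual_income <= 0:
        return "refer to human review"
    debt_ratio = existing_debt / annual_income
    if debt_ratio < 0.3:
        return "approve"
    # Borderline and high-risk cases are escalated rather than auto-refused,
    # reflecting the preference for human involvement in decisions with
    # legal or similarly significant effects.
    return "refer to human review"
```

Even a rule this simple can produce decisions with legal or similarly significant effects, which is why such systems attract scrutiny under Article 22 GDPR and the AIA's transparency and oversight requirements.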
There are several enacted regulations, in particular the following.
Non-compliance with the above obligations could result in administrative offences, both under GDPR and AIA (Article 83 and Articles 71 and 72, respectively).
The AIA aims to strengthen the effectiveness of existing rights and remedies by establishing specific requirements and obligations, including transparency (full disclosure towards users of AI systems and their particularities), technical documentation to be made available and disclosed and record-keeping of AI systems (Recital (5a) AIA).
Transparency means that AI systems must be developed and used in a way that allows appropriate traceability and explainability while making humans aware that they communicate or interact with an AI system. Deployers must also be duly informed of the capabilities and limitations of that AI system and affected persons about their rights (Recital (14a) AIA).
All AI systems that are considered high-risk must comply with the provisions of the AIA, namely, Article 13 and Title IV, for AI systems intended to interact with natural persons directly.
Failure to comply with transparency obligations is subject to administrative offences, punishable with fines (see 5.3 Enforcement Actions).
Although AI will change some paradigms, most changes will be adaptations of existing practices (notably those arising from online business) rather than a revolution; the same applies to procurement.
The main existing concerns about procurement of other services/products should simply be reinforced. More specifically:
Companies should take particular care with the contracts concluded, avoiding generic user licences or simple adhesion contracts.
No tools are specifically forbidden in relation to the hiring and termination of employees. To lower the risk of discriminatory decisions, the Portuguese Labour Code (PLC) requires employers to keep a register of recruitment procedures including the following information, broken down by gender:
Termination without cause (either subjective or objective) is forbidden, and the written grounds for termination must be provided.
The employer must notify the Commission for Gender Equality when opposing the renewal of a fixed-term contract if the employee is pregnant, exercising parental rights or acting as an informal carer.
PLC forbids the use of remote surveillance tools to monitor employee performance. Use of electronic surveillance is allowed only where required for:
The CNPD has issued guidelines that prohibit the systematic tracking of an employee’s activity, including the use of software that registers the web pages visited, real-time terminal location, use of peripheral devices (mice and keyboards), capturing desktop images, observing and recording when access to an application starts, controlling the document the employee is working on, and recording the time spent on each task.
Law 58/2019 of 8 August 2019 provides that data collected through remote surveillance can only be used in disciplinary action to the extent that the employee engaged in criminal conduct. Biometric data may only be used to access the company’s premises and control attendance.
Electronic monitoring is subject to prior Works Council (WC) advice.
AI is already being used by multiple businesses, especially those providing consumer services on a large scale, for example platform companies providing car travel and food delivery services.
Portugal is positioning itself as a leading hub for responsible AI through its National AI Strategy, which emphasises sustainability, competitiveness and innovation. Programmes like COMPETE 2020 and COMPETE 2030 have supported the integration of AI into traditional industries, fostering transformative digital solutions.
Portugal’s tech ecosystem continues to grow, with start-ups such as Aplicable AI creating solutions in areas like recruitment, reflecting broader trends in AI-driven business optimisation. These developments show a strong national commitment to ensuring AI is not only widely adopted but also aligned with ethical, social and economic goals, particularly within platform-based digital services.
In any case, the use of these tools, although widespread and permitted, must always comply with the GDPR, general consumer protection, cybersecurity and privacy rules, as well as with the AIA.
The Portuguese financial services sector is undergoing rapid transformation through the adoption of AI systems, driven by Big Data, machine learning and LLMs. These technologies are reshaping firm-client relations and enhancing services such as anti-money laundering (AML/CFT), fraud detection, payment monitoring, credit risk evaluation, robo-advisory and algorithmic trading.
While AI offers significant improvements in efficiency and personalisation, it raises legal and ethical concerns, including cybersecurity risks, data vulnerability, lack of explainability and potential for behavioural manipulation. Currently, Portugal does not have AI-specific financial regulation, relying instead on existing frameworks such as the Portuguese Securities Code, the Legal Framework of Credit Institutions, Law 83/2017, MiFID II, Market Abuse Regulation and GDPR.
In 2025, regulatory developments are accelerating. Banco de Portugal and the CMVM have issued guidance on AI governance, emphasising transparency, human oversight and suitability assessments, particularly in credit scoring and robo-advisory services. With the Digital Operational Resilience Act (DORA) now fully applicable, financial entities must assess and test the resilience of AI systems, especially those involving third-party providers.
Additionally, under the upcoming AIA, use cases such as creditworthiness assessments (CWA) and credit scoring will be treated as high-risk, subject to conformity assessments and enhanced documentation. The Consumer Credit Directive (EU) 2023/2225, pending transposition in Portugal, will prohibit the use of certain sensitive personal data in automated credit evaluations and reinforce GDPR compliance.
There are only a few specific AI systems in this sector. However, AI is already revolutionising healthcare by aiding in-patient treatment, monitoring health data on a large scale and aiding in drug discovery. Its ability to systemise data and improve disease diagnosis early is increasingly recognised by the scientific and clinical communities.
In Portugal, a digital symptom evaluator accessible through the CUF mobile app enables patients to respond to a series of questions in order to receive potential diagnoses for referral, serving as an initial assessment. In early 2024, the National Health Service (NHS) introduced a funding initiative for the integration of AI tools in dermatological diagnoses: through an app, individuals take a picture of their skin condition and forward it to a dermatologist for review, reducing in-person consultations. Also, the National Strategy for the Health Information Ecosystem (ENESIS 2022) aims to propel the digital transformation of Portugal’s healthcare sector and develop Health Information Ecosystem (eSIS) through the activity plans of the SPMS and other entities.
These applications may involve software as a medical device (SaMD) and related technologies like ML algorithms, whose data use and sharing is now subject to regulation under AIA. ML is pivotal in digital healthcare, offering the ability to learn from data and enhance performance over time. Nonetheless, it entails risks of:
High-risk AI systems must be designed and developed to ensure that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately (AIA Article 13).
On another note, Portugal, through its Ministry of Health, announced progress in implementing the European Health Data Space (EHDS). This initiative represents a significant milestone in aligning national infrastructure with EU-wide standards for secure access to and use of health data. Portugal is currently co-ordinating the HealthData@EU pilot, which supports the creation of the technical and governance elements necessary for secure cross-border access to electronic health records, e-prescriptions and health research data.
EU companies developing software/medical devices powered by AI that process the personal data of patients must abide by GDPR and AIA, taking into special consideration that health-related data is sensitive data, subject to stricter restraints.
Autonomous vehicles powered by AI are subject to various regulations and standards that govern their operation, safety and data collection; often intersecting transport and technology law and varying by jurisdiction. Data privacy, security and liability are significant concerns.
Autonomous vehicles collect vast amounts of data, some of which can be personal or sensitive. Protecting this data is crucial to comply with privacy laws like those under the GDPR and to maintain user trust.
Portugal has yet to enact specific liability provisions applicable to autonomous vehicles. However, under general principles, only a fully autonomous vehicle (ie, one “without an on/off button”) would create a genuinely new legal problem, which, according to publicly available data, will not happen soon.
If human command is possible, Article 503(1) of the Civil Code continues to frame liability for damage caused by land vehicles: where the user of such a vehicle can choose between operating it manually and using the autopilot, the user retains effective control of the vehicle, and the existing liability regime applies.
Future transposition of the European Directives on liability in AI systems will ease the proof requirements in such cases. Now, given the legislation in force and the state of development of autonomous vehicles, verification will always be on a case-by-case basis, checking the degree of autonomy of the vehicle and the specific circumstances of the events that caused the damage.
AI, in both neural deep-learning networks and ML solutions, is gaining prominence, allowing for a higher level of production automation. Even where AI systems are not specifically applied to automation (replacing workers), they are already being used as resource-management solutions, making it possible to manage waste, logistics, costs, etc.
One of the major changes in “Industry 4.0” is in respect of so-called collaborative robots, trained with spatial notions and without programming limitations on repeating the same function. These robots allow humans and robots to co-exist in the factory. The AIA (Recital (28)) already mentions that this type of machine “should be able to operate and perform their functions in complex environments safely”.
On this note, Portugal is participating in the establishment of one of Europe’s first AI factories. This initiative, part of the European High-Performance Computing (EuroHPC) project, involves collaboration among multiple countries to develop AI-based computing infrastructures. These AI factories are designed to support the AI innovation ecosystem by providing resources for small and medium-sized enterprises (SMEs), public administration bodies and researchers, facilitating the integration of AI into various industries, including manufacturing.
In Portugal, there are still no regulations governing the use of AI in professional services.
The introduction of AI in workflows comes with an aggravated duty of responsibility to ensure that confidentiality duties and professional obligations are respected. Implementation of AI systems should be well-designed and included in the pre-existing working model for predefined purposes. This dynamic planning prevents future problems, including copyrighted material in deliverables, lack of client consent or other non-compliance with applicable regulatory standards.
AI systems and their software are protected under existing national and EU IP frameworks, including Portugal’s transposition of EU Directive 2009/24/EC and the CDADC. While these frameworks do not specifically regulate AI, their general provisions still apply. International agreements like the Berne Convention and TRIPS also establish minimum copyright standards.
Directive 2019/790/EU (DSM) may regulate the use of training data in LLMs through its text and data mining (TDM) provisions. However, its application is complex, particularly when TDM is performed for commercial, rather than research, purposes, or when right-holders have restricted data use. Article 4(1) of the DSM could even require deleting training data before validation/testing, although this remains debatable.
Outputs of LLMs may fall into three categories: (i) IP infringements due to existing materials, (ii) derivative creations, or (iii) autonomous creations, each assessed individually. Clear and robust T&Cs are crucial to manage input and output rights.
Even when data is lawfully processed under the GDPR and DSA, T&Cs may still affect the legality of outputs. Due to the difficulty of ensuring consent during web scraping, training data should be used in privacy-compliant, controlled environments. AI systems should be equipped to assess T&Cs before scraping. OpenAI’s opt-out tool for websites illustrates one method to mitigate IP infringement risks during training.
Although the positions of most intellectual property offices, as well as classic academic doctrine, maintain that invention is inherently “human” (anthropocentrism), the debate continues as generative AI advances.
The cases brought by Dr Thaler remain the leading decisions on this matter, supporting the view (in the UK, the USA and several European jurisdictions) that AI systems do not fulfil the requirements to be considered inventors or authors for the purposes of IP protection (both patents and copyright). The future will tell whether this position remains unchanged.
See also 8.1 Specific Issues in Generative AI and 8.2 Data Protection and Generative AI.
Trade secrets, such as algorithms, datasets and proprietary AI models, play a crucial role in safeguarding AI innovations. Maintaining secrecy can be challenging, particularly in collaborative research environments or when AI technologies are integrated into shared products or services.
Non-disclosure agreements (NDAs) are common and prevent unauthorised disclosure or use of AI-related confidential information by outlining the obligations of the parties involved in collaborations, research projects or business partnerships, ensuring sensitive information remains confidential.
Licensing agreements are used to control the use of technology and data by third parties, protecting a company’s IP rights under a commercial relationship.
In both cases, confidentiality can be compromised, and enforcement mechanisms are limited.
While contractual agreements offer valuable tools for protecting AI technologies and data, companies must adopt a comprehensive IP strategy, implementing robust contractual measures, balancing secrecy with collaboration and innovation.
The current dominant position relies on the anthropocentric nature of IP protection, which is applicable to artworks.
The emergence of AI-generated works of art has raised questions regarding authorship and related IP protection eligibility, as well as ownership of the copyright for AI-generated works.
In many jurisdictions, the default rule is that the human creator or the employer of the human creator owns the authorship rights to works created by an AI system. However, there is ongoing debate about whether AI itself should be recognised as the author and owner of its creations, particularly in cases where the AI system operates autonomously without direct human involvement in the creative process.
In the USA, according to the US Copyright Office’s January 2025 report, AI-generated works are only eligible for copyright if there is sufficient human control over the expressive elements. Prompts alone are not enough, and ultimate decisions on protection rest with the courts. In addition to authorship rights, other forms of IP protection may apply to works generated by AI. Innovative AI algorithms or processes used to generate artistic or literary works could potentially be granted patents. Similarly, trade marks could protect distinctive logos, symbols or brands associated with creative outputs generated by AI.
Overall, the changing landscape of IP protection for AI-generated works highlights the need for flexible and adaptive legal frameworks that balance innovation, creativity and ownership rights in the digital age. As AI technologies continue to advance, policymakers, legal scholars and stakeholders must collaborate to address the complex issues surrounding the protection and exploitation of AI-generated content in a manner that promotes both artistic expression and technological progress.
Pending IP litigation will undoubtedly shape the industry, and the cases pending against OpenAI are the ones to watch. One of the main points already mentioned is the IP issues raised by learning models (especially the web scraping of publicly available information subject to copyright) and the “transformative” versus “reproductive” nature of the content generated by AI systems. In addition to the rules and regulations that are emerging and are expected to govern this issue in the future, one of the main arguments considered (prominently in US case law) is the “fair use” test.
A key development in this area is the report issued by the US Copyright Office in 2024, which outlines the Office’s current position on copyright protection for AI-generated works, the role of human authorship and how training data usage may implicate copyright law. The report reinforces that copyright protection only applies to works with meaningful human authorship and confirms that use of copyrighted materials for training may not necessarily fall under fair use, depending on the specifics of the case. This report is likely to influence both future litigation and legislative developments.
New trends are expected in case law, and companies are advised to pay close attention to these trends and adapt their business models to the new rulings and regulations.
Portugal’s Autoridade da Concorrência (AdC) is actively addressing emerging antitrust issues related to AI. One of the key concerns is acqui-hires, where companies acquire AI start-ups primarily for their talent, potentially reducing innovation and competition. The AdC is also monitoring the use of AI-powered algorithms that could facilitate price-fixing and collusion, reiterating that such practices are incompatible with competition law. Additionally, it is scrutinising the control of large datasets by dominant firms, ensuring that data-driven market power is not abused to harm competition. To enhance its enforcement capabilities, the AdC has announced the integration of AI tools in its investigations, as outlined in its 2025 competition policy priorities. These initiatives demonstrate Portugal’s commitment to adapting to technological advancements while ensuring that AI development supports fair competition and consumer protection.
Portugal’s cybersecurity framework aligns with EU regulations, including the NIS2 Directive and the AIA, which came into force in 2024. The AIA imposes cybersecurity obligations on high-risk AI systems and general-purpose models like LLMs, recognising their role in malware creation, social engineering and advanced cyber threats. While Portugal has no standalone AI cybersecurity law, it is transposing NIS2 to strengthen defences against AI-driven threats; if an AI system is deployed within critical, essential or important infrastructure, further obligations under the NIS2 Directive will apply. The government is enhancing cybersecurity resilience to address AI-enabled attacks, ensuring compliance with EU standards to mitigate risks from increasingly sophisticated cybercriminal activities.
Portugal follows the EU’s Corporate Sustainability Reporting Directive (CSRD), requiring companies to disclose ESG data using the European Sustainability Reporting Standards (ESRS). While this ensures transparency, ESG reporting can be complex, especially for high-impact companies. AI is legally permitted in this context and can significantly streamline data collection, analysis and reporting.
AI can also enhance public access to ESG information, helping citizens and investors make informed decisions. This transparency is key to driving corporate responsibility. However, AI presents risks: biased algorithms may reinforce inequality, and inaccurate outputs could lead to compliance issues. Moreover, AI’s high energy use raises environmental concerns, potentially at odds with ESG goals.
Portugal, in line with EU policy, is addressing AI’s environmental footprint but has yet to enact specific national legislation. Meanwhile, the country supports innovation while working to ensure AI aligns with sustainability priorities.
Implementing specific best practices for AI in organisations requires addressing several key issues, namely (in addition to the above):
Rua Garrett, 64
1200-204 Lisboa
Portugal
+351 21 093 30 00
+351 21 093 30 01/02
geral@servulo.com
www.servulo.com