Artificial Intelligence 2024

Last Updated May 28, 2024

Romania

Law and Practice

Authors



Lexters is based in Bucharest, Romania, and has an outstanding team of multilingual lawyers who possess the talent, experience and creativity to effectively address the legal and business challenges faced by clients from the technology sector. The firm provides top-notch guidance due to its fluency in the language of technology. It seamlessly integrates corporate/M&A, IP/IT and private equity/VC practices to tailor solutions according to business needs, and readily adapts to changes in society. Lexters acts as a bridge between Central and Eastern Europe and the United States and the rest of the world.

Currently, there are no specific laws dedicated exclusively to regulating artificial intelligence (AI) or machine learning in Romania. However, various laws and regulations may apply to AI and its applications across different sectors.

In terms of data protection, the General Data Protection Regulation (GDPR) applies to the processing of personal data in Romania, encompassing AI and machine learning models that utilise personal data. In addition, Law No 190/2018 addresses the processing of certain categories of personal data, the role of data protection officers, certification bodies, and the applicable sanctions for both public and private entities.

Moreover, the Romanian government is expected to implement and enforce the EU AI Act. This legislation imposes binding rules on transparency and ethics, requiring tech companies to notify individuals when they interact with AI systems, such as chatbots or biometric categorisation and emotion recognition systems.

Along with the AI Act, the EU Commission has proposed the Artificial Intelligence Liability Directive (AILD), which is a regulatory initiative aimed at addressing compensation for damage caused intentionally or negligently by AI systems. This directive seeks to establish common rules on non-contractual civil liability for harm resulting from the use of AI systems. The AILD complements the AI Act, which focuses on preventing harm caused by AI. While the AI Act aims to mitigate risks associated with AI technologies, the AILD is designed to ensure that victims of harm have access to compensation through the application of liability law.

Moreover, member states were required to transpose the NIS Directive into national legislation by 9 May 2018, with the aim of establishing a high common standard of network and information security across EU member states, recognising the growing significance of networks and information systems to European economies.

Romania complied by enacting Law 362/2018, known as the NIS Law, on 9 January 2019. This law designates CERT-RO as the national authority responsible for overseeing network and information systems security, as well as serving as the single point of contact for co-operation with other member states' authorities.

The NIS Directive was replaced by Directive (EU) 2022/2555, known as NIS2, which came into force in January 2023. EU member states have 21 months to transpose NIS2 into their national legislative frameworks, with the deadline falling in October 2024. NIS2 aims to address the deficiencies of the original legislation and adapt to the changing digital landscape.

Another potential change worth monitoring in the coming period is the development of the Cyber Resilience Act, approved by the European Parliament on 12 March 2024. This Act introduces cybersecurity assessments and requirements for digital products, along with provisions for automatic security updates and mandatory reporting of vulnerabilities or incidents to ENISA by member states. ENISA will further evaluate systemic risks and share information with other member states as needed. The next stage involves formal adoption by the Council for the Act to be enacted into law.

Artificial intelligence has the potential to transform industries and boost productivity in Romania. Researchers in Romania are delving into the possibilities of AI across sectors like manufacturing, healthcare, finance, agriculture and transportation. The integration of AI and machine learning technologies brings numerous advantages to both businesses and consumers, including heightened efficiency, streamlined processes and enhanced decision-making capabilities.

Nevertheless, it is worth highlighting that Romania is still in the nascent stages of AI adoption, and there is a pressing need for a clearer assessment of AI's impact on business efficiency. Despite the immense potential, many businesses may not fully grasp how AI can elevate their operations. However, this trend is changing fast.

Moreover, Romania is experiencing a surge in the number of start-ups venturing into AI exploration across diverse sectors. These start-ups are actively trialling and deploying AI solutions to tackle challenges and streamline operations in industries such as healthcare, finance, manufacturing, agriculture and transportation. As the AI landscape continues to evolve, these start-ups are pivotal in propelling innovation and driving the uptake of AI technologies throughout Romania.

The Romanian government is beginning to show interest in AI and is at the early stages of exploring its potential. Initiatives and funding programmes are being considered to encourage collaboration between academia, industry and the government sector. These efforts aim to lay the groundwork for talent development, knowledge exchange and the eventual adoption of AI technologies across various sectors.

To kickstart this journey, the Minister of Research, Innovation and Digitalisation in Romania has introduced the Romanian Committee for Artificial Intelligence. This initiative is designed to facilitate the development of new entities relevant to the AI field, such as the Scientific and Ethical Council in Artificial Intelligence. A Coalition for AI has also been proposed as a consultative platform, bringing together representatives from the private sector to collaborate on shared perspectives and present unified opinions to policymakers. Through these initiatives, Romania aims to shape its AI landscape and contribute to the development of public policies regarding AI adoption in the country.

Romania has not implemented any specific legislation dedicated solely to regulating AI. Instead, AI-related matters are governed by existing laws and regulations spanning various sectors, including privacy and data protection.

As a member state of the European Union (EU), Romania typically aligns its regulatory framework with EU directives and regulations, to ensure uniformity and facilitate seamless cross-border activities. By embracing the EU AI Act, Romania stands to gain from a unified approach to AI regulation across the EU, fostering interoperability and easing the movement of AI technologies and services within the single market.

Romania has yet to enact AI-specific legislation at the national level but, as part of the EU, it is expected to align with the EU's regulatory framework, particularly the AI Act. The objectives of the AI Act include:

  • mitigating AI-related risks;
  • ensuring transparency and accountability;
  • promoting ethical AI development; and
  • fostering innovation and competitiveness in the European AI landscape.

The legislation is based on the principles of trustworthiness, safety and transparency in AI systems. It includes provisions for high-risk AI systems, which are subject to clear obligations, such as:

  • mandatory fundamental rights impact assessments;
  • model evaluations;
  • systemic risk assessments;
  • adversarial testing; and
  • cybersecurity measures.

On the domestic front, attention is directed towards the impending draft law PL-X No 471/2023, commonly dubbed the “Deepfake Regulation”. This legislation targets the responsible use of technology, including AI, to combat the deepfake phenomenon. It seeks to regulate the dissemination of visual and/or audio content generated or manipulated using technology within the context of deepfakes, with the overarching goal of combating misinformation and preserving the integrity of transmitted messages.

Although Romania has yet to enact specific legislation pertaining to AI, it is not uncommon for governmental bodies, research institutions or industry associations to issue non-binding guidelines or recommendations addressing the challenges and opportunities associated with AI technology.

Despite the absence of specific AI legislation in Romania, the country has initiated steps to embrace AI technology and its potential advantages. Notably, Romania has unveiled an AI-driven robot named ION, aimed at augmenting the government's awareness of public concerns. ION gathers data through automated scans of social media and public messages via an online platform. Subsequently, this data is transformed into reports for government officials, with the aspiration of the AI proposing policy recommendations based on public feedback.

The Romanian government has acknowledged the significance of AI and has taken measures to bolster its advancement. One notable initiative is the National Strategy for Artificial Intelligence, which prioritises the advancement of research and innovation, the cultivation of public-private partnerships and the promotion of transparency and accountability in AI systems. Developed under the auspices of the Authority for the Digitalisation of Romania, the National Strategy for Artificial Intelligence serves as a strategic blueprint for the adoption and utilisation of cutting-edge technologies in public administration. It aims to identify solutions for enhancing operational efficiency and effectiveness through streamlined processes.

As an EU member state, Romania is expected to harmonise its national legislation and policies with AI-specific directives, regulations and rules issued by EU authorities. The EU has recently adopted the AI Act, which delineates regulations for large-scale, influential AI models and high-risk AI systems. Romania is obliged to implement this Act through domestic laws.

Furthermore, Romania has adhered to the guidelines outlined in the EU Commission's White Paper on AI, and has issued 13 recommendations concerning AI. These recommendations include fostering meaningful consultations with industry stakeholders to establish a framework for the responsible utilisation of AI and supporting the agricultural sector in integrating AI-driven solutions.

Romania is anticipated to incorporate Chapters I (General provisions) and II (Prohibited artificial intelligence practices) of the EU AI Act into its legislation by September 2024. A pivotal feature of the AI Act is its risk-based approach, categorising AI systems into four risk levels:

  • unacceptable;
  • high;
  • limited; and
  • minimal.

Unacceptable AI systems will face prohibition, while high-risk AI systems will necessitate stringent obligations, including mandatory Fundamental Rights Impact Assessments, data governance requirements and transparency measures.

Nevertheless, Romania's AI-specific jurisdictional law may diverge from the proposed EU regulations in certain aspects. Notably, Romania lacks a legal definition of AI, and its legislation does not attribute legal personality to AI. Moreover, there is no dedicated legislation concerning liability for AI-induced damage, and the Copyright Law solely extends legal protection to works deemed original, implying human authorship.

To enact the AI Act, Romania must formulate internal laws that align with EU directives and regulations on AI. This endeavour may entail amending existing laws or enacting new ones to address the unique challenges posed by AI technology. Furthermore, Romania will need to establish regulatory bodies and mechanisms to oversee and enforce compliance with the AI Act and its associated internal laws. European guidance clarifying the designation of the national co-ordinating authority in the field of AI is expected at a later stage.

This is not relevant in this jurisdiction.

Privacy concerns represent a significant hurdle to AI adoption in Romania. With AI systems processing vast quantities of data, there is a risk of sensitive personal information being mishandled or accessed without consent. Romania's primary data protection legislation is the GDPR, which is supplemented at national level by Law No 190/2018. This law addresses various aspects of personal data processing, including the role of data protection officers, certification bodies and applicable sanctions for both public and private entities.

The National Supervisory Authority for Personal Data Processing (Autoritatea Națională de Supraveghere a Prelucrării Datelor cu Caracter Personal) serves as the competent authority for GDPR enforcement in Romania. This body is tasked with overseeing compliance with data protection laws, investigating complaints, imposing sanctions for violations, and providing guidance on data protection matters to individuals and organisations.

It is worth noting that AI systems that do not process personal data or that process data outside the EU may fall under the jurisdiction of the AI Act rather than the GDPR, depending on their scope of processing and built-in purpose. The AI Act applies to providers, users and other stakeholders involved in the AI value chain within the EU market. Conversely, the GDPR applies to controllers and processors handling personal data in the context of EU-related activities or offering goods/services to EU data subjects.

Regarding information and content laws, the European legislature has intervened by introducing exceptions and limitations to copyright laws, which are particularly relevant to generative AI systems. These exceptions, such as text and data mining (TDM) exceptions under the Copyright Directive 2019/790/EU, facilitate AI systems' access to data for analytical purposes. TDM involves automated techniques for analysing digital text and data to generate information, allowing AI systems to access large datasets crucial for generating new content.

In terms of regulatory developments, public attention is drawn to the pending draft law PL-X No 471/2023, which is commonly referred to as the “Deepfake Regulation” and aims to address the responsible use of technology, including AI, in combating the deepfake issue. This draft legislation is set to undergo voting in the Chamber of Deputies shortly after its approval in the Senate. It seeks to regulate the dissemination of visual and/or audio content generated or altered using technology in the context of deepfakes, with the objective of preventing misinformation and preserving the authenticity of transmitted messages. Proposed penalties for non-compliance range from EUR2,000 to EUR40,000.

At the EU level, significant attention is directed towards the furtherance of guidelines and other regulatory tools of the AI Act, which applies to providers, users and other stakeholders throughout the AI value chain involved in the placement or use of AI systems within the EU market, irrespective of their location. Businesses operating under Romanian laws or within Romania will fall under the scope of the AI Act if they engage in the development, deployment or utilisation of AI systems within the EU, regardless of their jurisdictional base.

Additional bills are expected to be crafted in the future, aiming to facilitate the integration of the EU AI Act within Romania's legal framework and to address domestic concerns and risks associated with AI technology and its use in various economic sectors.

A direct outcome of implementing the EU AI Act into national law will be the establishment of a dedicated national authority, tentatively named the “AI Regulatory Authority”, which is expected to materialise later this year. This authority will be tasked with overseeing the market, participating in regulatory testing environments and accrediting notified bodies.

In terms of domestic legislation, updates to existing laws are also anticipated in the near future. These updates may include amendments to facilitate the testing of new technologies through “experimentation clauses”, enhancements to the occupational registry to encompass AI-related job profiles, and clarification of technology transfer protocols to delineate contract types, terms and conditions of transfer. The IP-related legal framework is expected to undergo changes in this respect.

As far as is known, there have been no judicial decisions in Romania directly related to AI, owing to the lack of local regulation of the subject matter. However, AI may have indirectly influenced the subject matter of a dispute, or a court may have avoided expressly mentioning the involvement of AI in order to sidestep discussions on the lack of a legal framework.

Reviewing the case law of international courts, the ECJ's decision in the SCHUFA case (C-634/21) is relevant: it held that, when credit reference agencies use algorithms to generate credit scores and creditors significantly rely on those scores in their decision-making, the agencies are engaged in individual automated decision-making. They must therefore ensure compliance with the GDPR, particularly Article 22 thereof, as well as with the provisions on individual rights, such as the right to human intervention, the right to challenge decisions and the right to transparency in automated decision-making processes. Even though this case does not refer directly to AI, the principles promoted therein – such as transparency and the protection of individual rights in automated decision-making – may indirectly influence AI technologies and other emerging technologies.

In terms of generative AI and IP rights, a European-wide case before the European Patent Office (EPO) is worth highlighting, as it rejected patent applications naming AI “DABUS” as an inventor, stating that only natural persons can be inventors under the European Patent Convention (EPC). The EPO upheld this decision, highlighting the EPC requirement for inventors who are natural persons and rejecting arguments for AI-generated inventions to be treated differently. The appeal was also dismissed, upholding the rejection on the grounds of non-compliance with the EPC requirements, highlighting the ongoing debate on the role of AI in the patent field.

To date, AI has been referred to only through the general definitions commonly used in society. In the case outlined in 4.1 Judicial Decisions, the notion of AI was addressed only briefly, the sole mention being that AI represents a “machine”.

Concerning the distinction between generative and predictive AI, reliance on general definitions of AI causes confusion, as the two types of models have different features and functionalities. However, once the EU AI Act enters into force, the legal definition of AI is expected to become unified, and courts' interpretation of AI should be based on the definitions and provisions of that Act.

The role of regulating AI in Romania has fallen mainly to the Ministry of Research, Innovation and Digitalisation and the Authority for the Digitalisation of Romania (ADR). In the absence of direct AI regulation in Romania, their mandate is to establish and implement the regulatory and operational framework for AI. Both have issued the National Strategy for Artificial Intelligence for 2024–2027, as well as the methodological rules on the application of this strategy.

The Scientific and Ethical Council on Artificial Intelligence, founded within the Ministry of Research, Innovation and Digitalisation, is made up of reputable Romanian specialists, including Romanians abroad, who will offer their expertise for AI policy development in Romania.

There are also collaborations among ministries such as the Ministry of Foreign Affairs, the Ministry of Finance, the Ministry of Education and the Ministry of Internal Affairs. However, the regulatory framework and the agencies that will be responsible for regulating AI once the EU AI Act comes into force depend on the Ministry of Research, Innovation and Digitalisation.

No local regulation yet contains a specific definition of AI, but the National Strategy for Artificial Intelligence for 2024–2027 states that AI “represents a set of systems that manifest intelligent behaviours and take actions with a certain degree of autonomy”. This definition, taken from the European Commission's 2018 Communication on AI, is general in character, making no distinction between generative and predictive AI, which could affect businesses operating in the field or using such systems in terms of operating or regulatory conditions.

On the other hand, the EU AI Act aims to provide a comprehensive definition of the concept, distinguishing between AI models, which will ensure a uniform interpretation of this definition among member states. According to the regulation, an “artificial intelligence system” is understood to be “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

Similarly, a general definition of AI can be found in the draft law on Artificial Intelligence (see 5.4 Enforcement Actions), which is currently under debate in the Romanian Senate and defines AI as “an advanced way of processing in the virtual environment a large amount of data, meta-data and information by applying predefined algorithms, with the purpose of performing in an optimal time frame some tasks with anticipatory character”. 

At an international level, AI is defined by several institutions, without distinguishing between AI technology models. For example, the Draft Recommendation on the Ethics of Artificial Intelligence, developed in 2021 by an ad hoc expert group established by the United Nations Educational, Scientific and Cultural Organization (“UNESCO Draft Recommendation”), describes AI systems as “technological systems” or “information processing technologies incorporating models and algorithms that have the ability to process information in a way that resembles intelligent behaviour and that includes, usually, aspects of reasoning, learning, perception, prediction, planning or control”, but also mentions that they do not have the ambition to provide one single definition of AI, since such a definition would need to change over time, in accordance with technological developments.

Also at this level, the United Nations Regional Information Centre states that there is no universal definition of AI, which is generally considered to be “a discipline of computer science that is aimed at developing machines and systems that can carry out tasks considered to require human intelligence”, while the World Bank defines the phenomenon as “the ability of the software systems to carry out tasks that usually require human intelligence: vision, speech, language, knowledge, and search”.

At both European and national level, the authorities and agencies dealing with the process of implementing and regulating AI are mainly concerned with avoiding damage to public interest and fundamental rights. In this regard, a good example to mention would be the National Strategy for Artificial Intelligence, which aims to ensure that the development of AI technologies and their implementation follow principles such as:

  • respect for human rights and democratic values;
  • keeping AI under the control of human intelligence, with humans as the final actors in decision-making;
  • respect for diversity and equality among users, including gender equality, in order to give anyone access to AI products and services;
  • security and safety with regard to the services offered and the data processed in case of risks or threats of cyber-attacks; and
  • transparency and trust in the operation of AI services.

In this regard, the following agencies are expected to intervene, depending on their specific mandates, where AI technology is used in their respective areas:

  • the National Supervisory Authority for Personal Data Processing for personal data processing;
  • the ADR for cybersecurity issues, including the application of the NIS2 Directive, as applicable, and for AI in the seven economic sectors identified as being of strategic importance;
  • the Financial Supervisory Authority (ASF) for financial services and AI-enhanced financial products;
  • the National Agency for Consumer Protection for AI-enhanced products and services, covering their marketing and sales, including issues related to the Digital Services Act's implementation into the Romanian market;
  • the National Commission for Anti-Discrimination for issues related to the application of AI in anti-discrimination policies and the mitigation of algorithmically generated biases and discrimination; and
  • the Romanian Competition Council if the use of AI brings about anti-competitive effects in the Romanian market, and for any actions that could amount to unfair competition practices (including breaching trade marks or franchising agreements).

The related benefits and objectives are to promote measures for economic growth, social welfare and democratic values, stability and national security through responsible and ethical approaches, and by contributing to global norms and standards. The framework also promotes measures designed to encourage the development, use and uptake of AI in the internal market while ensuring a high level of protection of public interests, such as health and safety, and the protection of fundamental rights, as well as democracy, the rule of law and the environment, as recognised and protected under EU law.

Due to the lack of provisions related to AI in Romania, no enforcement measures or fines have yet been applied. However, the following regulatory developments should be monitored over the coming months.

Deepfake Regulation

Draft law PL-X No 471/2023 intends to deal with the responsible use of technology (including AI) in the context of the deepfake issue. The draft will soon be voted on in the Chamber of Deputies, having already been approved in the Senate. The proposed fines for non-compliance with its provisions are set to be between EUR2,000 and EUR40,000.

Artificial Intelligence Law

Submitted to the Senate of Romania for debate and approval as the first decision-making chamber, draft law No BP209/18.03.2024 aims to create a legal framework at a national level for the implementation, use, development and protection of AI in Romania, in the context of the emergence of new technologies and the deployment of enhanced cyberspace security measures at national and European level. Once the law is adopted and enters into force, it is expected to align with and complement the provisions of the EU AI Act in order to be as effective as possible at a national level. For now, this is just a parliamentary initiative to be debated by the Senate, and no information is yet available on whether the law has the support required to advance to the Chamber of Deputies (the second chamber, which has decisional power for this type of law).

EU AI Act

The EU Regulation on AI provides for penalties of up to EUR35 million or up to 7% of worldwide annual turnover, whichever is higher, for sanctioned entities.

The impact of this regulation will also be seen at the national level, where the implementing laws will adopt and implement the EU requirements.

The main standardisation body in Romania is the Romanian National Standardisation Organisation (ASRO), which on 29 February 2024 announced the establishment of a national technical committee for standardisation in the field of artificial intelligence: ASRO/CT 401. This committee will be chaired by the representative of the ADR, an authority with a predominant role in AI implementation and regulation. The objective of this standard-setting body is to promote and facilitate the adoption of standards in the field of AI, thus contributing to the development of a solid technological infrastructure and to increased competitiveness in the current context of digital transformation. The committee is expected to issue AI-related standards in the coming months.

International standardisation bodies generate an impact on the activities carried out in Romania in the field of AI. Thus, the AI committees of the most important standardisation bodies, such as ISO (International Organisation for Standardisation) and CEN (The European Committee for Standardisation), have been transposed in Romania through a mirror technical committee, established on 29 February 2024, as announced by ASRO (see 6.1 National Standard-Setting Bodies). These international bodies do not interfere with jurisdictional law, and the standardisation rules adopted at the national level are expected to be inspired by existing international standards.

EU standards related to cybersecurity and AI are also expected to be further propagated from the EU level to the Romanian level, as a standard setting across the EU market, in line with the EU AI Act.

Romania has taken modest steps towards testing the implementation of various AI solutions within its public administration.

At the European level, Romania has progressed in the implementation of AI through its National Recovery and Resilience Programme (NRRP), as part of its broader efforts in digital transformation.

One notable initiative is ION, the government's inaugural AI Adviser, which facilitates more efficient representation of the population at the administrative level by receiving and relaying their messages, wishes and concerns.

The following uses of AI are anticipated in the near future.

  • The Ministry of Foreign Affairs has engaged an IT service to develop an integrated platform employing advanced AI algorithms. This platform will analyse and verify online content, determining its origin to combat misinformation in the online sphere. This aims to support clear decision-making to counter misinformation, particularly within the ministry's purview of foreign affairs and consular services for Romanian citizens.
  • The Ministry of Finance's initiative involves implementing an AI mechanism in customs controls, which aims to enhance verification rates for both document authentication and physical inspections at the borders. Full implementation of the solution is targeted for the summer of 2024.

Regarding the use of facial recognition and biometrics, privacy concerns are paramount. Compliance with the established principles of human rights is essential, both domestically and within the framework of European regulations. As the EU AI Act comes into effect, the utilisation of these features will be strictly regulated, adhering to the provisions outlined in the Act.

The process of setting up the legal framework for AI and its implementation is at an early stage. The lack of regulation in this area, and limited attempts to integrate AI into government activities, have so far not generated disputes. However, with the implementation of new EU regulations and the national AI strategy, the possibility of disputes being resolved by judicial decisions cannot be excluded.

As a general point, AI can significantly improve defence capabilities through:

  • AI-based surveillance systems that can cover large areas for analysis and surveillance purposes, using drones and satellite imagery, identifying potential threats;
  • AI algorithms that simulate numerous scenarios to aid strategic planning, providing military strategists with data-driven information to make informed decisions;
  • AI algorithms that detect and counter cyber threats in real time, strengthening defences against hacking attempts, data breaches and other cyber-attacks; and
  • automatic detection mechanisms of illegitimate traffic based on mechanisms provided by AI.

At the national level, AI is set to become one of the main national security objectives. Although no use of AI at this level has been made public so far, programmes such as the National AI Strategy and the NRRP aim to accelerate the adoption of this type of technology.

The increasing use of generative AI presents both considerable potential and significant risks, including the following.

  • The development of deepfake technology, where AI-generated images, audio or video can deceive or mislead viewers and enable criminal activity, is prompting legislative reactions, including in Romania, through the “Deepfake Regulation” (see 5.4 Enforcement Actions).
  • The lack of interpretability of these outputs, often referred to as the “black box” problem, makes it difficult to understand how results are generated, and therefore difficult to detect errors and mitigate biases.
  • Ensuring data privacy and security is important, because generative AI models can learn and replicate user input, including personal data or confidential information, increasing the risk of exposing sensitive information. As the technology is used, the volume of data collected and processed grows, further increasing the likelihood that such data will be exposed.
  • As described in 8.2 IP and Generative AI, copyright and IP rights in the results generated by AI models raise questions of authorship and ownership.

An ongoing issue regarding generative AI concerns the ownership of property rights over the creations made by the AI models. No mechanisms have been established to distinguish and determine who owns IP rights over the outputs of the AI models. In this respect, it can be noted that the terms and conditions of the providers of AI tools have been established in accordance with the corresponding regulations in force in each state. As was mentioned in 4.1 Judicial Decisions, the EPO ruled that only natural persons can be inventors under the EPC. This decision underscores the ongoing debate about AI's role in the patent field.

The conditions to be fulfilled by the owner are established in the national IP laws (including copyright), and the rights are granted to natural or, in some cases, legal persons. Therefore, the product resulting from a process carried out by a generative AI model should only be owned by the programmer of the model or, where applicable, by the user of the service, as the AI does not qualify as a legal person and therefore cannot hold copyright or other intellectual property rights.

However, it is important to follow the ongoing debate among copyright stakeholders in the foreseeable future to ascertain who will be granted the rights to utilise the work and to observe whether specific regulations will emerge in the future to alter the current landscape of intellectual property ownership.

As an EU member state, Romania abides by the GDPR, which directly governs personal data processing, including its utilisation in AI models.

The 2020 European Parliament study on the impact of the GDPR on AI acknowledges that AI deployment can comply with the GDPR but notes that the regulation lacks clear guidance for controllers. It calls for expanded and concrete GDPR provisions to address AI's evolving landscape, stressing the need for ongoing adaptation to technological advancements and emerging challenges.

However, several GDPR rights hold particular relevance to AI, including the following.

  • The right to rectification – individuals may request the correction of inaccuracies in their personal data used for AI training.
  • The right to erasure – individuals can request the removal of their data from training sets. Unless legally mandated otherwise, such requests should be honoured, ensuring respect for privacy while maintaining AI efficacy.
  • The right to data portability – individuals have the right to receive their data in a portable format, which is especially relevant when consent or a contract forms the basis of data processing for AI. However, preprocessing may affect data portability, requiring careful consideration.
  • The right to be informed – transparency is vital in AI data usage. Individuals should be informed if their data is used for AI training, even if direct notification is challenging due to anonymisation. Providing accessible information about data usage and objection procedures remains essential.

Furthermore, adherence to the GDPR's principles of data minimisation and purpose limitation should guide AI development. Collecting only data necessary for AI training and restricting its processing to legitimate, explicit purposes ensures alignment with privacy rights while fostering responsible innovation.

Litigation Proceedings

Pre-trial case management involves:

  • a case management system;
  • electronic communications – accessible digital platforms for lawyers/clients;
  • automated monitoring of procedures;
  • an automated system for monitoring procedural delays;
  • an automated system for completing procedural formalities;
  • automated decisions regarding case progression;
  • queue management; and
  • the automatic sorting of appeals.

During proceedings, AI is involved in:

  • guilty plea agreements, including databases of prosecutors;
  • the use of videoconferencing;
  • automatic transcription/translation;
  • the automatic presentation of case documents on screens during hearings;
  • case management (in complex case situations); and
  • the use of emotional AI (emotion detection, etc).

Post-sentencing, AI is used in:

  • the case management system;
  • legal research and analysis/autonomous research;
  • written assistance and the drafting of decisions;
  • decision-making systems;
  • intelligent assistance systems (pattern recognition, data analysis); and
  • risk scoring/likelihood of recidivism/conditions for parole.

Consulting Functions

  • Contract drafting and review: AI aids in the creation and assessment of initial contract drafts, and in the scrutiny of contractual terms. Furthermore, AI simplifies the management of contracts, ensuring adherence to legal requirements while minimising errors.
  • Due diligence automation: AI automates due diligence tasks by analysing extensive datasets, pinpointing potential risks and extracting pertinent information.
  • Regulatory compliance surveillance: AI monitors compliance with legal frameworks, tracking legislative changes and flagging potential compliance issues.
  • IP oversight: AI conducts patent and trade mark searches, and analyses IP.

Although the regulatory bodies overseeing legal practice have not promulgated specific rules or regulations regarding AI, a noteworthy resource is the “Guide on the use of Artificial Intelligence-based tools by lawyers and law firms in the EU 2022” by the Council of Bars and Law Societies of Europe (project co-funded by the Justice Programme of the European Union).

Regarding general ethical considerations, lawyers must understand AI limitations, supervise its use, protect client confidentiality, ensure transparency and inform clients about associated costs to prevent unjust billing practices. Lawyers should use AI systems for drafting with the utmost responsibility, as their clients and the Bars expect them to ensure that the legal analysis is fully scrutinised by a licensed lawyer – otherwise, there is no certainty that such generated texts represent the best interests of the client.

Legal services fees and billing are also expected to undergo transformation: output/deliverables-based remuneration and/or fixed packages will become the norm for consulting functions, rather than hourly billing. As more and more document formats become readily available, lawyers are expected to bill less for the documents themselves and more for their customisation to clients' needs.

Pending the approval of the Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (the “Draft AI Liability Directive”), Romania lacks specific regulations addressing AI liability in cases of injury. Consequently, Romania relies on general principles of civil and criminal law, under which liability can be established on a fault-based or strict basis. Fault-based liability entails demonstrating damage, the fault of the responsible party (a supply chain participant – the provider, the producer, the user, etc) and causality between the fault and the harm. Strict liability allows victims to seek compensation regardless of fault in expressly established cases.

Furthermore, under Romanian law, strict liability extends to damage caused by objects under one's control, irrespective of fault. According to Article 1376 of the Romanian Civil Code (NCC), individuals are obliged to compensate for damage caused by things under their legal custody. These “things” are defined by Article 535 of the NCC as immovable or movable property subject to a patrimonial right. Applying this legal framework to AI liability, AI systems could be construed as “things” under someone's control, akin to traditional objects.

Liability hinges on meeting specific conditions: the existence of damage caused by the thing and its legal custody by the responsible party. The latter involves effective authority over the thing, including direction, control and supervision. Just as one is liable for damage caused by a physical object under their custody, liability could extend to AI systems under an individual's or entity's control. This control may encompass various stages of the AI lifecycle, such as development, deployment or operation. Therefore, aligning Romanian law on strict liability with AI liability suggests that similar principles of accountability and redress can be applied to address harm arising from AI systems.

AI algorithms have faced legal challenges in various contexts, including accusations of perpetuating bias in hiring processes, providing inaccurate predictions in financial modelling, and misdiagnosing patients in medical settings. Instances of discrimination have arisen from AI-powered systems in recruitment, credit scoring and insurance underwriting, leading to claims of unfair treatment and harm to individuals or groups. Moreover, faulty AI predictions have resulted in financial losses for investors and businesses, prompting lawsuits against the developers and users of these algorithms. Both the Romanian and European legal systems are actively developing their legal frameworks to address these emerging issues.

Romania, like other EU member states, will transpose the Draft AI Liability Directive (now at proposal stage) into its national legislation once the proposal is approved. There are currently no specific regulations addressing this matter within the country's legal framework.

It is important to mention that the Draft AI Liability Directive introduces two procedural mechanisms.

  • Firstly, it grants injured parties the right to request evidence disclosure or preservation from relevant parties, streamlining access to crucial information. Non-compliance triggers a presumption against the respondent, simplifying procedures for claimants and encouraging compliance.
  • Secondly, it facilitates proving causality by introducing rebuttable presumptions linking fault to AI system outputs.

Even so, the draft AI Liability Directive grants member states some discretion in interpretation. It establishes EU-wide rules for presuming causality but does not standardise burden of proof or required certainty levels, leaving such matters to national laws. Following a minimum harmonisation approach, claimants can leverage more favourable national rules, such as shifts in the burden of proof or no-fault liability, particularly in cases involving AI-caused damage.

In the context of AI, algorithmic bias refers to the tendency of a system to generate biased or unfair outcomes due to prejudices inherent in the input data or the design of the algorithm itself. Such biases can lead to discrimination based on factors such as race, gender, age or other demographic characteristics.

According to Romanian legislation, it is crucial that AI systems are designed and implemented in accordance with the principles of non-discrimination and equality, as stipulated in Law No 202/2002 on equality of opportunity between women and men and in Government Ordinance No 137/2000 on the prevention and punishment of all forms of discrimination (approved by Law No 48/2002).

For instance, in employee recruitment, any AI system used must adhere to labour laws, including provisions regarding non-discrimination. In Romania, the Labour Code ensures equal treatment for all employees and job applicants. Thus, AI systems used in the recruitment process must be designed so as not to perpetuate biases based on gender, ethnicity or other characteristics protected by law. In addition, Romanian legislation on the equality of treatment and non-discrimination establishes general principles that can be applied in the context of algorithmic bias. This requires AI systems to be transparent and regularly evaluated to detect and correct any bias tendencies.

Current industry efforts to address bias in AI systems draw upon international frameworks such as UNESCO's 2021 Recommendation on the Ethics of AI and the OECD's 2019 AI Principles. These guidelines emphasise the integration of ethics throughout the AI life cycle and advocate for responsible AI practices, including fair treatment and discrimination prevention.

Legal frameworks like the European Convention on Human Rights (ECHR) and EU directives also play a crucial role in combating bias in AI. While the ECHR prohibits unjustified, indirect discrimination by government entities, EU regulations extend similar coverage to private actors, banning discrimination based on sex, race or ethnicity.

Lastly, the AI Act emphasises the need for companies to address bias risks within their AI systems. It mandates a thorough process for companies to identify their position regarding bias, categorising potential risks into four groups (minimal risk, limited risk, high risk and unacceptable risk) and allowing them to develop their own AI ethics frameworks. This structured approach enables companies to strategically prioritise and remediate bias concerns, recognising that not all AI applications entail the same level of risk. An industry-based approach towards light self-regulation is also expected, with specific industries likely to draft guidelines and codes of conduct that will be further developed and self-enforced as AI evolves.

European data protection authorities are directing their attention towards AI businesses to ensure adherence to data protection laws. There has been a noticeable shift in focus towards AI and machine learning, particularly regarding the use of personal data for AI training. Within this context, it becomes imperative to address key considerations pertinent to AI and data protection, such as:

  • ensuring compliance with relevant data protection legislation throughout the collection, processing and storage phases of personal data;
  • implementing robust security measures to prevent unauthorised access or misuse of personal data;
  • regularly monitoring and auditing data usage to verify compliance with consent and permissions, reduce data processing and define retention periods;
  • providing transparent and comprehensive information to individuals concerning the utilisation of their personal data by AI systems, including their entitlement to access, rectify, erase or restrict data processing;
  • overseeing the automated decision processes of personal data;
  • establishing efficient procedures for addressing data access requests and complaints from individuals;
  • conducting Data Protection Impact Assessments (DPIAs); and
  • developing comprehensive governance frameworks to guarantee ethical and responsible AI implementation, encompassing routine evaluations of potential risks and societal impacts.

There is a general prohibition against subjecting individuals to fully automated decisions that significantly impact them, unless certain exceptions are met. These exceptions include scenarios where automated decision-making is necessary for contractual obligations, authorised by law with appropriate safeguards, or based on explicit consent from the data subject. Notably, while the GDPR grants individuals the right to be informed about the existence of automated decision-making and its logic, it does not explicitly confer a right to object to such processes.

The AI Act prohibits specific AI applications that could negatively impact fundamental rights. This encompasses the banning of biometric categorisation systems reliant on sensitive characteristics, as well as the indiscriminate collection of facial images from online platforms or CCTV footage for the creation of facial recognition databases. Furthermore, the regulations forbid the implementation of the following:

  • emotion recognition in both workplace and educational environments;
  • social scoring mechanisms;
  • predictive policing practices reliant solely on profiling or personal characteristic assessment; and
  • AI systems designed to manipulate human behaviour or exploit vulnerabilities.

Moreover, within the framework of the GDPR, biometric data is classified as a distinctive category of personal information according to Article 4, paragraph 14, encompassing data derived from specific technical processes associated with the physical, physiological or behavioural attributes of an individual, enabling or confirming their unique identification, such as facial images or fingerprint data. Consequently, the utilisation of biometric data is generally prohibited under Article 9, paragraph 1 of the GDPR. Notwithstanding, Article 9, paragraph 2 outlines several exceptional circumstances where biometric data may be lawfully processed, including instances where the data subject has provided explicit consent or when processing is deemed necessary for compelling public interest or health-related reasons.

Nonetheless, obtaining employees' consent for biometric data processing may not always suffice as a valid legal basis, particularly given the inherent power imbalance between employers and employees. Despite these complexities, Romanian legal precedent underscores the significance of comprehensive disclosure to employees regarding biometric data processing activities – as previously prescribed by Article 12 of the now-repealed Law No 677/2001 – in establishing a legitimate basis for such activities.

Thus, compliance with legal obligations regarding data transparency and consent emerges as a critical factor in navigating the complexities surrounding biometric data processing and the use of AI.

The GDPR establishes strict guidelines regarding automated decision-making, permitting it only under specific circumstances such as contractual necessity, lawful authorisation or explicit consent. However, these exceptions pose practical challenges for companies. Proving contractual necessity can be onerous, lawful authorisation is limited, and obtaining explicit consent, especially for complex AI systems, is challenging.

In addition, Article 22(3) mandates human intervention, adding operational complexity. Article 22(4) prohibits the use of special category data without explicit consent or lawful grounds, complicating AI tool usage, particularly for sensitive data. Hence, companies must navigate GDPR provisions carefully, balancing operational efficiency with compliance to protect individual rights and privacy effectively.

It is crucial to note that, while the GDPR grants individuals the right to be informed about automated decision-making, including profiling, it does not explicitly provide a right to object to such processes. European data protection authorities have clarified that this provision amounts to a prohibition rather than a right that the data subject must actively invoke. It remains to be seen how the AI Act and the GDPR will be interpreted to foster the private enforcement of individual rights in this regard.

In Romania, the absence of specific legislation targeting chatbots or AI substitutes does not undermine the importance of consumer protection laws in shaping the use of emerging technologies as commerce enablers. In light of a more permissive legal background, companies may utilise technologies such as machine learning algorithms, natural language processing, behavioural analytics and A/B testing for undisclosed suggestions or behavioural manipulation.

However, while there is no standalone law dedicated to these innovations, consumer protection laws serve as a foundational framework for ensuring fair and transparent practices. By addressing issues such as transparency, fairness, accuracy and data privacy, this legislation aims to prevent deceptive practices and to ensure that consumers receive accurate information on which to base informed decisions.

Exploitation of Market Power

  • Discrimination and bias: AI technologies can be utilised by dominant market players to discriminate against competitors or customers, potentially leading to unfair market practices.
  • Foreclosure of competitors: through mergers, exclusive agreements or leveraging big data, dominant companies may engage in practices that exclude or limit competition in the market.

Abuse Through Tying/Bundling

  • Preferential treatment: dominant players might use AI to bundle their services, offering preferential rates to users, thereby influencing consumer choices and limiting competition.
  • Redirection of users: AI-driven bundling strategies can redirect users towards the dominant company's offerings, potentially stifling competition and innovation.

Discrimination and Favouritism

  • In-house preference: dominant market players may utilise AI to prioritise their own products or services over those of competitors, disadvantaging third-party offerings and restricting market access.
  • Integration of AI services: companies might integrate existing AI applications into their own offerings, favouring subsidiary services and creating barriers to entry for competitors.

Denial of Access to Essential Facilities

  • Control of critical data: dominant companies may control essential databases or resources vital for AI development, denying access to competitors and impeding innovation.
  • Barrier to innovation: refusal to grant access to crucial datasets necessary for developing AI applications could hinder competition and innovation in the market.

It is likely that the market will find ways to address competition law issues, as it has in the past – for example, with the bundling of operating systems and Internet Explorer, and with access to app store markets. A handful of large language model solutions are expected to become market standards, acting as oligopolies; in this respect, competition policies will be needed to maintain a level playing field among AI developers, to the benefit of customers.

Data from the McKinsey Global Institute reveals a significant surge in the adoption of generative AI tools throughout 2023, with over 60% of enterprises either embracing or experimenting with AI to streamline their operational processes and enrich customer interactions. This uptick in demand places considerable pressure on AI suppliers to ensure compliance with existing regulations while safeguarding users against potential risks associated with their services and products. To tackle these challenges head-on, software providers are proactively updating their terms of service to bolster data protection, transparency, trustworthiness, fairness and security measures for their customers.

Simultaneously, both national and supranational institutions are playing pivotal roles in mitigating risks within the AI landscape. Although Romania has yet to implement specific measures to address these concerns, it is poised to align with the EU's directives by adopting the EU AI Act and, notably, the EU model contractual AI clauses. These clauses are available in two variants tailored for high-risk and non-high-risk systems, and can easily be appended to agreements. Originally designed for public organisations procuring AI systems, the EU's AI standard contractual clauses offer valuable insights applicable to any entity involved in acquiring or providing AI technologies, given the expansive reach of the EU AI Act.

AI is transforming the way companies manage their HR recruitment and termination processes, from screening resumes and scheduling interviews to onboarding activities. By automating tasks, AI enhances efficiency and allows HR teams to focus on strategic objectives. For instance, personal credit company Provident Romania has integrated AI to help and assist HR in hiring processes and other critical specific activities, saving 10,000 hours annually and reducing errors.

Concerns may arise about potential employee harm and legal compliance in relation to HR-related anti-discrimination policies and legal norms related to the workforce; AI algorithms may perpetuate biases present in training data, leading to unfair treatment based on gender or age, for example. Over-reliance on AI could also diminish the human touch in hiring, generating a risk of manipulating candidates (and candidates working around hiring policies) through AI-generated interview questions and neglecting the assessment of soft skills such as emotional intelligence or creative thinking.

Legally, using AI in hiring processes must comply with local regulations, such as Law No 202/2002 on equality of opportunity between women and men and Ordinance No 137 of 31 August 2000 on the prevention and punishment of all forms of discrimination, in order to avoid lawsuits and penalties. Employers must ensure fairness and transparency in AI practices, coupled with human oversight, to maintain ethical recruitment standards and an ethical approach to employee management.

Software technologies are increasingly being used to assess employee performance and monitor work activities, especially in remote working scenarios. They are intended to provide information on how tasks can be managed and performed more efficiently. AI can also be deployed in this process for greater accuracy, highlighting various ways in which work can be made more efficient and safer, with benefits for professional achievement.

Potential harms to employees include privacy concerns arising from intrusive monitoring, which may infringe personal data rights protected under the GDPR and lead to challenges over violations of employees' privacy rights protected under local regulations.

Therefore, the use of AI for performance evaluation and monitoring must be done in line with ensuring transparency, privacy and confidentiality protection. Achieving a balance between increasing efficiency and protecting employee well-being is imperative for the ethical integration of AI in the workplace.

Romanian digital companies are consistently investing in AI capabilities to enhance their efficiency and their user experience. For example, car services companies have improved their route optimisation by using AI algorithms that analyse traffic data and road conditions in real time. In addition, companies such as Uber have considerably improved their safety features through AI, which can detect unexpected deviations from the established route that could indicate an unsafe situation. As for food delivery companies, AI is also being used to make personalised restaurant or food recommendations based on the customer’s preferences or order history.

In this context, there have been a considerable number of developments in the gig economy due to AI algorithms. For instance, AI has had a significant impact on freelance work, since it has revolutionised the way freelancers find gigs by matching skills with project requirements. In addition, AI-powered platforms are considered an efficient tool used by both gig workers and hirers to refine project searches, customise recommendations based on individual preferences and past performance, and even suggest future project types where a freelancer may excel. An AI-driven gig economy is also extremely dynamic, because AI automates repetitive tasks, giving freelancers the possibility to save time and focus on strategic decisions and the improvement of their services or products. Moreover, in an ever-changing labour market where upskilling is the key, it goes without saying that gig workers have easier access to online courses and qualifications. 

However, what is really problematic on this matter is the fact that there is currently no legal framework to regulate gig workers’ employment rights. A European directive on this issue is being prepared by the European Parliament and the Council, and will be transposed into member states’ legislation once it enters into force.

In Romania, financial services companies are making significant use of AI algorithms to efficiently verify a huge number of transactions and identify potentially fraudulent activity. AI technologies are also being used to assess the creditworthiness of customers. Nevertheless, it goes without saying that there are some risks associated with the use of AI in the financial sector, ranging from technical errors, lack of transparency and data protection breaches to the difficulty of complying with existing regulatory requirements. For example, companies using AI have to comply with the National Bank of Romania's regulations and a variety of European regulations and directives, such as the GDPR, the Revised Payment Services Directive or the Markets in Financial Instruments Directive II. The reuse of data can have a significant impact on the performance of AI systems, with risks related to data drift or selection bias, especially in the insurance market.

As AI becomes more widespread around the world, Romania is transforming its healthcare system by implementing cutting-edge diagnostic methods. More and more Romanian hospitals and private clinics are investing in AI techniques and algorithms that can improve the quality of medical images and provide radiologists with more accurate results, allowing them to make a correct diagnosis and plan appropriate treatment. For example, well-known Romanian medical network Regina Maria has implemented a platform called DeepcOS AIM, where mammography images are automatically sent and analysed by an AI system. Although this platform cannot make a diagnosis on its own, it provides valuable information such as the abnormality score or the type of lesion.

In addition, many hospitals across the country have started using the da Vinci Xi robot, a computer-assisted system that allows surgeons to perform complex procedures with greater precision and flexibility than traditional surgical techniques. Romania is also using AI-based tools in research and telemedicine, which allows patients to consult with healthcare providers in real time. AI is expected to become a major enhancer for diagnostics and for providing remote healthcare and medical guidance in remote areas that lack availability of qualified doctors. As far as is known, there are no specific claims related to malpractice through using AI.

There are also risks associated with the use of AI in the medical system; over-reliance on AI is dangerous because it can potentially lead to misdiagnosis. Furthermore, given that AI systems require access to vast amounts of sensitive information, their improper handling can lead to serious data breaches or exploitation for various purposes without patient consent. Mental health, infectious diseases, substance abuse or palliative care are just some of the medical areas where there is a high risk of misuse of sensitive data, which can have far-reaching consequences. In this context, it goes without saying that improving national cybersecurity is one of the main priorities in Romania's new Artificial Intelligence Strategy (2024–2027).

Following the public consultation phase for the National Strategy for the Use of Artificial Intelligence in Romania between 2024 and 2027, autonomous vehicles were identified as an area requiring a legislative update. The Ministry of Research, Innovation and Digitalisation is currently overseeing the drafting of an analysis document on the establishment and operationalisation of experimental poles in the field of autonomous vehicles – a field that offers numerous innovation opportunities. For example, according to the aforementioned strategy, Romania's declared main transport objective of digitising road infrastructure could also be achieved by installing sensors on motorways to guide autonomous vehicles through a communication channel with VANETs (Vehicular Ad-hoc Networks).

As in most countries in the world, AI is not yet well defined in Romanian legislation. In the absence of a national legal framework, the use of AI in manufacturing is governed by EU directives and regulations and by national laws applicable to all types of products. For example, product safety and liability are covered by Regulation (EU) 2023/988 on general product safety, a legal instrument that updates Directive 2001/95/EC in light of recent developments related to new technologies.

In its National Strategy for the Use of Artificial Intelligence between 2024 and 2027, Romania aims to stimulate the development and application of industrial automation solutions based on robots using AI techniques. Moreover, the Ministry of Research, Innovation and Digitalisation plans to participate with a series of projects in Cluster 4 Horizon Europe “Digital, Industry and Space”, where manufacturing technologies are one of the intervention areas.

Although there are no clearly delineated regulations governing the use of AI in professional services in Romania, some overarching legal instruments are applicable, such as the GDPR or the Network and Information Security Directive. However, in their recently published National Strategy for the Use of Artificial Intelligence between 2024 and 2027, the Romanian authorities confirm their commitment to adopt AI strategies in the public sector for digital public services and in the private sector for economic competitiveness.

More specifically, as regards public services for citizens and businesses, the main objective is to develop a government cloud system that will allow the use of AI, machine learning and big data technologies to improve services in areas such as healthcare (eg, development of an advanced telemedicine system) or public administration (eg, adoption of workflow automation technology). In this context, the Romanian government released its eGovernment (eGov) framework in June 2021, setting out a number of objectives and measures related to the use of AI, which should contribute to improving the digitisation of public services in Romania.

The ability of an AI-based system to be an inventor was analysed in a relatively recent decision of the European Patent Office (EPO). In a hearing on the Thaler case in December 2021, the EPO's Legal Board of Appeal confirmed that an AI system such as DABUS cannot be named as an inventor in patent applications; according to the EPO, under the European Patent Convention (EPC), the inventor designated in a patent application must be a human being. Furthermore, the EPO emphasised that patent rights are granted to the inventor of the patentable invention and cannot be extended to entities without legal personality, such as machines.

Regarding the applicability of EU copyright law to AI-enabled output, the lack of fully harmonised rules on authorship and copyright ownership has led to divergent approaches in member states. However, an examination of EU copyright law reveals that four interrelated criteria have to be met for an AI-assisted creation to be considered a protected “work”:

  • it must be a “production in the literary, scientific or artistic field”;
  • it must be the result of human intellectual effort;
  • it must involve creative choices; and
  • such choices must be “expressed” in the output.

Following the European Court of Justice's reasoning in the Painer case, three distinct phases of the creative process in machine-assisted production can be identified: “conception” (design and specification), “execution” (creation of drafts) and “editing” (refining and finalising). While AI systems are mainly involved in the execution phase, human authors often retain a crucial role in the conception and editing phases. Thus, if an AI system is programmed to produce content autonomously, without human involvement in either of those two phases, the output would not meet the criteria to be considered a “work” eligible for copyright protection. One can envisage that, at some point, tracking IP rights will amount to tracking the link between the human “mastering the machine” and the machine creating new forms of (human) expression as requested by the human author.

Numerous companies choose to protect their AI results, algorithms and methods with trade secrets, which can cover innovations that fall outside the scope of copyright or patents. Trade secrets can theoretically be protected indefinitely, as long as they remain secret and commercially valuable, and do not require registration, making them cost-effective and compatible with the rapid pace of innovation in AI.

EU Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure aims to harmonise the protection of trade secrets across the 27 member states. It establishes civil mechanisms to protect victims of trade secret misappropriation, without introducing criminal sanctions. These include preventing the unauthorised use and disclosure of misappropriated trade secrets, removing from the market goods produced using illegally obtained trade secrets, and entitling victims to compensation for damages resulting from the unlawful use or disclosure of misappropriated trade secrets.

Despite its clear benefits, the protection of trade secrets poses potential risks in the context of AI-based technologies. The main challenge is the potential loss of secrecy through theft or breach of contract, leaving AI components vulnerable and without trade secret protection.

The intersection of AI and IP has become increasingly complex as AI systems are used to create original works of art and authorship. In this context, the European Parliament published a report in 2020 on IP rights for the development of AI technologies. This report distinguishes between human creations assisted by AI systems and creations generated by AI systems themselves. The latter raise new regulatory challenges for the protection of IP rights, such as ownership, inventorship, appropriate remuneration and other issues related to potential market concentration.

The European Parliament considers that, where AI is used merely as a tool to assist an author in the creative process, the current IP framework remains applicable. In this respect, the European Parliament considers that works produced autonomously by artificial agents and robots may not be eligible for copyright protection, as the principle of originality, which is linked to a natural person, and the concept of “intellectual creation” relate to the personality of the author. However, in the explanatory statement of the report, the European Parliament highlights the fact that the condition of originality could hinder the protection of AI-generated creations.

As artistic creation by AI becomes more common (eg, the “Next Rembrandt” painting generated by an AI-based system after observing hundreds of the painter's works), there seems to be growing recognition that an AI-generated creation could be considered a work of art based on its creative result rather than the creative process. In addition, it is important to note that a lack of protection for AI-generated creations could deprive the performers of these creations of their rights, as the protection provided by related rights presupposes the existence of copyright in the work being performed.

When discussing the challenges faced by companies using generative AI tools, such as those developed by OpenAI, the following issues need to be carefully considered:

  • ownership of the generated content;
  • the possibility that the content in question may be subject to copyright protection;
  • infringement of the rights of the original content creator through derivative works;
  • patents;
  • respect for trade secret rights;
  • the obligation to refrain from reverse engineering; and
  • compliance with licence and usage agreements.

Although the field of AI has come a long way in recent years, it is still very much in its infancy. Understanding AI and getting used to its systems is a process that takes time. Therefore, while there is no pre-determined checklist for boards of directors who want to implement AI algorithms in their companies, there are a few things that should be considered.

For example, given that AI systems rely heavily on data, often sensitive and personal information, boards need to ensure that the collection, storage and processing of data complies with the requirements of the GDPR. In addition, boards should assess the fairness and transparency of their AI systems, ensuring that they do not reinforce existing biases or discriminate against particular groups. They should also assess the cybersecurity implications of AI and conduct regular risk assessments. Finally, company boards should be aware that the integration of AI may require the upskilling of their workforce, which in turn calls for new strategies for transition, training and talent acquisition.

Nowadays, implementing AI best practices means navigating a plethora of guidelines available (especially) at international level. In this regard, Romanian companies should start by understanding the existing legal framework. Since Romania does not currently have any specific AI legislation, investors and entrepreneurs should focus on the forthcoming implementation of the recently endorsed EU AI Act. Start-ups and small and medium-sized enterprises in particular should explore this law, as it offers them a compliant path towards developing and training AI models before releasing them to the general public.

Until the entry into force of the EU AI Act, Romanian companies should:

  • conduct a thorough risk assessment to identify potential risks associated with AI implementation and develop strategies to effectively mitigate these risks;
  • adapt AI practices and goals to their resources and capabilities;
  • invest in training programmes for employees; and
  • conduct regular audits to verify compliance with AI best practices and strategies, both those established internally and those recognised internationally.

It is therefore significant that, in addition to the EU AI Act, the United Nations General Assembly recently adopted a key resolution on promoting “safe, secure and trustworthy” AI systems.

On the other hand, too much regulation can stifle innovation, especially at start-up level. A healthy compliance route must be carefully balanced against the highly competitive field of AI start-ups, where every cent spent matters. The battle for new markets and tech niches has begun, and the European Union must ensure not only compliance with its norms but also global competitiveness.

Lexters

Helesteului Street, no.17
Bucharest
Romania

+40 745 772 762

contact@lexters.com
www.lexters.com