Contributed By Bird & Bird
The UAE is recognised as a leading example of the active adoption of innovation and technology to benefit key sectors, including education, automotive, healthcare and media.
While we see AI-relevant amendments being made to existing regulation, there is currently no specific UAE law governing Artificial Intelligence (AI).
Healthcare
The UAE has embraced machine learning in its healthcare system and aims to integrate AI further. For instance, during the pandemic, AI played a vital role in managing COVID-19 by restricting movement through the “Oyoon” programme. This system monitored residents’ permits using facial, voice and licence-plate recognition. Additionally, the Dubai Health Authority plans to automate surgeries using AI and robotics.
The UAE Ministry of Health and Prevention (MOHAP) employs AI for the diagnosis of diseases such as tuberculosis, using chest X-ray algorithms. The system validates radiologists’ findings and aids in pre-screening procedures, reducing costs.
The advantages of AI in healthcare are perceived to include reduced errors, faster medicine development, and automation of administrative tasks, benefiting both patients and staff.
Aviation
The UAE’s aviation authority, the General Civil Aviation Authority (GCAA), has permitted exploration of the use of AI in air-traffic management. The authority has also deployed automated robots in airports to detect the faces of suspected criminals.
Perceived advantages from use of AI in aviation include:
Education
Several schools across the UAE have partnered with technology companies to integrate a digital education programme into their curricula. The primary objectives are to cut costs and promote education among the population. The UAE Ministry of Education also plans to introduce AI tutors (using technology similar to ChatGPT) into classrooms.
The advantages of the use of AI in education include the following:
Workplace
AI is being rolled out in the workplace and in government services. The perceived benefits include:
Automotive
AI is also revolutionising the automobile sector, enhancing convenience and safety. Key areas benefiting from AI include maintenance, car connectivity, autonomous driving, electrification, and sensors.
Exploration of innovative AI applications in the automobile industry include the following:
The perceived advantages of AI in the automotive sector include the following:
In April 2019, the UAE Cabinet adopted the National Artificial Intelligence Strategy 2031 (the National Strategy) aimed at positioning the UAE as a global leader in artificial intelligence by 2031. The National Strategy set out eight strategic objectives:
The UAE has also empowered the Artificial Intelligence and Advanced Technology Council (AIATC) to focus on positioning the UAE as a hub for AI investments, partnerships and talent. It is tasked with the oversight of financing, investment and research plans for AI and advanced technology.
There is currently no specific AI law in the UAE. However, there are a number of key initiatives being implemented to guide the adoption of AI. (See 5.1 Regulatory Agencies).
We are also seeing AI-relevant adjustments being made to existing regulation, and comment on this below. (See 3.6 Proposed AI-Specific Legislation and Regulations, for example, on the AI amendment to the DIFC Data Protection Law).
We anticipate the implementation of further adjustments to existing law to accommodate the particularities of AI, and the development of AI-specific regulation. That development is likely to be supported by the UAE Regulations Lab (RegLab), launched in 2019, which focuses on drawing up new business-enabling regulations following the testing and evaluation of innovations enabled by new technologies.
While there is no specific AI law, as identified in 2.2 Involvement of Governments in AI Innovation, the National Strategy sets out clear objectives, and the bodies identified in 5.1 Regulatory Agencies have issued policy statements and guidance that seek to safeguard the development and adoption of AI and will shape relevant AI regulation. See, for example, Smart Dubai’s AI Ethics Principles and Guidelines.
Please see 3.1 General Approach to AI-Specific Legislation and 2.2 Involvement of Governments in AI Innovation.
This issue is not applicable in the UAE.
This issue is not applicable in the UAE.
This issue is not applicable in the UAE.
Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data (PDPL) is the primary regulation on data protection in the UAE. The PDPL deals more generally with the rights of data subjects, such as the rights to rectification and erasure of data. Although it does not explicitly reference AI models, appropriate measures and procedures must be in place to ensure the erasure or correction of inaccurate personal data.
Purpose limitation and data minimisation are key principles under the UAE data protection regime. An organisation must only process personal data for specific and lawful purposes (“purpose limitation”), and must only collect data that is relevant to the needs of the organisation (“data minimisation”). Once the purpose has been fulfilled, the personal data collected should be deleted, and not retained or used for any additional purposes. To avoid violating these provisions, it is recommended that personal data be anonymised, pseudonymised, securely encrypted or archived in a manner that ensures that the data is put beyond further use.
DIFC Data Protection Law No. 5 of 2020 (DIFC DPL) was amended in September 2023 specifically to regulate autonomous and semi-autonomous systems, including AI and generative machine-learning technology. Article 38 of the DIFC DPL provides that a data subject must have the right to object to any decision based solely on automated processing, including profiling, which produces legal consequences or other seriously impactful consequences concerning them, and to require that such a decision be reviewed manually.
Regulation 10 of the DIFC DPL:
“System” under Regulation 10 means any machine-based system operating in an autonomous or semi-autonomous manner that can process personal data for purposes that are human-defined or purposes that the system itself defines (or both) and generates output as a result of or on the basis of such processing.
While Regulation 10 does not expressly reference AI, it is evident from the guidance that the definitions used are adapted from the OECD guidelines and the European Union’s draft Regulation on harmonised rules on AI (the “EU AI Act”).
Regarding copyright, Federal Decree Law No. 38 of 2021 now protects “smart applications, computer programmes, databases, and similar works” generated by or with AI. Despite their non-human origin, AI-generated works may qualify for copyright protection under UAE Copyright Law.
Other relevant regulations include Federal Law No. 2 of 2019 on Information and Communication Technology in Health Fields (Health Data Law); Federal Decree-Law No. 3/2003 on Organizing Telecommunications (Telecoms Law); and Federal Decree Law No. 34 of 2021 on Combatting Rumors and Cybercrimes (Cybercrime Law).
The UAE government is clearly considering the development of regulations and frameworks to govern the use of AI. The implementation of AI legislation in other countries, including the UK, EU countries and the US, will no doubt be referenced in the introduction of AI-specific legislation in the UAE and influence the approach taken by UAE regulators when determining the key features of the legislation.
There are currently no judicial decisions issued in respect of generative AI and intellectual property rights.
The majority of court judgments for onshore UAE cases are not publicly available, making it very difficult to extract clear statements of principle from cases, particularly as judgments are not intended to be authoritative statements of law.
As described in 4.1 Judicial Decisions, there are currently no judicial decisions in the UAE in respect of AI. The definition of AI has not been tested by UAE courts. While there is also no legal definition of AI under UAE regulations, the UAE National Program for Artificial Intelligence Guide references the definition of AI used in the Merriam-Webster dictionary as “a branch of computer science dealing with the simulation of intelligent behaviour in computers.”
The implementation of the National Strategy (see 2.2 Involvement of Governments in AI Innovation) is supervised by the Emirates Council for Artificial Intelligence and Digital Transactions.
Since the adoption of the National Strategy, the UAE has demonstrated its commitment to delivering the eight objectives by:
See 4.2 Technology Definitions.
The Council for AI and Blockchain is dedicated to preventing harm associated with AI, such as data leaks, infringement of privacy and ethical concerns. It also seeks to create an environment conducive to the innovation and advancement of AI while maintaining societal ethics.
The RegLab focuses on mitigating any potential risks and developing regulations around AI use.
The AIATC and the financial free zones of the DIFC and ADGM regulate the use of AI technologies in finance. Their aim is to prevent financial fraud and personal data breaches while promoting AI innovation in the financial sector.
We are not aware of any AI-relevant enforcement actions, pending or otherwise.
In addition to the general guidance and policies issued by the bodies identified in 5.1 Regulatory Agencies, the following bodies will be relevant.
Companies operating in the UAE are regularly required to meet international standards set by regulatory bodies such as the International Organisation for Standardisation (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU).
UAE governmental authorities actively integrate AI across all functions. The UAE Strategy for Artificial Intelligence aims to enhance government performance using integrated smart digital systems, while the Emirates Council for Artificial Intelligence and Digital Transactions was established to oversee AI integration in government departments.
An example of the adoption of AI is the Telecommunications and Digital Government Regulatory Authority’s (TDRA) introduction of the AI-supported Unified Digital Platform, which streamlines user access to information and services.
In the legal sector, the UAE courts are using AI to improve case management, support translation, and provide virtual courtroom environments. The DIFC courts have issued guidelines for AI-generated content in litigation, while the Abu Dhabi Judicial Department uses AI solutions to monitor criminal cases.
Biometric and facial recognition technologies are also widely used by UAE government entities with their employees. More detail on facial recognition is provided in 11.3 Facial Recognition and Biometrics.
See 4.1 Judicial Decisions.
AI plays a significant role in national security matters in the UAE.
The UAE government employs AI in various aspects of national security to enhance capabilities in threat detection, intelligence analysis, border security, cybersecurity, and defence.
For obvious reasons, specific details of AI applications in national security are not publicly disclosed.
Generative AI can create original content based on learned patterns from accumulated data. Like other AI forms, it raises ethical issues related to data privacy, security, misinformation, plagiarism and copyright infringement, and generation of harmful content.
Determining intellectual property (IP) ownership is a challenge. The deep learning models used in generative AI, the lack of direct human intervention, and the reliance on existing data all blur IP ownership.
As AI advances, intellectual property laws can be expected to be supplemented and adjusted to address emerging issues. As an example, Federal Decree-Law No. 38 of 2021 on Copyrights and Neighbouring Rights has recently been updated to include “smart applications, computer programs, databases, and similar” works in its definition of “Works” protected by the law. AI-generated works may qualify for copyright protection despite their non-human origin. Users of AI systems may be considered authors, and bear responsibility for copyright infringement.
In connection with AI, the following need consideration:
For data protection see 3.6 Data, Information or Content Laws and 8.3 Data Protection and Generative AI.
See 8.1 Emerging Issues and Generative AI.
Please see 3.6 Data, Information or Content Laws.
Most law firms are actively exploring use cases and assessing AI’s costs and benefits. Deloitte’s recent survey found that 62% of lawyers believe AI will significantly impact the legal profession within three years. While lawyers are already using AI for tasks such as marketing, contract review and e-discovery, there is considerable scope for wider adoption. McKinsey & Company predicts that 22% of legal tasks can be automated by AI.
Common law firm use cases include the following:
Potential AI issues are as follows:
Aspects of the current law will clearly apply to AI-enabled technologies. Liability may arise as a result of a breach of contract, in which case the terms of the contract will set the liability and the general law of contract will address the remedies that may be available. Liability may also arise in tort, as codified in Articles 282–298 of the UAE Civil Code (Federal Law No. 1 of 1987 concerning the Civil Transactions Law of the UAE). Liability may also be imposed by more general regulations, such as Federal Law No. 15/2020 on Consumer Protection (UAE Consumer Protection Law) and various laws addressing data protection.
In simple terms, tort is founded on the principle that harm or injury caused to one person by another requires compensation. When it comes to AI, while an AI-enabled device is capable of learning from experience and making independent decisions based on its machine learning, decisions which may cause damage, it is not regarded as a legal person and has no independent legal status.
The common approach to the challenges raised in tortious liability for damage caused by AI focuses on the following.
If an AI-enabled device causes harm, loss or damage, under the Civil Code liability may lie with the person “having control” of the device. This takes us into the complex area of responsibility and control. With an AI-enabled device, responsibility and control may conceivably be found in a number of hands – the person operating the device, the business that supplied the AI application, the business that developed the AI, or the software coder who designed the underpinning algorithm.
Under the Civil Code, the position is that when several persons are responsible for a prejudicial act, each of them is responsible for their share in it. Accordingly, the Civil Code opens up the opportunity for an apportionment of damages between the potentially liable persons. However, what has not been tested is how the UAE courts would go about that apportionment.
The Consumer Protection Law may also be relevant; it details consumers’ rights to fair compensation for damage suffered as a result of the purchase or use of defective goods.
With regard to insurance, there are elements of existing cover that may apply to loss incurred as a result of AI-enabled technology. It is entirely predictable that the insurance industry will craft AI-specific coverage.
This question is addressed in 10.1 Theories of Liability and elsewhere in the responses.
The design and development of the underpinning algorithm and data used to train and improve the AI application introduce the possibility of bias. The generative nature of AI development then opens up the possibility of bias being exaggerated and hard-wired into the application. The potential unfairness and discrimination that can stem from bias needs to be evaluated and addressed when developing AI tools and when adopting AI-supported functionality.
Machine interpretation of demographically relevant statistics is inextricably linked to the quality and nature of the input data. Flaws in the base data can be exaggerated exponentially.
The dangers of bias are clear. Federal Decree-Law No. 34/2023 on Combating Discrimination, Hatred and Extremism (Anti-Discrimination Law) seeks to combat discrimination, hatred and extremism, prohibiting discrimination based on religion, belief, rite, community, sect, race, colour, ethnic origin or gender. The penalties for transgression include up to a year’s imprisonment and fines in the range of AED500,000 to AED1 million.
As we have seen, Article 38 (1) of the DIFC Data Protection Law provides that “the data subject shall have the right not to be subject to a decision based solely on automated Processing including Profiling which produces legal effects concerning him or her or significantly affects him or her”.
Also, as touched on above, the Ministry of Industry and Advanced Technology (MoIAT) is engaged in setting standards for AI technologies that seek to safeguard transparency and fairness. The UAE’s Ministry of AI, the UAE Artificial Intelligence and Blockchain Council and Smart Dubai also publish guidance designed to promote fairness and ethical behaviours in the adoption of AI technology.
As is so commonly the case with developing technology, AI brings both risk and benefit when it comes to the use of personal data.
The risks include:
The benefits available from the use of AI include:
See also 3.6 Data, Information or Content Laws and 8.3 Data Protection and Generative AI.
The UAE is a keen adopter of facial recognition technology. In 2021 the UAE Cabinet approved the use of facial ID in certain sectors to verify the identity of individuals and cut paperwork. Facial recognition is now in widespread use. For example, Dubai International Airport uses CCTV cameras and AI-enabled facial recognition technology to enhance security. The airport’s smart gates are also equipped with facial and iris recognition technologies.
The primary pieces of legislation governing the use of facial recognition technology in the UAE are the PDPL and, where relevant, the DIFC Data Protection Law (together, the “Data Laws”). The Data Laws do not prohibit the collection or use of biometric data, although they place significant obligations and restrictions on data controllers handling such data.
Companies using facial recognition and biometric technology:
Risk arises when the transparency and auditability of the automated decision-making (ADM) process are unclear – the so-called “black box” issue. The same issue impacts the ability of affected parties to challenge an ADM decision. There is also a recognised concern that AI models may have been trained on data repositories that introduce bias into the models.
The points raised in the context of the UAE data privacy regulations apply to ADM. See 3.6 Data, Information or Content Laws and 8.3 Data Protection and Generative AI.
Principle 3.3.2 of Smart Dubai’s Guidelines provides for the following.
Smart Dubai’s Ethical AI Toolkit states that traceability should be considered for significant decisions, particularly those that have the potential to result in loss, harm or damage, and that people should be informed of the extent of their interaction with AI systems.
Note, also, the requirement for disclosure in Regulation 10 of the DIFC DPL (see 3.6 Data, Information or Content Laws).
The last ten years or so in the EU have seen AI implicated in prosecutions for price-fixing activities, where AI has been used to monitor adherence to minimum resale price arrangements.
The relevant authorities are well aware of the potential anti-competitive use of AI-enabled tooling in the context of price setting, with tooling designed to monitor competitors’ pricing data and to predict competitors’ reactions to market changes.
While the prosecutions referred to above concerned AI-enabled infringements, it is uncertain how the relevant authorities would address a situation in which the activities in question had been undertaken by an AI-enabled tool taking an automated decision on the basis of machine learning.
The UAE recently enacted Federal Decree-Law No. 36/2023 on the Regulation of Competition (Competition Law) which introduces provisions, controls, and penalties designed to restrict anti-competitive behaviours. It will be important for companies using AI in price-setting to ensure compliance in order to avoid any antitrust violations.
Legal and Regulatory Compliance
We are seeing AI-specific regulation being crafted and introduced internationally. AI solutions must comply with evolving legal and regulatory frameworks and be able to adapt to new legislation and the inevitable adjustment and refinement of AI-relevant legislation.
AI contracts should:
Data Privacy and Security
The quantum of data used increases the risk of privacy breaches or data leaks, and the risk can be mitigated contractually by:
Intellectual Property (IP)
Contracts should:
Reputational Risk
AI solutions can produce biased or harmful outcomes, leading to reputational damage for businesses; these risks can be mitigated in the contract by:
Performance and Accountability
The procurement of developing technology always carries the risk that the delivered functionality will not meet the aspirations of those who commissioned it. The contract should:
Malicious Use
The risk of malicious use can be addressed in the contract by:
Lack of Transparency in the Functional Operation of the AI Solution, Fairness and Bias
The lack of transparency and bias challenges can be addressed in the contract by:
Scalability and Performance Degradation
The contract should:
AI tools are being used to screen CVs, identify the most qualified candidates based on skills, experience, preferences and availability for interview, and fill roles based on those findings. Predictive AI is being used to determine which employees are most likely to leave a company, as well as to generate employee performance metrics.
The use of AI in recruitment and employee dismissals could cause potential harm to employees and give rise to liability. By using algorithms trained on past hiring practices, AI could develop biases that reflect the human biases of past recruitment processes. In addition, algorithms may not be trained to account for candidates who are neurodivergent, or for differences in race, cultural background or disability that affect how candidates present, and therefore risk entrenching discriminatory practices.
Under Federal Decree-Law No. 33/2021 (UAE Labour Law), discrimination based on race, colour, sex, religion, national origin, social origin, or disability is expressly prohibited. Note, also, Federal Decree-Law No. 34/2023 on Combating Discrimination, Hatred and Extremism (Anti-Discrimination Law) addressed in 11.1 Algorithmic Bias.
The use of AI as a performance-review tool has become increasingly prevalent in workplaces. With the introduction of AI, employers have a more time-efficient and accurate way of measuring performance while minimising human error.
Through employee-evaluation software, algorithms can effectively monitor employee performance by tracking general progress, setting personalised goals for individual employees and providing transparent, personalised feedback to help employees understand their strengths and any areas of improvement needed.
There are potential risks in using AI for monitoring purposes. While the performance index itself might be easily quantifiable, AI software often lacks context and nuance when monitoring employee performance. For example, software that bases performance output on average toilet breaks might unfairly mark down disabled or pregnant workers, which could give rise to discrimination claims. Additionally, AI software still carries the risk of bias if used with biased or misinterpreted data. Employees might also fear invasion of privacy and invasive use of data with tracking and monitoring software, leading to a general distrust of performance assessments.
Employers must ensure full transparency and adherence to data privacy laws when implementing AI monitoring tools in order to avoid legal risks. Employers should also check data sets for bias and ensure performance-monitoring tools do not negatively affect individuals based on gender, race or other protected characteristics, as that could be grounds for discrimination claims.
In the UAE, digital platform companies using AI include the following.
AI is used in various functions, including but not limited to the following.
Applicable Regulations/Initiatives
The UAE Central Bank is launching a key digital finance initiative as part of its Financial Infrastructure Transformation Programme.
The Ministry of Finance, in collaboration with the Artificial Intelligence, Digital Economy, and Remote Work Applications Office and the Mohammed Bin Rashid Centre for Government Innovation (MBRCGI), is also launching a platform to develop financial legislation and policies using digital solutions. This “Rules as Code” platform aims to build a comprehensive digital infrastructure for the creation of AI-based laws and regulations.
Guidelines for Financial Institutions Adopting Enabling Technologies have been issued jointly by the Central Bank of the UAE, the Securities and Commodities Authority, the Dubai Financial Services Authority of the Dubai International Financial Centre, and the Financial Services Regulatory Authority of Abu Dhabi Global Market.
Risks to Financial Services Companies from the Use of AI in the UAE
The principal risks are those generally identified as relevant to AI. However, from a financial services perspective, the following are relevant:
Risks of Biases in Repurposed Data
In the context of AI being used in customer services and more complex decision-making processes such as credit scoring and risk assessment, the potential for biased AI outcomes becomes a significant concern.
Credit Scoring and Loan Approvals
Financial institutions can use AI models to automate credit scoring and loan approvals, but if the AI models are trained on biased data sets or incorporate historical biases, it can lead to discriminatory practices.
Automated Trading and Investment
AI-driven investment tools might develop biases based on historical data that favours investments in certain sectors or regions, potentially leading to unequal opportunities in wealth accumulation for individuals based on their profile characteristics.
Fraud Detection
AI systems used for detecting fraudulent activities can inadvertently target certain demographics if the training data wrongly associates fraud with particular patterns related to ethnicity, location, or transaction types typically used by specific groups.
Personalised Marketing
AI that tailors financial product advertisements might perpetuate economic disparities by primarily promoting more favourable financial products, such as low-interest loans, to wealthier individuals or specific demographic groups, while other individuals are exposed to advertisements for higher-risk options.
Customer Service Chatbots
Chatbots and similar AI-powered applications can also display biases during interactions. For instance, they may offer varying responses or levels of assistance depending on the user’s demographic information.
Federal Law No. 2 of 2019 on the Use of Information and Communication Technology in Health Fields (Health Data Law) is the most applicable regulation relating to the use of AI in healthcare (although it does not expressly mention AI). Its provisions relating to informed consent, privacy protection, anonymisation and accountability have implications for the development, deployment and use of AI technologies in healthcare.
The Ministry of Health and Prevention (MOHAP) is primarily responsible for the supervision of the use of AI in healthcare. The UAE has implemented regulations and guidelines to ensure the ethical use of AI in the UAE healthcare sector. The Department of Health Abu Dhabi (DOH) has issued its Policy on Use of Artificial Intelligence in the Healthcare Sector of Abu Dhabi, the aim of which is to enhance the reach and performance of healthcare-related services while minimising potential risks to patient safety and ensuring safe and secure AI use in healthcare management.
To address the risk of misdiagnosis and enhance treatment planning, MOHAP requires that AI devices and technology undergo continuous cycles of improvement and updating. Another requirement is that the technology being used be auditable, which helps ensure that any hidden biases in the training data are addressed.
UAE healthcare has also seen the adoption of holographic technology for surgery planning, creating three-dimensional landscapes of a patient’s organs to provide doctors with a deeper understanding of a patient’s anatomy and enable greater precision in surgery.
Another use of AI algorithms in healthcare is the prediction of diseases by using data-mining software to identify patients who are at risk of developing specific conditions. By analysing personal data, algorithms can effectively reduce the rate of hospitalisation and fast-track treatment.
In relation to machine learning, the PDPL, the DIFC DPL, the ADGM Data Protection Regulations 2021 (ADGM DPR) and the Health Data Law are the most relevant data regulations.
The UAE aims to enhance the efficiency, safety and sustainability of its transportation infrastructure by integrating AI, especially in autonomous vehicles. Dubai has set an ambitious strategy to transform 25% of the city’s transportation to autonomous driving by 2030.
The Regulation of Autonomous Vehicles
In April 2023, the Dubai Government issued Law No. (9) of 2023 Regulating the Operation of Autonomous Vehicles in the Emirate of Dubai (AV Regulations). The AV Regulations govern the licensing and regulatory framework, as well as operational standards, for autonomous vehicles. The Roads and Transport Authority (RTA) governs the use of autonomous vehicles and serves as a licensing, regulatory and investigative body.
Frameworks for AI Algorithms, Data Privacy, Cybersecurity, and Vehicle Performance
The National Strategy referred to above touched on AI governance in autonomous vehicles by setting standards for the development of AI algorithms which control these vehicles, ensuring they are safe and effective under various traffic conditions.
Regulations concerning vehicle performance typically focus on ensuring that autonomous vehicles meet specific safety and operational standards before they are allowed on public roads. This includes rigorous testing and certification processes, similar to those used in aviation and other safety-critical industries.
Ethical Considerations for ADM
The UAE is continuing to consider the development of ethical guidelines relating to the operation of autonomous vehicles and, in particular, how such vehicles should prioritise decision-making in unavoidable accidents.
International Harmonisation
The UAE is actively engaging with international bodies and participating in international discussions aimed at aligning its regulations and standards for autonomous vehicles and AI technologies with international norms, while advancing its technological infrastructure.
In the UAE, there are no specific regulations that govern the use of AI in manufacturing products. The current regulation addressing product safety and liability will continue to apply to products manufactured in an AI-enabled fashion.
While there is currently an absence of AI-specific legislation, the National Strategy, together with the guidance outlined in 5.1 Regulatory Agencies, will be relevant.
There is no specific legislation that directly governs the use of AI in professional services. The current regulation addressing professional services will continue to apply to professional services that are AI supported.
The considerations touched on in 9.1 AI in the Legal Profession and Ethical Considerations apply.
In many jurisdictions, whether AI can be considered an inventor or author for patent purposes has been a subject of debate. Traditional patent law usually requires a human inventor.
In the UAE, the position regarding AI inventors remains unclear, as the UAE Patent Office has not yet issued specific guidelines or decisions on this matter. However, UAE Federal Decree-Law No. 38 of 2021 on Copyrights and Neighbouring Rights (the New Copyright Law) generally requires a work to be original and the product of human creativity in order to qualify for protection. While AI systems can generate creative outputs, they are ultimately reliant on pre-existing data and algorithms. Consequently, the question arises as to whether AI-generated works can truly be considered original, or whether they merely replicate existing material.
While there are no publicly available judicial or agency decisions by UAE authorities which explicitly address the status of AI as an inventor or co-inventor, UAE IP laws typically follow international standards, which generally do not recognise AI systems as inventors.
The UAE, like many other jurisdictions, allows for the protection of trade secrets and confidential information. AI technologies often involve complex algorithms, datasets, and proprietary methodologies that can be considered trade secrets. By maintaining strict confidentiality and implementing appropriate security measures, companies can safeguard their AI technologies from unauthorised access, use, or disclosure.
In the UAE, contractual arrangements such as non-disclosure agreements (NDAs) play an important role in protecting AI technologies and trade secrets by ensuring that employees, contractors, and third parties maintain confidentiality. Companies should consider incorporating specific clauses into their contracts to address the ownership, non-disclosure and non-compete aspects related to AI. Additionally, contracts should outline the permitted use, access, and transfer of AI technologies and data, ensuring compliance with local laws and regulations.
In case of trade secret misappropriation or IP rights infringement, companies can seek legal remedies to enforce their rights and request compensation for damages. In the UAE, the courts provide a platform to protect trade secrets and intellectual property through civil litigation and injunctions.
While there are currently no explicit provisions for the protection of AI-generated works in the UAE, there is potential for legal reforms reflecting the nation’s commitment to fostering innovation.
Federal Decree-Law No. 38 of 2021 on Copyrights and Neighbouring Rights (Copyright Law) protects original literary and artistic works, including computer programmes. However, existing copyright laws do not explicitly address AI-generated content. This ambiguity poses challenges when it comes to determining ownership and protecting AI-generated works, as traditional copyright laws typically attribute authorship to human creators. The Copyright Law also stipulates that the author is the person who brings the work into existence. Therefore, for AI-generated content, it may be necessary to establish guidelines or frameworks that attribute authorship to both the AI and the human input that trained and refined the AI algorithm.
Using OpenAI, or any AI tool or platform, to create works raises several IP issues. The terms of use for these platforms often dictate the ownership of the output.
In the case of OpenAI and similar platforms, the user might retain IP rights to the unique output they generate if the platform’s terms of service allow it. However, where the AI contributes a substantial creative element, the question of authorship can be contentious. Users should be particularly cautious of the IP clauses within the terms of service agreements before using AI-generated output for commercial purposes.
When advising corporate boards of directors in the UAE on the adoption of AI, there are several key issues that can impact the success and risk management of AI initiatives.
Regulatory Compliance
Boards should have a basic knowledge of how AI works and its potential impact on their business. They should be advised on the current and upcoming regulations specifically targeting AI technologies in the UAE. In addition, they need to ensure strict adherence to the UAE’s data protection laws.
Technological Challenges
Boards should acknowledge the potential complexities associated with the integration of AI and should implement stringent standards for evaluating system effectiveness. They should ensure that proper cybersecurity measures are in place.
Strategic and Operational Risks
AI initiatives should be aligned with the organisation’s strategy and ethical values.
The UAE has made significant strides in formulating AI strategies, frameworks and policies, such as the UAE Strategy for Artificial Intelligence, the AI Ethics Guidelines and the AI Ethics Toolkit.
The principles and guidelines support industry, academia and individuals in understanding how AI systems can be used, and provide a self-assessment tool for developers to assess their platforms. In addition, the UAE National Program for Artificial Intelligence has issued a guide on Best Practices for Data Management in Artificial Intelligence Applications.
The key issues for organisations to consider when implementing specific AI best practices are as follows.
Burj Daman, Level 14
Dubai International Financial Centre
Dubai
UAE
+971 4 309 3222
Simon.shooter@twobirds.com
www.twobirds.com