Artificial Intelligence 2024 Comparisons

Last Updated May 28, 2024

Contributed By Bird & Bird

Law and Practice

Authors



Bird & Bird is an international full-service law firm with more than 1,700 lawyers and legal practitioners in 32 offices across Europe, the Middle East, North America, Africa and Asia Pacific. The firm has built a stellar, global reputation for providing sophisticated, pragmatic advice to companies that are carving out the world’s digital future. Our first UAE office opened in Abu Dhabi (2011), followed by our Dubai office (2016). Licensed in Abu Dhabi mainland, the ADGM and DIFC, our main practice areas include banking & finance (including Islamic finance); corporate, commercial & M&A; employment; dispute resolution; investigations; intellectual property; and TMT. Our 40-strong UAE team includes seven partners and 20 associates and paralegals, with many of our lawyers having ten years’ experience or more practising in the UAE. The local team has an in-depth understanding of the laws, business practices and customs of the region and its core industries.

The UAE is known as a leading example of actively adopting innovation and technology to benefit key sectors including education, automotive, healthcare and media.

While we see AI-relevant amendments being made to existing regulation, there is currently no specific UAE law governing Artificial Intelligence (AI).

Healthcare

The UAE has embraced machine learning in its healthcare system, and aims to further integrate AI. For instance, during the pandemic, AI played a vital role in managing COVID-19 by restricting movement through the “Oyoon” programme. This system monitored residents’ permits using facial, voice and licence-plate recognition. Additionally, the Dubai Health Authority plans to automate surgeries using AI and robotics.

The UAE Ministry of Health and Prevention (MOHAP) employs AI to diagnose diseases such as tuberculosis, using chest X-ray algorithms. This system validates radiologists’ findings and aids in pre-screening procedures, reducing costs.

The advantages of AI in healthcare are perceived to include reduced errors, faster medicine development, and automation of administrative tasks, benefiting both patients and staff.

Aviation

The UAE’s aviation authority, the General Civil Aviation Authority (GCAA), has permitted exploration of the use of AI in air-traffic management. The authority has also deployed automated robots in airports to detect the faces of suspected criminals.

Perceived advantages from use of AI in aviation include:

  • enhanced safety-management systems and protocols;
  • improved spatial mapping;
  • optimised management of aircraft-maintenance schedules;
  • improved security;
  • more efficient flight-route planning;
  • reduced call times with customer service;
  • a more efficient flight-booking process; and
  • faster airport check-in.

Education

Several schools across the UAE have partnered with technology companies to integrate a digital education programme into their curricula. The primary objectives are to cut costs and promote education among the population. The UAE Ministry of Education also plans to introduce AI-generated tutors (using technology similar to ChatGPT) into classrooms.

The advantages of the use of AI in education include the following:

  • assessment of the individual needs of each student by tracking performance and requirements, which can, in turn, optimise the information that schools have on their students;
  • the provision of education for those living in isolated communities;
  • collaborative learning when students or teachers are not present physically at the same location; and
  • streamlined grading of tests and homework.

Workplace

AI is being rolled out in the workplace and in government services. The perceived benefits include:

  • automation of repetitive tasks;
  • use of AI in recruitment with predictive analytics software for processing and filtering candidates;
  • adoption of AI-enabled chatbots to augment customer support;
  • use of AI tools to operate as invisible teammates to upskill workers; and
  • use of AI analytics to allow companies to better understand and unlock value in their data.

Automotive

AI is also revolutionising the automobile sector, enhancing convenience and safety. Key areas benefiting from AI include maintenance, car connectivity, autonomous driving, electrification, and sensors.

Innovative AI applications being explored in the automobile industry include the following:

  • autonomous taxis;
  • AI-controlled pedestrian crossings; and
  • accident management by warning drivers of hazards.

The perceived advantages of AI in the automotive sector include the following:

  • improved predictive maintenance, with AI supporting real-time alerting systems that monitor and maintain vehicles using historical and contextual data;
  • enhanced navigation allowing improved safety features in vehicles through real-time object recognition;
  • better cruise control;
  • fleet management;
  • more effective voice command capability; and
  • enhanced customer communication and automated manufacturing, deploying AI-supported robots on the assembly line.

In April 2019, the UAE Cabinet adopted the National Artificial Intelligence Strategy 2031 (the National Strategy) aimed at positioning the UAE as a global leader in artificial intelligence by 2031. The National Strategy set out eight strategic objectives:

  • building the UAE’s position as an AI destination and as a global hub for artificial intelligence;
  • increasing the competitive edge of the AI sector in the UAE;
  • establishing an incubator for AI innovations;
  • employing AI in the field of customer services to improve the quality of life;
  • attracting and training talent for the AI enabled jobs of the future;
  • attracting leading research capabilities;
  • providing a data-driven infrastructure to support AI experiments; and
  • delivering strong AI governance and regulations.

The UAE has also empowered the Artificial Intelligence and Advanced Technology Council (AIATC) to focus on positioning the UAE as a hub for AI investments, partnerships and talent. It is tasked with the oversight of financing, investment and research plans for AI and advanced technology.

There is currently no specific AI law in the UAE. However, there are a number of key initiatives being implemented to guide the adoption of AI. (See 5.1 Regulatory Agencies).

We are also seeing AI-relevant adjustments being made to existing regulation, and comment on this below. (See 3.6 Proposed AI-Specific Legislation and Regulations, for example, on the AI amendment to the DIFC Data Protection Law).

We anticipate the implementation of further adjustments to existing law to accommodate the particularities of AI, and the development of AI-specific regulation. That development is likely to be supported by the UAE Regulations Lab, launched in 2019, which focuses on drawing up new business-enabling regulations following the testing and evaluation of innovations enabled by new technologies.

While there is no specific AI law, as identified in 2.2 Involvement of Governments in AI Innovation, the Strategy sets out clear objectives and the bodies identified in 5.1 Regulatory Agencies have issued policy statements and guidance that seek to safeguard the development and adoption of AI, and will shape relevant AI regulation. See, for example, Smart Dubai’s AI Ethics Principles and Guidelines.

Please see 3.1 General Approach to AI-Specific Legislation and 2.2 Involvement of Governments in AI Innovation.

This issue is not applicable in the UAE.

This issue is not applicable in the UAE.

This issue is not applicable in the UAE.

Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data (PDPL) is the primary regulation on data protection in the UAE. The PDPL deals with the rights of data subjects, such as the right to rectification and deletion of data more generally. Although it does not explicitly reference AI models, appropriate measures and procedures must be in place to ensure the erasure or correction of incorrect personal data.

Purpose limitation and data minimisation are key principles under the UAE data protection regime. An organisation must only process personal data for specific and lawful purposes (“purpose limitation”), and must only collect the data relevant to its needs (“data minimisation”). Once used, personal data should be deleted and not retained for any additional purposes. To avoid violating these provisions, it is recommended that personal data be anonymised, pseudonymised, securely encrypted or archived in a manner that puts the data beyond further use.

DIFC Data Protection Law No. 5 of 2020 (DIFC DPL) was amended in September 2023 specifically to regulate autonomous and semi-autonomous systems, including AI and generative machine-learning technology. Article 38 of the DIFC DPL provides that a data subject must have the right to object to any decision based solely on automated processing, including profiling, which produces legal consequences concerning them, or other seriously impactful consequences, and to require such decision be reviewed manually.

Regulation 10 of the DIFC DPL:

  • requires that persons be made aware when their data is processed by an autonomous or semi-autonomous system such as AI;
  • prohibits use of the system unless it is capable of processing personal data only for purposes that are human-defined or human-approved;
  • imposes obligations on the deployers and operators of such systems, such as the requirement that the processing of personal data by the systems comply with the DIFC Data Protection Law; and
  • requires that the systems be built on the basis of unbiased algorithmic decision-making, fairness, transparency, security and accountability.

“System” under Regulation 10 means any machine-based system operating in an autonomous or semi-autonomous manner that can process personal data for purposes that are human-defined or purposes that the system itself defines (or both) and generates output as a result of or on the basis of such processing.

While Regulation 10 does not expressly reference AI, it is evident from the guidance that definitions used are adapted on the basis of the OECD guidelines and the draft Regulation of the European Union on harmonised rules on AI (the “EU AI Act”).

Regarding copyright, Federal Decree-Law No. 38 of 2021 now protects “smart applications, computer programs, databases, and similar works” generated by or with AI. Despite their non-human origin, AI-generated works may qualify for copyright protection under UAE Copyright Law.

Other relevant regulations include Federal Law No. 2 of 2019 on Information and Communication Technology in Health Fields (Health Data Law); Federal Decree-Law No. 3 of 2003 on Organizing Telecommunications (Telecoms Law); and Federal Decree-Law No. 34 of 2021 on Combatting Rumors and Cybercrimes (Cybercrime Law).

The UAE government is clearly considering the development of regulations and frameworks to govern the use of AI. The implementation of AI legislation in other countries, including the UK, EU countries and the US, will no doubt be referenced in the introduction of AI-specific legislation in the UAE and influence the approach taken by UAE regulators when determining the key features of the legislation.

There are currently no judicial decisions issued in respect of generative AI and intellectual property rights.

The majority of court judgments for onshore UAE cases are not publicly available, making it very difficult to extract clear statements of principle from cases, particularly as judgments are not intended to be authoritative statements of law.

As described in 4.1 Judicial Decisions, there are currently no judicial decisions in the UAE in respect of AI. The definition of AI has not been tested by UAE courts. While there is also no legal definition of AI under UAE regulations, the UAE National Program for Artificial Intelligence Guide references the definition of AI used in the Merriam-Webster dictionary as “a branch of computer science dealing with the simulation of intelligent behaviour in computers.”

The implementation of the National Strategy (see 2.2 Involvement of Governments in AI Innovation) is supervised by the Emirates Council for Artificial Intelligence and Digital Transactions.

Since the adoption of the National Strategy, the UAE has demonstrated its commitment to delivering the eight objectives by:

  • appointing the UAE Minister of State for AI, who has issued guidance papers such as the December 2022 AI Ethics Principles and Guidance paper;
  • devising the UAE National Program for Artificial Intelligence (BRAIN), which consolidates resources, emphasising the UAE’s aim to become a leading participant in the responsible use of AI globally; its AI Guide details the approach adopted by the UAE to AI and comments on the key considerations of relevance – AI governance, data governance, cybersecurity, ethics, bias and employment;
  • setting up the Council for AI and Blockchain to focus on developing policies to create an AI-friendly ecosystem promoting collaboration between the public and private sectors while also safeguarding ethical considerations such as privacy and non-discrimination;
  • setting up the UAE Council for Artificial Intelligence and Digital Transactions to oversee AI integration in government departments and education and society in general; it also oversees the positive use of AI, privacy of user data, data security and integrity and efficient data-sharing with competent authorities;
  • creating the Artificial Intelligence and Advanced Technology Council (AIATC) focused on positioning the UAE as a hub for AI investments, partnerships and talent, and also tasked with the supervision of financing, investment and research plans for AI and advanced technology;
  • launching the UAE Regulations Lab (The RegLab) in 2019, in partnership with the Dubai Future Foundation, to develop new business-enabling regulations following the testing and evaluation of innovations enabled by new technologies; and
  • introducing Digital Dubai to develop and oversee implementation of policies and strategies that govern all matters related to Dubai’s information technology, data, digital transformation and cybersecurity; Digital Dubai has issued an Ethical AI Toolkit to support industry, academia and individuals in the responsible use of AI systems, including AI Ethics Principles and Guidelines (the AI Guidelines) addressing fairness, transparency, accountability and “explainability”.

See 4.2 Technology Definitions.

The Council for AI and Blockchain is dedicated to preventing harm associated with AI, such as data leaks, infringement of privacy and ethical concerns. It also seeks to create an environment conducive to the innovation and advancement of AI while maintaining societal ethics.

The RegLab focuses on mitigating any potential risks and developing regulations around AI use.

AIATC and the financial free zones of DIFC and ADGM regulate the use of AI technologies in finance. Their aim is to prevent financial fraud and any personal data breach while promoting AI innovation in the financial sector.

We are not aware of any AI-relevant enforcement actions, pending or otherwise.

In addition to the general guidance and policies issued by the bodies identified in 5.1 Regulatory Agencies, the following bodies will be relevant.

  • The Ministry of Industry and Advanced Technology (MoIAT). This is the primary UAE government body responsible for setting standards across all sectors. MoIAT is tasked with ensuring that AI systems adhere to ethical standards and promote transparency and accountability.
  • The GCC Standardisation Organisation (GSO). This organisation defines and develops GSO standards related to AI and machine learning. These technical standards are not publicly available, but can be purchased from MoIAT or via GSO’s website.

Companies operating in the UAE are regularly required to meet international standards set by regulatory bodies such as the International Organisation for Standardisation (ISO), the International Electrotechnical Commission (IEC), and the International Telecommunication Union (ITU).

UAE governmental authorities actively integrate AI across all functions. The UAE Strategy for Artificial Intelligence (AI) aims to enhance government performance using integrated smart digital systems, while the Council for Artificial Intelligence and Digital Transactions was established to oversee AI integration in government departments.

An example of the adoption of AI is the Telecommunications and Digital Government Regulatory Authority (TDRA) introduction of the AI-supported Unified Digital Platform which streamlines user access to information and services.

In the legal sector, the UAE courts are using AI to improve case management, support translation, and provide virtual courtroom environments. The DIFC courts have issued guidelines for AI-generated content in litigation, while the Abu Dhabi Judicial Department uses AI solutions to monitor criminal cases.

Biometric and facial recognition technologies are also widely used by UAE government bodies with their employees. More detail on facial recognition is provided in 11.3 Facial Recognition and Biometrics.

See 4.1 Judicial Decisions.

AI plays a significant role in national security matters in the UAE.

The UAE government employs AI in various aspects of national security to enhance capabilities in threat detection, intelligence analysis, border security, cybersecurity, and defence.

For obvious reasons, specific details of AI applications in national security are not publicly disclosed.

Generative AI can create original content based on learned patterns from accumulated data. Like other AI forms, it raises ethical issues related to data privacy, security, misinformation, plagiarism and copyright infringement, and generation of harmful content.

Determining intellectual property (IP) ownership is a challenge. The deep learning models used in generative AI, the lack of direct human intervention, and the reliance on existing data all blur IP ownership.

As AI advances, intellectual property laws can be expected to be supplemented and adjusted to address emerging issues. As an example, the Federal Decree Law No. 38 of 2021 on Copyrights and Neighboring Rights has recently been updated to include “smart applications, computer programs, databases, and similar” works in its definition of “Works” protected by the law. AI-generated works may qualify for copyright despite their non-human origin. Users of AI systems may be considered authors, and bear responsibility for copyright infringement.

In connection with AI, the following need consideration:

  • patents: novel AI algorithms or techniques may qualify for patent protection, although patents typically cover specific inventions or processes which may be a challenge for AI-developed assets;
  • copyright: original AI models, software code and creative works generated by AI systems can be protected by copyright law, and this includes both the source code and any output produced by the AI model;
  • trade secrets: keeping AI models and training data confidential is essential for safeguarding trade secrets; and
  • trademarks: branding associated with AI products or services can be protected through trademarks, securing brand identity.

For data protection see 3.6 Data, Information or Content Laws and 8.3 Data Protection and Generative AI.

See 8.1 Emerging Issues and Generative AI.

Please see 3.6 Data, Information or Content Laws.

Most law firms are actively exploring use cases and assessing AI’s costs and benefits. A recent Deloitte survey found that 62% of lawyers believe AI will significantly impact the legal profession within three years. While lawyers are already using AI for tasks such as marketing, contract review and e-discovery, there is considerable scope for wider adoption. McKinsey & Company predicts that 22% of legal tasks can be automated by AI.

Common law firm use cases include the following:

  • contract review: AI tools can identify key contract aspects, support contract negotiation and monitor and manage contracts over their duration;
  • process automation: AI tools can support document filing, extract relevant information on matter closure and provide reminders for contract expiry, document registration, etc.;
  • prediction tooling: AI tools can process historical data to deliver outcome predictions and support fee estimation;
  • generative models: generative models are being used to draft documents, check accuracy and enhance work quality;
  • research systems: AI-powered search engines can materially speed up and improve research; and
  • client onboarding: AI tools can streamline background checks for regulatory compliance.

Potential AI issues are as follows:

  • cost and obsolescence: AI development is expensive, and the sheer pace of AI development means that costly AI projects can quickly become obsolete; and
  • fallibility: generative AI can generate false information and can magnify errors.

Aspects of the current law will clearly apply to AI-enabled technologies. Liability may arise as a result of a contract breach, in which case the terms of the contract will set the liability and the general laws of contract will address the remedies that may be available. Liability may also arise in tort, as codified in Articles 282–298 of Federal Law No. 1 of 1987 Concerning Civil Transactions (the Civil Code). Liability may also be imposed by more general regulations, such as Federal Law No. 15 of 2020 on Consumer Protection (UAE Consumer Protection Law) and various laws addressing data protection.

In simple terms, tort is founded on the principle that harm or injury caused to a person by another requires compensation. While an AI-enabled device is capable of learning from experience and making independent decisions based on its machine learning, and those decisions may cause damage, it is not regarded as a legal person and has no independent legal status.

The common approach to the challenges raised in tortious liability for damage caused by AI focuses on the following.

  • Vicarious liability – usually addressing the responsibility of employers for the acts and omissions of their employees, and of principals for their agents. The key is the existence of a special legal bond. If AI-enabled technology causes damage in a way that is similar to damage caused by an employee or an agent, the logic follows that the operator of the AI-enabled technology should be found responsible on a vicarious liability basis.
  • Strict liability – usually associated with damage caused by something dangerous in the use of, or under the responsibility of, a person, where the complainant is relieved of the need to prove wrongdoing and a causal link between the wrongdoing and the loss. AI-enabled devices can be seen to have the potential to be dangerous (an autonomous vehicle is an obvious example), and strict liability for designated AI-enabled products may be attractive to legislators.
  • Fault-based liability – a fall-back position based on the imposition of a reasonable standard of care to avoid doing harm. In the AI context, the implied duty of care could attach to the choice of technology, as well as to the supervision and maintenance of the relevant devices.

If an AI-enabled device causes harm, loss or damage under the Civil Code, it can be seen that liability may lie with the person “having control” of the device. This takes us into the complex area of responsibility and control. With an AI-enabled device, responsibility and control may conceivably be found in a number of hands – the person operating the device, the business that supplied the AI application, the business that developed the AI, or the software coder who designed the underpinning algorithm.

Under the Civil Code, the position is that when several persons are responsible for a prejudicial act, each of them is responsible for their share in it. Accordingly, the Civil Code opens up the opportunity for an apportionment of damages between the potentially liable persons. However, what has not been tested is how the UAE courts would go about that apportionment.

The Consumer Protection Law may also be relevant, and details the consumer rights to fair compensation for damages suffered as a result of the purchase or use of defective goods.

With regard to insurance, there are elements of existing cover that may apply to loss incurred as a result of AI-enabled technology. It is entirely predictable that the insurance industry will craft AI-specific coverage.

This question is addressed in 10.1 Theories of Liability and elsewhere in the responses.

The design and development of the underpinning algorithm, and the data used to train and improve the AI application, introduce the possibility of bias. The generative nature of AI development then opens up the possibility of bias being exaggerated and hard-wired into the application. The potential unfairness and discrimination that can stem from bias need to be evaluated and addressed when developing AI tools and when adopting AI-supported functionality.

Machine interpretation of demographically relevant statistics is inextricably linked to the quality and nature of the input data. Flaws in the base data can be exaggerated exponentially.

The dangers of bias are clear. Federal Decree-Law No. 34 of 2023 on Combating Discrimination, Hatred and Extremism (Anti-Discrimination Law) seeks to combat discrimination, hatred and extremism, prohibiting discrimination based on religion, belief, rite, community, sect, race, colour, ethnicity or gender. The penalties for transgression include up to a year’s imprisonment and fines in the range of AED500,000 to AED1 million.

As we have seen, Article 38 (1) of the DIFC Data Protection Law provides that “the data subject shall have the right not to be subject to a decision based solely on automated Processing including Profiling which produces legal effects concerning him or her or significantly affects him or her”.

Also, as touched on above, MoIAT is engaged in setting standards for AI technologies that seek to safeguard transparency and fairness. The UAE’s Ministry of AI, the UAE Artificial Intelligence and Blockchain Council and Smart Dubai also publish guidance designed to promote fairness and ethical behaviours in the adoption of AI technology.

As is so commonly the case with developing technology, AI brings both risk and benefit when it comes to the use of personal data.

The risks include:

  • data breaches;
  • bias and discrimination;
  • lack of transparency;
  • lack of accountability;
  • lack of regulatory compliance;
  • data-security issues;
  • unintended/negative effects; and
  • inaccurate decision-making.

The benefits available from the use of AI include:

  • enhanced privacy; and
  • improved and smarter security, leading to increased trust and reputational gains due to more effective regulatory compliance.

See also 3.6 Data, Information or Content Laws and 8.3 Data Protection and Generative AI.

The UAE is a keen adopter of facial recognition technology. In 2021 the UAE Cabinet approved the use of facial ID in certain sectors to verify the identity of individuals and cut paperwork. Facial recognition is now in widespread use. For example, Dubai International Airport uses CCTV cameras and AI-enabled facial recognition technology to enhance security. The airport’s smart gates are also equipped with facial and iris recognition technologies.

The primary pieces of legislation governing the use of facial recognition technology in the UAE are the Federal Data Protection Law and, where relevant, the DIFC Data Protection Law (together, the “Data Law”). The Data Law does not prohibit the collection or use of biometric data, although it places significant obligations and restrictions on data controllers handling such data.

Companies using facial recognition and biometric technology:

  • are likely to be considered data controllers of their customers’ biometric data, and will need to comply with data protection principles and other obligations imposed on data controllers under the Data Law;
  • should obtain specific, informed, and freely-given explicit consent from their customers for the collection and processing of their biometric data, and should consider offering alternative means of accessing services that do not involve biometric data;
  • should provide clear and specific information to their customers about the purposes, scope, and risks of biometric data processing, as well as their rights under the Data Law;
  • should take appropriate measures to ensure the security and protection of the biometric data processed, and avoid capturing irrelevant or incidental data relating to non-customers and non-consenting individuals; they should also implement measures to ensure that the data is anonymised or deleted when no longer needed for the purposes for which it was collected, or when consent is withdrawn; and
  • should conduct a data protection impact assessment (DPIA) before biometric data is collected and processed; this is mandatory when processing sensitive data, such as biometric data, to evaluate the potential impact of the data processing on the privacy and rights of data subjects, and to identify and mitigate any risks.

Risk arises when the transparency and auditability of the automated decision-making (ADM) process are unclear – the so-called “black box” issue. The same issue impacts the ability of affected parties to challenge an ADM decision. There is also a recognised concern that AI models may have been trained on data repositories that introduce bias into the models.

The points raised in the context of the UAE data privacy regulations apply to ADM. See 3.6 Data, Information or Content Laws and 8.3 Data Protection and Generative AI.

Principle 3.3.2 of Smart Dubai’s Guidelines provides for the following.

  • Developers should build systems whose failures can be traced and diagnosed.
  • People should be told when significant decisions about them are being made by AI.
  • Within the limits of privacy and the preservation of intellectual property, those who deploy AI systems should be transparent about the data and algorithms they use.
  • Responsible disclosures should be provided in a timely manner and provide reasonable justifications for AI system outcomes. This includes information that helps people understand those outcomes, such as the key factors used in decision-making.

Smart Dubai’s Ethical AI Toolkit states that traceability should be considered for significant decisions, particularly those that have the potential to result in loss, harm or damage, and that people should be informed of the extent of their interaction with AI systems.

Note, also, the requirement for disclosure in Regulation 10 of the DIFC DPL. (see 3.6 Data, Information or Content Laws).

The last ten years or so in the EU have seen AI implicated in prosecutions for price-fixing activities, where AI has been used to monitor adherence to minimum re-sale price arrangements.

The relevant authorities are well aware of the potential anti-competitive use of AI-enabled tooling in the context of price setting, with tooling designed to monitor the pricing data of competitors and predict competitors’ reactions to market changes.

While the prosecutions referred to above concerned AI-enabled infringements, it is uncertain how the relevant authorities would address a situation where the activities in question had been undertaken autonomously by an AI-enabled tool taking automated decisions based on its machine learning.

The UAE recently enacted Federal Decree-Law No. 36 of 2023 on the Regulation of Competition (Competition Law), which introduces provisions, controls and penalties designed to restrict anti-competitive behaviours. It will be important for companies using AI in price-setting to ensure compliance in order to avoid antitrust violations.

Legal and Regulatory Compliance

AI-specific regulation is being crafted and introduced internationally. AI solutions must comply with evolving legal and regulatory frameworks and be able to adapt to new legislation and the inevitable adjustment and refinement of AI-relevant legislation.

AI contracts should:

  • require compliance with all relevant law, regulations and guidance in existence at the commencement of the contract and as that law, regulation and guidance is amplified and amended over the life of the contract; and
  • address liability for non-compliance with indemnification for any associated loss, damage, cost and fines.

Data Privacy and Security

The quantum of data used increases the risk of privacy breaches or data leaks, and the risk can be mitigated contractually by:

  • establishing clear data-handling protocols, including encryption, anonymisation, and access controls;
  • specifying data ownership and breach notification procedures; and
  • ensuring compliance with relevant regulatory data privacy requirements.

Intellectual Property (IP)

Contracts should:

  • clearly define IP rights, including ownership, licensing, and usage restrictions, and those rights need to address not only background IP but also foreground IP, including IP that may be generated by the AI solution; and
  • provide for how IP infringement claims will be addressed and include provisions for third-party IP infringement claim indemnification.

Reputational Risk

AI solutions can produce biased or harmful outcomes, leading to reputational damage for businesses; these risks can be mitigated in the contract by:

  • requiring the AI solution provider to adhere to ethical guidelines; and
  • requiring the implementation of robust monitoring and auditing mechanisms to detect and rectify biases or unintended consequences.

Performance and Accountability

The procurement of developing technology always carries the risk that the delivered functionality will not meet the aspirations of those who commissioned it. The contract should:

  • provide a clear acceptance-testing methodology and address the consequences of a failure to achieve acceptance;
  • include milestones with linked payments to ensure the development and delivery remain on track (consider liquidated damages);
  • set out performance metrics, service-levels and service credits; and
  • set out the basis for maintenance including bug fixing, updates and support; the contract should also consider the likely impact of a predictable period of rapid development of the AI solution.

Malicious Use

The risk of malicious use can be addressed in the contract by:

  • specifying prohibited uses; and
  • requiring adherence to the client’s security measures to prevent unauthorised access or misuse.

Lack of Transparency in the Functional Operation of the AI Solution, Fairness and Bias

The lack of transparency and bias challenges can be addressed in the contract by:

  • requiring transparency from AI suppliers over the adopted model architecture and decision-making processes;
  • including provisions for regular audits and periodic meetings for the supplier to explain the AI behaviour;
  • providing for fairness assessments and establishing bias mitigation strategies; and
  • monitoring for discriminatory outcomes.

Scalability and Performance Degradation

The contract should:

  • specify the predictable scalability requirements and performance expectations;
  • plan for system upgrades and capacity adjustments; and
  • detail the change-management mechanism that parties will adhere to in order to address the inevitable need for the agreement to flex and adjust.

AI tools are being used to screen CVs, identify the most qualified candidates based on skills, experience, preferences and availability for interview, and fill roles based on those findings. Predictive AI is being used to determine which employees are most likely to leave a company, as well as to generate employee performance metrics.

The use of AI in recruitment and employee dismissals could cause potential harm to employees and give rise to liability. Algorithms that learn from past hiring decisions can absorb the human biases of past recruitment processes. In addition, algorithms may not be trained to account for neurodivergent candidates, or for differences in race, cultural background or disability; candidates who present differently risk being marked down, reinforcing discriminatory practices.

Under Federal Decree-Law No. 33/2021 (UAE Labour Law), discrimination based on race, colour, sex, religion, national origin, social origin, or disability is expressly prohibited. Note, also, Federal Decree-Law No. 34/2023 on Combating Discrimination, Hatred and Extremism (Anti-Discrimination Law) addressed in 11.1 Algorithmic Bias.

The use of AI as a performance-review tool has become increasingly prevalent in workplaces. With the introduction of AI, employers have a more time-efficient and accurate way of measuring performance while minimising human error.

Through employee-evaluation software, algorithms can effectively monitor employee performance by tracking general progress, setting personalised goals for individual employees and providing transparent, personalised feedback to help employees understand their strengths and any areas of improvement needed.

There are potential risks in using AI for monitoring purposes. While the performance index itself might be easily quantifiable, AI software often lacks context and nuance when monitoring employee performance. For example, software that bases performance output on average toilet breaks might unfairly mark down disabled or pregnant workers, which could give rise to discrimination claims. Additionally, AI software carries a risk of bias if used with biased or misinterpreted data. Employees might also fear invasion of privacy and invasive use of their data by tracking and monitoring software, leading to a general distrust of performance assessments.

Employers must ensure full transparency and adherence to data privacy laws when implementing AI monitoring tools in order to avoid legal risks. Employers should also check data sets for bias and ensure that performance-monitoring tools do not negatively affect individuals based on gender, race or other protected characteristics, as that could be grounds for discrimination claims.

In the UAE, digital platform companies using AI include the following.

  • Food delivery platforms: developers are using advanced technology to improve food delivery using secure payment systems, chatbots and location tracking. AI analyses what users like, suggests personalised recommendations, helps predict delivery times, finds the best routes and offers popular dishes based on individual preferences.
  • Car service platforms: the Dubai Taxi Corporation, a division of the Roads and Transport Authority, has unveiled various digital solutions as part of its strategic plan for digital transformation, including an AI-enabled customer voice-recognition system to verify a customer’s identity, AI chatbots responding to customer inquiries, a virtual voice assistant at the call centre to respond to customer requests and inquiries, a taxi demand-prediction system to distribute vehicles according to the entered data, and a driver face-recognition system.
  • Online gifting platform: BloomingBox (an online gift platform specialising in same-day delivery of premium products) has adopted AI-driven solutions for route optimisation and track-and-trace solutions to enhance last-mile delivery operations, reduce costs, improve customer satisfaction and spur business growth.

AI is used in various functions, including but not limited to the following.

  • Fraud detection, where AI can improve payment security and reduce risk related to corruption or fraud.
  • Risk management, where AI allows rapid processing of vast amounts of data while cognitive computing assists in analysing both structured and unstructured data.
  • Customer service, with the adoption of AI-based chatbots to expedite services to clients by providing timely financial guidance, responding to questions on customer accounts, facilitating rapid money transfers between accounts and processing loan applications. AI-enabled voice monitoring is also used to improve fraud detection. AI-driven chatbots can also assist with marketing and sales opportunities by analysing customer data and recommending appropriate financial products. Emirates NBD has created the MENA region’s first intelligent voice-based and chatbot virtual assistant for banking, “EVA”.
  • Fintech, such as Digital Payments where AI is deployed to enhance the speed and lower the costs of digital payments.
  • Anti money laundering automation.
  • Credit scoring.

Applicable Regulations/Initiatives

The UAE Central Bank is launching a key digital finance initiative as part of its Financial Infrastructure Transformation Programme.

The Ministry of Finance, in collaboration with the Artificial Intelligence, Digital Economy, and Remote Work Applications Office and the Mohammed Bin Rashid Centre for Government Innovation (MBRCGI), is also launching a platform to develop financial legislation and policies using digital solutions. This “Rules as Code” platform aims to build a comprehensive digital infrastructure for the creation of AI-based laws and regulations.

Guidelines for Financial Institutions Adopting Enabling Technologies have been issued jointly by the Central Bank of the UAE, the Securities and Commodities Authority, the Dubai Financial Services Authority of the Dubai International Financial Centre, and the Financial Services Regulatory Authority of Abu Dhabi Global Market.

Risks to Financial Services Companies from the Use of AI in UAE

The principal risks are those generally identified as relevant to AI. However, from a financial services perspective, the following are relevant:

Risks of Biases in Repurposed Data

In the context of AI being used in customer services and more complex decision-making processes such as credit scoring and risk assessment, the potential for biased AI outcomes becomes a significant concern.

Credit Scoring and Loan Approvals

Financial institutions can use AI models to automate credit scoring and loan approvals, but if the AI models are trained on biased data sets or incorporate historical biases, it can lead to discriminatory practices.

Automated Trading and Investment

AI-driven investment tools might develop biases based on historical data that favours investments in certain sectors or regions, potentially leading to unequal opportunities in wealth accumulation for individuals based on their profile characteristics.

Fraud Detection

AI systems used for detecting fraudulent activities can inadvertently target certain demographics if the training data wrongly associates fraud with particular patterns related to ethnicity, location, or transaction types typically used by specific groups.

Personalised Marketing

AI that tailors financial product advertisements might perpetuate economic disparities by primarily promoting more favourable financial products, such as low-interest loans, to wealthier individuals or specific demographic groups. Meanwhile, other individuals may be exposed to advertisements for higher-risk options.

Customer Service Chatbots

Chatbots and similar AI-powered applications can also display biases during interactions. For instance, they may offer varying responses or levels of assistance depending on the user’s demographic information.

Federal Law No. 2 of 2019 on the Use of Information and Communication Technology in Health Fields (Health Data Law) is the regulation most applicable to the use of AI in healthcare (although it does not expressly mention AI). Its provisions relating to informed consent, privacy protection, anonymisation and accountability have implications for the development, deployment and use of AI technologies in healthcare.

The Ministry of Health and Prevention (MOHAP) is primarily responsible for the supervision of the use of AI in healthcare. The UAE has implemented regulations and guidelines to ensure the ethical use of AI in the UAE healthcare sector. The Department of Health Abu Dhabi (DOH) has issued its Policy on Use of Artificial Intelligence in the Healthcare Sector of Abu Dhabi, the aim of which is to enhance the reach and performance of healthcare-related services while minimising potential risks to patient safety and ensuring safe and secure AI use in healthcare management.

To address the chance of misdiagnosis and enhance treatment planning, MOHAP requires that AI devices and technology undergo continuous cycles of improvement and updates. MOHAP also requires that the technology used be auditable, ensuring that any hidden biases in the training data can be identified and addressed.

UAE healthcare has also seen the adoption of holographic technology for surgery planning, creating three-dimensional landscapes of a patient’s organs to provide doctors with a deeper understanding of a patient’s anatomy and enable greater precision in surgery.

Another use of AI algorithms in healthcare is the prediction of diseases by using data-mining software to identify patients who are at risk of developing specific conditions. By analysing personal data, algorithms can effectively reduce the rate of hospitalisation and fast-track treatment.

In relation to machine-learning, the PDPL, DIFC DPL, ADGM DPR 2021 and the Health Data Law are the most relevant data regulations.

The UAE aims to enhance its efficiency, safety and sustainability through integrating AI in its transportation infrastructure, especially in autonomous vehicles. Dubai has set an ambitious strategy for the transformation of 25% of the city’s transportation to autonomous driving by 2030.

The Regulations of Autonomous Vehicles

In April 2023, the Dubai Government issued Law No. (9) of 2023 Regulating the Operation of Autonomous Vehicles in the Emirate of Dubai (AV Regulations). The AV Regulations govern the licensing and regulatory framework, as well as operational standards for autonomous vehicles. The Roads and Transport Authority (RTA) governs the use of autonomous vehicles and serves as a licensing, regulatory and investigative body.

Frameworks for AI Algorithms, Data Privacy, Cybersecurity, and Vehicle Performance

The National Strategy referred to above touched on AI governance in autonomous vehicles by setting standards for the development of AI algorithms which control these vehicles, ensuring they are safe and effective under various traffic conditions.

Regulations concerning vehicle performance typically focus on ensuring that autonomous vehicles meet specific safety and operational standards before they are allowed on public roads. This includes rigorous testing and certification processes, similar to those used in aviation and other safety-critical industries.

Ethical Considerations for ADM

The UAE is continuing to consider the development of ethical guidelines relating to the operation of autonomous vehicles and, in particular, how such vehicles should prioritise decision-making in unavoidable accidents.

International Harmonisation

The UAE is actively engaging with international bodies and participating in international discussions aimed at aligning its regulations and standards for autonomous vehicles and AI technologies while advancing its technological infrastructure.

In the UAE, there are no specific regulations that govern the use of AI in manufacturing products. The current regulation addressing product safety and liability will continue to apply to products manufactured in an AI-enabled fashion.

While there is currently an absence of AI-specific legislation, the National Strategy together with the guidance outlined in 5.1 Regulatory Agencies will be relevant.

There is no specific legislation that directly governs the use of AI in professional services. The current regulation addressing professional services will continue to apply to professional services that are AI supported.

The considerations touched on in 9.1 AI in the Legal Profession and Ethical Considerations apply.

In many jurisdictions, whether AI can be considered an inventor or author for patent purposes has been a subject of debate. Traditional patent law usually requires a human inventor.

In the UAE, the position regarding AI inventors remains unclear, as the UAE Patent Office has not yet issued specific guidelines or decisions on this matter. However, UAE Federal Law No. 38 of 2021 Concerning Copyright Rights and Neighbouring Rights (the New Copyright Law) generally requires a work to be original and the product of human creativity in order to qualify for protection. While AI systems can generate creative outputs, they are ultimately reliant on pre-existing data and algorithms. Consequently, the question arises as to whether AI-generated works can truly be considered original, or whether they merely replicate existing material.

While there are no publicly available judicial or agency decisions by UAE authorities which explicitly address the status of AI as an inventor or co-inventor, UAE IP laws typically follow international standards, which generally do not recognise AI systems as inventors.

The UAE, like many other jurisdictions, allows for the protection of trade secrets and confidential information. AI technologies often involve complex algorithms, datasets, and proprietary methodologies that can be considered trade secrets. By maintaining strict confidentiality and implementing appropriate security measures, companies can safeguard their AI technologies from unauthorised access, use, or disclosure.

In the UAE, contractual arrangements such as non-disclosure agreements (NDAs) play an important role in protecting AI technologies and trade secrets by ensuring that employees, contractors, and third parties maintain confidentiality. Companies should consider incorporating specific clauses into their contracts to address the ownership, non-disclosure and non-compete aspects related to AI. Additionally, contracts should outline the permitted use, access, and transfer of AI technologies and data, ensuring compliance with local laws and regulations.

In case of trade secret misappropriation or IP rights infringement, companies can seek legal remedies to enforce their rights and request compensation for damages. In the UAE, the courts provide a platform to protect trade secrets and intellectual property through civil litigation and injunctions.

While there are currently no explicit provisions for the protection of AI-generated works in the UAE, there is potential for legal reforms reflecting the nation’s commitment to fostering innovation.

Federal Law No. 38 of 2021 Concerning Copyright Rights and Neighbouring Rights (Copyright Law) protects original literary and artistic works, including computer programmes. However, existing copyright laws do not explicitly address AI-generated content. This ambiguity poses challenges when it comes to determining ownership and protecting AI-generated works, as traditional copyright laws typically attribute authorship to human creators. The Copyright Law also stipulates that the author is the person who brings the work into existence. Therefore, for AI-generated content, it may be necessary to establish guidelines or frameworks that attribute authorship to both the AI and the human input that trained and refined the AI algorithm.

Using OpenAI, or any AI tool or platform, to create works raises several IP issues. The terms of use for these platforms often dictate the ownership of the output.

In the case of OpenAI and similar platforms, the user might retain IP rights to the unique output they generate if the platform’s terms of service allow it. However, where the AI contributes a substantial creative element, the question of authorship can be contentious. Users should be particularly cautious of the IP clauses within the terms of service agreements before using AI-generated output for commercial purposes.

When advising corporate boards of directors in the UAE on the adoption of AI, there are several key issues that can impact the success and risk management of AI initiatives.

Regulatory Compliance

Boards should have a basic knowledge of how AI works and its potential impact on their business. They should be advised on current and upcoming regulations specifically targeting AI technologies in the UAE. In addition, they need to ensure strict adherence to the UAE’s data protection laws.

Technological Challenges

Boards should acknowledge the potential complexities associated with the integration of AI and should implement stringent standards for evaluating system effectiveness. They should ensure that proper cybersecurity measures are in place.

Strategic and Operational Risks

AI initiatives should be aligned with the organisation’s strategy and ethical values.

The UAE has made significant strides in formulating AI strategies, frameworks and policies, such as the UAE Strategy for Artificial Intelligence, the AI Ethics Guidelines and the AI Ethics Toolkit.

The principles and guidelines support industry, academia and individuals in understanding how AI systems can be used and provide a self-assessment tool for developers to assess their platforms. In addition, the UAE National Program for Artificial Intelligence has issued a guide on Best Practices for Data Management in Artificial Intelligence Applications.

The key issues for organisations to consider when implementing specific AI best practices are as follows.

  • Ethics: it is important for organisations to ensure that AI models adhere to ethical principles, such as transparency, fairness, and accountability.
  • Regulatory compliance: organisations need to track adjustments to existing rules to address AI and keep a look-out for the inevitable enactment of AI-specific regulations.
  • Protection of intellectual property (IP): organisations should consider the intellectual property aspects of AI use, including associated ownership and protection issues.
  • Data privacy and security: since AI models rely heavily on data, organisations must ensure that AI implementation aligns with the applicable data privacy regulation and relevant industry best practice.
Bird & Bird (MEA) LLP

Burj Daman, Level 14
Dubai International Financial Centre
Dubai
UAE

+971 4 309 3222

Simon.shooter@twobirds.com
www.twobirds.com
Author Business Card
