Artificial Intelligence 2023

Last Updated May 30, 2023

Malaysia

Law and Practice

Authors



Shearn Delamore & Co was established in 1905, and is one of the leading and largest law firms in Malaysia. With over 100 lawyers and 230 staff, the firm has the resources to run and manage the most complex projects, transactions and matters. The firm maintains extensive global network links with foreign law firms and multilateral agencies, and is a founding member of the blue-chip legal network Drew Network Asia (DNA). The firm’s technology, media and telecoms team, comprised of lawyers from various disciplines including IP, financial services, corporate and M&A, tax and competition, assists clients with the legal issues emerging from the convergence of technology, media and communications. It offers comprehensive and practical legal solutions to clients operating in an increasingly digitised world. The approach and solutions are tailored to meet clients’ specific strategic and commercial objectives. The firm would like to thank Puan Dayang Ellyn Narisa Binti Abang Ahmad, Jamie Goh Moon Hoong, Boo Cheng Xuan, Khoo Yuan Ping, and Yee Yong Xuan for their contribution to this chapter.

Artificial Intelligence, better known as AI, refers to the ability of a machine or computer program to behave intelligently in the same manner as a human being. In recent times, AI has gained much traction in the public domain with the advent of programs such as ChatGPT by OpenAI. Some of the background laws that apply to AI (though their applicability to so complex a technology may be open to debate) include the following:

  • Intellectual property laws: AI typically involves computer programs in the creation of products or processes, which gives rise to issues concerning the creation and ownership of such works under IP laws, including patent and copyright law.
  • Product liability laws: Product liability rules continue to apply even to products that are largely manufactured by AI. Such products must still comply with the applicable standards expected under, for instance, the Sale of Goods Act 1957 and the Consumer Protection Act 1999, to ensure that they are of proper quality and meet certain standards before being released to the market.
  • Data privacy laws: AI technologies can potentially process a vast amount of personal data. In this light, robust data protection laws are necessary to regulate the processing of such data to prevent its misuse and security breaches that could compromise privacy and the sanctity of the data.
  • Employment laws: The rise and use of AI in the workforce could pose a threat to many jobs, highlighting the potential dispensability of human labour in certain sectors. This would potentially require some form of regulation to ensure that the redundancy of workers does not become so widespread as to have long-term economic effects.
  • Contract laws: The use of AI in contractual relationships can give rise to issues surrounding liability and accountability especially if AI is mainly or solely involved in the performance of certain contractual transactions.

Healthcare

The application of AI in healthcare can be seen in a wide range of areas. In 2018, the world’s first AI-enabled stethoscope system, Stethee Pro, was launched in Malaysia. The invention was the result of a strategic collaboration between the Malaysian Ministry of Health, the Malaysian Investment Development Authority (MIDA), Collaborative Research in Engineering, Science and Technology, and M3DICINE, and facilitated by the Telemedicine Development Group.

The technology platform behind the Stethee AI engine, Artificial Intelligence and Data Analytics (Aida), automatically tags geolocation and aggregates environmental data such as humidity, temperature, pollen count and the pollutant index for analysis by healthcare professionals. The Stethee Pro is able to record and filter sounds, sending them to an app on a mobile device or computer via Bluetooth technology. The app can then form a biometric identity for each patient and use AI technology to diagnose heart or lung diseases.

Oil and Gas

PETRONAS uses AI to manage platform data, transitioning from condition-based monitoring and conventional analytics towards predictive maintenance driven by predictive analytics. PETRONAS is also establishing an Artificial Intelligence Centre of Excellence to accelerate advancements in AI solutions that support energy delivery, operational efficiency and sustainability, through collaboration with a network of global partners.

Manufacturing

Seeloz Inc., an AI company based in Silicon Valley, California, introduced the Supply Chain Automation Suite, billed as the world’s first autonomous requirements planning engine, which redefines supply chain planning using AI.

Logistics

AI is also applied in logistics in the areas of monitoring shipment movements, and customer service using AI-powered chatbots and virtual assistants. For example, in 2023, Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group, announced that its AI-driven logistics solution EasyDispatch is now available in Malaysia. EasyDispatch is designed to improve supply chain management while reducing logistics costs, with its real-time service dispatch solution incorporating AI-powered, centralised features and Vehicle Route Planning.

With the algorithm trained to achieve optimal results within predefined business restrictions, the smart logistics solution uses the latest reinforcement learning AI and machine learning technology to improve field service dispatch capabilities and efficiency by providing high-accuracy address processing capability and real-time dispatch services.

Financial Services

The banking and financial industry in Malaysia has applied AI in areas such as risk assessment and the creation of personalised financial advice. In 2020, United Overseas Bank (Malaysia) Bhd launched Mighty Insights, which is believed to be the country’s first AI-based digital banking service. Mighty Insights uses advanced data analytics, machine learning and pattern recognition algorithms to determine the best guidance it can provide its customers based on their age, financial needs and lifestyle priorities.

In 2020, RHB Banking Group released the RHB Financing (SME) Mobile App, billed as the first AI-powered “customer self-initiated” small and medium-sized enterprise (SME) financing mobile app in Malaysia. The app is powered by AI, machine learning and big data capabilities, featuring facial recognition and real-time application processing capabilities.

Telecommunications

Telekom Malaysia Berhad has developed a system called PATROL as a proof of function for research, particularly on the automatic detection of construction activity using vehicle dashcams and the generation of proactive alerts using AI, machine learning algorithms and video data. Project PATROL was developed to provide early and accurate warning alerts to help telecom providers identify threats near fibre infrastructure and proactively prevent fibre optic or copper damage.

Agriculture

eLadang/Digital AGTech is an initiative driven by the Malaysian Digital Economy Corporation (MDEC), in collaboration with ecosystem partners, to infuse Industrial Revolution 4.0 (4IR) technologies, such as the internet of things (IoT), big data analytics (BDA) and AI, into the agriculture sector. The initiative aims to increase productivity, yield and quality, increase income and revenue, reduce operational costs and manpower, optimise plantations, and increase interest and participation so as to sustain and scale digital adoption across the sector, fostering a highly skilled, digitally empowered and data-driven workforce to ultimately boost the digital economy of the nation.

Cross-Industry Co-operative Initiatives

In 2020, the conglomerate Sunway Berhad and leading telecommunications companies Celcom Axiata Berhad and Huawei Malaysia inked a Memorandum of Understanding to explore Malaysia’s first tripartite collaboration towards advancing smart township solutions encompassing AI and IoT, with 5G connectivity. Planned initiatives include enhancing the township’s safety and security features with facial recognition, tele-consultation healthcare services and e-learning features.

To date, Malaysia does not have specific legislation enacted that regulates or deals specifically with AI. However, other existing legislation may be broad enough to govern AI. As AI is premised very much on computer programming, provisions concerning computer programming/computer programs would be applicable to the workings of AI. 

Computer Crimes Act 1997

The Computer Crimes Act 1997 (CCA) came into force on 1 June 2000, and was designed to govern the misuse of computers. The CCA provides for various offences in relation to, among others, unauthorised access to computer material with or without an intention to commit a further offence, unauthorised modification to contents of a computer as well as wrongful/unauthorised communication of means of access to a computer (eg, codes, passwords).

The CCA is worded broadly so as to not exclude the possibility of the involvement of AI in the commission of offences, whether it be through accessing a program or data located on a computer or on the cloud. This is particularly of relevance when considering data and security breaches where data of customers, for instance, would likely be stored on the cloud or within a computer network.

Medical Device Act 2012

In relation to the medical field, medical devices are required to be classified, grouped and registered. They also need to be licensed and approved by the Medical Device Authority before they can be imported, exported, or placed on the market.

The Medical Device Act 2012 (MDA) seemingly does not distinguish between medical devices that employ AI and those that do not (eg, Stethee, an AI-enabled stethoscope, or machines used in robotic surgery). By extension, manufacturers of these medical devices, including the ones that make use of AI, would similarly be subjected to the MDA. The medical devices must therefore be manufactured in accordance with good manufacturing practices and/or any written directives issued by the Medical Device Authority, as well as be labelled, packaged and marked in the manner prescribed.

Medicines (Advertisement and Sale) Act 1956

In conjunction with the above, the Medicines (Advertisement and Sale) Act 1956 (MASA) also prohibits the publication of any advertisement referring to any article in terms that are calculated to lead to the use of that article as a medicine, an appliance or a remedy for the purpose of:

  • prevention or treatment of certain human diseases and conditions specified in the MASA;
  • practising contraception;
  • improving the condition or functioning of the human kidney or heart, or improving human sexual functions or performance; and
  • diagnosis of certain diseases specified in the MASA.

In effect, medical products that employ AI are prohibited from being advertised, inter alia, as being capable of the diagnosis, prevention, or treatment of the diseases and conditions specified in the MASA or in a manner that would lead to such use of medical products. Although manufacturers might be inclined to highlight their new AI technologies as capable of diagnosing a wide range of diseases or conditions, somewhat implying that a medical consultation may not be required, they should be wary of doing so.

Capital Markets and Services Act 2007 (CMSA)

The CMSA was enacted with the aim of regulating and providing for matters relating to the activities, markets and intermediaries in the capital markets. Amid the increasing use of robo-advisers for investment purposes, where portfolios are created and managed, and trades made, by AI with minimal input from its user, the Securities Commission Malaysia (SC) implemented the Digital Investment Management (DIM) framework.

As part of the DIM framework, the SC included certain requirements in its Guidelines on Compliance Function for Fund Management Companies as well as the Licensing Handbook, which was issued pursuant to section 377 of the CMSA.

Personal Data Protection Act 2010

The Personal Data Protection Act 2010 (PDPA) was enacted with the aim of regulating the processing of personal data in commercial transactions. In order for data to fall within the purview of the PDPA, such data must be information in respect of a commercial transaction which (i) is being processed wholly or partly by means of equipment operating automatically in response to instructions given for that purpose; (ii) is recorded with the intention that it should wholly or partly be processed by means of such equipment; or (iii) is recorded as part of a relevant filing system or with the intention that it should form part of a relevant filing system.

Due to the parameters set by (i) and (ii), information about a data subject in relation to commercial transactions being processed by AI (either wholly or partly automatically in response to instructions given) would be subject to the PDPA.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

No response has been provided in this jurisdiction.

No response has been provided in this jurisdiction.

Ministry of Science, Technology and Innovation

The Ministry of Science, Technology and Innovation (MOSTI) has been entrusted with establishing AI governance, advancing research and development in relation to AI and scaling up digital infrastructure to enable AI, among other things. MOSTI launched the National Artificial Intelligence Roadmap with the aim of building a thriving and sustainable AI innovation ecosystem in Malaysia by 2025, utilising the quadruple helix partnership of government, academia, industry and society.

To help achieve the above objectives, MOSTI aims to create a Policy and Regulation Committee for the purposes of AI. The Policy and Regulation Committee will review existing laws, policies, regulations and guidelines as well as develop standards for the proper development of AI in Malaysia, including Risk Management Systems by 2024. The Policy and Regulation Committee will form part of the AI Coordination and Implementation Unit, which will be the central hub for all things related to AI.

Securities Commission

The SC is a statutory body established under the Securities Commission Act 1993 with the purpose of regulating and developing the Malaysian capital market. The Securities Commission’s scope extends to:

  • acting as the registering authority for prospectuses of corporations other than unlisted recreational clubs;
  • acting as the approving authority for corporate bond issues;
  • regulating all matters relating to securities and futures contracts;
  • regulating the takeover and mergers of companies;
  • regulating all matters relating to unit trust schemes;
  • licensing and supervising all licensed persons;
  • supervising exchanges, clearing houses and central depositories;
  • encouraging self-regulation; and
  • ensuring proper conduct of market institutions and licensed persons.

The SC has in recent years begun addressing the impact of AI in the realm of capital markets, particularly robo-investing or innovative technology employed in the context of investment by DIM companies, including algorithms.

MOSTI

MOSTI defines AI as “a suite of technologies that enable machines to demonstrate intelligence, the ability to adapt to new circumstances, and used to amplify human ingenuity and intellectual capabilities through collective intelligence across a broad range of challenges”.

Examples of such “intelligence” would include perception, reasoning, learning, problem solving, language understanding, comprehension, consciousness, alertness, realisation, awareness, intuition, acumen and others in the subfields of vision, speech and robotics including software robots, machine learning and natural language processing.

Securities Commission

By virtue of paragraph 2.05(2) of the Licensing Handbook issued by the Securities Commission, a “digital investment management company” means a company carrying on the business of fund management incorporating innovative technologies into its automated discretionary portfolio management services.

MOSTI

Given the vast quantities of valuable data present in the digital sphere, big data has understandably emerged as a key area where research and development in relation to AI is most pronounced. With privacy and security issues at play, the significance of regulating AI in the realm of big data analytics cannot be overlooked.

As part of a survey in relation to AI governance by MOSTI involving entities including companies, government and academia, it was found that despite having a data security policy in place, only half of the entities surveyed felt that the policy was well established and implemented. The potential damage caused by data breaches could be severe, both from privacy and financial standpoints. To counteract this, MOSTI is dedicated to studying, reviewing and updating AI-related policies and regulations, with the goal of accelerating AI development while minimising data breaches to protect organisational data.

Securities Commission

The SC uses its Guidelines on Compliance Function for Fund Management Companies and the Licensing Handbook to ensure that DIM companies have the necessary technological capabilities and support to responsibly conduct their businesses. The SC imposes accountability on the board of directors, requires companies to have an effective compliance programme and risk management framework, and mandates transparent disclosure to customers. Regulating tech-driven investment services, such as robo-investing, is crucial to prevent significant financial and economic ramifications.

No response has been provided in this jurisdiction.

No response has been provided in this jurisdiction.

Capital Markets

The SC is the foremost standard-setting body in relation to AI employed in the realm of capital markets. Since AI is mainly used in relation to DIM, the Guidelines on Compliance Function for Fund Management Companies as well as the Licensing Handbook issued by the SC regulate the activities of DIM companies. Among the standards that have been prescribed by the SC are that DIM companies must:

  • have sufficient understanding of the rationale, risks and rules behind the algorithm underpinning the DIM business;
  • at all times, ensure the outcomes produced by the aforesaid algorithm:
    1. are consistent with the DIM company’s investment strategies;
    2. are commensurate with the risk profile of the investor; and
    3. comply with securities laws and relevant guidelines;
  • have the system to support the DIM company, which includes maintaining a secure environment pursuant to the Guidelines on Management of Cyber Risk and other relevant guidelines; and
  • conduct at least an annual review on the effectiveness of the governance and supervision of the technology and algorithm underpinning the DIM company.

As part of the means of achieving the above, the DIM company must have a compliance officer whose role is to establish a compliance programme that takes into consideration the unique and specific aspects of the DIM company’s business model. A DIM company must also have a risk management framework, which must cover, among others, risks specific to the DIM company.

In addition to the above, the SC also prescribes standards in relation to the transparency of the AI being employed, by requiring a DIM company to disclose and display prominently on its platform any relevant information relating to the company as well as certain details about its algorithm.

The SC takes all the above factors into account when deciding on whether to grant operating licences to DIM companies.

When it comes to the use of AI, companies should assess the likely conflicts with the laws of the jurisdictions in which they are doing business, such as in relation to intellectual property, data protection, product liability (safety standards, etc), competition law and ethical and human considerations.

The federal government and the state governments have embarked on numerous AI initiatives. With the wider push for AI, many federal and state agencies are accelerating their adoption of AI.

On the federal front, some efforts to encourage the development and adoption of AI have been initiated.

They include the use of chatbots, which were introduced in the web portals of the Employees’ Provident Fund and MOSTI.

The Kuala Lumpur City Hall is collaborating with the cloud computing arm of e-commerce giant Alibaba and MDEC on the “City Brain” smart traffic management system, which draws on big and heterogeneous data generated through video and image recognition, data mining and machine learning technology.

Many states have also included AI adoption in their strategic plans. Examples include:

  • Johor 4.0;
  • Pelan Strategik Melakaku Maju Jaya 2035;
  • Penang 2030;
  • SUK Perak 2021-2025;
  • Sarawak Digital Economy Strategy 2018-2022; and
  • Smart Selangor 2025, an initiative by the state to empower its people, businesses and the public sector by optimising digital technologies.

Judicial Use

Malaysia has been piloting AI sentencing tools in two states, Sabah and Sarawak, since January 2020, and in the Kuala Lumpur and Shah Alam sessions and magistrates’ courts between July 2021 and April 2022. The Sabah and Sarawak courts have been aiming to move towards machine learning-based AI. The impetus behind this push to deploy AI in the judicial system is to achieve greater consistency in sentencing, improve transparency in dispensing justice and preserve public confidence.

The AI tool is currently being used for the drug possession offence under Section 12(2) of the Dangerous Drugs Act 1952 (DDA), punishable under Section 12(3) of the DDA, and will soon be expanded to an offence under Section 380 of the Penal Code (theft in a dwelling house). The AI requires critical information referred to as “parameters” for analysis and to make recommendations on sentencing, such as the relevant statutory provision, age, employment and socio-economic data. According to the courts, the reason behind choosing Section 12(2) of the DDA for the pilot is that the dataset for that offence is the richest one available.

Subsequently, in October 2022, with the upgrading of the current Integrated Court System to e-Kehakiman Sabah and Sarawak (eKSS), the courts in Sabah and Sarawak launched an expansion of AI in court sentencing called Artificial Intelligence in Court Sentencing (AiCOS). AiCOS will be expanded to cover the offence under Section 380 of the Penal Code.

However, some of the concerns expressed in utilising AI to make such decisions included the amplification of bias against minorities and marginalised groups as well as the inability to consider the mitigating factors and circumstances involved, which may eventually lead to a denial of justice. Malaysia’s Bar Council has also questioned the validity and transparency of the algorithm, given that the training dataset used was limited to only a five-year period. In response, the Sabah and Sarawak courts acknowledged that the ultimate sentence imposed by a Magistrate results from an exercise of discretion after taking into account all relevant surrounding circumstances, with the recommendation from the AI system being but one of them.

Even though the AI system has started to come into play in the Malaysian judicial system, the operation of AI in the courts is still at the experimental stage and its findings are not conclusive when it comes to decision-making.

Legal Aid

A Digital and Artificial Intelligence Legal Aid Centre was launched in Sabah to provide legal advice to the public. The centre will provide legal advice and computer services, giving users access to various legal resources such as statutes, case law, textbooks, agreements, forms and precedents.

Two drug possession cases decided in Sabah’s Magistrates’ Court were the first in Malaysia to apply AI in court. Two men, Christopher Divingson Mainol and Denis P. Modili, pleaded guilty to possessing drugs (0.16g and 0.01g of methamphetamine respectively). The men were charged under Section 12(2) of the DDA, which provides for a fine of up to MYR100,000 or up to five years’ imprisonment, or both, on conviction. Based on the AI analysis, the recommended sentence was nine months for Christopher and ten months for Denis.

The court used AI solely as a tool to provide sentencing guidelines to assist the court, and the presiding judge retained full authority over the final sentence. In Denis’ case, although the AI recommended a sentence of ten months’ imprisonment, the court imposed a sentence of 12 months’ imprisonment.

In a similar vein, in Sarawak, Mohammad Shahrul Baizury Mostafa was charged under Section 12(2) of the DDA for possessing 0.06g of methamphetamine. After taking into account various factors such as the relevant law and the accused’s age, occupation and socio-economic level, the AI system suggested a fine of MYR3,100 or six months’ imprisonment. The court followed the AI’s recommendation, after taking into account the relevant arguments.

In Peninsular Malaysia, on 22 July 2021, the Office of the Chief Registrar of the Federal Court of Malaysia issued a press statement releasing the sentencing guidelines for AI to be implemented in the sessions courts and magistrates’ courts. The implementation was carried out in three phases: the first phase covered 20 different offences under the Penal Code, the Road Transport Act 1987 and the DDA; the second phase covered another 30 offences under various other legal provisions; and the third phase covered offences recorded in the e-Judiciary System.

Cybersecurity

The Malaysian government, via the National Cyber Security Agency (NACSA), has launched the Malaysia Cyber Security Strategy 2020-2024 to curb cyber threats and define the future course of cybersecurity in Malaysia. NACSA is a dedicated agency that oversees all national cybersecurity functions, formed under the aegis of the National Security Council. The Malaysia Cyber Security Strategy 2020-2024 is a comprehensive plan to realise the government’s vision of a cyberspace that is secure, trusted and resilient, while at the same time fostering economic prosperity and the well-being of its citizens. The strategy outlines several pillars, from effective governance and management to educating the next generation of cybersecurity defenders.

Further, per the Malaysian Public Sector Management of Information and Communications Technology Security Handbook, security controls should be included in AI-based application systems. The controls, among others, include setting a maximum limit on the automatic decision-making ability of AI systems or AI sub-systems of conventional applications, monitoring the stability of neural network-based applications for effectiveness, and not using a completely automated mode in an AI system employed for highly sensitive decision-making.

Border Security

In 2021, the Ministry of Home Affairs launched the National Integrated Immigration System project to replace the Malaysian Immigration System (MyIMMs). The existing immigration system, MyIMMs, needed to be replaced with a more sophisticated, integrated and holistic immigration system equipped with the latest technologies such as AI, IoT and BDA. Further, the new immigration system will be equipped with a Risk Assessment Engine, which applies AI and BDA technology in an integrated manner with data from other security agencies.

ChatGPT

The Malaysia Computer Emergency Response Team, which works closely with law enforcement agencies, has issued an advisory document titled “ChatGPT and Security Best Practices”, addressing topics such as security concerns, privacy concerns and the misuse of ChatGPT. In the field of education, the launch of ChatGPT raised concerns of plagiarism and cheating among academic circles in Malaysia. The Malaysian Academic Movement (Gerak) stated that ChatGPT would be “another weapon in the overall cheating game”.

In this regard, the Ministry of Higher Education has proposed a White Paper titled “A New Horizon for Science, Technology and Innovation – A Strategy for Malaysia”, aimed at managing technological disruptions to teaching and learning and to the governance of higher education institutions. Further, at the time of writing, there is no specific regulation on generative AI in Malaysia.

No response has been provided in this jurisdiction.

While the use of AI in court sentencing has yielded benefits, it has also encountered criticism from the legal community. Some critics fear that the involvement of AI could reduce sentencing to a mere technical exercise, stripping away the human element necessary for considering the multifaceted factors in each case, such as aggravating and mitigating factors. Overreliance on AI in sentencing would be at odds with the courts’ customary reliance on the persuasive arguments put forward by legal counsel, which have always been of paramount significance.

Even though AI-based sentencing is intended to serve only as a guide for judges in the lower courts, it could be argued that once a judge is provided with AI-generated recommendations prior to hearing arguments from legal counsel, they may be less likely to deviate significantly from the recommended sentence. This issue becomes more acute given the apparent disregard of, or insufficient consideration given to, the personal circumstances of the accused. Consequently, judges are faced with the dilemma of choosing between consistency and mercy. After all, one of the guiding principles of sentencing is rehabilitation, rather than punishment.

To date, there is no specific legislation addressing liability for personal injury or commercial harm resulting from AI-enabled technologies. Nevertheless, according to the Malaysia National AI Roadmap 2021-2025 (AIRmap), the government has plans to establish an AI Coordination and Implementation Unit responsible for prioritising foundational aspects of the AI-driven digital governance structure and measures, including policy, regulation, standards and guidelines.

From a consumer protection perspective, AI-related products and services may be regarded as consumer products. The principal laws for consumer protection in Malaysia are the Sale of Goods Act 1957 and the Consumer Protection Act 1999. Based on the foregoing, AI-related products and services must then comply with statutorily required guarantees and conditions in relation to title, quality, fitness and price. 

In the context of contractual liabilities, per the general principles enshrined in the Contracts Act 1950, a contract is formed when the essential elements of a contract, such as offer, acceptance, consideration and the intention to create legal relations, are met. Based on the foregoing, AI-based contracts may potentially be enforceable under the Contracts Act 1950 if the elements to form a valid contract are satisfied, provided there are no vitiating factors to render the contract void or voidable.

From a tort law perspective, in the event that AI technologies harm the interests of individuals, the proprietor of the AI technologies ought to be liable. Further, the developers, manufacturers or even users of the technologies may be held liable as well.

From a data protection perspective, the PDPA is of relevance. AI usage generally requires collection and processing of personal data. The PDPA sets out seven Personal Data Protection Principles with which a data user must comply; therefore, personal data processed by a data user using AI will nevertheless have to be processed in accordance with such principles.

Take the general principle (which generally requires consent as a condition for processing) as an example. This likely means that the data user must ensure that the AI used will not process personal data beyond the scope of the data subject’s consent. Other principles relating to security and integrity of personal data are also of direct relevance where AI is used to process personal data.
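By way of illustration only (a minimal sketch, not drawn from any PDPA guidance, with hypothetical field and function names), a data pipeline feeding an AI system might gate each record on the purposes for which the data subject gave consent:

```python
# Hypothetical sketch: gate records on the data subject's consented purposes
# before they reach an AI pipeline. Field and function names are illustrative.

def filter_by_consent(records, purpose):
    """Return only the records whose consented purposes cover `purpose`."""
    return [r for r in records if purpose in r.get("consented_purposes", set())]

records = [
    {"name": "A", "consented_purposes": {"credit_scoring", "marketing"}},
    {"name": "B", "consented_purposes": {"marketing"}},
]

# Only record "A" consented to credit scoring, so only it is released.
allowed = filter_by_consent(records, "credit_scoring")
```

A gate of this kind is only one control among many; the security and integrity principles would additionally call for safeguards around how the released data is stored and transmitted.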

Compliance with the PDPA will likely minimise the data user's exposure to liability when using AI to process personal data.

To date, there is no proposed regulation regarding the imposition and allocation of liability in connection with the use of AI technologies. Nevertheless, as noted above, the AIRmap contemplates an AI Coordination and Implementation Unit responsible for prioritising foundational aspects of the AI-driven digital governance structure and measures, including policy, regulation, standards and guidelines.

To date, there is no specific legislation addressing the technical and legal characterisation of bias in AI algorithms. In the context of the judiciary’s use of AI (see 8.2 Judicial Decisions), the Sarawak courts, via Sarawak Information Systems, removed the “race” variable from the algorithm in the AI software, due to concerns about bias in sentencing.
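Purely for illustration (the Sarawak system’s actual implementation is not public, and all names below are hypothetical), removing a protected variable can be as simple as dropping the field before case data reaches the model:

```python
# Hypothetical sketch: strip a protected attribute ("race") from case
# records before they are passed to a sentencing-recommendation model.

PROTECTED_ATTRIBUTES = {"race"}

def strip_protected(record):
    """Return a copy of the record without any protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED_ATTRIBUTES}

case = {"offence": "theft", "prior_convictions": 2, "race": "X"}
model_input = strip_protected(case)  # {"offence": "theft", "prior_convictions": 2}
```

It should be noted that dropping a variable does not by itself eliminate bias, as other fields may act as proxies for the removed attribute.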

No response has been provided in this jurisdiction.

Facial Recognition

Although facial recognition technology has been adopted by the government and private entities in Malaysia, there is no specific law governing facial recognition.

Biometric Information

Pursuant to the PDPA, “personal data” includes sensitive personal data and expressions of opinion about the data subject. Further, by virtue of the Code of Practice for Private Hospitals in The Healthcare Industry (registered pursuant to Section 23 of the PDPA), biometric information (such as fingerprints) collected from data subjects is considered personal data. As such, biometric information may be considered “sensitive personal data” if it can reveal information such as the physical or mental health or condition of a data subject. In relation to sensitive personal data, Section 40 of the PDPA is of relevance: it obliges the data user to obtain the explicit consent of a data subject prior to processing such data.

The challenges posed by automated decision-making lie in the unpredictability of the analytics behind automated decision-making and profiling. Very often, companies do not disclose or justify their criteria and methods used to draw inferences and make decisions. Counterintuitive and unpredictable inferences may therefore be drawn by data users, without data subjects ever being aware, thus posing risks to privacy, identity, data protection and reputation.

The current data protection laws in Malaysia do not provide for the protection of data subjects in cases where decisions are based solely on automated processing. Firstly, it is not clear whether inferences relating to an identified or identifiable living person would fall within the definition of personal data. Secondly, even if such inferences are considered personal data, it is not clear whether express consent is necessary where processing is performed using analytics. This is made more complicated by the fact that the PDPA affords different standards of care to sensitive personal data and ordinary personal data.

No response has been provided in this jurisdiction.

The Malaysia Competition Commission (MyCC) has not embarked on investigations or made decisions on price-fixing agreements involving AI technology. Price-fixing agreements have, however, been condemned by the MyCC in a number of cases. One of the most notable cases on a horizontal price-fixing agreement is Persatuan Insurans Am Malaysia (PIAM) & Ors v Competition Commission [2022] 9 CLJ 268, in which the insurers were alleged to have entered into horizontal price-fixing agreements fixing the labour rate at MYR30 per hour and spare-parts discounts at 25% in respect of motor insurance claims involving workshops.

The MyCC in its final decision referenced a Merimen Online system through which workshops would prepare and submit cost estimates for vehicle repairs for the insurers’ approval. Although the use of the system was not the basis of the infringement finding, the MyCC found that some insurers used the system to set default rates for the parts trade discount and labour rate while others did not. The MyCC treated the non-use of the system to fix default rates as a mitigating factor and reduced the fines for those insurers. That said, the MyCC’s final decision was recently overturned by the Competition Appeal Tribunal in that case.

The National Fourth Industrial Revolution Policy (4IR Policy) issued in 2021 sets out the aim of the Malaysian government to use technology such as AI to improve environmental sustainability. This includes entering the Top 50 in Environmental Performance Index and reducing greenhouse gas emissions by 45% by 2030.

The government has highlighted five foundational 4IR technologies: AI, IoT, blockchain, advanced materials and technologies as well as cloud computing and big data analytics. Out of these five technologies, AI is acknowledged to be the most important technology, permeating all industries and playing an increasing role in daily life.

In respect of environmental issues, one of the initiatives identified is the provision of support to businesses and social enterprises to leverage the 4IR technologies to solve socio-environmental issues.

MOSTI published the AIRmap to harness AI capabilities in different industries to create a thriving and sustainable AI innovation ecosystem in the country. 

Recently, on 30 November 2022, Bank Negara Malaysia (BNM, the Central Bank of Malaysia) issued a Climate Risk Management and Scenario Analysis policy document. Whilst there is no express mention of the use of AI, BNM considers the emergence of new or efficient carbon-capture technology to be a relevant factor that financial institutions should take into account in climate scenario analysis.

With the emphasis placed by the Malaysian government on the use of AI in addressing environmental issues, we expect more industry standards to be developed in the near future.

The use of AI technology in employee hiring has increased greatly due to AI’s efficiency in screening employment applications at a rapid pace as compared to manual application review. Whilst AI-based evaluations may bring with them a certain level of objectivity and fairness depending on how the program is set up (pre-setting conditions such as work experience and education), they could also result in discrimination, for instance if the algorithm rejects candidates with certain protected characteristics. The following provisions in Malaysian law are noteworthy.

Article 8(2) of the Federal Constitution states “Except as expressly authorised by this Constitution, there shall be no discrimination against citizens on the grounds only of religion, race, descent, place of birth or gender in any law or in the appointment to any office or employment under a public authority or in the administration of any law relating to the acquisition, holding or disposition of property or the establishing or carrying on of any trade, business, profession, vocation or employment”.

Section 69F(1) of the Employment Act 1955 empowers the Director General to inquire into and resolve any dispute between an employee and their employer in respect of any matter relating to discrimination in employment and the Director General may, pursuant to such decision, make an order.

Section 5(1)(c) of the Industrial Relations Act 1967 provides that employers shall not discriminate against any person in regard to employment, promotion, any condition of employment or working conditions on the ground that they are or are not a member or officer of a trade union.

Section 29(1) of the Persons With Disabilities Act 2008 (PWDA) provides that persons with disabilities shall have the right to access to employment on an equal basis with persons without disabilities. Section 29(2) of the PWDA requires employers to protect the rights of persons with disabilities on an equal basis with persons without disabilities in terms of work conditions, equal opportunities, remuneration, protection from harassment and the redress of grievances.

Employers should be mindful of the foregoing and strive to ensure equal employment opportunity when using AI technology in employee hiring. An individual who believes they have been discriminated against during a recruitment process could file a claim or complaint. Having said that, the risk of violating applicable law through a non-hiring decision based on an adverse AI rating is unlikely to be high.

Notwithstanding the increasing reliance on AI technology in the workplace, AI technology has its limits, as employers still require human input, especially when defending claims of unfair dismissal. Dismissals in Malaysia must be with just cause and excuse; employers need to be able to explain what led to the dismissal (the main reasons, etc). If employers rely on an AI’s algorithm when making employees redundant, for example, they need to know how the algorithm came to its decisions, why certain employees were selected and why others were retained. In essence, employers must be able to pinpoint the exact data points used by an AI, which would be near impossible given that AIs have complicated algorithms and use multiple data points.

AI technology automates the processes relating to evaluation of employee performance and monitoring work by collecting data from various sources such as emails, calendars and project management tools. The algorithms identify patterns in the data and provide objective input on areas requiring improvement from employees.

Whilst AI technology eliminates human errors and biases that can affect performance review and monitoring processes, it lacks the human element needed for building relationships – ie, it is a mechanically driven process in terms of providing feedback, support and guidance to employees. As AI technology only provides input based on data and algorithms, it may be unable to detect employee potential that is not evident in the data. On the other hand, AI algorithms provide a more comprehensive projection of employee performance and continuous, real-time assessment, allowing employees to address performance issues in a timely manner rather than waiting for a set performance review cycle.

As discussed above, AI technology has its limits as employers still require human input, especially when it comes to defending claims of unfair dismissal. Dismissals in Malaysia must be with just cause and excuse; employers need to be able to explain what led to the dismissal (the main reasons, etc). In cases of poor performance, employers need to show that the employee was given sufficient notice/warning highlighting their poor performance and that the employee was given a reasonable opportunity to improve their work performance. As discussed in 14.1 Hiring Practices and Termination of Employment, it would be near impossible to pinpoint the exact data point used by an AI given that AIs have complicated algorithms and use multiple data points.

While there are no regulations expressly regulating the use of AI by digital platform companies, to the extent that such digital platform companies are also financial institutions regulated by BNM or capital market entities regulated by the SC, for example by virtue of their digital bank operations, or digital trading platform operations, their use of AI will be subject to the binding guidelines and policies of the BNM and SC, respectively, as discussed in 15.2 Financial Services.

The use of information technology by financial institutions regulated by BNM may be subject to the binding policies of the BNM, including among others the BNM Policy Document on Risk Management in Technology (RMiT PD). While the RMiT PD does not expressly address the use of AI, the RMiT PD defines technology risks as risks emanating from the use of information technology and the internet. Given that AI solutions are often delivered through or effectuated by information technology and the internet, the use of AI by certain financial institutions will therefore be subject to the regulatory scope of the RMiT PD.

The RMiT PD requires financial institutions subject to it to establish appropriate technology risk appetite, implement a sound and robust technology risk management framework, and implement a cyber resilience framework, among other things. It also prescribes certain minimum standards and control measures to be complied with and put in place by financial institutions in their use of and reliance on technology.

When offering online banking or insurance services, financial institutions subject to the RMiT PD are required to notify the BNM of various matters, including risks identified and strategies to manage such risks. This will presumably include any AI-related risks and policies put in place by the financial institution to address such AI-related risks. The use of AI may also be subject to the cybersecurity provisions of the RMiT PD, depending on the technical specifications of the AI in question.

Other BNM policies such as the BNM Policy Document on Outsourcing (the “Outsourcing PD”) and BNM Policy Document on Management of Customer Information and Permitted Disclosures (the “Customer Information PD”) may also apply, as the Outsourcing PD and Customer Information PD both contain technology-related provisions that supplement the RMiT PD. For example, the Outsourcing PD may apply in the context of procurement of an AI solution by a financial institution, and the Customer Information PD may apply where the use of technology including AI relates to the management of customer information. 

Some other financial-related businesses, such as digital asset exchanges, digital asset custodians and digital token issuers, which are subject to the regulatory oversight of the SC by virtue of the CMSA, will be bound by the technology-related provisions of the binding guidelines of SC, such as the SC Guidelines on Management of Cyber Risk, and SC Guidelines on Digital Assets. In a similar vein, therefore, the use of AI by these market players will have to comply with the relevant requirements in these guidelines.

The use of AI in the healthcare sector is not specifically regulated. However, current healthcare-related laws will continue to apply. For instance, where a medical device functions based on AI, its approval for market circulation and its performance and safety will nevertheless fall within the purview of the Medical Device Authority under the MDA.

The term “medical device” may include software, and the MDA does not appear to distinguish between traditional software and AI-based software. Accordingly, any AI-powered software capable of diagnosing, preventing, monitoring, treating or alleviating a disease would likely be considered a “medical device” under the MDA.

No response has been provided in this jurisdiction.

Where technologies are involved, the trade secrets at stake may not be confined to the technology itself but may also extend to its use. This could potentially include personal data that is processed and compiled by AI.

In order to preserve the secrecy of the AI technology or the data generated from its use, companies should adopt the necessary safeguarding measures. For example, companies should be mindful when disclosing such data to their employees by ensuring that with each instance of disclosure:

  • employees are made aware that the information is confidential in nature;
  • disclosure is only made by reason of seniority or responsibility and such reasons are made known to the employees;
  • access is restricted appropriately to the confidential information;
  • technological measures are employed to prevent unauthorised duplication of the confidential information; and
  • non-disclosure agreements are executed by the relevant employees or confidentiality clauses are incorporated into the employees’ contracts of employment.

Works of art and works of authorship generated by AI have seen a boom due to the increased use, ease and accessibility of such software. Users may simply input commands, descriptions and instructions to produce AI-generated works of art. This raises the question of who the author of these AI-generated works is, as well as whether copyright subsists in such works. While the UK Copyright, Designs and Patents Act 1988 explicitly provides that computer-generated works are eligible for copyright protection, the equivalent provision is absent from the Malaysian Copyright Act 1987.

While courts in the past have held that an author under Section 3 of the Copyright Act 1987 may include a body corporate (eg, the company that developed the AI) in addition to a natural person, the other provisions of the Copyright Act 1987 may not align with this interpretation. For example, the duration of copyright in literary, artistic and musical works extends for a period of 50 years after the death of the author. This inherently suggests that an author of these types of works would need to be a natural person capable of expending a certain level of skill, judgement or effort. Further, the human input here is arguably confined to ideas or concepts (ie, the prompts, descriptions or instructions supplied by the user, which are not protectable by copyright) and not the ultimate expression of the work generated by the AI.

For these reasons, it is likely that AI would not qualify as an author of its generated works, nor would the human users whose input may simply be confined to ideas or concepts. By extension therefore, it is arguable that copyright may not subsist in these AI-generated works.

OpenAI, the entity behind ChatGPT, is at the forefront of AI chat system development. However, similar to the context of AI-generated artworks, it is unlikely that copyright protection can be applied to the output created through OpenAI/ChatGPT, as neither the AI itself nor its users can be considered the authors of such work.

Despite this, potential copyright infringement issues may arise for both users and OpenAI. It is assumed that OpenAI/ChatGPT’s machine learning process involves the analysis and assimilation of various datasets, texts and works in the public domain. As such, there is an inherent risk of copyright infringement, as not all materials used might be free of copyright restrictions.

As stipulated under Section 36(1) of the Copyright Act 1987, copyright infringement occurs when a person, without the copyright owner’s consent, performs an act subject to copyright protection under the Act, or causes another person to do so. In this context, OpenAI/ChatGPT could be argued to have potentially infringed copyright by reproducing copyrighted works for machine learning purposes. By extension, a user who instructs ChatGPT to generate a work or product could also be implicated in this infringement. This, of course, remains to be tested in the national courts.

In-house attorneys should be more cognisant of the standards set and regulations issued in respect of their respective industries. Where capital markets are concerned, in-house attorneys should be cognisant of the guidelines and handbooks issued by the SC. Where financial services are concerned, in-house attorneys should be cognisant of the applicable BNM policy documents relating to the use of technology.

Moreover, where a company processes personal data using AI systems, in-house attorneys should be mindful of adhering to the PDPA. Due to the sensitive nature of personal data and the inherent value and advantage it offers to competitors, rigorous risk management and compatible security frameworks should be implemented in order to protect such personal data.

Boards of directors of financial institutions that are subject to the RMiT PD are specifically tasked with implementing institutional policies and frameworks in relation to their institution’s use of technology, which presumably includes AI. Other sectoral guidelines and policies may contain similar requirements.

Further, certain laws can hold corporate directors accountable for offences committed by the body corporate they serve. For instance, where AI is used in the processing of personal data, corporate directors should be aware that any contravention of the PDPA by their company may also entail personal liability. Section 133(1) of the PDPA allows any person who was a director at the time of the commission of the offence to be charged jointly or severally in the same proceedings as the body corporate; if the body corporate is found to have committed the offence, the director is also deemed to have committed it unless a statutory exception applies. Other legislation may contain similar provisions.

Shearn Delamore & Co.
7th Floor, Wisma Hamzah-Kwong Hing
No 1 Leboh Ampang
50100 Kuala Lumpur
Malaysia

+60 3 2027 2727

+60 3 2078 5625

info@shearndelamore.com
www.shearndelamore.com
