Artificial Intelligence 2024 Comparisons

Last Updated May 28, 2024

Contributed By Ganado Advocates

Law and Practice

Authors



Ganado Advocates is one of Malta’s foremost law practices. It traces its roots to the early 1900s, when it was founded in Malta’s capital city, Valletta. The firm has grown and adapted over the years to meet the changing needs of the international business and legal community. With a team of over 100 lawyers and professionals from other disciplines, it is consistently ranked as a top-tier firm in all its core areas, from corporate law to financial services, maritime, aviation, intellectual property, data protection, technology, litigation, employment and tax law. Ganado Advocates has over the past decades contributed directly towards creating and enhancing Malta’s hard-won reputation as a reliable and effective international centre for financial and maritime services. Today, the firm continues to provide high standards of legal advisory services to support and enhance Malta’s offering.

Up until the time of writing (May 2024), Malta has not legislated specifically to cater for the legal revolution that AI is creating.

Malta’s legal system is a “mixed” one: its civil, commercial and criminal laws are principally based on civil law, whilst the main source of its public and administrative laws is common law. These legal traditions remain influential in the interpretation of Malta’s laws, and it is expected that decisions of the Italian, French and English courts in relation to AI will influence the interpretation of Malta’s civil, commercial and public laws.

Contractual and Tortious Liability

General principles of contract and tort law would continue to apply to the use of artificial intelligence in Malta. These are covered by the Civil and Commercial Codes (Chapters 16 and 13, respectively) of the Laws of Malta.

Acting in good faith (in the manner of a bonus paterfamilias) is one of the underpinning principles of both contract law and tort law. AI would generally be deemed a tool, and the user of such a “tool” remains ultimately responsible for damage caused by it or through its use. The principle of culpable negligence under Article 1033 of the Civil Code, whereby “any person who with or without intent to injure, voluntarily or through negligence, imprudence, or want of attention, is guilty of any act or omission constituting a breach of the duty imposed by law, shall be liable for any damage resulting therefrom”, is particularly relevant to damages resulting from the use of AI. As with any technology, the use of AI brings with it a duty of care towards others. This applies both where the technology is used privately and where it is used in a professional context. The user cannot rely on ignorance of the effects of the technology or on the “black box” phenomenon.

IP, Data Protection and Consumer Affairs

Apart from its domestic laws, as an EU member state, Malta’s laws adopt harmonised EU legislation in most of the areas that are relevant to AI, be they copyright and IP, data protection, use of medical devices, product safety or consumer protection law. The domestic laws that have transposed the EU Directives or support EU Regulations in these fields, most notably the Copyright Act (Chapter 415 of the Laws of Malta), the Data Protection Act (Chapter 586 of the Laws of Malta), the Consumer Affairs Act (Chapter 378 of the Laws of Malta) and the Medical Devices Regulations (Subsidiary Legislation 427.44) have not been modified to cater for AI specificities. Neither has Transport Malta (the Authority for Transport in Malta) updated its Highway Code or introduced any specific provisions related to the use of automated vehicles in Malta.

One piece of legislation that has been enacted and that could have a considerable impact on the development of AI solutions in the health sector is the Processing of Personal Data (Secondary Processing) (Health Sector) Regulations (Subsidiary Legislation 528.10). Under this law, where the use of health data by health providers for purposes other than the original intended use (which purposes are listed in the law) can lead to benefits for the health system in Malta, that use can be deemed permitted, subject to the use of anonymisation techniques or clearance from an established Ethics Committee. This permitted secondary use of health data should lead to AI advances in the Maltese health sector.

In summary, all relevant public authorities and bodies are keeping a watchful eye on developments in their areas of interest whilst, at the same time, waiting for more concrete signs of the need to change the status quo of the legal frameworks they are responsible for. Naturally, the discussions being held at pan-European level and at inter-supervisory authority level, will determine how the responsible authorities and the legislature will behave going forward.

Maltese Regulators

This said, the Malta Digital Innovation Authority (MDIA) was set up as a public authority in 2018 to lead and advise the government on developments and initiatives in the innovative technology space, including AI. It has developed and is revising a national AI Strategy for Malta and is also spearheading legislative change that will allow for proper regulation, in accordance with the EU’s AI Act.

Back in 2019, the MDIA launched what it described as “the world’s first national AI certification programme aiming for AI solutions to be developed in an ethically aligned, transparent and socially responsible manner”. The AI-ITA scheme laid out a certification programme, similar to that found in today’s EU AI Act, under which, depending on the risks envisaged in the use of the technology, developers and deployers could attain certification from a technology systems auditor licensed by the MDIA, who would certify that the technology met pre-set objectives and criteria.

Inevitably, Malta’s regulators, in particular the Malta Financial Services Authority (MFSA) and the Malta Gaming Authority (MGA), have been following and commenting on developments in the use of technology, including AI, within their sectors of focus. Other legislation that is being harmonised at EU level will have an impact on the use of AI in certain sectors. In this vein, the MFSA has issued for public consultation Guidelines on DORA (the EU’s Digital Operational Resilience Act), which update its Guidelines on Technology Arrangements, ICT and Security Risk Management, and Outsourcing Arrangements. This is another key aspect that impacts and regulates the use of artificial intelligence within financial services.

EU Regulators

The Guidance by the European Central Bank (ECB) and the European supervisory authorities – the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA) and the European Securities and Markets Authority (ESMA) – on the use of AI, cyber-risk and digital resilience, will continue to be key to developments in Malta regulating the use of technology, including AI, in the financial sector, where, save for harmonised standards at EU level, one would expect regulation to come in the form of directives issued by sectoral regulators. This approach is likely to be experienced in all sectors, including transport, health and education.

AI is pervasive in the industries that form the basis of Malta’s economic activity. In particular, large-scale use of AI is known to take place in the financial services (banking, insurance and investments), gaming (both i-gaming and video gaming) and health sectors, amongst others. The uses range from predictive AI (for instance in risk and credit worthiness checks, as well as prognostic medicine) to generative AI (in content and software development, as well as customer support and compliance).

Transport

In the public sector, the government has expressed the need to resort to AI to solve Malta’s traffic problems. From the press releases that have been published, it seems that the government and relevant authorities are in fact investing in intelligent traffic management systems. A pilot project was launched, under the leadership of Transport Malta, with the following goals:

  • to reduce congestion and emissions;
  • to identify patterns in transport behaviours;
  • to deliver insights to enable intelligent journey-planning and scheduling of public transport;
  • to create intelligent private journey routing (in conjunction with third-party applications); and
  • to assist with monitoring, policing, and enforcement.

Health and Education

The health sector is also relying on AI to assist with the procurement and effective management of medicines. The Central Procurement and Supplies Unit (CPSU) has launched a pilot project for a forecasting application that will serve as a decision-making tool to help the CPSU in budgeting, planning the procurement process (tendering, quotations, etc) and planning the ordering process. It will attempt to predict future outcomes based on past events and management insight, providing CPSU management and procurement personnel with insight and the baseline tools and techniques to better manage and react to fluctuations in demand.

In education, the Ministry of Education is reported to be working on a pilot project that will develop an AI-powered adaptive learning system to help students achieve better education outcomes through personalised learning programmes based on student performance, ambitions and needs. The pilot will also help teachers to build more formative assessments of the pupils’ capabilities. 

Tourism and Utilities

The Malta Tourism Authority is also reported to be launching a Digital Tourism Platform to allow for more meaningful use of tourist data.

In a pilot project owned by the Ministry for Energy, Enterprise and Sustainable Development, AI algorithms will be used to collect, organise, and analyse current data to discover patterns and other useful information relating to water and energy usage. The solution will deploy large-scale analytics and machine learning on customer data to help the utility companies to maximise resources and subsequently provide responsive real-time customer service management. Concurrently, they can make real-time adjustments to attain optimised generation efficiency.

Predictive maintenance models and scenarios will also be developed.

This project is expected to drive better efficiency, resilience and stability across Malta’s energy and water networks, and lay the foundation for the next evolution of its smart grid network.

The Malta AI Strategy and Vision 2030

The Malta AI Strategy and Vision 2030 contains 22 action points in its education and workforce section, six dealing with legal and ethical issues, and 11 in the part focussing on ecosystem infrastructure. These are being rolled out by the MDIA in conjunction with other public entities.

The objectives in the education and workforce space are:

  • understand and plan for the impact of technology and automation on the Maltese labour market;
  • equip the workforce with stronger digital competencies and new skills;
  • build awareness amongst the general population of what AI is and why it is important;
  • build awareness of AI amongst students and parents;
  • foster and embrace the adoption of AI in education;
  • develop teachers’ knowledge and awareness of AI in education;
  • equip all students enrolled in higher education programmes in Malta with AI skills; and
  • increase the number of graduates and postgraduates with AI-related degrees.

The legal and ethical objectives are:

  • establish an ethical AI framework towards trustworthy AI;
  • launch the world’s first national AI certification framework;
  • appoint a technology regulation advisory committee to advise on legal matters; and
  • set up a regulatory sandbox for AI and a data sandbox for AI.

The objectives related to ecosystem infrastructure are:

  • invest in Maltese language resources;
  • incentivise further investment in data centres;
  • establish a digital innovation hub (DIH) with a focus on AI;
  • increase the extent of the open data availability to support AI use cases;
  • provide cost-effective access to compute capacity;
  • expand Malta’s data economy through 5G and IoT; and
  • identify best practices for securing national AI solutions.

Other Initiatives

In addition to the above, the MDIA, together with the Ministry for the Economy and other constituted bodies such as TechMT (an industry/public partnership), has been playing a central role in the promotion of AI initiatives. From the launch of sandboxes (such as the MDIA’s technology assurance sandbox), to the setting up of business incubators (such as the DIH), to the provision of grants for digital innovation and AI research, as well as seed funds, this network of bodies has been supporting technology development and innovation, including the development and adoption of AI.

Moreover, under a project to be funded by the EU, the MDIA, Malta Council for Economic and Social Development (MCESD) and University of Malta have created a hub (the Malta – EDIH) wherein the complete set of services of a European Digital Innovation Hub are provided on an open, transparent, and non-discriminatory basis and targeted towards SMEs, small mid-caps, and public sector organisations. Within the Hub public workshops are organised to facilitate two-way dialogue between AI experts and industry.

The MDIA was established in 2018 through the Malta Digital Innovation Authority Act (Chapter 591 of the Laws of Malta) with the aim of regulating innovative technology through the issuing of compliance certificates (both mandatory and voluntary). Its remit was further defined through the Innovative Technology Arrangements and Services Act (Chapter 592 of the Laws of Malta). Originally focused mainly on the regulation of distributed ledger technology (DLT), its remit was quickly expanded to other forms of innovative technology, including AI.

Initially, Malta took a proactive and innovative approach to the regulation of AI within its jurisdiction. In October 2019, Malta issued the Strategy and Vision for Artificial Intelligence in Malta 2030. This strategy outlined the policy that the country set out to adopt within the following years in order to “gain a strategic competitive advantage in the global economy as a leader in the AI field”. The basis of the strategy’s overall vision is three-fold. Firstly, it focuses on building an infrastructure that promotes the investment in AI applications and R&D. Secondly, it explores how these AI applications can be deployed in the private sector and, thirdly, it promotes adoption of AI in the public sector so as to maximise the overall benefit that can be derived from this innovative technology. This strategy is constantly being updated and a revision, taking into account the various recent developments, is expected to be issued soon.

From a regulatory perspective, the strategy included an ethical AI framework (see 3.3 Jurisdictional Directives) as well as a national AI certification programme. A Technology Regulation Advisory Committee was also founded to act as a point of reference for matters relating to the laws and regulation of AI, as well as to assist in the creation of regulatory and data sandboxes.

The AI Sandbox programme, which ensures that AI systems are developed in line with technology-driven control objectives, is one of the cornerstones of the 2030 vision.

The laws regulating the functions and scope of the MDIA are also currently being revised to better equip the Authority to meet its obligations and aims, going forward. In particular, the revisions make way for the introduction of local legislation required to complement the AI Act once this comes into force.

To date, the regulatory approach remains an optional one, where developers are encouraged to make use of regulatory sandboxes to test whether their technology will live up to the scrutiny of mandatory regulation once this comes into force in the form of EU harmonised laws and standards.

Apart from those legislative developments mentioned elsewhere in this chapter, to date, no specific, local AI laws have been drafted, nor have laws relating to intellectual property, data protection or other areas that are central to AI been amended to cater for the challenges posed by the technology. This said, regulatory authorities are expected to spearhead developments in this space, in particular in the field of financial services and insurance.

No AI-specific legislation has been enacted in Malta. Legislative preparatory work is underway to allow for the introduction of the AI Act, which will have direct effect in Malta.

Back in October 2019, an ethical AI framework for the development of safe and trustworthy AI was published as part of the Strategy and Vision for AI in Malta 2030. This non-binding AI framework was essentially a set of AI governance and control practices which were based on four guiding principles. Firstly, AI systems must allow humans to maintain full autonomy whilst using them. Secondly, AI systems must not harm humans, the natural environment, or any other living beings. Thirdly, the development, deployment and use of AI systems must always be in alignment with the principle of fairness. Finally, one must be able to understand and challenge the operations and outputs of AI systems.

This AI framework reflected the Maltese policymakers’ aspirations to strike a balance between endorsing the uptake of AI technology, whilst also ensuring its safe deployment within the relevant industries.

To date, all AI-specific legislation is in draft form. Malta has not yet legislated to allow for the transposition of AI-related directives and to cater for those measures in the EU AI Act that require national regulation and co-ordination between authorities. This said, a draft of such national laws is currently being discussed and consulted upon between the interested Ministries and public bodies, although it has not been made public. The enabling legislation (the “Malta Digital Innovation Authority (Amendment) Act”) that will allow for the entering into force of subsidiary legislation to regulate such matters is currently passing through its second reading in Parliament. It will be enacted once the third reading is completed in the second half of 2024, and it is expected that Regulations relating to artificial intelligence will be enacted soon after.

Under current legislation (the Innovative Technology Arrangements and Services Act (Chapter 592 of the Laws of Malta)), developers of AI solutions may voluntarily obtain certification of their technology, whereby the MDIA will certify that the technology meets pre-determined control objectives. Under the Technology Assurance Assessment Framework, applicants would need to appoint a systems auditor from amongst a list of auditors certified by the MDIA as competent to verify AI systems, who will verify whether the system meets the published criteria. This system, which was originally conceived for the audit of DLT systems and adapted to other forms of innovative technology, is similar in concept and process to that laid down in the AI Act, and the control objectives are expected to be similar to those that will be stipulated under that framework.

Upon the coming into force of the AI Act and any other pieces of EU legislation directed at the regulation of AI, Maltese laws, initiatives (similar to the technology certification one mentioned above) and processes that would be inconsistent with these harmonised rules will be disapplied.

There would not seem to be any laws that would not survive once the AI Act enters into force. As mentioned in 3.4.1 Jurisdictional Commonalities, the laws are currently being amended to allow for easier and less burdensome legislative processes to enact new laws and amend current ones that may be required to supplement the said AI Act.

This is not applicable in Malta.

As mentioned in 3.4.1 Jurisdictional Commonalities, bills are currently laid before Parliament to amend the Malta Digital Innovation Authority Act, which are expected to allow for subsidiary legislation to be introduced to iron out any inconsistencies in law that may hinder the proper operation of the AI Act and any other EU technology-specific legislation. As an EU member state, Malta will adopt all other EU laws that may impact the take-up of AI. Currently, under the Innovative Technology Arrangements and Services Act, one may apply for technology assurance certification, including in relation to AI solutions. Naturally, this scheme will be limited to areas that do not overlap with the certification requirements envisaged by the AI Act. This said, there would seem to be little incongruence between the scheme and the AI Act requirements, and the MDIA has positioned the scheme, which is voluntary, as a means for developers of AI solutions to test their solutions from a regulatory perspective in preparation for the obligations that may arise under the AI Act.

With the increasing relevance of generative AI, it is also possible that IP laws would be modified to allow for the creation of certain ownership rights in AI-generated works. This would be particularly relevant to the i-gaming and e-gaming development sectors that are relevant to Malta’s economy. Although there have been discussions and proposals in this regard, it is too early to say which position would be adopted by the government.

The Maltese courts have not yet had the opportunity to address the legal challenges posed by AI, particularly in relation to intellectual property rights and damages resulting from the use of AI solutions. Decisions of foreign courts in those jurisdictions on whose laws Maltese law is modelled would be of significant importance and would offer guidance to Maltese courts when deciding these unexplored issues. Thus, UK court judgments on intellectual property rights, and Italian and French court judgments in relation to tort and contractual damages resulting from the use of AI, would be of interest to the courts in Malta.

Maltese courts have not had to grapple with definitions of AI and have not shed any light on what should constitute AI. It is understood that the definition of AI may change depending on the legislation being applied. It is important to note that the Maltese legal system does not expressly recognise the principle of judicial precedent, although court judgments, especially those of the Court of Appeal and the Superior Courts, act as a source of interpretation of the law. This said, Maltese courts tend to adopt a particularly positivist attitude to the application of the law and would not attempt to provide interpretations that go beyond those found in the particular law that they are applying. Therefore, a universal interpretation of AI that would apply to all legal instruments is not foreseeable. The legislature and regulatory authorities, acting within their vested powers, would be free to adopt different meanings of the term “artificial intelligence” and may define such term in the laws, directives, decisions and policies being drafted by them. The courts would then follow such interpretation when applying those laws, directives, decisions and policies, depending on the subject matter of the case before them.

The MDIA has been tasked by the government of Malta to lead the initiatives and policies surrounding AI. It acts as advisor to the government on all matters relating to AI and co-operates with other authorities and public bodies that have a role to play in the regulation of this technology within the sectors they are responsible for.

The MDIA has formulated Malta’s AI strategy (see 2.2 Involvement of Governments in AI Innovation for further detail) and is currently implementing the various action points in co-operation with other stakeholders.

The MFSA and MGA are expected to also play a lead role in shaping the use of AI in the financial and gaming sectors, which are key industries in Malta. The Ministry of Health and Active Ageing, acting through various units that are tasked with co-ordinating and leading projects for the said Ministry, will also have an important role to play. Transport Malta will likewise be instrumental in regulating the use of autonomous vehicles and AI-enabled means of transport, including drones.

The Office of the Information and Data Protection Commissioner (IDPC) will continue to monitor developments relating to the use of personal data in and by AI and will regulate these matters in accordance with co-ordinated positions at the European Data Protection Board (EDPB) level.

With greater harmonisation at EU level, it is expected that all legal instruments will converge on the definition of AI provided in the AI Act. This does not mean that any applications that may fall outside the said definition will not be regulated by technology-agnostic rules, in the same manner as applications that will be classified as AI under such a definition. The AI Act, Cybersecurity Act, Digital Operational Resilience Act (DORA), Network and Information Systems Directive (NIS 2), amongst other instruments, all regulate, to different degrees and from different angles, the deployment of technology, including AI. The same approach will be reflected at national level in relation to all industries and sectors.

Therefore, it is not envisaged that there will be conflicts in the definition of AI which could lead to conflicting obligations resulting from different regulatory frameworks. However, deployers of AI would need to carry out a 360° evaluation of all the legal obligations that apply to the use of such technology in the particular sector and circumstances they are in.

The fact that, despite greater harmonisation at EU level, different laws may be applied by different jurisdictions, leads to greater complexity for deployers of AI, who invariably provide services across jurisdictions and even continents. This is of particular relevance to Malta where various developers and deployers that may be set up within the jurisdiction would be providing their services to clients in other jurisdictions.

The EU AI Act, which shall be directly applicable in Malta, defines an “AI system” in Article 3(1) as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

Until now, the only attempt at defining AI by the MDIA had come in the form of the AI-ITA Guidelines, in which it was held that AI may be deemed an Innovative Technology Arrangement (ITA) if it consists of software, the logic of which is based on underlying data sets and which exhibits one or more of the following functions:

  • the ability to use knowledge acquired in a flexible manner in order to perform specific tasks and/or reach specific goals;
  • evolution, adaptation and/or production of results based on interpreting and processing data;
  • a systems logic based on the process of knowledge acquisition, learning, reasoning, problem solving, and/or planning; and
  • prediction, forecast and/or approximation of results for inputs that were not previously encountered.

The same guidelines further specify how AI may be recognised as an ITA by the MDIA if it applies one or more of the following techniques and/or algorithms:

  • machine learning and variations thereof (eg, deep learning);
  • neural networks or variations thereof (eg, convolutional neural networks (CNN) or recurrent neural networks);
  • pattern recognition (eg, computer vision);
  • natural language processing (NLP);
  • predictive systems;
  • fuzzy systems;
  • expert systems;
  • optimisation algorithms (eg, evolutionary and/or hill-climbing algorithms);
  • probabilistic classifiers (eg, naïve Bayes); and
  • cluster analysis algorithms (eg, k-means clustering).

This voluntary certification scheme will be superseded by the AI Act framework and, at best, is expected to be fine-tuned to the needs of those applications that do not qualify as requiring certification under the AI Act.

MDIA

In fulfilment of its mandate, the MDIA seeks to, inter alia, promote:

  • governmental policies that favour the deployment of ITAs within the public administration;
  • ethical and legitimate criteria in the design and use of ITAs to ensure quality of service and security;
  • transparency and auditability in the use of ITAs;
  • fair competition and consumer choice; and
  • the overall advancement and adoption of ITAs.

The MDIA also seeks to prevent:

  • the misuse of ITAs by ensuring that the ITA standards meet consumers’ legitimate expectations;
  • the breach of the data protection rights of users, consumers and the public in general;
  • the use of ITAs for money laundering and terrorist financing purposes; and
  • the use of ITAs in a manner which might tarnish Malta’s reputation.

MFSA

The MFSA regulates banking, financial institutions, payment institutions, insurance companies and insurance intermediaries, investment services companies and collective investment schemes, securities markets, recognised investment exchanges, trust management companies, company services providers and pension schemes. Its mission is to safeguard the integrity of markets and maintain stability within the financial sector for the benefit and protection of consumers. The MFSA collaborates with other local and foreign bodies, government departments, international organisations, ESMA, the EBA, EIOPA, colleges of supervisors, the European Systemic Risk Board (ESRB), the ECB, the Single Resolution Board (SRB) and other entities which exercise regulatory, supervisory, registration or licensing functions and powers under any law in Malta or abroad.

Other Regulators

As the regulator for the gaming industry in Malta, the MGA seeks to promote and ensure that gaming is fair and transparent, prevent crime, and protect minors and vulnerable players.

The IDPC is the national data protection authority. Its role is to supervise and ensure that the necessary levels of data protection are implemented in Malta, whilst also investigating and taking corrective measures against those entities that fail to adhere to their obligations.

Whereas, to date, in view of its remit, the MDIA is not known to have applied fines or taken enforcement action, apart from suspending or cancelling licences, the MFSA, MGA and IDPC have all taken corrective measures and imposed fines for breaches of the frameworks that they are responsible for upholding. It does not, however, seem to be the case that any fines have been imposed or action has been taken against any industry players as a result of their deployment of AI solutions.

Despite the various government authorities discussed in this article setting standards for the sectors they oversee, to date no standards have been imposed specifically in relation to the use of AI. Neither do representative bodies of professionals seem to have set standards for the use of AI in their professions.

Until standards are harmonised across jurisdictions, standards applied in one jurisdiction are not expected to be automatically accepted by regulatory bodies in others. This said, regulatory authorities within the EU collaborate closely within their pan-European bodies of regulators, such as EIOPA, the EDPB, ESMA and the EBA. It would be expected that standards set by these authorities would equally find application in Malta.

As discussed in 2.1 Industry Use, the government has embarked on a number of pilot projects where the use of AI for certain deliverables mentioned therein is being tested. Other than these, no further uses of AI by the government have been publicised.

No decisions related to government use of AI have been given by the Maltese courts.

The use of AI in national security matters has not been publicised.

To date, neither Maltese legislation nor the regulators or the courts have dealt with the complex legal issues surrounding generative AI. It is expected that, under general principles of contract law, the courts would uphold the limitations embedded in the licences and terms and conditions for use of generative AI solutions.

Copyright and Generative AI

In instances where the use of generative AI is not bound by licensing conditions, whether copyright could arise in generated works would depend on the originality of the generated works and the level of human intervention in the generation of the works.

Should the AI-generated work constitute a substantial copy of an original work and this is put in use by the entity that, through its prompts, generated the work using a third-party model, the said entity would be in breach of the copyright of the original work’s author, irrespective of the entity’s knowledge or intention in creating a copy of the original work. The only exception to this is where the exhaustive exceptions to copyright protection found in Article 9 of the Copyright Act (Chapter 415 of the Laws of Malta) apply. These include acts of reproduction of literary works by public libraries which are not for economic advantage, the reproduction of works for purposes of teaching or illustration without compensation, and the reproduction or translation of works to render them accessible to persons with a disability without compensation.

Similarly, if a model is trained on works in which copyright arises, without the authorisation of the copyright owner, the developers are liable for breach of copyright. This may result in the copyright owner prohibiting the commercial use and/or deployment of the AI model.

Where a work that would ordinarily qualify for copyright protection is created wholly by an autonomous process without meaningful intervention in the creation of the work, copyright would not arise. This is because copyright arises where the author or any of the joint authors of the artistic, literary or audiovisual work that qualifies for copyright protection is a citizen of, or is domiciled or permanently resident in, or in the case of a body of persons, is established in, Malta or a state in which copyright is protected under an international agreement to which Malta is also a party. The term “author” is defined as “the natural person or group of natural persons who created the work eligible for copyright”. The creation of a work by an autonomous process would therefore do away with the “author” and, consequently, copyright could not arise in it.

Personal Data and Generative AI

Another risk posed by generative AI relates to the use of personal data in both the training of the model and the interaction with it at prompt stage. The use of personal data in training a model must necessarily comply with one of the legitimate grounds under Article 6 of the GDPR. This is often not the case. The situation is compounded even further if special categories of data are used in the training of the model. It is with this in mind that the Processing of Personal Data (Secondary Processing) (Health Sector) Regulations (Subsidiary Legislation 528.10) were enacted. Under these regulations, where the use of health data by public health providers for purposes other than the original intended use (which purposes are listed in the law) can lead to benefits for the health system in Malta, that use can be deemed permitted, subject to the use of anonymisation techniques or clearance from an established ethics committee.

It is also important to note that there are no Maltese law exceptions to Article 22 of the GDPR. Under this provision, a data subject may object to the fully automated processing of his or her data, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. Neither does Maltese law make any exceptions to the data subject rights of access to, rectification of and deletion of his or her personal data which is used in the training of AI models.

The risks of using personal data – and, even more so, data that is covered by professional secrecy or legal privilege – in prompts when using generative AI, cannot be overlooked. No guidance has yet been issued in this respect by Maltese regulators or professional representative bodies, although it remains the responsibility of professionals to ensure that the protected or privileged information is not disclosed or breached through the use of the technology.

AI systems, being computer programs and algorithms, are afforded copyright protection. Under Article 2 of the Copyright Act, a computer program is defined as a literary work and, subject to it having an original character, is afforded copyright up to 70 years after the end of the year in which the author dies.

The data compiled for the purpose of training an AI model may also enjoy sui generis protection rights relating to databases. Under Article 25 of the Copyright Act “the maker of a database who can show that there has been qualitatively or quantitatively a substantial investment in either the obtaining, verification or presentation of the contents of the database shall have, irrespective of the eligibility of that database or its contents for protection by copyright or by other rights, the right to authorise or prohibit acts of extraction or re-utilization of its contents, in whole or in substantial part, evaluated qualitatively or quantitatively”.

As generative AI models become more precise, the manner in which a user prompts the model becomes a valuable element that the user may wish to protect. This protection may be achieved by treating the prompts as trade secrets under the Trade Secrets Act (Chapter 589 of the Laws of Malta). A trade secret is defined as information that:

  • is secret in the sense that it is not, as a body or in the precise configuration and assembly of its components, generally known among or readily accessible to persons within the circles that normally deal with the kind of information in question;
  • has commercial value because it is secret; and
  • has been subject to reasonable steps under the circumstances, by the person lawfully in control of the information, to keep it secret.

The Maltese Data Protection Act (Chapter 586 of the Laws of Malta) and the subsidiary legislation made thereunder do not weigh in on the rights of data subjects in an AI context. Neither do they create any noteworthy exceptions to the position under the GDPR. The principles of data minimisation, purpose limitation and legitimate grounds for processing under Articles 6 and 9 of the GDPR, as well as the rights of the data subjects under Articles 12–22 of the GDPR, all need to be considered carefully by developers involved in the training of models and deployers of AI systems alike. Human oversight and the ability to fulfil the controller obligations in relation to data subject requests are principles that would need to be followed at all stages of AI development and deployment. Anonymisation techniques are equally important measures to consider, as promoted in, amongst others, the single piece of Maltese legislation that deals with the use of personal data (medical records) for, amongst other things, training AI models: the Processing of Personal Data (Secondary Processing) (Health Sector) Regulations (Subsidiary Legislation 528.10).

Legal tech is high on the agenda of legal professionals. This brings with it ethical considerations, including the impact on professional secrecy and legal privilege when interacting with generative AI. The UK Bar Council’s guidance on generative AI captures these issues well. Neither the Maltese regulator for lawyers (the Committee for Advocates and Legal Procurators within the Commission for the Administration of Justice) nor the lawyers’ representative body – the Chamber of Advocates – has issued any guidance, though it is expected that they soon will. Until then, AI is to be considered a useful tool that comes with its dangers and challenges and does not change the level of responsibility of lawyers to act ethically in accordance with the Code of Ethics that regulates the profession and their legal obligations resulting from, amongst other pieces of legislation, the Professional Secrecy Act (Chapter 377 of the Laws of Malta) and the Code of Organisation and Civil Procedure (Chapter 12 of the Laws of Malta).

As mentioned in 1.1 General Legal Background, liability in relation to the use of AI will continue to be governed by the principles of tort and contract law under the Civil Code and Commercial Code. The notion of acting in good faith as a bonus paterfamilias and of culpable negligence under Article 1033 of the Civil Code will apply to the deployment of AI.

Under Maltese law the technology itself would not have legal personality. It would therefore be the deployer or developer that would be ultimately responsible for harm caused by the use of AI. The determining factor would be the cause of the damage suffered by the injured party, whether this was a result of the wrongful use of the technology or a defect in the technology itself. In any event where the damage is suffered by a third party, the latter may opt to act against the deployer of the technology who directly caused the harm or even against the developer of the technology. Unless the developer is sued by the claimant, it would be up to the deployer to turn to the developer to recover the damages that the deployer may be made to pay the injured party.

Moreover, the Product Liability Directive, which was transposed into the Maltese Consumer Affairs Act, provides for a notion of strict liability whereby the producer (and in some instances the seller) of the AI system may be held liable for damage caused by a defect in their product, provided that the injured party proves the damage, the defect and the causal link between the two. The European Commission has, however, identified issues with the application of the Product Liability Directive to AI systems and, for this reason, has been working on an AI Liability Directive, whilst political agreement has been reached on amendments to the Product Liability Directive. Through these amendments, AI, as “software”, has been definitively included within the scope of this piece of legislation. AI system providers will therefore potentially be liable for any defective AI systems that are placed on the market. Manufacturers of AI systems will also be responsible for defects in the free and open-source software that they integrate into their systems.

Currently there are no proposed amendments to the liability regime for AI development and deployment. However, we would expect the necessary legal provisions to bring into effect the amendments to the Product Liability Directive to be drafted and discussed in Parliament over the coming months.

Algorithmic bias is one of the identified and well-documented risks of AI. Although no standards have been mandated by Maltese regulators and/or law to avoid the risk of algorithmic bias, developers of AI are guided by best industry practice. The obligations of explainability, transparency and auditability of solutions being imposed through the AI Act will act to minimise these risks in a harmonised fashion.

Prejudice caused as a result of algorithmic bias could be particularly relevant in areas such as employment, creditworthiness and insurability evaluations, amongst others. Where bias in the algorithm creates prejudice and damages are suffered, the liability principles mentioned above will apply.

The patchwork of legal frameworks that, directly or indirectly, deal with AI are intended to work together to provide comprehensive protection to persons (natural or legal) who are the subjects of the deployment of the AI systems. Amongst these, data protection laws and principles remain of paramount importance. The transparency and explainability provisions in the AI Act, coupled with the information obligations in the GDPR, should empower data subjects to make a conscious decision as to whether to allow their use of personal data, or otherwise, in given circumstances and for the explained purposes.

Article 22 of the GDPR, which empowers the data subject to object to the processing of his or her data where this is fully automated, including profiling, and could lead to legal effects concerning him or her or similarly significantly affect the data subject, is significant.

The extent to which AI systems are being integrated in every aspect of life and within different sectors necessarily brings with it the need to impose greater emphasis on the “by design” adoption of processes and procedures that ensure that the data subjects’ rights are respected throughout the lifecycle of the AI’s deployment. Short of legislative amendments to spell out these obligations in a clearer fashion in an AI context, harmonised guidance from the EDPB and other such bodies is expected to help shape the future of the way data is processed in an AI environment. The relevance of such guidance would be seen, for instance, in the use of personal data of the AI users that is captured and used by generative AI models.

With data, including personal data, being at the centre of AI and with AI being so pervasive, the legislative frameworks that deal with network resilience and cybersecurity gain critical importance. DORA and the NIS 2 Directive are amongst the EU legal frameworks which complement the Cyber Resilience Act and the Cybersecurity Act in this area.

The use of AI for facial recognition and biometrics is known to be one of the more sensitive uses of this technology and brings with it inherent risks to the privacy of individuals. Article 9 of the GDPR imposes a high level of care on the use of biometric data, which is treated as a special category of personal data.

The AI Act also largely tackles facial recognition, designating a number of uses of such techniques as prohibited, including real-time facial recognition in public places (save for certain exceptions), predictive policing, internet scraping of facial images to create databases and emotion inferencing at work or school. When not forbidden, facial recognition and biometrics are considered high-risk uses under Annex III.

Given the jurisdictional scope of the AI Act, similar to that of the GDPR, together with the level of fines that may be imposed in cases of breach, it is expected that biometrics and facial recognition will be regulated and harmonised to a large degree.

In addition to these specific laws, the use of facial recognition and biometrics is central to the fundamental human right of respect for one’s private and family life (Article 8 of the European Convention on Human Rights). The State has an obligation to ensure that this human right is safeguarded and should the police or any other State institution breach this human right, the State would be found liable in damages to the individual whose rights were breached.

As mentioned in 8.1 Emerging Issues in Generative AI and 11.2 Data Protection and Privacy, the use of fully automated decision-making, including profiling, needs to be clearly explained to data subjects, who would have the right to object to it under Article 22 of the GDPR where it could lead to legal effects concerning them or similarly significantly affect them. Moreover, a data subject has the right to know how the data was used and how the results were produced. The “black box” risk associated with full automation is therefore one that cannot be underestimated by the deployers of AI, who remain liable for the results produced by the system and damages that may result therefrom.

Risks related to automated decision-making arise not only where personal data is involved. Automated algorithmic trading, creditworthiness or insurability decisions are equally risk-prone and may lead to the deployer of the AI bearing the responsibility for the wrong decisions taken by the AI system. As mentioned above, the culpable fault principle of tort, as well as negligence in fulfilling one’s contractual obligations, may apply.

Paradoxically, it is in riskier areas such as health, education, finance and mobility that the greatest benefits of automation are likely to be seen. Until such time as the technology becomes completely dependable, with in-built auditable checks and balances that cannot be overridden and that control the use of the technology itself, human oversight remains of paramount importance and the technology should not be allowed to replace the professional. It is this human oversight, and the ability of the human professional to take the final decision, that aligns automation in AI with the professional ethics and regulatory requirements of regulated professions.

Transparency obligations underlie the professional use of AI in all sectors. This results from the patchwork of laws that regulate the industrial use of technology, be it the AI Act, GDPR, or sector-specific regulation. The use of chatbots and other technologies that render services that are generally provided by natural persons, is no different. Users are to be made aware that they are interacting with an AI technology and must be given the opportunity to stop this communication or request that they interact directly with a natural person.

Deployers of AI are responsible for the actions taken by them on the back of the technology used. Should the use of AI lead to anti-competitive conduct by the deployers of the technology, whether this relates to abuse of a dominant position, or collusion, the deployer will be responsible for the anti-competitive behaviour. Competition law does not distinguish or make exceptions for anti-competitive behaviour that results from automated functions in a technology. Even in this scenario, therefore, human oversight remains imperative.

Deployers of AI are ultimately responsible for using the technology within their business practice. They should therefore ensure that the various obligations to which they are subject are reflected in a back-to-back manner in the procurement agreement with the AI supplier. In this manner they will ensure that they are able to turn to the supplier if they are obliged to pay damages resulting from their use of the technology. Furthermore, certain sector-specific laws and regulatory directives may impose obligations on licensed entities in relation to the outsourcing agreements they have with third parties, including AI suppliers. This is the case, for instance, with DORA and the “Guidance on Technology Arrangements, ICT and Security Risk Management and Outsourcing Arrangements” issued by the MFSA (which is based on the EBA Guidelines) in relation to licensed financial service providers, where certain obligations would need to be inserted in the outsourcing agreements.

Automation in the field of employment is one of those areas where Article 22 of the GDPR, relating to automated decision-making, is of critical importance. Fully automated processes that lead to the selection of candidates for a job are legally risky and could give rise to discrimination, challenge and, ultimately, damages being borne by the employer.

The same concerns that arise with regard to hiring and termination practice may also apply to employment performance analysis and monitoring. Moreover, using AI tools to draw inferences about an employee’s emotions when at work is forbidden under the AI Act.

The use of AI in digital platforms is a given in today’s world. Digital platforms thrive on data they obtain from their users. Consequently, data protection legislation and enforcement remains key to curbing abuse. Other EU instruments of note that will help shape the future of this industry are the Digital Markets Act and the Data Act, which, in their own ways and from their own angle, seek to mitigate the conglomeration and control of data by gatekeepers.

The financial services industry is one of the greatest net beneficiaries of AI, and use of the technology is widespread in the sector, whether in the provision of services, for purposes of marketing or internally for risk management.

This highly regulated industry is modelled through a patchwork of laws and regulations that tackle and curb the risks of the use of technology, including AI, from different angles. The main risks identified by the MFSA in the “Artificial Intelligence” edition of its “FinSights: Enabling Technologies” awareness series are: accountability; black box algorithms and lack of transparency; data quality; (restricted) competition; (inconsistency in and fragmentation of) regulation; and discrimination.

The AI Act itself tackles a number of these issues, mandating transparency, explainability and auditability to different degrees depending on the level of risk posed by the use of the technology, and classifying creditworthiness and life insurance assessments as high-risk uses to which greater scrutiny and more onerous obligations apply.

Moreover, DORA obligations, which include proper risk management, incident response preparedness (including resilience testing), incident reporting obligations and management of ICT third-party risk, would apply, as will the MFSA Guidance on Technology Arrangements, ICT and Security Risk Management (subject to modification in order to supplement DORA obligations).

Likewise, GDPR obligations of transparency, explainability, data minimisation and purpose limitation, along with the data subject rights, including the right to object to the use of one’s data by fully automated systems that may produce legal effects or significantly affect the data subject, also apply to the use of AI.

Confidentiality and professional secrecy considerations impact the licensed providers’ interaction with generative AI and large language models, whilst the Data Act obligations relating to the data owner’s control rights, where the IoT is being deployed, may also apply.

Given the complexity of regulation in this industry, sector players are therefore advised to take a 360° view of the regulatory implications resulting from their use of AI.

Healthcare is known to be another high-risk scenario for the use of AI. Patient rights, professional responsibility, coupled with the risks of culpable negligence, ethical considerations, professional secrecy and the use of highly sensitive health data are all matters that need to be considered carefully when healthcare professionals are interacting with AI. In this regard, Malta has recently enacted the Processing of Personal Data (Secondary Processing) (Health Sector) Regulations (Subsidiary Legislation 528.10) to allow for the exploitation of health data by technology in a controlled environment. See 3.6 Data, Information or Content Laws.

The testing of autonomous vehicles on Maltese roads is still at an early stage, despite reports of intended tests in the public transport field and of an AI-driven traffic management system. Transport Malta does not appear to have proposed any changes to the highway code or to the laws that require vehicles to be driven by persons holding a licence issued in accordance with the law.

Product safety requirements in manufacturing apply irrespective of the use of AI made by the manufacturer.

As mentioned in 9.1 AI in the Legal Profession and Ethical Considerations, issues of professional secrecy, confidentiality and, in case of lawyers, legal privilege, are amongst the legal and ethical challenges that would need to be considered carefully by professionals when interacting with and using AI. It is expected that professional representative bodies will set standards to be followed.

As mentioned in 8.1 Emerging Issues in Generative AI, under the Maltese Copyright Act, in order for copyright protection to arise an “author” would need to be a natural person. Consequently, AI-generated works would not qualify for copyright protection unless a natural person can evidence, if challenged, that he or she did substantively participate in the creation process. Currently there are no Maltese court judgments to go by on this matter.

A similar interpretation would apply to the notion of inventor under the Patents and Designs Act (Chapter 417 of the Laws of Malta) whereby the right to a patent will apply to the “inventor” and only a “natural person or legal entity may file an application for a patent” (Article 9).

As mentioned in 8.2 IP and Generative AI, prompts used in generating a work through AI may be protected as trade secrets.

Although there are ongoing discussions about the need to provide protection to AI-generated works that do not infringe third-party rights, to date no legislative steps have been taken in this direction by Maltese legislators.

The use of OpenAI’s tools to create works and products brings with it the unknown of whether the created work infringes third-party rights over works that were used in the machine learning process. The use of such an infringing work would expose the user to potential liability for breaches of third-party rights, despite his or her ignorance of the fact. Additionally, the use of the generated work must comply with any licence conditions attached to the use of OpenAI’s tools.

Despite the manner in which AI is changing the way we operate and live, from a legal perspective, AI, like any technology, is deemed to be a tool in the hands of those who choose to use it. In this sense, the traditional tenets of our law that place the responsibility for the use of a tool on the professional or person who uses it, remain the basis of liability considerations.

Together with this, one must be cognisant of the complex regulatory reality industries operate in. Regulation does not work in silos, but a holistic approach to the regulatory obligations that kick in under the various frameworks and legal instruments must be taken and considered carefully when using AI.

A holistic legal and regulatory due diligence/impact assessment that is regularly revisited in view of changes in operations and/or law is a must in the complex world of interaction with AI. This will lead to full knowledge of the obligations expected from the deployer of the technology and will help put in place processes and procedures to ensure that the obligations are honoured. A culture of proper compliance will then need to be nurtured in the organisation through training and awareness programmes.

Ganado Advocates

171, Old Bakery Street
Valletta VLT 145
Malta

+356 2123 5406

lawfirm@ganado.com
www.ganado.com
