Artificial Intelligence 2024

Last Updated May 28, 2024

Switzerland

Law and Practice

Authors



Homburger is one of the largest Swiss law firms, with more than 160 experts. The firm acts as trusted advisor to companies and entrepreneurs doing business in Switzerland in all aspects of commercial law, including on the full spectrum of intellectual property and technology, corporate and financing transactions, antitrust, litigation and arbitration, regulatory proceedings and investigations, and tax law. The firm is renowned for pioneering legal work, uncompromising quality and its outstanding work ethic. Homburger’s IP/IT and Data Protection teams advise and represent clients in all matters relating to intellectual property, technology, and data protection. This includes structuring and negotiating intellectual property and technology transactions, supporting clients with technical know-how in IT, telecommunications and media, advising clients on data protection, and representing clients before courts and authorities in proceedings relating to intellectual property, data protection, and matters with a particular focus on technology.

At present, there is no legislation in Switzerland dealing specifically with AI, and there is no current initiative to enact general-purpose AI regulation (see 3.1 General Approach to AI-Specific Legislation for further details).

However, the majority of Swiss law is technology-neutral, and can therefore also apply to AI-based products and services. This includes the following.

  • Civil liability: liability for damages or specific performance, as set out in the Swiss Code of Obligations, whether under contract or tort, is based on the conduct of natural or legal persons. AI systems themselves therefore cannot be held liable, but the owner, operator or manufacturer of the AI systems can. The Product Liability Act could potentially apply to AI-powered products (see also 10.1 Theories of Liability).
  • Criminal liability: under Swiss criminal law, criminal liability primarily attaches to natural persons. Even legal persons such as companies are only criminally liable with respect to select offences, or if the responsible natural person cannot be identified due to organisational deficiencies. Where criminal offences are committed by or through the use of AI, the individuals developing the AI systems, manufacturing the AI-based products, or those operating the systems or products could be subject to criminal penalties. Where the offences are committed negligently, questions with respect to attribution and the standard of care are largely unresolved.
  • Data protection: Swiss data protection law (primarily the Federal Act on Data Protection) already applies to AI-based processing of personal data. In view of the rapid developments in this field, the Federal Data Protection and Information Commissioner (FDPIC) issued a statement confirming this view in November 2023. The FDPIC highlighted, in particular, the requirement of transparency with respect to the purpose, functionality and data sources of AI-based processing of personal data.
  • Intellectual property: intellectual property legislation, particularly the Patent Act and the Copyright Act, can apply to the collection and use of input for AI (eg, the training of the AI model), as well as to the generation of output by the AI. However, IP protection of AI-generated content is limited (see 15. Intellectual Property for further details).
  • Unfair competition: the Unfair Competition Act prohibits business practices that are misleading or otherwise unfair or that violate the principle of good faith. It applies both to vendors of AI systems/services and to other industries that make use of AI. Typical examples include providing incorrect or misleading information about one’s own business, or disparaging or inaccurately comparing the offerings of competitors. The Unfair Competition Act also prohibits the exploitation of unlawfully obtained trade secrets.
  • Employment law: this applies where AI is used in an employment context. The Swiss Code of Obligations, which governs employment relationships between private companies and individuals, provides for a general duty of care of the employer to respect and protect the employee’s personality rights and health, which can be affected by AI.

Notably, Switzerland is neither a member state of the European Union nor a member of the European Economic Area, so EU/EEA legislation applies in Switzerland only where it is specifically referenced in Swiss legislation.

Before the launch of ChatGPT in November 2022, the use of AI in Switzerland focused primarily on predictive AI. Much innovation in AI was concentrated in industries that have historically been strong users of technology in the country, such as finance, pharmaceuticals and medical devices, and robotics. By way of example, financial institutions have been using predictive AI extensively for fraud and money laundering prevention, portfolio and suitability analysis, and trading strategies. The most widespread use of “generative” AI in all industries was arguably text translation, with tools such as DeepL.

Following the general availability of ChatGPT, generative AI tools have seen a massive uptake in various industries in Switzerland. This includes “high impact” industries such as software and technology, media, pharmaceuticals, and finance, as well as industries where the use of generative AI is still in earlier stages, such as marketing and professional services. According to a study conducted by PwC’s strategy consultancy Strategy& in March 2024, the Swiss economy has the highest growth potential worldwide through generative AI and could expand by an additional 0.5 to 0.8% annually in the coming years based solely on the use of generative AI.

In its “Guidelines on Artificial Intelligence for the Confederation” published on 25 November 2020 (AI Guidelines), the Federal Council (Switzerland’s highest executive authority) makes the case that Switzerland should create and maintain an environment conducive to the research, development and application of AI in order to ensure high-quality living standards. The Swiss government does not specifically facilitate investments in the AI sector or specific technology, but favours a bottom-up approach. There are two essential pillars to incentivise the development and use of AI, as follows.

  • Regulatory conditions: the Federal Council strives to provide optimal regulatory conditions that encourage AI innovation. This involves adopting technology-neutral regulation that offers business and science stakeholders the freedom to choose and develop AI technologies.
  • Education, research and innovation: the Federal Council emphasises the importance of education, research and innovation as foundations for AI development. Education should enable everyone to acquire the digital skills necessary to live in an AI-driven digital society, and should offer advanced AI-specific education for specialists. Research should be left in the hands of Switzerland’s research institutions, including the choice of technologies and areas of research, and those institutions should be permitted to adopt their own guidelines for the use of AI.

Consistent with the principle of technology-neutral regulation, the AI Guidelines do not distinguish between different kinds of AI, such as generative AI or predictive AI.

Recently, multiple Swiss public research universities have established dedicated research centres and hubs for AI, aiming to combine researchers from different faculties and industry stakeholders and to facilitate AI start-ups and spin-offs. One prominent example is the ETH AI Center of the Swiss Federal Institute of Technology in Zurich (ETH Zurich).

The Swiss government has historically adopted a “wait-and-see” approach with respect to regulating AI, and has favoured industry self-regulation. However, in light of other regulatory initiatives regarding AI, notably the European Union’s AI Act and the Council of Europe’s proposed AI Convention, the Federal Council decided in November 2023 to review possible regulatory approaches that would be compatible with those two regulatory frameworks.

The results of this review are expected to be available by the end of 2024, and are intended to serve as a basis for the preparation of a specific regulatory proposal. The proposed regulation will then likely undergo an extensive consultation process prior to being introduced to the Swiss parliament.

There is no applicable information in this jurisdiction.

AI Guidelines

The AI Guidelines set out seven basic guidelines on the use of AI in Switzerland. They are not binding on the private sector as their primary purpose is to serve as a general frame of reference for the use of AI within the federal administration and to ensure a coherent policy. They can be summarised as follows.

  • Putting people first: the dignity and well-being of the individual as well as the common good should be paramount when developing and using AI.
  • Regulatory conditions: the Swiss government must continue to ensure the best possible regulatory conditions so that the opportunities offered by AI can be exploited (see 2.2 Involvement of Governments in AI Innovation).
  • Transparency, traceability and “explainability”: AI-based decision-making and interaction with AI systems should be clearly recognisable as such. The functioning of AI and its purpose should be disclosed in a responsible and legally compliant manner, and data sets used for the training or development of AI should be disclosed within the framework of legal obligations in order to facilitate monitoring.
  • Accountability: liability must be clearly defined when using AI. Delegating responsibility to machines must not be permissible.
  • Safety and security: AI systems must be designed to be safe, robust and resilient in order to have a positive impact on people and the environment. They must not be vulnerable to misuse or misapplication, and safeguards must be in place to prevent serious misuse.
  • Active shaping of AI governance: Switzerland should work to actively shape global AI governance, in particular through its membership in international organisations such as the UN, OECD, ITU and Council of Europe.
  • Involvement of all relevant national and international stakeholders: Switzerland should aim to include all relevant stakeholders, including the private sector, civil society and scientific experts, in the political decision-making process.

Fact Sheet Generative AI

In response to the rapid rise of generative AI chatbots, particularly ChatGPT, the Swiss government issued a “Fact sheet on the use of generative AI tools in the Federal Administration”, the most recent version dating from 18 January 2024. The goal of this fact sheet is to give practical guidance to the employees of the federal administration and other federal agencies on how to use generative AI tools for their daily work.

The fact sheet encourages responsible experimentation with generative AI tools, such as summarising publicly available sources, obtaining code suggestions or generating images for presentations. It also reminds users not to violate existing regulations and policies. In particular:

  • users must never enter government information classified as confidential or secret, subject to official or professional secrecy or contractual confidentiality obligations, or personal data;
  • users must critically review and verify any AI-generated output to be used for government functions, and should clearly indicate that AI tools were used where appropriate; and
  • users must at all times comply with applicable IT and cybersecurity regulations.

Similar fact sheets for the use of generative AI tools by government employees have been issued at cantonal (state) and local levels, eg, in the Canton of St. Gallen.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

At present, no judicial decisions dealing with AI in substance have been reported in Switzerland.

Given the lack of reported judicial decisions on AI, there are no established definitions of AI used by the courts. In the absence of definitions established by case law, Swiss courts are likely to refer to definitions established by future statutes (if any) or definitions used by the Swiss government, such as the Federal Council’s dispatch accompanying a draft act or guidelines and fact sheets adopted by the federal government or its agencies (see 5.2 Technology Definitions for further information).

OFCOM

The Federal Office of Communications (OFCOM) is considered the leading agency dealing with the regulation of AI in Switzerland. OFCOM’s regulatory authority generally comprises telecommunications, radio and television, and postal services. OFCOM plays an important role in initiatives regarding the digitalisation of the federal administration and is also the office tasked with leading the evaluation of possible regulatory approaches for AI (see 3.1 General Approach to AI-Specific Legislation).

FDPIC

The Federal Data Protection and Information Commissioner (FDPIC) is an independent agency responsible for data protection and freedom of information. In its role as supervisory authority, the FDPIC monitors and enforces compliance by federal bodies and the private sector with Swiss federal data protection legislation. The FDPIC’s views and guidelines on the application of data protection legislation to AI are of significant practical importance.

FINMA

The Swiss Financial Market Supervisory Authority (FINMA) is the independent agency responsible for financial-market supervision in Switzerland. FINMA’s regulatory mandate is to supervise banks, insurance companies, exchanges and financial institutions, among others. It is charged with protecting creditors, investors and policyholders and with ensuring that Switzerland’s financial markets function effectively. While FINMA’s regulatory authority is not specifically aimed at AI, it recently identified AI as a key trend in its risk monitoring report published in November 2023. FINMA stated that AI creates particular challenges in the financial sector in connection with the responsibility for AI decisions, the reliability of AI applications, the transparency and “explainability” of AI decisions, and the equal treatment of financial market clients. In its report, FINMA also states that it will review the use of AI by supervised institutions in line with the risk-based approach and the principle of proportionality.

CNAI

The Swiss federal government’s Competence Network for Artificial Intelligence (CNAI) is not a regulatory agency but is the only AI-specific body established within the federal government. The CNAI combines representatives of multiple federal departments and offices, and collaborates with AI experts in its “community of experts” and with other interested individuals, institutions, organisations and companies in its “community of practice”. In addition to drawing on expert knowledge and ensuring that knowledge gained from AI projects is shared within the federal administration, the CNAI has issued an official document for uniform AI terminology in the federal administration (see 5.2 Technology Definitions).

The CNAI has issued an official AI terminology document to introduce uniform use of terms throughout the federal administration. The CNAI terminology includes, inter alia, the following definitions.

  • Artificial intelligence or AI is defined as “building or programming a computer to do things that normally require human or biological skills (‘intelligence’)”. Examples include image recognition, speech recognition, language translation, visual translation and playing games with concrete rules.
  • Generative AI is defined as “AI systems that are trained on large amounts of data from the physical and virtual world in order to generate data themselves (eg, texts, imagery, sound recordings, videos, simulations, and codes)”.

At present, there are no definitions of equivalent significance issued by other government entities that would conflict with the definition of the CNAI.

Please see 5.1 Regulatory Agencies.

So far, there have been no reported enforcement actions regarding the use of AI in Switzerland.

The Swiss Association for Standardization (SNV), a private body and Switzerland’s member of CEN and ISO, is the primary forum for industry standardisation in the country. The SNV has not yet issued any standards related to AI, but is involved in the AI-related standardisation work of ISO.

Most standards in Switzerland are based on the relevant international standards developed in global or European standard-setting bodies. The most important international standard-setting bodies for Switzerland with respect to technology include ISO, IEC, ITU, CEN, CENELEC, and ETSI.

Given that Switzerland does not have AI-specific regulation at present, international standards relating to AI do not have the force of law in Switzerland. However, international standards could be taken into account by the courts when assessing the required standard of care in questions of liability.

Government use of AI in Switzerland varies significantly between the federal government and the different cantonal (state) and local governments, and is still in its early stages overall. While the federal government and most cantons are already using or evaluating some form of AI, many of the proposed applications do not raise major concerns. Examples include semantic search in legislative texts, summarisation of administrative guidance, automatic assignment of incoming questions or requests to the responsible departments for a response, and government chatbots.

Proposed new applications of AI in the areas of policing and criminal justice, on the other hand, have proven more controversial. One example is the Swiss Federal Office of Police (fedpol) working on an upgrade of its automated fingerprint and palmprint identification system (AFIS) to enable facial image comparison. The new system is expected to be operational in 2026, and will provide federal and cantonal immigration authorities, prosecutors, and border security with an additional means for identifying people. Currently, the proposed uses do not include real-time identification of individuals, which is highly controversial.

The most comprehensive list of government use of AI in Switzerland is compiled by the NGO AlgorithmWatch CH in its “Atlas of Automation Switzerland”.

There is no applicable information in this jurisdiction.

To the extent that such information is public, AI does not yet play a key role in national security matters in Switzerland. However, the Swiss Federal Intelligence Service (FIS) has stated in its most recent annual situation report “Switzerland’s Security 2023” that it expects foreign intelligence services to increase their use of AI to process and analyse the ever-increasing amounts of data being transmitted. The FIS also expects intelligence services to further improve their data-gathering capabilities domestically and abroad, which has implications for the FIS’ counter-intelligence and counter-espionage activities.

See 8.2 IP and Generative AI and 8.3 Data Protection and Generative AI.

IP protection of training data and AI models

The degree of IP protection of training data will depend on the nature of the data: if the training data consist of works that are subject to copyright, such as works of literature, newspaper articles, website contents, images, songs or software code, the Copyright Act prohibits unauthorised use, copying or distribution (among other actions). There are statutory limitations to copyright, but many of them will likely not apply in a commercial context (see IP infringement when training generative AI below). Unauthorised copying and distribution can trigger civil liability under the Copyright Act (injunctive relief, damages, disgorgement of profits) and, if committed with intent, constitutes a criminal offence.

Where the training data are not subject to copyright, as would likely be the case for statistical data, measurements, etc., or where copyright protection has expired, the data would have to be protected by other means. The same applies to AI models, which, at their core, are simply structured sets of data points (weights, biases and other parameters). With respect to these kinds of data, two main avenues of protection are available, as follows.

  • Contractual restrictions on use and disclosure: parties may contractually agree to protect and keep training data or AI models confidential and to use them only for specific purposes, be it under an NDA, a licence agreement or as part of a provider’s terms and conditions. These kinds of obligations are generally valid and enforceable under Swiss law. However, the available remedies in case of a breach may be difficult to enforce successfully in practice. While a party may obtain injunctive relief forcing the other party to cease unauthorised disclosure, the injunction cannot undo the disclosure and the potential loss of the confidential nature of the data. With respect to compensation, the actual losses suffered are likely to be difficult to evidence in a claim for damages. Contractual penalties are sometimes included in contracts to avoid this burden of proof, but penalties may be reduced by the court if found to be excessive. It is also unlikely that contractual penalties included only in the general terms and conditions of an AI provider would be enforceable, as they may be considered unusual and therefore not validly agreed to. In addition, contractual obligations only bind the parties to the contract, and not a third party that is exploiting a (potentially unauthorised) disclosure.
  • Trade secret protection: trade secret protection is likely available for both the training data and the AI model (see 15.2 Applicability of Trade Secrecy and Similar Protection for further details).

IP protection of generative AI output

According to the prevailing doctrine in Switzerland, inventions generated by generative AI are not eligible for patent protection and works created by AI, including texts, images, audio, video and software, are not eligible for copyright protection (see 15.1 Applicability of Patent and Copyright Law).

The output itself may however be protected contractually or as a trade secret if it is kept confidential (see IP protection of training data and AI models above and 15.2 Applicability of Trade Secrecy and Similar Protection).

IP infringement when training generative AI

The training of AI models usually occurs in two steps: the compilation of a training data set and the actual training of the model by iterating over the training data and adjusting the parameters. Both activities involve the creation of copies of data, and can therefore constitute copyright infringement. The Copyright Act does provide for two relevant limitations, but these are unlikely to apply in a commercial context, as follows.

  • The text and data mining exception under Article 24d of the Copyright Act only applies to the creation of copies for scientific research, which does not cover commercial purposes. The exception also expressly does not apply to computer programs (source code).
  • Under Article 24a of the Copyright Act, the creation of copies is permitted if they are transient or incidental, constitute a necessary step of a technical process, are made solely for a lawful use of the work, and have no independent economic significance. The last criterion is unlikely to be met in the training of AI models, and the copies made to create the training data set are not transient or incidental.

In addition, where the training of an AI model involves the use of unlawfully obtained data that is subject to trade secret protection, the training would also constitute unfair competition under the Unfair Competition Act.

IP infringement by generative AI output

The disclosure or distribution of AI-generated output can infringe on copyright where it reproduces sufficiently large parts of a copyrighted work without authorisation of the copyright holder.

Under the Swiss Federal Act on Data Protection (FADP), the processing of personal data is generally lawful if the principles set out in Articles 6 (eg, proportionality, purpose limitation) and 8 (data security) of the FADP are complied with. A legal justification (consent, overriding private or public interest, or as provided by law) is required for the processing to remain lawful if the processing deviates from these principles, or if the data subject has objected to the processing. This concept differs slightly from the EU GDPR, where a lawful basis is required for all processing.

In the context of generative AI, data protection is relevant both with respect to the training of the AI model and with respect to its operation, particularly regarding the data subject’s right to rectification and deletion, as follows.

  • Training: if the training data set contains personal data, the data protection principles have to be complied with (eg, the training of the AI model must be a “purpose that the data subject can recognise”) or a legal justification needs to be applicable. However, if the personal data is scraped from publicly available sources such as the internet and the data subject has not expressly objected, the processing is presumed to be lawful under Article 30(3) of the FADP.
  • Generation: depending on the input (prompt) provided, generative AI models can output personal data. Whether AI models actually contain personal data is subject to debate, given that Switzerland follows a relative concept of what constitutes personal data (ie, the same data may be personal data for one party but not for another). In our view, generative AI models do not store personal data, and the right to rectification or deletion would not require the deletion, re-training or fine-tuning of the entire AI model. Rather, it would be sufficient for the operator of the AI-based application to filter out the inaccurate output that gives rise to a claim for rectification or deletion.

Use of AI tools

Apart from translation tools, early applications of AI in the legal profession in Switzerland were primarily focused on reviewing large numbers of documents as part of legal due diligence in M&A, or as part of internal investigations.

However, the availability and use of AI tools is currently undergoing rapid change. Ever since ChatGPT was released to the public, generative AI based on large language models has attracted significant interest from law firms and in-house legal departments. Tasks that Swiss law firms are experimenting with include summarisation, the generation of analyses and memoranda based on legal source documents, the drafting of contracts, and the analysis of weaknesses and inconsistencies in argumentation. Another emerging application is semantic document search in a law firm’s or legal department’s own document database.

Certain legal information providers are also experimenting with generative AI to offer semantic document search and the AI-based answering of questions based on their database of legal literature, scholarly articles and court cases.

Legal and ethical concerns

The main legal and ethical concerns in the use of AI, and generative AI in particular, relate to professional secrecy and the quality of legal advice, as follows.

  • Professional secrecy: Swiss attorneys are obliged to keep all information entrusted to them by their clients confidential, pursuant to Article 13 of the Lawyer’s Act and to the contract governing the attorney-client relationship. In addition, the intentional breach of the professional secrecy obligation by attorneys or their support staff constitutes a criminal offence under Article 321 of the Swiss Criminal Code. In the absence of specific legislation on the acceptable level of risk when outsourcing client information, many Swiss law firms have been reluctant to use foreign cloud-based applications to process or store any client information. While this has gradually changed over the past few years, the use of cloud-based AI applications remains a concern with respect to maintaining professional secrecy, particularly in view of reservations regarding foreign lawful access to client information.
  • Quality: the accuracy of AI-generated texts is a major concern for the legal profession. The tendency of large language models to inaccurately rephrase statements of legal significance or to hallucinate information in very convincing ways often necessitates a careful review of every single statement generated, reducing the intended efficiency gains. A newer generation of generative AI tools aims to counteract this with improved support for citations from source documents, but the need for critical review remains. Although there are no reported examples of hallucinated content leading to liability in the Swiss legal profession, examples from the United States have made legal practitioners in Switzerland cautious about using generative AI for legal work.

Only natural and legal persons can be held liable under Swiss private law. When using AI-powered products or services, the key question that arises is therefore which person the liability for the conduct of the AI is attributed to. Liability for damage or losses caused can be either contractual or non-contractual (tortious).

Contractual liability

Pursuant to Article 97 of the Swiss Code of Obligations (CO), a party to a contract is liable for failing to fulfil its contractual obligations unless it can prove that it was not at fault (ie, that the failure occurred without intent or negligence). Pursuant to Article 101 CO, the contractual liability extends to the conduct of any auxiliaries, such as employees or subcontractors, that are used in the performance of the contractual obligations; only natural or legal persons can qualify as auxiliaries. If the use of AI leads to a breach of a contractual obligation, the key question will be whether the breaching party can prove it did not act negligently in its use of AI.

Non-contractual (tort) liability

Article 41 CO covers non-contractual liability, where any person who unlawfully causes damage to another person is obliged to provide compensation. The application of this provision further requires intent or negligence on the part of the person causing the damage. Article 55 CO extends this liability to an employer for the conduct of their employees or other auxiliary staff.

This liability regime is supplemented by the Product Liability Act, which imposes liability on the manufacturer of defective products. However, it is not yet settled whether software such as an AI system qualifies as a product within the meaning of the Product Liability Act, nor to what extent an individual AI system or component can be considered defective when the technology as such is known to produce unintended results. Furthermore, the manufacturer of a defective product is not liable if they can prove that the defect could not have been detected based on the state of the art in science and technology at the time the product was placed on the market.

There is no applicable information in this jurisdiction.

At present, there is no specific regulation on algorithmic bias in Switzerland. However, the fundamental right to equal treatment and non-discrimination contained in the Federal Constitution applies throughout the Swiss legal system and can also affect the relationship between private parties (eg, between employee and employer) in addition to the state.

Please refer to 8.3 Data Protection and Generative AI for general remarks on the lawfulness of processing and to 11.4 Automated Decision-Making for the consequences of fully automated decisions.

One key point in which the Federal Act on Data Protection (FADP) differs from other data protection laws is that fines for criminal offences of up to CHF250,000 are imposed on the responsible individuals (natural persons), not on the company. The main offences under the FADP include the intentional violation of information and cooperation obligations, of duties of care (eg, in connection with delegation to a processor), and of data protection secrecy obligations.

While it is unlikely that the use of AI as such will constitute a criminal offence under the FADP, the individuals responsible for the design and implementation of data processing using AI should carefully review data protection compliance to avoid personal criminal responsibility.

Biometric information that uniquely identifies a natural person expressly qualifies as “sensitive personal data” under the Federal Act on Data Protection, which means that such data may not be disclosed to third-party controllers without justification and, where the consent of the data subject is required for processing, the consent must be explicit.

The public’s unease regarding facial recognition and biometrics allegedly already in use by police forces in certain cantons has led to legislative initiatives in multiple cantons to expressly prohibit their use.

Automated decision-making is addressed by the Federal Act on Data Protection. Its Article 21 obliges a data controller to inform the data subject if a decision that has a legal or otherwise significant consequence for the data subject is made solely by the automated processing of personal data. This obligation does not apply in the context of entering into an agreement if the request of the individual is granted in full (eg, a loan application is fully approved by an automated decision). Where the data subject has to be informed, they are entitled to request that a human review the automated decision. Intentionally failing to inform the data subject as required by Article 21 constitutes a criminal offence for which a fine of up to CHF250,000 may be imposed on the responsible individual.

Article 21 is not applicable to decisions recommended by AI but manually approved by a human, even if the level of human review is superficial. There is a pending initiative in the Swiss parliament to include similar information obligations for any decision significantly based on AI, but it is likely that further consideration will be postponed until the Federal Council’s review of potential approaches to general AI regulation is completed (see 3.1 General Approach to AI-Specific Legislation).

There is no general obligation for businesses to disclose the use of AI in Switzerland. An obligation of transparency may exist with respect to individual uses of AI, such as:

  • when processing personal data with AI, the personal data collected and the purpose and other important circumstances of the processing must be transparent to the data subject (otherwise such processing requires a legal justification); and
  • where a business uses AI to describe or advertise its products or services, doing so without disclosing the use of AI could, in theory, violate the Unfair Competition Act; this might be the case if product pictures are AI-generated rather than photos of real products (to the extent this is not obvious to the potential customer).

The antitrust implications of price-setting using AI are complex and not yet clearly established under Swiss antitrust law. While it is clear that the use of AI to implement pre-agreed horizontal price fixing remains illegal, the possibility of AI autonomously coordinating prices among competitors is being discussed among scholars, and there is no settled case law yet.

However, in a vertical price-fixing case, the Swiss Federal Supreme Court held that the communication of manufacturer-suggested retail prices by the manufacturer to its distributors through an automated electronic system with daily price updates constituted an illegal agreement on fixed prices, since all distributors and retailers had reason to believe that these prices would also be used by their competitors (and they in fact complied with the suggested pricing).

As AI models are only as good as the data they were trained with, businesses procuring AI solutions intended for productive use should include provisions regarding the quality of the training data and corresponding properties of the AI model (eg, no discriminatory bias, no infringement of IP rights) in their procurement contracts. Otherwise, they themselves risk becoming liable for the output of the AI solutions. While the use of due care is a possible defence under most provisions giving rise to liability under Swiss law, businesses ignoring well-known risks inherent in current AI technology may face the accusation of not having used the required standard of care.

Hiring

Résumé screening software (with or without AI) has been used in Switzerland for some time. Under the principle of freedom of contract, private-sector employers in Switzerland are not required to hire individual applicants, and, by default, are not required to explain why an applicant was rejected.

However, under the Gender Equality Act, employers are not permitted to discriminate against applicants based on gender in the selection process: if an applicant is rejected due to gender-based discrimination, the applicant is entitled to a written explanation for the rejection and, if they can prove discrimination, to compensation under Article 5(2) of the Gender Equality Act. This does not apply, however, at the stage of job ads, where employers are permitted to advertise a position for a specific gender only. Where AI-based selection tools discriminate based on gender, for example due to inherent gender bias in the training data, the employer may become liable.

In addition, if any rejection of an applicant is made by an automated decision without human control, the applicant must be informed, and is entitled to have a human review the decision based on data protection law (see 11.4 Automated Decision-Making).

Termination

Termination of a private-sector employment relationship is governed primarily by the Swiss Code of Obligations. In principle, employers do not need specific legal grounds for termination, but must provide a written explanation upon the employee’s request. However, Article 336 of the Swiss Code of Obligations prohibits wrongful termination, including termination due to a personal characteristic of the employee (eg, age, ethnicity, religion), unless this characteristic is related to the employment relationship or significantly affects work in the business. If an employee is terminated based on recommendations by AI (eg, due to inherent bias in the training data), the employer may therefore become liable for compensation for wrongful termination (up to six months’ salary).

The prohibition of gender-based discrimination under the Gender Equality Act and the provisions on automated decision-making also apply to any termination that is discriminatory or automated, respectively.

Monitoring and surveillance are considered detrimental to employee health and are therefore subject to multiple restrictions. Article 328 of the Swiss Code of Obligations provides for a general duty of care of the employer to respect and protect the employee’s personality rights and health. Based on this principle, all measures that can affect employee health and wellbeing must be proportionate to the (legitimate) aim pursued. In addition, Article 26 of Ordinance 3 to the Employment Act specifically prohibits the use of systems to monitor the general behaviour of employees in the workplace. Such systems are only permitted where they pursue legitimate aims, such as security, performance or efficiency, and only if their use is proportionate. In practice, this means employers must be able to prove that less intrusive alternatives were not sufficient to achieve the aim. Specifically, the FDPIC has stated that AI-based systems for the automated evaluation of employee-related data (eg, vision, movement, communication patterns) are prohibited under these provisions.

Employees are also entitled to employee participation regarding all matters that concern the protection of their health. They must be sufficiently informed of measures and circumstances affecting their health, and have a right to be consulted before the employer takes relevant decisions. This includes the right of employees to make suggestions, and the obligation of the employer to explain decisions that do not take employees’ objections into account.

At present, there is no specific regulation regarding the use of AI in digital platform companies, and general observations regarding the use of AI in data protection and potentially in employment contexts also apply to digital platform companies. Extensive use of AI to control the conduct of platform participants (eg, which participants get to serve which customers when, and at what price) may lead to increased findings by courts that the platform participants lack autonomy and are in fact employees of the digital platform companies.

At present, there is no specific regulation regarding the use of AI in financial services. General financial services regulation, including with respect to risk management, also applies to the use of AI. As set out under 5.1 Regulatory Agencies, FINMA has identified AI as a trend in its risk monitoring report, and will review the use of AI by supervised institutions.

At present, there is no specific regulation regarding the use of AI in healthcare. The primary areas of existing law governing the use of AI in healthcare are sector-specific regulation, such as the Therapeutic Products Act and the Medical Devices Ordinance, which specifically includes software in its definition of medical devices. Where AI-based products qualify as medical devices, they need to comply with the general safety and performance requirements set out in Annex I to the EU’s Medical Device Regulation (by virtue of an express reference to EU legislation). Medical devices placed on the market in Switzerland require a conformity marking (CE label), predominantly based on self-assessment by the manufacturer. The relevant assessment criteria depend on the applicable risk category.

In addition, the Federal Act on Data Protection also applies in the healthcare sector, although in the area of research on humans it is supplemented and partially overridden by the Human Research Act.

Fully autonomous vehicles are not yet permitted on Switzerland’s streets (outside of specific, limited pilot experiments), and drivers must always have their hands on the steering wheel. However, the Swiss parliament amended the Road Traffic Act in 2023 to permit the Federal Council to authorise vehicles with automatic driving systems in a delegated act. In the near future, drivers of such vehicles may therefore let go of the steering wheel, provided they remain ready to resume operating the vehicle themselves if the system indicates a need for manual intervention or otherwise reaches its limits.

At present, there are no specific regulations regarding the use of AI in manufacturing. The liability aspects of manufacturing primarily depend on whether there is a contractual relationship or not (see 10.1 Theories of Liability). Where the use of AI causes manufactured products to be defective, this may trigger liability of the manufacturer under the Product Liability Act.

Please refer to 9.1 AI in the Legal Profession and Ethical Considerations for the main concerns. Many professional services occupations are subject to similar professional secrecy obligations as lawyers (accountants, doctors, other medical professionals, etc.), and most professional services firms are concerned about the confidentiality of the information provided by their clients as well as the accuracy of their advice.

At present, there are no reported judicial or agency decisions in Switzerland on whether AI technology can be an inventor for patent purposes or an author for copyright purposes. However, the prevailing doctrine among legal scholars is that AI can neither be an inventor nor an author.

Patent law

Article 3 of the Swiss Patent Act refers to an “inventor” being entitled to a patent. Prevailing doctrine in Switzerland provides that only a natural person (ie, a human) can be an inventor for the purposes of the Patent Act, which excludes both legal persons and AI. Switzerland is party to the European Patent Convention (EPC), which contains analogous wording in its Article 60. With respect to the EPC, the European Patent Office’s Legal Board of Appeal ruled on 21 December 2021 in the DABUS case (case no. J 0008/20) that AI may not be named as an inventor.

The foregoing applies to the extent that the AI itself generates the invention without significant human contribution. If a human uses an AI application solely as a tool to make a discovery, in the same way they might use other computer software, the human using the AI as a tool would be considered the inventor under the Patent Act, and the invention may be eligible for patent protection.

Copyright law

Article 2 of the Swiss Copyright Act defines a work in which copyright may subsist as an “intellectual creation of literature or the arts with individual character”. The criterion of “intellectual creation” is widely interpreted as an expression of the human mind. Consistent with this interpretation, Article 6 of the Copyright Act states that “the author is the natural person who has created the work”. A work within the meaning of copyright can therefore only be created by a human, and not by AI.

While AI may not be an author and purely AI-generated content can therefore not be protected by copyright in Switzerland, works created using AI may be protected. If AI is used solely as a tool by the human author to express their thoughts, for example by having generative AI translate or edit existing text of the author or by modifying pictures similar to what an artist might do with photo editing software, the resulting AI-generated work may still be subject to copyright protection. Equally, if a human sufficiently modifies purely AI-generated content or is creative in the selection or ordering of pieces of AI-generated content, the resulting overall work may also be protected by copyright.

Swiss law uses the terms “manufacturing secrets” and “business secrets”, rather than trade secrets, but there is no distinction in practice. While there is no statutory definition, case law defines trade secrets as any information that: (1) is not publicly known, (2) has commercial value, (3) the owner has a legitimate interest in keeping secret, and (4) the owner intends to keep secret. A curated training data set, an AI model, and the source code of an AI application can therefore constitute a trade secret if the foregoing criteria are met.

Trade secrets are not protected as an absolute right (such as copyright) under Swiss law, but their disclosure or misappropriation is prohibited under specific circumstances, as follows.

  • The intentional disclosure of a trade secret by a person subject to a statutory or contractual obligation of secrecy, as well as the exploitation of a secret so disclosed by a third party, constitutes a criminal offence under Article 162 of the Swiss Criminal Code.
  • The disclosure or exploitation of trade secrets that were unlawfully accessed or obtained constitutes unfair competition under Article 6 of the Unfair Competition Act (UCA).
  • The unauthorised exploitation of a work result (eg, offers, calculations or plans) that was entrusted to a person constitutes unfair competition under Article 5(a) UCA.
  • The exploitation of a work result by a third party that knew or should have known that the work result was unlawfully made available to it constitutes unfair competition under Article 5(b) UCA.
  • Inducing an employee, agent or other auxiliary staff to disclose a trade secret of their employer constitutes unfair competition under Article 4(c) UCA.
  • The disclosure of trade secrets by employees and agents is further prohibited by statutory law applicable to employment agreements (with certain limitations after the expiration of the employment agreement) and to agency agreements.

All examples of unfair competition above can trigger both civil and criminal liability.

See 15.1 Applicability of Patent and Copyright Law.

Using OpenAI’s products, particularly ChatGPT, touches on multiple issues relating to intellectual property, both on the “input” and on the “output” side, as follows.

  • Providing a copyrighted work as input to ChatGPT without authorisation by the copyright holder may constitute copyright infringement.
  • Providing input that is confidential may constitute a breach of contractual confidentiality obligations or trade secrecy.
  • Using content generated by ChatGPT may constitute copyright infringement if the content includes sufficiently large parts of a copyrighted work (although OpenAI has been working to prevent this type of output in successive updates).
  • As content generated by ChatGPT cannot be protected by copyright in Switzerland, a business has no exclusivity for that content and competitors are generally free to (re-)use the same content (outside of limited restrictions under the Unfair Competition Act).
  • If the content generated by ChatGPT is published and contains inaccuracies with respect to the offerings of the business itself (eg, promoting features its products do not have) or of its competitors (eg, inaccurate comparisons to the competitor’s products), the publication may constitute unfair competition under the Unfair Competition Act.

Pursuant to Article 716a of the Code of Obligations, the board of directors of a Swiss corporation has the non-transferable and inalienable duties to set the company’s overall strategy, to determine the company’s organisation and to supervise the executive officers.

When advising corporate boards of directors on identifying and mitigating risks associated with the adoption of AI, several key issues should therefore be addressed. These include the following.

  • Business impact: boards should understand the potential impact of AI on the business. This includes recognising how AI can transform business models, create new opportunities, and disrupt existing practices.
  • Technical competence: boards should ensure sufficient AI competence both within the board and in the executive teams. Boards should promote AI literacy and ensure that they have the necessary expertise to make informed decisions about AI. Ideally, at least one member of the board should have a technical background to provide leadership on AI topics.
  • Corporate governance: boards should integrate AI governance into their overall corporate governance frameworks. This involves defining clear roles and responsibilities for overseeing AI initiatives and compliance, and ensuring that AI aligns with the company’s strategic objectives. If no specific position is to be created, boards should consider allocating the AI portfolio to an existing officer with technical expertise, such as a CIO/CTO or the Data Protection Officer (DPO).
  • Risk management: given the potential legal, ethical and reputational risks of AI, boards should adapt their risk management frameworks for AI. This includes ensuring compliance with evolving AI regulation, managing data privacy and security concerns, and addressing potential biases in AI models.
  • Internal policies and guidelines: in line with the corporate governance and risk management aspects, boards should consider adopting internal policies and guidelines on the use of AI applications by employees (see also 17.1 AI Best Practice Compliance Strategies).

Given that Switzerland does not have AI-specific regulation at present, compliance strategies should be aimed at ensuring that the use of AI tools complies with the existing body of law, and should therefore focus on practical guidance for employees to use AI tools. Businesses should ideally adopt internal policies on the use of AI tools, clearly outlining which types of use are permitted and which are not.

Depending on the field of business and the contractual frameworks in place, this may include an outright prohibition on using publicly available generative AI tools (eg, public versions of ChatGPT), restrictions on the type of data that may be submitted as input (eg, no confidential information, no personal data) or restrictions on the use of output (eg, mandatory review for accuracy prior to publication).

Businesses should also invest in training their staff in AI literacy to help avoid issues based on ignorance or misunderstanding the nature and limitations of current AI technologies (eg, mistaking a generative AI chatbot for a search engine).

Homburger

Prime Tower
Hardstrasse 201
CH-8005 Zurich
Switzerland

+41 43 222 10 00

+41 43 222 15 00

lawyers@homburger.ch
www.homburger.ch

Trends and Developments


Authors



Kellerhals Carrard is a full-service business law firm with offices in Basel, Berne, Geneva, Lausanne, Lugano, Sion and Zurich, as well as representative offices in Shanghai and Tokyo. The firm has more than 500 employees, including 300 legal professionals, making it the largest law firm in Switzerland. Kellerhals Carrard provides a complete range of legal services for domestic and international clients and is known for its entrepreneurial spirit, business acumen and pragmatic approach. Innovation plays a central role in Kellerhals Carrard’s practice. The firm has broad expertise in combining legal and technical knowledge – a core competence that allows it to offer valuable guidance in the interdisciplinary field of AI. Through close cooperation with the legislative authorities and regulators, the firm is able to advise clients at an early stage on upcoming changes in the regulatory framework and the resulting strategic measures.

The Huge Potential for AI Applications in Switzerland

According to a recently published study, Switzerland has the greatest growth potential in the field of generative AI among 20 industrialised countries. In a best-case scenario, the technology could trigger a GDP boost of up to CHF50 billion by 2030. One of the reasons given for AI's enormous growth potential is that the technology and software sectors, as well as media, pharmaceutical and financial companies – all of which are strongly represented in Switzerland – are expected to be among the biggest beneficiaries of the new technology.

In order for Switzerland to benefit from this huge growth potential, a balanced regulatory framework is key. On the one hand, innovation must not be hampered by over-restrictive regulation or prohibitions. On the other hand, with a view to safeguarding fundamental rights, creating trust in society and complying with international standards (eg, the Council of Europe’s Convention on AI), it is also important to address the risks associated with the use of AI and to take appropriate measures in good time. In the past, Switzerland has always managed such balancing acts well, thanks primarily to its principle-based and technology-neutral legislation.

Switzerland’s Principle-Based and Technology-Neutral Approach

In general, Swiss law is principle-based and designed in a technology-neutral way. Instead of specific obligations for concrete situations or technologies, the law provides for broad principles and objectives. This approach allows for a high degree of flexibility and adaptability, not only for the legislator but also for companies. As long as the principles and objectives are adhered to, it does not matter which technologies are used.

Legal uncertainties that may arise with principle-based legislation can be countered with communications from authorities that regularly publish their practices, views and expectations. In the area of financial-market law, for example, the Financial Market Supervisory Authority (FINMA) uses the instrument of “circulars”, with which it can specify open, undefined legal terms and set guidelines for the exercise of discretion to ensure uniform and appropriate practice. In our experience, most authorities in Switzerland are also prepared to respond to inquiries and other requests regarding the legal assessment of new business models (rulings; no-action letters).

Against this backdrop, a 2019 report to the Federal Council by the State Secretariat for Education, Research and Innovation (SERI) concluded that there is no fundamental need to adapt the existing Swiss legal system with regard to artificial intelligence. However, the SERI also stated that this assessment may change rapidly given the pace of technological developments.

Indeed, much has happened since the end of 2019, when the SERI’s report was published. In April 2021, for example, the EU Commission presented its first draft of the AI Act. With the release of ChatGPT in November 2022, generative AI – and AI in general – has become increasingly relevant within society and the economy. Various initiatives have since been submitted to the Swiss parliament, addressing, among other things:

  • protection against discrimination by AI;
  • certain declaration and transparency obligations;
  • strengthening of the participation rights of employees;
  • clarification and general improvement of the legal framework conditions; and
  • regulation of AI within the EU.

Just four years after the SERI report, the Federal Council has tasked the Federal Department of the Environment, Transport, Energy and Communications (DETEC) with submitting a new report by the end of 2024 that identifies possible approaches to regulating AI in Switzerland. This analysis is meant to serve as a basis for the Federal Council to issue a specific mandate for drafting an AI regulation in 2025.

It can therefore be assumed that AI will also be incorporated more explicitly into Swiss legislation in one form or another. Experience suggests that the Swiss legislator will take guidance from EU regulation, but without abandoning Swiss lawmaking principles. Unlike in the EU, we therefore expect not horizontal legislation but rather sector-specific and selective amendments to existing laws. It will be interesting to see how Switzerland strikes a balanced approach here.

How to Meet the Regulator’s Expectations for AI in Financial Services

Due to the large quantity of data, the financial market offers a particularly promising field for the application of AI. In 2021, FINMA conducted surveys of selected banks to draw up an initial inventory of the areas in which AI applications are used. The results showed that these include, among others, the following:

  • client and transaction monitoring, for example in relation to money laundering, credit-card abuse or payment-transaction fraud;
  • portfolio and suitability analysis;
  • trading systems and strategies; and
  • process automation in document processing, IT and HR, as well as in marketing and sales promotion.

A FINMA survey of Swiss insurance companies showed that AI is already widely used, particularly in customer interaction, claims processing, sales, and pricing. In terms of governance, insurers have begun to institutionalise committees to consolidate and further develop their AI-specific processes.

According to its Risk Monitor 2023, FINMA sees particular challenges in the use of AI in the following four areas, and expects the financial industry to address them accordingly.

Governance and responsibility

Decisions are increasingly based on the results of AI applications, or are even carried out autonomously by AI applications. This makes the control and allocation of responsibility for the actions of the applications more complex. There is an increased risk that errors will go unnoticed, and that accountability will be unclear. FINMA requires that clear roles and responsibilities, as well as risk-management processes, be defined and implemented, as responsibility for decisions cannot be delegated to AI or third parties. In addition, all parties involved are required to have sufficient expertise in the field of AI.

Robustness and reliability

AI applications are based on large amounts of data. This creates potential risks resulting from poor data quality (eg, unrepresentative data). In addition, AI applications optimise themselves automatically, which can result in incorrect further development of the model (known as “drift”). It is therefore questionable whether such applications are reliable enough to be used autonomously. Finally, the increased use of AI applications and the associated outsourcing and cloud usage also increase cybersecurity risks. FINMA requires that results be sufficiently accurate, robust and reliable in the development, adaptation and application of AI, which means that data, models and results must be critically scrutinised.

Transparency and “explainability”

In AI applications, the influence of individual parameters on results can often no longer be traced due to the large number of parameters and the complexity of the models. Without an understanding of how results are reached, there is a risk that decisions based on AI applications cannot be explained, understood or verified, making effective controls impossible. In addition, customers cannot fully assess the risks if the use of AI is not disclosed. In the case of insurance tariffs, for example, the use of AI could result in a tariff that is no longer properly understood and therefore cannot be explained transparently to customers. FINMA requires that the “explainability” of an application’s results and the transparency of its use be ensured.

Equal treatment

Many AI applications use personal data to assess individual risks (eg, in pricing or lending) or to develop customer-specific services. If data on certain groups of people is not sufficiently available, this can lead to distortions or incorrect results for those groups. If products and services are offered on the basis of such incorrect results, unintended and unjustifiable unequal treatment may arise. In addition to legal risks, unequal treatment also carries reputational risks. FINMA requires that unjustifiable unequal treatment be avoided.

FINMA will review the use of AI by supervised institutions (eg, banks and insurance companies) and will continue to closely monitor developments in the use of AI in the financial industry, remain in contact with relevant stakeholders, and follow international developments.

Developments in AI and the Processing of Personal Data

AI and the processing of data are inextricably linked. AI is trained with data (training data), processes data (input data) and generates new data (output data). If personal data is processed, the Federal Act on Data Protection (“FADP”; in force since 1 September 2023) generally applies.

The requirements of the FADP must be observed during the development, training and use of AI applications. Particular attention must be paid to processing principles such as proportionality (eg, data minimisation, need-to-know principle), transparency, purpose limitation and accuracy, as well as – following a risk-based approach – data security. In certain circumstances, particularly when new technologies are used, data-protection impact assessments must be carried out before personal data is processed; this can be the case in both the development and the application of AI systems. Further, compliance with information obligations and data subject rights must be ensured, and data-processing agreements must be in place where the processing of personal data is outsourced to a processor. If data recipients (eg, the provider of an AI system) are located abroad, additional measures must be taken if the recipient country does not have an adequate level of data protection from Switzerland’s perspective.

The violation of certain obligations under the FADP is subject to sanctions. Unlike in the EU, it is not the company that is sanctioned, but the natural person – ie, the responsible employee. The fine can be up to CHF250,000. As these fines can neither be insured against nor covered by the employer (the company), it is important to clearly allocate responsibilities and tasks – eg, within the framework of internal data-protection policies.

In a (non-binding) press release, the Federal Data Protection and Information Commissioner (“FDPIC”) emphasised that, regardless of the approach taken in future regulations, existing data-protection provisions must be adhered to.

In particular, the FDPIC requires providers and/or deployers of AI applications to do the following:

  • ensure compliance with the FADP in order to protect the digital self-determination of individuals to the greatest extent possible;
  • make the purpose, functionality and data sources of AI-based processing transparent;
  • inform data subjects whether they are speaking or corresponding with a machine (eg, chatbots), and whether the data they have entered is being processed to improve the self-learning programmes or for other purposes;
  • make the use of programmes that enable “deepfakes” of identifiable persons clearly recognisable; and
  • refrain from using applications if they are intended to undermine the privacy and informational self-determination protected by the FADP (similar to AI systems prohibited under the AI Act).

The question of who must comply with which obligations under the FADP depends on the respective roles of the parties involved. In practice, it is often the case that a company deploys an AI tool from a third-party provider, but can determine the purposes and means of data processing itself. In this situation, the company usually qualifies as the controller, while the provider qualifies as the processor. The parties must conclude a data-processing agreement and, if necessary, take precautions with regard to any transfer abroad.

The FADP contains – similar to Article 22 of the GDPR – a provision on automated individual decision-making, which is regularly relevant for AI applications. Pursuant to Article 21 para. 1 of the FADP, the controller must inform the data subject of any decision based solely on automated processing that produces legal effects concerning the data subject or significantly affects them. A decision is based solely on automated processing if no natural person has made a substantive assessment and a decision based on it. Pure “if-then” decisions are excluded; a certain degree of discretion is required. In the event of such a decision, the controller must, upon request, give the data subject the opportunity to express their point of view. The data subject may also request that the automated individual decision be reviewed by a natural person (ie, an employee of the controller). Although this person must be authorised to change the decision, the data subject has no right to have the decision or the decision criteria changed, nor must the controller justify the decision once the data subject has exercised these rights. This special information obligation does not apply in the following cases:

  • if the automated decision is directly related to the conclusion or performance of a contract between the controller and the data subject (unlike under the GDPR, the necessity of the automated decision for the contract is not a prerequisite), and the data subject’s request is granted; or
  • if the data subject has expressly consented to the automated decision.

Unlike under the GDPR, the information on automated individual decision-making and the related rights of the data subject can be provided in the course of, or even after, the decision, since this is sufficient for the data subject to assert their rights. Again unlike under the GDPR, the controller is not obliged to provide information about the logic involved or about the significance and envisaged consequences of such processing for the data subject.

However, information about the logic involved may nevertheless have to be provided if data subjects assert their right to information. Pursuant to Article 25 para. 2 let. f of the FADP, the data subject must be informed of the existence of automated individual decision-making and of the logic on which the decision is based. It is deemed sufficient to state the basic assumptions of the algorithmic logic underlying the decision; the algorithms themselves do not have to be disclosed. For example, it is sufficient to state that a contract will not be concluded due to a negative scoring result, provided that information is also given on the amount and type of information used for the scoring and its weighting.

How to Respect and Protect IP Rights in Generative AI

Swiss copyright law grants the author the right to determine whether, when and how their work is used (exclusive right). This raises the question of whether providers of AI applications may train their models on content that is freely accessible on the internet. In principle, the author’s consent is required for this, unless an exception applies (eg, personal use or scientific research).

The deployers of AI tools must also ensure that they do not infringe the copyrights of third parties. This would be the case, for instance, if the output of the AI tool largely reproduces the original work of the creator (eg, if the output cites whole book chapters) or if the generated output contains third-party trademarks. Although the so-called memorisation of GenAI tools – the phenomenon responsible for reproducing training content in the output – is comparatively rare from a technical perspective, such reproduction cannot be excluded. Further, the use of GenAI output is also subject to contractual agreements, such as the terms and conditions of the GenAI tool provider. Users are advised to read these terms carefully, as they address intellectual property considerations, the scope of usage rights in the GenAI output (the free-access version is often restricted to non-commercial use) and other aspects such as warranties, indemnification and liability.

Finally, the question arises as to whether content generated by AI applications (eg, a text) can be protected under Swiss copyright law. Since an AI tool does not constitute an “author” under Swiss law, AI-generated content is only protected if the output constitutes an intellectual creation originating from the user. Whether this is the case largely depends on the user’s input: if the AI application was used merely as a tool (similar to a painter using a brush), there should be no obstacle to granting protection to the output. In most cases, however, this threshold will not be met. In our opinion, copyright protection of AI-generated content therefore cannot be ruled out, but must be assessed on a case-by-case basis.

How to Comply with Unfair Competition Law in Marketing AI

There are many ways in which AI can be used in marketing (eg, personalised marketing or the automation of customer interaction, including chatbots). In addition, the surge of GenAI tools has driven one specific marketing use in particular: content creation (the generation of text, images, etc, for marketing purposes).

The Swiss Federal Act against Unfair Competition (UCA) does not (yet) regulate the use of AI in commercial communication. However, even without specific AI regulation, many cases are already covered by existing law.

The UCA prohibits, inter alia, making false or misleading business-related statements. If a company uses a chatbot, for example, it must ensure that the chatbot’s statements are correct and not misleading; in many cases, this is likely to require human supervision. Further, advertising must be recognisable as such. If, for example, an advertisement shows a picture generated by AI, a label indicating that it is advertising may be required. However, unlike under the EU AI Act, there is no AI transparency obligation under the UCA, and chatbots or AI-generated images (even those depicting persons) are therefore not yet subject to mandatory labelling. Labelling the use of AI tools might nevertheless be required under the general rules of the UCA – eg, if the advertisement is misleading, deceptive or otherwise in breach of the principle of good faith and could influence the relationship between competitors or between suppliers and customers (chatbots, deepfakes).

Fully AI-generated advertising brings the risk of creating confusion with competitors, making unfair comparisons or exploiting the work of others. Companies are therefore advised to (i) have a human in the loop when creating advertising with AI; and (ii) adapt or alter the content generated by AI tools before using it in a marketing context, in order to minimise violations of the UCA. Sales chatbots must also be designed in such a way that they do not use aggressive sales methods, since this constitutes an act of unfair competition.

Application of the EU AI Act for Companies in Switzerland

Due to its extraterritorial effect, the EU AI Act – like the GDPR – will also be relevant for companies in Switzerland. Such companies must check on a case-by-case basis, ie for each AI application, whether they fall within the scope of the EU AI Act, be it as a provider or deployer of AI systems or general-purpose AI models. This will be the case, in particular, if they make AI systems available in the EU, or if they use AI systems whose output is used in the EU.

Conclusion

Although there are currently no specific regulations for the provision, import, distribution and/or deployment of AI systems or general-purpose AI models in Switzerland, companies are well advised to address the challenges posed by such applications in good time. This means establishing governance for the development and use of AI tools that addresses not only compliance with applicable law (such as financial-market, data-protection, unfair-competition or IP law), but also ethical standards and upcoming regulation (such as the EU AI Act, where applicable). In addition to published court and authority practice, official publications (eg, from FINMA or the FDPIC) provide important guidance for legally compliant implementation. In individual cases, it may also be advisable to ask authorities for an opinion, no-action letter or other kind of ruling on a planned use case involving AI.

Last but not least, legal policy developments in Switzerland must be closely monitored. Particular attention must be paid to the forthcoming report on the need to regulate AI, which is expected at the end of 2024.

Kellerhals Carrard
Rämistrasse 5
8001 Zürich
Switzerland

+41 58 200 39 00

www.kellerhals-carrard.ch
