At present, there is no legislation in Switzerland dealing specifically with AI. Switzerland has recently signed the Council of Europe’s AI Convention and is expected to ratify and incorporate it into Swiss law some time after 2026 (see 3.1 General Approach to AI-Specific Legislation for further details).
However, the majority of Swiss law is technology-neutral, and can therefore also apply to AI-based products and services. This includes the following.
Notably, Switzerland is neither a member state of the EU nor a member of the European Economic Area (EEA), so EU/EEA legislation only applies to the country where it is specifically referenced in Swiss legislation.
Before the launch of ChatGPT in November 2022, the use of AI in Switzerland focused primarily on predictive AI. Much innovation in AI was concentrated in industries that have historically been strong users of technology in the country, such as finance, pharmaceuticals and medical devices, and robotics. By way of example, financial institutions have been using predictive AI extensively for fraud and money laundering prevention, portfolio and suitability analysis and trading strategies. The most widespread use of “generative” AI in all industries was arguably text translation, with tools such as DeepL.
Following the general availability of ChatGPT, generative AI tools have seen a massive uptake in various industries in Switzerland. This includes “high-impact” industries such as software and technology, media, pharmaceuticals and finance, as well as industries where the use of generative AI is still in earlier stages, such as marketing and professional services. According to a study conducted in March 2024 by PwC’s strategy consulting arm Strategy&, the Swiss economy has the highest growth potential worldwide through generative AI and could expand by 0.5% to 0.8% annually in the coming years based solely on the use of generative AI.
In its “Guidelines on Artificial Intelligence for the Confederation” published on 25 November 2020 (the “AI Guidelines”), the Federal Council (Switzerland’s highest executive authority) makes the case that Switzerland should create and maintain an environment conducive to the research, development and application of AI in order to ensure high-quality living standards. The Swiss government does not specifically facilitate investments in the AI sector or specific technology, instead favouring a bottom-up approach. There are two essential pillars to incentivise the development and use of AI, as follows.
Consistent with the principle of technology-neutral regulation, the AI Guidelines do not distinguish between different kinds of AI, such as generative AI and predictive AI.
More recently, multiple Swiss public research universities have established dedicated research centres and hubs for AI, aiming to combine researchers from different faculties and industry stakeholders and to facilitate AI start-ups and spin-offs. One prominent example is the ETH AI Center of the Swiss Federal Institute of Technology in Zurich (ETH Zurich).
The Swiss government has historically adopted a “wait-and-see” approach with respect to regulating AI, and has favoured industry self-regulation. However, in light of other regulatory initiatives regarding AI, notably the EU’s AI Act and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “AI Convention”), the Federal Council decided in November 2023 to review possible regulatory approaches that would be compatible with those two regulatory frameworks.
The resulting report submitted to the Federal Council by the Federal Office of Communications (OFCOM) stated three objectives of potential AI regulation in Switzerland: strengthening Switzerland as a location for innovation, safeguarding the protection of fundamental rights (including economic freedom) and increasing public trust in AI. OFCOM outlined three possible regulatory approaches that could achieve those objectives:
On 12 February 2025, the Federal Council decided on a regulatory approach that is largely based on the second option with limited implementation: ratifying the Council of Europe’s AI Convention and incorporating it into Swiss law, with a primary focus on public authorities and less extensive obligations for the private sector. The Federal Council also decided that legislative changes should primarily be sector-specific, and only key areas relevant to fundamental rights, such as data protection, should be subject to horizontal regulation. These legislative changes will be supplemented by non-binding measures.
The Federal Department of Justice and Police is tasked with drafting a bill for consultation by the end of 2026. The proposed bill will then likely undergo an extensive consultation process prior to being introduced to the Swiss parliament. Also by the end of 2026, the Federal Department of the Environment, Transport, Energy and Communications, together with other federal departments, will draw up a plan for non-binding measures.
In line with the chosen regulatory approach, Switzerland signed the Council of Europe’s AI Convention on 27 March 2025.
There is no applicable information in this jurisdiction.
AI Guidelines
The AI Guidelines of 25 November 2020 set out seven basic guidelines on the use of AI in Switzerland. They are not binding on the private sector as their primary purpose is to serve as a general frame of reference for the use of AI within the federal administration and to ensure a coherent policy. They can be summarised as follows.
As part of surveys conducted within the federal administration in 2022 and 2024, OFCOM found that the AI Guidelines generally still cover the relevant topics, but that they may require amendments to remain of practical use. The AI Guidelines are likely to be re-evaluated in the course of 2025 when a new strategy for the use of AI systems within the federal administration, published on 21 March 2025, is implemented.
Fact Sheet for Generative AI
In response to the rapid rise of generative AI chatbots, particularly ChatGPT, the Swiss government issued the “Fact sheet on the use of generative AI tools in the Federal Administration”, with the most recent version dating from 18 January 2024. The goal of this fact sheet is to give practical guidance to the employees of the federal administration and other federal agencies on how to use generative AI tools for their daily work.
The fact sheet encourages responsible experimentation with generative AI tools, such as summarising publicly available sources, obtaining code suggestions or generating images for presentations. It also reminds users not to violate existing regulations and policies. In particular:
Similar fact sheets for the use of generative AI tools by government employees have been issued at the cantonal (state) and local levels.
There is no applicable information in this jurisdiction.
There is no applicable information in this jurisdiction.
There is no applicable information in this jurisdiction.
There is no applicable information in this jurisdiction.
There is no applicable information in this jurisdiction.
At present, no judicial decisions dealing with AI in substance have been reported in Switzerland.
OFCOM
OFCOM is considered the leading agency dealing with the regulation of AI in Switzerland. OFCOM’s regulatory authority generally comprises telecommunications, radio and television, and postal services. OFCOM plays an important role in initiatives regarding the digitalisation of the federal administration and was also the office tasked with leading the evaluation of possible regulatory approaches for AI (see 3.1 General Approach to AI-Specific Legislation).
FDPIC
The Federal Data Protection and Information Commissioner (FDPIC) is an independent agency responsible for tasks in the areas of data protection and the principle of freedom of information. In its role as supervisory authority, the FDPIC monitors and enforces compliance by federal bodies and the private sector with Swiss federal data protection legislation. The FDPIC’s views and guidelines on the application of data protection legislation to AI are of significant practical importance.
FINMA
The Swiss Financial Market Supervisory Authority (FINMA) is the independent agency responsible for financial market supervision in Switzerland. FINMA’s regulatory mandate is to supervise banks, insurance companies, exchanges and financial institutions, among others. It is charged with protecting creditors, investors and policyholders and ensuring that Switzerland’s financial markets function effectively. While FINMA’s regulatory authority is not specifically aimed at AI, it has identified the use of AI by supervised institutions as a focus area of its supervisory activities. In its risk monitoring published in November 2023, FINMA stated that AI creates particular challenges in the financial sector in connection with the responsibility for AI decisions, the reliability of AI applications, the transparency and “explainability” of AI decisions, and the equal treatment of financial market clients. In the recent FINMA Guidance 08/2024 of 18 December 2024, FINMA also stated that it will review the use of AI by supervised institutions, refine its expectations of appropriate AI governance and risk management and strive for a technology-neutral, proportionate and standardised approach (see 14.2 Financial Services for further details).
CNAI
The Swiss federal government’s Competence Network for Artificial Intelligence (CNAI) is not a regulatory agency but is the only AI-specific body established within the federal government. The CNAI combines representatives of multiple federal departments and offices, and collaborates with AI experts in its “community of experts” and with other interested individuals, institutions, organisations and companies in its “community of practice”.
In addition to drawing on expert knowledge and ensuring that knowledge gained from AI projects is shared within the federal administration, the CNAI has issued an official AI terminology document to introduce the uniform use of terms throughout the federal administration. The CNAI terminology includes inter alia the following definitions.
See 3.3 Jurisdictional Directives.
So far, there have been no reported enforcement actions regarding the use of AI in Switzerland. However, the FDPIC issued a press release on 20 March 2025 stating that it had conducted a preliminary investigation into X (formerly Twitter) regarding the use of personal data of users to train AI models for Grok, the generative AI chatbot of xAI. After being provided with information by X, the FDPIC concluded that this use was consistent with the Federal Act on Data Protection (FADP) because users are able to opt out of having their data used for training, even though data use is enabled by default.
The Swiss Association for Standardization (Schweizerische Normen-Vereinigung; SNV), a private body and Switzerland’s member of the European Committee for Standardization (CEN) and the International Organization for Standardization (ISO), is the primary forum for industry standardisation in the country. The SNV has not yet issued any standards related to AI, but is involved in the standards being adopted and considered by ISO.
Most standards in Switzerland are based on the relevant international standards developed in global or European standard-setting bodies. The most important international standard-setting bodies for Switzerland with respect to technology include ISO, the International Electrotechnical Commission (IEC), the International Telecommunication Union (ITU), CEN, the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI).
Given that Switzerland does not have AI-specific regulation at present, international standards relating to AI do not have the force of law in Switzerland. However, international standards could be taken into account by the courts when assessing the required standard of care in questions of liability.
Government use of AI in Switzerland varies significantly between the federal government and the different cantonal (state) and local governments, and is still in its early stages overall. While the federal government and most cantons are already using or evaluating some form of AI, many of those proposed applications do not raise major concerns. Examples include semantic search in legislative texts, the summarisation of administrative guidance, automatically assigning incoming questions or requests to the responsible departments for a response and government chatbots.
Proposed new applications of AI in the areas of policing and criminal justice, on the other hand, have proven more controversial. One example is the Swiss Federal Office of Police (fedpol) modernising its automated fingerprint and palmprint identification system (AFIS) to add facial image comparison capabilities. The new system is expected to be operational in 2027 and will provide federal and cantonal immigration authorities, prosecutors and border security with an additional means for identifying people. Currently, the proposed uses do not include real-time identification of individuals, which is highly controversial.
The most comprehensive list of government use of AI in Switzerland is compiled by the NGO AlgorithmWatch CH in its “Atlas of Automation Switzerland”.
There is no applicable information in this jurisdiction.
To the extent that such information is public, AI does not yet play a key role in national security matters in Switzerland. However, the Swiss Federal Intelligence Service (FIS) has stated in its annual situation report titled “Switzerland’s Security 2023” that it expects foreign intelligence services to increase their use of AI to process and analyse the ever-increasing amounts of data being transmitted. The FIS also expects intelligence services to further improve their data-gathering capabilities domestically and abroad, which has implications for the FIS’ counter-intelligence and counter-espionage activities. In the 2024 edition of its situation report, the FIS has highlighted that China is becoming a technology leader in the areas of AI and big data, which may increase the malware capabilities of Chinese threat actors.
On 12 December 2024, the National Council (the larger house of the Swiss parliament) tasked the Federal Council with compiling a report for a comprehensive security and defence strategy regarding the dangers of autonomous weapons systems and systems with AI, considering both the risks and opportunities for Switzerland’s security and its defence industry.
See 8.2 Data Protection and Generative AI and 15.1 IP and Generative AI.
Under the FADP, which applies to federal bodies and private persons, the processing of personal data is generally lawful if the principles set out in Articles 6 (eg, proportionality, purpose limitation) and 8 (data security) of the FADP are complied with. A legal justification (consent, overriding private or public interest, or as provided by law) is required for the processing to remain lawful if the processing deviates from these principles, or if the data subject has objected to the processing. This concept differs slightly from the EU GDPR, where a lawful basis is required for all processing.
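Purely as an illustration of how this rule is structured (the function and its parameter names below are the authors’ simplification for exposition, not a statutory test, and any real assessment requires legal analysis), the lawfulness logic might be sketched as follows:

```python
# Illustrative sketch only: a simplified model of the FADP lawfulness logic
# described above. The parameters are hypothetical simplifications.

def fadp_processing_lawful(complies_with_principles: bool,
                           data_subject_objected: bool,
                           has_justification: bool) -> bool:
    """Rough sketch of when processing of personal data is lawful under the FADP.

    Processing is generally lawful if the principles of Articles 6 and 8 FADP
    are complied with; a justification (consent, overriding private or public
    interest, or statutory basis) is only needed if the processing deviates
    from those principles or the data subject has objected.
    """
    if complies_with_principles and not data_subject_objected:
        return True  # no justification required, unlike under the EU GDPR
    return has_justification  # deviation or objection: justification needed
```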
In the context of generative AI, data protection is relevant both with respect to the training of the AI model and with respect to its operation, particularly regarding the data subject’s right to rectification and deletion, as follows.
One key way in which the FADP differs from data protection laws in other jurisdictions is that fines for criminal offences of up to CHF250,000 are imposed on the responsible individuals (natural persons), not on the company. The main offences under the FADP include the intentional violation of information and co-operation obligations, duties of care (eg, in connection with delegation to a processor) and data protection secrecy obligations.
While it is unlikely that the use of AI as such will constitute a criminal offence under the FADP, the individuals responsible for the design and implementation of data processing using AI should carefully review data protection compliance to avoid personal criminal responsibility.
Use of AI Tools
Apart from translation tools, early applications of AI in the legal profession in Switzerland were primarily focused on reviewing large numbers of documents as part of legal due diligence in M&A, or as part of internal investigations.
However, the availability and use of AI tools is currently undergoing rapid change. Ever since ChatGPT was released to the public, generative AI based on large language models has attracted significant interest from law firms and in-house legal departments. Tasks that Swiss law firms are experimenting with include summarisation, the generation of analyses and memoranda based on legal source documents, the drafting of contracts and the analysis of weaknesses and inconsistencies in argumentation. Another application is semantic document search in a law firm’s or legal department’s own document database. AI offerings for the legal profession are now shifting towards more complex workflows and use cases of agentic AI.
Legal information providers are also experimenting with generative AI to offer semantic document search and the AI-based answering of questions based on their database of legal literature, scholarly articles and court cases.
Legal and Ethical Concerns
The main legal and ethical concerns in the use of AI, and generative AI in particular, relate to professional secrecy and the quality of legal advice, as follows.
Only natural and legal persons can be held liable under Swiss private law. When using AI-powered products or services, the key question that arises is, therefore, which person the liability for the conduct of the AI is attributed to. Liability for damage or losses caused can be either contractual or non-contractual (tortious).
Contractual Liability
Pursuant to Article 97 of the Swiss Code of Obligations (CO), a party to a contract is liable for failing to fulfil its contractual obligations unless it can prove that it was not at fault (ie, that the failure occurred without intent or negligence). Pursuant to Article 101 of the CO, the contractual liability extends to the conduct of any auxiliaries, such as employees or subcontractors, used in the performance of the contractual obligations; only natural or legal persons can qualify as auxiliaries. If the use of AI leads to a breach of a contractual obligation, the key question will be whether the breaching party can prove it did not act negligently in its use of AI.
Non-Contractual (Tort) Liability
Article 41 of the CO covers non-contractual liability, where any person who unlawfully causes damage to another person is obliged to provide compensation. The application of this provision further requires intent or negligence on the part of the person causing the damage. Article 55 of the CO extends this liability to an employer for the conduct of their employees or other auxiliary staff.
This liability regime is supplemented by the Product Liability Act, which imposes liability on the manufacturer of defective products. However, it remains unsettled whether software products such as AI systems qualify as products within the meaning of the Product Liability Act, and to what extent an individual AI system or component can be considered defective when the technology as such is known to produce unintended results. Furthermore, the manufacturer of a defective product is not liable if they can prove that the defect could not have been detected based on the state of the art in science and technology at the time the product was placed on the market.
There is no applicable information in this jurisdiction.
At present, there is no specific regulation on algorithmic bias in Switzerland. However, the fundamental right to equal treatment and non-discrimination contained in the Federal Constitution applies throughout the Swiss legal system and can also affect the relationship between private parties (eg, between employee and employer) in addition to the state.
Biometric information that uniquely identifies a natural person expressly qualifies as “sensitive personal data” under the FADP, which means that such data may not be disclosed to third-party controllers without justification and, where the consent of the data subject is required for processing, the consent must be explicit.
The public’s unease regarding facial recognition and biometrics allegedly already in use by the police forces in certain cantons has led to legislative initiatives in multiple cantons to expressly prohibit their use.
Automated decision-making is addressed by the FADP. Its Article 21 obliges a data controller to inform the data subject if a decision is made solely by the automated processing of personal data that has a legal or otherwise significant consequence for the data subject. This obligation does not apply in the context of entering into an agreement if the request of the individual is granted in full (eg, a loan application is fully approved by an automated decision). Where the data subject has to be informed, they are entitled to request that a human review the automated decision. Intentional omission of the information of the data subject under Article 21 of the FADP constitutes a criminal offence for which a fine of up to CHF250,000 may be imposed on the responsible individual.
Article 21 of the FADP is not applicable to decisions recommended by AI but manually approved by a human, even if the level of human review is superficial. An initiative was launched in the Swiss parliament to include similar information obligations for any decision significantly based on AI, but the initiative was rejected by the relevant parliamentary commission as premature given the Federal Council’s review of potential approaches to general AI regulation (see 3.1 General Approach to AI-Specific Legislation).
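As an illustration only, the information duty of Article 21 of the FADP can be reduced to a simple decision rule. The parameter names below are the authors’ hypothetical simplifications, not statutory terms:

```python
# Illustrative sketch only: the Article 21 FADP information duty as a
# decision rule, per the description above.

def must_inform_under_art21(solely_automated: bool,
                            significant_consequence: bool,
                            contract_request_fully_granted: bool) -> bool:
    """Sketch of whether the data subject must be informed under Art. 21 FADP."""
    if not solely_automated:
        return False  # even a superficial human review excludes Article 21
    if not significant_consequence:
        return False  # no legal or otherwise significant consequence
    if contract_request_fully_granted:
        return False  # eg, a loan application fully approved automatically
    return True  # inform the data subject, who may request a human review
```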
There is no general obligation for businesses to disclose the use of AI in Switzerland. An obligation of transparency may exist with respect to individual uses of AI, such as:
As AI models are only as good as the data they were trained with, businesses procuring AI solutions intended for productive use should include provisions regarding the quality of the training data and corresponding properties of the AI model (eg, no discriminatory bias, no infringement of IP rights) in their procurement contracts. Otherwise, they themselves risk becoming liable for the output of the AI solutions. While the use of due care is a possible defence under most provisions giving rise to liability under Swiss law, businesses ignoring well-known risks inherent in current AI technology may face the accusation of not having used the required standard of care.
Hiring
Résumé screening software (with or without AI) has been used in Switzerland for some time. Under the principle of freedom of contract, private sector employers in Switzerland are not required to hire individual applicants, and, by default, are not required to explain why an applicant was rejected.
However, under the Gender Equality Act, employers are not permitted to discriminate against applicants based on gender in the selection process: if an applicant is rejected due to gender-based discrimination, the applicant is entitled to a written explanation for the rejection and, if they can prove discrimination, to compensation under Article 5(2) of the Gender Equality Act. This prohibition does not extend to the stage of job advertisements, however, where employers are permitted to advertise a position for a specific gender only. Where AI-based selection tools discriminate based on gender, for example due to inherent gender bias in the training data, the employer may become liable.
In addition, if any rejection of an applicant is made by an automated decision without human control, the applicant must be informed and is entitled to have a human review the decision based on data protection law (see 11.3 Automated Decision-Making).
Termination
Termination of a private sector employment relationship is governed primarily by the CO. In principle, employers do not need specific legal grounds for termination, but do need to provide a written explanation upon request of the employee. However, Article 336 of the CO prohibits wrongful termination, including termination due to a quality inherent in the employee’s profile (eg, age, ethnicity, religion), unless this quality is related to the employment relationship or significantly affects work in the business. If an employee is terminated based on recommendations by AI (eg, due to inherent bias in the training data), the employer may therefore become liable for compensation due to wrongful termination (up to six months’ salary).
The prohibition of gender-based discrimination under the Gender Equality Act and the provisions on automated decision-making also apply to any termination that is discriminatory or automated, respectively.
Monitoring and surveillance is considered detrimental to employee health, and is therefore subject to multiple restrictions. Article 328 of the CO provides for a general duty of care of the employer to respect and protect the employee’s personality rights and health. Based on this principle, all measures taken that can affect employee health and wellbeing must be proportionate to the (legitimate) aim pursued. In addition, Article 26 of Ordinance 3 to the Employment Act specifically prohibits the use of systems to monitor the general behaviour of employees in the workplace. Such systems are only permitted where they pursue legitimate aims, such as security, performance or efficiency, and only if their use is proportionate. In practice, this means employers must be able to prove that less intrusive alternative solutions were not sufficient to achieve the aim. Specifically, the FDPIC has stated that AI-based systems for the automated evaluation of employee-based data (eg, vision, movement, communication patterns) are prohibited under these provisions.
Employees are also entitled to employee participation regarding all matters that concern the protection of their health. They must be sufficiently informed of measures and circumstances affecting their health, and have a right to be consulted before the employer takes relevant decisions. This includes the right of employees to make suggestions, and the obligation of the employer to explain decisions that do not take employees’ objections into account.
At present, there is no specific regulation regarding the use of AI in digital platform companies, and the general observations regarding the use of AI in data protection and, potentially, in employment contexts also apply to digital platform companies. Extensive use of AI to control the conduct of platform participants (eg, which participants get to serve which customers when, and at what price) may increasingly lead courts to find that the platform participants lack autonomy and are in fact employees of the digital platform companies.
At present, there is no specific regulation regarding the use of AI in financial services. General financial services regulation, including with respect to risk management, also applies to the use of AI. As set out under 5.1 Regulatory Agencies, FINMA has identified AI as a focus area of its supervisory activities and will review the use of AI by supervised institutions. In its recent FINMA Guidance 08/2024 of 18 December 2024, FINMA draws attention to a number of risks relating to the use of AI by supervised institutions. Risks from the use of AI primarily constitute operational risks, although other risks, such as the risk of dependence on third parties, legal risks or reputational risks, are also of relevance. FINMA highlighted the following findings, among others:
At present, there is no specific regulation regarding the use of AI in healthcare. The primary areas of existing law governing the use of AI in healthcare are sector-specific regulations, such as the Therapeutic Products Act and the Medical Devices Ordinance, which specifically includes software in its definition of medical devices. Where AI-based products qualify as medical devices, they need to comply with the general safety and performance requirements set out in Annex I to the EU’s Medical Device Regulation (by virtue of an express reference to EU legislation). Medical devices placed on the market in Switzerland require a conformity marking (CE label), predominantly based on self-assessment by the manufacturer. The relevant assessment criteria depend on the applicable risk category.
In addition, the FADP applies also in the healthcare sector, although in the area of research on humans it is supplemented and partially overridden by the Human Research Act.
Fully autonomous vehicles are not yet permitted on Switzerland’s streets in general. However, the Swiss parliament amended the Road Traffic Act in 2023 to permit the Federal Council to authorise vehicles with automatic driving systems in a delegated act. This delegated act, the Ordinance on Automated Driving (OAD), entered into force on 1 March 2025. The OAD governs the approval and operation of autonomous vehicles in Switzerland and at present permits three categories of operation of autonomous vehicles, subject to strict safeguards:
At present, there are no specific regulations regarding the use of AI in manufacturing. The liability aspects of manufacturing primarily depend on whether there is a contractual relationship or not (see 10.1 Theories of Liability). Where the use of AI causes manufactured products to be defective, this may trigger liability of the manufacturer under the Product Liability Act.
Please refer to 9.1 AI in the Legal Profession and Ethical Considerations for the main concerns. Many professional services occupations are subject to professional secrecy obligations similar to those of lawyers (accountants, doctors, other medical professionals, etc), and most professional services firms are concerned about the confidentiality of the information provided by their clients as well as the accuracy of their advice.
IP Protection of Training Data and AI Models
The degree of IP protection of training data will depend on the nature of the data: if the training data consist of works that are subject to copyright, such as works of literature, newspaper articles, website contents, images, songs or software code, the Copyright Act prohibits unauthorised use, copying or distribution (among other actions). There are statutory limitations to copyright, but many of them will likely not apply in a commercial context (see “IP Infringement When Training Generative AI” in the following). Unauthorised copying and distribution under the Copyright Act can trigger civil liability (injunctive relief, damages, disgorgement of profits) and, if committed with intent, constitutes a criminal offence.
Where the training data is not subject to copyright, as would likely be the case for statistical data, measurements, etc, or where copyright protection has expired, the data would have to be protected as such. The same applies to AI models, which, at their core, are simply structured sets of data points (weights, biases, other parameters). With respect to these kinds of data, two main avenues of protection are available, as follows.
IP Protection of Generative AI Output
According to the prevailing doctrine in Switzerland, inventions generated by generative AI are not eligible for patent protection and works created by AI, including texts, images, audio, video and software, are not eligible for copyright protection (see 15.2 Applicability of Patent and Copyright Law).
The output itself may however be protected contractually, or as a trade secret, if it is kept confidential (see “IP Protection of Training Data and AI Models” in the foregoing and 15.3 Applicability of Trade Secrecy and Similar Protection).
IP Infringement When Training Generative AI
The training of AI models usually occurs in two steps: the compilation of a training data set and the actual training of the model by iterating over the training data and adjusting the parameters. Both activities involve the creation of copies of data, and can therefore constitute copyright infringement. The Copyright Act does provide for two relevant limitations, but these are unlikely to apply in a commercial context, as follows.
In addition, where the training of an AI model involves the use of unlawfully obtained data that is subject to trade secret protection, the training would also constitute unfair competition under the UCA.
IP Infringement by Generative AI Output
The disclosure or distribution of AI-generated output can infringe on copyright where it reproduces sufficiently large parts of a copyrighted work without the authorisation of the copyright holder.
At present, there are no reported judicial or agency decisions in Switzerland on whether AI technology can be an inventor for patent purposes or an author for copyright purposes. However, the prevailing doctrine among legal scholars is that AI can neither be an inventor nor an author.
Patent Law
Article 3 of the Swiss Patent Act refers to an “inventor” being entitled to a patent. Prevailing doctrine in Switzerland provides that only a natural person (ie, a human) can be an inventor for the purposes of the Patent Act, which excludes both legal persons and AI. Switzerland is party to the European Patent Convention (EPC), which contains analogous wording in its Article 60. With respect to the EPC, the European Patent Office’s legal board of appeal ruled on 21 December 2021 in the DABUS case (case no J 0008/20) that AI may not be named as an inventor.
The foregoing applies to the extent AI itself generates the invention without significant human contribution. If a human uses an AI application solely as a tool to make a discovery, in the same way they might use other computer software, the human using AI as a tool would be considered the inventor under the Patent Act, and the invention may be eligible for patent protection.
Copyright Law
Article 2 of the Swiss Copyright Act defines a work in which copyright may subsist as an “intellectual creation of literature or the arts with individual character”. The criterion of “intellectual creation” is widely interpreted as an expression of the human mind. Consistent with this interpretation, Article 6 of the Copyright Act states that “the author is the natural person who has created the work”. A work within the meaning of copyright can therefore only be created by a human, and not by AI.
While AI may not be an author and purely AI-generated content can therefore not be protected by copyright in Switzerland, works created using AI may be protected. If AI is used solely as a tool by the human author to express their thoughts, for example by having generative AI translate or edit existing text of the author or by modifying pictures similar to what an artist might do with photo editing software, the resulting AI-generated work may still be subject to copyright protection. Equally, if a human sufficiently modifies purely AI-generated content or is creative in the selection or ordering of pieces of AI-generated content, the resulting overall work may also be protected by copyright.
Swiss law uses the terms “manufacturing secrets” and “business secrets”, rather than trade secrets, but there is no distinction in practice. While there is no statutory definition, case law defines trade secrets as any information that (i) is not publicly known, (ii) has commercial value, (iii) the owner has a legitimate interest in keeping secret and (iv) the owner intends to keep secret. A curated training data set, an AI model and the source code of an AI application can therefore constitute a trade secret if the foregoing criteria are met.
Trade secrets are not protected as an absolute right (such as copyright) under Swiss law, but their disclosure or misappropriation is prohibited under specific circumstances, as follows:
All of these examples of unfair competition can trigger both civil and criminal liability.
See 15.2 Applicability of Patent and Copyright Law.
Using OpenAI’s products, particularly ChatGPT, touches on multiple issues relating to intellectual property, both on the “input” and on the “output” side, as follows:
Many antitrust implications of price-setting using AI are complex and not yet clearly established under Swiss antitrust law. While it is clear that the use of AI to implement pre-agreed horizontal price fixing remains illegal, the possibility of AI autonomously co-ordinating prices among competitors is being discussed among scholars, but there is no settled case law yet.
However, in a vertical price-fixing case, the Swiss Federal Supreme Court held that the communication of manufacturer-suggested retail prices by the manufacturer to its distributors through an automated electronic system with daily price updates constituted an illegal agreement on fixed prices, given that all distributors and retailers had reason to believe that these prices would also be used by their competitors (and they in fact complied with the suggested pricing).
At present, Switzerland does not have overarching cybersecurity legislation. Instead, cybersecurity aspects are covered by different acts and regulations at the federal and cantonal (state) level. The FADP, together with the Data Protection Ordinance (DPO), requires federal bodies and private persons that process personal data to ensure an adequate level of data security and to report breaches of data security that likely result in a high risk to the personality or fundamental rights of data subjects to the FDPIC, and in certain cases also to the data subject. Although they do not specifically address AI, these provisions will be of relevance wherever AI is used to process personal data in the private sector.
In addition, the Information Security Act (ISA), which entered into force on 1 January 2024 and primarily governs information security practices by federal authorities and organisations, was amended to include reporting obligations for cyber-attacks on public authorities and providers of critical infrastructure. These reporting obligations apply as of 1 April 2025. Organisations in scope must report cyber-attacks of a certain severity to the National Cyber Security Centre within 24 hours of discovery. The list of critical infrastructure providers specifically includes cloud computing and data centre providers in Switzerland, as well as hardware and software vendors whose products are used by other critical infrastructure providers to control and monitor operational systems and processes or to ensure public safety. It is likely that the reporting obligations will become increasingly relevant for the AI-based solutions used by these two categories of critical infrastructure provider.
Since 1 January 2024, Swiss public companies and supervised financial institutions that have at least 500 full-time equivalents (FTEs) and, in two consecutive financial years, at least CHF20 million in total assets or more than CHF40 million in turnover have been required to submit a report on non-financial (ESG) matters to their annual general meeting for approval. Additional reporting requirements exist for certain companies active in commodities trading and for those importing conflict minerals or at risk of using child labour. AI tools may be used to support the drafting of these reports; however, responsibility for their accuracy remains with the board of directors.
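By way of illustration only, the size test described above can be expressed as a simple check. This is a sketch based on the authors’ reading of the thresholds; the statutory test (Article 964a of the CO) contains further conditions:

```python
# Illustrative sketch only: the ESG reporting size thresholds described above.
# Not legal advice; the statutory test has additional conditions.

def exceeds_esg_thresholds(fte: int,
                           total_assets_chf: float,
                           turnover_chf: float) -> bool:
    """Check the size thresholds for a single financial year.

    The thresholds must be met in two consecutive financial years, so this
    check should be applied to both years.
    """
    size_test = total_assets_chf >= 20_000_000 or turnover_chf > 40_000_000
    return fte >= 500 and size_test

# Example: 600 FTEs and CHF45 million turnover exceed the thresholds
assert exceeds_esg_thresholds(fte=600, total_assets_chf=15_000_000,
                              turnover_chf=45_000_000)
```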
None of these reporting obligations specifically relate to AI, but the use of AI may need to be covered in the report on ESG matters depending on the business model of the reporting company and the environmental impact. Certain Swiss companies have already included statements regarding their use of AI in their ESG reports (mostly highlighting the fact that increased use of AI will increase energy consumption and therefore emissions).
Given that Switzerland does not have AI-specific regulation at present, there are no specific legal requirements with respect to AI governance and compliance. AI governance and compliance strategies should be aimed at ensuring that the use of AI complies with the existing body of Swiss law and should consider the company’s potential exposure to the EU AI Act, which may be relevant by virtue of its extraterritorial application.
Companies should integrate AI governance into their overall corporate governance frameworks. This involves defining clear roles and responsibilities for overseeing AI initiatives and compliance, and ensuring that AI aligns with the company’s strategic objectives. If no specific position for AI is to be created, companies should consider allocating the AI portfolio to an existing officer with technical expertise, such as a chief information officer (CIO)/chief technology officer (CTO) or the data protection officer (DPO).
Companies should also ensure sufficient AI competence, both within the board of directors and in the executive teams, to be able to make informed decisions about AI. Ideally, at least one member of the board of directors should have a technical background to provide leadership on AI topics.
AI compliance strategies should include practical guidance for employees to use AI tools. Businesses should ideally adopt internal policies on the use of AI tools, clearly outlining which types of use are permitted and which are not. Depending on the field of business and the contractual frameworks in place, this may include an outright prohibition on using publicly available generative AI tools (eg, public versions of ChatGPT), restrictions on the type of data that may be submitted as input (eg, no confidential information, no personal data) or restrictions on the use of output (eg, mandatory review for accuracy prior to publication).
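A minimal sketch of how such a policy might be captured in machine-readable form, eg, to support internal tooling or onboarding checklists, is shown below. All entries are hypothetical examples, not recommendations for any specific business:

```python
# Illustrative sketch only: a hypothetical internal AI-use policy of the
# kind described above, captured as structured data.

AI_USE_POLICY = {
    "permitted_tools": ["enterprise LLM instance under a data processing agreement"],
    "prohibited_tools": ["public versions of consumer generative AI chatbots"],
    "input_restrictions": [
        "no confidential information",
        "no personal data",
    ],
    "output_rules": [
        "mandatory human review for accuracy prior to publication",
    ],
}

def input_allowed(contains_confidential: bool,
                  contains_personal_data: bool) -> bool:
    """Apply the input restrictions above to a prompt before submission."""
    return not (contains_confidential or contains_personal_data)
```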
Businesses should also invest in training their staff in AI literacy to help avoid issues based on ignorance or misunderstanding of the nature and limitations of current AI technologies (eg, mistaking a generative AI chatbot for a classic search engine or not noticing hallucinations).
Prime Tower
Hardstrasse 201
CH-8005 Zurich
Switzerland
+41 43 222 10 00
+41 43 222 15 00
lawyers@homburger.ch www.homburger.ch

Switzerland’s Emerging AI Landscape
Switzerland is emerging as a significant jurisdiction in the evolving landscape of artificial intelligence (AI). The introduction of generative AI presents a notable economic opportunity, with projections indicating a potential increase in the nation’s gross domestic product of approximately 11% within the next decade. This growth, primarily driven by productivity enhancements across service sectors, is attracting global interest, with US venture capital firms increasingly investing in Swiss AI start-ups.
Switzerland benefits from a robust, innovative ecosystem and a long-standing tradition of AI research, fostering a thriving environment for hundreds of deep tech AI start-ups. This potential has prompted global technology corporations, including OpenAI, Anthropic, Google, IBM and Apple, to establish a presence in Zurich. A substantial proportion of Swiss businesses have already integrated AI into their business functions, reflecting optimism towards its transformative capabilities. Legal and regulatory frameworks will be crucial in navigating this technological shift. In the past, Switzerland has always managed such balancing acts well – this is primarily thanks to Switzerland’s principle-based and technology-neutral legislation.
Principle-Based and Technology-Neutral Approach
In general, Swiss law is principle-based and designed in a technology-neutral way: instead of having specific obligations for particular situations or technologies, the law provides for broad principles and objectives. This allows for a high degree of flexibility and adaptability not only for the legislator but also for companies. As long as the principles and objectives are adhered to, it does not matter which technologies are used.
Legal uncertainties that may arise with principle-based legislation can be countered by communications from authorities, which regularly publish their practices, views and expectations. In the area of financial market law, for example, the Financial Market Supervisory Authority (FINMA) uses circulars, through which it can specify open, undefined legal terms and set guidelines for the exercise of discretion to ensure uniform and appropriate practice. In the authors’ experience, most authorities in Switzerland are also prepared to respond to inquiries and other requests regarding the legal assessment of new business models (rulings, no-action letters and informal inquiries).
Regulating AI the Swiss Way
On 27 March 2025, the Federal Council signed the Council of Europe’s Convention on Artificial Intelligence (the “Convention”). This step was taken just one month after the presentation of an analysis by the Federal Department of the Environment, Transport, Energy and Communications (DETEC) and the Federal Department of Foreign Affairs (FDFA), mandated by the Federal Council. In that analysis, ratification of the Convention was one of three possible options. Ratification could lead either to a minimal implementation of the Convention, in which the obligations for the state would be more comprehensive than for private actors, or to similarly far-reaching obligations for both the state and private entities.
The Federal Council seems to be primarily pursuing state-focused regulation. Private actors are only affected by the Convention in limited areas – ie, where there is a direct or indirect horizontal effect of fundamental rights, such as equal pay in employment or racial discrimination. However, the extent to which private actors will be affected remains uncertain to some extent. Particularly, the mutual recognition agreement in relation to conformity assessment between Switzerland and the EU might require Switzerland to adopt provisions equivalent to those regarding high-risk AI systems under the EU AI Act.
The Convention is not directly applicable, and the necessary legislative amendments thus remain to be prepared. However, the Federal Department of Justice and Police (FDJP), in co-operation with DETEC and the FDFA, has been tasked with drawing up a consultation draft, which is to be presented by the end of 2026. Likewise, DETEC was commissioned to develop an implementation plan for legally non-binding measures to implement the Convention by the end of 2026. Based on experience, the legislative changes are unlikely to come into force before 2029.
AI in Financial Services: How to Meet the Regulator’s Expectations?
In contrast to the still-developing approach to regulating AI, FINMA has already formulated its expectations concerning the use of AI in FINMA Guidance 08/2024. To meet the regulatory expectations of FINMA, financial institutions should primarily focus on establishing a robust governance and risk management framework that ensures transparency and explainability, maintains robust and reliable systems and adheres to non-discrimination principles.
Establish a robust governance framework
This step involves clearly defining governance and responsibility structures for AI applications, ensuring that tasks, powers and responsibilities are well-defined at the individual level, not just within committees. Existing governance frameworks for IT can be adapted, and many institutions find it beneficial to establish specific AI directives due to the interdisciplinary nature of AI.
Furthermore, institutions must ensure that all personnel involved possess sufficient expertise in AI, understanding not only its potential but also its limitations and risks, and the technical and organisational measures to mitigate them. Management bodies, including the board of directors, should develop a fundamental understanding of AI-specific risks relevant to the institution. The authors suggest companies strive to comprehensively prepare their employees at all levels for the integration of AI through knowledge transfer and training programmes to ensure meaningful, efficient and safe application.
Maintain a comprehensive inventory and strong risk management
A crucial step is to create and maintain a comprehensive inventory of all AI applications used within the institution. This allows for a clear understanding of where AI is being utilised and the associated risks. Following this, a well-defined risk classification methodology should be implemented, considering factors such as the significance for regulatory compliance, financial impact, legal and reputational risks and the number and type of clients affected.
A proactive risk management approach is paramount, encompassing the identification, assessment, management and ongoing monitoring of AI-specific risks. This includes addressing concerns around data quality, ensuring that the data used for training and operation is accurate, representative and of sufficient integrity. Institutions must also establish measures to mitigate model risk, such as a lack of robustness, reliability or predictability, and address the potential for model drift, where the AI’s performance degrades over time. Rigorous testing and validation protocols are essential to ensure the robustness and reliability of AI systems throughout their life cycle. These measures require a deep understanding of core processes and the technology itself.
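A minimal sketch of what such an inventory entry might look like, using the risk factors mentioned above, follows. The fields and the scoring are the authors’ assumptions for illustration, not FINMA requirements:

```python
# Illustrative sketch only: a hypothetical structure for an AI application
# inventory with a coarse risk classification based on the factors above.

from dataclasses import dataclass, field

@dataclass
class AIApplication:
    name: str
    purpose: str
    data_sources: list[str] = field(default_factory=list)
    regulatory_significance: int = 0  # assumed scale: 0 (none) to 3 (high)
    financial_impact: int = 0         # 0 to 3
    legal_reputational_risk: int = 0  # 0 to 3
    clients_affected: int = 0         # number of clients affected

    def risk_class(self) -> str:
        """Derive a coarse risk class; the threshold is purely illustrative."""
        score = (self.regulatory_significance + self.financial_impact
                 + self.legal_reputational_risk
                 + (1 if self.clients_affected > 0 else 0))
        return "material" if score >= 5 else "non-material"

inventory = [
    AIApplication(name="AML transaction screening",
                  purpose="fraud and money laundering prevention",
                  data_sources=["payment data"],
                  regulatory_significance=3, financial_impact=2,
                  legal_reputational_risk=2, clients_affected=10_000),
]
```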
Ensure transparency and explainability
FINMA expects the use of AI to be transparent, particularly towards clients who are directly affected by AI-driven processes. While acknowledging the limitations in fully explaining the inner workings of some complex AI models, institutions should strive to make the results of AI applications explainable to the extent necessary for understanding and verification. This does not necessarily mean understanding every individual decision but rather ensuring that the overall process and outcome can be comprehended and confirmed. It can be challenging to provide the right level of detail to customers and clients about AI use, ensuring understandability while avoiding overwhelming technical information.
Maintaining comprehensive documentation for material AI applications is vital. This documentation should cover the application’s purpose, data sources, model selection, performance metrics, assumptions, limitations, testing procedures and fallback solutions, ensuring a clear understanding of how the AI functions.
Adhere to non-discrimination principles
Institutions must take measures to avoid discrimination resulting from the use of AI. This involves critically scrutinising both the data used to train AI models and the models themselves to prevent biases that could lead to discriminatory outcomes.
Conduct ongoing monitoring and independent review
Continuous monitoring of AI system performance and behaviour is necessary to detect potential issues such as model degradation or unintended consequences. Establishing feedback loops and mechanisms for review and potential remediation is crucial.
For material AI applications, especially those with significant impact or risk, independent reviews of the development, implementation and ongoing operation can provide valuable oversight and ensure adherence to both regulatory expectations and internal guidelines.
By focusing on these key areas, financial institutions can effectively align their adoption and use of AI with FINMA’s supervisory expectations and industry best practices, fostering a responsible and secure deployment of these technologies within the Swiss financial sector. Meanwhile, FINMA will review the use of AI by supervised institutions (eg, banks and insurance companies). It has emphasised that the understanding of the risks associated with AI is still developing, and that it may refine its expectations based on its supervisory experience and in line with international developments. Given the pace of AI developments and the long timeline before Switzerland’s regulatory approach is finalised, FINMA can be expected to revisit its expectations in the near future.
AI and Data Protection Laws – Requirements of the FDPIC
In a press release (which is not binding), the Federal Data Protection and Information Commissioner (FDPIC) emphasised that, regardless of the approach to future regulations, existing data protection provisions must be adhered to. In particular, the FDPIC requires providers and/or deployers of AI applications to:
Since this press release, the FDPIC has informally repeated that he intends to derive requirements from existing standards. Only applications that aim to undermine personal privacy should be prohibited. If, for example, the use of AI is declared and evident, there are private interests in using AI, the risks to personality rights have been specified and an adult has given informed consent, such use might be permissible. The FDPIC thus currently appears to place a great deal of emphasis on transparent information and personal responsibility. In line with this, in a preliminary investigation concerning the training of Grok with data from users of X, the FDPIC issued a press release noting that users are responsible for making use of the available opt-out option. It remains to be seen whether this trend will persist.
Another topic of heated debate is the extent to which large language models (LLMs) do or do not contain personal data. The Hamburg Data Protection Authority (HmbBfDI), for example, took the categorical view that LLMs do not contain any personal data; however, this view has been criticised and has not prevailed.
Swiss Copyright Law and AI: Key Challenges and Recommendations for Companies
The current landscape of AI and copyright in Switzerland is marked by significant legal ambiguities and challenges stemming from the unique characteristics of AI systems, in particular in the fields of (i) training data, (ii) authorship and protection of AI-based content and (iii) the usage of output generated by AI tools.
The Swiss Copyright Act (CA) grants authors exclusive rights to determine the use of their works. This raises critical questions for AI application providers, particularly regarding the training of models with freely accessible internet content (which involves crawling and scraping the internet). The act of copying data for AI training is considered reproduction and requires the author’s consent, unless an exception applies (such as personal use or use for scientific research). Legal scholars are debating whether AI training can even be considered “reproduction” and whether specific exceptions such as “temporary copying” and “text and data mining” apply. The courts will have to rule on these questions. Should they conclude that the use of training data qualifies as reproduction and no exception applies, it would need to be clarified what this means for rights holders and whether legislative action is required to ensure the development of AI systems in an appropriate manner.
Another pertinent question is whether AI-generated content can be protected under Swiss copyright law. AI-generated content is protected if it constitutes an intellectual creation originating from the user, as AI tools are not considered “authors” under Swiss law. Protection largely depends on the user’s input, which must demonstrate sufficient creative influence for the output to be protected. This remains a grey area, with no clear criteria for the degree of human involvement required, and each case must be assessed individually.
Lastly, the users of the AI tool must ensure they do not infringe upon third-party copyrights. Infringement occurs if the AI output closely replicates the original work, such as citing entire book chapters or containing third-party trademarks. Although the memorisation capability of generative AI tools is technically low, it is not entirely negligible. Furthermore, the usage of AI-generated output is subject to contractual agreements, including the terms and conditions of the AI tool provider. Users should carefully review these terms, which cover intellectual property considerations, usage rights (often restricted to non-commercial use in free versions), warranties, indemnification and liability.
For companies, implementing guidelines to minimise legal risks when using AI tools is recommended. These should include:
By adhering to these guidelines, companies can navigate the complexities of Swiss copyright law, ensuring compliance and protecting intellectual property rights in the evolving landscape of AI. As regards the open questions and discussions concerning copyright law and AI, case law will provide some guidance and a basis for potential legislative action.
The EU AI Act: Does It Apply to Companies in Switzerland?
Due to its extraterritorial effect, the EU AI Act – like the EU General Data Protection Regulation – will also be relevant for companies in Switzerland. The latter must check on a case-by-case basis – ie, for each AI application – whether they fall within the scope of the EU AI Act, be it as a provider or deployer of AI systems or general-purpose AI models. For example, Swiss companies might fall within the scope of the AI Act if they make AI systems available in the EU or if they use AI systems whose output is used in the EU.
Swiss companies must therefore assess whether each AI application falls within the scope of the AI Act, an exercise facilitated by the firm’s EU AI Act self-assessment tool. To this end, it is recommended that an AI application inventory be kept, through which companies can determine their role and the potential applicability of the AI Act.
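As a purely illustrative first-pass screen (the parameters are the authors’ simplifications of the scope criteria mentioned above, not the AI Act’s full definitions, and any real scoping requires legal analysis), such an inventory check might look like this:

```python
# Illustrative sketch only: a rough screen for the EU AI Act's
# extraterritorial reach as summarised above.

def ai_act_may_apply(established_in_eu: bool,
                     places_on_eu_market: bool,
                     output_used_in_eu: bool) -> bool:
    """Rough screen: could this AI application fall under the EU AI Act?"""
    if established_in_eu:
        return True
    # Extraterritorial hooks relevant to Swiss companies:
    return places_on_eu_market or output_used_in_eu

# Example: a Swiss company using an AI system whose output is used in the EU
assert ai_act_may_apply(established_in_eu=False,
                        places_on_eu_market=False,
                        output_used_in_eu=True)
```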
So far, Swiss companies have focused on avoiding prohibited AI practices and – to the extent possible – high-risk AI systems, as the latter come with the most obligations, especially for providers of high-risk AI systems. In contrast, non-high-risk AI systems are subject only to limited requirements, such as ensuring transparency when users interact with AI and promoting AI literacy.
To prepare for the AI Act, the authors recommend that Swiss companies:
Conclusion
Although there are currently no specific regulations for the provision, import, distribution and/or deployment of AI systems or general-purpose AI models in Switzerland, companies are well advised to address the challenges posed by such applications in good time. This means creating governance for the development and use of AI tools that addresses not only compliance with applicable law (such as financial market, data protection, unfair competition or intellectual property law), but also ethical standards and future regulation (such as the EU AI Act, if applicable). In addition to published court and authority practice, official publications (eg, from FINMA or the FDPIC) provide important guidance for legally compliant implementation. In individual cases, it may also be advisable to ask authorities for an opinion, a no-action letter or another kind of ruling on a planned use case involving AI.
Last but not least, legal policy developments in Switzerland must be closely monitored. Great attention must be paid in particular to the forthcoming work of the FDJP, DETEC and the FDFA to regulate AI, which is expected at the end of 2026.
Kellerhals Carrard
Rämistrasse 5
8001 Zürich
Switzerland
+41 58 200 39 00
info@kellerhals-carrard.ch kellerhals-carrard.ch