Currently, no comprehensive AI-specific law has been enacted in Canada. However, there are Canadian federal and provincial legal frameworks that apply to the different uses of AI. This includes laws related to consumer protection, criminal conduct, human rights, privacy, and tort:
In Canada, the key industry applications of AI and machine learning are found in the financial services, healthcare, automotive, and advertising and marketing sectors:
The federal government has adopted an ambitious investment strategy to support and grow the Canadian AI sector. In 2023, the government allotted significant funding to Canadian start-ups in the AI space. Moreover, as part of the 2024 federal budget, the government of Canada has set aside CAD2.4 billion in measures to promote and accelerate Canadian private-sector AI businesses. The package includes:
The federal government has adopted an operational strategy for the use of AI in the federal public service. This strategy focuses on enhancing data stewardship and literacy within the public service, responsibly integrating AI into modern services, optimising cloud usage, investing in infrastructure to support AI and protect data, and adopting a comprehensive approach to safeguarding government systems and ensuring resilient digital services by 2027. In particular, the strategy focuses on:
Canada has been at the forefront of examining AI-specific legal issues, conducting extensive studies with various stakeholders, and issuing guidance to government departments and industry. In terms of AI-specific legislation and AI-related updates to existing privacy and intellectual property regimes, Canada has been progressive in proposing legislative updates and conducting policy studies; actual legislative action, however, has been far more cautious, conservative and susceptible to political delay (eg, the proposed Artificial Intelligence and Data Act (AIDA) was delayed indefinitely when Parliament was dissolved for an election). By comparison, financial industry regulators, professional associations, and federal and provincial governments in Canada have made steady progress in issuing AI-specific guidance and directives for industry and the public sector.
Currently, there is no AI-specific law in force in Canada. The proposed federal private-sector privacy law, Bill C-27 (Digital Charter Implementation Act, 2022), would have introduced new AI-specific legislation, the Artificial Intelligence and Data Act (AIDA), aimed at ensuring the responsible development of AI in Canada. Bill C-27, along with AIDA, has been delayed indefinitely, and it is uncertain whether or when similar or identical legislation will be introduced or come into force. Separately, Ontario’s Working for Workers Four Act, 2024, which takes effect in 2026, will impose specific requirements on employers using AI in the hiring process.
AI systems often involve the collection, use and disclosure of personal information. Businesses involved in the development, deployment and use of AI should ensure compliance with Canadian federal and provincial privacy laws, consumer protection laws, human rights laws, employment law, criminal law, and industry-specific laws (where applicable), including:
In September 2023, the federal government introduced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, followed in 2025 by an Implementation Guide for Managers of Artificial Intelligence Systems. The Code is a voluntary regime wherein signatories commit to adopting identified measures aimed at reaching desirable outcomes in the development, management, and use of generative AI systems, while the implementation guide elaborates on best practices in greater detail.
The federal government has also issued a Guide on the Use of Generative AI to assist federal institutions in assessing and mitigating ethical, legal and other risks before deploying generative AI tools.
This is not applicable in Canada.
This is not applicable in Canada.
This is not applicable in Canada.
In January 2024, the federal government concluded a public consultation relating to future amendments to the Copyright Act, considering the impacts of recent developments in AI, namely the introduction of robust generative AI. The key issues addressed in the consultation process are outlined in the consultation paper, Consultation on Copyright in the Age of Generative Artificial Intelligence, and include text and data mining, authorship and ownership of AI-generated works, and infringement and liability.
In February 2025, the government released a report summarising the perspectives of participants on several key issues. It received responses from about 1,000 interested Canadians and 103 organisations or expert stakeholders, and held seven roundtable discussions with 62 stakeholders. Below are some of the key issues identified:
In June 2022, “Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts”, also known as the Digital Charter Implementation Act, 2022, was introduced. Bill C-27 was designed to overhaul the federal private-sector privacy legislation, PIPEDA, and modernise the framework for the protection of personal information in the private sector. Bill C-27 died on the order paper and would require reintroduction in the House of Commons. If reintroduced in identical or substantially similar form, it would introduce the following legislative updates:
To date, there have only been a few judicial decisions in Canada that have substantively addressed AI issues, and only one decision related to generative AI and intellectual property rights. There have yet to be any reported decisions from the Copyright Board of Canada or the Patent Appeal Board regarding the granting of copyright or patents for AI-generated works, though more are expected given the growing number of filings related to AI-generated works.
Stross v Trend Hunter, 2020 FC 201, is the sole decision dealing with generative AI and intellectual property rights. In this decision, the Federal Court of Canada found that the defendants could not rely on the fair dealing defence for their use of the plaintiff’s photographs in an article generated with the assistance of AI. The plaintiff was a professional photographer who photographed housing projects in the US. The defendants reproduced six of the plaintiff’s photographs in an article on their website; the article was created using AI together with personnel to process data generated by the website, which was then used to prepare consumer trend reports for clients. The court found the defendants liable for copyright infringement because their use of the photographs did not constitute fair dealing. The court did find that the defendants’ use of AI satisfied the first part of the fair dealing test, as it met the definition of “research” under Section 29 of the Copyright Act: the use was a computerised form of market research that measured consumer interaction and preferences for the purposes of generating data for clients. However, the second part of the fair dealing test was not satisfied because the defendants’ ultimate goal was commercial in nature, for the benefit of the defendants and their clients, with no benefit to the plaintiff and no broader public interest purpose.
In Haghshenas v Canada (Citizenship and Immigration), 2023 FC 464, the Federal Court of Canada dismissed an applicant’s request for judicial review of an immigration officer’s refusal of his work permit application. The court rejected the applicant’s argument that the officer’s decision was not procedurally fair because it was reached with the assistance of artificial intelligence. The court found that the use of AI was not relevant to the duty of procedural fairness because an officer was involved in making the decision in question, and because judicial review addresses the procedural fairness and/or reasonableness of a decision.
In James v Amazon.com.ca, Inc., 2023 FC 166, the Federal Court found that it was not within the court’s jurisdiction to rule that the defendant’s AI-based and automated decision-making (ADM) data request process did not comply with the Personal Information Protection and Electronic Documents Act (PIPEDA). In this case, the defendant had used an automated decision-making process to deny the applicant access to personal information, which the applicant argued was a violation of PIPEDA. The court found that the use of this AI technology fell outside the scope of Section 14, that the matter was not raised in the complaint or addressed by the Privacy Commissioner, and that there was no basis in the record to entertain an argument around AI as an explanation for why access was denied.
In Moffatt v Air Canada, 2024 BCCRT 149, a BC small claims tribunal found that companies that deploy AI-enabled chatbots can be held liable for negligent misrepresentations the chatbot makes to consumers on their websites. In this case, the plaintiff used a chatbot on an airline’s website to search for flights following the death of a family member. The chatbot indicated that the plaintiff could apply a bereavement fare retroactively; however, the plaintiff later learned from Air Canada that retroactive applications are not permitted. In a suit for a partial refund, the plaintiff argued that he relied on the chatbot’s advice. The airline claimed the plaintiff did not follow the correct procedure and, in any case, that Air Canada could not be held liable for information provided by its chatbot – implying, in the opinion of the tribunal, that the chatbot is a separate legal entity. The tribunal rejected the airline’s arguments and found it responsible for the negligent misrepresentations on its website. Representations made by chatbots were therefore held to the same standard as any other information statically presented on the website.
Several AI cases commenced in 2024 remain unresolved and are currently making their way through the courts. Several major Canadian media outlets have commenced an action against OpenAI, claiming that OpenAI used their copyrighted content to train its AI models. In a similar vein, the Canadian Legal Information Institute (CanLII) has filed suit against Caseway, an AI legal research firm, for allegedly engaging in large-scale data extraction from CanLII’s database.
In Canada, there is no overarching AI-specific law. For this reason, various government departments and regulatory agencies bear the responsibility for overseeing and administering laws specific to the different uses of AI as well as developing AI-specific guidance.
In 2019, the federal government appointed an Advisory Council on AI, which focuses on examining how to advance AI in Canada in an open, transparent, and human rights-centric manner. In particular, the Advisory Council on AI has a working group on extracting commercial value from Canadian-owned AI and data analytics.
The Office of the Privacy Commissioner of Canada investigates complaints, conducts audits and pursues court action under the federal public-sector and private-sector privacy laws, including in respect of violations relating to the collection, use and transfer of personal information in AI systems. The provincial privacy commissioners in Alberta, British Columbia, Quebec, and other provinces with privacy laws play a similar investigation and enforcement role with regard to the use of personal information in AI systems within their provinces. Further to this, if AIDA or similar legislation were eventually passed, the Minister of Innovation, Science, and Industry (the “Minister”) would become responsible for the administration and enforcement of all non-prosecutable offences under AIDA. There would also be a new statutory role for an AI and Data Commissioner, who would support the Minister in carrying out these responsibilities.
Federal and provincial human rights commissions are also engaged in studies to understand the implications of AI on discrimination and other human rights issues, including data discrimination, racial profiling, and failure to ensure community participation and human oversight over AI systems.
Industry-focused regulators are also making progress in Canada to address the impacts of AI within their regulatory authority. Health Canada issued guiding principles for the development of medical devices that use machine learning (a form of AI). The Office of the Superintendent of Financial Institutions is also updating its model risk guidelines to account for the use of AI and digital technologies and conducting AI-specific studies to establish safeguards around the use of AI in financial services. Canadian federal and provincial securities regulators are increasingly using AI to monitor customer identification and transactions to detect financial crimes, insider trading and market manipulation.
The recently released Implementation Guide for Managers of Artificial Intelligence Systems is a non-binding guide that supports the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. While the Code primarily guides developers and managers of advanced generative AI systems, it also outlines responsible AI practices more broadly. The guide details essential best practices for the responsible and effective use of AI technologies, emphasising safety protocols to mitigate risks, accountability measures to uphold ethical standards, and human oversight and monitoring to maintain control over AI operations. It complements existing Canadian policies and laws, including those related to privacy, competition, consumer protection, and copyright.
In late 2024, the Canadian Securities Administrators (CSA) published a non-binding public notice in response to the increasing use of AI systems in capital markets. The notice highlights several overarching themes relating to the use of AI:
Additionally, the notice offers guidance on how securities law currently applies to the use of AI. While this guidance is not binding, market participants implementing AI should be aware of the CSA’s approach to AI matters concerning registrants, non-investment fund reporting issuers (non-IF issuers), marketplaces and marketplace participants, clearing agencies and matching service utilities, trade repositories and derivatives data reporting, designated rating organisations, and designated benchmark administrators.
On 4 April 2023, the Office of the Privacy Commissioner of Canada (OPC) launched an investigation into OpenAI, the company behind the artificial intelligence-powered chatbot ChatGPT. The investigation was launched in response to a complaint alleging that ChatGPT collects, uses and discloses personal information without consent.
In February 2021, the federal and provincial privacy commissioners (Alberta, British Columbia, and Quebec) (“Offices”) launched a joint investigation to examine whether Clearview AI, Inc.’s (“Clearview”) collection, use and disclosure of personal information by means of its facial recognition tool complied with federal and provincial privacy laws applicable to the private sector. The Offices found that Clearview engaged in the collection, use and disclosure of personal information through the development and provision of its facial recognition application, without the requisite consent and for a purpose that a reasonable person would find to be inappropriate. The Offices recommended that Clearview:
There are currently no AI-specific standard-setting bodies in Canada; however, in 2017 the Canadian Institute for Advanced Research released the Pan-Canadian Artificial Intelligence Strategy (PCAIS), which lays out Canada’s three-pillared strategy for becoming a world leader in AI.
As part of the PCAIS, the Government of Canada has pledged CAD8.6 million in funding from 2021-26 for the Standards Council of Canada to develop and/or adopt standards related to artificial intelligence. In March 2023, the Standards Council of Canada expanded the Canadian Data Governance Standardization Collaborative to address national and international issues related to both AI and data governance through a new AI and Data Governance (AIDG) Standardization Collaborative to develop standardisation strategies in this area.
In 2017, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created a joint technical committee on AI: ISO/IEC JTC 1/SC 42 (“Joint Committee on AI”) that aims to provide guidance and develop a standardisation programme on Artificial Intelligence.
So far, the Joint Committee on AI has published 17 standards documents with 27 more under development. While most of these standards documents provide high-level information on AI as opposed to specific guidelines, some of the concrete measures contemplated in the published and under-development standards include risk management tools such as AI impact assessments. Of note, the proposed Artificial Intelligence and Data Act will also introduce impact assessment requirements if it becomes law.
The extent of regulation associated with the government’s use of AI varies at the national and provincial/local level in Canada.
At the federal level, the Canadian government’s approach to regulating AI and algorithms is anchored by the Directive on Automated Decision-Making (DADM), supported by the Algorithmic Impact Assessment (AIA) tool. The DADM is the first national policy focused on algorithmic and automated decision-making in public administration in Canada. The DADM applies to any system, tool, or statistical model used to recommend or make an administrative decision about a client. Under the DADM, the Assistant Deputy Minister, or any other person named by the Deputy Head, is responsible for:
The AIA supports the DADM as a risk assessment tool by determining the acceptability of AI solutions from an ethical and human perspective. It contains questions that assess areas of risk related to the project, system, algorithm, decision, impact, and data used.
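To make the mechanics concrete, the following is a minimal, hypothetical sketch of how a questionnaire-based scoring tool in the spirit of the AIA might map answers to an impact level (the DADM contemplates impact levels I through IV). The questions, weights and thresholds below are invented for illustration and do not reproduce the actual AIA methodology.

```python
# Hypothetical sketch of a questionnaire-based impact scoring tool in the
# spirit of the AIA; questions, weights and thresholds are illustrative
# only and do not reproduce the actual AIA methodology.

from dataclasses import dataclass

@dataclass
class Question:
    text: str
    weight: int  # risk points contributed by a "yes" answer

QUESTIONS = [
    Question("Does the system make decisions without human review?", 3),
    Question("Does the system use personal information?", 2),
    Question("Could an erroneous decision affect a person's rights?", 4),
    Question("Is the training data drawn from a single source?", 1),
]

def impact_level(answers: list[bool]) -> str:
    """Map yes/no answers to an impact level (I = lowest, IV = highest)."""
    score = sum(q.weight for q, a in zip(QUESTIONS, answers) if a)
    ratio = score / sum(q.weight for q in QUESTIONS)
    if ratio < 0.25:
        return "Level I"
    elif ratio < 0.5:
        return "Level II"
    elif ratio < 0.75:
        return "Level III"
    return "Level IV"

print(impact_level([True, False, False, False]))  # -> "Level II"
```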
The federal government has also released its Guide on the Use of Generative AI and its AI strategy for the federal public service, which provide principled guidance to federal institutions on their use of generative AI tools, including best practices in respect of privacy, compliance and record-keeping, and future policy direction in the implementation of AI for improving public service.
The provincial governments have not been as proactive as the federal government when it comes to AI and automated decision-making, and there are no provincial equivalents to the DADM. However, some provinces, like Ontario and Quebec, have started considering ways of regulating the use of automated systems in the public sector.
The Ontario government maintains a Digital and Data Strategy to regulate AI and algorithms in public decision-making. Ontario is in the process of developing its Trustworthy Artificial Intelligence framework, which is functionally similar to the federal DADM. The framework’s purpose is to ensure responsible AI use that minimises harm and maximises benefits for Ontarians.
The government of Quebec’s Law 25, “An Act to modernise legislative provisions as regards the protection of personal information”, covers automated decision systems’ use of personal information.
In Haghshenas v Canada (Citizenship and Immigration), 2023 FC 464, the Federal Court of Canada dismissed an applicant’s request for judicial review of an immigration officer’s refusal of his work permit application. The court rejected the applicant’s argument that the officer’s decision was not procedurally fair because it was reached with the assistance of AI, finding that this consideration was not relevant to the duty of procedural fairness. The use of AI was irrelevant because an officer had made the decision in question, and judicial review is meant to deal with the procedural fairness and/or reasonableness of a decision.
In July 2017, the Canadian government released the CDS/DM “Joint Directive to Develop and Operationalize a Defence Program Analytics Capability”. The directive seeks to create analytics capability, drive digital transformation, and establish data management as a core capability. The directive also created new positions, such as Chief Data Officer (CDO) for the Department of National Defence and Canadian Armed Forces (DND/CAF) and, in July 2018, the Assistant Deputy Minister of Data, Innovation and Analytics (ADM(DIA)). The scope of the DND/CAF strategy includes all data held in a repository in any format and at any point in the data lifecycle, including data that is created, collected, and/or used in military operations and exercises as well as in corporate administrative processes.
Defence Research and Development Canada and the Centre for International Governance Innovation (CIGI) co-organised a nationwide workshop series in autumn 2021 and winter 2022, which examined AI from the perspective of defence and security, specifically in the context of Canadian national security. The workshops focused on AI and semi-autonomous systems, AI and cybersecurity, and enabling pan-domain command and control (C2). Participants included key stakeholders from DND/CAF, the federal government and leading institutions supporting research on AI. The workshops examined issues such as data quality assessment, data format, data sharing, bias mitigation, human-machine teaming, and the ethics of autonomous systems, and identified the need to upgrade government and military platforms around federated data networks, as well as the need for stronger collaboration between government, industry, and higher education to scale Canada’s digital infrastructure.
The recently enacted National Security Review of Investments Modernization Act amends the foreign investment review provisions under the Investment Canada Act. In particular, the Guidelines on the National Security Review of Investments - Investment Canada Act recognise AI as a sensitive area of technology and raise the level of scrutiny for foreign investments in the Canadian AI sector – eg, where the investor is affiliated with an adverse or rival government.
Generative AI – artificial intelligence tools capable of creating new data and content that did not previously exist – includes chatbots such as ChatGPT, image-generation tools, and video- and audio-generation tools. Issues created by such generative AI tools span multiple industry sectors and include intellectual property issues related to ownership, authorship and originality; litigation and liability issues; and privacy law issues.
On the intellectual property front, the government of Canada, recognising that the issues posed by generative AI do not fit neatly into the current legislative and jurisprudential tools, engaged in a Consultation on Copyright in the Age of Generative Artificial Intelligence. This consultation examined areas related to text and data mining, authorship and ownership of works generated by AI, and infringement and liability regarding AI. To date, the government has only provided a summary report identifying the issues that were outlined in the submissions from stakeholders.
The Office of the Privacy Commissioner of Canada (OPC) has also acknowledged the potential privacy issues associated with generative AI. The OPC launched an investigation on 4 April 2023 into whether personal information is improperly collected, used, or disclosed as part of OpenAI’s training process. Moreover, in December 2023, the OPC along with provincial privacy regulators published the "Principles for responsible, trustworthy and privacy-protective generative AI technologies" to identify key considerations for the application of privacy principles to the development, management, and use of AI.
Under Canadian privacy law, the collection, use and disclosure of personal information of data subjects for commercial activities is limited to that which is necessary for the purposes identified by the organisation. Subject to certain exceptions, it is unlawful to later collect, disclose or use the personal information for an alternative purpose without obtaining additional consent. Moreover, personal information must only be retained as long as necessary to fulfil the defined purpose. As such, use of personal information, either as inputs or training data for AI, will usually need to be supported by appropriate consent from data subjects. The foregoing limitations and consent requirements create parameters on the commercial use of personal information and will be important considerations when evaluating the viability of exposing datasets to AI applications.
Current Uses of AI
As AI continues to develop, there are a growing number of use cases which may aid the practice of law. Areas of law practice in which AI has been increasingly used include the following.
E-discovery
AI-powered e-discovery tools assist with quickly and efficiently reviewing documents in the litigation discovery process. One such technique is predictive coding: using AI techniques such as deep learning, these tools learn from words and word patterns in a small set of documents marked as relevant and/or privileged and then apply that learning to a large dataset of other documents.
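By way of illustration only, the following sketch shows the core predictive coding loop using the open-source scikit-learn library: a classifier is trained on a small, lawyer-reviewed seed set and then used to rank a larger unreviewed corpus. The documents, labels and model choice are placeholder assumptions, not a depiction of any particular vendor’s tool.

```python
# A minimal sketch of predictive coding for e-discovery: a classifier is
# trained on a small set of documents reviewed by lawyers, then used to
# rank a much larger unreviewed set. The documents and labels here are
# toy placeholders; real systems use far richer features and models.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Small seed set marked by human reviewers (1 = relevant, 0 = not relevant).
seed_docs = [
    "merger negotiations and due diligence schedule",
    "quarterly sales figures for the retail division",
    "draft share purchase agreement for the acquisition",
    "office holiday party planning and catering",
]
seed_labels = [1, 0, 1, 0]

# Large unreviewed corpus to be prioritised for human review.
corpus = [
    "board minutes discussing the proposed acquisition",
    "cafeteria menu for next week",
]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Score the corpus; a higher probability means more likely relevant.
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
for doc, score in sorted(zip(corpus, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

In practice the ranking is fed back to human reviewers, whose further coding decisions retrain the model iteratively.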
Legal research and legal analytics
More recently, AI tools have been introduced that purport to provide increased productivity and efficiency for legal research via use of natural language processing and machine learning technologies. For example, some offerings include AI-powered research tools that may provide answers to legal questions asked in plain language, as opposed to more traditional research searches using keywords and Boolean operators.
Legal technology companies are also harnessing the predictive ability of AI to forecast likely outcomes in court decisions. For example, in the tax context, Blue J Legal purports to predict outcomes of decisions with “90% accuracy” by analysing thousands of previous cases as comparators. Similarly, Lex Machina uses natural language processing to review court decisions to draw insights and predict how courts, judges and lawyers will behave, which, in turn, allows lawyers to anticipate the behaviours and outcomes that different legal strategies will produce.
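The comparator-based approach described above can be illustrated with a deliberately simplified sketch: past case summaries with known outcomes are vectorised, and a new fact pattern is matched against its most similar precedents. The cases, labels and similarity measure are invented for illustration; commercial tools rely on far richer features and proprietary models.

```python
# Illustrative only: a toy comparator-based outcome lookup. Past cases
# are represented as text summaries with known outcomes; a new fact
# pattern is compared against them and the outcomes of the most similar
# cases are reported.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_cases = [
    ("worker paid hourly, set own schedule, supplied own tools", "contractor"),
    ("worker supervised daily, company equipment, fixed hours", "employee"),
    ("worker invoiced monthly, worked for several clients", "contractor"),
    ("worker on payroll with benefits and performance reviews", "employee"),
]

new_facts = "worker used company equipment under daily supervision"

vectorizer = TfidfVectorizer()
case_matrix = vectorizer.fit_transform([facts for facts, _ in past_cases])
query_vec = vectorizer.transform([new_facts])
sims = cosine_similarity(query_vec, case_matrix).ravel()

# Report the two most similar past cases and their outcomes.
for i in sims.argsort()[::-1][:2]:
    print(f"{sims[i]:.2f}  {past_cases[i][0]}  ->  {past_cases[i][1]}")
```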
Contractual analysis
AI technologies are being deployed to assist in contract analysis and review. AI can quickly scrutinise contracts, identifying missing clauses, inconsistencies in terminology or undefined terms across one or more documents. For example, Kira Systems leverages the capability of machine learning to understand broad concepts, rather than a constrained rule-based “if-then” analysis, to identify and extract relevant clauses in its contract analyses.
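As a simplified illustration of the machine learning approach (and not of any vendor’s actual system), the sketch below trains a toy clause-type classifier on labelled example clauses, runs it over a contract’s paragraphs, and flags expected clause types that were never detected.

```python
# A minimal sketch of ML-assisted contract review: a clause-type
# classifier trained on labelled example clauses is run over a contract's
# paragraphs, and any expected clause type that is never detected is
# flagged as potentially missing. Examples and categories are toy
# placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

train_clauses = [
    ("Either party may terminate this agreement on 30 days notice.", "termination"),
    ("This agreement is governed by the laws of Ontario.", "governing_law"),
    ("Each party shall keep the other's information confidential.", "confidentiality"),
    ("Neither party's liability shall exceed the fees paid.", "limitation_of_liability"),
]
texts, labels = zip(*train_clauses)

vectorizer = TfidfVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

contract_paragraphs = [
    "The laws of the Province of Ontario govern this agreement.",
    "All confidential information must be kept strictly confidential.",
]

found = set(model.predict(vectorizer.transform(contract_paragraphs)))
print("Potentially missing clauses:", set(labels) - found)
```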
Patent and trade mark searches
AI is being utilised to benefit intellectual property practitioners by assisting in patent and trade mark searches. For example, NLPatent uses machine learning and natural language processing to understand patent language, which allows lawyers to search for patent terms and prior art in plain language instead of relying on keywords. By describing the concept of an invention, prior art is brought to the fore in AI-assisted patent searches.
In the trade mark context, companies such as Haloo utilise AI-powered searches to provide rapid and more expansive mark searches, helping to ensure that there are no existing, conflicting marks that may interfere with the registration of trade marks and trade names.
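A minimal sketch of this kind of plain-language, semantics-driven search follows, assuming the open-source sentence-transformers library and its general-purpose all-MiniLM-L6-v2 model; the “prior art” abstracts are invented, and production tools use domain-tuned models over far larger indexes.

```python
# A sketch of plain-language prior-art search using semantic embeddings
# rather than keyword matching. Assumes the open-source
# sentence-transformers library; the abstracts are invented examples.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

prior_art = [
    "A rotary blade assembly for cutting vegetation at variable heights.",
    "A wireless charging pad using inductive coupling for mobile devices.",
    "A beverage container with a double-walled vacuum insulation layer.",
]

# Plain-language description of the invention, not a keyword query.
query = "a travel mug that keeps drinks hot using a vacuum between two walls"

doc_embeddings = model.encode(prior_art, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank prior art by cosine similarity to the description.
scores = util.cos_sim(query_embedding, doc_embeddings).squeeze(0)
for score, text in sorted(zip(scores.tolist(), prior_art), reverse=True):
    print(f"{score:.2f}  {text}")
```

Note how the insulated-container abstract would rank highest despite sharing almost no keywords with the query, which is the point of semantic rather than keyword search.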
Emerging Uses of AI
AI, with its capability for natural language processing, represents a large shift in how legal research can be conducted more quickly and efficiently. More recently, the release of more general-scope AI technologies, such as ChatGPT, may represent an inflection point in the practice of law. Potential use cases for AI like ChatGPT in the legal field include assisting in legal research, drafting standard legal documents, providing general legal information to the public and assisting in legal analysis.
Rules or Regulations Promulgated or Pending by Law Societies/Courts
Thus far, law societies in Canada have not promulgated rules directly to address the use of AI in the legal profession. However, the duty of lawyers to be competent may provide provisional guidance in this area until more concrete rules are provided. The commentary to Rule 3.1-2 of the Model Code of Professional Conduct, set out by the Federation of Law Societies, stipulates, “[t]o maintain the required level of competence, a lawyer should develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice and responsibilities. A lawyer should understand the benefits and risks associated with relevant technology, recognising the lawyer’s duty to protect confidential information….” Law societies in many jurisdictions, including Ontario, have amended their commentaries to include similar language.
The Federal Court of Canada has released key guidance on the use of generative AI by legal professionals in legal proceedings. The Federal Court imposes disclosure obligations on parties whenever AI is used to generate content in any document prepared for litigation and submitted to the Court. In emphasising caution regarding the risks of using generative AI (namely, hallucinations), the court has stressed that all material generated by AI should be subject to human scrutiny. Several provincial and territorial courts, including those in Manitoba, Yukon and Alberta, have issued similar guidelines.
Ethical Considerations
Lawyers in Canada abide by the ethical codes of conduct mandated by provincial and territorial law societies. Generally, all the law societies have similar requirements with respect to ethical codes of conduct. Notably, lawyers are required to be competent and efficient. Competency and efficiency requirements, in a future where AI is commonplace, may mean that lawyers must know how to effectively use these tools to assist clients.
However, the implementation of AI in legal practice still faces unresolved issues relating to client confidentiality. There are ongoing investigations by Canada’s Privacy Commissioner into OpenAI relating to complaints of alleged collection, use and disclosure of personal information without consent. In Canada, lawyers are obliged not to reveal confidential client information. The use of AI models, such as ChatGPT and other large language models (LLMs), increases the risk of inadvertent disclosure of confidential information. Current rules do not address the situation of inadvertent disclosure.
Another professional and ethical challenge that must be considered is the accuracy and reliability of AI tools. With models such as ChatGPT, there have been notable concerns regarding “hallucinations”, that is, instances where the model confidently puts forward factually incorrect statements as if true.
Finally, there remains the tension between access to justice and concerns about the unauthorised practice of law, particularly in the context of public-use AI. AI tools have the potential to make basic legal information more easily accessible and digestible to the public at large. Notably, law societies in Canada do not have authority over the provision of legal information; rather, they regulate the provision of legal advice. The distinction between legal information and legal advice is not clearly demarcated. Public use of chatbots to request legal advice may therefore raise concerns of unauthorised practice of law.
In Canadian law, tort law is the most relevant theory of liability for personal injury or commercial harm arising from AI-enabled technologies where the injured person has no pre-existing legal relationship (ie, by way of contract). Although it is possible for liability to arise through intentional torts or strict liability, negligence law will likely be the most common mechanism for plaintiffs seeking compensation for losses from the defendant’s use of an AI system. The constituent elements of a negligence claim are:
To bring a tort claim, the plaintiff has the burden of proving that an AI system was defective, that the defect was present at the time the AI system left the defendant’s control, and that the defect contributed to or caused the plaintiff’s injury. A defect related to the manufacturing, design or instruction of an AI-based system could give rise to a tort claim.
Nevertheless, it may be difficult for plaintiffs to identify defendants. AI systems involve a complex array of components and contributors: software programmers, data providers, owners and users of systems, and third parties. Furthermore, anonymous defendants present a concern because it may not be possible to identify the humans behind remotely operated robotic systems. Another challenge arises in determining the appropriate jurisdiction or venue for litigation when the many different contributors are located in potentially different legal jurisdictions.
Canada does not presently have a strict liability regime under tort law for manufacturers of defective products. General principles in tort law in Canada are governed by common law rather than by statute, thus there are currently no proposed regulations regarding the imposition and allocation of liability as it relates to AI technologies.
Biased outputs by AI systems may be found when they create an unjustified and adverse differential impact on any of the prohibited grounds for discrimination under the Canadian Human Rights Act, or provincial or territorial human rights or discrimination legislation. For example, if an AI system is used by an employer to triage applications for job openings, employers must make sure that prospective candidates are not being adversely ranked due to information in their applications about gender status, sexual orientation, disability, race or ethnicity status, or any other prohibited grounds for discrimination under local law.
Biased outputs by AI systems could also be derived indirectly, such as making adverse and systematic differentiations based on variables that may serve as proxies for protected grounds like race or gender, such as making onboarding decisions based on credit score.
Laws in Canada are evolving to regulate the risks of biased outputs from AI systems. In June 2022, the government of Canada introduced Bill C-27, which would have enacted AIDA, among other privacy-related statutes, if passed. Bill C-27, along with AIDA, has been delayed indefinitely, and it is uncertain when a similar or identical piece of legislation will be introduced. AIDA would have regulated the formation and subsequent utilisation of “high-impact systems” of AI. It would have required businesses that design prescribed AI systems to mitigate biased outputs in their designs, document appropriate uses and limitations, and disclose such limitations. Businesses that use these regulated AI systems would have been expected to consider the bias-related risks of these systems, monitor their use, and mitigate biased outputs.
Some recent changes have been introduced with respect to the collection, use and disclosure of biometric-related personal information, including for facial recognition purposes. In the Province of Quebec, Law 25, An Act to modernise legislative provisions as regards the protection of personal information, amended the province’s Act to establish a legal framework for information technology (“Quebec IT Law”). The Quebec IT Law requires businesses that create a database of biometric characteristics and measurements (eg, a database of faces for facial identification purposes) to disclose the database to Quebec’s privacy regulator, the Commission d’accès à l’information, promptly and no later than 60 days after it is brought into service. The disclosure requirement obliges businesses to complete and submit a prescribed form to the regulator, describing the biometric database, how and why it is being used, and any potential risks associated with its use and maintenance.
Biometric personal information has also been expressly defined as “sensitive” by Quebec’s Law 25. As a result, the collection, use and disclosure of biometric personal information in Quebec requires express consent on the part of the respective data subject.
The collection, use and disclosure of biometric personal information without express consent has been the topic of a joint investigation by the Office of the Privacy Commissioner of Canada and provincial privacy regulators in Canada; namely, the joint investigation of Clearview AI. Clearview AI’s facial recognition technology was found to scrape facial images and associated data from publicly accessible online sources (eg, public social media accounts) and to store that information in a database. While the information was scraped from publicly accessible social media accounts, Canadian privacy regulators found that the purposes for which Clearview AI used the facial images and associated data were unrelated to the purposes for which the images were originally shared on social media sites, thereby requiring fresh and express consent for the new uses and purposes.
Automated decision-making technologies are being used across sectors and involve the use of an AI algorithm to draw conclusions based on data from its database and parameters set by a business. Examples of automated decision-making technologies include AI screening processes that determine whether applications for loans online should be granted and aptitude tests for recruitment, which use pre-programmed algorithms and criteria to triage job applications and reject applicants who do not meet certain criteria.
Use of automated decision-making technologies is regulated by federal and provincial privacy laws in Canada, mainly by imposing a disclosure obligation on businesses that use such technologies to make a decision that materially impacts data subjects based exclusively on the technology without further human involvement.
There is currently no standalone federal or provincial law or regulation that applies specifically to chatbots or technologies that substitute for services rendered by natural persons. Use of such technologies is subject to the automated decision-making regulations under Canadian privacy law, whereby a business must inform data subjects that automated decision-making technologies are using their personal information to make automated decisions that could have a material impact on the data subjects.
Moreover, in June 2022, the government of Canada tabled Bill C-27, which would have enacted AIDA, among other privacy-related statutes, if passed. As discussed above, the proposed AIDA was delayed indefinitely when Parliament was dissolved for an election. AIDA would have regulated the formation and subsequent utilisation of “high-impact systems” of AI, imposing requirements on businesses that design prescribed high-impact systems, such as duties to mitigate and correct biased outputs, document the limitations of the AI system, and disclose those limitations to users of the high-impact system.
While Canada has not yet enacted AI-specific legislation which regulates private-sector procurement processes for the acquisition or supply of AI goods and services, businesses need to comply with their contractual obligations and requirements under relevant laws. Suppliers should ensure compliance with product liability obligations under applicable sale of goods laws, consumer protection laws, tort laws and human rights laws (including supply chain requirements).
In terms of public sector procurement, the relevant government agencies in Canada have standardised processes for the procurement of AI solutions by establishing a pre-qualified list of suppliers that can provide the federal government departments and agencies across Canada with responsible and effective AI services, solutions, and products.
Under federal and provincial laws in Canada, employers are restricted from taking actions that have, or are intended to have, an unjustified and adverse differential impact on employees under one or more prohibited grounds for discrimination, whether under the Canadian Human Rights Act or provincial or territorial human rights or discrimination laws. Risks are greater for employers where such decisions are systematic and involve a large number of employees. Therefore, when AI systems are being used by employers, whether during the onboarding, termination or employment phase of the relationship, employers have a duty to ensure AI systems are not discriminating against employees, directly or indirectly (such as by relying on data that serves as a proxy for discrimination). Furthermore, Ontario’s Working for Workers Four Act, 2024, which takes effect in 2026, will impose specific requirements on employers using AI in the hiring process.
Use of technologies to make automated decisions about employees is also regulated indirectly in federal and provincial privacy statutes, mainly by imposing a disclosure obligation on businesses that use such technologies to make decisions that materially impact data subjects and are based exclusively on the technology, without further human involvement (eg, if an AI algorithm rejects job applicants who are not Canadian citizens or permanent residents without such decisions being reviewed by a human).
In October 2022, the Province of Ontario amended its employment standards legislation, the Employment Standards Act, to require employers with 25 or more employees in Ontario to have a written “electronic monitoring policy” in place conveying all the ways that electronic monitoring is used by the employer to monitor employees. These could include, for example, monitoring attendance in the office, activity on a work computer, emails and other communications, or internet browsing activity. Employers need to share the electronic monitoring policy with existing employees, including, in certain circumstances, when it is materially updated, and need to provide new employees with the policy within a prescribed period of onboarding.
In terms of evaluations, employers must disclose to employees when AI is being used to make automated decisions that can materially impact an employee, such as if AI is used to make evaluations about an employee’s performance without human involvement. Such uses of AI should be conveyed to employees in the context of an employee privacy policy.
Digital platform companies using AI are subject to Canadian federal and provincial private-sector privacy laws for the collection, use and disclosure of the personal information of customers. With regards to e-commerce, digital platform companies are also subject to Canada’s Anti-Spam Legislation (CASL). CASL protects consumers and businesses from the misuse of digital technology, including spam and other electronic threats. Digital platform companies are also subject to human rights and privacy laws in Canada with regards to the handling of employee personal information and any recruitment and hiring practices through automated decision-making systems.
Digital platform companies operating in Quebec must comply with provincial private-sector privacy law requirements for transparent and accountable automated decision-making (ADM). This includes providing notice of ADM processes and complying with requests to correct personal information used in decisions.
In Canada, there is no specific AI legislation or regulation in financial services. In the absence of an AI-specific regime, financial institutions developing, deploying, and using AI solutions must comply with all applicable existing laws, including financial services laws, consumer laws and privacy laws.
In Canada, there are different financial regulators, including the Office of the Superintendent of Financial Institutions (OSFI), the Financial Consumer Agency of Canada (FCAC), and the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC), which regulate banks and financial services. Certain financial services are regulated provincially, such as in the areas of insurance and securities. The focus of Canadian banking and financial services regulators has been towards establishing regulatory guidelines and oversight over the responsible use of AI in the financial services sector, including measures to mitigate the risk of biases and discriminatory practices when dealing with customers and employees.
Canadian regulatory guidance with regards to the use of technology (including AI) in the provision of financial services includes the following:
In Canada, there is no AI-specific law for the healthcare and medical devices sector. Health Canada is focused on establishing a regulatory framework for the use of machine learning in medical devices. To this end, Health Canada, in collaboration with the US Food and Drug Administration (FDA) and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA), has jointly identified ten guiding principles that can inform the development of Good Machine Learning Practice (GMLP). These guiding principles will help promote safe, effective, and high-quality medical devices that use artificial intelligence and machine learning (AI/ML):
A regulatory framework governing autonomous vehicles in Canada is emerging at both the federal and provincial levels. In general, published guidelines at both the provincial and federal levels are informed by the standards set by the Society of Automotive Engineers (SAE) International, which define six levels of driving automation along a spectrum of degrees of human control. The safety standards for autonomous vehicles are regulated federally, while provinces and territories regulate drivers, liability, insurance, and traffic laws within their jurisdictions. Several provinces are currently engaged in pilot programmes testing fully autonomous vehicles on Canadian roads.
As discussed above, all collection, use and disclosure of personal information is subject to applicable federal or provincial privacy laws. Manufacturers of autonomous vehicles must consider the impacts of the use of AI solutions in autonomous vehicles on individual privacy and security.
Product liability can also be imposed on the designers, manufacturers, and retailers of AI products through contractual liability, sale of goods laws, consumer protection laws and tort law.
The ability of AI to act autonomously raises novel legal issues, particularly with respect to how to assign fault when an AI product causes injury or damage. In Canadian law, although there have yet to be any reported cases to date, tort law remains the most relevant theory of liability for personal injury or commercial harm arising from AI-enabled technologies where the injured person has no pre-existing legal relationship (ie, by way of contract). Negligence law will likely be the most common mechanism for plaintiffs seeking compensation for losses from the defendant’s use of an AI system. However, barriers may arise in accessing remedies through tort law, including the fact that it may be difficult for plaintiffs to identify defendants, or to establish negligence or causation where the damages concern an emergent and unexpected property of the given AI system. Outside the context of tort law, liability in contract may also present a concern in that parties may use the law of contracts and the contracting process to inappropriately limit or avoid liability by contract.
As litigation involving generative AI continues to increase in other jurisdictions such as the US, it is expected that Canadian courts will adjudicate similar cases in the future involving violation of copyright law due to generative AI technology and product liability litigation arising from AI products that result in injury.
Professional services are activities that use human capital for advisory purposes in areas such as legal, consulting, finance, marketing and information technology. Currently, there is no overarching, singular AI-specific regulation for professional services in Canada. Professional services providers using AI to deliver services must comply with contractual arrangements and applicable professional standards and codes of conduct.
For example, provincial and territorial law societies have issued guidance and practice notes for licensees on the use of generative AI, with a significant focus on lawyer conduct and maintaining client confidentiality.
Intellectual property rights inure to the author, owner, or inventor of a work or invention. Generative AI, which can produce complex creative works and inventions, challenges the fundamentals of intellectual property. Canadian patent legislation and jurisprudence have generally been clear that inventors are humans, therefore blunting any debate, for the time being, as to whether a generative AI model can be an inventor. Canadian copyright law, however, is much less certain. With a generative model creating fully realised creative works based solely on user inputs – which can be very rudimentary – the question arises as to whether the user is exhibiting sufficient skill and judgement in the expression of the idea. Furthermore, the process by which the generative AI model created the output based on the user input is often shrouded within a “black box”, whereby even the AI model programmer cannot identify exactly how the final expression was created. Drawing authorship into question creates a cascading effect whereby key determinants for infringement, such as access to the original copyrighted work, become harder to establish, and liability, if infringement is found, becomes harder to pin down.
The government of Canada has acknowledged in their Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (the “AI Consultation”) that the current Copyright Act is ill-equipped to address the novel questions posed by generative AI. Moreover, the government of Canada also concluded its Consultation on Copyright in the Age of Generative Artificial Intelligence, which recognises the profound impact generative AI has had on creatives and seeks input from stakeholders to reconcile this impact with the potential for innovation and growth.
Ownership of AI
With the rise of AI, and specifically generative AI, one of the main discussions surrounding intellectual property (IP) is whether AI can own IP, or whether an AI system has IP rights associated with a work. Currently, AI-created IP is being tested through the Canadian Intellectual Property Office. Both copyright and patents require the owners, authors, or inventors to be identified as part of the application and registration process. With AI, it is unclear who the owner, author, or inventor of the work may be. For example, in patent law, the courts have primarily held that an inventor must be a human; however, there are currently patent applications being prosecuted in Canada in which an AI system is named as the inventor. Similarly, in copyright, although the term “author” is not defined in the Copyright Act, it is unclear whether an AI system can be an author. These applications are being closely watched to determine the status of AI ownership in Canada.
AI training and IP
AI systems are trained on massive datasets that are often scraped from the internet. Depending on the AI model, these can include texts and images that may be subject to copyright or other intellectual property protection. Because protected data may be used for training, there is a risk that AI systems may infringe intellectual property rights in producing an output. Additionally, the training process itself may have infringed intellectual property rights if the data was not licensed for AI training.
Generative AI
Generative AI, or artificial intelligence models that can create new data and content that did not previously exist, has forced both legislators and regulatory bodies across several fields to reckon with the limitations of their current legal instruments. In particular, generative AI creates novel issues for intellectual property law in relation to authorship, infringement, liability, and data privacy law in relation to the collection and use of personal information for training purposes.
Intellectual Property Concerns
Intellectual property rights inure to the author, owner, or inventor of a work or invention. Generative AI, which can produce complex creative works and inventions, complicates the fundamentals of intellectual property. Canadian patent legislation and jurisprudence have been quite clear that inventors are humans, therefore blunting any debate, for the time being, as to whether a generative AI model such as ChatGPT can be an inventor. Canadian copyright law, however, is much less certain. With a generative model creating fully realised creative works based solely on user inputs – which can be very rudimentary – the question arises as to whether the user is exhibiting sufficient skill and judgement in the expression of the idea. Furthermore, the process by which the generative AI model created the output based on the user input is shrouded within a “black box”, whereby even the AI model programmer cannot identify exactly how the final expression was created. Drawing authorship into question creates a cascading effect whereby key determinants for infringement, such as access to the original copyrighted work, become harder to establish, and liability, if infringement is found, becomes harder to pin down.
To address potential issues with copyright, the government of Canada issued a Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things, but it has not yet proposed any changes based on that consultation. The government has also released a summary of submissions from the Consultation on Copyright in the Age of Generative Artificial Intelligence; however, no legal changes or recommendations have yet been made.
Canada’s Copyright Act does not provide a definition of “author”, but previous case law related to injunction applications states that copyright can only exist in works authored by a human being. Recently, however, the Canadian Intellectual Property Office (CIPO) granted a copyright registration for a work generated entirely by AI. Although the registration was granted, it is unclear how it will be enforced. The CIPO states that it does not guarantee that “the legitimacy of ownership or the originality of a work will never be questioned”. While the software supporting AI may be copyrightable, this does not automatically mean that the output of the software or AI is protected by copyright.
Ownership issues arise with patents as well. Under Canada’s Patent Act, an inventor and the party entitled to the benefit of the invention must be listed in the patent application. According to the Federal Court, an inventor must be:
In 2020, a patent application was filed in Canada listing an AI system as the inventor. This application is currently undergoing examination and has not yet been granted.
Trade secrets can be used to protect the software and algorithms of AI technology. Trade secrets are protected in Canada under the common law and the Civil Code of Quebec. Trade secrets relating to AI can also be protected by contract. In theory, this means that the AI technology, as well as the training data the AI was trained on, such as text or images, can be protected as trade secrets. Copyright can, to a certain extent, protect a compilation of data, but there is a question as to whether another person using a copyright-protected dataset infringes that copyright if they use the dataset to train their own AI program.
AI-generated works of art and works of authorship can include, for example, literary, dramatic, musical, and artistic works, all of which can be the subject of copyright protection in Canada. Before AI-generated works can be protected by copyright, however, they must first overcome two major hurdles:
Formal Requirements
Copyright protects the original expression of ideas that are fixed in a material form. Outputs from generative AI programs are expressions of ideas that are fixed in a material form, but there is some question as to their originality.
Originality in Canada requires that skill and judgement be involved in the expression of the idea. The Supreme Court of Canada in CCH Canadian Ltd. v Law Society of Upper Canada, 2004 SCC 13, defined “skill” as the use of one’s knowledge, developed aptitude, or practised ability, and “judgement” as the use of one’s capacity for discernment or ability to form an opinion or evaluation by comparing different possible options in producing the work. In addition, the work involved must be more than a purely mechanical exercise.
Canadian courts have not yet had to reckon with whether AI-generated work involves skill and judgement that goes beyond a purely mechanical exercise, but one can imagine that it will largely depend on the facts of the situation. Where a very basic prompt is given, such as “draw me a flower,” or “write me a poem about flowers,” it will likely be difficult to establish that skill and judgement went into the creation of the resulting generated image.
Authorship
The author of a copyrighted work is the person who exercised the skill and judgement in its creation. There are therefore three potential candidates for authorship of AI-generated works: the user inputting the prompt, the person who created the AI model, or the AI model itself.
As discussed above, the person who inputs the prompt can be said to exercise skill and judgement in creating the work depending on the complexity of the prompt that they input.
The creator of the AI model exercised skill and judgement in the creation of the model, and no doubt owns copyright to the code associated with the model, but the connection between their skill and judgement and the outputted work is likely too tenuous to establish authorship.
Attributing authorship to the AI model itself creates a number of issues, both conceptual and logistical. Conceptually, one might argue that generative AI, regardless of how “human” it may seem in its outputs, is still nothing more than a computer program applying complex rules to inputs in order to produce outputs. If courts or regulators adopt this view, it would be hard to argue that generative AI’s “if x then y” approach to creating works could ever amount to more than a purely mechanical exercise.
Logistically, copyright protection in Canada subsists for the life of the author plus 70 years, creating obvious issues for generative AI models that do not “die”. Furthermore, Section 5(1)(a) of the Copyright Act states that an author must be “a citizen or subject of, or a person ordinarily resident in, a treaty country”, which seems to contemplate that the author is a natural person. Finally, Section 14.1(1) of the Copyright Act conveys moral rights, or rights to the integrity of the work, that are separate from the economic rights of copyright. Generative AI models, which are (so far) non-sentient, cannot meaningfully exercise moral rights to the integrity of their works.
There are generally five considerations when commercialising, or otherwise incorporating into your business, the outputs of generative AI models such as those offered by OpenAI:
Licensing Considerations
Each generative AI model has different policies related to the use and ownership of the inputs (user prompts) and outputs of the program. Under OpenAI’s terms, users own their inputs, and OpenAI assigns to the user all of its right, title, and interest in and to the output. Other generative AI programs might retain an interest in the program’s output, however, so users should carefully review the legal terms associated with the program they are using.
Inaccuracies
Generative AI programs built on LLMs, such as ChatGPT, are prone to inaccuracies, or “hallucinations”, whereby the program produces an answer that appears correct but has no grounding in reality. Inaccurate outputs can give rise to a number of legal liabilities, for example under defamation law, consumer product liability law, and tort law.
Litigation Risk
Generative AI models are trained on massive data sets scraped from the internet, which often include data points such as images that are subject to intellectual property law protection. There is a risk that, by using these protected data points as inputs for generative AI models, the outputs of those models might infringe upon those protected works. Furthermore, and as discussed above, output inaccuracies can lead to litigation risk.
Privacy Considerations
A large number of the data points fed into generative AI models as training data are likely considered “personal information” under Canadian privacy law, meaning informed consent is likely required before that information is collected, used, or disclosed in the course of developing or operating the AI model.
Furthermore, consideration should be given to the user inputs and potential confidentiality breaches that might occur if sensitive information is input into the system.
Bias
Generative AI models, like all AI models, are susceptible to bias stemming from the personal bias of their programmers and any bias baked into their training data.
The Canadian Competition Bureau recently conducted a public consultation on Artificial Intelligence and Competition in Canada, highlighting emerging issues and concerns from both domestic and international stakeholders. Several concerns were identified, stemming from the unique characteristics of AI markets, which are marked by higher marginal costs, frequent partnerships, and applicability across diverse sectors. AI development largely depends on crucial resources such as computing systems, extensive data centres, and AI-specific chips. These resources are typically managed and supplied by major technology companies, which significantly influence the flow of AI development.
The dominance of large firms in controlling data creates significant barriers to entry for smaller firms that lack the resources to access similar inputs. Stakeholders have also raised concerns over new potentially anti-competitive dynamics, such as algorithmic pricing and the amplification of deceptive marketing practices. AI-empowered deceptive practices identified by the Competition Bureau include generating fake reviews, endorsements, impersonations, tailored phishing campaigns, and the use of generative AI and deepfake tools, all of which have made it increasingly difficult for consumers to distinguish between real and fake content.
Bill C-26, also known as the Act Respecting Cyber Security, aimed to introduce the Critical Cyber Systems Protection Act (CCSPA). The CCSPA was designed to safeguard the critical cyber systems that support Canada’s essential infrastructure in sectors such as finance, telecommunications, energy, and transportation. It was intended to enhance Canada’s ability to defend against cyber threats, including those posed by evolving technologies like AI. Under the CCSPA, designated operators would be required to establish a Cyber Security Program (CSP), mitigate risks from supply chains and third-party services or products, report cyber security incidents, and implement Cyber Security Directives (CSDs). These requirements were meant to create a continuous improvement cycle in cyber security, enabling operators to better prevent, detect, respond to, and recover from cyber threats and incidents, including those involving AI. However, Bill C-26 was not passed as it died on the order paper when Parliament was prorogued on 6 January 2025. A similar bill might be introduced in a future parliamentary session.
The Canadian Centre for Cyber Security has issued guidance on Artificial Intelligence (ITSAP.00.040) and Generative Artificial Intelligence (ITSAP.00.041), outlining the risks they pose and the security measures that can be taken to mitigate these risks.
Canadian ESG regulations are increasingly aligning with international standards. In Canada, large financial institutions are required to adhere to mandatory ESG reporting requirements, while other companies have the option to engage in voluntary reporting. Emerging technologies, such as AI and machine learning, are being utilised to automate the collection and analysis of ESG data, thereby enhancing accuracy and efficiency. However, AI applications consume substantial amounts of electricity, and cooling AI servers to manage the heat generated by data centres necessitates significant water consumption, raising environmental concerns.
In December 2024, Alberta’s Technology Minister announced a plan to attract CAD100 billion in artificial intelligence data centre investment over the next five years.
Organisations are integrating AI into their internal operations and, in some cases, into their product and service offerings. Key issues that organisations should keep in mind relate to: AI systems development and training; data privacy and security risks; intellectual property ownership; over-reliance or misuse by employees; and risks of inherent bias and discrimination. To address these risks, organisations should consider adopting best practices in each of these areas.
In 2025, the Canadian landscape of AI continues to evolve, driven by regulatory developments across various levels of government, administrative bodies, and sector-specific industries. Nationwide, public and private sectors are acknowledging the need for strong operational frameworks to manage AI technologies, given the indefinite delay of Bill C-27 and the lack of proposed federal AI legislation. The government of Canada is providing more guidance on the implementation of AI, publishing best practices for integrating AI into federal public services, and conducting consultations with the public. As AI becomes more prevalent in the day-to-day operations of government services, other levels of government are expected to follow suit. Increased awareness and enforcement actions by agencies such as the Office of the Privacy Commissioner (OPC) and the Competition Bureau are reinforcing compliance and protecting consumers from potential risks relating to the use of AI.
AI Legislative Vacuum
Canada has been proactive in addressing AI-specific legal issues by conducting studies and providing guidance to government departments and industry. However, while there have been progressive proposals for legislative change, actual legislative action has been cautious and subject to political delays. Notably, Bill C-27, which contained the proposed Artificial Intelligence and Data Act (AIDA), has been delayed indefinitely, creating uncertainty about when similar legislation might be enacted. In the meantime, public bodies have filled the regulatory vacuum with their own guidance documents. In 2025, Innovation, Science and Economic Development Canada (ISED) published a supplementary guide to complement its previously released Voluntary Code of Conduct for Advanced Generative Artificial Intelligence Systems. This guide elaborates on the Voluntary Code, describing in much greater detail best-practice measures for managers of AI systems. It is anticipated that the next government will reintroduce an AI bill similar to the previously proposed Bill C-27, but such a bill is likely to prompt more extensive public consultations.
In late 2024, the Canadian Securities Administrators issued a notice on the increasing use of AI in capital markets. The notice underscored that securities laws apply to activities regardless of the technology used. It highlighted the need for strong governance and risk management, accountability for AI decisions, transparency in AI usage, and the consideration of new conflicts of interest.
In 2025, the Canadian government released a report on a copyright consultation, which outlined the need for copyright reform as it pertains to generative AI. No guidance documents have been issued to date.
AI Trends in Enforcement Actions
In recent years, the Office of the Privacy Commissioner of Canada (OPC) has recognised possible privacy concerns related to generative artificial intelligence. For example, in February 2025, the OPC launched an investigation into “X”, the social media platform, following a complaint. The investigation centres on the platform’s collection, use, and disclosure of Canadians’ personal information to train artificial intelligence models, and on whether those practices comply with the Personal Information Protection and Electronic Documents Act (PIPEDA). The OPC has also previously launched an investigation into OpenAI, focusing on whether it collected, used, or disclosed Canadians’ personal information as part of its training process in a manner that breached its obligations under PIPEDA.
Recent results of the Competition Bureau’s public consultation indicate an active awareness of potentially AI-empowered deceptive marketing practices and anti-competitive behaviour. Stakeholders have highlighted that AI development relies on key resources like computing systems and data centres, which are mainly controlled by major technology companies, creating barriers to entry for smaller firms. Stakeholders have also identified specific anti-competitive practices and AI-driven deceptive tactics that make it increasingly difficult for consumers to identify genuine content. The Competition Bureau is therefore likely to pursue AI-related deceptive marketing practices and anti-competitive behaviour through increased enforcement action going forward.
Starting in 2024 and continuing into 2025 and beyond, there has been an uptick in generative AI litigation. Several new AI-related cases initiated in 2024 are still making their way through the courts in 2025. Major Canadian media outlets have filed a lawsuit against OpenAI, alleging that it used their copyrighted content to train its AI models. Similarly, the Canadian Legal Information Institute (CanLII) has sued Caseway, an AI legal research firm, for allegedly engaging in large-scale data extraction from CanLII’s database.
AI Trends in the Workplace
The rapid development and widespread adoption of AI have raised concerns about its entry into workplaces through personal devices and subscription services rather than centralised IT initiatives. While AI can enhance productivity and personalised efficiencies, the rise of Bring Your Own AI (BYOAI) introduces significant risks and compliance issues for organisations. Decentralised control over personal AI tools can expose sensitive data to external threats and lead to violations of regulations like Canada’s PIPEDA, for which the organisation may ultimately be liable. Organisations should monitor AI developments, adhere to risk-mitigation guidelines, and participate in government and industry consultations to help shape AI legislation and policies.
Furthermore, the use of AI by employers may be regulated depending on how the AI is used. For example, starting in 2026, Ontario’s Working for Workers Four Act, 2024 will impose specific requirements on employers using AI in the hiring process.
AI Trends in the Media
The increase in AI use within media and advertising has introduced a range of new challenges. Brands now have the ability to employ or partner with AI-based influencers and ambassadors. Compared with traditional human influencers, AI-based influencers can be more economical and offer features that human influencers cannot (eg, built-in data analytics). Importantly, companies and brands should keep in mind potential licence or rights issues when using AI-based influencers. A rising issue in generative AI specifically is the surge of deepfakes in media. Deepfakes can fabricate images, audio, and video of individuals such as celebrities, and deepfakes that are unwittingly used in marketing campaigns could give rise to issues concerning scams, defamation, and libel.
181 Bay Street
Suite 2100
Toronto
Ontario M5J 2T3
Canada
+1 416 863 1221
+1 416 863 6275
www.bakermckenzie.com/en/