Currently, no comprehensive Artificial Intelligence (AI)-specific law has been enacted in Canada. However, there are Canadian federal and provincial legal frameworks that apply to the different uses of AI. These include laws related to consumer protection, criminal conduct, human rights, privacy, and tort:
In Canada, the key industry applications of AI and machine learning are found in the financial services, healthcare, automotive, and advertising and marketing sectors:
The Federal government has adopted an ambitious investment strategy to support and grow the Canadian AI sector. In 2023, the government allotted significant funding to Canadian startups in the AI space. Moreover, as part of the 2024 Federal budget, the Government of Canada has set aside CAD2.4 billion in measures to promote and accelerate Canadian private-sector AI businesses. The package includes:
Canada has been at the forefront of examining AI-specific legal issues, conducting extensive studies with various stakeholders, and issuing guidance to government departments and industry. In terms of AI-specific legislation and AI-related updates to existing privacy and intellectual property regimes, Canada has been progressive in proposing legislative updates and conducting policy studies; however, legislative action on these initiatives has been much more cautious and conservative (eg, the proposed Artificial Intelligence and Data Act (AIDA) remains under parliamentary review). In comparison, financial industry regulators, professional associations, and federal/provincial governments in Canada have made more progress in issuing AI-specific industry and public sector guidance and directions.
Currently, no AI-specific law is in force in Canada. The proposed federal private-sector privacy law, Bill C-27 (Digital Charter Implementation Act, 2022), introduces new AI-specific legislation, the Artificial Intelligence and Data Act (AIDA), which aims to ensure the responsible development of AI in Canada. AIDA (if passed) is anticipated to enter into force no earlier than 2026.
AI systems often involve the collection, use and disclosure of personal information. Businesses involved in the development, deployment and use of AI should ensure compliance with Canadian federal and provincial privacy laws, consumer protection laws, human rights laws, criminal law, and industry-specific laws (where applicable), including:
In September 2023, the federal government introduced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. The Code is a voluntary regime wherein signatories commit to adopting identified measures aimed at reaching desirable outcomes in the development, management, and use of generative AI systems.
The federal government has also issued a Guide on the Use of Generative AI, which assists federal institutions in assessing and mitigating ethical, legal, and other risks before adopting generative AI tools.
This is not applicable in Canada.
This is not applicable in Canada.
This is not applicable in Canada.
In January 2024, the federal government concluded a public consultation on future amendments to the Copyright Act, considering the impacts of recent developments in AI, namely the introduction of robust generative AI. The key issues addressed in the consultation process are outlined in the consultation paper, Consultation on Copyright in the Age of Generative Artificial Intelligence, and include data mining, authorship and ownership of AI-generated works, and infringement and liability.
In June 2022, “Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts”, also known as the Digital Charter Implementation Act, 2022, was introduced. Bill C-27 is designed to overhaul the federal private-sector privacy legislation, PIPEDA, and modernise the framework for the protection of personal information in the private sector. Bill C-27 is undergoing legislative review in Parliament and, if passed, would introduce the following legislative updates:
To date, only a few judicial decisions in Canada have substantively addressed AI issues, and only one decision has dealt with generative AI and intellectual property rights. There have yet to be any reported decisions from the Copyright Board of Canada or the Patent Appeal Board on granting copyright or patents for AI-generated works, though more are expected given the growing number of filings related to AI-generated works.
Stross v Trend Hunter, 2020 FC 201, is the sole decision dealing with generative AI and intellectual property rights. In this decision, the Federal Court of Canada found that the defendant could not rely on the fair dealing defence for its use of the plaintiff’s photographs in an article generated with the assistance of AI. The plaintiff was a professional photographer who photographed housing projects in the US. The defendant reproduced six of the plaintiff’s photographs in an article on its website; the article was created using AI and personnel to process data generated by the website, and was then used to prepare consumer trend reports for clients. The Court found the defendant liable for copyright infringement because its use of the photographs did not constitute fair dealing. The Court did find that the defendant’s use of AI satisfied the first part of the fair dealing test because it met the definition of “research” under Section 29 of the Copyright Act: the use was described as a computerised form of market research that measured consumer interaction and preferences for the purposes of generating data for clients. However, the second part of the fair dealing test was not satisfied because the defendant’s ultimate goal was commercial in nature, for the benefit of the defendant and its clients, with no benefit to the plaintiff and no broader public interest purpose.
In Haghshenas v Canada (Citizenship and Immigration), 2023 FC 464, the Federal Court of Canada dismissed an applicant’s request for judicial review of an immigration officer’s refusal of his work permit application. The Court rejected the applicant’s argument that the officer’s decision was not procedurally fair because it was reached through the assistance of artificial intelligence. The Court found that the use of AI was not relevant to the duty of procedural fairness because an officer was involved in making the decision in question, and because judicial review addresses the procedural fairness and/or reasonableness of a decision.
In James v Amazon.com.ca, Inc., 2023 FC 166, the Federal Court found that it was not within the Court’s jurisdiction to rule that the defendant’s AI-based and automated decision-making (ADM) data request process did not comply with the Personal Information Protection and Electronic Documents Act (PIPEDA). In this case, the defendant had used an automated decision-making process to deny the applicant access to personal information, which the applicant sought to argue was a violation of PIPEDA. The Court found that the use of this AI technology fell outside the scope of section 14 of PIPEDA: the matter was not raised in the complaint, was not addressed by the Privacy Commissioner, and there was no basis in the record to entertain an argument that AI explained why access was denied.
In Moffatt v Air Canada, 2024 BCCRT 149, a BC small claims tribunal found that companies which deploy AI-enabled chatbots can be held liable for negligent misrepresentations the chatbot provides to consumers on their websites. In this case, the plaintiff used a chatbot on an airline's website to search for flights following the death of a family member. The chatbot indicated that the plaintiff could apply a bereavement fare retroactively; however, the plaintiff later learned from Air Canada that retroactive applications are not permitted. In a claim for a partial refund, the plaintiff argued that he relied on the chatbot's advice. The airline claimed the plaintiff did not follow the correct procedure and, in any case, that Air Canada cannot be held liable for information provided by its chatbot – implying, in the opinion of the tribunal, that the chatbot is a separate legal entity. The tribunal rejected the airline's arguments and found it responsible for the negligent misrepresentations on its website. Representations made through chatbots were therefore held to the same standard as any other information statically presented on the website.
The definition of AI has not been directly addressed by the Canadian courts. However, the courts have commented on the types of AI technology that exist.
For instance, in Haghshenas v Canada (Citizenship and Immigration), 2023 FC 464, the Federal Court described AI as a form of machine learning. In Drummond v The Cadillac Fairview Corp. Ltd., 2018 ONSC 5350, the Ontario Superior Court commented on the use of AI in the context of computer-assisted legal research, noting that “computer-assisted legal research is a necessity for the contemporary practice of law and computer assisted legal research is here to stay with further advances in artificial intelligence to be anticipated and to be encouraged.” (paragraph 10) In Moffatt v Air Canada (discussed above), the British Columbia Civil Resolution Tribunal, which presides over small claims, described chatbots as "an automated system that provides information to a person using a website in response to that person's prompts and input" (paragraph 14).
In Canada, there is no overarching AI-specific law. For this reason, various government departments and regulatory agencies bear the responsibility for overseeing and administering laws specific to the different uses of AI as well as developing AI-specific guidance.
In 2019, the federal government appointed an Advisory Council on AI, which focuses on examining how to advance AI in Canada in an open, transparent, and human rights-centric manner. In particular, the Advisory Council on AI has a working group on extracting commercial value from Canadian-owned AI and data analytics.
The Office of the Privacy Commissioner of Canada investigates complaints, conducts audits and pursues court action under the federal public sector and private sector privacy laws, including violations relating to the collection, use and transfer of personal information in AI systems. The provincial privacy commissioners in Alberta, British Columbia, Quebec, and other provinces with privacy laws also play a similar investigation and enforcement role with regard to the use of personal information in AI systems within the province. Further to this, if the proposed AIDA passes, the Minister of Innovation, Science, and Industry (“Minister”) will become responsible for the administration and enforcement of all non-prosecutable offences under AIDA. There would also be a new statutory role for an AI and Data Commissioner, who would support the Minister in carrying out these responsibilities.
Federal and provincial human rights commissions are also engaged in studies to understand the implications of AI on discrimination and other human rights issues, including data discrimination, racial profiling, and failure to ensure community participation and human oversight over AI systems.
Industry-focused regulators are also making progress in Canada to address the impacts of AI within their regulatory authority. Health Canada issued guiding principles for the development of medical devices that use machine learning (a form of AI). The Office of the Superintendent of Financial Institutions is also updating its model risk guidelines to account for the use of AI and digital technologies and conducting AI-specific studies to establish safeguards around the use of AI in financial services. Canadian federal and provincial securities regulators are increasingly using AI to monitor customer identification and transactions to detect financial crimes, insider trading and market manipulation.
The federal government is increasingly using AI to make and support its administrative decisions to improve service delivery. In its Directive on Automated Decision-Making, artificial intelligence is defined as “information technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours or solving problems.”
The proposed Artificial Intelligence and Data Act focuses on AI at the systems level, defining an artificial intelligence system as a “technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions”.
In Canada, the federal government has issued and made available various guiding principles, directives, assessment tools and lists of qualified AI suppliers for government agencies to support responsible use of AI (including generative AI). There is a focus on addressing concerns that the use of automated systems may result in unfair, biased, and discriminatory decisions.
In the lead-up to the introduction of the proposed Artificial Intelligence and Data Act, the Office of the Privacy Commissioner of Canada (OPC) released recommendations that a regulatory framework for AI in Canada must be technology-neutral and include the following elements:
The OPC has also issued principles for responsible, trustworthy, and privacy-protective generative AI technologies for developers, providers, and organisations using generative AI.
On 4 April 2023, the Office of the Privacy Commissioner of Canada (OPC) launched an investigation into OpenAI, the company behind the artificial intelligence-powered chatbot ChatGPT. The investigation was launched in response to a complaint alleging that the collection, use and disclosure of personal information through ChatGPT occurs without consent.
In February 2021, the federal and provincial privacy commissioners (Alberta, British Columbia, and Quebec) (“Offices”) launched a joint investigation to examine whether Clearview AI, Inc.’s (“Clearview”) collection, use and disclosure of personal information by means of its facial recognition tool complied with federal and provincial privacy laws applicable to the private sector. The Offices found that Clearview engaged in the collection, use and disclosure of personal information through the development and provision of its facial recognition application, without the requisite consent and for a purpose that a reasonable person would find to be inappropriate. The Offices recommended that Clearview:
There are currently no AI-specific standard-setting bodies in Canada; however, in 2017 the Canadian Institute for Advanced Research released the Pan-Canadian Artificial Intelligence Strategy (PCAIS), which lays out Canada’s three-pillared strategy for becoming a world leader in AI.
As part of the PCAIS, the Government of Canada has pledged CAD8.6 million in funding from 2021 to 2026 for the Standards Council of Canada to develop and/or adopt standards related to artificial intelligence. In March 2023, the Standards Council of Canada expanded the Canadian Data Governance Standardization Collaborative to address national and international issues related to both AI and data governance through a new AI and Data Governance (AIDG) Standardization Collaborative, which will develop standardisation strategies in this area.
In 2017, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created a joint technical committee on AI: ISO/IEC JTC 1/SC 42 (“Joint Committee on AI”) that aims to provide guidance and develop a standardisation programme on Artificial Intelligence.
So far, the Joint Committee on AI has published 17 standards documents, with 27 more under development. While most of these documents provide high-level information on AI as opposed to specific guidelines, some of the concrete measures contemplated in the published and in-development standards include risk management tools such as AI impact assessments. Of note, the proposed Artificial Intelligence and Data Act would also introduce impact assessment requirements if it becomes law.
The extent of regulation associated with the government’s use of AI varies at the national and provincial/local level in Canada.
At the federal level, the Canadian government’s approach to regulating AI and algorithms is anchored in the Directive on Automated Decision-Making (DADM), which is supported by the Algorithmic Impact Assessment (AIA) tool. The DADM is the first national policy focused on algorithmic and automated decision-making in public administration in Canada. The DADM applies to any system, tool, or statistical model used to recommend or make an administrative decision about a client. Under the DADM, the Assistant Deputy Minister, or any other person named by the Deputy Head, is responsible for:
The AIA supports the DADM as a risk assessment tool by determining the acceptability of AI solutions from an ethical and human perspective. It contains questions that assess areas of risk related to the project, system, algorithm, decision, impact, and data used.
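For illustration only, the following minimal sketch shows how a questionnaire-based tool of this kind can translate answers into an impact level; the questions, weights, and level thresholds below are hypothetical and are not those of the actual AIA.

```python
# Hypothetical sketch of a questionnaire-based impact scoring tool,
# loosely inspired by the AIA; the questions, weights and thresholds
# are invented for illustration and are not the actual tool's.
answers = {
    "decision_is_fully_automated": True,
    "affects_vulnerable_groups": False,
    "uses_personal_information": True,
    "decision_is_reversible": True,
}

weights = {
    "decision_is_fully_automated": 3,
    "affects_vulnerable_groups": 4,
    "uses_personal_information": 2,
    "decision_is_reversible": -2,  # reversibility mitigates impact
}

# Sum the weights of every question answered "yes".
score = sum(weights[q] for q, answer in answers.items() if answer)

# Map the raw score to one of four impact levels (thresholds illustrative).
if score <= 2:
    level = "Level I (little to no impact)"
elif score <= 5:
    level = "Level II (moderate impact)"
elif score <= 8:
    level = "Level III (high impact)"
else:
    level = "Level IV (very high impact)"

print(f"Raw score: {score} -> {level}")
```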
The federal government has also released its Guide on the Use of Generative AI, which provides principled guidance to federal institutions on their use of generative AI tools, including best practices in respect of privacy, compliance, and record-keeping.
The provincial governments have not been as proactive as the federal government when it comes to AI and automated decision-making, and there are no provincial equivalents to the DADM. However, some provinces, like Ontario and Quebec, have started considering ways of regulating the use of automated systems in the public sector.
The Ontario government maintains a Digital and Data Strategy to regulate AI and algorithms in public decision-making. Ontario is in the process of developing its Trustworthy Artificial Intelligence framework, which is functionally similar to the federal DADM. The framework’s purpose is to ensure responsible AI use that minimises harm and maximises benefits for Ontarians.
The Government of Quebec's Law 25, “An Act to modernise legislative provisions as regards the protection of personal information”, covers automated decision systems’ use of personal information.
In Haghshenas v Canada (Citizenship and Immigration), 2023 FC 464, the Federal Court of Canada dismissed an applicant’s request for judicial review of a federal immigration officer’s refusal of his work permit application. The Court rejected the applicant’s argument that the officer’s decision was not procedurally fair because it was reached through the assistance of AI, finding that this consideration was not relevant to the duty of procedural fairness. The use of AI was irrelevant because an officer had made the decision in question, and judicial review is meant to deal with the procedural fairness and/or reasonableness of a decision.
In July 2017, the Canadian government released the CDS/DM “Joint Directive to Develop and Operationalize a Defence Program Analytics Capability”. The directive seeks to create analytics capability, drive digital transformation, and establish data management as a core capability. The directive also created new positions, such as Chief Data Officer (CDO) for the Department of National Defence and Canadian Armed Forces (DND/CAF) and, in July 2018, the Assistant Deputy Minister (Data, Innovation, Analytics) (ADM(DIA)). The scope of the DND/CAF strategy includes all data held in a repository in any format and at any point in the data lifecycle, which includes data that is created, collected, and/or used in military operations and exercises as well as in corporate administrative processes.
Defence Research and Development Canada and the Centre for International Governance Innovation (CIGI) co-organised a nationwide workshop series in autumn 2021 and winter 2022, which examined AI from the perspective of defence and security, specifically in the context of Canadian national security. The workshops focused on AI and semi-autonomous systems, AI and cybersecurity, and enabling pan-domain command and control (C2). Participants included key stakeholders from DND/CAF, the federal government and leading institutions supporting research on AI. The workshops examined issues such as data quality assessment, data format, data sharing, bias mitigation, human-machine teaming, and the ethics of autonomous systems, and identified the need to upgrade government and military platforms around federated data networks, as well as the need for stronger collaboration between government, industry, and higher education to scale Canada’s digital infrastructure.
The recently enacted National Security Review of Investments Modernization Act amends the foreign investment review provisions under the Investment Canada Act. In particular, the Guidelines on the National Security Review of Investments – Investment Canada Act recognise AI as a sensitive area of technology and raise the level of scrutiny for foreign investments in the Canadian AI sector (eg, where the investor is affiliated with an adverse or rival government).
Generative AI, or artificial intelligence tools that are capable of creating new data and content that did not previously exist, includes chatbots such as ChatGPT and image-generation tools such as DALL-E 2. Issues created by such generative AI tools span multiple industry sectors and include intellectual property issues related to ownership, authorship, and originality; litigation and liability issues; and privacy law issues.
On the intellectual property front, the government of Canada, recognising that the issues posed by generative AI do not fit neatly into the current legislative and jurisprudential tools, engaged in a Consultation on Copyright in the Age of Generative Artificial Intelligence. This consultation examined areas related to text and data mining, authorship and ownership of works generated by AI, and infringement and liability regarding AI.
The Office of the Privacy Commissioner of Canada (OPC) has also acknowledged the potential privacy issues associated with generative AI. The OPC launched an investigation on 4 April 2023 into whether personal information is improperly collected, used, or disclosed as part of OpenAI’s training process. Moreover, in December 2023, the OPC along with provincial privacy regulators published the "Principles for responsible, trustworthy and privacy-protective generative AI technologies" to identify key considerations for the application of privacy principles to the development, management, and use of AI.
Intellectual property rights inure to the author, owner, or inventor of a work or invention. Generative AI, which can produce complex creative works and inventions, challenges the fundamentals of intellectual property. Canadian patent legislation and jurisprudence have generally been clear that inventors are humans, thereby blunting any debate, for the time being, as to whether a generative AI model can be an inventor. Canadian copyright law, however, is much less certain. With a generative model creating fully realised creative works based solely on user inputs – which can be very rudimentary – the question arises as to whether the user is exhibiting sufficient skill and judgement in the expression of the idea. Furthermore, the process by which the generative AI model created the output based on the user input is often shrouded within a “black box”, whereby even the AI model programmer cannot identify exactly how the final expression was created. Drawing authorship into question creates a cascading effect whereby key determinants for infringement, such as access to the original copyrighted work, become harder to establish, and liability, if infringement is found, becomes harder to pin down.
The government of Canada has acknowledged in its Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (the “AI Consultation”) that the current Copyright Act is ill-equipped to address the novel questions posed by generative AI. Moreover, the government of Canada also concluded its Consultation on Copyright in the Age of Generative Artificial Intelligence, which recognises the profound impact generative AI has had on creatives and seeks input from stakeholders to reconcile this impact with the potential for innovation and growth.
Ownership of AI
With the rise of AI, and specifically generative AI, one of the main discussions surrounding intellectual property (IP) is whether AI can own IP, or whether an AI system can hold IP rights in a work. Currently, AI-created IP is being tested through the Canadian Intellectual Property Office. Both copyright and patents require the owners, authors, or inventors to be identified as part of the application and registration process. With AI, it is unclear who the owner, author, or inventor of the work may be. For example, in patent law, the courts have generally held that an inventor must be a human; however, patent applications are currently being prosecuted in Canada in which an AI system is named as the inventor. Similarly, in copyright, the term “author” is not defined in the Copyright Act, and it is unclear whether an AI system can be an author. These applications are being closely watched to determine the status of AI ownership in Canada.
AI training and IP
AI systems are trained on massive datasets that are often scraped from the internet. Depending on the AI model, these can include texts and images that may be subject to copyright or other intellectual property protection. Since protected data may be used for training, there is a risk that AI systems may infringe intellectual property in producing an output. Additionally, the training process itself may have infringed intellectual property rights if the data was not licensed for AI training.
Generative AI
Generative AI, or artificial intelligence models that can create new data and content that did not previously exist, has forced both legislators and regulatory bodies across several fields to reckon with the limitations of their current legal instruments. In particular, generative AI creates novel issues for intellectual property law in relation to authorship, infringement, liability, and data privacy law in relation to the collection and use of personal information for training purposes.
Intellectual Property Concerns
Intellectual property rights inure to the author, owner, or inventor of a work or invention. Generative AI, which can produce complex creative works and inventions, complicates the fundamentals of intellectual property. Canadian patent legislation and jurisprudence have been quite clear that inventors are humans, thereby blunting any debate, for the time being, as to whether a generative AI model such as ChatGPT can be an inventor. Canadian copyright law, however, is much less certain. With a generative model creating fully realised creative works based solely on user inputs – which can be very rudimentary – the question arises as to whether the user is exhibiting sufficient skill and judgement in the expression of the idea. Furthermore, the process by which the generative AI model created the output based on the user input is shrouded within a “black box”, whereby even the AI model programmer cannot identify exactly how the final expression was created. Drawing authorship into question creates a cascading effect whereby key determinants for infringement, such as access to the original copyrighted work, become harder to establish, and liability, if infringement is found, becomes harder to pin down.
Under Canadian privacy law, the collection, use and disclosure of personal information of data subjects for commercial activities is limited to that which is necessary for the purposes identified by the organisation. Subject to certain exceptions, it is unlawful to later collect, disclose or use the personal information for an alternative purpose without obtaining additional consent. Moreover, personal information must only be retained as long as necessary to fulfil the defined purpose. As such, use of personal information, either as inputs or training data for AI, will usually need to be supported by appropriate consent from data subjects. The foregoing limitations and consent requirements create parameters on the commercial use of personal information and will be important considerations when evaluating the viability of exposing datasets to AI applications.
Current Uses of AI
As AI continues to develop, there are a growing number of use cases which may aid the practice of law. Areas of law practice in which AI has been increasingly used include the following.
E-discovery
AI-powered e-discovery tools assist with quickly and efficiently reviewing documents in the litigation discovery process. One such technique is predictive coding. Using AI techniques such as deep learning, these tools learn the words and word patterns present in a small set of documents marked as relevant and/or privileged and then apply those patterns to classify a much larger set of documents.
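As a rough illustration of the underlying technique, the minimal sketch below trains a simple text classifier on a small, lawyer-reviewed “seed set” and then scores unreviewed documents; the documents, labels, and model choice are hypothetical, and production e-discovery platforms use considerably more sophisticated models.

```python
# Illustrative sketch of predictive coding for e-discovery, assuming a
# scikit-learn environment. All document texts and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Small "seed set" reviewed by lawyers: 1 = relevant, 0 = not relevant.
seed_docs = [
    "email discussing the merger timeline with outside counsel",
    "lunch menu for the cafeteria",
    "draft purchase agreement circulated to the deal team",
    "IT notice about a scheduled server reboot",
]
seed_labels = [1, 0, 1, 0]

# Vectorise word patterns and fit a classifier on the seed set.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(seed_docs, seed_labels)

# Score the remaining corpus; high-probability documents would be routed
# to human reviewers first, so the tool prioritises rather than replaces review.
corpus = ["board minutes approving the merger", "holiday party invitation"]
for doc, prob in zip(corpus, model.predict_proba(corpus)[:, 1]):
    print(f"{prob:.2f}  {doc}")
```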
Legal research and legal analytics
More recently, AI tools have been introduced that purport to increase productivity and efficiency in legal research through natural language processing and machine learning technologies. For example, some offerings include AI-powered research tools that may provide answers to legal questions asked in plain language, as opposed to more traditional research searches using keywords and Boolean operators.
Legal technology companies are also harnessing the predictive ability of AI to forecast likely outcomes in court decisions. For example, in the tax context, Blue J Legal purports to predict outcomes of decisions with “90% accuracy” by analysing thousands of previous cases as comparators. Similarly, Lex Machina uses natural language processing to review court decisions to draw insights and predict how courts, judges and lawyers will behave, which, in turn, allows lawyers to anticipate the behaviours and outcomes that different legal strategies will produce.
Contractual analysis
AI technologies are being deployed to assist in contract analysis and review. AI can assist in quickly scrutinising contracts, identifying missing clauses, inconsistent terminology, or undefined terms across single or multiple documents. For example, Kira Systems leverages the capability of machine learning to understand broad concepts, rather than a constrained rule-based “if-then” analysis, to identify and extract relevant clauses in its contract analyses.
Patent and trademark searches
AI is being utilised to benefit intellectual property practitioners by assisting in patent and trademark searches. For example, NLPatent uses machine learning and natural language processing to understand patent language, which allows lawyers to search for patent terms and prior art in plain language, instead of relying on keywords. By describing the concept of an invention, prior art will be brought to the fore in AI-assisted patent searches.
In the trademark context, companies such as Haloo utilise AI-powered searches to provide rapid and more expansive mark searches to ensure that there are no existing and conflicting marks that may interfere with registration of trademarks and tradenames.
Emerging Uses of AI
AI, with its natural language processing capabilities, represents a major shift in how quickly and efficiently legal research can be conducted. More recently, the release of general-scope AI technologies, such as ChatGPT, may represent an inflection point in the practice of law. Potential use cases for AI like ChatGPT in the legal field include assisting in legal research, drafting standard legal documents, providing general legal information to the public and assisting in legal analysis.
Rules or Regulations Promulgated or Pending by Law Societies/Courts
Thus far, law societies in Canada have not promulgated rules directly addressing the use of AI in the legal profession. However, the duty of lawyers to be competent may provide provisional guidance in this area until more concrete rules are issued. The commentary to Rule 3.1-2 of the Model Code of Professional Conduct, set out by the Federation of Law Societies, stipulates, “[t]o maintain the required level of competence, a lawyer should develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice and responsibilities. A lawyer should understand the benefits and risks associated with relevant technology, recognising the lawyer’s duty to protect confidential information….” Law societies in many jurisdictions, including Ontario, have amended their commentaries to include similar language.
The Federal Court of Canada has released key guidance on the use of generative AI by legal professionals in legal proceedings. The Federal Court imposes disclosure obligations on parties whenever AI is used to generate content in any document prepared for litigation and submitted to the Court. In emphasising caution regarding the risks of using generative AI (namely, hallucinations), the Court has stressed that all material generated by AI should be subject to human scrutiny. Several provincial and territorial courts, including those in Manitoba, Yukon and Alberta, have issued similar guidelines.
Ethical Considerations
Lawyers in Canada abide by the ethical codes of conduct mandated by provincial and territorial law societies. Generally, all the law societies have similar requirements with respect to ethical codes of conduct. Notably, lawyers are required to be competent and efficient. Competency and efficiency requirements, in a future where AI is commonplace, may mean that lawyers must know how to effectively use these tools to assist clients.
However, the implementation of AI in legal practice still faces unresolved issues relating to client confidentiality. There are ongoing investigations by Canada’s Privacy Commissioner into OpenAI relating to complaints of alleged collection, use and disclosure of personal information without consent. In Canada, lawyers are obliged not to reveal confidential client information. The use of AI models, such as ChatGPT and other large language models (LLMs), increases the risk of inadvertent disclosure of confidential information. Current rules do not address the situation of inadvertent disclosure.
Another professional and ethical challenge that must be considered is the accuracy and reliability of AI tools. With ChatGPT, there have been notable concerns regarding its “hallucinations”; that is, instances where it confidently puts forward factually incorrect statements as if true.
Finally, there remains a tension between access to justice and concerns about the unauthorised practice of law, particularly in the context of public-use AI. AI tools have the potential to make basic legal information more easily accessible and digestible to the public at large. Notably, law societies in Canada do not have authority over the provision of legal information; rather, they regulate the provision of legal advice. The distinction between legal information and legal advice is not clearly demarcated. Public use of chatbots to request legal advice may therefore raise concerns of unauthorised practice of law.
In Canadian law, tort law is the most relevant theory of liability for personal injury or commercial harm arising from AI-enabled technologies where the injured person has no pre-existing legal relationship (ie, by way of contract). Although it is possible for liability to arise through intentional torts or strict liability, negligence law will likely be the most common mechanism for plaintiffs seeking compensation for losses from the defendant’s use of an AI system. The constituent elements of a negligence claim are:
To bring a tort claim, the plaintiff has the burden of establishing that an AI system was defective, that the defect was present at the time the AI system left the defendant’s control, and that the defect caused or contributed to the plaintiff’s injury. A defect related to the manufacturing, design or instruction of an AI-based system could give rise to a tort claim.
Nevertheless, it may be difficult for plaintiffs to identify defendants. AI systems involve a complex array of components and contributors: software programmers, data providers, owners and users of systems, and third parties. Furthermore, anonymous defendants present a concern because identifying the humans behind remotely operated robotic systems may not be possible. Another challenge arises in determining the appropriate jurisdiction or venue for litigation when the many different contributors are located in potentially different legal jurisdictions.
Canada does not presently have a strict liability regime under tort law for manufacturers of defective products. General principles in tort law in Canada are governed by common law rather than by statute, thus there are currently no proposed regulations regarding the imposition and allocation of liability as it relates to AI technologies.
Biased outputs by AI systems may be found where they create an unjustified and adverse differential impact on any of the prohibited grounds for discrimination under the Canadian Human Rights Act, or provincial or territorial human rights or discrimination legislation. For example, if an AI system is used by an employer to triage applications for job openings, the employer must make sure that prospective candidates are not being adversely ranked due to information in their applications about gender, sexual orientation, disability, race or ethnicity, or any other prohibited ground for discrimination under local law.
Biased outputs by AI systems could also be derived indirectly, such as through adverse and systematic differentiations based on variables that serve as proxies for protected grounds like race or gender – for example, making onboarding decisions based on credit score (illustrated in the sketch below).
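The following minimal sketch uses synthetic data to show how a facially neutral variable that happens to be correlated with a protected ground can reproduce a group disparity even though the protected attribute is never consulted; all figures and thresholds are invented for illustration.

```python
# Illustrative sketch of how a facially neutral variable can act as a
# proxy for a protected ground. All data below is synthetic.
import random

random.seed(0)

# Synthetic applicants: in this hypothetical, membership in a protected
# group is correlated with credit score through historical disadvantage.
applicants = []
for _ in range(1000):
    protected = random.random() < 0.5
    score = random.gauss(620 if protected else 700, 50)
    applicants.append((protected, score))

# A "neutral" rule that never looks at the protected attribute...
def onboard(score):
    return score >= 660

# ...still produces a large gap in outcomes between the two groups.
for group in (True, False):
    outcomes = [onboard(s) for p, s in applicants if p == group]
    print(f"protected={group}: approval rate {sum(outcomes)/len(outcomes):.0%}")
```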
Laws in Canada are evolving to regulate biased output risks by AI systems. In June 2022, the Government of Canada introduced Bill C-27, which, if passed, would enact AIDA, among other privacy-related statutes. AIDA would regulate the formation and subsequent utilisation of “high-impact systems” of AI. It would require businesses that design prescribed AI systems to mitigate biased outputs in their designs, document appropriate uses and limitations, and disclose such limitations. Businesses that use these regulated AI systems would be expected to consider the bias-related risks of these systems, monitor their use, and mitigate biased outputs.
The regulation of AI under Canadian privacy law has taken several forms, including, but not limited to, new disclosure obligations when AI is used to make automated decisions about data subjects and regulations around the use of data for the creation of AI algorithms.
Under federal and provincial privacy laws in Canada, legislators have pushed to impose disclosure obligations on businesses in control of personal information where a technology or AI system is used by the business to make automated decisions that impact a data subject (eg, if personal information is collected from job candidates and AI is used to conduct the initial triage of applications, where decisions are made exclusively by AI). Also, given the unique impact that automated decision-making and generative AI may have on vulnerable groups, the OPC has strongly suggested that AI decision-making in relation to vulnerable groups must be subject to additional scrutiny and enhanced monitoring. These disclosure obligations have been introduced in Quebec by Law 25, amending the province’s Act respecting the protection of personal information in the private sector, which requires informing data subjects about use of an AI system.
Canada’s federal private sector privacy law, PIPEDA, could also be effectively superseded by the Consumer Privacy Protection Act (CPPA), which is being considered under Canada’s Bill C-27. If Bill C-27 were to pass, businesses would be required to provide a general account of the business’ use of any automated decision-making system to make predictions, recommendations or decisions about individuals that have a significant impact on them.
Changes to Canada’s privacy regulatory landscape are being considered to regulate the databases that inform AI algorithms. For example, if Bill C-27 were to pass, AIDA would be enacted, which would regulate the design and utilisation of prescribed AI systems. A new criminal offence would be created by the passing of AIDA relating to the knowing use or processing of unlawfully obtained personal information to design, develop, use or make available for use an AI system (eg, knowingly using personal information obtained from a data breach to train an AI system).
Some recent changes have been introduced with respect to the collection, use and disclosure of biometrics-related personal information, including for facial recognition purposes. In the Province of Quebec, Law 25, An Act to modernise legislative provisions as regards the protection of personal information, amended the province’s Act to establish a legal framework for information technology (“Quebec IT Law”). The Quebec IT Law requires businesses that create a database of biometric characteristics and measurements (eg, a database of faces for facial identification purposes) to disclose the database to Quebec’s privacy regulator, the Commission d'accès à l'information, promptly and no later than 60 days after it is brought into service. The disclosure obligation requires businesses to complete and submit a prescribed form to the regulator, describing the biometrics database, how and why it is being used, and any potential risks associated with its use and subsequent maintenance.
Biometric personal information has also been expressly defined as “sensitive” by Quebec’s Law 25. As such, the collection, use and disclosure of biometric personal information in Quebec requires express consent on the part of the respective data subject.
The collection, use and disclosure of biometric personal information without express consent has also been the topic of a joint investigation by the Office of the Privacy Commissioner of Canada and provincial privacy regulators; namely, the joint investigation of Clearview AI. Clearview AI’s facial recognition technology was found to scrape facial images and associated data from publicly accessible online sources (eg, public social media accounts) and to store that information in a database. While the information was scraped from publicly accessible social media accounts, Canadian privacy regulators found that the purposes for which Clearview AI used the facial images and associated data were unrelated to the purposes for which the images were originally shared on social media sites, thereby requiring fresh and express consent for the new uses and purposes.
Automated decision-making technologies are being used across sectors and involve the use of an AI algorithm to draw conclusions based on data from its database and parameters set by a business. Examples of automated decision-making technologies include AI screening processes that determine whether applications for loans made online should be granted, and aptitude tests for recruitment, which use pre-programmed algorithms and criteria to triage job applications and reject applicants who do not meet certain criteria (see the illustrative sketch below).
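By way of illustration only, the sketch below shows a criteria-based triage of loan applications of the kind described above, with a “review” route indicating human involvement; the fields, thresholds, and routing rules are hypothetical.

```python
# Minimal sketch of a criteria-based triage of loan applications,
# illustrating the kind of automated decision-making described above.
# All fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    credit_score: int
    annual_income: float
    requested_amount: float

def triage(app: Application) -> str:
    """Return an automated recommendation; 'review' routes to a human."""
    if app.credit_score < 600:
        return "decline"
    if app.requested_amount > 0.5 * app.annual_income:
        return "review"  # borderline cases get human involvement
    return "approve"

apps = [
    Application("A-1", 720, 90_000, 20_000),
    Application("A-2", 580, 60_000, 10_000),
    Application("A-3", 650, 50_000, 40_000),
]
for app in apps:
    print(app.applicant_id, triage(app))
```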
Use of automated decision-making technologies is regulated by federal and provincial privacy laws in Canada, mainly by imposing a disclosure obligation on businesses that use such technologies to make decisions that materially impact data subjects based exclusively on the technology, without further human involvement.
There is currently no standalone federal or provincial law or regulation that applies specifically to chatbots or technologies that substitute for services rendered by natural persons. Use of such technologies is subject to the automated decision-making regulations under Canadian privacy law, whereby a business must inform data subjects that automated decision-making technologies are using their personal information to make automated decisions that could have a material impact on the data subjects.
Moreover, in June 2022, the government of Canada tabled Bill C-27, which, if passed, would enact AIDA, among other privacy-related statutes. AIDA would regulate the formation and subsequent utilisation of “high-impact systems” of AI. AIDA would impose requirements on businesses that design prescribed high-impact systems of AI, such as duties to mitigate and correct biased outputs, document the limitations of the AI system, and disclose limitations to users of the high-impact system.
Businesses must continue to comply with their obligations under Canada’s competition laws when an AI system is being used. Canada’s Competition Act, for example, imposes a number of restrictions against misleading consumers (eg, certain uses of scarcity cues and drip pricing practices), which could be triggered by use of AI systems.
For example, booking sites use scarcity cues to inform users that only a certain number of rooms are available for their desired hotel or that a certain number of people are currently looking at the same hotel. The Competition Bureau of Canada published the sixth volume of the Deceptive Marketing Practices Digest, which warns that, while use of scarcity cues can be permissible, they cannot be misleading (such as informing users that ten other users are concurrently viewing an offer when, in fact, they are viewing an offer that relates to a booking during another month or in a different city, thereby creating a misleading impression of scarcity when such scarcity does not exist). Businesses using AI must ensure that its use does not create misleading impressions.
The Canadian Competition Bureau ("Bureau") has launched a major initiative, including a discussion paper, to explore the impacts of AI on competition in Canada. The initiative examines how AI can affect competition across the Bureau's enforcement areas, including mergers and monopolistic practices, cartels, and deceptive marketing practices.
While Canada has not yet enacted AI-specific legislation which regulates private-sector procurement processes for the acquisition or supply of AI goods and services, businesses need to comply with their contractual obligations and requirements under relevant laws. Suppliers should ensure compliance with product liability obligations under applicable sale of goods laws, consumer protection laws, tort laws and human rights laws (including supply chain requirements).
In terms of public sector procurement, the relevant government agencies in Canada have standardised processes for the procurement of AI solutions by establishing a pre-qualified list of suppliers that can provide the federal government departments and agencies across Canada with responsible and effective AI services, solutions, and products.
Under federal and provincial laws in Canada, employers are restricted from taking actions that have or are intended to have an unjustified and adverse differential impact on employees under one or more prohibited grounds for discrimination, whether under the Canadian Human Rights Act or provincial or territorial human rights or discrimination laws. Risks are greater for employers where such decisions are systematic and involve a large number of employees. Therefore, when AI systems are being used by an employer, whether during the onboarding, employment or termination phase of the relationship, the employer has a duty to ensure the AI systems are not discriminating against employees, directly or indirectly (such as by relying on data that serves as a proxy for discrimination).
Use of technologies to make automated decisions about employees is also regulated indirectly by federal and provincial privacy statutes, mainly through a disclosure obligation on businesses that use such technologies to make decisions that materially impact data subjects based exclusively on the technology, without further human involvement (eg, if an AI algorithm rejects job applicants who are not Canadian citizens or permanent residents, without such decisions being reviewed by a human).
In October 2022, the Province of Ontario amended its employment standards legislation, the Employment Standards Act, to require employers with 25 or more employees in Ontario to have a written “electronic monitoring policy” in place to convey all the ways the employer electronically monitors employees. These could include, for example, monitoring attendance in the office, activity on a work computer, emails and other communications, or internet browsing activity. Employers need to share the electronic monitoring policy with existing employees, including, in certain circumstances, when it is materially updated, and need to provide new employees with the policy within a prescribed period of onboarding.
In terms of evaluations, employers must disclose to employees when AI is being used to make automated decisions that can materially impact an employee, such as if AI is used to make evaluations about an employee’s performance without human involvement. Such uses of AI should be conveyed to employees in the context of an employee privacy policy.
Digital platform companies using AI are subject to Canadian federal and provincial private-sector privacy laws for the collection, use and disclosure of the personal information of customers. With regard to e-commerce, digital platform companies are also subject to Canada’s Anti-Spam Legislation (CASL). CASL protects consumers and businesses from the misuse of digital technology, including spam and other electronic threats. Digital platform companies are also subject to human rights and privacy laws in Canada with regard to the handling of employee personal information and any recruitment and hiring practices that use automated decision-making systems.
Digital platform companies operating in Quebec must comply with provincial private-sector privacy law requirements for transparent and accountable automated decision-making (ADM). This includes providing notice of ADM processes and complying with requests to correct personal information used in decisions. If passed, the proposed Artificial Intelligence and Data Act would also apply to and govern the use of AI by digital platform companies.
In Canada, there is no specific AI legislation or regulation in financial services. In the absence of an AI-specific regime, financial institutions developing, deploying, and using AI solutions must comply with all applicable existing laws, including financial services laws, consumer laws and privacy laws.
In Canada, there are different financial regulators, including the Office of the Superintendent of Financial Institutions (OSFI), the Financial Consumer Agency of Canada (FCAC), and the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC), which regulate banks and financial services. Certain financial services are regulated provincially, such as in the areas of insurance and securities. The focus of Canadian banking and financial services regulators has been on establishing regulatory guidelines and oversight over the responsible use of AI in the financial services sector, including measures to mitigate the risk of biases and discriminatory practices when dealing with customers and employees.
Canadian regulatory guidance with regard to the use of technology (including AI) in the provision of financial services includes the following:
In Canada, there is no AI-specific law for the healthcare and medical devices sector. Health Canada is focused on establishing a regulatory framework for the use of machine learning in medical devices. To this end, Health Canada, in collaboration with the US Food and Drug Administration (FDA) and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA), has jointly identified ten guiding principles that can inform the development of Good Machine Learning Practice (GMLP). These guiding principles will help promote safe, effective, and high-quality medical devices that use artificial intelligence and machine learning (AI/ML):
A regulatory framework governing autonomous vehicles in Canada is emerging at both the federal and provincial levels. In general, published guidelines at both levels are informed by the standards set by the Society of Automotive Engineers (SAE) International, which define six levels of driving automation along a spectrum of degrees of human control. The safety standards for autonomous vehicles are regulated federally, while provinces and territories regulate drivers, liability, insurance, and traffic laws within their jurisdictions. Several provinces are currently engaged in pilot programmes testing fully autonomous vehicles on Canadian roads.
As discussed above, all collection, use and disclosure of personal information is subject to applicable federal or provincial privacy laws. Manufacturers of autonomous vehicles must consider the impacts of the use of AI solutions in autonomous vehicles on individual privacy and security.
Product liability can also be imposed on the designers, manufacturers, and retailers of AI products through contractual liability, sale of goods laws, consumer protection laws and tort law.
The ability of AI to act autonomously raises novel legal issues, particularly with respect to how to assign fault when an AI product causes injury or damage. Although there have yet to be any reported Canadian cases, tort law remains the most relevant theory of liability for personal injury or commercial harm arising from AI-enabled technologies where the injured person has no pre-existing legal relationship (ie, by way of contract). Negligence law will likely be the most common mechanism for plaintiffs seeking compensation for losses from the defendant’s use of an AI system. However, barriers may arise in accessing remedies through tort law, including the fact that it may be difficult for plaintiffs to identify defendants, or to establish negligence or causation where the damages concern an emergent and unexpected property of the given AI system. Outside the context of tort law, liability in contract may also present a concern in that parties may use the law of contracts and the contracting process to inappropriately limit or avoid liability.
As litigation involving generative AI continues to increase in other jurisdictions such as the US, it is expected that Canadian courts will adjudicate similar cases in the future, including copyright infringement claims arising from generative AI technology and product liability litigation arising from AI products that cause injury.
Professional services are activities that use human capital for advisory purposes in areas such as legal, consulting, finance, marketing, and information technology. Currently, there is no single overarching AI-specific regulation for professional services in Canada. Professional services providers using AI to deliver services must comply with contractual arrangements and applicable professional standards and codes of conduct.
For example, provincial and territorial law societies have issued guidance and practice notes for licensees on the use of generative AI, with a significant focus on lawyer conduct and maintaining client confidentiality.
To address potential issues with copyright, the Government of Canada issued a Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things. The Government has not yet issued any recommendations based on that consultation.
Canada’s Copyright Act does not provide a definition of “author”, but previous case law related to injunction applications states that copyright can only subsist in works authored by a human being. Recently, however, the Canadian Intellectual Property Office (CIPO) granted a copyright registration for a work generated entirely by AI. Although the registration was granted, it is unclear how it will be enforced. The CIPO states that it does not guarantee that “the legitimacy of ownership or the originality of a work will never be questioned.” While the software supporting AI may be copyrightable, this copyright does not automatically mean that the output of the software or AI is protected by copyright.
Ownership issues arise with patents as well. Under Canada’s Patent Act, an inventor and the party entitled to the benefit of the invention must be listed in the patent application. According to the Federal Court, an inventor must be:
In 2020, a patent application was filed listing an AI system as the inventor. The application is still undergoing examination and has not yet been granted.
Trade secrets can be used to protect the software and algorithms underlying AI technology. Trade secrets are protected in Canada under the common law and the Civil Code of Quebec, and can also be protected by contract. In theory, this means that both the AI technology and the data on which the AI was trained, such as text or images, can be protected as trade secrets. Copyright can, to a certain extent, protect a compilation of data, but there is a question as to whether a person who uses a copyright-protected dataset to train their own AI program thereby infringes that copyright.
AI-generated works of art and works of authorship can include, for example, literary, dramatic, musical, and artistic works, all of which can be the subject of copyright protection in Canada. Before AI-generated works can be protected by copyright, however, they must first overcome two major hurdles:
Formal Requirements
Copyright protects the original expression of ideas that are fixed in a material form. Outputs from generative AI programs are expressions of ideas that are fixed in a material form, but there is some question as to their originality.
Originality in Canada requires that skill and judgement be involved in the expression of the idea. The Supreme Court of Canada in CCH Canadian Ltd. v Law Society of Upper Canada, 2004 SCC 13, defined “skill” as the use of one’s knowledge, developed aptitude, or practised ability, and “judgement” as the use of one’s capacity for discernment or ability to form an opinion or evaluation by comparing different possible options, in producing the work. In addition, the work must involve something more than a purely mechanical exercise.
Canadian courts have not yet had to reckon with whether AI-generated works involve skill and judgement going beyond a purely mechanical exercise, but the answer will likely depend on the facts of each case. Where a very basic prompt is given, such as “draw me a flower” or “write me a poem about flowers,” it will likely be difficult to establish that skill and judgement went into the creation of the resulting work.
Authorship
The author of a copyrighted work is the person who exercised the skill and judgement in its creation. There are therefore three potential candidates for authorship of AI-generated works: the user inputting the prompt, the person who created the AI model, or the AI model itself.
As discussed above, the person who input the prompt may be said to have exercised skill and judgement in creating the work, depending on the complexity of the prompt.
The creator of the AI model exercised skill and judgement in the creation of the model, and no doubt owns copyright in the code associated with the model, but the connection between that skill and judgement and the generated output is likely too tenuous to establish authorship.
Attributing authorship to the AI model itself creates a number of issues, both conceptual and logistical. Conceptually, one might argue that generative AI, regardless of how “human” it may seem in its outputs, is still nothing more than a computer program applying complex rules to inputs in order to produce outputs. If courts or regulators adopt this philosophical view, it would be hard to argue that generative AI’s “if x then y” approach to creating original works could ever amount to more than a purely mechanical exercise.
Logistically, copyright protection in Canada subsists for the life of the author plus 70 years, creating obvious issues for generative AI models that do not “die”. Furthermore, Section 5(1)(a) of the Copyright Act states that an author must be “a citizen or subject of, or a person ordinarily resident in, a treaty country”, which seems to contemplate that the author is a natural person. Finally, Section 14.1(1) of the Copyright Act confers moral rights, or rights to the integrity of the work, that are separate from copyright. Generative AI models, which are (so far) non-sentient, cannot properly exercise moral rights to the integrity of their works.
There are generally five considerations when commercialising, or otherwise incorporating into your business, the outputs of generative AI models such as those offered by OpenAI:
Licensing Considerations
Each generative AI model has different policies related to the use and ownership of the inputs (user prompts), and outputs of the program. Under OpenAI's policy, the users own all the inputs, and OpenAI assigns to the user all its right, title, and interest in and to the output. However, other generative AI programs might retain some interest in the output of the program, so users should carefully review the legal policies associated with the program that they are using.
Inaccuracies
Generative AI programs built on large language models (LLMs), such as ChatGPT, are prone to inaccuracies, or “hallucinations,” whereby the program produces a seemingly correct answer to a question that in fact has no grounding in reality. Inaccurate outputs might lead to a number of legal liabilities, including under defamation law, consumer product liability law, and tort law.
Litigation Risk
Generative AI models are trained on massive data sets scraped from the internet, which often include data points such as images that are subject to intellectual property law protection. There is a risk that, by using these protected data points as inputs for generative AI models, the outputs of those models might infringe upon those protected works. Furthermore, and as discussed above, output inaccuracies can lead to litigation risk.
Privacy Considerations
A large number of the data points fed into the generative AI models as training data are likely considered “personal information” under Canadian privacy law, meaning informed consent is likely necessary before collecting, using, or disclosing the personal information as part of the AI model.
Furthermore, consideration should be given to the user inputs and potential confidentiality breaches that might occur if sensitive information is input into the system.
Bias
Generative AI models, like all AI models, are susceptible to bias stemming from the personal bias of their programmers and any bias baked into their training data.
AI systems and tools can be used within an organisation to support and enhance corporate governance through data-driven decision-making, financial assessments, and market predictions, and to facilitate customer and shareholder engagement with the organisation. However, there are risks associated with using AI in corporate governance, including gender and racial bias, data disruption and poisoning, customer and employee privacy, cybersecurity, reputational harm, and operational issues.
Corporate boards of directors (“boards”) play a vital role in overseeing the use of AI systems and tools in their organisations, including the use of such tools to make strategic corporate decisions. In assisting with corporate decision-making, AI tools may leverage data from within the organisation or access data from external sources (including data purchased by the organisation).
To mitigate the risks associated with AI adoption, boards should ensure that they are aware of, and receive training on, the types of AI systems and tools used within their organisations, the nature of the data used to operate and train such systems, and the associated risks. Boards should also engage appropriate stakeholders from within the organisation (ie, IT, legal, and human resources) in AI-related decision-making processes.
Boards should establish AI-focused specialist committees, conduct routine risk assessments of AI systems, develop AI risk mitigation and management policies and measures, and communicate such policies and measures to senior management and employees. Boards should also have a strategy in place for implementing AI within their organisations and should assign roles and responsibilities to senior management for executing that strategy.
Organisations are integrating AI into their internal operations and in some cases as part of their product and service offerings. In such cases, key issues that organisations should keep in mind relate to: AI systems development and training; data privacy and security risks; intellectual property ownership; over-reliance or misuse by employees; and risks of inherent bias and discrimination. To address these risks, organisations should consider the following best practices:
Overview of AI-related Legislative and Regulatory Developments in Canada
Currently, Canada does not have an overarching AI-specific legal framework. There are various legal frameworks related to consumer protection, privacy, criminal conduct, tort, and human rights, which are applicable to the different uses of AI. Furthermore, there are government and industry-specific guidance documents and tools that seek to govern the development, deployment, and use of AI systems in Canada. In recent years, the federal government has issued and made available guidelines, principles, assessment tools, and directives relating to the responsible use of AI by government institutions. Relevant government authorities have also established a pre-qualified list of suppliers of AI solutions for federal institutions.
To address the lack of comprehensive AI regulation, the federal government introduced legislative proposals to overhaul and modernise the federal private-sector privacy regime in Canada through the proposed Digital Charter Implementation Act, 2022 (Bill C-27), which includes the new Artificial Intelligence and Data Act (AIDA). If passed, AIDA will introduce new requirements for organisations to ensure safety and fairness in the design, development, and deployment of high-impact AI systems. Organisations will also be required to implement new governance mechanisms and policies to consider and address the risks of their AI systems and to provide users with enough information to make informed decisions.
In its current state, AIDA relies on future regulations to flesh out its operative provisions. To provide further clarity on the application of AIDA, the federal government released the AIDA Companion Document, which provides additional detail on the legislative purpose and objectives envisaged by parliament for AIDA. The Companion Document also clarifies that there will be a two-year gap between adoption and enforcement, allowing for rounds of consultation with stakeholders while parliament aligns AIDA with its policy approach via regulations. Given that AIDA, along with the rest of Bill C-27, is still undergoing legislative review, it may be some time before any new AI-related law makes it through the parliamentary process; implementation is likely only in 2026 at the earliest.
Government policy directives
To keep up with the pace of innovation and growth in the AI sector, which has seen an explosion of activity since the introduction of practical and accessible generative AI systems, the federal government has engaged in a series of policy consultations, issued directives to federal institutions, and provided non-binding guidelines to the private sector.
For instance, the federal government has published a Guide on the use of generative AI for federal institutions. The Guide defines generative AI, sets out a risk-based approach to the appropriate selection and use of generative AI tools, and lists best practices for federal institutions to follow when using such tools.
Similarly, the Office of the Privacy Commissioner, along with the provincial privacy regulators, issued guidance in late 2023 regarding the application of existing data privacy regulations to generative AI. In the “Principles for responsible, trustworthy and privacy-protective generative AI technologies”, Canada’s privacy regulators identify key considerations for the application of existing privacy principles to generative AI technologies. As the application of existing privacy principles is very much informed by the context in which data is collected, used, or disclosed, organisations should consult these Principles to better understand how regulators will likely apply data privacy obligations to the use and development of generative AI systems. In addition, Canadian privacy regulators have recognised the privacy impact of the rising demand for the large volumes of data needed to train AI systems such as large language models. In the wake of the investigation into Clearview AI, which was found to have unlawfully scraped facial images publicly available on social media to train its facial recognition software, Canadian privacy regulators also released a joint statement on data scraping and privacy, which had the following key takeaways:
In a similar vein, the federal government also introduced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. The Code is a voluntary regime targeted at private-sector organisations that wish to publicly demonstrate their commitment to a series of measures aimed at mitigating the risks of generative AI.
AI and Competition
The Canadian Competition Bureau (“Bureau”) has launched an initiative to examine the impacts of AI on competition in Canada. To further this initiative, the Bureau published a discussion paper, “Artificial Intelligence and Competition”, for consultation in March 2024. The purpose of the consultation is for the Bureau to assess and prepare for potential competitive harm from AI and to promote competition in AI markets. The discussion paper examines the definition of AI and how the production of AI products and services may involve markets for AI infrastructure, AI development, and AI deployment. It also explores how AI can affect competition in relation to the Bureau’s areas of enforcement, including mergers and monopolistic practices, cartels, and deceptive marketing practices.
Industry policy directives
Healthcare
At the end of 2023, Health Canada issued draft pre-market guidance for machine learning-enabled medical devices. Briefly, the document provides guidance in relation to medical devices that use machine learning, in whole or in part, to achieve their intended medical purpose (as defined in the Food and Drugs Act).
Manufacturers applying for authorisation of a machine learning-enabled medical device (MLMD) will be required to adhere to the guidance in order to understand Health Canada’s expectations for demonstrating the ongoing safety and effectiveness of the MLMD throughout its lifecycle, and to demonstrate that the MLMD will maintain a high level of protection of patient health and safety, balanced against the risks and benefits to the patient.
Financial services
AI tools are only as effective as the data and models they rely on, and the main risks stemming from AI arise from compromises to that data. Without appropriate safeguards, the risks associated with the use of AI in the financial services sector can have profound implications for customer identification, consumer credit services, securities trading, and investment advisory services. The increasing use of AI in the financial services sector, and the systemic risks associated with it, have led Canadian financial regulators to collaborate with global AI research organisations and regulators from other jurisdictions to conduct AI risk studies and develop sector-specific risk management frameworks, tools, and guiding principles.
In April 2023, the Office of the Superintendent of Financial Institutions (OSFI) and the Global Risk Institute (GRI) jointly released a report on the ethical, legal, and financial implications of AI for financial services institutions. The discussions led to the development of the EDGE principles (Explainability, Data, Governance and Ethics):
Further to the April 2023 report, OSFI’s Guideline B-13 – Technology and Cyber Risk Management (B-13) came into effect on 1 January 2024. The guideline outlines the regulator’s expectations for federally regulated financial institutions (FRFIs) in relation to their use of technology and cyber risk management, including sound technology asset management practices and the implementation of a system development life cycle framework for the secure development, acquisition, and maintenance of technology systems. The guideline works in tandem with the regulator’s existing guidelines to impose compliance and monitoring obligations on FRFIs using advanced technologies such as AI.
In 2024, OSFI is further revising Guideline E-23 on Model Risk Management for FRFIs and federally regulated private pension plans (FRPPs), including to address risks amplified by the uptake of AI and machine learning analytics owing to increased data access, digitisation, reduced data and storage costs, and enhanced computing power.
The guideline signals progress by financial regulators in Canada, such as OSFI, towards developing a comprehensive AI regulatory framework that will implement safeguards and mitigate risks associated with the use of AI systems in the financial sector. This aligns with the global approach towards AI regulation, in which regulators are seeking to find a balance between regulation and innovation, establishing comprehensive regulations while ensuring that banks and other financial institutions continue to innovate, transform, and remain competitive.
IP & AI Update
Intellectual property rights inure to the author, owner, or inventor of a work or invention. Generative AI, which can produce complex creative works and inventions, challenges the fundamentals of intellectual property. Canadian patent legislation and jurisprudence have been quite clear that inventors are humans, blunting any debate, for the time being, as to whether a generative AI model such as ChatGPT can be an inventor. Canadian copyright law, however, is much less clear. With a generative model creating fully realised creative works based solely on user inputs – which can be very rudimentary – the question arises as to whether the user is exhibiting a sufficient amount of skill and judgement in the expression of the idea. Furthermore, the process by which the generative AI model creates the output from the user input is often shrouded within a “black box,” such that even the AI model’s programmer cannot identify exactly how the final expression was created. Drawing authorship into question creates a cascading effect whereby key determinants for infringement, such as access to the original copyrighted work, become harder to establish, and liability, if infringement is found, becomes harder to pin down.
The Federal government has continued its efforts to seek stakeholder input on future amendments to address conceptual gaps in Canada’s aging copyright law. Following up on the earlier Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things, which was intended to identify gaps in Canada’s copyright framework with respect to AI and the internet of things and the policy considerations that should be addressed, a second consultation was conducted in response to the impacts of generative AI.
Specifically, in January 2024, the federal government concluded its second consultation addressing gaps in its copyright framework. Through the Consultation on Copyright in the Age of Generative Artificial Intelligence, the government sought input from a large cross-section of stakeholders, including artists and creatives as well as developers and managers of advanced generative AI systems. Briefly, key topics of concern included carving out exceptions for text and data mining, defining authorship and ownership of AI-generated works, and realigning the scope of infringement and liability for the modern AI-enabled economy.
So far, no concrete recommendations have been prepared based on either consultation, but recommendations for legislative updates are anticipated.
Responsible and Ethical Use of AI in Law (Courts)
Presently, there are no rules published by law societies that directly address the use of AI in the legal profession. However, in an AI-driven future, lawyers’ competency and efficiency may necessitate effective use of these tools. Notably, the Model Code of Professional Conduct of the Federation of Law Societies of Canada underscores the key importance of technological competence. Moreover, the twin requirements of competency and efficiency may very well require lawyers in the future to be familiar with, and more likely adept at, navigating new AI-enabled technologies.
On the other hand, the use of generative AI in the production of submissions to Canadian courts is beginning to see regulation, in response to widely publicised irresponsible use of generative AI tools in the profession. For example, the Federal Court of Canada has published rules for the use of generative AI by legal professionals in respect of their submissions to the court during litigation. The court now mandates that any content generated, in whole or in part, with the assistance of artificial intelligence bear a disclosure notifying the court of that fact. The court also emphasises that whenever lawyers use AI in respect of litigation, a “human-in-the-loop” approach should be taken; that is, lawyers should monitor and review all inputs and outputs of any AI tools used. Similarly, courts in Manitoba, Yukon, and Alberta have issued comparable guidelines.
Emerging and Ongoing Issues in AI
AI and data privacy concerns
The following are key areas of privacy developments related to the development, deployment, and use of AI in Canada:
Biased outputs
Bill C-27, if passed, will enact AIDA and regulate AI systems, with the most stringent requirements targeting “high-impact” AI systems. Businesses will be obligated to prevent and correct biased outputs by AI algorithms, that is, outputs that result in unjustified adverse differential impacts based on the prohibited grounds of discrimination under federal or provincial human rights legislation. Biased AI outputs may also trigger discrimination-related claims under employment laws, including in class action proceedings. Businesses must therefore monitor AI systems for bias affecting their workforce, customers, or third parties.
Automated decisions/disclosures
Federal and provincial privacy laws in Canada are incorporating disclosure requirements for businesses that use technology, including AI, to make automated decisions about data subjects. Such obligations require businesses to inform data subjects before such decisions are made (eg, by updating privacy policies to describe the use of personal information for automated decision-making). Privacy regulators may increasingly investigate or take enforcement action against business practices involving these decision-making technologies.
Regulation of biometrics
Biometric personal information is treated as sensitive, requiring express consent from data subjects before it is collected, used, or disclosed for commercial purposes. Moreover, in the Province of Quebec, biometric databases must be formally registered with the province’s privacy regulator through the submission of a prescribed form. Following recent investigations by privacy regulators into businesses that use biometric databases (eg, the processing of facial images for identity verification purposes), there may be increased scrutiny of the collection, use, and disclosure of biometric data, and other Canadian provinces could adopt formal registration obligations similar to Quebec’s.
AI liability
Product liability litigation and AI technology
The ability of AI to act autonomously raises novel legal issues, particularly with respect to how to assign fault when an AI product causes injury or damage. In Canadian law, tort law remains the most relevant theory of liability for personal injury or commercial harm arising from AI-enabled technologies where the injured person has no pre-existing legal relationship with the defendant (ie, by way of contract). Negligence will likely be the most common mechanism for plaintiffs seeking compensation for losses arising from a defendant’s use of an AI system. However, barriers may arise in accessing remedies through tort law, including the difficulty plaintiffs may face in identifying defendants.
Most recently, in a 2024 decision, a British Columbia tribunal ruled that companies deploying AI-enabled chatbots can be held liable for negligent misrepresentations made by the chatbot to consumers on their websites. In that case, the plaintiff used a chatbot on an airline’s website to search for flights after the death of a relative. The chatbot incorrectly advised the plaintiff that a bereavement fare could be applied retroactively; the airline later clarified that retroactive applications were not allowed. The plaintiff sought a partial refund, arguing reliance on the chatbot’s misleading advice, which had included no caution about the risk of false or misleading answers. The airline contended that it was not responsible for information provided by the chatbot, in effect suggesting that the chatbot was a separate entity with its own liability and agency. The tribunal rejected the airline’s position and held it responsible for the chatbot’s negligent misrepresentations: despite the chatbot’s interactive nature, its representations are held to the same standard as other static information on the website.
Outside the context of tort law, liability in contract may also present a concern, in that parties may use the law of contracts and the contracting process to inappropriately limit or avoid liability.
As litigation involving generative AI continues to increase in other jurisdictions such as the United States, Canadian courts are expected to adjudicate similar cases in the future, including copyright infringement claims arising from generative AI technology and product liability claims arising from AI products that cause injury.
181 Bay Street, Suite 2100
Toronto, Ontario M5J 2T3
Canada
+1 416 863 1221
+1 416 863 6275
www.bakermckenzie.com/en/