Healthcare AI 2025 Comparisons

Last Updated August 06, 2025

Law and Practice

Authors



Gowling WLG (Canada) LLP is a law firm that provides legal guidance on governance, operations and strategic issues to clients in the healthcare sector. In addition to representing healthcare providers in medical negligence litigation and other medico-legal matters, the firm also co-ordinates legal counsel nationally representing healthcare providers. The firm advises on stakeholder engagement and supports educational initiatives, legal spending, and database management. Gowling WLG also represents numerous health organisations. Its Medical Defence and Health Law Group defends regulated health professionals in malpractice and regulatory proceedings and operates pro bono hotlines. The firm represents accreditation bodies and provides regulatory and compliance advice to manufacturers and distributors, offering full life-cycle legal services for clients in the life sciences. It also supports healthcare infrastructure projects, not-for-profits, and advises on health privacy compliance and data protection matters.

The adoption of AI in healthcare has accelerated dramatically in recent years, transforming both clinical and operational aspects of patient care. AI systems are currently most commonly used to assist clinical diagnostics, optimise patient treatment plans, and reduce healthcare professionals’ workloads. By expediting processes and enabling earlier disease detection and diagnosis, AI holds significant promise for improving public health outcomes.

Healthcare providers analyse and interpret large amounts of complex data during the diagnosis process, which can lead to cognitive fatigue. AI tools can assist in interpreting data and reaching clinical decisions with greater efficiency and reduced mental strain.

AI is also enhancing patient care by enabling remote health monitoring outside traditional clinical settings. Remote monitoring pairs biosensors with analytics to identify patterns and predict potential health risks earlier. In both inpatient and outpatient contexts, AI systems are used by healthcare providers in determining optimal medications, dosages, and treatment plans.

Current AI applications can also streamline routine tasks for healthcare providers. Some AI-powered notetaking systems automatically generate clinical reports from patient conversations, reducing paperwork and allowing healthcare providers to focus on facilitating meaningful patient interactions. Operational AI is also used to streamline documentation and patient flow, including in hospital emergency departments.

AI systems are used to enhance the speed and quality of patient care while reducing physical and cognitive workloads.

AI systems can assist with clinical decision-making, increasing the efficiency and quality of patient care. Peer-reviewed studies report gains in diagnostic accuracy for selected use cases. AI systems can also assist with administrative tasks, reducing the workloads that contribute to cognitive fatigue. They can also process and compare large amounts of data without being affected by fatigue, emotion or memory.

Despite the benefits of AI systems in healthcare, the novel technology raises some unique challenges. AI systems require large amounts of high-quality data to build their algorithms accurately. When training data is imprecise, unvalidated, unreliable or incorrect, it can compromise the integrity of the outputs the system generates. An AI system is only as good as the data used to build it, and not every clinical specialty has large amounts of high-quality data available. Canadian health data is currently fragmented across jurisdictions, making it difficult to establish a centralised dataset.

Further, algorithms are at risk of bias if they are not trained on data from diverse populations. Data may be influenced by human subjectivity and may replicate inequities arising from discriminatory practices. In addition, the use and storage of large data sets are vulnerable to security threats and confidentiality concerns. From 2015 to 2023, there were at least 14 reported major cyber-attacks on Canadian hospitals, labs and health networks. The use of AI systems in healthcare carries the risk of major data leaks.

There are also concerns about data sovereignty and control over collection, use and interpretation.

Major trends in the Canadian AI healthcare sector include the integration of AI into diagnostic imaging (such as radiology and pathology), the deployment of predictive analytics for early warning and patient risk stratification, and the widespread adoption of AI-powered documentation tools to reduce administrative burden.

Innovation and adoption are being driven by a diverse set of stakeholders. The government of Canada has invested in AI tools aimed at improving the Canadian healthcare system. For example, in June 2025, Canada Health Infoway launched a federally funded programme that provided AI Scribe licences to 10,000 primary care clinicians across Canada. The federal government also provided CAD60 million in the 2021 budget to support the Pan-Canadian Artificial Intelligence Strategy, launched in 2017 to promote collaboration between provincial AI hubs. On 24 September 2025, the federal government announced the creation of a task force on AI that will recommend policies to improve research, talent development, adoption and commercialisation of AI in Canada.

The Royal College of Physicians and Surgeons of Canada, the Canadian Medical Association and Canada’s Drug Agency all acknowledge that AI will be an important aspect of patient care in the future. They have advocated for initiatives such as implementing AI and digital technologies into residency training and healthcare delivery.

Canada has no single statutory definition of healthcare AI. Instead, AI-enabled tools are regulated within existing regimes. Many will be captured as “medical devices”, defined broadly as any instrument, apparatus, software, or material intended for diagnosis, treatment, mitigation, or prevention of disease or abnormal conditions, bringing them under the Food and Drugs Act and Medical Devices Regulations (MDRs). Health Canada’s Software as a Medical Device (SaMD) guidance explains how software fits that framework and how risk class is determined.

Tools that fall outside the medical-device definition (eg, purely administrative or operational aids) are not licensed as devices but remain subject to other applicable laws (privacy, security, public-sector legislation). Canada’s now-lapsed federal AI Bill, the Artificial Intelligence and Data Act (AIDA), would have introduced a cross-sector definition of an “AI system” and a “high-impact system”. Late-stage AIDA amendments proposed a broader, technology-neutral AI definition aligned with international practice, but the Bill died on the Order Paper in January 2025.

From a classification perspective, software with diagnostic or therapeutic purposes (or decision support that informs those uses) is generally SaMD, and is classed I–IV depending on intended use and risk (from lowest to highest perceived risk), with higher classes triggering more evidence and monitoring. Purely administrative software is typically out of scope. Health Canada has provided a guide with examples to support consistent classification.

The core federal instruments for clinical AI remain the Food and Drugs Act and the MDRs, applied to SaMD and AI-enabled devices. Provinces rely on existing frameworks, such as privacy laws and healthcare regulations, to address AI-related issues. Some provinces have taken steps to modernise legislation and specifically contemplate AI. In particular, Ontario’s Strengthening Cyber Security and Building Trust in the Public Sector Act (2024) empowers regulations requiring public-sector entities, including hospitals, to disclose AI use, implement accountability and risk management frameworks, adhere to prescribed technical standards, and in prescribed circumstances, to ensure an individual oversees AI use.

In Quebec, both private- and public-sector privacy statutes and the Act respecting health and social services information require notice of automated decisions, disclosure of the personal data and principal factors relied upon, and a right to human review.

Developers of healthcare AI systems must first determine whether their product is a medical device under the MDRs. Software that performs diagnostic, therapeutic, or decision-support functions is generally classified as SaMD, as further discussed in 2.4 Software as a Medical Device (SaMD).

Licensing requirements vary by device class (I–IV). Evidence submissions must demonstrate safety and effectiveness, supported by recognised international standards (eg, IEC 62304 – software lifecycle). Higher-risk devices also require a quality management system certified to ISO 13485 through the Medical Device Single Audit Program.

Health Canada continues to regulate many clinical AI tools as SaMD under the MDRs. Using the International Medical Device Regulators Forum risk classification, the department requires more rigorous evidence and post-market surveillance for higher-risk classes. In February 2025, Health Canada issued its Pre-market Guidance for Machine-Learning-Enabled Medical Devices (PMGMLMD), which sets out expectations for algorithm change protocols and transparency.

Software that is limited to administrative functions is generally out of scope, and software that supports (but does not replace) clinical judgement may still be SaMD depending on claims and risk.

AI in healthcare engages federal and provincial privacy regimes. Federally, the Personal Information Protection and Electronic Documents Act applies to commercial handling of personal information, while provincially, private sector, public sector and health-sector statutes may also apply depending on the jurisdiction and context. As noted in 2.2 Key Laws and Regulations, Quebec privacy legislation adds notice and explanation rights for automated decisions and access to human review.

There are broad requirements to undertake privacy impact assessments before implementing AI systems in healthcare. For example, in Alberta and Quebec, privacy impact assessments are required under health-sector-specific privacy legislation. Core requirements around consent, data minimisation, and safeguards continue to apply to AI training, fine-tuning and in-use data. Privacy and data governance are discussed further in 6. Data Governance in Healthcare AI.

Health Canada relies on recognised standards to assess software safety and performance. For SaMD and machine learning-enabled medical devices (MLMD), these include:

  • IEC 62304 (software life-cycle);
  • ISO 14971 (risk management); and
  • IEC 62366-1 (usability).

Health Canada has also recommended standards to assist developers in conducting cybersecurity risk management processes in its Pre-market Guidance Document on Cybersecurity. Additionally, the Standards Council of Canada and the Canadian Standards Association recognise further standards to assess health software, including:

  • IEC 82304-1 (health software); and
  • IEC 81001-5-1 (cybersecurity for health software).

Technical and data interoperability work is also underway. In June 2024, the federal government tabled Bill C-72, the Connected Care for Canadians Act, which would have mandated interoperability of health IT and prohibited data-blocking by vendors. Although the Bill died on the Order Paper, it signals federal direction on common standards for exchange. Parallel policy initiatives include the Digital Health Interoperability Task Force for standards-based exchange.

While the delivery of healthcare in Canada falls under provincial/territorial jurisdiction, the federal government is responsible for regulating medical devices, including AI-powered devices (see 2.2 Key Laws and Regulations and 2.4 Software as a Medical Device (SaMD)).

While the federal government has confirmed that it will not be reviving AIDA, in addition to the recently announced task force on AI, it has announced a series of initiatives to support responsible and safe AI adoption, discussed in 5. Ethical and Governance Considerations for Healthcare AI. It has also advised that it plans to focus on updating federal privacy and data protection laws in the near future.

Provincial and territorial health authorities are responsible for the delivery, funding and regulation of healthcare services, including the use of digital health and AI tools in clinical practice.

Many professional regulatory bodies (typically provincial) responsible for the regulation of the conduct and practice of healthcare professionals have issued guidance on the ethical and safe use of AI in clinical care. The guidance consistently urges caution when using AI with three dominant themes:

  • ensuring the work product is accurate;
  • protecting client/patient privacy; and
  • establishing accountability for any technology use by professionals.

Lastly, federal and provincial/territorial privacy commissioners oversee compliance with privacy laws, including data used in AI systems.

AI-based medical devices are classified as Class I-IV, depending on their risk level. Most AI/ML medical devices are Class II, III or IV. In seeking approval from Health Canada, manufacturers must provide product lifecycle information that demonstrates the safety and effectiveness of an MLMD.

Manufacturers must demonstrate safety and effectiveness appropriate to class, mapped to recognised standards, with documentation covering design and intended use, risk management, data and model development, testing/clinical validation, transparency (essential user information), and post-market monitoring.

As discussed in 3.2 Pre-Market Requirements, Health Canada’s guidance through the PMGMLMD is comprehensive as it considers the entire life cycle of an MLMD, from the initial design through post-market monitoring. For MLMD, Health Canada recognises predetermined change-control plans (PCCPs) to pre-authorise defined updates with ongoing surveillance.

There may also be reporting obligations for some incidents. For example, the federal government requires manufacturers to report “incidents” related to a failure of the device, a deterioration in its effectiveness, or an inadequacy in its labelling or directions for use. In addition, a report must be made if a device has caused, or could cause, death or a serious deterioration in the health of a patient, user or other person.

The Protecting Canadians from Unsafe Drugs Act (Vanessa’s Law) introduced mandatory reporting obligations, in force since 2019, requiring hospitals to report any medical device incident that occurs within the hospital. Hospitals must report qualifying incidents to Health Canada within 30 calendar days of becoming aware of them. A “medical device incident” means “an incident related to a failure of a medical device or a deterioration in its effectiveness, or any inadequacy in its labelling or in its directions for use that has led to the death or a serious deterioration in the state of health of a patient, user, or other person, or could do so were it to recur.” The reports must include factual information about the incident and certain personal health information about the patient.

There are few publicised enforcement cases specific to AI. Health Canada has established a robust framework for compliance and enforcement. Most enforcement to date has focused on compliance promotion, monitoring, and ensuring that manufacturers meet the evolving regulatory requirements for AI/ML devices.

At the provincial/territorial level, privacy commissioners are primarily responsible for dealing with privacy breaches arising out of the use of AI, while human rights tribunals will handle cases of discrimination. Regulatory colleges and associations will focus on how healthcare professionals use AI/ML in their practices.

Canadian courts have not yet apportioned liability in AI-influenced care, and no AI-specific statute addresses it.

Liability may arise from the inappropriate reliance on AI systems by healthcare providers, flaws built into the AI systems and their algorithms, or negligent selection, vetting, and maintenance of the AI tools. Liability will likely be apportioned based on a consideration of the duties owed to the patient by the various stakeholders.

Liability may arise from clinician reliance, system defects, or institutional selection, implementation and oversight. Clinicians must apply judgment to AI outputs, developers may face design/performance claims, and institutions are expected to vet, deploy and supervise tools appropriately.

In Canada, medical malpractice claims arise from the tort of negligence. To prove negligence, a plaintiff must establish that a healthcare provider breached the applicable standard of care and that the breach caused compensable damage.

The standard of care is determined by comparing the conduct of the healthcare provider to the standard of a reasonable and prudent healthcare provider in similar circumstances. The question is whether the provider used the AI system in a way that adhered to the standards of a reasonable and prudent healthcare provider in similar circumstances with similar experience and qualifications.

In the context of AI, past court decisions analysing negligence for harms suffered from reliance on faulty medical technologies may provide guidance on how this will be dealt with. The determination will involve an analysis of whether it was reasonable for the healthcare provider to rely on the results of the AI system when making decisions and providing care. This involves consideration of the profession’s standard of practice, whether the device was authorised, and whether the provider mitigated any foreseeable risks. The plaintiff would then need to prove that the way the healthcare provider used the AI system caused the compensable harm. Alternatively, a plaintiff would need to establish that an AI developer breached their duty of care to them by providing a product that gave rise to injury in the ordinary course of use.

The government of Canada previously tabled Bill C-27, the Digital Charter Implementation Act, 2022, which sought to provide guidance on management requirements and risk assessment frameworks associated with AI use. In the absence of a successor, developers and healthcare institutions must look to the duty of care owed to a patient in making decisions. In practice, organisations may align their internal programmes with widely used frameworks such as the National Institute of Standards and Technology’s AI Risk Management Framework and international standards including ISO/IEC 23894:2023 (AI — Guidance on Risk Management), each of which provides structured methods to identify, assess and mitigate AI risks across the lifecycle.

Health Canada regulates SaMD under the Food and Drugs Act and MDRs (see 3. Regulatory Oversight in Healthcare AI).

Operating in the AI sector involves handling large datasets, developing predictive models, and delivering recommendations that will significantly impact users and businesses. To assess these risks, document mitigations and calibrate oversight, many organisations leverage Canada’s Algorithmic Impact Assessment (AIA) (developed for automated decision systems in the federal public service) as a practical template. As discussed in 2.5 Data Protection and Privacy, privacy impact assessments are also increasingly used before implementing AI systems in healthcare.
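
For organisations adapting the AIA approach internally, the underlying scoring logic can be expressed very simply. The following is a minimal sketch in Python, assuming a hypothetical questionnaire whose questions, weights and level thresholds are invented for illustration; it is not the official AIA tool, which uses its own question set and scoring.

  # Hypothetical impact-scoring sketch loosely modelled on questionnaire-style
  # algorithmic impact assessments; questions, weights and thresholds are assumptions.
  QUESTIONS = {
      "affects_clinical_decisions": 4,
      "uses_personal_health_information": 3,
      "fully_automated_decision": 4,
      "serves_vulnerable_population": 2,
      "no_human_override": 3,
  }

  def impact_level(answers):
      """Map yes/no answers to a coarse impact level (I-IV) via a weighted score."""
      score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
      if score <= 3:
          return "I"
      if score <= 7:
          return "II"
      if score <= 11:
          return "III"
      return "IV"

  answers = {"affects_clinical_decisions": True, "uses_personal_health_information": True}
  print(impact_level(answers))  # "II" under these assumed weights

In practice, the output of such an exercise would feed the documentation and calibration of oversight described above, with the resulting level determining the depth of mitigation and review required.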

Entities involved in deploying healthcare AI systems are exposed to specific risks, including data breaches and claims arising from the unintended consequences of AI outputs. These entities will want to carefully consider their insurance needs, including directors’ and officers’, technology errors and omissions, cyber, professional, media and employment practices liability coverage.

Canada has not adopted an AI-specific healthcare safe harbour shielding compliant parties from tort claims. Consequently, the best practices for reducing liability risk remain unclear.

AI developers may argue that those who implement AI systems should be liable for harm resulting from their use once the system has been approved by an institution or regulator. On this view, downstream entities have a duty to ensure the system is used reasonably, and it is the responsibility of the provider to identify and mitigate the risks associated with using it.

Institutions and healthcare providers may argue that AI developers should be liable when the harm stems from a design defect, such as the choice of data used to train the algorithm. On this view, the developer is responsible for the way the data is used, balanced and interpreted by the AI system when it generates an assessment.

A persistent challenge is the “black-box” nature of some systems, which complicates evaluation and root-cause analysis due to their lack of transparency. It is challenging for healthcare providers to critically evaluate AI-generated assessments because the underlying data and how it informs the output are opaque. Healthcare providers do not control or have access to the system’s inner workings, decisions or recommendations, making it difficult to identify and mitigate associated risks.

The ethical landscape for AI in Canada is multi-layered and evolving. National, provincial, and institutional principles collectively emphasise safety, accountability, transparency, equity and robust governance.

At the federal level, the Pan-Canadian AI for Health (AI4H) Guiding Principles highlight person-centred care, equity, privacy and security, robust oversight, accountability, transparency, data literacy, and Indigenous data sovereignty. These principles aim to ensure that AI supports healthcare in a manner that is inclusive and trustworthy. The government of Canada has also issued a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems to mitigate risks associated with generative AI. While not legally binding, the Code offers measures for ethical development, deployment and management.

Provincially, professional regulators and associations reinforce that healthcare providers maintain responsibility for decision-making, informed consent, and privacy. Further, most recognise AI’s growing role and expect caution in adoption consistent with professional codes of ethics to safeguard patient safety and well-being.

At the institutional level, hospitals and other organisations are adopting internal governance structures for AI. These frameworks typically focus on intentionality of use, accountability, inclusivity, trustworthiness, and clinical fit.

Data governance frameworks also play a central role. The Canadian Institute for Health Information (CIHI) embeds integrity, inclusion and Indigenous data sovereignty in its Health Data Ethics framework. INOVAIT, the image-guided therapy (IGT) and AI network funded by the government of Canada, has proposed Principles for Safe, Ethical and Trustworthy Canadian Health Data Licensing that emphasise transparency, patient representation, and responsible sharing of health data for AI innovation. Indigenous data sovereignty expectations are also reflected in the OCAP principles (ownership, control, access and possession), which inform organisational data-sharing and stewardship practices.

International ethical frameworks also inform Canadian practice. Guidance from the WHO and other global bodies has been referenced by professional regulators when developing domestic expectations for the design and ethical use of AI in clinical settings. In addition, the US Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) jointly identified ten guiding principles to help promote safe, effective, and high-quality medical devices that use AI and machine learning.

At an organisational level, fairness can be embedded through principles that encourage disclosure of validation populations and subgroup performance, inform procurement and governance, support meaningful transparency for clinicians and patients, and foster ongoing evaluation with timely corrective action.

Transparency and explainability are core principles for the ethical use of AI in healthcare. AI systems should be accompanied by transparent communication that helps providers, patients, and the public understand how, when and where the AI is being used. Healthcare professionals should also understand how AI reaches its conclusions.

Professional regulators increasingly emphasise these principles, directing clinicians to:

  • be transparent about the extent to which AI informs clinical decision-making;
  • explain how tools function and their limitations; and
  • retain the ability to interpret outputs and exercise clinical judgement.

For medical devices, Health Canada’s transparency principles emphasise providing intended use, limitations, and essential user information to support safe human-AI performance.

Responsibility for sharing information falls on all relevant stakeholders, including vendors and deploying institutions. In practice, both providers and patients should have access to clear, meaningful information about how AI functions in clinical contexts.

Algorithmic bias raises ethical and regulatory challenges for AI in healthcare. Canadian policy instruments generally emphasise recognising and mitigating bias and promoting fairness across the AI lifecycle.

The AI4H Guiding Principles treat equity, diversity, and inclusion as foundational. In practice, this means AI systems should support fair and culturally appropriate care, minimise bias and health inequities, and reflect the needs of diverse populations. Health Canada’s evolving regulatory framework for AI-enabled medical devices addresses algorithmic bias as a safety concern, advocating co-ordinated legal, ethical, and governance mechanisms to manage risk.

For clinicians, several regulators expect reasonable efforts to identify and address potential bias before incorporating AI into patient care. Healthcare providers are encouraged to exercise caution when interpreting AI-generated outputs and should account for the patient’s demographics, clinical context and social determinants of health. For organisations, higher-level controls include procurement standards for dataset representativeness and fairness testing, monitoring for drift and disparate impact, and clear escalation pathways when bias signals or harms are detected. CIHI’s ethics framework and OCAP provide reference points for equity and Indigenous data-sovereignty in data stewardship.
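
As an illustration of what subgroup fairness testing can look like in practice, the short Python sketch below computes per-group sensitivity on a labelled validation set and flags groups that trail the best-performing group by more than a chosen margin. The field names and the five-point margin are assumptions for illustration only, not a prescribed methodology.

  from collections import defaultdict

  def sensitivity_by_group(records):
      """Per-group true-positive rate from validation records with known labels."""
      tp = defaultdict(int)    # true positives per group
      pos = defaultdict(int)   # positive (diseased) cases per group
      for r in records:
          if r["label"] == 1:
              pos[r["group"]] += 1
              if r["prediction"] == 1:
                  tp[r["group"]] += 1
      return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

  def flag_disparities(rates, max_gap=0.05):
      """Groups whose sensitivity trails the best-performing group by more than max_gap."""
      best = max(rates.values())
      return [g for g, rate in rates.items() if best - rate > max_gap]

  records = [
      {"group": "A", "label": 1, "prediction": 1},
      {"group": "A", "label": 1, "prediction": 1},
      {"group": "B", "label": 1, "prediction": 0},
      {"group": "B", "label": 1, "prediction": 1},
  ]
  rates = sensitivity_by_group(records)
  print(rates, flag_disparities(rates))  # flags group B in this toy example

Results of this kind would then drive the procurement decisions and escalation pathways referred to above, rather than being an end in themselves.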

Existing guidance makes clear that AI should augment, rather than replace, clinical judgement. Clinicians maintain a role in decision-making that affects patient care, considering whether any AI recommendation is clinically appropriate in the context. Regulators expect them to guard against automation bias, inform patients about the tool’s role and limits as part of consent, and document reasons for accepting or departing from an output.

Health Canada’s risk-based approach, together with professional standards and institutional policy, requires proportionate clinical and regulatory oversight across the lifecycle. Core safeguards include a reliable clinician override, effective change management for adaptive models, post-deployment monitoring, audit trails, and supervision of team use.

There is no blanket statutory ban on “autonomous” clinical AI, but device obligations, professional duties and institutional policies significantly constrain unsupervised use. Systems that materially affect diagnosis or treatment should include timely human verification and override, while full automation is generally confined to narrow, lower risk functions or controlled settings.

Canada does not currently have a legal or regulatory regime specifically governing training data used for AI systems. The principal federal proposal, AIDA, which would have imposed certain evaluation and risk assessment requirements for training data sets, including with respect to risks of bias, lapsed in January 2025.

In the absence of specific legislated standards for AI training data, existing privacy and health information regimes govern whether and how personal health information may be used to train healthcare AI systems. In Canada, each province is governed by a different set of public and private sector privacy and health information laws. Private-sector processing is generally governed by the federal private sector statute except where substantially similar provincial laws apply. Health-sector actors (eg, custodians/trustees and their agents) are generally governed by provincial health information statutes, and public hospitals face additional obligations under provincial public-sector privacy laws. The applicable law turns on:

  • the nature of the data (personal information, personal health information, de-identified, anonymised);
  • the role and location of the accountable entity (custodian/trustee, public body, private organisation); and
  • in some cases, the residence of the individual.

General privacy principles relating to consent, purpose limitation, minimisation, accuracy, and safeguards govern use of personal health information in training, when making decisions about individuals, and when disclosing their information. Certain regulations governing the creation and maintenance of source data may also indirectly drive source data quality and documentation. Certain health information statutes contain technical requirements for the use and provision of electronic information management systems (ie, logging, audit requirements).

Additionally, Health Canada regulates certain tools as software as a medical device. Pre-market expectations include evidence of data management practices, algorithm change protocols, transparency, and cybersecurity measures commensurate with risk. Voluntary industry codes also encourage dataset quality and bias mitigation.

The AI4H Guiding Principles, endorsed by the federal, provincial and territorial (FPT) governments (except for Quebec), outline commitments to, among other things, ensure that personal health information will be used in AI technologies in a manner that respects individual privacy and ensures data security, including through appropriate consent, de-identification, secure systems, and legal compliance with privacy, information and data legislation. Commitments were made to implement clear regulatory, policy, ethical and/or procurement frameworks for healthcare AI.

Secondary use generally requires consent unless a statutory exception applies, which varies by province. Use and disclosure of personal health information for training of AI models is not specifically authorised by any jurisdiction’s privacy or health information laws. Therefore, informed individual consent is generally required to use any identifying health information for such purposes.

Multiple health information laws authorise custodians to transform identifying health information into non-identifying form, directly or through an information manager/agent, without consent. While irreversibly anonymised healthcare data is generally not subject to the requirements applicable to personal information (including consent), anonymisation is an exceptionally high standard that is not always feasible to achieve. De-identified information continues to be subject to consent requirements subject to limited exceptions, which vary by jurisdiction.

Research exceptions may allow use without consent subject to conditions (eg, ethics approval, de-identification, safeguards).

Quebec’s privacy and health information statutes require transparency where decisions are made exclusively on automated processing of personal information, including a requirement that individuals be informed of the personal information and principal factors involved in such decisions, and provided with a right to have the decision reviewed by a human.

Sharing for AI development and deployment is governed by existing privacy and health information regimes, institutional policy, and contracts.

Privacy and health information laws in most provinces and territories mandate written agreements when a service provider processes personal health information, with some jurisdictions also specifically prescribing certain obligations that must be included. Prescribed content commonly includes:

  • a description of services;
  • limits on collection, use and disclosure;
  • security safeguards;
  • breach notification;
  • audit and oversight;
  • subcontracting controls; and
  • return or destruction on termination.

Data processing by the service provider/information manager must generally be limited to the purposes of performing the services, limiting the ability for shared personal information to be used for such service providers’ own purposes. Secondary use by service providers for AI development may therefore be limited, unless specifically within scope of the services.

Cross-border data transfers, for any purpose, require clear notice to individuals that their information will be transferred outside their jurisdiction, subject to different legal regimes in that region, and could be disclosed to law enforcement if authorised under the jurisdiction’s laws. Additionally, Quebec law requires data transfer impact assessments prior to any transfer of personal information outside Quebec, involving an assessment of factors including the contractual protections, the legal regime in the recipient state, and the sensitivity of the information transferred. Any necessary risk mitigation measures must then be implemented before the information is transferred outside Quebec. This is not AI-specific but would apply to personal information transferred cross-border for any purpose.

Generally, while there is some variation in the language used between statutes in each Canadian jurisdiction, statutory definitions, court decisions and regulatory guidance have established that:

  • personal information is information that would allow an individual to be identified, either alone or in combination with other information;
  • anonymised information is data that irreversibly no longer allows direct or indirect re-identification of an individual, and thus is generally outside the scope of “personal information”; and
  • de-identified data is information from which direct identifiers have been removed, but that carries residual indirect re-identification risk when combined with other information.

Specific standards for de-identification and anonymisation of health data vary across provincial and territorial statutes. Some statutes address the issue only through the definition of “personal information”, while others specifically define de-identified and/or anonymised information. Some statutes also set specific requirements applicable to each of personal information, de-identified information and anonymised information, while others treat anything other than personal information as beyond the scope of the law’s obligations. Where statutes regulate de-identified and/or anonymised data, common requirements include technical and procedural standards for anonymisation, re-identification risk assessments, and restrictions on data matching and re-identification.

In Quebec, the legislation distinguishes de-identified information (still personal information) from anonymised information (no longer personal information) and imposes strong duties and constraints governing anonymisation and the purposes for which anonymised information may be used. This includes obligations to continuously assess re-identification risks, update anonymisation measures, and maintain records of anonymisation processes.
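
The practical gap between de-identification and anonymisation can be seen in a minimal sketch. The Python example below, using hypothetical field names, strips direct identifiers from a record; the result is de-identified (and in most jurisdictions still regulated), because quasi-identifiers such as postal code and birth year may still permit re-identification when combined with other data. It is not a method for achieving the much higher anonymisation standard described above.

  # Hypothetical record and identifier list for illustration only.
  DIRECT_IDENTIFIERS = {"name", "health_card_number", "email", "phone"}

  def deidentify(record):
      """Remove direct identifiers; indirect (quasi-)identifiers remain."""
      return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

  record = {
      "name": "Jane Doe",
      "health_card_number": "1234-567-890",
      "postal_code": "K1P 1C3",
      "birth_year": 1980,
      "diagnosis": "type 2 diabetes",
  }
  print(deidentify(record))
  # {'postal_code': 'K1P 1C3', 'birth_year': 1980, 'diagnosis': 'type 2 diabetes'}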

Medicines in Canada, including the approval of new drugs, are regulated under the Food and Drugs Act and Regulations. Drugs may also be subject to patent protection under the Patent Act if they meet the requirements for patentability. Patent claims may cover the medicinal ingredient itself; a process for making it; a formulation comprising the medicinal ingredient; a dosage form comprising the medicine; or a use of the medicine for a particular indication.

Patent claims to methods of medical treatment are not patentable in Canada based on longstanding jurisprudence. Typically, an invention that requires a measure of skill and judgement by a medical professional will not be patentable. The Supreme Court of Canada is expected to provide clarity on the bounds of patentability for methods of medical treatment in an appeal to be heard in October 2025 (decision expected in 2026).

The Patent Act also prohibits claims directed to an algorithm or mere scientific formula. Therefore, while AI is increasingly used for the discovery and assessment of pharmaceuticals, an algorithm for such use would not on its own be patentable without further patentable subject matter. For instance, a novel medical device that applies an algorithm, such as an AI system, may be patentable because it would meet the physicality requirement, but a computer program that simply runs the algorithm would not. Further, the Patent Act requires a named “inventor” who contributed to the inventive concept. To date, non-human inventors, such as an AI system, have not been recognised as potential inventors under the Patent Act.

Drug regulation and IP intersect through the Patented Medicines (Notice of Compliance) Regulations regime. AI-assisted inventions follow existing patentability rules.

Original works, including software code, are eligible for copyright protection, but only those with a human author. AI cannot be recognised as an “author”, and works created solely by non-human agents lack copyright protection. Conversely, use of AI to synthesise and compile information to generate new outputs may raise copyright infringement concerns where the underlying material is subject to copyright protection and no exception applies. A fair dealing exception may apply where the use is for research or private study purposes only. Use of AI to compile medical information may also raise privacy concerns under federal and provincial legislation.

While AI is useful in synthesising medical data and preparing regulatory submissions, Canadian law recognises the proprietary nature of such data, with some limits. For instance, innovative drugs that have not previously been marketed are subject to data protection, meaning that a generic manufacturer cannot rely on the data of the innovator for the purposes of comparing its product until six years after the first approval of the innovative product, and cannot obtain approval until eight years after such date.

The ownership of AI outputs will largely be a function of the contractual relationships between the parties involved. As noted in 7.1 Patent Protection, it is the inventor or author who has the right of IP protection and the ability to assign or license to others any acquired rights. Where research is conducted in an institutional or corporate setting, employment contracts often dictate that such rights be automatically assigned to the employer. The terms of such agreements may vary and may also provide for sharing of rights and commercial exploitation. Where AI is commonly involved in generating outputs, it may be advisable to consider contractual provisions specifically covering such outputs. Where the AI provider is a third party, the terms of the licence with such party may include provisions related to ownership of outputs and their commercial exploitation.

Licensing models for AI technologies or AI outputs in healthcare will vary based on the parties’ positions and the particular technology. However, the key inflection point is whether the IP owner seeks to be involved in commercialisation of the IP or act solely as a licensor. In this regard, whether an exclusive or non-exclusive licence is granted to another party, it will be critical to properly define the scope of rights granted. This may present challenges where AI technology or outputs are concerned, when the rights are not to a fixed object or system. For instance, does the licence only permit use of the AI system or do commercial rights extend to the new outputs it generates? Given the highly regulated nature of the healthcare industry, and in particular with respect to pharmaceuticals, licence agreements should contain terms specifying any obligations to seek and obtain regulatory approvals, including timelines and contingencies for failure to do so, if applicable.

In Canada, AI-based clinical decision support systems (CDS) are regulated as a SaMD or MLMD under the MDRs. That said, not all CDS is a “device.” To align with international jurisdictions, Health Canada has indicated that some CDS may fall outside the device definition where it only informs a clinician and otherwise meets specific exclusion criteria. Where claims or functionality cross into diagnosis/treatment that cannot be validated independently by the clinician, the software will generally be SaMD. The evidence required to support a medical device licence application is proportional to the risk posed by the device. In particular, devices presenting as high risks to patients, such as Class III and IV devices, require evidence such as clinical trials and reviews to establish their effectiveness. As part of their safety requirements, manufacturers must also include a cybersecurity vulnerability assessment where a risk management framework is put in place to mitigate any potential risks.

AI-based diagnostic tools are regulated as a SaMD under the MDRs. Manufacturers must determine the risk class of the medical device based on the intended use of the software and the applicable rules in Schedule 1 of the MDRs. The intended use describes how it will be used and what diseases it intends to treat or diagnose. Devices will be classified as Class II if the software is intended to image or monitor a physiological process or condition. However, if an erroneous result could lead to immediate and critical danger, the device will be classified as Class III.

The classification of the SaMD will determine the standards of evidence Health Canada requires to license the device. Devices falling into a high-risk category such as Class III will require more rigorous evidence of safety and effectiveness, including clinical validation studies that demonstrate diagnostic accuracy, reliability and generalisability across diverse patient populations.

As noted in 8.2 Diagnostic Applications, the risks associated with erroneous results will determine the classification and standards of evidence required to license them.

Regulatory bodies, such as the provincial Colleges of Physicians and Surgeons, require that these tools be used under the supervision of a qualified healthcare professional. This ensures that final treatment decisions remain with a human clinician who can override or contextualise AI-generated assessments. AI systems are used to complement clinical care, not replace human evaluation or expertise.

AI applications used in remote patient monitoring are regulated as a SaMD under the MDRs if they are intended to diagnose, monitor, or guide treatment decisions based on patient data collected outside traditional clinical settings.

Specific considerations for home or non-clinical settings may include more robust cybersecurity and privacy protections. Part of the safety assessment required to license a SaMD is an investigation of the device’s cybersecurity vulnerabilities, and the strategies that are employed to mitigate the risks. Cloud use requires documented controls for secure design, risk management, verification and validation testing, and continued monitoring of emerging risks.

Cybersecurity management intersects with broader laws and privacy statutes (such as PIPEDA and provincial Health Information Acts), requiring compliance with consent, data protection, and cross-jurisdictional data transfer rules.

AI tools used in drug discovery and development are primarily governed by the Food and Drugs Act and the associated regulations. Their regulatory status will depend on how the tool is used in the drug development lifecycle. The AI tool’s classification and the applicable regulatory standards will be less stringent when the tool is used for tasks such as target identification, screening, or note-taking. More stringent standards will apply when it is used to inform or draw conclusions from clinical results. These may include more transparent documentation of algorithms, data sources and model performance, as well as demonstration of the reliability and generalisability of the process and results across populations and datasets.

For AI tools that directly impact patient safety or trial outcomes, additional requirements may include Good Machine Learning Practice (GMLP) adherence, risk management, and ongoing monitoring.

Canada currently has no comprehensive, health-care-specific AI statute. Any future bill would need to be re-tabled and studied, leaving timing uncertain.

In the interim, Canada relies on existing privacy laws, MDRs, provincial legislation, and voluntary codes of conduct, alongside regulatory guidance.

A dedicated regulatory sandbox for healthcare AI remains at the exploratory stage. Health Canada explored using a flexible approach for regulating adaptive MLMD under its new authorities for innovative therapeutic products, but ultimately concluded that the existing regulatory framework (the MDRs), supplemented by policy and guidance, can appropriately oversee these products.

On the privacy side, Ontario’s Information and Privacy Commissioner has explored how a privacy regulatory sandbox could operate in the province, but this work remains conceptual. Federally, the Privacy Commissioner of Canada has, together with other international data protection and privacy authorities, expressed support for mechanisms that could include regulatory sandboxes, but there is no health-specific privacy sandbox in operation.

Enabling infrastructure is advancing in parallel. The Shared Pan-Canadian Interoperability Roadmap aims to improve standards-based exchange of health information across jurisdictions, which can be used to improve data availability for AI system validation and performance monitoring. Canada is also advancing AI capacity through national investments and adoption programmes, which, while cross-sectoral, aim to drive responsible AI adoption, regional growth, and infrastructure access that can lower barriers for healthcare AI development and implementation.

Canada participates actively in international efforts to align expectations for medical-AI systems. As a member of the International Medical Device Regulators Forum, the International Organization for Standardization and the WHO, Canada contributes to working groups that develop consensus-based guidelines for AI/ML-enabled medical devices and other initiatives that inform domestic practice. In collaboration with the US FDA and UK MHRA, Health Canada has co-developed guiding principles on GMLP and transparency principles for MLMD, focusing on information essential to the safe performance of the human-AI team.

In practice, developers operating across borders are still likely to encounter differences in how authorities license devices and consider real-world evidence, so filings will often require tailoring across jurisdictions. Cross-border data issues persist. Canadian law generally permits international processing with appropriate protections, but once personal health data flows to multiple jurisdictions and cloud environments, practical compliance and enforcement become more complex.

As technology advances, regulatory focus is shifting from one-time approvals to lifecycle oversight. PCCPs are intended to address continuous learning systems by specifying planned changes in advance, and Health Canada’s guidance anticipates increased post-market requirements and licence conditions to manage performance drift and emergent risks.

There is no categorical ban on autonomous functions, but medical device requirements and professional duties effectively call for timely human verification and an override where outputs could materially influence diagnosis or treatment. In practice, fully automated tools are generally confined to narrow, lower-risk indications or controlled settings until stronger evidence and fail-safes are established.

As AI extends into robotic assistance, wearables and VR-enabled care, system validation, cybersecurity, usability, informed consent and accountability questions become more complex, which are areas where International Medical Device Regulators Forum (IMDRF) principles will increasingly shape expectations in Canada.

Privacy concerns remain significant. AI depends on large volumes of health data, and requirements for the use of de-identified or anonymised data vary across provinces, creating practical challenges for data availability and model development.

Algorithmic bias is a further challenge. Systems trained on under-representative data can perpetuate inequities and undermine diagnostic accuracy and patient safety. High-quality, diverse data, and transparency about representativeness and limitations, are pre-requisites for equitable AI development and deployment, but they must be balanced with privacy and governance constraints. International guidance and the joint transparency principles are prompting more explicit labelling and user information on representativeness and limitations, enabling clinicians to understand when to rely on outputs and when to discount them.

There are numerous steps that healthcare AI developers and users can implement to ensure compliance, including the following.

  • Governance structures: creation of a multi-disciplinary AI governance committee, risk categorisation and management frameworks, and escalation/rollback procedures.
  • Documentation practices: detailed development records, algorithmic impact assessments, privacy impact assessments, intended-use statements, version control and change logs.
  • Monitoring systems: monitor performance, incident reporting, and audits.
  • Balance innovation with compliance: explore use cases with pilots/limited releases with guardrails, “sandbox-like” test environments.
  • Data governance: incorporate data minimisation, de-identification, transparency, and appropriate security and access controls.

Key provisions to address in contracts that encompass the development and/or use of healthcare AI in Canada include the following.

  • Scope of use and regulatory compliance: define intended use, relevant jurisdictions and applicable regulatory regimes, regulatory status (eg, SaMD, where applicable).
  • Data protection and privacy: specify required privacy compliance, roles (eg, custodian/agent), data ownership and permitted uses, data flows and residency, data minimisation requirements, sub-processor requirements, breach notification, privacy impact assessment co-operation, security controls (organisational, technical measures), and audit rights.
  • Interoperability and integration: describe supported EMRs/interfaces, conformance to relevant standards, responsibilities for testing and change control, and clear statements on interoperability/readiness.
  • Intellectual property rights: clarify ownership/licensing of software, documentation, and outputs; rights to use de-identified data for quality improvement. Developers may seek rights to derived analytics and model improvements, and customers often require limits (eg, no use of identifiable data).
  • Performance standards and service levels: establish measurable quality and availability targets (including latency for CDS), support/response times, and remedies (eg, service credits). Set appropriate business continuity/disaster recovery expectations (backup frequency, restoration targets, downtime procedures).
  • Clinical validation and risk allocation: require disclosure of validation context (populations, settings), known limitations, and post-deployment monitoring plans; consider evaluation criteria for safety, effectiveness, fairness, transparency and cybersecurity.
  • Transparency and change management: include “essential user information” (intended use, limitations, update policy) and a change-control clause for model updates (eg, notice, testing in customer’s environment, rollback). Customers should retain a right to pause deployment of an update that degrades performance in their setting.
  • Indemnification and limitation of liability: tailor indemnities for IP infringement, data breaches and safety events; set liability caps that reflect risk and insurance requirements, with appropriate exceptions (eg, wilful misconduct).
  • Warranties: conformity to documentation, compliance with applicable law, no IP infringement, disclosure of known material defects or vulnerabilities.
  • Termination and transition: data export/portability (formats, timelines, fees), transition assistance obligations, and secure deletion/archival.
  • Implementation supports: consent language, training and user support, patient communication materials, etc.

Most insurance coverage is currently provided through tailored versions of existing policies (ie, professional liability, product liability, cyber liability, general commercial liability and technology errors and omissions). Some insurers and brokers are developing bespoke solutions. There are no AI-specific mandatory insurance requirements at present.

Insurers will typically evaluate several factors when carrying out their risk assessment before underwriting policies for healthcare AI, including:

  • intended use and risk class;
  • validation and testing evidence;
  • data governance and cybersecurity;
  • transparency and explainability;
  • human oversight and override procedures; and
  • incident response and breach management plans.

The preceding chapters identify issues that should be considered when implementing healthcare AI. Organisations should implement appropriate risk management measures to ensure safe and effective deployment, including the following.

  • Accountability and governance: assign clear owners (clinical, IT), define approval requirements and the required algorithmic and privacy assessments, maintain issues/rollback escalation pathways.
  • Training and change management: provide role-specific training for clinicians and staff (capabilities, limits, when to override, how to document AI involvement) and refresh periodically after updates.
  • Essential user information at the point of care: ensure intended use, limitations, validation context, known populations, and update policy are visible and usable.
  • Phased deployment with safeguards: start with pilots or limited release, set success metrics and safety thresholds in advance, and implement reporting for user feedback, near-misses, and incidents for continuous improvement.
  • Post-deployment monitoring: track performance and drift, define triggers for rollback or retraining, and implement structured responses for incident handling (a simple monitoring sketch follows this list).
  • Integration and security: co-ordinate with IT for role-based access management, auditing and logging, updates, backup/restore, disaster recovery, and appropriate security measures.
  • Operational documentation: maintain standard operating procedures regarding approved use and limitations, change logs, human oversight requirements; update consent materials to ensure patients understand how AI is used in their care.
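
As a simple illustration of the post-deployment monitoring item above, the Python sketch below tracks a rolling agreement rate between model outputs and clinicians’ final decisions and flags the tool for review when agreement falls below a pre-set threshold. The window size, the 90% threshold and the agreement metric are assumptions chosen for illustration; real deployments would define their own metrics and triggers.

  from collections import deque

  class DriftMonitor:
      """Rolling check of clinician agreement with an AI tool's outputs."""
      def __init__(self, window=200, min_agreement=0.90):
          self.outcomes = deque(maxlen=window)   # 1 = clinician agreed, 0 = overridden
          self.min_agreement = min_agreement

      def record(self, agreed):
          self.outcomes.append(1 if agreed else 0)

      def needs_review(self):
          if len(self.outcomes) < self.outcomes.maxlen:
              return False   # not enough data yet to judge
          return sum(self.outcomes) / len(self.outcomes) < self.min_agreement

  monitor = DriftMonitor(window=50, min_agreement=0.90)
  for agreed in [True] * 40 + [False] * 10:
      monitor.record(agreed)
  print(monitor.needs_review())  # True: agreement fell to 0.80 over the window

A flag of this kind would feed the rollback or retraining triggers and incident-handling processes referred to in the list.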

In addition to the information in 6.3 Data Sharing and Access and 9.3 International Harmonisation, since the regulation of AI in Canada falls under different levels of government depending on the issue, consideration must be given to the variability that may exist between provincial, territorial, and federal rules and regulations, including areas such as privacy, human rights and anti-discrimination laws, licensing and professional regulation. To address this variability, organisations should:

  • map data flows;
  • incorporate contractual safeguards to require compliance with relevant jurisdictional requirements;
  • ensure compliance with professional regulatory obligations that are governed provincially/territorially;
  • ensure third party and cloud processors meet security and local regulatory requirements; and
  • maintain a regulatory matrix to track obligations and incident reporting timelines.

Gowling WLG (Canada) LLP

160 Elgin Street
Suite 2600
Ottawa, ON, K1P 1C3
Canada

+1 613 233 1781

+1 613 563 9869

martin.lapner@gowlingwlg.com
gowlingwlg.com/en-ca