As artificial intelligence (AI), including the rapid rise of generative AI (GAI), becomes more embedded in Canadian healthcare – in applications ranging from medical diagnosis, virtual nursing assistants and medication management to robotic surgery and healthcare data management – clear sector-specific regulation remains a work in progress.
The Regulation of AI
The principal federal proposal, the Artificial Intelligence and Data Act (AIDA), died on the Order Paper when Parliament was prorogued in January 2025, and no successor bill has yet been introduced. Provinces continue to rely on existing statutes, guidance documents and voluntary codes to govern the use of AI in healthcare.
Federal Landscape
The Government of Canada published its Digital Charter in 2019 and followed up with Bill C-27, the Digital Charter Implementation Act, 2022, which included AIDA. Although Bill C-27 passed second reading, significant criticism was levelled at its reliance on future regulations and its limited sectoral tailoring, leading to delays at the committee stage; the Bill ultimately died on the Order Paper when Parliament was prorogued.
In this legislative vacuum, the federal government announced a series of initiatives to support responsible and safe AI adoption, including a refreshed membership of the Advisory Council on AI, establishment of a Safe and Secure AI Advisory Group, release of the Guide for Managers of AI Systems applicable to federal institutions, and expansion of signatories to the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.
Health Canada continues to regulate many clinical AI tools as software as a medical device (SaMD) under the Medical Devices Regulations. Using the International Medical Device Regulators Forum risk classification, the department mandates more rigorous evidence and post-market surveillance for software whose malfunction could directly compromise patient safety. In February 2025, Health Canada issued its Pre-market Guidance for Machine-Learning-Enabled Medical Devices, detailing expectations for algorithm change protocols, transparency, and cybersecurity measures.
Software that is limited to administrative functions remains exempt, as do applications that merely support, rather than supplant, clinical judgment.
Provincial Initiatives
Provincial legislation applicable to AI in healthcare generally remains in the early stages, with many provinces relying on existing frameworks, such as privacy laws and healthcare regulations, to address AI-related concerns.
Some provinces have taken steps to modernise legislation and specifically contemplate AI. In Ontario, Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024, received Royal Assent on November 25, 2024. The statute authorises future regulations that will require public sector entities, including hospitals, to disclose their use of AI, implement accountability frameworks, adopt risk management measures, and adhere to prescribed technical standards. In prescribed circumstances, institutions may be required to ensure an individual provides oversight of AI use.
In Québec, An Act respecting the protection of personal information in the private sector, An Act respecting Access to documents held by public bodies and the Protection of personal information (applicable to the public sector), and the Act respecting health and social services information (applicable to healthcare organisations) require organisations to notify individuals of automated decisions, disclose the personal data and principal factors relied upon, and provide a right to human review.
Professional Regulatory Guidance
Canadian health professional regulators have released preliminary, high-level guidance on the use of AI, emphasising that AI must augment rather than replace professional judgment. The guidance consistently urges caution, with three dominant themes:
- Several regulators apply their broader technology standards to AI, requiring practitioners to carefully evaluate, apply and adapt technology in ways that prioritise and protect patient interests (eg, ensuring the use of reputable AI systems and continuing to assess electronic evaluations to identify any inadequate or erroneous results).
- Other regulators remind healthcare providers to understand patients' comfort with, and access to, emerging AI tools before recommending them, and to implement safeguards that protect patient privacy and avoid conflicts of interest.
- While most regulators do not prohibit registrants from using AI, many expressly warn against substituting computer-generated assessments, reports, or statements for the professional opinion of a healthcare provider.
AI and Civil Liability
Determining liability in cases involving the use of AI in healthcare remains complex and uncertain, as legal frameworks adapt to both rapidly evolving technologies and the shifting dynamics of human and AI-supported decision-making.
The introduction of AI in hospital settings may, for example, require institutions to develop protocols for the appropriate selection, implementation, training, maintenance, and inspection of such technologies, and to ensure that staff are appropriately qualified to use the applications. Developers and vendors may be expected to take reasonable care in the development of AI tools and to warn of limitations and risks.
It is challenging to predict how courts will assess healthcare providers' use of AI, particularly given the evolving nature of these technologies and inconsistent adoption, guidance, and practices. Claims involving the use of software (other than AI) may offer some insight into how courts might treat AI use in healthcare. These cases, coupled with existing liability principles, suggest that individual healthcare providers, institutions, developers of AI systems, and vendors may find themselves defending new types of claims relating to the negligent design, implementation, or use of an AI tool.
Looking ahead, the use of AI systems is likely to result in an increasing number of defendants in legal actions, extending beyond traditional healthcare providers to others in the supply chain, such as developers and vendors of AI systems. As algorithms become more autonomous and less susceptible to real-time human override, it may be harder to portray clinicians or hospitals as the principal risk bearers. At the same time, the opacity of AI systems is expected to make it difficult for plaintiffs to identify and prove that a specific act or omission caused them harm, potentially shifting the focus back to more traditional defendants and raising questions about how product liability claims will be assessed.
In this uncertain environment, organisations that develop, distribute, or integrate AI should carefully examine their contractual arrangements and proposed reallocations of risk, including through limitations of liability, indemnities, and other liability protections.
Privacy and Cybersecurity
The adoption of AI in healthcare raises questions about patient consent, data sharing, and transparency obligations under federal and provincial privacy laws.
When AI is applied in the healthcare context, issues may arise relating to authorisation to use the training dataset for the AI model, the collection and use of new data to update or fine-tune the model, the use of patient information when interacting with AI, and the requirements for consent and/or de-identification of data in each of these cases. These issues increasingly warrant investigation, including through a privacy impact assessment before an AI tool is implemented. Broad requirements to undertake privacy impact assessments before implementing such systems already exist: in Alberta and Québec, for example, they are required under health sector-specific privacy legislation.
Privacy legislation is also beginning to impose additional obligations with respect to transparency when AI makes or recommends a particular decision, as well as the right of individuals to request a human decision-maker.
Regulatory Initiatives and Investigations
Federal and provincial Privacy Commissioners have been among the most active in developing expectations for the use of AI, including in the healthcare sector. While they do not directly regulate AI as a whole, they play a key role in ensuring that the use of AI systems aligns with existing privacy laws (eg, the Personal Information Protection and Electronic Documents Act (PIPEDA) and provincial health privacy statutes).
The Office of the Privacy Commissioner of Canada (OPC) published A Regulatory Framework for AI: Recommendations for PIPEDA Reform, which recommends stronger accountability, more explicit rules for automated decision-making, and rights for individuals to challenge AI-driven decisions. For the first time, provincial and federal Commissioners have also investigated AI-related privacy concerns, including working collaboratively to coordinate AI oversight through a joint investigation.
Algorithmic Bias and Discrimination
Unconscious bias and unintentional discrimination may be embedded in the training data used to develop AI systems, entrenching historical harms in the form of biased output. Academic literature, including a 2024 Stanford-led study, has demonstrated that large language model chatbots can perpetuate debunked, racially biased medical myths. AI tools that recommend discriminatory practices, whether intentionally or not, could form the basis of a human rights claim.
Intellectual Property Uncertainties
Canadian intellectual property statutes do not yet expressly address ownership or infringement questions concerning AI-generated content, including in healthcare. For example, questions remain regarding the subsistence of copyright in, and authorship of, AI-generated works, as well as inventorship for patent-eligible AI outputs. Responsible integration of AI into health-related research and medical report generation is therefore needed to guard against plagiarism and protect intellectual property.
Conclusion
AI promises to revolutionise Canadian healthcare delivery, diagnostics, and resource allocation. While the regulatory landscape continues to evolve, liability and risk management considerations for healthcare providers and organisations, as well as AI vendors and developers, favour a cautious approach.