Digital healthcare in the United Kingdom (UK) is a broad field encompassing a variety of technologies and services. The main types, and their distinguishing features, are as follows.
Telehealth and Telemedicine
These terms refer to the provision of healthcare services at a distance, using digital information and communication technologies. Telemedicine typically involves clinical services (such as remote consultations, diagnosis and treatment). These services can be delivered via video calls, telephone or secure messaging platforms, and are subject to specific regulatory requirements. For example, practitioners providing these services are regulated by the General Medical Council (GMC), while service providers in England are regulated by the Care Quality Commission (CQC).
Telehealth is a broader umbrella term, encompassing clinical and non-clinical services, which can include education, administrative meetings and training.
Mobile Health (mHealth)
mHealth refers to the use of portable and smart devices such as mobile phones, smartwatches and other wearable technology for the provision of healthcare services. This can also include the use of healthcare-related mobile applications.
mHealth apps serve a variety of purposes and can be broadly categorised based on their primary purpose/aim, as follows.
Chronic care management apps
These include apps for managing blood pressure, cancer care, diabetes care, mental health and other illnesses.
General health and fitness apps
This category represents the majority of mHealth apps, and includes health and well-being apps related to nutrition, health tracking, fitness and weight management, along with wearable technology sensors and other monitors.
Medical apps
These apps are mainly used by healthcare professionals. Examples include medical education/training apps, doctor consultation/appointment apps, and patient management and monitoring apps. Clinical decision support systems, which assist doctors/physicians in diagnosing various health conditions, are also considered medical apps.
Medication management apps
These apps help to keep track of medicine intake to ensure proper dosing at required intervals.
Personal health record apps
These applications allow patients to store medical data, allergy information and other medical information.
Women’s health apps
This category includes apps relating to fertility, pregnancy, breastfeeding, menstruation and other women’s health concerns.
Electronic Patient Records (EPRs)
These are digital systems for healthcare providers to store, manage and share patient health information. EPRs facilitate access to patient data across different healthcare professionals and institutions, and can be integrated with other digital health tools. They are not direct care delivery tools but are essential for the management and continuity of care. Their use is governed by strict data protection and security requirements, such as the UK General Data Protection Regulation (UK GDPR) in the UK and the EU GDPR in the EU and EEA.
Remote Patient Monitoring (RPM)
This involves the use of digital devices and software to monitor patients’ health status, and collect health data, outside traditional clinical settings. Examples include wearable sensors for heart rate, blood glucose monitoring devices, and apps that transmit data to healthcare providers for ongoing assessment. RPM can be integrated with telemedicine platforms and may be regulated as a medical device, depending on its intended use.
Summary
The distinction between the foregoing forms often hinges on their intended use, the nature of the data processed, and whether they are used for medical purposes (diagnosis, prevention, monitoring, treatment or alleviation of disease).
Digital technology is increasingly mainstream in UK healthcare settings, with widespread use of electronic patient records and digital communication methods. The COVID-19 pandemic accelerated the adoption of digital health solutions, making remote consultations and digital triage common practice. From October 2021, all GP practices have been required to offer and promote online consultation tools, video consultation tools, secure electronic communication methods, and online facilities for updating personal or contact information. These must be in place alongside, rather than as a replacement for, other access and communication methods such as telephone and face-to-face contact.
Digital technology is used in healthcare settings in a variety of ways, as follows.
Clinical Decision Support
Digital health apps and software assist healthcare professionals in diagnosing conditions, determining treatment options and managing patient care.
Remote Consultations
Telemedicine enables doctors to consult with patients via video, phone or online platforms, reducing the need for in-person visits. The GMC has published guidance on remote consultations, and the CQC regulates telehealth/telemedicine service providers in England for the regulated activity of providing triage and medical advice remotely.
Chronic Disease Management
mHealth apps help patients and clinicians monitor and manage chronic conditions such as diabetes, hypertension and mental health disorders. These apps can track symptoms and medication adherence, offer access to information or tools, and provide reminders for appointments or medication.
Healthcare Data Collection
The increased use of mHealth apps, including through wearables, has expanded the ability to collect so-called “real-world data”, providing insight into the use and performance of medicines in everyday clinical settings as a complement to clinical trial data.
Patient Engagement
Apps allow patients to access their health records, book appointments, receive reminders and communicate with healthcare providers. Personal health record apps enable patients to store and manage their own medical data.
Remote Monitoring
Wearables and connected devices enable continuous monitoring of vital signs and other health metrics, with data being shared with clinicians for proactive intervention – for example, blood glucose meters, heart rate monitors and sleep trackers.
Women’s Health
Apps for tracking fertility, pregnancy, menstruation and other women’s health concerns allow patients to have more control over their own health, and can help to flag issues that patients should check with their medical practitioners.
Digital healthcare brings several advantages, including the following.
Improved Access and Convenience
Patients can access healthcare services remotely and on demand, reducing travel and waiting times. This is particularly beneficial for those in remote or underserved areas, and helps to relieve pressure on hospital services out of hours.
Enhanced Patient Experience
Digital tools empower patients to manage their health, access information and communicate with providers more easily. Patients can book appointments, access their health records, and receive reminders through apps.
Better Clinical Outcomes
Real-time data from remote monitoring and decision support tools can lead to earlier interventions and more personalised care. The use of real-world data collected through mHealth apps and wearables provides long-term, holistic data on the efficacy and safety of therapeutic interventions in real-world settings, complementing clinical trial data.
Data-Driven Insights
The increased understanding of treatment efficacy and patient outcomes, gained through collection of real-world data, supports decision-making regarding product life cycle benefit-risk evaluation and pharmacovigilance.
Cost Impact
Digital healthcare can improve efficiency and resource allocation, helping to reduce overall healthcare costs. Digital tools can reduce unnecessary hospital visits and admissions by enabling remote monitoring and early intervention.
Support for Healthcare Professionals
Digital health apps, especially those used for clinical decision support, can assist doctors in making more accurate diagnoses and treatment decisions, improving patient safety and outcomes.
Efficiency for Healthcare Professionals
Digital records, automated reminders and clinical decision support streamline workflows and reduce administrative burdens. For example, electronic patient records facilitate the sharing of information among healthcare professionals, improving co-ordination and continuity of care.
There is no single regulatory definition of “digital healthcare” in the UK. While the term is widely used in policy and guidance, associated legal obligations are generally triggered by the function of a product (for example, whether it is a medical device or involves the processing of personal data). However, the term is generally understood as the use of technology (such as apps, programs and software) in healthcare – either standalone or combined with other products such as therapeutics, diagnostics or medical devices. This can also be referred to as eHealth or digital health services.
It is worth noting that two sets of regulations currently coexist in the UK, as a result of the Northern Ireland (NI) Protocol and the Windsor Framework, which maintain alignment with certain EU rules in NI. Following Brexit, Great Britain (England, Wales and Scotland) (GB) is governed by the Medical Devices Regulations 2002 (MDR (GB)), while NI is governed by the EU Medical Device Regulation (2017/745) and the In Vitro Diagnostic Medical Device Regulation (2017/746).
The UK and EU regulatory documents and guidance use terms such as “digital healthcare”, “eHealth” or “digital health services” in line with international understanding, and definitions are often informed by European Commission guidance and international standards (eg, MEDDEV 2.1/6 for software, which was superseded by the Medical Device Coordination Group (MDCG) Guidance 2019-11).
MDCG 2019-11 retained the MEDDEV 2.1/6 guidance’s definition of “software” as “a set of instructions that processes input data and creates output data” and further defined “input data” and “output data”. It introduced a new definition specifically for “medical device software” (MDSW): “medical device software is software that is intended to be used, alone or in combination, for a purpose as specified in the definition of a ‘medical device’ in the Medical Devices Regulation or In Vitro Diagnostic Medical Devices Regulation.”
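To make the qualification concept concrete, the following minimal Python sketch (a hypothetical illustration with illustrative thresholds, not a regulatory example) shows software that, in MDCG terms, “processes input data and creates output data”. Whether such software qualifies as MDSW turns on its intended medical purpose, not on the code itself.

```python
# Hypothetical sketch of the MDCG 2019-11 concepts: software as "a set of
# instructions that processes input data and creates output data".
# Qualification as medical device software depends on the INTENDED PURPOSE
# attached to logic like this, not on the code as such.

def flag_hypertension_risk(readings_mmhg: list[tuple[int, int]]) -> bool:
    """Input data: (systolic, diastolic) blood pressure readings.
    Output data: a flag suggesting possible hypertension.
    """
    avg_sys = sum(s for s, _ in readings_mmhg) / len(readings_mmhg)
    avg_dia = sum(d for _, d in readings_mmhg) / len(readings_mmhg)
    return avg_sys >= 140 or avg_dia >= 90  # illustrative thresholds only

# Marketed to support diagnosis, logic like this could qualify as MDSW;
# merely storing or displaying the same readings generally would not.
print(flag_hypertension_risk([(150, 95), (145, 92)]))  # True
```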
The MDCG Guidance also provided guidance on the qualification and classification of software as a medical device.
The Medicines and Healthcare products Regulatory Agency (MHRA) has developed its own guidance framework, whose substance still broadly aligns with the EU approach. In practice, however, given the MHRA’s signalled intent to pursue innovation-friendly approaches to regulation, MHRA-specific interpretations of the guidance may emerge in the future.
Key laws and regulations applicable to digital healthcare in the UK are set out below. Please note that, as set out in 2.1 Definition Of Digital Healthcare, the applicable regulatory requirements differ between GB and NI.
The Medical Devices Regulations 2002 (MDR (GB))
These regulate medical devices, including software as a medical device (SaMD). The MDR (GB) is based on the previous EU Directives. Following Brexit, the UK government adopted the Medicines and Medical Devices Act 2021, which enables a comprehensive reform of the framework legislation for medical devices and human medicines. The government has set out a roadmap for this reform; the first piece of legislation for the new framework was introduced in 2024 and covers post-market surveillance. The reform is ongoing, and further statutory instruments are expected in the coming months.
The EU Medical Device Regulation (2017/745) (EU MDR) and the In Vitro Diagnostic Medical Device Regulation (2017/746) (IVDR)
The EU MDR and IVDR are still applicable in NI, as a result of the Northern Ireland Protocol and Windsor Framework, as mentioned in 2.1 Definition Of Digital Healthcare. The Medical Devices (In Vitro Diagnostic Devices etc.) Amendment Regulations 2024 also came into force on 21 March 2024 and introduced provisions required for implementing the IVDR in NI.
The Health and Social Care Act 2008 (Regulated Activities) Regulations 2014
These require providers of certain health or social care services in England, including telehealth/telemedicine, to register with the CQC and comply with specified quality standards.
Data Protection Laws
If the personal data of users/patients is processed using digital health software, any such processing must comply with the data protection law in force in GB. For NI, if a business in NI processes data in the context of offering goods or services to individuals in the EU, it may also be subject to EU data privacy legislation.
The UK General Data Protection Regulation (UK GDPR)
The UK GDPR is a retained version of the EU GDPR, with some UK-specific amendments. It governs the processing of personal data, including health data, and imposes requirements for lawfulness, fairness, transparency and security.
The Data Protection Act 2018 (DPA)
This supplements the UK GDPR and sets out additional requirements for the processing of special category data, including health data (Article 9 UK GDPR).
The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR)
These impose specific requirements in the context of marketing, cookies, keeping communications secure and customer privacy. The PECR are derived from EU law, specifically Directive 2002/58/EC, known as the “e-privacy Directive”. The e-privacy Directive is currently under review in the EU and is expected to be replaced by a Regulation; any such Regulation will not form part of UK law, but it would be applicable in NI owing to the unique legal framework that applies there, as previously mentioned.
Cybersecurity Requirements and Data Protection Compliance
Digital health products must comply with applicable cybersecurity and data protection requirements, including the Product Security and Telecommunications Infrastructure Act 2022 (GB), the Network and Information Systems (NIS) Regulations 2018 (UK-wide), and security obligations under the UK GDPR and the Data Protection Act 2018. In NI, businesses should also consider the potential application of EU legislation, such as the Network and Information Systems Directive 2, due to the Protocol, and should ensure compliance with any local or EU-derived requirements.
Consumer Protection Laws
Most UK consumer protection and product safety laws apply across both GB and NI. However, where these laws derive from EU legislation, NI may continue to follow EU amendments, while GB may diverge over time.
The Consumer Protection Act 1987 (CPA) (England/Wales/Scotland)/Consumer Protection Order (CPO) (NI)
Digital health software will generally constitute a “product” under the CPA/CPO 1987 where it is supplied as a distinct commercial offering. Courts have established that software can be subject to product liability principles, though the specific application depends on the nature of the software and how defects arise. It should be noted that the new EU Product Liability Directive 2024/2853 will not be implemented in GB but will be relevant for NI.
The Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 and the Consumer Rights Directive (2011/83/EC)
The 2013 Regulations implement most of the Consumer Rights Directive, which applies when a person purchases an app relating to lifestyle or well-being.
The General Product Safety Regulations 2005/General Product Safety Regulation (EU)
Digital health products that are not medical devices are still subject to general product safety obligations requiring producers to place only safe products on the market. The applicable framework in GB is the General Product Safety Regulations 2005 (SI 2005/1803) and, in NI, the EU General Product Safety Regulation 2023/988.
The E-Commerce Regulations, Amended by the Electronic Commerce (Amendment etc.) (EU Exit) Regulations 2019
This is retained EU law applying across all four nations of the UK (England, Wales, Scotland and NI). Following amendment, a UK-established e-commerce operator can no longer rely on the “country of origin” principle, under which an information society service provider needed only to comply with the laws of the country in which it was based. Instead, it must comply with the specific requirements of each jurisdiction in which it is active.
Health Service Digital Standards
Digital health suppliers seeking to work with UK health services must comply with jurisdiction-specific standards. In England, this includes the NHS Data Security and Protection Toolkit (DSPT) and Clinical Safety Standards (DCB 0129/0160); suppliers may need to meet NHS Digital Technology Assessment Criteria. Scotland, Wales and NI maintain separate health service digital standards and procurement requirements.
NHS Rules and Professional Standards
The GMC sets standards for medical practitioners, including those providing digital health services. The GMC, the royal colleges and the professional associations continue to develop and publish standards of practice, procedures and protocols covering the use of telemedicine.
The GMC has published guidance on remote consultations. Briefly, the doctor needs to consider whether a face-to-face consultation is necessary, or whether remote treatment may be appropriate. If appropriate, the doctor should then obtain the patient’s consent for this method of provision of medical services. If the doctor is not the patient’s usual doctor, they must ask the patient for consent to obtain information and medical history from the patient’s GP and to send details of any treatment the doctor has arranged.
Remote consultations via telehealth can take place where the criteria set out in the GMC guidance are met. Where those criteria are not met, or where other circumstances make remote care inappropriate, the consultation must take place in person.
From October 2021, all GP practices have been required to “offer and promote” to their patients (and those acting on their behalf) online consultation tools, video consultation tools, secure electronic communication methods, and online facilities for updating personal or contact information.
These requirements are all subject to existing safeguards for vulnerable groups and third-party confidentiality. They are to be in place alongside, rather than as a replacement for, other access and communication methods – for example, telephone and face-to-face contact.
Further guidance regarding directly bookable appointments was introduced in October 2022. This sets out requirements for online appointment booking following changes to General Medical Services (GMS), Personal Medical Services (PMS) and Alternative Provider Medical Services (APMS) contractual arrangements that came into effect in England from October 2022. Practices must now ensure that all of their “directly bookable” appointments are made available online, as well as by phone or in person.
GMC professional standards apply regardless of whether the digital health service is provided by UK-based or foreign practitioners. Any doctor providing services to UK patients must comply with them.
Policymakers play a key role in the regulation and oversight of digital healthcare in the UK, including in the following ways.
Updating Regulations
The government, and the MHRA in particular, regularly review and update medical device regulations to keep pace with technological advances. For example, the new GB Medical Devices Regulations will introduce changes specific to software as a medical device (SaMD), including definitions, classification rules and cybersecurity requirements.
Consultation and Guidance
The MHRA also consults with stakeholders and issues guidance on new technologies when these emerge. For example, between September and November 2021, the MHRA consulted on proposed changes to the regulatory framework for medical devices in GB. The government then published a response to the 2021 consultation, which outlined proposed changes to address emerging technologies and next steps for the MHRA in order to implement a transformed regulatory framework.
Oversight and Enforcement
The MHRA is the regulatory agency with statutory powers to regulate medical devices, including SaMDs, and to enforce applicable legislation. Applicable guidance, while not legally binding, is authoritative in how the applicable requirements should be interpreted. Other regulatory bodies such as the CQC, GMC and Information Commissioner’s Office (ICO) oversee compliance and enforce standards. The CQC regulates telehealth/telemedicine service providers in England, the GMC regulates individual medical practitioners in the UK, and the ICO enforces data protection laws.
Strategic Initiatives
The Office for Digital Health was launched with a key strategic aim of improving digital health approval pathways and reimbursement policy relating to telemedicine and healthcare mobile apps. Policymakers are also involved in developing standards and guidance for the use of telemedicine and digital health tools.
International Alignment
The UK continues to monitor and, where appropriate, align with international standards and definitions, such as those developed by the European Commission and the International Medical Devices Regulators Forum (IMDRF).
GB
Essential requirements
Under the Medical Devices Regulations 2002 (as amended), medical device software must meet essential requirements for safety and performance. These cover software life cycle processes, risk management and clinical evaluation requirements.
Conformity assessment
Medical devices undergo conformity assessment procedures appropriate to their classification, which may involve UK Approved Bodies (formerly Notified Bodies). Assessment includes technical documentation review, quality management systems and clinical evaluation.
Cybersecurity standards
Medical device software must incorporate appropriate cybersecurity measures to ensure device security and data integrity, as part of the overall essential requirements. The MHRA’s proposed new regulations will introduce minimum cybersecurity standards for SaMD manufacturers, aimed at protecting against unauthorised access and ensuring the integrity and security of health data.
NI
General Safety and Performance Requirements (GSPRs)
Under the EU MDR, medical device software must meet GSPRs as set out in Annex I, including specific requirements for software used with mobile platforms and consideration of environmental factors.
Conformity assessment
Devices follow EU MDR conformity assessment procedures detailed in Annexes IX to XI, based on device classification and involving EU Notified Bodies.
Cybersecurity requirements
The EU MDR includes cybersecurity considerations as part of GSPRs, requiring manufacturers to implement appropriate measures to ensure software integrity and protection against cybersecurity threats.
Supporting Technical Standards
Interoperability and data security
Across GB and NI, standards ensure secure data exchange and system compatibility. Personal data protection must comply with applicable data protection laws.
International standards
Both jurisdictions reference international standards such as ISO 14155 (clinical investigations), ISO 13485 (quality management) and IEC 62304 (medical device software life cycle).
Software as a Medical Device (SaMD)
GB
This is regulated under the Medical Devices Regulations 2002 (as amended) and the Medicines and Medical Devices Act 2021. Software with a medical purpose must comply with essential requirements, post-market surveillance requirements, and emerging cybersecurity standards.
NI
This is regulated under the EU MDR. Software must meet General Safety and Performance Requirements (GSPRs) and follow EU conformity assessment procedures.
Both GB and NI
The MHRA and relevant authorities provide guidance on software qualification and classification. Post-market incident reporting and vigilance requirements apply in both jurisdictions (though different frameworks apply).
Selfcare, Wellness and Fitness IT Products (IoT, Wearables)
If classified as medical devices, these products follow the respective medical device regulations (GB or NI frameworks). Non-medical device products remain subject to general product safety laws including the General Product Safety Regulations 2005 (GB) and applicable consumer protection legislation.
Classification guidance helps determine when software and apps constitute medical devices. For example, general fitness tracking typically falls outside medical device regulation, while diagnostic applications would be regulated as medical devices.
Cybersecurity and Data Protection
If the personal data of users/patients is processed using digital health software, such processing must comply with the data protection laws in force in the UK, in particular the UK GDPR, the DPA 2018 and the PECR.
The UK GDPR generally governs the processing of personal data and requires that any processing undertaken be done (among other things) lawfully, fairly and in a transparent manner. (See in particular Articles 5(1)(a), 6, 13 and 14 UK GDPR.) The UK GDPR also imposes further conditions on the processing of “special category data”, including health data. (See Article 9 UK GDPR.)
The DPA is a national law which supplements the UK GDPR, and (among other things) sets out additional requirements for the processing of special category data. The PECR sit alongside the DPA and UK GDPR and impose specific requirements in the context of marketing, cookies, keeping communications secure and customer privacy.
Enhanced cybersecurity requirements for medical device software are under consideration as part of ongoing regulatory reforms in GB.
Data protection laws apply to the processing of personal data, including health data, and impose requirements for lawfulness, transparency, security and data subject rights.
Artificial Intelligence (AI) and Machine Learning (ML)
No dedicated AI legislation
The UK does not currently have a standalone legal framework specifically for AI or ML. Instead, existing sectoral regulations apply. In digital health, this means that AI/ML-based software is regulated as a medical device if it meets the definition under the UK Medical Devices Regulations 2002 (as amended). This requires compliance with safety, performance and technical documentation standards, including those specific to the use of AI/ML.
Principles-based, pro-innovation strategy
The UK government has articulated a “pro-innovation” approach to AI regulation. In July 2022, the UK published its policy paper, “Establishing a pro-innovation approach to regulating AI”, which sets out five cross-sectoral principles for regulators to apply. These principles are intended to guide existing regulators (such as the MHRA, the ICO and others) in their oversight of AI, rather than creating a new, centralised AI regulator. The five principles are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Sector-specific guidance
In digital health, the MHRA has issued guidance on software and AI as a medical device (SaMD and AIaMD), including requirements for clinical evaluation, risk management and post-market surveillance. The National Institute for Health and Care Excellence (NICE) and NHSX have also published guidance on the evaluation and deployment of AI in healthcare.
Ongoing reform and consultation
The UK government is actively consulting on the future of AI regulation. In February 2024, the government published its response to the AI Regulation White Paper consultation, confirming its intention to continue with a flexible, context-specific approach, rather than adopting a comprehensive, prescriptive regime like the EU Artificial Intelligence Act (the “EU AI Act”).
International alignment and divergence
While the UK is monitoring international developments, including the EU AI Act, it has signalled that it will not simply replicate the EU’s approach. The UK aims to balance innovation with safety and public trust, and to remain agile in response to technological advances.
NI
Owing to the Windsor Framework and the Northern Ireland Protocol, certain EU product regulations – including the EU AI Act – apply directly to relevant goods placed on the market in NI. This means that AI/ML-based medical devices and other regulated AI products must comply with both UK and EU requirements if they are to be marketed in NI. Businesses operating in or supplying to NI should therefore be aware of both UK and EU regulatory obligations. In addition, if they develop, supply or deploy AI systems to the EU market, or provide AI-related services to EU-based customers, they must ensure compliance with the EU AI Act to maintain market access.
Environmental, Social and Governance (ESG)
Digital healthcare companies must comply with relevant ESG-related UK legislation, including the Equality Act 2010 and Modern Slavery Act 2015. When operating in EU markets, companies may also need to comply with EU ESG regulations with extraterritorial effect, such as sustainability reporting requirements.
The ESG regulatory landscape continues to evolve, with increasing focus on corporate transparency and due diligence obligations.
Telehealth
Telehealth is regulated through health and social care laws (eg, CQC registration for remote medical advice), professional standards (GMC Good Medical Practice) and data protection laws. The CQC registers telehealth/telemedicine service providers in England for the regulated activity of providing triage and medical advice remotely when certain criteria are met.
Additionally, GP practices in England are required to offer online consultation tools, video consultation capabilities and secure electronic communication methods alongside traditional access methods.
The current legislative framework in the UK provides a comprehensive basis for regulating digital healthcare, particularly in relation to medical devices, data protection and consumer safety. Overall, while the framework is robust, there is recognition that further updates and clarifications are needed to keep pace with technological advances and ensure proper regulation. Some of the issues to be aware of, which the government and regulators are actively working to address, are set out below.
Ongoing Reforms
The regulatory landscape is evolving, with new regulations planned to address emerging technologies and clarify areas such as software as a medical device (SaMD) and cybersecurity. The MHRA is introducing changes to the MDR (GB) to address gaps and uncertainties, including regarding the regulation of SaMD.
Gaps in Current Legislation
Some areas, such as the regulation of AI/ML-enabled medical devices and cross-border digital health services, are not yet fully addressed and may require further development. For example, the current medical devices legislation only regulates medical devices that are placed on the market or made available in the GB market. Therefore, a service provided from outside the GB market is arguably not regulated by current medical devices legislation even where it has a medical purpose. However, providing a service with a medical purpose from outside the GB market, without complying with medical devices legislation, is not without risk. This is a regulatory grey area that is being considered by the MHRA, with possible changes to the definition of “placing on the market” to clarify the requirements that apply when software is provided online to the GB market.
Even for areas that are regulated, there are often grey areas or uncertainty, given the pace at which technology progresses and new products emerge. For example, the classification and regulation of digital health products, especially software, often requires case-by-case analysis due to the complexity and rapid evolution of the technology.
International Considerations
Post-Brexit, there are differences between the GB and EU regulatory regimes (with the EU regulatory regimes applying in NI), particularly regarding CE marking and UKCA marking. GB continues to monitor and, where appropriate, align with international standards and definitions.
In the UK, oversight of digital healthcare is shared among several bodies, each with distinct remits.
The Medicines and Healthcare products Regulatory Agency (MHRA)
The MHRA is responsible for regulating medical devices, including digital health apps and software that qualify as medical devices, in GB (England, Wales and Scotland). Its remit includes ensuring that such products meet safety, quality and performance standards, overseeing conformity assessments, post-market surveillance and vigilance systems. The MHRA also regulates AI tools that diagnose, prevent or treat diseases. In NI, the MHRA also has limited regulatory functions, as medical devices are primarily regulated under the EU MDR framework through Protocol arrangements. It is also worth noting that the MHRA has a role in regulating advertising of medicines and medical devices, notably as regards claims made to the general public.
The General Medical Council (GMC)
The GMC regulates individual medical practitioners, ensuring that doctors (including those utilising digital health and telemedicine) are appropriately qualified, fit to practise, and adhere to professional standards such as “Good Medical Practice”.
The Care Quality Commission (CQC)
In England, the CQC registers and inspects healthcare and social care providers, including providers of telehealth/telemedicine services, when they provide regulated activities such as remote triage and medical advice. The CQC has powers to grant or withdraw registration and to inspect services, and can enforce conditions or sanctions.
Healthcare Inspectorate Wales (HIW)
HIW is the independent inspectorate and regulator of healthcare in Wales, including oversight of independent medical agencies providing digital health services.
Healthcare Improvement Scotland (HIS)
HIS regulates and inspects health and social care facilities in Scotland, including independent healthcare services that may involve digital healthcare provision.
The Regulation and Quality Improvement Authority (RQIA)
The RQIA is responsible for inspecting registered health and social care services in NI, including independent medical agencies providing digital health services.
Other Agencies
Agencies such as NICE do not “regulate” like the above bodies, but they provide evidence standards (eg, NICE’s digital health framework) and NHS bodies develop digital policies.
Certain aspects of digital healthcare fall within the remit of non-healthcare regulatory bodies, primarily due to the cross-sectoral nature of digital health technologies:
The Information Commissioner’s Office (ICO)
The ICO regulates data protection and privacy, which is highly relevant to digital health apps and telemedicine services that process personal and health data. The UK GDPR, Data Protection Act 2018, and the Privacy and Electronic Communications Regulations (PECR) all apply. Personal health data is “special category” data under the UK GDPR/DPA 2018, so the ICO enforces strict privacy and security requirements.
Advertising Standards (ASA/CAP Code)
The UK Advertising Standards Authority (ASA) enforces the CAP Code for all marketing. Health and medical claims made by digital health apps, devices or services must be truthful, evidence-based and approved. Advertisements for unlicensed medicines or treatments to consumers are prohibited, whether online or offline.
Competition and Consumer Regulation
The Competition and Markets Authority (CMA) and local Trading Standards can apply general competition and consumer law to digital health. For example, the unfair commercial practices regime (restated and updated by the Digital Markets, Competition and Consumers Act 2024) underpins the consumer protection rules enforced by the ASA. Digital health companies must also comply with consumer legislation (eg, the Consumer Rights Act 2015) when contracting with patients or other consumers.
General Pharmaceutical Council (GPhC)
For digital health services involving the provision of pharmacy services or remote prescribing, the GPhC regulates pharmacy owners and sets standards for distance-selling pharmacies.
The foregoing bodies are involved because digital healthcare often involves the processing of sensitive data, consumer transactions, and the provision of regulated products and services outside traditional healthcare settings.
Laws and regulations in digital healthcare are enforced through a combination of pre-market controls, post-market surveillance and direct enforcement actions.
Medical Devices
The MHRA is the regulatory agency with statutory powers to regulate medical devices, including software as a medical device (SaMD), and to enforce applicable legislation. Enforcement measures range from informal or formal compliance requests to more stringent administrative measures (such as issuing product recalls, or restricting or prohibiting the placing or making available of devices on the market). The MHRA also has the power to issue financial penalties. The MHRA’s approach is generally to apply a proportionate response based on the risk to public health: it tends to favour a collaborative approach for technical violations, escalating if safety risks are identified or in cases of non-cooperation. Criminal prosecution is possible in cases of deliberate violations or serious safety risks.
Data Protection
The ICO has significant enforcement powers, including issuing information and enforcement notices, conducting assessments and imposing fines of up to GBP17.5 million or 4% of annual global turnover (whichever is higher) for breaches of the UK GDPR. For breaches of the PECR, fines can reach up to GBP500,000.
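As a simple arithmetic illustration of how that UK GDPR maximum operates (the statutory cap is the higher of the two figures), consider the following hypothetical Python sketch:

```python
# Illustrative only: the UK GDPR higher maximum fine is the greater of
# GBP17.5 million or 4% of annual worldwide turnover.

def max_uk_gdpr_fine(annual_global_turnover_gbp: float) -> float:
    return max(17_500_000, 0.04 * annual_global_turnover_gbp)

print(max_uk_gdpr_fine(100_000_000))    # 17500000.0 (the GBP17.5m floor applies)
print(max_uk_gdpr_fine(1_000_000_000))  # 40000000.0 (4% of turnover applies)
```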
CQC
The CQC can use both civil and criminal powers to enforce the fundamental standards of care. For example, the CQC can prosecute providers for breaches of Regulation 12 (safe care) under the HSCA 2008 – a criminal offence if patients suffer avoidable harm. Where standards fall short but are not criminal, the CQC may impose civil remedies: issuing warning notices, imposing additional conditions on a registration, or suspending or even cancelling a provider’s registration.
Since 2020 the CQC can also levy fixed penalties (fines) for certain breaches. Notably, the government has urged the CQC to be “tough” on online services – for example, a 2017 statement praised a “tough and comprehensive inspection regime” to uncover failings in digital care and protect patients. In practice, the CQC has inspected and sanctioned several online GP services and pharmacies for safety violations, with some providers being placed in special measures or closed.
ASA (Advertising)
The ASA enforces through its Compliance and Investigations Committees. It can quickly remove or ban ads (especially online) that breach the CAP Code. In 2025, it partnered with the MHRA to issue formal enforcement notices targeting illegal online adverts for prescription medicines.
Stricter Enforcement
Areas involving patient safety, the processing of special category data (such as health data), and the use of digital health apps as medical devices are subject to particularly strict enforcement due to the potential for significant harm.
The current regulatory framework is comprehensive but continues to evolve in response to the rapid development of digital health technologies. There is recognition that further enhancements and powers may be needed. Gaps have been identified, and proposals include updating device regulations for software and AI, and reforming data protection law in this area.
Industry groups have urged a more agile, risk-based approach: the Association of British HealthTech Industries’ (ABHI’s) 2024 digital health White Paper recommends shifting to a classification system tailored for software/AI devices, streamlining data governance and clarifying liability for digital products.
As regards patient safety, the CQC has itself indicated a need for expanded powers. Currently, it cannot publish a separate rating for “digital-only” providers, although it expects new legislation to grant this in future.
The Regulatory Horizons Council’s 2022 report on AI in healthcare – and the UK government’s March 2025 response accepting either fully, or at least in principle, all of its 15 recommendations – signals plans to boost regulators’ capacity, introduce life cycle monitoring for AI devices, increase transparency and patient involvement, and encourage UK leadership on safe AI.
The main legal risks and drawbacks associated with digital healthcare include the following.
Non-Compliance With Regulations
Failure to comply with medical device regulations, data protection laws or consumer protection requirements can result in regulatory action by the MHRA and others, including fines, product recalls or criminal prosecution. Likewise, operating a telemedicine service without proper registration, or without meeting the required standards, risks CQC action.
Enforcement by Regulatory Authorities
Regulatory authorities have significant powers to enforce compliance, including the ability to prevent products from being marketed, require corrective actions and impose financial penalties. Failing to comply with data protection rules can result in severe penalties from the ICO. Similarly, providers must also comply with advertising and consumer laws – for example, making unsubstantiated medical claims about a health app could violate the CAP Code and result in ASA sanctions.
Liability
Digital health software will generally constitute a “product” under the CPA/CPO 1987 (for defective products) where it is supplied as a distinct commercial offering, though specific application depends on the nature of the software and how defects arise. EU law – specifically the new Product Liability Directive – could change the situation in NI compared to GB. If the Directive is deemed to apply in NI under the Northern Ireland Protocol, there could be a divergence in product liability regimes in NI in the future, including for digital health applications and software.
Legal exposures are addressed through a combination of statutory and common law mechanisms.
Statutory Product Liability
Many liabilities are covered by statute. For defective devices, the Consumer Protection Act 1987 implements the original EU Product Liability Directive, imposing strict liability on producers. The MDR (GB) (via the Medicines and Medical Devices Act 2021 (MMD Act)) creates offences for marketing non-compliant devices (enforced by the MHRA) and enables notice powers. In NI, the MHRA investigates similar offences under the EU MDR and has the same enforcement powers as those included in the MMD Act 2021.
The DPA 2018 (and the UK GDPR) provides regulatory penalties and a right of action for data breaches. The Health and Social Care Act 2008 (Regulated Activities) Regulations 2014 include criminal offences (eg, Regulation 12 on safety) enforceable by the CQC. The Pharmacy Order 2010 and the Medicines Act 1968 constrain online prescribing and advertising. Consumer protection laws (eg, the Consumer Rights Act 2015, including its provisions on unfair terms and digital content) give statutory rights to individuals using digital health products.
Negligence
Common law tort principles apply where harm results from a breach of duty of care, including by healthcare professionals or software developers.
Contract Law
Where a direct contract exists with the user (eg, patient’s subscription terms, or a healthcare organisation’s agreement with an IT vendor), liability may arise for misrepresentation or breach of express or implied terms.
Regulatory Sanctions
Breaches of medical device or data protection regulations can result in regulatory enforcement, including fines and criminal penalties.
There is no absolute immunity from the liability discussed previously, but several steps can be taken to mitigate potential exposure. Defences and mechanisms to mitigate liability include the following.
Compliance With Regulatory Standards
Demonstrating adherence to applicable regulatory requirements and standards can provide a defence or mitigate liability. For example, using a UKCA/CE-marked medical device registered with the MHRA, and following all MHRA guidance, strengthens the defence in a product liability or negligence case.
Statutory Defences Under the CPA
These include that the defect did not exist in the product at the time it was supplied; that the state of scientific and technical knowledge at the time was not such that a producer might be expected to have discovered the defect (the “development risks” defence); and that the defect is attributable to compliance with a legal requirement.
Contribution Claims
If a healthcare professional is found liable due to reliance on defective software, they may seek contribution from the software producer.
Contractual Limitations
Where permitted, contractual terms may limit or exclude certain liabilities, subject to statutory controls.
Data Safeguards
Risk can be mitigated by implementing strong data protection measures (encryption, access controls, privacy impact assessments, multi-factor authentication). Organisations should be proactive in ensuring that they are compliant with the UK GDPR and the DPA 2018. They should regularly scan their systems for vulnerabilities and keep them up to date.
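By way of illustration, the sketch below shows one such safeguard – encrypting a health record at rest using the open-source cryptography library’s Fernet recipe. This is a minimal, hypothetical example; real deployments would also need secure key management, access controls and audit logging.

```python
# Minimal sketch of encryption at rest using the "cryptography" library
# (pip install cryptography). Illustrative only: production systems also
# require secure key management, access controls and audit logging.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, store in a key vault/HSM
fernet = Fernet(key)

record = b'{"patient_id": "demo-123", "bp": "150/95"}'
token = fernet.encrypt(record)   # ciphertext is safe to persist to disk/DB

assert fernet.decrypt(token) == record  # round-trips to the original record
```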
Insurance
Carrying professional indemnity and cyber liability insurance can mitigate the effects of a potential sanction or fine.
Aligning With Professional and Technical Standards
Providers often follow voluntary or industry best-practice standards (eg, NICE evidence standards for digital tools, NHS England’s digital frameworks).
There have been several recent developments in digital healthcare regulation, particularly in relation to AI. Some recent developments/trends include the following.
There has been increasing focus on the regulation of software as a medical device (SaMD), including clarification of definitions, risk classification and post-market obligations. The MHRA is updating medical device regulation, particularly around software and AI, aiming for a clearer, more agile system. There is increased focus on AI risk management, especially for adaptive or autonomous AI used in diagnostics and treatment.
In March 2025, the UK government published its response to the Regulatory Horizons Council’s report on AI as a medical device. It accepted all of the Council’s recommendations, which include boosting regulators’ capacity, introducing life cycle monitoring for AI devices, increasing transparency and patient involvement, and encouraging UK leadership on safe AI.
Enhanced data protection requirements have also emerged, particularly regarding the processing of special category data and the use of automated decision-making and profiling in digital health apps. The Data (Use and Access) Bill, introduced in 2024, will modernise and update data laws in the UK, and proposals include reducing compliance burdens for low-risk data processing and establishing a government-backed digital identity trust framework.
Cybersecurity requirements for digital health products are becoming more prominent, with proposed minimum standards for SaMD.
Significant reforms are underway, as follows.
New GB Medical Devices Regulations
As mentioned previously, core aspects of the new GB Medical Devices Regulations are expected to apply from 1 July 2025. Among other things, these reforms will introduce changes specific to SaMD, including updated definitions, classification rules and minimum cybersecurity requirements.
The MHRA has begun issuing secondary legislation and guidance to refine the above framework. For example, in late 2024 the MHRA launched an “Emerging Roadmap” including a statutory instrument to clarify vigilance and reporting requirements for devices on the market. It also announced plans (consultation in November 2024) to introduce “abridged approvals” for devices already approved by major foreign regulators (FDA, TGA, etc).
Extension of CE Mark Acceptance
The UK government has extended the acceptance of CE-marked medical devices on the GB market until 30 June 2028 (or 2030 for in vitro diagnostics), providing a transitional period for compliance with new UK requirements.
Ongoing Data Protection Reform
The UK government is considering changes to data protection law, though the extent of these changes is not yet finalised. The Data (Use and Access) Bill is expected to pass in 2025, one central aim of which is to reduce the compliance burden for organisations in the UK.
Overall, the regulatory landscape for digital healthcare in the UK is evolving, with a clear policy focus on patient safety, data protection and the effective regulation of rapidly advancing digital health technologies.
AI and Real Legal Risk: A Strategic Rethink for Life Sciences
Introduction: the double-edged sword of AI in life sciences
The life sciences sector stands at a technological crossroads. AI promises to revolutionise everything from drug discovery to clinical decision support, potentially saving countless lives through faster innovation and more personalised treatments. Yet alongside these opportunities lurks a complex web of legal risks that even seasoned legal professionals struggle to fully grasp.
This evolution could not come at a more challenging time. As regulatory frameworks race to catch up with technological advancement, life sciences companies find themselves caught between innovation imperatives and compliance obligations. The consequences of missteps can be severe – from regulatory enforcement actions to product liability claims and reputational damage.
This piece offers a provocative rethink of how in-house legal professionals should approach AI-related legal risks in life sciences companies. Rather than providing another framework for managing familiar risks, the article explores how AI’s unique characteristics challenge traditional approaches for legal risk assessment and considers what this means for how senior counsel can evolve their role in helping their organisations navigate this transformed risk landscape.
The Evolving Risk Landscape: Beyond Traditional Concerns
Traditional legal risks in life sciences are well-mapped territory – data protection, product liability, third-party contractor management, regulatory compliance and intellectual property protection have long been core concerns. However, AI integration in business operations has the potential to fundamentally transform these familiar challenges in ways that require fresh thinking. This section examines examples that illustrate how AI is impacting these previously well-trodden territories by opening up new areas of potential legal risks.
Data protection: from compliance exercise to strategic imperative
AI systems thrive on data, creating significant tensions with core data protection principles. When an AI system is designed to identify novel patterns, defining the “purpose” of processing becomes increasingly challenging under traditional purpose limitation requirements. This is compounded by the fundamental contradiction between data minimisation principles and AI’s inherent need for large datasets to improve performance and accuracy.
Transparency requirements present additional hurdles, as explaining complex AI processing methodologies to patients in understandable consent language can become significantly more challenging. Many organisations struggle to craft clear, comprehensible explanations of how AI systems process personal data – while still meeting regulatory requirements for informed consent.
Further complicating matters is the cross-border nature of many AI implementations. Training data frequently flows across jurisdictions with inconsistent privacy regulations, creating compliance gaps and increased enforcement risks that organisations must carefully navigate.
Product liability: what happens when algorithms make mistakes?
The introduction of AI into healthcare products creates novel liability questions that traditional frameworks struggle to address.
From the outset, where an AI system is implicated in a product liability issue, assessing causation and determining the responsible party can become significantly more complex. For example, when an AI-powered diagnostic medical device misses a critical finding that leads to a patient injury, determining liability between the healthcare provider, device manufacturer, algorithm developer and training data provider requires new legal frameworks and approaches.
The use of AI also presents another challenge in the context of establishing the standard of care. When the care delivered involves AI-powered technology that is constantly evolving through machine-learning (ML) capabilities, how does one determine the appropriate standard of care? As these systems continue to develop and change, the benchmark against which reasonable care is measured becomes a moving target.
Questions of transparency and explainability may further complicate liability determinations. Courts and regulators will need to establish what level of transparency and explainability is required to demonstrate that reasonable care was taken during development. Additionally, managing discovery and disclosure of algorithmic processes during litigation introduces procedural challenges not present in traditional product liability cases.
Regulatory compliance: navigating an evolving patchwork
Life sciences companies now face a complex regulatory ecosystem that spans both established frameworks and emerging AI-specific regimes. This landscape includes the MHRA’s evolving work on software as a medical device guidance, the EU’s new AI Act with its risk-based approach, and the FDA’s proposed framework for AI/ML-enabled medical devices. Simultaneously, existing regulations such as the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR) are being interpreted and applied in AI contexts, creating additional layers of complexity.
Perhaps the most vexing challenge for multinational life sciences companies is navigating this global patchwork of AI regulation, where fundamental inconsistencies can result in significant – and costly – compliance hurdles. Different jurisdictions employ varying terminology and definitions – the EU has established specific high-risk categories, while the UK and USA take different approaches to classification. The very definition of what constitutes “AI” versus conventional software varies across regulatory regimes, creating fundamental challenges in determining which rules apply to specific products. This can also complicate the design and implementation of internal processes across different markets, processes that will themselves be subject to regulatory audits.
Requirements for human oversight and intervention also differ substantially between jurisdictions, forcing companies to implement different governance models depending on the market.
Beyond these definitional differences, procedural requirements diverge significantly across borders. Documentation standards, evidence requirements, testing methodologies, monitoring obligations, and expectations for algorithmic transparency all vary widely between regulatory frameworks.
These sometimes overlapping or even contradictory frameworks create substantial compliance challenges, particularly for products deployed across multiple markets. Companies must develop sophisticated regulatory intelligence capabilities to track evolving requirements and implement adaptable compliance strategies.
The third-party AI supplier conundrum
Building in-house AI capabilities requires significant investment of time and resources, which may be beyond the reach of many companies. Using external providers of AI systems may be a cost-effective solution, but it introduces its own set of challenges – and legal risks.
Due diligence: looking beyond traditional vendor assessment
When evaluating AI suppliers, companies must look beyond traditional vendor criteria. This extended due diligence should thoroughly assess the provenance and quality of the training data used to develop the AI system. Organisations need detailed documentation of development and validation processes to confirm regulatory compliance and the scientific validity of the underlying datasets. Equally important is understanding the supplier’s approach to ongoing monitoring and performance evaluation, as AI systems may evolve or degrade over time. Transparency about known limitations and edge cases is also crucial, as these boundaries often define the risk profile of the technology.
Built-in contractual protections: new clauses for new risks
Standard vendor agreements rarely address AI-specific concerns, necessitating new contractual approaches. Critical elements include clear delineation of responsibilities for ongoing performance monitoring throughout the AI system life cycle. Contracts should establish robust access rights to validation data and performance metrics to enable proper oversight. Explainability requirements and documentation standards need explicit definition to ensure regulatory compliance and defend against potential liability. Forward-looking provisions addressing regulatory changes and compliance updates help manage evolving requirements. Perhaps most importantly, contracts must include thoughtful liability allocation for AI-specific scenarios such as algorithmic drift or dataset biases that traditional agreements rarely contemplate.
Compliance transfer risk
The EU AI Act, like the General Data Protection Regulation (GDPR) before it, adopts an extraterritorial approach. This means that life sciences companies can face compliance obligations even when using AI suppliers based outside regulated territories. This creates a “compliance transfer risk” where an organisation becomes responsible for ensuring that the AI system meets regulatory requirements, even with limited visibility into the supplier’s compliance posture. Enforcement actions could target the organisation directly, even if the non-compliant elements were developed entirely by a third party.
Intellectual property (IP) minefields
Third-party AI solutions create several IP-related risks that life sciences companies often overlook.
Training data contamination: even sophisticated suppliers may have trained their systems on datasets with unclear ownership or usage rights – including copyrighted materials, patented methods or confidential information used without proper authorisation – potentially transferring infringement risk to the organisation.
Uncertain output ownership: contracts frequently lack clarity about who owns the insights, innovations or other outputs generated when proprietary data is processed through a third-party AI system.
Patent infringement: the AI methods themselves may incorporate patented algorithms or techniques, creating potential liability that could extend to the organisation’s products.
Competitive intelligence leakage: using third-party AI for proprietary research creates a risk that an organisation’s valuable research directions are incorporated into the AI system and exposed to competitors using the same service.
These concerns are particularly significant in life sciences, where patent landscapes are complex and IP often represents the core value of an organisation.
Why Traditional Risk Management Falls Short: The AI Blind Spot
The examples above illustrate a fundamental problem: AI does not just create new risks – it breaks the traditional risk categories that legal departments have relied on for decades. When an AI system causes harm, is it a product defect, a service failure, a data breach or regulatory non-compliance? Often, it is all of these simultaneously, creating cascading exposures that traditional risk frameworks fail to capture.
This creates a dangerous blind spot for in-house legal teams. The risk assessment methodologies that have served the life sciences industry well are suddenly inadequate when applied to AI-enabled operations. Traditional risk registers, which typically categorise risks as discrete, manageable issues, struggle to capture the interconnected, evolving nature of AI-related exposure.
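To see why discrete register entries can understate the exposure, consider a toy model in Python; the risk categories and the links between them are invented for illustration, not a proposed taxonomy. A flat register records each risk with its own mitigation, while a graph of linked risks shows how a single AI event can propagate across nominally separate entries.

# Toy model contrasting a flat risk register with interconnected AI exposure.
# Categories, mitigations and links are illustrative assumptions only.
from collections import deque

# Traditional register: discrete, independently "managed" entries.
flat_register = {
    "product liability": "covered by insurance and quality systems",
    "data protection": "managed via privacy compliance programme",
    "regulatory": "tracked market by market",
    "IP": "handled by patent counsel",
}

# AI exposure: a single event can cascade through linked categories.
risk_links = {
    "algorithmic drift": ["product liability", "regulatory"],
    "product liability": ["regulatory"],
    "regulatory": ["data protection"],
    "data protection": ["IP"],
    "IP": [],
}

def cascade(trigger: str) -> set[str]:
    # Breadth-first walk: every category reachable from one triggering event.
    seen, queue = set(), deque([trigger])
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(risk_links.get(node, []))
    return seen

print(f"{len(flat_register)} discrete register entries")
print(cascade("algorithmic drift"))  # one technical trigger reaches them all

In the flat model each entry looks independently managed; in the linked model, one technical trigger – algorithmic drift – reaches every other category, which is precisely the interconnection a conventional register fails to record.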
Consider the implications: when a company reports to the board that “data protection risk is managed through our privacy compliance program”, is it accurately representing the risk landscape if AI systems are processing patient data in ways that challenge fundamental privacy principles? When the company assures leadership that “product liability is covered by our insurance and quality systems”, does that assessment hold when algorithmic decision-making creates new forms of causation and liability theories that courts are still developing?
The Compounding Effect: Why AI Risk Multiplies Rather Than Adds Up
What makes AI risk particularly insidious is its tendency to compound rather than simply accumulate. A single AI deployment can simultaneously create regulatory exposure across multiple jurisdictions, generate novel product liability theories, trigger data protection obligations and raise IP infringement issues – all while evolving in ways that make the risk profile itself a moving target.
This compounding effect means that traditional risk mitigation strategies may provide false comfort. Contractual liability caps become meaningless when regulatory enforcement actions bypass private law remedies. Insurance coverage designed for traditional product defects may not respond to AI-specific claims. Due diligence processes focused on static risk assessment fail to account for systems that learn and change post-deployment.
The result is a risk landscape where legal departments may be underestimating their organisation’s exposure while simultaneously providing leadership with assurances based on outdated risk assessment frameworks.
Regulatory Arbitrage Becomes a Regulatory Trap
The fragmented global approach to AI regulation creates another layer of complexity that challenges traditional compliance strategies. Life sciences companies have long managed regulatory complexity through careful market-by-market compliance planning. AI disrupts this approach by creating scenarios where compliance in one jurisdiction can create non-compliance in another.
For example, an AI system designed to meet FDA expectations on explainability may fail to satisfy the EU AI Act’s transparency obligations. An algorithm developed under the UK’s innovation-friendly guidance may fall foul of more restrictive EU requirements. These are not simply matters of parallel compliance – they reflect fundamental conflicts in regulatory philosophy that force strategic choices about market access and risk tolerance.
This regulatory fragmentation forces in-house counsel to make risk assessments that go beyond legal compliance to strategic business positioning. The question becomes not just “are we compliant?” but “which regulatory framework do we optimise for, and what are the strategic implications of that choice?”.
The Liability Time Bomb: When Risk Materialises Years Later
Traditional product liability follows predictable patterns: defects are typically discoverable relatively quickly, and liability theories are well established. AI systems, by contrast, can create latent liability that may not surface until years after deployment, emerging gradually as algorithms evolve, datasets change or new use cases reveal unforeseen risks.
This creates a particularly acute challenge for legal risk assessment. How do you quantify potential liability for an AI system that may develop new capabilities or exhibit different behaviours years after deployment? How do you advise leadership about risk tolerance when the full scope of potential exposure will not be apparent until long after business decisions have been made?
The compounding effect of this latent liability is that organisations may be accumulating AI-related exposure across multiple deployments, creating portfolio risks that are difficult to assess and potentially difficult to manage through traditional risk transfer mechanisms.
Rethinking Legal Risk Architecture for the AI Era
The strategic imperative for in-house legal teams is clear: the legal risk architecture that served life sciences companies in the pre-AI era requires fundamental reimagining. This is not about adding “AI risk” as a new category to existing risk registers; it is about recognising that AI integration changes the nature of legal risk itself, and about helping the organisation understand that this shift demands new approaches to risk assessment, mitigation and communication.
AI will require legal departments to develop new frameworks for identifying interconnected risks, assessing evolving exposures, and communicating dynamic risk profiles to leadership. Traditional approaches to risk quantification, which rely on historical data and predictable patterns, must be supplemented with scenario planning and stress testing for novel risk combinations.
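As a purely illustrative example of such stress testing – every probability and the correlation mechanism below are assumptions invented for the sketch, not empirical figures – a short Monte Carlo simulation can show how a shared underlying failure makes joint risk far likelier than independent estimates suggest.

# Minimal Monte Carlo sketch of stress testing a novel risk combination.
# All probabilities and the correlation mechanism are illustrative assumptions.
import random

random.seed(42)
TRIALS = 100_000

# Assumed standalone annual probabilities of three AI-related exposures.
P_REGULATORY = 0.05
P_LIABILITY = 0.03
P_IP = 0.02
# Assumed shared driver: one underlying model failure raises all three.
P_MODEL_FAILURE = 0.04

combined_hits = 0
for _ in range(TRIALS):
    model_failure = random.random() < P_MODEL_FAILURE
    # A model failure sharply increases each conditional probability.
    reg = random.random() < (0.60 if model_failure else P_REGULATORY)
    liab = random.random() < (0.50 if model_failure else P_LIABILITY)
    ip = random.random() < (0.30 if model_failure else P_IP)
    if reg and liab and ip:
        combined_hits += 1

independent_estimate = P_REGULATORY * P_LIABILITY * P_IP
print(f"naive independent estimate: {independent_estimate:.5f}")
print(f"simulated correlated rate:  {combined_hits / TRIALS:.5f}")

Under these invented numbers, the simulated rate comes out roughly two orders of magnitude above the naive product of standalone probabilities: when one failure can trigger several exposures at once, multiplying independent estimates materially understates the joint risk. That gap is the compounding effect in miniature.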
The Strategic Response: Beyond Risk Management to Risk Intelligence
In any organisation, the legal function’s response to AI cannot simply be defensive. In an environment where AI-related legal risks are evolving faster than traditional risk management frameworks can adapt, legal teams must develop what might be called “risk intelligence” – the ability to identify, analyse and respond to emerging risks in real time.
This requires legal teams not only to enhance their knowledge and understanding of AI technology, but also to develop a dynamic approach to risk assessment and to adapt how they communicate evolving risk landscapes to leadership.
The organisations that successfully navigate the AI transformation will be those whose legal functions evolve from risk managers to risk strategists – helping management make informed business decisions about AI adoption while building adaptive compliance capabilities that can evolve with the technology and regulatory landscape.
The future belongs to legal departments that can help their organisations harness AI’s transformative potential while building sophisticated, adaptive approaches to managing the novel risks that come with that transformation.