The Digital Healthcare 2024 guide covers 12 jurisdictions. The guide provides the latest legal information on digital healthcare, medicine and therapeutics; healthcare regulatory agencies and enforcement; wearables, implantables and digestibles; software as a medical device; telehealth; the internet of medical things; data protection; AI and machine learning; cloud computing and IP issues.
Last Updated: June 27, 2024
Introduction
Artificial intelligence (AI) is not a strategy. It is a technology layer. Moreover, good AI requires good data. Hospitals and the technology companies that develop healthcare IT, enabling the transformative power of AI (both generative and non-generative) and leveraging the power of data, face legal challenges that are not readily solved by conventional legal approaches developed for older technologies and data asset classes. As AI and data have changed healthcare, they have also changed how agreements for AI, data and IT products and services should be structured. This article focuses on four areas of digital healthcare that rely on new sources of data and AI: the pros and cons of AI in digital healthcare; a proposed “decision rights” data licensing model; updating IT infrastructure; and buying technology to build technology.
Pros and Cons of AI in Digital Healthcare
The use of AI in healthcare promises much but also poses risks and can result in unintended consequences, as explored below.
Reducing administrative burdens on physicians
One of the most promising benefits of AI in healthcare is that it can meaningfully reduce the time physicians spend on administrative functions, including time spent reconciling data that comes from different systems or arrives in a form that does not provide sufficient information about the subject matter of a specific patient visit. In this sense, AI can be said to mediate between existing IT systems, with their designed-in limitations, and the data that physicians need to extract from the output or on-screen presentation of those systems.
Reducing the administrative burden will free up the time doctors currently spend not interacting with their patients. Anecdotally, this will increase physician satisfaction and allow more time for direct interaction with patients during appointments; for the same reasons, using AI and the data governance systems it necessitates will increase patient satisfaction. Even simply using AI to co-ordinate appointments will lead to benefits in how medical care is explained and delivered. Put simply, healthcare AI can increase not only the quantity of direct physician-patient time but also the quality of that time: it lets doctors be doctors and not virtual IT staff.
Reducing the administrative burden at the individual physician level requires using AI at the equivalent of the enterprise or business unit level. This, in turn, requires not only good data practices and relevant AI practices but also the legal agreements that implement those practices, through proper documentation, in a technology- and data-rich ecosystem with multiple stakeholders holding differing expectations. The legal agreements, together with the internal policies that serve as an enterprise’s equivalent of agreements, are used to allocate responsibilities and accountability. Switching the focus from the physician to the hospital or the medical practice as a whole, the promise of AI is that it will further reduce administrative overhead at the enterprise level.
Faster dissemination of best medical practices
AI is used in the development of new best medical practices, including incremental or substantial revisions or improvements to currently followed best practices. Best practices, by their nature, improve with advances in research and clinical studies. AI can speed not only the development of new best practices but also, importantly for the scope of this article, their dissemination to physicians and medical centres, faster than traditional publication in journals and medical papers. Accelerating dissemination can also lead to faster verification and adoption of best practices.
Freezing best practices in place
AI can also have an adverse effect on medical protocols: it can be used to codify and potentially “freeze” best practices in place. A simple example is the steps that emergency medical technicians follow, as a matter of protocol, when transporting to hospital patients identified as having a specific medical condition. The risk created by AI is that once a best practice is embedded in an AI system used to promulgate protocols, it becomes a practice that may not be subject to timely revision. Such practices might never be removed and replaced by better practices as part of a standard protocol. In other words, AI-generated protocols may have the unintended effect of taking a best practice and freezing it in place, even when the practice has been superseded. To avoid this risk, healthcare professionals must be cognisant of the steps needed to “import” new best practices into standard protocols.
AI and the risk of systematic failure
In a scenario where a single MRI machine or similar medical device fails, the impact can be characterised as a one-device, one-patient-at-a-time failure. If devices are connected and/or depend on a common AI solution, and that solution fails, the result is not a one-patient-at-a-time failure but a systematic failure affecting a large number of machines and patients in near real time. This risk grows as the “internet of medical things” (discussed below) creates an integrated system used to deliver medical care.
A systematic failure can have two adverse consequences. First, it is difficult to reschedule patients who had appointments for treatment dependent on sophisticated systems or who need the specialised care provided by an AI-enabled system; this delays treatment and complicates rescheduling for seriously ill patients. Second, the hospital loses income while the system is out of use, because it cannot be reimbursed for procedures it does not perform.
Because the failure is systematic, the remedy requires more than technicians who can remove a single machine from service while it is diagnosed and repaired. Where AI has created a systematic failure, another solution is required. From both operational and legal perspectives, this means that healthcare system operators should be contractually obliged to have a SWAT team available to analyse and restore the system.
A Proposed Legal Data Licensing Model: The “Decision Rights” Licensing Model
Data becomes valuable in healthcare when it is converted into information, and information becomes valuable when it is converted into actionable insights. These insights are what lead to advances in clinical medicine and research. Data is not technology and data does not manage itself. Data must be collected, transmitted and analysed using digital healthcare technologies. Medical devices that are connected together in computer networks constitute the internet of medical things (IoMT). These connected devices collect data from multiple sources and provide it for multiple purposes, including use in AI, to generate insights that can be acted upon.
The four applications (or use cases) for the decision rights data licensing model
The decision rights data licensing model has four applications: external licences to third parties; shared use of data under joint development agreements; internal use of data within a single entity; and sharing data between separate legal entities in the same healthcare system. Each is discussed in turn below.
A predicate problem: limitations of using a field of use for the scope of licence rights
The legal objective in licensing data is to define the scope of use. A common way to do this is for one party to grant rights in a particular field of use. However, defining the scope of use as a field of use creates practical and legal problems. For example, when both parties are in the same field of use, namely healthcare, a field-of-use definition is often unworkable because it does not provide precise boundaries, to the detriment of each party. The party granting the right to use data (the “licensor”) would be giving away significant rights to the party receiving the grant (the “licensee”), and in doing so could limit, or in extreme cases preclude, its own use of its own data in its own field of use, in a manner that would compromise its existing or future business.
Dividing a healthcare business field of use into subfields often does not work either. For example, a licensee will want broad rights to use the data in a subfield that aligns with the scope of its business in the healthcare sector, but that scope will likely overlap with the licensor’s. If the scope of licensed rights were broad, here again the licence could restrict the licensor’s own use of its own data in its own area. In addition, if the subfield of use were defined by the licensee’s scope of operations and that subfield contained many companies in a similar business, then granting rights to the licensee could, as a practical matter, limit the licensor’s ability to license the data to those other companies, even when they would use the data for different purposes.
How does the decision rights data licensing model work?
In place of defining the scope of licence rights in terms of a field of use, the decision rights model defines data rights in terms of the decisions that can be made using the data. The set of decisions can be defined with a precision that is not possible when using fields or subfields of use. “Decisions” are thus a means to establish the scope of rights: because data derives its meaning from context, decisions provide that context and the limitations on it.
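To make the model concrete, the scope of a decision rights licence can be thought of as a machine-checkable policy: a defined set of permitted decisions attached to a dataset. The following sketch is purely illustrative; the class and field names (DecisionRightsLicence, permitted_decisions and so on) are hypothetical and are not drawn from any actual agreement or system.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DecisionRightsLicence:
    """Illustrative model of a data licence whose scope is a set of permitted decisions."""

    licensor: str
    licensee: str
    dataset: str
    permitted_decisions: frozenset  # the only decisions the licensee may make with the data

    def permits(self, proposed_decision: str) -> bool:
        """A proposed use of the data is within scope only if it maps to a permitted decision."""
        return proposed_decision in self.permitted_decisions
```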
The first application: external licences
The following simplified hypothetical shows how the decision rights data licensing model could work. Assume a healthcare institution wants to prevent childhood malnutrition or improve its treatment. It would benefit from data held by a hospital with a large paediatric practice, covering a large number of children with a wide range of medical conditions. Using the decision rights model, the hospital would license paediatric data to the malnutrition institution with the scope of licence rights defined as the right to make decisions limited to determining malnutrition factors. Machine learning could identify previously unknown correlations, and the decision rights would include the right to test those correlations for the purpose of identifying medical treatment.
Under the decision rights model, the hospital would retain the right to license the same paediatric data for other purposes, including the right to license it to a pharmaceutical company, with the pharmaceutical company having rights to make decisions necessary to assemble a cohort for a clinical trial. Thus, the hospital can license the same data multiple times for different purposes, with those purposes defined as rights to make a set of defined decisions.
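Continuing the hypothetical, and reusing the illustrative DecisionRightsLicence sketch above (all entity names, dataset names and decisions are invented placeholders), the same paediatric dataset can be licensed twice with different, narrowly defined decision scopes:

```python
# The hospital licenses the same paediatric dataset twice, each time for a
# different, narrowly defined set of decisions (all names are hypothetical).
malnutrition_licence = DecisionRightsLicence(
    licensor="Paediatric Hospital",
    licensee="Malnutrition Research Institution",
    dataset="paediatric_records",
    permitted_decisions=frozenset({
        "identify_malnutrition_factors",
        "test_correlations_for_treatment",
    }),
)

pharma_licence = DecisionRightsLicence(
    licensor="Paediatric Hospital",
    licensee="Pharmaceutical Company",
    dataset="paediatric_records",
    permitted_decisions=frozenset({"assemble_clinical_trial_cohort"}),
)

# A proposed use outside the defined decision set falls outside the licence scope.
assert malnutrition_licence.permits("identify_malnutrition_factors")
assert not malnutrition_licence.permits("assemble_clinical_trial_cohort")
```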
The second application: shared use of data
The model can also be used when two healthcare entities enter a joint development agreement where each company has data useful to the other, and the joint development project includes sharing data. Entity A can license its data to entity B for the limited purposes of improving a particular device of entity B. Entity B would license data from the device, or about the operation of the device, to entity A so that entity A could improve its technology that generates the data that is useful to entity B. Put another way, entity B’s rights are the rights to make decisions to improve its device. In this way, the entities share data and are prohibited from disclosing it outside of the joint development engagement.
The third application: internal use of data
The model applies to internal use of data as well as to licences to third parties. In the internal use scenario, the model converts from an external agreement into a set of internal policies that set boundaries around each business unit’s scope of use. Using the decision rights model in this way helps ensure that the receiving business units do not use the data in a manner that violates the regulatory regime to which the entity is subject. It is also a means of enforcing the entity’s data use policies and of controlling which employees have access to the data and for what purposes.
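As a sketch of how the same idea might translate into internal policy, the permitted decisions can be recorded per business unit and dataset and checked before access is granted. The unit names, dataset names and decisions below are hypothetical.

```python
# Hypothetical internal policy: each business unit is limited to a defined
# set of decisions it may make with a given internal dataset.
INTERNAL_DECISION_POLICY = {
    ("clinical_research_unit", "patient_records"): frozenset({"design_clinical_study"}),
    ("operations_unit", "patient_records"): frozenset({"schedule_appointments"}),
}


def unit_may_use(business_unit: str, dataset: str, proposed_decision: str) -> bool:
    """Allow use only when the proposed decision is within the unit's permitted set."""
    permitted = INTERNAL_DECISION_POLICY.get((business_unit, dataset), frozenset())
    return proposed_decision in permitted
```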
The fourth application: sharing data between separate legal entities in the same healthcare system
The fourth application arises when data is shared between two entities that are separate but form part of the same healthcare system. This is a hybrid of the first and third applications: there will be licensing agreement(s) between or among the entities as formal legal documents, and the subject of the licence will be the limited uses to which the receiving entities may put the data.
In addition to defining the scope of use, the decision rights data licensing model is also a way to ensure that the licensee or receiving entity complies with applicable regulatory requirements, protecting the licensor against regulatory sanctions. A risk of retaining personally identifiable information beyond its useful life in the organisation is that, in the event of a database breach, that information will be disclosed, increasing the fines, the damages in litigation and the costs of providing identity theft protection that follow. All of this can be avoided if the decision rights licence requires purging that data at the end of its useful life: it cannot be hacked if it is not there. This has particular relevance for personal health data, which is inherently sensitive.
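One way such a purge obligation might be given operational effect is sketched below; the retention period and record fields are hypothetical and would, in practice, be taken from the licence itself.

```python
from datetime import date, timedelta

# Hypothetical retention period drawn from the licence terms.
RETENTION_PERIOD = timedelta(days=365 * 3)


def purge_expired_records(records: list[dict], today: date) -> list[dict]:
    """Keep only records still within their useful life; expired personally
    identifiable records are dropped entirely, so they cannot be breached."""
    return [r for r in records if today - r["collected_on"] <= RETENTION_PERIOD]
```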
Updating IT Infrastructure
Most current hospital IT systems are not designed to handle the volume of data now generated by connected devices in the IoMT, as well as data from consumer devices such as fitness and wellness trackers. They also lack the capacity to conduct the sophisticated data analytics made possible by advances in AI technology.
Examples of connected devices in healthcare are wearables (eg, sensors and data collection devices attached to the skin), implantables (eg, pacemakers), ingestibles (eg, diagnostic pills that transmit images), smartphones and similar devices, real-time location sensors (for hospital staff and medical equipment) and virtual reality and augmented reality devices (which are used in surgery and medical student training). Even drones, used in the healthcare aspects of disaster response, and devices that transmit medical images and data between ambulances and emergency rooms are part of the system of connected devices.
As a result, the use of digital healthcare technology requires upgrading IT infrastructure and negotiating the agreements that provide for those upgrades, which often involve moving to cloud computing and data storage environments, with their attendant security risks. Here, legal departments and outside counsel must co-ordinate with the hospital’s IT and medical departments. Similarly, technology vendors must ensure a fair allocation of rights and responsibilities when they contribute to parts of the overall technology infrastructure.

Many healthcare systems do not have dedicated data scientists or sophisticated, experienced in-house IT capabilities, and therefore rely on third-party IT vendors to provide the required functionality as digital healthcare advances. This reliance carries the risk that third-party vendors may apply their own AI systems (such as large language models) that can, by the nature of their structure or through inattention in use, disclose personal health data to individuals or entities who are not entitled to see it. In the United States, certain categories of personal health data can be used for research but not for commercial purposes; the third-party risk in this scenario is that third-party AI will take health data approved for research and include it in commercial use, exposing the parties to regulatory sanctions and violations of law, depending on the rules of the relevant jurisdiction. This is, of course, further complicated when data is transferred or used across jurisdictional borders with different scopes of permitted uses of specific types of personal data.
Buying Technology to Build Technology
Risks resulting from changes in components in a device
Technology companies that build medical devices in particular and digital healthcare products in general need to buy technology in order to build their own technology. Digital healthcare products and services often consist of hardware components, software, services and raw materials provided to the product manufacturer or service provider by subcontractors, business partners and other third parties. The risks that arise, especially with connected devices made part of an IoMT, are as follows.
If a third party changes a component included in the overall device, the substituted component may result in a changed device that no longer qualifies as an approved device. Put another way, the product will have to be approved as a new product, with the required expenditure of time and funds.
A change in a third-party component may decrease the functionality of the device as a whole. This is a risk that applies whether or not the device requires regulatory approval.
A change in a component may result in a change to a device that adversely affects the performance of other devices connected to it or otherwise dependent upon it (eg, for the generation of data).
Contractual steps to address these risks
Digital healthcare technology companies can use contracts to address the risks introduced by changes in constituent components. The technology company can require approval of changes or substitutions in components or raw materials. Another solution, especially when continued regulatory approval prohibits changes in components, is to require the subcontractor to continue producing the old version of the component alongside the new version. In this way, the technology company can be assured of a supply of conforming components.
Similarly, the technology company can require the subcontractor to produce a large quantity of the old version of the component for the company’s use even as the subcontractor provides the new version to other customers. The technology company, as buyer, can build a substantial inventory of the required component for its own use, even if the subcontractor changes the component. The contract can require the provision of this inventory when both parties know that the component will change, whether because of supply chain problems or because of advances in technology. Finally, the technology company can secure alternative backup suppliers, which may also mitigate the dangers of supply chain problems for products manufactured in certain countries.
Backward compatibility
Another issue that digital healthcare technology companies face is ensuring that new versions of components continue to work with prior versions. To address this, technology companies can require that subcontractors and suppliers design components to be backward compatible with prior versions. A common approach is to require the new version to be backward compatible with the two earlier versions of the component. (Among other things, this allows the technology company to support and maintain products it sold to customers before the new version was released.) The contract should define what backward compatibility means. It may require that the component be backward compatible with external devices that connect with the technology company’s product.
Backward compatibility should include, where applicable, requirements that the new version connects with, interfaces with, integrates with and otherwise works in conjunction with the external devices and the prior versions of the component.
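As an illustration only, such a requirement could be expressed as a technical acceptance test: the new version must still expose every interface of the two prior versions it is contractually required to support. The version numbers and interface names below are hypothetical.

```python
# Hypothetical acceptance check: a new component version must still support
# the interfaces exposed by the two immediately preceding versions.
SUPPORTED_INTERFACES = {
    "v3.0": {"read_sensor", "stream_data", "report_status"},
    "v2.0": {"read_sensor", "stream_data"},
    "v1.0": {"read_sensor"},
}


def is_backward_compatible(new_version: str, prior_versions: list[str]) -> bool:
    """The new version passes only if it exposes every interface of each prior version."""
    new_interfaces = SUPPORTED_INTERFACES[new_version]
    return all(SUPPORTED_INTERFACES[v] <= new_interfaces for v in prior_versions)


# Example: v3.0 must remain compatible with v2.0 and v1.0.
assert is_backward_compatible("v3.0", ["v2.0", "v1.0"])
```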
Forward compatibility
The success of backward compatibility is increased if each version of the component is also designed to be forward compatible with planned new versions. This, too, can be implemented through technology requirements in contracts.
Cybersecurity requirements
Backward and forward compatibility are an important part of implementing “security by design”. Contracts should address the risk that new versions of a component will introduce cybersecurity vulnerabilities that did not exist before, become an avenue for a cyber-attack on a hospital’s IT infrastructure, or allow unknown changes to the data used in machine learning that, in turn, can have an adverse effect on medical care.
Conclusion
As AI and data become increasingly useful in creating advances in patient treatment and other aspects of digital healthcare, it is important for attorneys at hospital systems and healthcare technology companies, and for their outside law firms, to understand that innovation is also required in the legal aspects of healthcare. This includes restructuring agreements to increase precision in the scope of licensed rights, structuring agreements to account for new sources of data as well as data created in the course of patient care and research, and adapting traditional legal methodologies that were not designed for, and do not adequately reflect, the business arrangements and interests of the parties.
The IT ecosystem of AI, data and the IoMT requires contracts that provide the necessary interoperability and data exchange between connected devices and that also impose technology requirements to address cybersecurity risks. This, in turn, requires contracts that secure co-operation from the manufacturers of devices and the providers of services used in the IoMT. Accelerating advances in AI, data analytics and the technological capabilities of software and physical devices will improve patient care and speed up medical research. All of this requires thoughtful contracts so that technology companies and hospitals alike can seize opportunities and mitigate risks as digital healthcare evolves.