Contributed By Jones Walker LLP
Healthcare artificial intelligence (AI) encompasses a diverse range of technologies transforming medical practice across the United States, and the number and type of approved applications continue to expand with each passing year. Over the past decade, US Food and Drug Administration (FDA) approvals of AI- and machine learning (ML)-enabled medical devices have surged, with nearly 800 such devices authorised for marketing (via 510(k) clearance, the granting of a de novo request or pre-market approval (PMA)) during the five-year period ending in September 2024 alone.
The spectrum of AI/ML-enabled applications is quite broad and includes the following.
Adoption Rates and Implementation
In an American Medical Association survey tracking changes in physician sentiment towards healthcare AI between August 2023 and November 2024, nearly three in five physicians reported using AI in their practices. Healthcare AI adoption varies significantly across institutions and specialties, with larger health systems and academic medical centres typically leading implementation efforts. Regulatory approval pathways, reimbursement policies and technical infrastructure capabilities influence adoption timelines across different healthcare settings.
Healthcare AI delivers significant advantages, including enhanced diagnostic accuracy, improved clinical efficiency, management of workforce shortages and overall reductions in healthcare costs through optimised resource utilisation. AI systems, particularly when integrated with telemedicine platforms, also enhance access to specialised care, especially in underserved areas where specialist expertise may be limited.
However, there are a number of issues that are, at least temporarily, causing some pushback to the adoption of healthcare AI solutions.
A number of market forces are affecting the development and deployment of healthcare AI solutions in the United States, including the following.
The United States lacks a single, comprehensive definition of healthcare AI across regulatory agencies. Instead, different federal bodies provide context-specific definitions tailored to their respective jurisdictions and regulatory frameworks.
FDA Classification Approach
The FDA regulates healthcare AI primarily under existing medical device frameworks, classifying AI-enabled software as “software as a medical device” (SaMD) when it meets specific criteria for medical purposes. The FDA’s traditional paradigm of medical device regulation was not designed for adaptive AI and ML technologies. This creates unique challenges for continuously learning algorithms that may evolve after initial market authorisation.
In January 2021, the FDA issued the AI/ML-based SaMD Action Plan, which outlined the following five actions based on the total product life cycle (TPLC) approach for the oversight of AI-enabled medical devices:
Regulatory Categories by Function
Healthcare AI systems receive different regulatory treatment in the United States based on their intended functions and clinical applications:
Emerging Classification Challenges
As of late 2023, the FDA had not approved any devices that rely on a purely generative AI (genAI) architecture. Such genAI technologies can create synthetic content, including medical images or clinical text, which may require new regulatory approaches.
The distinction between clinical decision support tools and medical devices remains an ongoing area of regulatory clarification. Software that provides information to healthcare providers for clinical decision-making may or may not constitute a medical device depending on the specific functionality and level of interpretation provided.
Federal Medical Device Regulation
The Federal Food, Drug, and Cosmetic Act (FFDCA) provides the foundational legal framework governing healthcare AI systems that meet medical device criteria. In 2021, the Health Information Technology for Economic and Clinical Health Act (the “HITECH Act”) was amended to require the Health and Human Services (HHS) Secretary to further encourage regulated entities to bolster their cybersecurity practices. The 21st Century Cures Act clarified FDA authority over certain software functions while exempting specific low-risk applications from medical device regulation. In January 2025, proposed legislation, the Health Technology Act of 2025 (H.R. 238), was introduced to amend the FFDCA and allow AI systems to prescribe FDA-approved drugs autonomously.
Cybersecurity Requirements
The Consolidated Appropriations Act of 2023 amended the FFDCA to require cybersecurity information in pre-market submissions for “cyber devices”. Medical device manufacturers must now include cybersecurity information in pre-market submissions for AI-enabled devices that connect to networks or process electronic data.
Health Information Privacy Regulation
The Health Insurance Portability and Accountability Act (HIPAA) and the HITECH Act establish comprehensive privacy and security requirements for protected health information (PHI) used in AI systems. The introduction of AI does not change the traditional HIPAA rules on permissible uses and disclosures of PHI.
AI-Specific Privacy Considerations
AI tools must be designed to access and use only the PHI strictly necessary for their purpose, even though AI models often seek comprehensive datasets to optimise performance. Healthcare organisations must ensure that AI vendors processing PHI operate under robust business associate agreements (BAAs) that specify permissible data uses and required safeguards.
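By way of illustration only, the following Python sketch shows one way a developer might enforce field-level data minimisation before patient data is passed to an AI tool. The field names and allow-list are hypothetical; the actual “minimum necessary” data set depends on the tool’s documented purpose and the terms of the applicable BAA.

```python
# Illustrative sketch of "minimum necessary" filtering before PHI is sent
# to an AI vendor. Field names and the allow-list are hypothetical; the
# permissible data set is determined by the tool's purpose and the BAA.

from typing import Any, Dict

# Hypothetical allow-list: only the fields the AI tool needs for its purpose.
ALLOWED_FIELDS = {"age", "sex", "diagnosis_codes", "lab_results"}

def minimise_record(record: Dict[str, Any]) -> Dict[str, Any]:
    """Return a copy of the record containing only allow-listed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

if __name__ == "__main__":
    full_record = {
        "name": "Jane Doe",            # direct identifier: excluded
        "ssn": "000-00-0000",          # direct identifier: excluded
        "age": 54,
        "sex": "F",
        "diagnosis_codes": ["E11.9"],
        "lab_results": {"hba1c": 7.2},
    }
    print(minimise_record(full_record))
```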
Executive Branch AI Initiatives
President Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence established government-wide AI governance requirements that affected healthcare applications. The order was rescinded by President Trump on 20 January 2025, within hours of his inauguration. Further White House action in this area is uncertain.
State-Level AI Regulation
Multiple states have enacted healthcare-specific AI legislation addressing various aspects of AI deployment and use. In the 2024 legislative session, 45 states, Puerto Rico, the US Virgin Islands and the District of Columbia introduced AI bills, and 31 states, Puerto Rico and the US Virgin Islands adopted resolutions or enacted legislation.
Federal Anti-Discrimination Requirements
In July 2024, new requirements were put in place to help protect consumers from discrimination when AI tools are used in healthcare. A final rule published by the HHS Office for Civil Rights (OCR) under Section 1557 of the Affordable Care Act (ACA) states that healthcare entities must ensure AI systems do not discriminate against protected classes and must take corrective action when discrimination is identified. Given President Trump’s opposition to diversity, equity and inclusion (DEI) initiatives, however, it is uncertain whether or how compliance with the final rule will be enforced.
FDA Pre-Market Pathways
Healthcare AI developers must navigate established FDA pre-market pathways depending on their system’s risk classification and intended use. The FDA reviews medical devices through an appropriate pre-market pathway, as follows.
Predetermined Change Control Plans (PCCPs)
FDA guidance on PCCPs recommends the information to include in a PCCP as part of a marketing submission for an AI-enabled medical device. The PCCP should include a description of the device’s planned modifications; the methods to develop, validate and implement those modifications; and an assessment of the impacts of those modifications. This innovative approach enables AI developers to modify their systems without additional pre-market submissions when changes fall within predetermined parameters.
Clinical Evidence Requirements
AI system developers must provide clinical evidence demonstrating safety and effectiveness for intended uses. Evidence requirements vary based on risk classification, with higher-risk systems requiring more extensive clinical validation. The FDA emphasises real-world evidence and post-market surveillance.
Expedited Pathways
The FDA provides several expedited pathways for breakthrough medical devices, including AI systems, that address unmet medical needs or provide significant advantages over existing treatments. These pathways offer enhanced FDA communication and expedited review timelines while maintaining safety and effectiveness standards.
Regulatory Framework for AI-Based SaMD
On 6 January 2025, the FDA published the Draft Guidance: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations. This comprehensive guidance addresses unique challenges posed by AI-enabled software medical devices.
Other issues include the following.
Developers and users of healthcare AI must adhere to a number of data privacy and security requirements, including:
A number of mandated and voluntary standards regimes apply to healthcare AI, and multiple standards organisations have developed technical requirements and best practices for interoperability of such systems. Among others, the Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) standard enables AI systems to exchange data with electronic health record systems and other healthcare technologies.
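For illustration, the sketch below shows how a client application might retrieve a single Patient resource from a FHIR R4 server over the standard REST interface using Python. The endpoint URL and resource ID are placeholders, and a production integration would also handle authentication (for example, SMART on FHIR/OAuth 2.0), which is omitted here.

```python
# Minimal sketch of retrieving a FHIR resource over the standard REST API.
# The base URL and patient ID are placeholders; real deployments also
# require authentication, which is omitted for brevity.

import requests

FHIR_BASE = "https://fhir.example.org/r4"   # hypothetical FHIR R4 endpoint
PATIENT_ID = "12345"                        # hypothetical resource ID

def fetch_patient(base_url: str, patient_id: str) -> dict:
    """Fetch a Patient resource as FHIR JSON."""
    resp = requests.get(
        f"{base_url}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    patient = fetch_patient(FHIR_BASE, PATIENT_ID)
    # FHIR Patient resources carry demographics such as name and birthDate.
    print(patient.get("resourceType"), patient.get("birthDate"))
```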
The FDA recognises consensus standards developed by organisations such as ASTM International, the Institute of Electrical and Electronics Engineers (IEEE), and the International Organization for Standardization (ISO) that address AI system safety, performance and cybersecurity requirements; establish quality management system requirements for medical device development; and address design controls, risk management and validation processes throughout the AI development life cycle.
HHS proposed regulations in January 2025 that would require covered entities to conduct vulnerability scanning at least every six months and penetration testing at least annually. The National Institute of Standards and Technology (NIST) publishes comprehensive cybersecurity frameworks that provide guidelines for protecting AI systems and the health information they process.
Adherence to established data exchange standards, application programming interface (API) specifications and workflow integration protocols enables AI tools to function within complex healthcare technology environments.
US federal regulatory bodies that oversee healthcare AI include the following.
Healthcare AI developers must comply with a range of pre-market requirements involving the following, among other issues.
Following market introduction, healthcare AI systems must continue to monitor performance and compliance. Key areas of concern include:
Non-compliance with relevant laws and regulations can be addressed in several ways.
Healthcare AI liability generally operates within established medical malpractice frameworks that require the establishment of four key elements: duty of care, breach of that duty, causation and damages. When AI systems are involved in patient care, determining these elements becomes more complex. While a physician must exercise the skill and knowledge normally possessed by other physicians, AI integration creates uncertainty about what constitutes reasonable care.
Healthcare AI liability often involves multiple stakeholders, including healthcare providers, AI developers, healthcare institutions and others in the AI supply chain. For example, a consultation that results in patient harm might implicate the treating physician, the health system and the developers of clinical decision support software used during the encounter.
Other considerations include the following.
Traditional malpractice standards must adapt to address algorithm-based recommendations and decision support. In April 2024, the Federation of State Medical Boards released recommendations to its members indicating, among other suggestions, that they should hold clinicians liable if AI technology makes a medical error. Healthcare providers must understand AI system limitations and maintain appropriate clinical judgment when incorporating algorithmic recommendations into patient care decisions.
Causation Challenges
When algorithms influence or drive medical decisions, determining responsibility for adverse outcomes presents novel legal challenges not fully addressed in existing liability frameworks. Among other issues, courts must evaluate whether AI system recommendations served as a proximate cause of patient harm, as well as the impacts of the healthcare provider’s independent medical judgment and other contributing factors.
Documentation and Evidence Requirements
Healthcare providers must maintain detailed documentation of AI system use, including the specific recommendations provided, clinical reasoning for accepting or rejecting algorithmic guidance and any modifications made to AI-generated suggestions.
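As a purely illustrative aid, the sketch below shows one possible structure for such documentation. The field names are hypothetical and do not reflect any mandated schema or particular electronic health record system.

```python
# Hypothetical structure for documenting AI-assisted clinical decisions.
# Field names are illustrative only; they do not reflect any mandated schema.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    patient_id: str            # internal identifier, not for disclosure
    ai_system: str             # name/version of the AI tool consulted
    recommendation: str        # the specific recommendation provided
    clinician_action: str      # "accepted", "rejected" or "modified"
    clinical_reasoning: str    # rationale for accepting/rejecting/modifying
    modifications: str = ""    # changes made to the AI-generated suggestion
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    patient_id="internal-001",
    ai_system="sepsis-risk-model v2.1",
    recommendation="Initiate sepsis bundle",
    clinician_action="modified",
    clinical_reasoning="Risk score elevated but cultures pending",
    modifications="Ordered cultures before antibiotics",
)
print(asdict(record))
```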
Expert Testimony Considerations
AI-related malpractice cases may require expert witnesses with specialised knowledge of medical practice and existing AI technology capabilities and limitations. Such experts should have the experience necessary to evaluate whether healthcare providers used AI systems in an appropriate manner and whether algorithmic recommendations met relevant standards.
Burden of Proof Considerations
Plaintiffs in AI-related malpractice cases face challenges proving that AI system errors directly caused patient harm, particularly when healthcare providers retained decision-making authority. Decisions regarding potential liability often depend on judgments made by lay-person jurors.
To mitigate risks associated with healthcare AI, developers, vendors, health systems and practitioners should:
When disputes arise, healthcare providers, systems, and healthcare AI developers can look to several defence strategies:
In the United States, healthcare AI ethical frameworks emphasise core principles such as beneficence, non-maleficence, autonomy and justice. These principles guide AI development and deployment decisions while addressing unique challenges posed by algorithmic decision-making in healthcare settings. Some of the more commonly known (and mostly voluntary) frameworks include the following.
To minimise risk to patients, providers and health systems, a number of tools can be implemented, including the following.
Given the rapid turnabout in executive-branch policy toward DEI and anti-discrimination initiatives, it remains to be seen how federal healthcare AI regulations with respect to bias and fairness will be affected. The following review looks at policies that existed before the Trump administration; it is fair to say that many of these will be revised this year.
In most federal and state regulatory schemes, ultimate responsibility for healthcare AI systems is assigned to the people and organisations that implement them, not to the AI itself. Specific best practices include:
Training data for healthcare AI systems must meet stringent standards in order to provide meaningful information and outcomes.
The transfer of PHI and other sensitive information should occur only under specific rules that protect patient privacy and address the following.
Data sharing activities should address the following:
To better ensure appropriate safeguards for, and anonymity of, health data, the following must be taken into account.
Patents remain one of the most effective tools for protecting healthcare innovations. AI developers should consider the following when developing new systems and platforms.
Recent developments suggest increased scrutiny of AI-related patents. The Federal Circuit’s approach to software patentability under the Alice framework continues to evolve, with AI-specific considerations including whether the AI application provides a technical improvement to computer functionality, the degree of human intervention in AI-generated inventions and potential heightened obviousness rejections when prior art includes AI tools.
Software code, user interfaces, and documentation may receive copyright protection as original works of authorship. However, copyright protection does not extend to underlying algorithms or mathematical concepts, limiting its scope for AI innovations.
Additional issues to consider include:
When multiple parties contribute to the development, deployment and analysis of healthcare AI technology and its outputs, the following issues should be considered.
Two primary licensing and commercialisation models prevail in today’s healthcare AI marketplace:
Specific considerations must be addressed with respect to:
AI-based clinical decision support systems receive different regulatory treatment depending on their specific functionality and the level of interpretation provided to healthcare providers. Other considerations include:
AI-based diagnostic tools are regulated as medical devices when they analyse patient data to provide diagnostic information or recommendations. These systems typically require clinical validation demonstrating safety and effectiveness for specific intended uses.
Different medical specialties have developed specific frameworks for AI diagnostic tools that address unique validation requirements and clinical applications. For example, radiology AI systems may require different validation approaches compared to pathology or cardiology applications.
Successful diagnostic AI deployment requires effective integration with existing clinical workflows, imaging systems and laboratory processes.
AI systems used in treatment planning or therapeutic decision-making face regulatory oversight based on their risk classification and potential impact on patient care. Higher-risk applications may require extensive clinical validation, PMA processes and meaningful human oversight to ensure appropriate clinical judgment and professional accountability, as well as robust safety monitoring programmes.
Therapeutic AI systems should demonstrate clinical benefits through appropriate evidence generation.
Although in use for decades, telemedicine demonstrated its value most clearly during the recent COVID-19 pandemic. In this context, healthcare AI applications used in remote patient monitoring and telemedicine must comply with both AI-specific regulations and broader telemedicine legal frameworks.
AI systems functioning in home or non-clinical settings face unique regulatory challenges related to device performance, user training and clinical oversight, as well as data privacy and security requirements for data collected outside traditional healthcare settings.
At-home and remote-monitoring AI also requires integration with clinical workflows that enable healthcare providers to review data, respond to alerts and co-ordinate care for remote patients. Finally, AI-enabled remote monitoring services must navigate complex reimbursement landscapes that vary by payer, service type and clinical application.
AI tools used in drug discovery and development receive regulatory oversight through established pharmaceutical development pathways, but are also subject to a number of challenges posed by algorithmic approaches to drug design and clinical trial optimisation, including:
The second Trump administration has taken great pains to change the role of the federal government with respect to technology, medical research and many other areas. In this environment, it is difficult to predict with certainty how the healthcare AI landscape will develop over the next year or two. With that in mind, however, certain questions remain.
Given current executive-branch disruptions and uncertainties, existing innovation programmes and regulatory sandboxes may soon be curtailed or allowed to expire. The following programmes may be under threat.
Given current uncertainties, it is difficult to predict the degree to which US agencies, businesses and other organisations will be allowed to participate in multinational initiatives aimed at harmonising healthcare AI regulations.
Given current political, legislative and regulatory uncertainties in the United States, it remains to be seen which legal challenges with respect to healthcare AI are likely to rise to the fore. That said, there remain a number of key issues that continue to be subject to ongoing scrutiny and debate, including questions involving:
Healthcare stakeholders should remain focused on a number of core compliance matters:
Healthcare AI contracts must address complex technical, legal and regulatory requirements and allocate risks and responsibilities appropriately. Contracts should cover system performance, data handling, regulatory compliance and liability allocation with sufficient detail to prevent disputes. Other key issues to consider include:
Healthcare organisations should audit existing insurance coverage to identify potential gaps related to AI risks. Traditional policies may not adequately cover AI-specific risks such as algorithmic errors, data breaches or intellectual property infringement.
The insurance industry has developed specialised products and pricing models to address AI-related risks, including cyber-liability insurance, technology errors and omissions coverage, and AI-specific professional liability policies. Insurance carriers increasingly evaluate AI-related risks during underwriting processes, requiring detailed information about AI system deployment, governance frameworks and risk management practices. Organisations with robust AI governance may qualify for preferred pricing or coverage terms.
Organisations considering the implementation of healthcare AI should:
To navigate cross-border deployment of healthcare AI, organisations should pay specific attention to the following, among other issues:
201 St Charles Ave
New Orleans
LA 70170-5100
USA
+1 337 593 7634
+1 337 593 7601
ndelahoussaye@joneswalker.com
www.joneswalker.com