Healthcare AI 2025 Comparisons

Last Updated August 06, 2025

Contributed By Jones Walker LLP

Law and Practice

Jones Walker LLP is among the largest law firms in the United States, with more than 350 lawyers across the Southeast and other strategic locations, including Miami, New York City and Washington, DC. Led by a core group of veteran healthcare lawyers, the firm’s healthcare industry team includes lawyers from all the firm’s major practice areas, each with extensive experience in their respective fields and in-depth knowledge of today’s healthcare marketplace and regulatory environment. The firm’s attorneys have a deep understanding of the technologies that constitute the world of AI, including generative AI, machine learning, natural language processing, large language models (LLMs) and neural networks. This knowledge enables Jones Walker to better help its clients navigate this complex world, mitigate risks, be strategic and develop approaches to differentiate themselves.

Healthcare artificial intelligence (AI) encompasses a diverse range of technologies transforming medical practice across the United States. The number and type of approved applications continue to expand with each passing year. Over the past decade, US Food and Drug Administration (FDA) approvals of AI- and machine learning (ML)-enabled medical devices have surged, with nearly 800 such devices authorised for marketing (via 510(k) clearance, the granting of a de novo request or pre-market approval (PMA)) during the five-year period ending in September 2024 alone.

The spectrum of AI/ML-enabled applications is quite broad and includes the following.

  • Diagnostic applications: The most prevalent healthcare AI applications centre on diagnostics, including imaging and disease identification. Over the past three decades, radiology devices accounted for about 76% of all AI medical device approvals. All told, these systems assist physicians in analysing medical images, including X-rays, magnetic resonance images (MRIs), computed tomography (CT) scans and pathology slides to detect diseases.
  • Clinical decision support systems: AI-powered clinical decision support tools provide real-time recommendations to healthcare providers during patient encounters. These systems analyse electronic health records, laboratory results and clinical guidelines to suggest treatment protocols, flag potential drug interactions and predict patient risks. The systems integrate seamlessly with existing electronic health record systems to enhance clinical workflows without disrupting established practices.
  • Therapeutic and treatment planning: AI applications increasingly support personalised treatment planning by analysing patient-specific data to recommend optimal therapeutic approaches. ML algorithms process genetic information, medical history and treatment response patterns to suggest individualised medication dosages, surgical approaches and rehabilitation protocols.
  • Remote patient monitoring: AI-enabled remote monitoring systems utilise wearable devices, internet of medical things (IoMT) sensors and mobile health applications to continuously track patient vital signs, medication adherence and disease progression. These technologies enable early intervention for deteriorating conditions and support chronic disease management outside traditional healthcare settings.
  • Drug discovery and development: Pharmaceutical companies leverage AI to accelerate drug discovery processes through target identification, molecular modelling and clinical trial optimisation. ML algorithms analyse vast datasets to identify potential drug candidates, predict therapeutic efficacy and optimise trial design to reduce development timelines and costs.
  • Administrative and operational applications: Healthcare organisations deploy AI for administrative functions including revenue cycle management, claims processing, scheduling optimisation and resource allocation. Natural language processing tools automate clinical documentation, coding and billing processes, while predictive analytics optimise staffing and inventory management.

Adoption Rates and Implementation

In a survey conducted by the American Medical Association looking at changes in physician sentiment towards healthcare AI between August 2023 and November 2024, nearly three in five physicians reported using AI in their practices. Healthcare AI adoption varies significantly across institutions and specialties, with larger health systems and academic medical centres typically leading implementation efforts. Regulatory approval pathways, reimbursement policies and technical infrastructure capabilities influence adoption timelines across different healthcare settings.

Healthcare AI delivers significant advantages, including enhanced diagnostic accuracy, improved clinical efficiency, management of workforce shortages and overall reductions in healthcare costs through optimised resource utilisation. AI systems, particularly when integrated with telemedicine platforms, also enhance access to specialised care, especially in underserved areas where specialist expertise may be limited.

However, there are a number of issues that are, at least temporarily, causing some pushback to the adoption of healthcare AI solutions.

  • Healthcare-specific challenges: While healthcare’s expanding use of AI provides benefits, it also complicates the industry’s ability to protect patient information and remain compliant with the data privacy, security and other protections required by the Health Insurance Portability and Accountability Act (HIPAA). AI systems require access to vast amounts of sensitive patient information that must be protected against unauthorised disclosure and cyber-attacks.
  • Algorithmic bias and health equity concerns: Inequitable access to diagnosis and treatment may be improved by new digital health technologies, especially AI/ML, but these technologies may also exacerbate disparities, depending on how bias is addressed. Training data quality and representativeness significantly impact AI system performance across diverse patient populations. In a 2024 review of 692 AI/ML-enabled, FDA-approved medical devices, researchers found that only 3.6% of approvals reported race/ethnicity, 99.1% provided no socioeconomic data and 81.6% did not report the age of study subjects.
  • Clinical integration and workflow disruption: Healthcare organisations face substantial challenges integrating AI tools into existing clinical workflows and electronic health record systems. Technical interoperability issues, user training requirements, and change management processes require significant investment and co-ordination across multiple departments and stakeholders.
  • Regulatory uncertainty and liability concerns: In the United States in 2025, AI is becoming more integrated into healthcare at the same time that funding for federal and state support for medical research is becoming less certain. Likewise, the evolving regulatory landscape means that a number of concerns remain unresolved for healthcare providers and AI developers regarding compliance requirements, liability allocation and professional standards.

A number of market forces are affecting the development and deployment of healthcare AI solutions in the United States, including the following.

  • Technology company leadership: Major technology corporations are driving significant innovation in healthcare AI through substantial research and development investments. Companies such as Google Health, Microsoft Healthcare, Amazon Web Services and IBM Watson Health continue to develop foundational AI platforms and tools.
  • Healthcare provider initiatives: Large health systems and academic medical centres lead healthcare AI adoption through dedicated innovation centres, research partnerships and pilot programmes. These organisations often serve as testing grounds for emerging AI technologies and develop the internal expertise and governance frameworks that smaller healthcare providers can subsequently adopt.
  • Pharmaceutical industry investment: Pharmaceutical companies increasingly integrate AI throughout drug development pipelines, from target identification and molecular design to clinical trial optimisation and regulatory submissions. These investments aim to reduce development costs and timelines while improving success rates for new therapeutic approvals.
  • Notable industry collaborations: Strategic partnerships between technology companies, academic medical centres and healthcare organisations facilitate AI development and deployment through shared expertise and resources while maintaining access to diverse patient populations for training and validation studies. The Coalition for Health AI (CHAI) represents a significant industry collaboration focused on developing best practices and standards for healthcare AI implementation. In June 2025, The Joint Commission announced a new partnership with CHAI to help accelerate the development and adoption of best practices in order “to elevate patient safety and quality, and ultimately improve health outcomes for all”.
  • Investment and funding trends: Venture capital investment in healthcare AI continues, with billions of dollars allocated to start-ups and established companies developing innovative AI solutions. However, uncertainties regarding government funding through federal agencies may affect investor sentiment.
  • Market consolidation and acquisition activity: Large healthcare technology companies increasingly acquire specialised AI start-ups to integrate innovative capabilities into comprehensive healthcare platforms. These acquisitions accelerate technology deployment while providing start-ups with the resources necessary for large-scale implementation and regulatory compliance.

The United States lacks a single, comprehensive definition of healthcare AI across regulatory agencies. Instead, different federal bodies provide context-specific definitions tailored to their respective jurisdictions and regulatory frameworks.

FDA Classification Approach

The FDA regulates healthcare AI primarily under existing medical device frameworks, classifying AI-enabled software as “software as a medical device” (SaMD) when it meets specific criteria for medical purposes. The FDA’s traditional paradigm of medical device regulation was not designed for adaptive AI and ML technologies. This creates unique challenges for continuously learning algorithms that may evolve after initial market authorisation.

In January 2021, the FDA issued the AI/ML-based SaMD Action Plan, which outlined the following five actions based on the total product life cycle (TPLC) approach for the oversight of AI-enabled medical devices:

  • tailoring the regulatory framework with the issuance of draft guidance on predetermined change control plans (PCCPs);
  • harmonising good machine learning practices (GMLPs);
  • developing a patient-centric approach, including ensuring transparency of devices for users;
  • supporting methods for the elimination of ML algorithm bias and algorithm improvement; and
  • working with stakeholders piloting real-world performance monitoring.

Regulatory Categories by Function

Healthcare AI systems receive different regulatory treatment in the United States based on their intended functions and clinical applications:

  • diagnostic AI systems undergo medical device regulation when they analyse patient data to provide diagnostic information or recommendations;
  • therapeutic AI systems directly influence treatment decisions or provide therapeutic interventions, and therefore face the most stringent regulatory requirements; and
  • administrative AI systems used for non-clinical purposes such as scheduling, billing or operational management generally fall outside FDA medical device regulation but may be subject to other privacy and security requirements.

Emerging Classification Challenges

As of late 2023, the FDA had not approved any devices that rely on a purely generative AI (genAI) architecture. Because genAI technologies can create synthetic content, including medical images and clinical text, they may require new regulatory approaches.

The distinction between clinical decision support tools and medical devices remains an ongoing area of regulatory clarification. Software that provides information to healthcare providers for clinical decision-making may or may not constitute a medical device depending on the specific functionality and level of interpretation provided.

Federal Medical Device Regulation

The Federal Food, Drug, and Cosmetic Act (FFDCA) provides the foundational legal framework governing healthcare AI systems that meet medical device criteria. In 2021, the Health Information Technology for Economic and Clinical Health Act (the “HITECH Act”) was amended to require the Health and Human Services (HHS) Secretary to further encourage regulated entities to bolster their cybersecurity practices. The 21st Century Cures Act clarified FDA authority over certain software functions while exempting specific low-risk applications from medical device regulation. In January 2025, proposed legislation (the Healthy Technology Act of 2025 (H.R. 238)) was introduced to amend the FFDCA and allow AI systems to prescribe FDA-approved drugs autonomously.

Cybersecurity Requirements

The Consolidated Appropriations Act of 2023 amended the FFDCA to require cybersecurity information in pre-market submissions for “cyber devices”. Medical device manufacturers must now include cybersecurity information in pre-market submissions for AI-enabled devices that connect to networks or process electronic data.

Health Information Privacy Regulation

HIPAA and the HITECH Act establish comprehensive privacy and security requirements for protected health information (PHI) used in AI systems. The introduction of AI does not change the traditional HIPAA rules on permissible uses and disclosures of PHI.

AI-Specific Privacy Considerations

AI tools must be designed to access and use only the PHI strictly necessary for their purpose, even though AI models often seek comprehensive datasets to optimise performance. Healthcare organisations must ensure that AI vendors processing PHI operate under robust business associate agreements (BAAs) that specify permissible data uses and required safeguards.

Executive Branch AI Initiatives

President Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence established government-wide AI governance requirements that affected healthcare applications. The order was rescinded by President Trump on 20 January 2025, within hours of his inauguration. Further White House action in this area is uncertain.

State-Level AI Regulation

Multiple states have enacted healthcare-specific AI legislation addressing various aspects of AI deployment and use. In the 2024 legislative session, 45 states, Puerto Rico, the US Virgin Islands and the District of Columbia introduced AI bills, and 31 states, Puerto Rico and the US Virgin Islands adopted resolutions or enacted legislation.

Federal Anti-Discrimination Requirements

In July 2024, new requirements were put in place to help protect consumers from discrimination when AI tools are used in healthcare. A final rule, published by the Department of HHS Office for Civil Rights (OCR) under Section 1557 of the Affordable Care Act (ACA), stated that healthcare entities must ensure AI systems do not discriminate against protected classes and must take corrective action when discrimination is identified. Given President Trump’s opposition to diversity, equity and inclusion (DEI) initiatives, however, it is uncertain whether or how compliance with the final rule will be enforced.

FDA Pre-Market Pathways

Healthcare AI developers must navigate established FDA pre-market pathways depending on their system’s risk classification and intended use. The FDA reviews medical devices through an appropriate pre-market pathway, as follows.

  • 510(k) pre-market notification: Most AI-enabled medical devices receive market authorisation through the 510(k) pathway, which requires demonstration of substantial equivalence to existing legally marketed devices. This pathway may require less extensive clinical testing compared to other approval routes but requires comprehensive technical documentation and validation data.
  • De novo classification: Novel AI systems without suitable predicate devices may pursue de novo classification, which establishes new device classifications and regulatory pathways. This process requires more extensive documentation and often includes clinical validation requirements.
  • PMA: High-risk AI systems supporting life-sustaining functions or presenting significant safety risks require PMA, the most stringent FDA review process. PMA requires extensive clinical trials demonstrating safety and effectiveness through rigorous scientific evidence.

PCCPs

FDA guidance on PCCPs recommends the information to include in a PCCP as part of a marketing submission for a medical device using AI. The PCCP should include a description of the device’s planned modifications; the methods to develop, validate and implement those modifications; and an assessment of the modifications’ impacts. This innovative approach enables AI developers to modify their systems without additional pre-market submissions when changes fall within predetermined parameters.

Clinical Evidence Requirements

AI system developers must provide clinical evidence demonstrating safety and effectiveness for intended uses. Evidence requirements vary based on risk classification, with higher-risk systems requiring more extensive clinical validation. The FDA emphasises real-world evidence and post-market surveillance.

Expedited Pathways

The FDA provides several expedited pathways for breakthrough medical devices, including AI systems, that address unmet medical needs or provide significant advantages over existing treatments. These pathways offer enhanced FDA communication and expedited review timelines while maintaining safety and effectiveness standards.

Regulatory Framework for AI-Based SaMD

On 6 January 2025, the FDA published the Draft Guidance: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations. This comprehensive guidance addresses unique challenges posed by AI-enabled software medical devices.

Other issues include the following.

  • Continuous learning systems: The FDA’s traditional paradigm of medical device regulation was not designed for adaptive AI and ML technologies that continue to learn and change after market authorisation.
  • Algorithm transparency and explainability: Healthcare AI systems must provide sufficient transparency to enable healthcare providers to understand system recommendations and limitations. The FDA emphasises the importance of explainable AI that allows clinicians to understand the reasoning behind algorithmic recommendations.
  • Training data requirements: Clinical study participants and datasets should be representative of the intended patient population. AI-based SaMD developers must ensure training datasets avoid bias and ensure generalisability across diverse clinical settings.
  • Post-market surveillance requirements: AI-enabled medical devices require robust post-market surveillance programmes that monitor real-world performance and detect potential safety issues or performance degradation.

Developers and users of healthcare AI must adhere to a number of data privacy and security requirements, including:

  • HIPAA compliance – a regulation proposed by the Department of HHS in 2025 states that entities using AI tools must include those tools as part of their risk analysis and risk management compliance activities, including risk assessments, security measures and breach notification procedures;
  • minimum necessary standard application – AI tools must be designed to access and use only the PHI strictly necessary for their purpose, even though AI models often seek comprehensive datasets to optimise performance;
  • de-identification requirements – healthcare AI systems must meet HIPAA’s safe harbour or expert determination standards and prevent re-identification when datasets are combined;
  • BAAs – BAAs must include language covering permissible data use and safeguards, as well as AI-specific risks including algorithm updates, data retention policies and security measures for ML processes; and
  • patient consent requirements – healthcare AI deployment requires careful consideration of patient consent, particularly when such systems influence clinical decisions or when data is used for secondary purposes.

A number of mandated and voluntary standards regimes apply to healthcare AI, and multiple standards organisations have developed technical requirements and best practices for interoperability of such systems. Among others, the Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) standard enables AI systems to exchange data with electronic health record systems and other healthcare technologies.
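
By way of illustration, the sketch below shows how an AI service might retrieve coded patient observations from a FHIR-compliant electronic health record over the standard REST interface. The base URL, patient identifier and access token are hypothetical placeholders, and the LOINC code shown (85354-9, blood pressure panel) is simply one example of a coded query; this is a minimal sketch under stated assumptions, not a production integration.

```python
# Minimal sketch: querying Observation resources from a hypothetical FHIR server.
# FHIR_BASE, PATIENT_ID and the bearer token are placeholders, not real endpoints.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"        # hypothetical FHIR R4 endpoint
PATIENT_ID = "example-patient-id"                 # hypothetical patient identifier
HEADERS = {
    "Accept": "application/fhir+json",
    "Authorization": "Bearer <access-token>",     # typically issued via SMART on FHIR / OAuth 2.0
}

def fetch_blood_pressure_observations(patient_id: str) -> list:
    """Return Observation resources for the patient coded with LOINC 85354-9 (blood pressure panel)."""
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": "http://loinc.org|85354-9"},
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()                      # FHIR "searchset" Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    observations = fetch_blood_pressure_observations(PATIENT_ID)
    print(f"Retrieved {len(observations)} Observation resources")
```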

The FDA recognises consensus standards developed by organisations such as ASTM International, the Institute of Electrical and Electronics Engineers (IEEE), and the International Organization for Standardization (ISO) that address AI system safety, performance and cybersecurity requirements; establish quality management system requirements for medical device development; and address design controls, risk management and validation processes throughout the AI development life cycle.

The Department of HHS proposed regulations in January 2025 that require covered entities to conduct vulnerability scanning at least every six months and penetration testing at least annually. The National Institute of Standards and Technology (NIST) publishes comprehensive cybersecurity frameworks that provide guidelines for protecting AI systems and the health information they process.

Adherence to established data exchange standards, application programming interface (API) specifications and workflow integration protocols enables AI tools to function within complex healthcare technology environments.

US federal regulatory bodies that oversee healthcare AI include the following.

  • FDA: The FDA’s Digital Health Center of Excellence, Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), Center for Devices and Radiological Health (CDRH), and Office of Combination Products (OCP) share responsibility for regulating different aspects of healthcare AI depending on the specific application and product type.
  • Department of HHS: The Department of HHS serves as the primary federal department responsible for healthcare policy and regulation, with multiple agencies addressing different aspects of healthcare AI oversight. The department co-ordinates AI governance across healthcare programmes while establishing government-wide policies for AI use in federal healthcare services.
  • The HHS OCR: The OCR has placed a heavy focus on the potential for unauthorised use or disclosure of PHI through the use of emerging technologies. The OCR enforces the HIPAA Privacy, Security, and Breach Notification Rules for healthcare AI systems, ensuring patient privacy and data security compliance.
  • Centers for Medicare & Medicaid Services (CMS): The CMS establishes coverage and reimbursement policies for AI-enabled healthcare services and technologies through Medicare, Medicaid and other federal health programmes.
  • Office of the National Coordinator for Health Information Technology (ONC): The ONC co-ordinates nationwide efforts to implement health information technology and promote secure electronic health information exchange.
  • Centers for Disease Control and Prevention (CDC): The CDC provides leadership in disease prevention and public health emergency response, utilising AI tools for population health monitoring, disease surveillance and epidemiological analysis.
  • Federal Trade Commission (FTC): The FTC regulates AI-related advertising claims, the privacy practices of non-HIPAA covered entities, competition in digital health markets, and compliance with consumer protection laws and truth-in-advertising principles.

Healthcare AI developers must comply with a range of pre-market requirements involving the following, among other issues.

  • Clinical validation: Healthcare AI developers must provide comprehensive clinical validation demonstrating safety and effectiveness for intended uses; ensure that the relevant characteristics of the intended patient population are represented; and ensure that results can be reasonably generalised to the intended use population.
  • Technical documentation: AI system developers must submit detailed technical documentation describing algorithm design, training methodologies, validation processes and performance characteristics.
  • Risk assessment frameworks: Comprehensive risk assessments must include reviews of AI-specific hazards, including algorithmic bias, cybersecurity vulnerabilities and performance degradation over time.
  • Bias testing and mitigation: Regular testing and validation of AI tools and algorithms is required to ensure compliance with non-discrimination standards.
  • Algorithmic transparency requirements: AI systems must provide sufficient transparency to enable healthcare providers and regulatory reviewers to understand system functionality, limitations and appropriate use cases.
  • Cybersecurity and data protection: The Consolidated Appropriations Act of 2023 added Section 524B to the FFDCA, requiring medical device manufacturers to include cybersecurity information in pre-market submissions.

Following market introduction, healthcare AI systems must be monitored for ongoing performance and compliance. Key areas of concern include:

  • ongoing monitoring requirements – organisations should implement multilayered approaches that include regular scanning for outdated code or anomalies in AI systems and monitoring for behaviour in clinical practice that differs from behaviour in controlled development environments, using key performance indicators to detect performance drift and potential bias or safety issues (a simplified monitoring sketch follows this list);
  • adverse event reporting – adverse events and safety issues associated with AI system use must be reported through established FDA reporting channels;
  • algorithm update processes – post-market algorithm updates require ongoing evaluation to ensure safety and effectiveness. PCCPs ensure regulatory oversight of significant changes while enabling systematic updates within predefined parameters; and
  • real-world evidence collection – post-market surveillance programmes should be implemented to collect and analyse real-world evidence of AI system performance.
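
The simplified monitoring sketch referenced above tracks a rolling accuracy figure for a deployed model against its pre-market validation baseline and flags drift for human review. The baseline value, window size and tolerance are illustrative assumptions, not regulatory thresholds.

```python
# Minimal sketch of a post-market performance monitor: compares a rolling accuracy
# figure against a pre-market validation baseline and flags drift for human review.
# The baseline value, window size and tolerance are illustrative assumptions.
from collections import deque


class DriftMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = model output agreed with the adjudicated outcome

    def record(self, correct: bool) -> None:
        """Record whether the latest adjudicated case confirmed the model's output."""
        self.outcomes.append(1 if correct else 0)

    def drift_detected(self) -> bool:
        """Return True once a full window shows accuracy falling below baseline minus tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # wait for a full window before alerting
        rolling_accuracy = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - rolling_accuracy) > self.tolerance


# Example: baseline accuracy of 0.92 taken from pre-market validation, 100-case window.
monitor = DriftMonitor(baseline_accuracy=0.92, window=100)
for case_correct in [True] * 80 + [False] * 20:  # hypothetical adjudicated cases
    monitor.record(case_correct)
print("Drift detected:", monitor.drift_detected())
```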

Non-compliance with relevant laws and regulations can be addressed in several ways.

  • FDA enforcement mechanisms: FDA enforcement actions include warning letters, product recalls, injunctions, marketing prohibitions, civil monetary penalties and criminal referrals.
  • HIPAA privacy and security enforcement: The OCR enforces HIPAA violations through civil monetary penalties and corrective action plans. The OCR also introduced its risk analysis initiative at the end of 2024, focusing OCR enforcement on entities that fail to properly conduct the required periodic security risk analysis (SRA).
  • State-level enforcement: State attorneys general increasingly enforce state-specific data privacy laws and consumer protection statutes against healthcare AI companies.
  • Professional licensing board actions: State medical and professional licensing boards may take disciplinary action against healthcare providers for inappropriate AI use or failure to meet professional standards.

Healthcare AI liability generally operates within established medical malpractice frameworks that require the establishment of four key elements: duty of care, breach of that duty, causation and damages. When AI systems are involved in patient care, determining these elements becomes more complex. While a physician must exercise the skill and knowledge normally possessed by other physicians, AI integration creates uncertainty about what constitutes reasonable care.

Healthcare AI liability often involves multiple stakeholders, including healthcare providers, AI developers, healthcare institutions and others in the AI supply chain. For example, a consultation that results in patient harm might implicate the treating physician, the health system and the developers of clinical decision support software used during the encounter.

Other considerations include the following.

  • Product liability: AI-enabled medical devices may be subject to product liability claims under theories of design defect, manufacturing defect or failure to warn. The “black box” nature of some AI systems can further complicate product liability analysis.
  • Institutional liability: Healthcare institutions face potential liability for AI system selection, implementation, training and oversight. Hospitals and health systems must establish appropriate governance frameworks, staff training programmes and quality assurance processes.
  • Insurance coverage considerations: The distribution of liability will likely shift as device manufacturers, algorithm developers, administrators and other parties include AI products in deployed diagnostic and treatment tools. Since professional liability insurance policies may not cover (adequately or at all) AI-related claims, healthcare providers may be forced to secure specialised coverage or policy modifications.
  • Emerging liability theories: Recent litigation has introduced novel liability theories specific to AI systems, including algorithmic negligence claims when AI systems produce systematically biased outcomes, breach of fiduciary duty for inappropriate reliance on AI recommendations, consumer protection claims for misrepresentation of AI capabilities and contract claims for AI systems failing to meet performance specifications.

Traditional malpractice standards must adapt to address algorithm-based recommendations and decision support. In April 2024, the Federation of State Medical Boards released recommendations to its members indicating, among other suggestions, that they should hold clinicians liable if AI technology makes a medical error. Healthcare providers must understand AI system limitations and maintain appropriate clinical judgment when incorporating algorithmic recommendations into patient care decisions.

Causation Challenges

When algorithms influence or drive medical decisions, determining responsibility for adverse outcomes presents novel legal challenges not fully addressed in existing liability frameworks. Among other issues, courts must evaluate whether AI system recommendations served as a proximate cause of patient harm, as well as the impacts of the healthcare provider’s independent medical judgment and other contributing factors.

Documentation and Evidence Requirements

Healthcare providers must maintain detailed documentation of AI system use, including the specific recommendations provided, clinical reasoning for accepting or rejecting algorithmic guidance and any modifications made to AI-generated suggestions.

Expert Testimony Considerations

AI-related malpractice cases may require expert witnesses with specialised knowledge of medical practice and existing AI technology capabilities and limitations. Such experts should have the experience necessary to evaluate whether healthcare providers used AI systems in an appropriate manner and whether algorithmic recommendations met relevant standards.

Burden of Proof Considerations

Plaintiffs in AI-related malpractice cases face challenges proving that AI system errors directly caused patient harm, particularly when healthcare providers retained decision-making authority. Decisions regarding potential liability often depend on judgments made by lay-person jurors.

To mitigate risks associated with healthcare AI, developers, vendors, health systems and practitioners should:

  • develop and deploy institutional risk assessments – these assessments should evaluate potential clinical risks, cybersecurity vulnerabilities, privacy threats and liability exposures associated with AI applications;
  • establish AI governance frameworks – robust AI governance frameworks address system selection, validation, implementation, monitoring and updates, and are led by multidisciplinary committees with the clinical, technical, legal and ethical expertise needed to oversee AI deployment;
  • reinforce staff training and competency – comprehensive training and support programmes should be provided to ensure that healthcare professionals understand AI system capabilities, limitations and appropriate use cases, and that staff understand escalation protocols for situations in which AI recommendations conflict with clinical judgment;
  • implement quality assurance programmes – ongoing quality assurance programmes monitor AI system performance, detect potential issues and ensure continued safety and effectiveness in real-world deployment;
  • obtain effective insurance – healthcare organisations should evaluate the adequacy of existing insurance coverage for AI-related risks and, where gaps exist, consider specialised policies addressing emerging liability exposures; and
  • create effective vendor management protocols – vendor management processes for AI system procurement should include due diligence on vendor capabilities, contractual risk allocation and ongoing vendor performance monitoring, among other concerns.

When disputes arise, healthcare providers, systems, and healthcare AI developers can look to several defence strategies:

  • regulatory compliance – healthcare providers may assert regulatory compliance as a defence against AI-related liability claims by demonstrating adherence to applicable laws, regulations, professional standards and institutional policies;
  • informed consent protections – proper informed consent processes that disclose AI system use, limitations and potential risks may provide liability protections;
  • state-of-the-art defences – healthcare providers may argue that their use of AI systems reflects current state-of-the-art in medical practice, particularly when following established clinical guidelines and professional recommendations;
  • learned intermediary doctrine – this doctrine may shield AI developers from direct liability to patients, particularly where healthcare providers serve as intermediaries who evaluate and apply algorithmic recommendations; and
  • contractual risk allocation – well-drafted contracts between healthcare providers and AI vendors can allocate liability risks through indemnification clauses, limitation of liability provisions and clear scope of service descriptions.

In the United States, healthcare AI ethical frameworks emphasise core principles such as beneficence, non-maleficence, autonomy and justice. These principles guide AI development and deployment decisions while addressing unique challenges posed by algorithmic decision-making in healthcare settings. Some of the more commonly known (and mostly voluntary) frameworks include the following.

  • Federal ethical guidelines: Immediately following President Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 28 healthcare providers and payers voluntarily committed to compliance. Although the order has since been rescinded by President Trump, some organisations remain committed to the standards established in the order.
  • Professional society guidelines: Medical professional societies, including the American Medical Association, have developed principles for AI use that emphasise physician responsibility, patient safety, ethical deployment practices, informed consent, algorithm transparency and professional liability.
  • Institutional ethics committees: Healthcare organisations increasingly establish AI ethics committees or incorporate AI considerations into existing institutional review boards (IRBs) and ethics committees. These bodies provide oversight for AI deployment decisions while addressing ethical dilemmas that arise during implementation and use.
  • International ethical standards: Healthcare AI development increasingly references international ethical frameworks, including those developed by the World Health Organization (WHO) and other global health organisations. While US participation in the WHO is likely to be limited, at best, during the Trump administration, voluntary adherence to these standards is likely given patient and other stakeholder pressures.

To minimise risk to patients, providers and health systems, a number of tools can be implemented, including the following.

  • Patient disclosures: At the state level, many healthcare providers face increasing requirements to disclose AI system use to patients and obtain appropriate consent for AI-assisted care. For example, California’s AB 3030 regulates the use of genAI in healthcare provision.
  • Algorithmic transparency standards: The FDA’s guiding principles regarding transparency for ML-enabled medical devices require that AI systems offer sufficient transparency that balances the information needs of healthcare providers and patients against proprietary algorithm details and trade secrets that companies may wish to protect.
  • Decision explanations: AI systems must provide understandable explanations for their recommendations, which healthcare providers in turn use to communicate with patients.

Given the rapid turnabout in executive-branch policy toward DEI and anti-discrimination initiatives, it remains to be seen how federal healthcare AI regulations with respect to bias and fairness will be affected. The following review looks at policies that existed before the Trump administration; it is fair to say that many of these will be revised this year.

  • Regulatory anti-discrimination requirements: On 26 April 2024, the Department of HHS issued a final rule under Section 1557 of the ACA advancing protections against discrimination in healthcare which – with respect to AI – underscored the importance of inclusive data practices and continuous evaluation of AI tools and algorithms to promote equitable health outcomes.
  • Training data diversity: AI system developers have, until recently, faced increasing regulatory pressure to ensure training datasets adequately represent diverse patient populations. Most healthcare AI developers and practitioners continue to maintain that relevant characteristics – including age, gender, sex, race and ethnicity – should be appropriately represented and tracked in clinical studies to ensure that results can be reasonably generalised to the intended use population.
  • Bias testing and mitigation: Healthcare organisations should implement systematic bias testing and mitigation strategies throughout the AI life cycle (an illustrative check is sketched after this list), with the following goals in mind:
    1. promotion of health and health care equity;
    2. ensuring that healthcare algorithms and their uses are transparent and explainable;
    3. engaging with – and earning the trust of – patients and communities;
    4. identification of healthcare algorithmic fairness issues and trade-offs; and
    5. ensuring accountability for equity and fairness in outcomes.
  • Protection of vulnerable populations: Special attention must be paid to protecting vulnerable populations, including paediatric patients, elderly individuals, racial and ethnic minorities, and individuals with disabilities.
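
The illustrative check referenced above is a deliberately simplified example of such bias testing: it compares a model’s sensitivity (true positive rate) across demographic subgroups and flags the model for review when the gap exceeds a chosen tolerance. The column names, metric and tolerance are illustrative assumptions; real bias audits use a broader battery of metrics and clinically meaningful subgroups.

```python
# Minimal sketch of one bias check: compare the model's sensitivity (true positive rate)
# across demographic subgroups and flag the model for review if the gap exceeds a tolerance.
# Column names, the metric and the tolerance are illustrative assumptions.
import pandas as pd


def sensitivity_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Mean of y_pred among true positives per subgroup equals the per-group sensitivity."""
    positives = df[df["y_true"] == 1]
    return positives.groupby(group_col)["y_pred"].mean()


def flag_disparity(rates: pd.Series, tolerance: float = 0.05) -> bool:
    """Flag for review if subgroup sensitivities differ by more than the tolerance."""
    return bool(rates.max() - rates.min() > tolerance)


if __name__ == "__main__":
    # Hypothetical adjudicated results: y_true is the reference label, y_pred the model output.
    results = pd.DataFrame({
        "y_true":         [1, 1, 1, 1, 0, 0, 1, 1],
        "y_pred":         [1, 1, 1, 1, 0, 1, 1, 0],
        "race_ethnicity": ["A", "A", "B", "B", "A", "B", "A", "B"],
    })
    rates = sensitivity_by_group(results, "race_ethnicity")
    print(rates)
    print("Disparity flagged:", flag_disparity(rates))
```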

In most federal and state regulatory schemes, ultimate responsibility for healthcare AI systems is assigned to the people and organisations that implement them, not to the AI itself. Specific best practices include:

  • providers maintain clinical decision-making authority – healthcare providers must maintain ultimate authority for clinical decisions even when using AI-powered decision support tools;
  • humans in the loop – healthcare AI applications must require meaningful human involvement in decision-making processes rather than defaulting to fully automated systems;
  • override capabilities – AI systems must provide healthcare providers with clear, easily accessible mechanisms to override algorithmic recommendations when clinical judgment suggests alternative approaches;
  • competency and training – healthcare providers using AI systems must be provided with the tools to achieve system competency through ongoing training and education programmes; and
  • quality assurance – healthcare organisations must implement robust quality assurance programmes that monitor AI system performance and healthcare provider usage patterns.

Training data for healthcare AI systems must meet stringent standards in order to provide meaningful information and outcomes.

  • Data quality: Healthcare AI systems require complete, accurate, consistent and relevant training data that accurately represents the clinical conditions and patient populations for which the AI will be used.
  • Representation, diversity and bias: Training datasets must include adequate representation across demographic groups, clinical conditions and healthcare settings to ensure AI system generalisability. Likewise, AI developers must systematically identify and document potential biases in training data and implement bias-mitigation strategies.
  • Data provenance and lineage: Comprehensive documentation of data sources, collection methods and processing steps enables proper evaluation of training data quality and potential limitations.

The transfer of PHI and other sensitive information should occur only under specific rules that protect patient privacy and address the following.

  • Legal frameworks for data reuse: Healthcare data collected for clinical purposes may be reused for AI training and development under specific legal frameworks that address consent, privacy protection and data use limitations.
  • Consent requirements: Organisations must obtain appropriate consent for secondary use of health data in AI development, with consent requirements varying based on data sensitivity, intended use and applicable legal frameworks.
  • Research and development exemptions: Certain research activities may qualify for exemptions from standard consent requirements. These exemptions typically require IRB approval and implementation of appropriate privacy safeguards.
  • Data use agreements: Secondary use of health data for AI development typically requires formal data use agreements that specify permitted uses, privacy protections and data handling requirements, as well as restrictions on further disclosure or use.

Data sharing activities should address the following:

  • collaborative research frameworks, including federated learning approaches, secure multiparty computation and other technologies that enable data sharing across institutions while maintaining privacy protections;
  • cross-border data transfer restrictions that comply with federal, state and international laws and regulations that may limit where health data can be processed or stored during AI development;
  • data sharing agreements that specify the terms and conditions for collaborative AI development projects, including data access rights, use limitations, intellectual property ownership, regulatory compliance and liability allocation; and
  • the role of industry consortia in facilitating AI development through shared datasets and collaborative research initiatives.

To better ensure appropriate safeguards for, and anonymity of, health data, the following must be taken into account.

  • HIPAA de-identification standards: Healthcare organisations must comply with HIPAA de-identification requirements when using health data for AI development. HIPAA provides two methods for de-identification: safe harbour, which requires the removal of specific identifiers (an illustrative sketch follows this list); and expert determination, which relies on statistical and scientific principles to minimise re-identification risk.
  • Re-identification risks: AI systems may create new re-identification risks through pattern recognition capabilities that can infer individual identities from seemingly anonymous data. Organisations must guard against re-identification risks when datasets are combined and implement ongoing monitoring for potential privacy breaches.
  • Synthetic data generation: Synthetic data generation techniques can create realistic datasets for AI training while protecting individual privacy. These approaches use statistical methods or genAI to preserve important statistical properties while removing direct links to individual patients.
  • Technical safeguards: Strong technical safeguards must be implemented when using de-identified data for AI training – including access controls, encryption, audit logging and secure computing environments – and should address both intentional and accidental re-identification risks throughout the AI development process.
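
The illustrative sketch referenced in the first item of the list above shows the general shape of safe-harbour style de-identification: direct identifiers are dropped and quasi-identifiers such as dates and ZIP codes are coarsened before records are used for model training. The field names are hypothetical, and the actual safe-harbour method covers 18 identifier categories defined in the HIPAA Privacy Rule with additional conditions, so this is an illustration of the approach rather than a compliance tool.

```python
# Minimal sketch of safe-harbour style de-identification: drop direct identifiers and
# coarsen quasi-identifiers before records are used for model training. Field names are
# hypothetical; the actual safe-harbour method covers 18 identifier categories and adds
# conditions (e.g. ZIP handling for small populations, ages over 89) not reproduced here.
import copy

DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_number", "device_serial", "photo_url",
}


def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed and dates/ZIP coarsened."""
    cleaned = copy.deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        cleaned.pop(field, None)
    if "date_of_service" in cleaned:
        cleaned["date_of_service"] = cleaned["date_of_service"][:4]  # keep year only
    if "zip_code" in cleaned:
        cleaned["zip_code"] = cleaned["zip_code"][:3] + "00"          # truncate to three digits
    return cleaned


if __name__ == "__main__":
    raw = {
        "name": "Jane Doe",
        "ssn": "000-00-0000",
        "zip_code": "70170",
        "date_of_service": "2025-03-14",
        "diagnosis_code": "E11.9",
    }
    print(deidentify(raw))
    # {'zip_code': '70100', 'date_of_service': '2025', 'diagnosis_code': 'E11.9'}
```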

Patents remain one of the most effective tools for protecting healthcare innovations. AI developers should consider the following when developing new systems and platforms.

  • Patentability of AI algorithms and software: Healthcare AI innovations are eligible for patent protection when they meet traditional patentability requirements of novelty, non-obviousness and utility, and satisfy subject matter eligibility requirements under current patent law interpretations. However, abstract mathematical algorithms and natural phenomena often cannot be patented.
  • Medical method patents: AI-enabled diagnostic and treatment methods may qualify for patent protection when they apply emerging AI technology to specific medical problems. These patents must demonstrate technical innovation beyond merely applying known AI techniques.
  • International patent strategies: Healthcare AI companies must often develop tailored, international patent strategies that address varying patent law requirements across different jurisdictions.

Recent developments suggest increased scrutiny of AI-related patents. The Federal Circuit’s approach to software patentability under the Alice framework continues to evolve, with AI-specific considerations including whether the AI application provides a technical improvement to computer functionality, the degree of human intervention in AI-generated inventions and potential heightened obviousness rejections when prior art includes AI tools.

Software code, user interfaces, and documentation may receive copyright protection as original works of authorship. However, copyright protection does not extend to underlying algorithms or mathematical concepts, limiting its scope for AI innovations.

Additional issues to consider include:

  • training data copyright – when using third-party data for training purposes, developers should obtain appropriate licences for copyrighted medical literature, imaging databases and other materials; and
  • trade secret protection – trade secret protection may provide valuable intellectual property protection for AI algorithms, training datasets and development methodologies that derive economic value from remaining confidential.

When multiple parties contribute to the development, deployment and analysis of healthcare AI technology and its outputs, the following issues should be considered.

  • Generated content ownership: Legal frameworks governing the ownership of AI-generated diagnostic findings, treatment recommendations and other clinical outputs remain largely unsettled, as traditional intellectual property concepts may not adequately address ownership questions.
  • Healthcare provider rights: Healthcare providers using AI systems may claim ownership rights in AI-generated clinical insights that result from their patient data and clinical expertise. These ownership claims may conflict with AI developer intellectual property rights.
  • Patient data contributions: Patients whose data contributes to AI training and operation may have interests in AI outputs that incorporate their health information. However, existing legal frameworks provide limited protection for patient interests in AI-generated insights derived from their data.
  • Contractual ownership arrangements: Contracts between AI developers and healthcare organisations typically must include specific provisions allocating intellectual property ownership.

Two primary licensing and commercialisation models prevail in today’s healthcare AI marketplace:

  • commercial licensing models – these include subscription-based software-as-a-service arrangements, per-use licensing and comprehensive platform licences; and
  • academic-industry collaboration – academic medical centres and commercial AI developers frequently collaborate on healthcare AI development through licensing agreements, research partnerships and joint ventures.

Specific considerations must be addressed with respect to:

  • open source frameworks – open source components and frameworks may impose specific licensing requirements and obligations; and
  • technology transfer processes – such processes must balance public interest in technology access with commercial development incentives.

AI-based clinical decision support systems receive different regulatory treatment depending on their specific functionality and the level of interpretation provided to healthcare providers. Other considerations include:

  • implementation requirements – healthcare organisations must establish appropriate governance frameworks for clinical decision support AI deployments;
  • clinical evidence standards – AI systems must demonstrate clinical validity and utility through appropriate evidence generation, including retrospective studies, prospective validation and real-world evidence collection;
  • user interface and workflow integration – effective clinical decision support requires user interfaces and workflow integration that enhance rather than disrupt clinical practice and provide actionable recommendations at appropriate times while minimising alert fatigue and cognitive burden; and
  • liability and professional standards – healthcare providers retain professional responsibility and accountability for clinical decisions while benefitting from algorithmic insights.

AI-based diagnostic tools are regulated as medical devices when they analyse patient data to provide diagnostic information or recommendations. These systems typically require clinical validation demonstrating safety and effectiveness for specific intended uses.

Different medical specialties have developed specific frameworks for AI diagnostic tools that address unique validation requirements and clinical applications. For example, radiology AI systems may require different validation approaches compared to pathology or cardiology applications.

Successful diagnostic AI deployment requires effective integration with existing clinical workflows, imaging systems and laboratory processes.

AI systems used in treatment planning or therapeutic decision-making face regulatory oversight based on their risk classification and potential impact on patient care. Higher-risk applications may require extensive clinical validation, PMA processes and meaningful human oversight to ensure appropriate clinical judgment and professional accountability, as well as robust safety monitoring programmes.

Therapeutic AI systems should demonstrate clinical benefits through appropriate evidence generation.

Although in use for decades, telemedicine demonstrated its value most clearly during the recent COVID-19 pandemic. In this context, healthcare AI applications used in remote patient monitoring and telemedicine must comply with both AI-specific regulations and broader telemedicine legal frameworks.

AI systems functioning in home or non-clinical settings face unique regulatory challenges related to device performance, user training and clinical oversight, as well as data privacy and security requirements for data collected outside traditional healthcare settings.

At-home and remote-monitoring AI also requires integration with clinical workflows that enable healthcare providers to review data, respond to alerts and co-ordinate care for remote patients. Finally, AI-enabled remote monitoring services must navigate complex reimbursement landscapes that vary by payer, service type and clinical application.

AI tools used in drug discovery and development receive regulatory oversight through established pharmaceutical development pathways, but are also subject to a number of challenges posed by algorithmic approaches to drug design and clinical trial optimisation, including:

  • clinical trial design – AI applications must meet good clinical practice standards while demonstrating advantages over traditional trial approaches;
  • validation requirements – AI-driven drug discovery tools must undergo appropriate validation to ensure reliability and accuracy for their intended uses in pharmaceutical development; and
  • intellectual property considerations – drug discovery raises complex intellectual property questions regarding the ownership of AI-generated insights, novel compound designs and therapeutic targets.

The second Trump administration has taken great pains to change the role of the federal government with respect to technology, medical research and much, much more. In this environment, it is difficult to predict with certainty how the healthcare AI landscape will develop over the next year or two. With that in mind, however, certain questions remain.

  • Will Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence, which was rescinded by President Trump, be revived in some other form, particularly those provisions addressing AI use in healthcare?
  • Will the FDA’s Draft Guidance: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations, which includes specific recommendations to support marketing submissions for AI-enabled medical devices, come into force?
  • Will the regulations proposed by the Department of HHS in January 2025, specifying that covered entities must conduct vulnerability scanning at least every six months and penetration testing at least annually, be reversed?
  • Given the current paralysis of the US Congress, will proposals addressing healthcare AI, including AI governance, data privacy and healthcare-specific applications, see the light of day?

Given current executive-branch disruptions and uncertainties, existing innovation programmes and regulatory sandboxes may soon lapse. The following programmes may be under threat.

  • FDA innovation pathways: The FDA’s Digital Health Center of Excellence was established to provide regulatory advice on digital health policy, cybersecurity and AI/ML applications. The Digital Health Software Precertification Program piloted new approaches to regulating software-based medical devices through streamlined oversight for qualified developers.
  • Public-private partnerships: Should federal funding dwindle, the collaboration between government agencies and industry leaders with respect to the development of AI standards and best practices may be threatened.
  • State pilot programmes and demonstrations: Several states have established regulatory sandboxes or innovation programmes specifically for healthcare technologies, including AI applications. It remains to be seen whether such initiatives will survive potential funding disruptions.

Given current uncertainties, it is difficult to predict the degree to which US agencies, businesses and other organisations will be allowed to participate in multinational initiatives aimed at harmonising healthcare AI regulations.

Given current political, legislative and regulatory uncertainties in the United States, it remains to be seen which legal challenges with respect to healthcare AI are likely to rise to the fore. That said, there remain a number of key issues that continue to be subject to ongoing scrutiny and debate, including questions involving:

  • continuous learning systems;
  • genAI;
  • AI integration with emerging technologies;
  • algorithmic accountability;
  • physician responsibility;
  • institutional oversight; and
  • manufacturer accountability for AI system performance.

Healthcare stakeholders should remain focused on a number of core compliance matters:

  • documentation and record-keeping – successful AI compliance requires meticulous documentation of AI system selection, validation, implementation and ongoing monitoring activities;
  • risk-based compliance – organisations should implement risk-based approaches to AI compliance that prioritise resources and attention based on the potential impact and risk level of different AI applications, particularly higher-risk AI systems; and
  • ongoing monitoring and assessment – monitoring and assessment programmes that track system performance, identify potential issues and ensure continued compliance with regulatory requirements and professional standards should be implemented and maintained.

Healthcare AI contracts must address complex technical, legal and regulatory requirements and allocate risks and responsibilities appropriately. Contracts should cover system performance, data handling, regulatory compliance and liability allocation with sufficient detail to prevent disputes. Other key issues to consider include:

  • liability allocation mechanisms;
  • indemnification provisions;
  • insurance requirements; and
  • service levels and expectations.

Healthcare organisations should audit existing insurance coverage to identify potential gaps related to AI risks. Traditional policies may not adequately cover AI-specific risks such as algorithmic errors, data breaches or intellectual property infringement.

The insurance industry has developed specialised products and pricing models to address AI-related risks, including cyber-liability insurance, technology errors and omissions coverage, and AI-specific professional liability policies. Insurance carriers increasingly evaluate AI-related risks during underwriting processes, requiring detailed information about AI system deployment, governance frameworks and risk management practices. Organisations with robust AI governance may qualify for preferred pricing or coverage terms.

Organisations considering the implementation of healthcare AI should:

  • conduct an organisational readiness assessment – such assessments evaluate technical infrastructure, staff capabilities, regulatory compliance frameworks and cultural readiness for AI adoption, identifying gaps and development needs before AI deployment begins;
  • create multidisciplinary implementation teams – these teams should include clinical leaders, information technology specialists, legal and compliance professionals, and quality assurance experts, and be given clear authority and accountability for AI implementation decisions;
  • consider phased implementation approaches – such an approach may begin with lower-risk AI applications and gradually expand to more complex systems as organisational capabilities and experience develop;
  • establish training and change management infrastructure – comprehensive training programmes ensure healthcare professionals understand AI system capabilities, limitations and appropriate use while supporting successful adoption and compliance;
  • emphasise quality assurance – ongoing QA programmes monitor AI system performance and implementation effectiveness while identifying opportunities for improvement and optimisation;
  • prioritise patient communication and engagement – healthcare organisations should develop clear policies and procedures for communicating with patients about AI system use, including disclosure requirements and consent processes; and
  • implement AI governance – healthcare organisations should implement AI governance practices, including risk assessment frameworks aligned with NIST guidelines, continuous monitoring programmes tracking algorithm performance and bias metrics, incident response procedures for AI-related incidents, and regular assessments and audits of AI systems.

To navigate cross-border deployment of healthcare AI, organisations should pay specific attention to the following, among other issues:

  • regulatory harmonisation;
  • data transfer and privacy requirements;
  • professional licensing and practice standards;
  • intellectual property protection;
  • compliance co-ordination; and
  • risk management strategies.

Jones Walker LLP

201 St Charles Ave
New Orleans
LA 70170-5100
USA

+1 337 593 7634

+1 337 593 7601

ndelahoussaye@joneswalker.com

www.joneswalker.com