Healthcare artificial intelligence (AI) encompasses a diverse range of technologies transforming medical practice across the United States. The number and type of approved applications continue to expand with each passing year. Over the past decade, US Food and Drug Administration (FDA) approvals of AI- and machine learning (ML)-enabled medical devices have surged, with nearly 800 such devices authorised for marketing (via 510(k) clearance, the granting of a de novo request or pre-market approval (PMA)) during the five-year period ending in September 2024 alone.
The spectrum of AI/ML-enabled applications is quite broad and includes the following.
Adoption Rates and Implementation
In a survey conducted by the American Medical Association looking at changes in physician sentiment towards healthcare AI between August 2023 and November 2024, nearly three in five physicians reported using AI in their practices. Healthcare AI adoption varies significantly across institutions and specialties, with larger health systems and academic medical centres typically leading implementation efforts. Regulatory approval pathways, reimbursement policies and technical infrastructure capabilities influence adoption timelines across different healthcare settings.
Healthcare AI delivers significant advantages, including enhanced diagnostic accuracy, improved clinical efficiency, management of workforce shortages and overall reductions in healthcare costs through optimised resource utilisation. AI systems, particularly when integrated with telemedicine platforms, also enhance access to specialised care, especially in underserved areas where specialist expertise may be limited.
However, there are a number of issues that are, at least temporarily, causing some pushback to the adoption of healthcare AI solutions.
A number of market forces are affecting the development and deployment of healthcare AI solutions in the United States, including the following.
The United States lacks a single, comprehensive definition of healthcare AI across regulatory agencies. Instead, different federal bodies provide context-specific definitions tailored to their respective jurisdictions and regulatory frameworks.
FDA Classification Approach
The FDA regulates healthcare AI primarily under existing medical device frameworks, classifying AI-enabled software as “software as a medical device” (SaMD) when it meets specific criteria for medical purposes. The FDA’s traditional paradigm of medical device regulation was not designed for adaptive AI and ML technologies. This creates unique challenges for continuously learning algorithms that may evolve after initial market authorisation.
In January 2021, the FDA issued the AI/ML-based SaMD Action Plan, which outlined the following five actions based on the total product life cycle (TPLC) approach for the oversight of AI-enabled medical devices:
Regulatory Categories by Function
Healthcare AI systems receive different regulatory treatment in the United States based on their intended functions and clinical applications:
Emerging Classification Challenges
As of late 2023, the FDA had not approved any devices that rely on a purely generative AI (genAI) architecture. GenAI technologies can create synthetic content, including medical images or clinical text, which may require new regulatory approaches.
The distinction between clinical decision support tools and medical devices remains an ongoing area of regulatory clarification. Software that provides information to healthcare providers for clinical decision-making may or may not constitute a medical device depending on the specific functionality and level of interpretation provided.
Federal Medical Device Regulation
The Federal Food, Drug, and Cosmetic Act (FFDCA) provides the foundational legal framework governing healthcare AI systems that meet medical device criteria. In 2021, the Health Information Technology for Economic and Clinical Health Act (the “HITECH Act”) was amended to require the Health and Human Services (HHS) Secretary to further encourage regulated entities to bolster their cybersecurity practices. The 21st Century Cures Act clarified FDA authority over certain software functions while exempting specific low-risk applications from medical device regulation. In January 2025, proposed legislation, the Healthy Technology Act of 2025 (H.R. 238), was introduced to amend the FFDCA and allow AI systems to prescribe FDA-approved drugs autonomously.
Cybersecurity Requirements
The Consolidated Appropriations Act of 2023 amended the FFDCA to require cybersecurity information in pre-market submissions for “cyber devices”. Medical device manufacturers must now include cybersecurity information in pre-market submissions for AI-enabled devices that connect to networks or process electronic data.
Health Information Privacy Regulation
The Health Insurance Portability and Accountability Act (HIPAA) and the HITECH Act establish comprehensive privacy and security requirements for protected health information (PHI) used in AI systems. The introduction of AI does not change the traditional HIPAA rules on permissible uses and disclosures of PHI.
AI-Specific Privacy Considerations
AI tools must be designed to access and use only the PHI strictly necessary for their purpose, even though AI models often seek comprehensive datasets to optimise performance. Healthcare organisations must ensure that AI vendors processing PHI operate under robust business associate agreements (BAAs) that specify permissible data uses and required safeguards.
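By way of illustration, the “minimum necessary” principle can be enforced in code as well as in policy. The sketch below is illustrative only: the field names and the allow-list (assumed to reflect the scope agreed in a BAA) are hypothetical, but it shows how a patient record might be filtered down to the data elements an AI tool is actually permitted to receive.

```python
# Minimal sketch (hypothetical field names): enforcing a "minimum necessary"
# allow-list before patient data is passed to an AI vendor's service.

ALLOWED_FIELDS = {"age_in_years", "diagnosis_codes", "lab_results"}  # assumed BAA scope

def minimum_necessary(record: dict) -> dict:
    """Return only the fields the AI tool is contractually permitted to receive."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

patient_record = {
    "name": "Jane Doe",            # direct identifier - excluded
    "ssn": "000-00-0000",          # direct identifier - excluded
    "age_in_years": 57,
    "diagnosis_codes": ["E11.9"],
    "lab_results": {"hba1c": 7.2},
}

payload = minimum_necessary(patient_record)
assert "name" not in payload and "ssn" not in payload
```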
Executive Branch AI Initiatives
President Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence established government-wide AI governance requirements that affected healthcare applications. The order was rescinded by President Trump on 20 January 2025, within hours of his inauguration, and later that month the new administration issued its own executive order, Removing Barriers to American Leadership in Artificial Intelligence. Further White House action in this area remains uncertain.
State-Level AI Regulation
Multiple states have enacted healthcare-specific AI legislation addressing various aspects of AI deployment and use. In the 2024 legislative session, 45 states, Puerto Rico, the US Virgin Islands and the District of Columbia introduced AI bills, and 31 states, Puerto Rico and the US Virgin Islands adopted resolutions or enacted legislation.
Federal Anti-Discrimination Requirements
In July 2024, new requirements were put in place to help protect consumers from discrimination when AI tools are used in healthcare. A final rule published by the HHS Office for Civil Rights (OCR) under Section 1557 of the Affordable Care Act (ACA) requires healthcare entities to ensure that AI systems do not discriminate against protected classes and to take corrective action when discrimination is identified. Given President Trump’s opposition to diversity, equity and inclusion (DEI) initiatives, however, it is uncertain whether or how compliance with the final rule will be enforced.
FDA Pre-Market Pathways
Healthcare AI developers must navigate established FDA pre-market pathways depending on their system’s risk classification and intended use. The FDA reviews medical devices through an appropriate pre-market pathway, as follows.
PCCPs
FDA guidance recommends the information to include in a predetermined change control plan (PCCP) as part of a marketing submission for an AI-enabled medical device. The PCCP should include a description of the device’s planned modifications; the methods to develop, validate and implement those modifications; and an assessment of the modifications’ impacts. This innovative approach enables AI developers to modify their systems without additional pre-market submissions when changes fall within predetermined parameters.
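As a rough illustration only, a manufacturer might capture the three recommended PCCP components in a structured form such as the following; the field names are assumptions made for this sketch, not FDA-prescribed terminology.

```python
# Illustrative sketch: one way to represent the three components a PCCP is
# expected to cover (planned modifications, modification protocol, impact
# assessment). Field names are assumptions, not regulatory terms of art.
from dataclasses import dataclass, field

@dataclass
class PlannedModification:
    description: str                 # what will change (e.g., periodic retraining)
    development_method: str          # how the change will be developed
    validation_method: str           # how performance will be re-verified
    implementation_method: str       # how the update will be rolled out

@dataclass
class ImpactAssessment:
    affected_claims: list[str]       # labelling or performance claims touched by the change
    risk_summary: str                # benefits and risks of the modification

@dataclass
class PCCP:
    device_name: str
    modifications: list[PlannedModification] = field(default_factory=list)
    impact_assessments: list[ImpactAssessment] = field(default_factory=list)
```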
Clinical Evidence Requirements
AI system developers must provide clinical evidence demonstrating safety and effectiveness for intended uses. Evidence requirements vary based on risk classification, with higher-risk systems requiring more extensive clinical validation. The FDA emphasises real-world evidence and post-market surveillance.
Expedited Pathways
The FDA provides several expedited pathways for breakthrough medical devices, including AI systems, that address unmet medical needs or provide significant advantages over existing treatments. These pathways offer enhanced FDA communication and expedited review timelines while maintaining safety and effectiveness standards.
Regulatory Framework for AI-Based SaMD
On 6 January 2025, the FDA published the Draft Guidance: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations. This comprehensive guidance addresses unique challenges posed by AI-enabled software medical devices.
Other issues include the following.
Developers and users of healthcare AI must adhere to a number of data privacy and security requirements, including:
A number of mandated and voluntary standards regimes apply to healthcare AI, and multiple standards organisations have developed technical requirements and best practices for interoperability of such systems. Among others, the Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) standard enables AI systems to exchange data with electronic health record systems and other healthcare technologies.
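As a simple illustration of FHIR-based exchange, the sketch below shows an AI service retrieving systolic blood pressure observations for a patient over the standard FHIR REST interface. The endpoint URL and patient identifier are placeholders, and a production integration would also handle SMART on FHIR authorisation and error cases.

```python
# Minimal sketch of HL7 FHIR data exchange: pulling blood pressure
# observations for one patient over the FHIR REST API.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # hypothetical endpoint
LOINC_SYSTOLIC = "8480-6"                    # LOINC code for systolic blood pressure

def fetch_systolic_readings(patient_id: str) -> list[float]:
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": LOINC_SYSTOLIC, "_sort": "-date"},
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    bundle = resp.json()                     # a FHIR Bundle resource
    return [
        entry["resource"]["valueQuantity"]["value"]
        for entry in bundle.get("entry", [])
        if "valueQuantity" in entry.get("resource", {})
    ]
```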
The FDA recognises consensus standards developed by organisations such as ASTM International, the Institute of Electrical and Electronics Engineers (IEEE), and the International Organization for Standardization (ISO) that address AI system safety, performance and cybersecurity requirements; establish quality management system requirements for medical device development; and address design controls, risk management and validation processes throughout the AI development life cycle.
The Department of HHS proposed regulations in January 2025 that would require covered entities to conduct vulnerability scanning at least every six months and penetration testing at least annually. The National Institute of Standards and Technology (NIST) publishes comprehensive cybersecurity frameworks that provide guidelines for protecting AI systems and the health information they process.
Adherence to established data exchange standards, application programming interface (API) specifications and workflow integration protocols enables AI tools to function within complex healthcare technology environments.
US federal regulatory bodies that oversee healthcare AI include the following.
Healthcare AI developers must comply with a range of pre-market requirements involving the following, among other issues.
Following market introduction, developers and users of healthcare AI systems must continue to monitor performance and compliance. Key areas of concern include:
Non-compliance with relevant laws and regulations can be addressed in several ways.
Healthcare AI liability generally operates within established medical malpractice frameworks that require the establishment of four key elements: duty of care, breach of that duty, causation and damages. When AI systems are involved in patient care, determining these elements becomes more complex. While a physician must exercise the skill and knowledge normally possessed by other physicians, AI integration creates uncertainty about what constitutes reasonable care.
Healthcare AI liability often involves multiple stakeholders, including healthcare providers, AI developers, healthcare institutions and others in the AI supply chain. For example, a consultation that results in patient harm might implicate the treating physician, the health system and the developers of clinical decision support software used during the encounter.
Other considerations include the following.
Traditional malpractice standards must adapt to address algorithm-based recommendations and decision support. In April 2024, the Federation of State Medical Boards released recommendations to its members indicating, among other suggestions, that they should hold clinicians liable if AI technology makes a medical error. Healthcare providers must understand AI system limitations and maintain appropriate clinical judgment when incorporating algorithmic recommendations into patient care decisions.
Causation Challenges
When algorithms influence or drive medical decisions, determining responsibility for adverse outcomes presents novel legal challenges not fully addressed in existing liability frameworks. Among other issues, courts must evaluate whether AI system recommendations served as a proximate cause of patient harm, as well as the impacts of the healthcare provider’s independent medical judgment and other contributing factors.
Documentation and Evidence Requirements
Healthcare providers must maintain detailed documentation of AI system use, including the specific recommendations provided, clinical reasoning for accepting or rejecting algorithmic guidance and any modifications made to AI-generated suggestions.
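One way to operationalise these documentation elements is an auditable record created each time an algorithmic recommendation is surfaced. The sketch below is illustrative only; the field names are assumptions rather than a mandated schema.

```python
# Illustrative audit record for documenting AI-assisted decisions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIUseRecord:
    timestamp: datetime
    ai_system: str               # name and version of the decision-support tool
    recommendation: str          # the specific algorithmic recommendation shown
    action: str                  # "accepted", "rejected" or "modified"
    clinical_reasoning: str      # clinician's rationale for the action taken
    modifications: str = ""      # any changes made to the AI-generated suggestion

record = AIUseRecord(
    timestamp=datetime.now(timezone.utc),
    ai_system="sepsis-risk-model v2.3",                      # hypothetical tool
    recommendation="Initiate sepsis bundle; risk score 0.82",
    action="modified",
    clinical_reasoning="Score driven by post-operative fever; repeat lactate first",
    modifications="Ordered repeat lactate before initiating bundle",
)
```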
Expert Testimony Considerations
AI-related malpractice cases may require expert witnesses with specialised knowledge of medical practice and existing AI technology capabilities and limitations. Such experts should have the experience necessary to evaluate whether healthcare providers used AI systems in an appropriate manner and whether algorithmic recommendations met relevant standards.
Burden of Proof Considerations
Plaintiffs in AI-related malpractice cases face challenges proving that AI system errors directly caused patient harm, particularly when healthcare providers retained decision-making authority. Decisions regarding potential liability often depend on judgments made by lay-person jurors.
To mitigate risks associated with healthcare AI, developers, vendors, health systems and practitioners should:
When disputes arise, healthcare providers, systems, and healthcare AI developers can look to several defence strategies:
In the United States, healthcare AI ethical frameworks emphasise core principles such as beneficence, non-maleficence, autonomy and justice. These principles guide AI development and deployment decisions while addressing unique challenges posed by algorithmic decision-making in healthcare settings. Some of the more commonly known (and mostly voluntary) frameworks include the following.
To minimise risk to patients, providers and health systems, a number of tools can be implemented, including the following.
Given the rapid turnabout in executive-branch policy toward DEI and anti-discrimination initiatives, it remains to be seen how federal healthcare AI regulations with respect to bias and fairness will be affected. The following review looks at policies that existed before the Trump administration; it is fair to say that many of these will be revised this year.
In most federal and state regulatory schemes, ultimate responsibility for healthcare AI systems is assigned to the people and organisations that implement them, not to the AI itself. Specific best practices include:
Training data for healthcare AI systems must meet stringent standards in order to provide meaningful information and outcomes.
The transfer of PHI and other sensitive information should occur only under specific rules that protect patient privacy and address the following.
Data sharing activities should address the following:
To better ensure appropriate safeguards for, and anonymity of, health data, the following must be taken into account.
Patents remain one of the most effective tools for protecting healthcare innovations. AI developers should consider the following when developing new systems and platforms.
Recent developments suggest increased scrutiny of AI-related patents. The Federal Circuit’s approach to software patentability under the Alice framework continues to evolve, with AI-specific considerations including whether the AI application provides a technical improvement to computer functionality, the degree of human intervention in AI-generated inventions and potential heightened obviousness rejections when prior art includes AI tools.
Software code, user interfaces, and documentation may receive copyright protection as original works of authorship. However, copyright protection does not extend to underlying algorithms or mathematical concepts, limiting its scope for AI innovations.
Additional issues to consider include:
When multiple parties contribute to the development, deployment and analysis of healthcare AI technology and its outputs, the following issues should be considered.
Two primary licensing and commercialisation models prevail in today’s healthcare AI marketplace:
Specific considerations must be addressed with respect to:
AI-based clinical decision support systems receive different regulatory treatment depending on their specific functionality and the level of interpretation provided to healthcare providers. Other considerations include:
AI-based diagnostic tools are regulated as medical devices when they analyse patient data to provide diagnostic information or recommendations. These systems typically require clinical validation demonstrating safety and effectiveness for specific intended uses.
Different medical specialties have developed specific frameworks for AI diagnostic tools that address unique validation requirements and clinical applications. For example, radiology AI systems may require different validation approaches compared to pathology or cardiology applications.
Successful diagnostic AI deployment requires effective integration with existing clinical workflows, imaging systems and laboratory processes.
AI systems used in treatment planning or therapeutic decision-making face regulatory oversight based on their risk classification and potential impact on patient care. Higher-risk applications may require extensive clinical validation, PMA processes and meaningful human oversight to ensure appropriate clinical judgment and professional accountability, as well as robust safety monitoring programmes.
Therapeutic AI systems should demonstrate clinical benefits through appropriate evidence generation.
Although in use for decades, telemedicine demonstrated its value most clearly during the recent COVID-19 pandemic. In this context, healthcare AI applications used in remote patient monitoring and telemedicine must comply with both AI-specific regulations and broader telemedicine legal frameworks.
AI systems functioning in home or non-clinical settings face unique regulatory challenges related to device performance, user training and clinical oversight, as well as data privacy and security requirements for data collected outside traditional healthcare settings.
At-home and remote-monitoring AI also requires integration with clinical workflows that enable healthcare providers to review data, respond to alerts and co-ordinate care for remote patients. Finally, AI-enabled remote monitoring services must navigate complex reimbursement landscapes that vary by payer, service type and clinical application.
AI tools used in drug discovery and development receive regulatory oversight through established pharmaceutical development pathways, but are also subject to a number of challenges posed by algorithmic approaches to drug design and clinical trial optimisation, including:
The second Trump administration has taken great pains to change the role of the federal government with respect to technology, medical research and much else. In this environment, it is difficult to predict with certainty how the healthcare AI landscape will develop over the next year or two. With that in mind, however, certain questions remain.
Given current executive-branch disruptions and uncertainties, existing innovation programmes and regulatory sandboxes may soon be curtailed or allowed to expire. The following programmes may be under threat.
Given current uncertainties, it is difficult to predict the degree to which US agencies, businesses and other organisations will be allowed to participate in multinational initiatives aimed at harmonising healthcare AI regulations.
Given current political, legislative and regulatory uncertainties in the United States, it remains to be seen which legal challenges with respect to healthcare AI are likely to rise to the fore. That said, there remain a number of key issues that continue to be subject to ongoing scrutiny and debate, including questions involving:
Healthcare stakeholders should remain focused on a number of core compliance matters:
Healthcare AI contracts must address complex technical, legal and regulatory requirements and allocate risks and responsibilities appropriately. Contracts should cover system performance, data handling, regulatory compliance and liability allocation with sufficient detail to prevent disputes. Other key issues to consider include:
Healthcare organisations should audit existing insurance coverage to identify potential gaps related to AI risks. Traditional policies may not adequately cover AI-specific risks such as algorithmic errors, data breaches or intellectual property infringement.
The insurance industry has developed specialised products and pricing models to address AI-related risks, including cyber-liability insurance, technology errors and omissions coverage, and AI-specific professional liability policies. Insurance carriers increasingly evaluate AI-related risks during underwriting processes, requiring detailed information about AI system deployment, governance frameworks and risk management practices. Organisations with robust AI governance may qualify for preferred pricing or coverage terms.
Organisations considering the implementation of healthcare AI should:
To navigate cross-border deployment of healthcare AI, organisations should pay specific attention to the following, among other issues:
Navigating Regulatory Evolution, Market Dynamics and Emerging Challenges in an Era of Rapid Innovation
The use of artificial intelligence (AI) tools in healthcare continues to evolve at an unprecedented pace, fundamentally reshaping how medical care is delivered, managed and regulated across the United States. As 2025 progresses, the convergence of technological innovation, regulatory adaptation (or lack thereof) and market shifts has created remarkable opportunities and complex challenges for healthcare providers, technology developers, and federal and state legislators and regulatory bodies alike.
The rapid proliferation of AI-enabled medical devices represents perhaps the most visible manifestation of this transformation. With nearly 800 AI and machine learning (ML)-enabled medical devices authorised for marketing by the US Food and Drug Administration (FDA) in the five-year period ending September 2024, the regulatory apparatus has been forced to adapt traditional frameworks designed for static devices to accommodate dynamic, continuously learning algorithms that evolve after deployment. This fundamental shift has prompted new approaches to oversight, such as the development of predetermined change control plans (PCCPs) that allow manufacturers to modify their systems within predefined parameters and without requiring additional pre-market submissions.
Regulatory Frameworks Under Pressure
The regulatory environment governing healthcare AI reflects the broader challenges facing federal agencies as they attempt to balance innovation with patient safety. The FDA’s approach to AI-enabled software as a medical device (SaMD) has evolved significantly, culminating in the January 2025 publication of comprehensive draft guidance addressing life cycle management and marketing submission recommendations for AI-enabled device software functions. This guidance represents a critical milestone in establishing clear regulatory pathways for AI and ML systems that challenge traditional notions of device stability and predictability.
The traditional FDA paradigm of medical device regulation was not designed for adaptive AI and ML technologies. This creates unique challenges for continuously learning algorithms that may evolve after initial market authorisation. The FDA’s January 2021 AI/ML-based SaMD Action Plan outlined five key actions based on the total product life cycle approach, including tailoring regulatory frameworks with predetermined change control plans, harmonising good ML practices, developing patient-centric approaches, supporting bias elimination methods and piloting real-world performance monitoring.
However, the regulatory landscape remains fragmented and uncertain. The rescission of Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence by the Trump administration, and the administration’s issuance of its own EO on AI (Removing Barriers to American Leadership in Artificial Intelligence) in January 2025, have created additional uncertainty regarding federal AI governance priorities. While EO 14110 has been rescinded, its influence persists through agency actions already underway, including the Section 1557 non-discrimination final rule of the US Department of Health and Human Services (HHS) and the algorithm transparency final rule of the Office of the National Coordinator for Health Information Technology (ONC). Consequently, enforcement priorities and future regulatory development remain uncertain.
State-level regulatory activity has attempted to fill some of these gaps, with 45 states introducing AI-related legislation during the 2024 session. California’s AB 3030, which specifically regulates generative AI use in healthcare, exemplifies the growing trend towards state-specific requirements that healthcare organisations must navigate alongside federal regulations. This patchwork of state and federal requirements creates particularly acute challenges for healthcare AI developers and users operating across multiple jurisdictions.
Data Privacy and Security: The HIPAA Challenge
One of the most pressing concerns facing healthcare AI deployment involves the intersection of AI capabilities and healthcare data privacy requirements. The Health Insurance Portability and Accountability Act (HIPAA) was enacted long before the emergence of modern AI systems, creating significant compliance challenges as healthcare providers increasingly rely on AI tools for clinical documentation, decision support and administrative functions.
The use of AI-powered transcription and documentation tools has emerged as a particular area of concern. Healthcare providers utilising AI systems for automated note-taking during patient encounters face potential HIPAA violations if proper safeguards are not implemented. These systems often require access to comprehensive patient information to function effectively, yet traditional HIPAA standards may conflict with AI systems’ need for extensive datasets to optimise performance. AI tools must be designed to access and use only the protected health information (PHI) strictly necessary for their purpose, even though AI models often seek comprehensive datasets to achieve their full potential.
The HHS regulations proposed in January 2025 attempt to address some of these concerns by requiring covered entities to include AI tools in their risk analysis and risk management compliance activities. The proposed requirements would mandate that organisations conduct vulnerability scanning at least every six months and penetration testing at least annually, recognising that AI systems introduce new vectors for potential data breaches and unauthorised access.
Business associate agreements (BAAs) have become increasingly complex as organisations attempt to address AI-specific risks. Healthcare organisations must ensure that AI vendors processing PHI operate under robust BAAs that specify permissible data uses and required safeguards, and that account for AI-specific risks related to algorithm updates, data retention policies and other ML processes.
Algorithmic Bias and Health Equity Concerns
The potential for algorithmic bias in healthcare AI systems has emerged as one of the most significant ethical and legal challenges facing the industry. A 2024 review of 692 AI- and ML-enabled FDA-approved medical devices revealed troubling gaps in demographic representation, with only 3.6% of approvals reporting race and ethnicity data, 99.1% providing no socioeconomic information and 81.6% failing to report study subject ages.
These data gaps have profound implications for health equity, as AI systems trained on non-representative datasets may perpetuate or exacerbate existing healthcare disparities. Training data quality and representativeness significantly – and inevitably – impact AI system performance across diverse patient populations. The challenge is particularly acute given the rapid changes in federal enforcement priorities regarding diversity, equity and inclusion (DEI) initiatives.
While the April 2024 HHS final rule under Section 1557 of the Affordable Care Act established requirements for healthcare entities to ensure AI systems do not discriminate against protected classes, the current administration’s opposition to DEI initiatives has created uncertainty about enforcement mechanisms and compliance expectations. Given the rapid turnabout in executive-branch policy towards DEI and anti-discrimination initiatives, it remains to be seen how federal healthcare AI regulations with respect to bias and fairness will be affected.
Healthcare organisations are increasingly implementing systematic bias testing and mitigation strategies throughout the AI life cycle, focusing on technical validation, promoting health equity, ensuring algorithmic transparency, engaging patient communities, identifying fairness issues and trade-offs, and maintaining accountability for equitable outcomes. AI system developers have, until recently, faced increasing regulatory pressure to ensure training datasets adequately represent diverse patient populations. Most healthcare AI developers and practitioners continue to maintain that relevant characteristics, including age, gender, sex, race and ethnicity, should be appropriately represented and tracked in clinical studies to ensure that results can be reasonably generalised to the intended use populations.
However, these efforts often occur without clear regulatory guidance or standardised methodologies for bias detection and remediation. Special attention must be paid to protecting vulnerable populations, including paediatric patients, elderly individuals, racial and ethnic minorities, and individuals with disabilities.
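As an illustration of systematic bias testing, the short sketch below (using synthetic data) compares a model’s true-positive rate across demographic subgroups, one common check that can be run before deployment and at regular intervals thereafter; the threshold for flagging a gap is an assumption each organisation would set for itself.

```python
# Minimal sketch of a subgroup performance check: true-positive rate by group.
from collections import defaultdict

def tpr_by_group(y_true, y_pred, groups):
    """Return the true-positive rate for each subgroup."""
    tp, pos = defaultdict(int), defaultdict(int)
    for truth, pred, grp in zip(y_true, y_pred, groups):
        if truth == 1:
            pos[grp] += 1
            if pred == 1:
                tp[grp] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g]}

rates = tpr_by_group(
    y_true=[1, 1, 0, 1, 1, 0, 1, 1],          # synthetic labels
    y_pred=[1, 0, 0, 1, 1, 0, 0, 1],          # synthetic model outputs
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
gap = max(rates.values()) - min(rates.values())  # flag if the gap exceeds a preset threshold
```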
Professional Liability and Standards of Care
The integration of AI into clinical practice has created novel questions about professional liability and standards of care that existing legal frameworks struggle to address. Traditional medical malpractice analysis relies on established standards of care, but the rapid evolution of AI capabilities makes it difficult to determine what constitutes appropriate use of algorithmic recommendations in clinical decision-making.
Healthcare AI liability generally operates within established medical malpractice frameworks that require the establishment of four key elements: duty of care, breach of that duty, causation and damages. When AI systems are involved in patient care, determining these elements becomes more complex. While a physician must exercise the skill and knowledge normally possessed by other physicians, AI integration creates uncertainty about what constitutes reasonable care.
The Federation of State Medical Boards’ April 2024 recommendations to hold clinicians liable for AI technology medical errors represent an attempt to clarify professional responsibilities in an era of algorithm-assisted care. However, these recommendations raise complex questions about causation, particularly when multiple factors contribute to patient outcomes and when AI systems provide recommendations that healthcare providers may accept, modify or reject based on their clinical judgment.
When algorithms influence or drive medical decisions, determining responsibility for adverse outcomes presents novel legal challenges not fully addressed in existing liability frameworks. Courts must evaluate whether AI system recommendations served as a proximate cause of patient harm, as well as the impacts of the healthcare provider’s independent medical judgment and other contributing factors.
Documentation requirements have become increasingly important, as healthcare providers must maintain detailed records of AI system use, including the specific recommendations provided, clinical reasoning for accepting or rejecting algorithmic guidance, and any modifications made to AI-generated suggestions. These documentation practices are essential for defending against potential malpractice claims while ensuring that healthcare providers can demonstrate appropriate clinical judgment and professional accountability.
AI-related malpractice cases may require expert witnesses with specialised knowledge of medical practice and existing AI technology capabilities and limitations. Such experts should have the experience necessary to evaluate whether healthcare providers used AI systems in an appropriate manner and whether algorithmic recommendations met relevant standards. Plaintiffs in AI-related malpractice cases face challenges proving that AI system errors directly caused patient harm, particularly when healthcare providers retained decision-making authority.
Market Dynamics and Investment Trends
Despite regulatory uncertainties, venture capital investment in healthcare AI remains robust, with billions of dollars allocated to start-ups and established companies developing innovative solutions. However, investment patterns have become more selective, focusing on solutions that demonstrate clear clinical value and regulatory compliance rather than pursuing speculative technologies without proven benefits.
The American Hospital Association’s early 2025 survey of digital health industry leaders revealed cautious optimism, with 81% expressing positive or cautiously optimistic outlooks for investment prospects and 79% indicating plans to pursue new investment capital over the next 12 months. This suggests continued confidence in the long-term potential of healthcare AI despite near-term regulatory and economic uncertainties.
Clinical workflow optimisation solutions, value-based care enablement platforms and revenue cycle management technologies have attracted significant funding, reflecting healthcare organisations’ focus on addressing immediate operational challenges while building foundations for more advanced AI applications. The increasing integration of AI into these core healthcare functions demonstrates the technology’s evolution from experimental applications to essential operational tools.
Major technology corporations are driving significant innovation in healthcare AI through substantial research and development investments. Companies such as Google Health, Microsoft Healthcare, Amazon Web Services and IBM Watson Health continue to develop foundational AI platforms and tools. Large health systems and academic medical centres lead healthcare AI adoption through dedicated innovation centres, research partnerships and pilot programmes, often serving as testing grounds for emerging AI technologies.
Pharmaceutical companies increasingly integrate AI throughout drug development pipelines, from target identification and molecular design to clinical trial optimisation and regulatory submissions. These investments aim to reduce development costs and timelines while improving success rates for new therapeutic approvals.
Large healthcare technology companies increasingly acquire specialised AI start-ups to integrate innovative capabilities into comprehensive healthcare platforms. These acquisitions accelerate technology deployment while providing start-ups with the resources necessary for large-scale implementation and regulatory compliance.
Emerging Technologies and Integration Challenges
The rapid advancement of generative AI technologies has introduced new regulatory and practical challenges for healthcare organisations. As of late 2023, the FDA had not approved any devices relying on purely generative AI architectures, creating uncertainty about the regulatory pathways for these increasingly sophisticated technologies. Generative AI’s ability to create synthetic content, including medical images and clinical text, requires new approaches to validation and oversight that traditional medical device frameworks may not adequately address.
The distinction between clinical decision support tools and medical devices remains an ongoing area of regulatory clarification. Software that provides information to healthcare providers for clinical decision-making may or may not constitute a medical device depending on the specific functionality and level of interpretation provided.
Healthcare AI systems must provide sufficient transparency to enable healthcare providers to understand system recommendations and limitations. The FDA emphasises the importance of explainable AI that allows clinicians to understand the reasoning behind algorithmic recommendations. AI systems must provide understandable explanations for their recommendations, which healthcare providers in turn use to communicate with patients.
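For a simple linear risk model, one illustration of such transparency is to report each input’s contribution to the score so that a clinician can see which factors drove a recommendation; the feature names, weights and intercept below are purely illustrative assumptions.

```python
# Minimal sketch: per-feature contributions for a linear (logistic-style) risk model.
import math

WEIGHTS = {"age": 0.03, "systolic_bp": 0.02, "hba1c": 0.40}   # illustrative weights
INTERCEPT = -6.0

def explain(features: dict) -> tuple[float, dict]:
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    logit = INTERCEPT + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))
    return risk, contributions

risk, why = explain({"age": 67, "systolic_bp": 150, "hba1c": 8.1})
# 'why' ranks the factors behind the score; here hba1c contributes the most.
```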
The integration of AI with emerging technologies such as robotics, virtual reality and internet of medical things (IoMT) devices creates additional complexity for healthcare organisations attempting to navigate regulatory requirements and clinical implementation challenges. These convergent technologies offer significant potential benefits but also introduce new risks related to cybersecurity, data privacy and clinical safety that existing regulatory frameworks struggle to address comprehensively.
AI-enabled remote monitoring systems utilise wearable devices, IoMT sensors and mobile health applications to continuously track patient vital signs, medication adherence and disease progression. These technologies enable early intervention for deteriorating conditions and support chronic disease management outside traditional healthcare settings, but face unique regulatory challenges related to device performance, user training and clinical oversight.
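A minimal sketch of such an escalation rule appears below; the vital-sign thresholds and alerting logic are illustrative assumptions, not clinical guidance, and any production system would route alerts to a clinician for review rather than acting autonomously.

```python
# Minimal sketch of a remote-monitoring rule that escalates deteriorating
# vital signs from a wearable feed to the care team.
from dataclasses import dataclass

@dataclass
class VitalSample:
    heart_rate: int        # beats per minute
    spo2: float            # oxygen saturation, percent

def needs_escalation(samples: list[VitalSample]) -> bool:
    """Alert when the last three consecutive samples breach either threshold."""
    recent = samples[-3:]
    return len(recent) == 3 and all(
        s.heart_rate > 120 or s.spo2 < 90.0 for s in recent
    )

feed = [VitalSample(88, 97.0), VitalSample(124, 91.0),
        VitalSample(131, 89.5), VitalSample(128, 88.9)]
if needs_escalation(feed):
    print("Escalate to on-call clinician for review")  # human review, not automated treatment
```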
Cybersecurity and Infrastructure Considerations
Healthcare data remains a prime target for cyber-attacks, with data breaches involving 500 or more healthcare records reaching near-record numbers in 2024, continuing an alarming upward trend. The data’s high value on black markets and the critical nature of healthcare operations, which makes organisations more likely to pay ransoms, keep it especially attractive to hackers.
The integration of AI systems, which often require access to vast amounts of patient data, further complicates the security landscape and creates new vulnerabilities that organisations must address through robust security frameworks. Healthcare organisations face substantial challenges integrating AI tools into existing clinical workflows and electronic health record systems. Technical interoperability issues, user training requirements and change management processes require significant investment and co-ordination across multiple departments and stakeholders.
The Consolidated Appropriations Act of 2023’s requirement for cybersecurity information in pre-market submissions for “cyber devices” represents an important step towards addressing these concerns, but the rapid pace of AI innovation often outstrips the development of adequate security measures. Medical device manufacturers must now include cybersecurity information in pre-market submissions for AI-enabled devices that connect to networks or process electronic data.
Healthcare organisations must implement comprehensive cybersecurity programmes that address not only technical vulnerabilities but also the human factors that frequently contribute to data breaches. Strong technical safeguards must be implemented when using de-identified data for AI training, including access controls, encryption, audit logging and secure computing environments, and should address both intentional and accidental re-identification risks throughout the AI development process.
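As a narrow illustration, the sketch below applies two Safe Harbor-style transformations (truncating ZIP codes and top-coding ages over 89) before records are released for model training. The field names are assumptions, and a real pipeline would address all 18 HIPAA identifier categories within an access-controlled, encrypted and audited environment.

```python
# Illustrative de-identification step ahead of AI model training.
def deidentify(record: dict) -> dict:
    out = dict(record)
    for field in ("name", "mrn", "phone", "email"):   # direct identifiers (hypothetical field names)
        out.pop(field, None)
    if "zip" in out:
        out["zip"] = out["zip"][:3] + "00"            # retain only the initial three digits
    if out.get("age", 0) > 89:
        out["age"] = 90                               # aggregate ages 90 and over into one category
    return out

training_row = deidentify(
    {"name": "Jane Doe", "mrn": "12345", "zip": "70170", "age": 93, "hba1c": 7.2}
)
# training_row -> {"zip": "70100", "age": 90, "hba1c": 7.2}
```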
A significant concern is the lack of a private right of action for individuals affected by healthcare data breaches, leaving many patients with limited recourse when their sensitive information is compromised. While many states have enacted laws more stringent than federal legislation, enforcement resources may be stretched thin.
Human Oversight and Professional Standards
In most federal and state regulatory schemes, ultimate responsibility for healthcare AI systems is assigned to the people and organisations that implement them rather than to the AI itself. Healthcare providers must maintain ultimate authority for clinical decisions even when using AI-powered decision support tools. Healthcare AI applications must require meaningful human involvement in decision-making processes rather than defaulting to fully automated systems.
AI systems must provide healthcare providers with clear, easily accessible mechanisms to override algorithmic recommendations when clinical judgment suggests alternative approaches. Healthcare providers using AI systems must be provided with the tools to achieve system competency through ongoing training and education programmes. At the organisation level, hospitals and health systems must implement robust quality assurance programmes that monitor AI system performance and healthcare provider usage patterns.
Medical schools and residency programmes are beginning to incorporate AI literacy into their curricula, while professional societies are developing guidelines for the responsible use of these tools in clinical practice. For digital health developers, these shifts underscore the importance of designing AI systems that complement clinical workflows and support physician decision-making rather than attempting to automate complex clinical judgments.
The rapid advancement of AI in healthcare is reshaping certain medical specialties, particularly those that rely heavily on image interpretation and pattern recognition, such as radiology, pathology and dermatology. As AI systems demonstrate increasing accuracy in reading X-rays, magnetic resonance images (MRIs) and other diagnostic images, some medical students and physicians are reconsidering their specialisation choices. This trend reflects broader concerns about the potential for AI to displace certain aspects of physician work, though most experts emphasise that AI tools should augment rather than replace clinical judgment.
Conclusion: Balancing Innovation and Responsibility
The healthcare AI landscape in the United States reflects the broader challenges of regulating rapidly evolving technologies while promoting innovation and protecting patient welfare. Despite regulatory uncertainties and implementation challenges, the fundamental value proposition of AI in healthcare remains compelling, offering the potential to improve diagnostic accuracy, enhance clinical efficiency, reduce costs and expand access to specialised care.
Success in this environment requires healthcare organisations, technology developers and regulatory bodies to maintain vigilance regarding compliance obligations while advocating for regulatory frameworks that protect patients without unnecessarily hindering innovation. Organisations that can navigate the complex and evolving regulatory environment while delivering demonstrable clinical value will continue to find opportunities for growth and impact in this dynamic sector.
The path forward demands a collaborative approach that brings together clinical expertise, technological innovation, regulatory insight and ethical review. As 2025 progresses (and beyond), the healthcare AI community must work together to realise the technology’s full potential while maintaining the trust and confidence of patients, providers and the broader healthcare system. This balanced approach will be essential to ensuring that AI fulfils its promise as a transformative force in American healthcare delivery.
201 St Charles Ave
New Orleans
LA 70170-5100
USA
+1 337 593 7634
+1 337 593 7601
ndelahoussaye@joneswalker.com www.joneswalker.com