California State-Specific Legislation on Healthcare AI
California has established itself as a leader in the governance of artificial intelligence (AI), particularly within the healthcare industry. The state has enacted several laws that specifically address the use of AI in healthcare, often exceeding federal requirements with California-specific patient protections. The following sections outline the state's principal new AI laws, which have wide-ranging implications for healthcare providers, AI developers, and related stakeholders, touching on notice to patients, physician autonomy, accountability for the emerging technology's use in patient care, consumer privacy and access to medical information, professional liability, and even the corporate practice of medicine.
California Health and Safety Code Provisions on Digital Health and AI
Assembly Bill 3030 (2024) – AI in Healthcare Act
Effective 1 January 2025, AB 3030 introduced disclosure requirements for healthcare entities utilising generative AI in patient communications involving clinical information, detailed under "Informed Consent and Transparency Obligations" below.
An exemption applies if a communication generated by generative AI is subsequently read and reviewed by a human licensed or certified healthcare provider, who must document the review and any modifications made.
Senate Bill 1120 (2024) – Physicians Make Decisions Act
Effective 1 January 2025, SB 1120 addressed the use of AI in health insurance utilisation review and management functions. Unsurprisingly, the statute pays careful attention to physician autonomy, a concern for which California is well-known.
Assembly Bill 2885 (2024) – Algorithmic Accountability Act
Effective 1 January 2025, AB 2885 seeks coherence in California AI law by standardising the definition of "artificial intelligence" across the state's legal framework and introducing governance requirements, discussed under "Algorithmic Bias and Fairness Requirements" below.
California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) Amendments Relating to Health Data
The CCPA, as amended by the CPRA, provides augmented privacy protections for California residents. The CPRA introduced the concept of “sensitive personal information,” which includes health data and “neural data” (information generated by measuring the activity of a consumer’s central or peripheral nervous system). These protections apply to businesses that meet specific thresholds as defined in the law.
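To illustrate how a business might operationalise the sensitive-data concept, the sketch below tags data fields so that health and neural data receive heightened handling by default. This is purely an assumed internal convention, not statutory text; the field names and categories are illustrative.

```python
# Assumed internal tagging convention (not statutory text): fields carrying
# health or neural data are marked sensitive so heightened CPRA-style handling
# applies by default. All field names and categories are illustrative.
SENSITIVE_CATEGORIES = {"health", "neural"}

FIELD_CATEGORIES = {
    "diagnosis_history": "health",
    "eeg_reading": "neural",       # measures central/peripheral nervous system activity
    "zip_code": "demographic",
}

def is_sensitive(field_name: str) -> bool:
    return FIELD_CATEGORIES.get(field_name) in SENSITIVE_CATEGORIES

for field_name in FIELD_CATEGORIES:
    print(field_name, "-> sensitive" if is_sensitive(field_name) else "-> standard")
```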
Key consumer rights under the CCPA/CPRA relevant to healthcare AI include the rights to know what personal information a business collects and how it is used, to delete or correct that information, to opt out of its sale or sharing, and to limit the use and disclosure of sensitive personal information.
The California Attorney General’s office has emphasised that AI applications must adhere to these privacy laws, warning that non-compliance may result in penalties under the Unfair Competition Law.
California Confidentiality of Medical Information Act (CMIA)
The venerable Confidentiality of Medical Information Act (CMIA) might be considered a “pre-HIPAA” state statute. It regulates the use and disclosure of individually identifiable medical information by licensed healthcare professionals, providers, and their contractors. The CMIA generally prohibits the disclosure of medical information without patient authorisation, with specific exceptions. Violations can result in both civil and criminal penalties, including fines of up to USD250,000 per violation.
Not surprisingly, given the law's premise, the California Attorney General's office has emphasised that AI systems handling patient data must comply with the CMIA's requirements for safeguarding and securely using that data, including by limiting access to, and improper use of, sensitive information such as the data used to train AI models.
Interplay Between California Laws
California’s legislative approach often builds upon or supplements other frameworks, frequently imposing more stringent requirements. This deeply layered regulatory environment necessitates a careful and comprehensive compliance strategy for stakeholders.
Liability and Malpractice Considerations for AI-Assisted Diagnosis and Treatment
The integration of artificial intelligence into clinical practice introduces complex liability and medical malpractice considerations that must be carefully managed. Traditional legal doctrines, designed for human-centric medical care, are being re-evaluated in the context of AI-assisted diagnosis and treatment, requiring healthcare providers to implement specific risk management protocols. Consider the standard of care, for example.
Standard of Care in an AI-Augmented Environment
Medical malpractice claims traditionally hinge on the “standard of care,” defined as the level of skill and judgment that a reasonably competent physician would exercise under similar circumstances.
As AI tools become more prevalent and accurate, the standard of care itself may evolve. Litigators may argue that a physician was negligent for under-utilising advanced AI tools that could have improved diagnostic accuracy or treatment outcomes; physicians should therefore document their clinical reasoning when choosing not to rely on AI recommendations, in order to demonstrate appropriate medical judgment.
Conversely, if a physician relies on a faulty AI recommendation that leads to patient harm, questions arise as to whether the reliance itself was negligent or whether liability extends to the AI developer. Healthcare providers must establish clear protocols for documenting their review and validation of AI recommendations to demonstrate appropriate clinical judgment.
The Medical Board of California emphasises that AI tools are generally not capable of replacing a physician’s professional judgment, ethical responsibilities, or accountability, reinforcing that human oversight remains paramount.
The legal system currently lacks extensive precedent for AI-assisted malpractice. The question of whether AI falls under traditional product liability laws (which typically apply to medical devices) or medical malpractice doctrines remains an open issue for courts.
Corporate Practice of Medicine and AI
California’s famously stringent prohibition on the corporate practice of medicine (CPOM) is particularly relevant to AI in healthcare. This doctrine generally prohibits lay persons or entities from providing or engaging in clinical healthcare practices, ensuring that licensed professionals make medical decisions unhindered by fiscal or administrative management.
Now that AI has been grafted onto that ambitious regulatory scheme, the California Attorney General's office has, of course, stated that AI cannot replace or override healthcare providers' decisions, and that using AI to make decisions about patient treatment or to override determinations of medical need may violate this ban.
Data Privacy and Security Requirements Specific to Healthcare AI
Much like the once-revolutionary electronic medical record before it, AI's integration into healthcare systems is transforming the handling of sensitive patient data, necessitating strict adherence to privacy and security regulations. California's legal framework, encompassing both the CCPA/CPRA and the CMIA, imposes significant obligations on healthcare AI stakeholders.
Algorithmic Bias and Fairness Requirements
Algorithmic bias, a previously arcane concept, is now a particularly significant concern in the deployment of artificial intelligence in healthcare. California has taken a proactive stance to address it, recognising that biased AI systems can perpetuate or exacerbate existing health inequities.
Assembly Bill 2885 (AB 2885), the Algorithmic Accountability Act, mandates that the Department of Technology conduct a comprehensive inventory of "high-risk automated decision systems" used or proposed by state agencies. After all, algorithms are similar to recipes: they provide step-by-step instructions for accomplishing a complex task, and healthcare "recipes" are complex and carry profound consequences. Decision systems are deemed high-risk if they materially impact access to, or approval for, critical areas such as housing, education, employment, or healthcare.
The inventory requires a description of measures in place to mitigate risks, including the risk of inaccurate, unfairly discriminatory, or biased decisions. This includes performance metrics to gauge accuracy and risk assessments or audits for potential biases.
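As a purely illustrative sketch, a bias audit of this kind might start with something as simple as comparing model accuracy across patient groups and flagging disparities above a chosen tolerance. The metric, group labels, tolerance, and data below are assumptions, not a methodology prescribed by AB 2885.

```python
# Hypothetical sketch of one bias-audit metric: per-group accuracy disparity.
# Groups, tolerance, and records are illustrative assumptions, not a
# methodology mandated by AB 2885 or the Attorney General's advisories.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group_label, predicted, actual) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparity(acc_by_group, max_gap=0.05):
    """Flag if the accuracy gap between groups exceeds the chosen tolerance."""
    gap = max(acc_by_group.values()) - min(acc_by_group.values())
    return gap, gap > max_gap

if __name__ == "__main__":
    sample = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
        ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
    ]
    acc = accuracy_by_group(sample)
    gap, flagged = flag_disparity(acc)
    print(acc, f"gap={gap:.2f}", "review required" if flagged else "within tolerance")
```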
The California Attorney General’s legal advisories explicitly address algorithmic bias within healthcare AI. The Attorney General emphasises that AI systems must align with state anti-discrimination laws. These laws prohibit discrimination based on protected characteristics such as sex, race, religion, or disability, and their applicability extends to AI systems, even if the discriminatory impact is unintentional.
The Attorney General warns that AI systems making “less accurate” predictions about protected classes could be considered discriminatory, regardless of data availability.
Healthcare entities are required to proactively design, acquire, and implement AI solutions that prevent past discrimination from being embedded or amplified by new technologies, and must maintain documentation of their bias testing and mitigation efforts. This includes avoiding uses of AI that could lead to discriminatory outcomes, such as using past claims data to deny patient access or conducting cost-benefit analyses based on stereotypes that undervalue certain patient populations.
Informed Consent and Transparency Obligations
The principles of informed consent and transparency are woven into the fabric of healthcare. California's regulatory framework is actively extending these principles to artificial intelligence. The state aims to ensure that patients are fully aware when AI is involved in their care and have the means to understand and question its role.
The AI in Healthcare Act mandates specific disclosure requirements for health facilities, clinics, physicians’ offices, and group practices that use generative AI to communicate patient clinical information.
Communications must include a prominent disclaimer indicating that generative AI produced the content. The disclaimer’s placement and format vary depending on whether the communication is written (physical, digital, or continuous online interactions), audio, or video.
The communication must also provide clear instructions on how a patient can contact a human healthcare provider, an employee of the facility, or other appropriate person.
These disclosure requirements do not apply if the AI-generated communication is read and reviewed by a human licensed or certified healthcare provider before being sent. This provision aims to strike a balance between transparency and the efficiency benefits of AI.
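By way of illustration only, a provider organisation might operationalise this disclaimer-or-review logic along the following lines. The field names and disclaimer wording here are assumptions, and the statute's medium-specific placement and format rules are not reproduced.

```python
# Illustrative sketch of an AB 3030-style messaging gate. Field names and
# disclaimer wording are assumptions; the statute's requirements on the
# disclaimer's placement and format vary by medium and are not shown here.
from dataclasses import dataclass
from typing import Optional

DISCLAIMER = ("This message was generated by artificial intelligence. "
              "To reach a human member of your care team, contact the clinic.")

@dataclass
class OutboundMessage:
    body: str
    ai_generated: bool
    human_reviewer_id: Optional[str] = None  # licensed/certified provider who read the draft

def prepare_for_sending(msg: OutboundMessage) -> str:
    # The exemption applies only when a licensed or certified human provider
    # has read and reviewed the AI-generated communication before it is sent.
    if msg.ai_generated and msg.human_reviewer_id is None:
        return f"{DISCLAIMER}\n\n{msg.body}"
    return msg.body

print(prepare_for_sending(OutboundMessage("Your lab results are normal.", ai_generated=True)))
```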
In what in years past might rightly have been seen as the premise of a science fiction movie, California is also addressing AI's potential to impersonate healthcare professionals. Proposed legislation, such as Assembly Bill 489 (AB 489), seeks to explicitly prohibit AI and generative AI systems from misrepresenting themselves as titled healthcare professionals. This bill grants state boards the authority to pursue legal recourse against developers and deployers of AI systems that impersonate healthcare workers, reinforcing the principle that only licensed human professionals can provide medical advice or care.
The California Attorney General’s legal advisories further underscore the importance of patient transparency. The advisories state that healthcare providers are required to notify patients when AI technologies are used in diagnostic or treatment decisions, fostering trust and enabling informed patient choices.
Reimbursement and Coverage Considerations for AI-Powered Healthcare Services
The integration of AI into healthcare services introduces new complexities for reimbursement and coverage, particularly concerning the role of AI in utilisation management. California has taken steps to regulate this area, aiming to ensure that AI tools enhance, rather than impede, patient access to medically necessary care.
Regulating the role of AI
The Physicians Make Decisions Act directly limits the ability of healthcare service plans and disability insurers to use AI for utilisation review and management functions.
The statute prohibits health plans from denying, delaying, or modifying healthcare services based solely on artificial intelligence algorithms. The law explicitly mandates that human judgment must remain central to coverage decisions, with clear documentation of the decision-making process.
Any determination of medical necessity, leading to an approval, modification, delay, or denial of care, must be reviewed and decided by a licensed physician or qualified healthcare professional with expertise in the specific clinical issues involved. This ensures that AI tools serve as aids, not replacements, for clinical judgment and decision-making.
AI tools used in utilisation review must base their decisions on the enrollee’s medical or clinical history, individual clinical circumstances, and other relevant clinical information, rather than relying solely on group datasets. Healthcare organisations must maintain an auditable record of how individual patient data is weighted against population data in AI decision-making processes.
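One plausible way to keep such an auditable record, sketched below under assumed field names rather than any format prescribed by SB 1120 or the DMHC, is to log, for each determination, the individual clinical inputs considered, any population data consulted, the AI tool's recommendation, and the licensed clinician who made the final decision.

```python
# Sketch of an auditable utilisation-review record. The schema is an
# assumption for illustration; SB 1120 does not prescribe a file format.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class UtilisationReviewRecord:
    request_id: str
    individual_clinical_inputs: list   # enrollee-specific history and circumstances reviewed
    population_data_consulted: list    # group datasets, if any, and the weight given to them
    ai_recommendation: str             # the tool's output, recorded verbatim
    final_determination: str           # made by a licensed clinician, not the AI tool
    reviewing_clinician: str           # identity/licensure of the human decision-maker
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = UtilisationReviewRecord(
    request_id="UR-2025-0001",
    individual_clinical_inputs=["diagnosis code", "prior treatments", "current labs"],
    population_data_consulted=["guideline cohort outcomes (contextual only)"],
    ai_recommendation="approve",
    final_determination="approve",
    reviewing_clinician="physician-licence-XXXX",
)
print(json.dumps(asdict(record), indent=2))  # retained for DMHC audit or compliance review
```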
The law requires that AI tools, including their underlying algorithms, be open to inspection for audit or compliance reviews by regulatory bodies like the California Department of Managed Health Care (DMHC).
The DMHC is tasked with overseeing the enforcement of SB 1120, including auditing denial rates and ensuring transparency in AI-driven utilisation review processes. The law also imposes strict deadlines for authorisation requests, with administrative penalties for non-compliance.
These measures are intended to prevent inappropriate denials of benefits and ensure patients receive timely access to medically necessary services.
Enforcement Mechanisms and Penalties
Non-compliance with California’s healthcare AI regulations carries enforcement risks and potential penalties, including administrative fines, civil litigation, and criminal charges, as applicable under state law. State agencies and the Attorney General’s office possess broad authority to enforce AI-related laws in healthcare.
Violations of AB 3030's generative AI communication disclosure requirements are enforced through existing oversight channels: licensed health facilities and clinics are subject to enforcement under their respective licensing statutes, while physicians fall under the jurisdiction of the Medical Board of California or the Osteopathic Medical Board of California.
The DMHC and the California Department of Insurance (CDI) have the authority to assess administrative penalties for health plans and insurers that fail to meet the requirements of SB 1120, including timeframes for authorisation decisions or improper use of AI.
The California Privacy Protection Agency (CPPA) and the California Attorney General enforce the CCPA and CPRA. Penalties for non-compliance can be substantial, ranging from USD2,500 per non-intentional violation to USD7,500 per intentional violation or for offences involving the personal information of minors under the age of 16.
Individuals can bring private rights of action against entities that negligently release confidential medical information under the CMIA, seeking actual damages, nominal statutory damages of USD1,000, and/or punitive damages upon proof of wilful misconduct. Healthcare providers who knowingly and wilfully obtain, disclose, or use medical information in violation of the CMIA may be liable for an administrative fine of up to USD2,500 per violation. The California Department of Public Health can also assess administrative penalties, including USD25,000 per patient whose medical information was unlawfully accessed, used, or disclosed, with subsequent occurrences incurring USD17,500 per incident, and daily penalties for failure to report.
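To make the arithmetic concrete, the hypothetical calculation below applies one reading of that penalty structure (USD25,000 for the first occurrence per patient, USD17,500 for each subsequent occurrence) to an assumed scenario; it is an illustration, not legal advice on how penalties would be assessed.

```python
# Worked arithmetic for the Department of Public Health penalties described
# above, under one assumed reading: USD25,000 for the first unlawful
# access/use/disclosure per patient, USD17,500 per subsequent occurrence.
FIRST_OCCURRENCE = 25_000
SUBSEQUENT_OCCURRENCE = 17_500

def dph_penalty(occurrences_per_patient):
    """occurrences_per_patient: list with one occurrence count per affected patient."""
    total = 0
    for count in occurrences_per_patient:
        if count >= 1:
            total += FIRST_OCCURRENCE + (count - 1) * SUBSEQUENT_OCCURRENCE
    return total

# Hypothetical scenario: three patients affected, one of them twice.
print(f"USD{dph_penalty([1, 1, 2]):,}")  # USD92,500
```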
Practical Compliance Strategies for Healthcare Organisations
Navigating California’s healthcare AI regulatory landscape requires a proactive and multi-faceted compliance strategy. Organisations must integrate legal requirements into every stage of AI development, deployment, and ongoing operation.
Risk Assessment Frameworks for Healthcare Organisations
Healthcare organisations must take specific actions to assess the risks of their AI use, such as inventorying the AI systems they deploy, assessing each system for privacy, security, and bias risks, and documenting the human oversight applied to AI-assisted decisions, as illustrated in the sketch below.
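By way of illustration, an internal inventory entry of the kind AB 2885 contemplates for high-risk automated decision systems might be recorded as follows. Every field name and the example system are assumptions, not prescribed content.

```python
# Minimal sketch of an AI-system risk inventory entry, loosely modelled on
# the AB 2885 inventory of high-risk automated decision systems. All field
# names, categories, and the example system are illustrative assumptions.
AI_SYSTEM_INVENTORY = [
    {
        "system": "sepsis-risk-model",
        "use": "flags inpatients for early sepsis work-up",
        "materially_impacts_access_to_care": True,   # would trigger "high-risk" treatment
        "mitigations": [
            "clinician reviews every alert before action (human oversight)",
            "quarterly accuracy metrics reported overall and by patient subgroup",
            "bias audit with documented mitigation steps",
        ],
    },
]

def high_risk_systems(inventory):
    """Return entries that would warrant the fuller AB 2885-style reporting."""
    return [e for e in inventory if e["materially_impacts_access_to_care"]]

for entry in high_risk_systems(AI_SYSTEM_INVENTORY):
    print(entry["system"], "->", len(entry["mitigations"]), "documented mitigations")
```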
Compliance Checklists for Healthcare AI Developers
Developers of AI used within the healthcare sector should maintain compliance checklists covering, at a minimum, the disclosure, human-oversight, privacy, and bias-mitigation obligations described in this chapter; a simple sketch follows.
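The sketch below expresses such a checklist as data so that progress can be tracked programmatically. The item wording paraphrases obligations discussed in this chapter, and the status values are placeholders.

```python
# Sketch of a developer compliance checklist keyed to the statutes discussed
# above. Item wording paraphrases this chapter; statuses are placeholders.
CHECKLIST = {
    "AB 3030": "disclaimer and human-contact instructions on generative-AI patient communications",
    "SB 1120": "licensed-clinician review of any AI-informed utilisation decision; algorithms auditable",
    "CCPA/CPRA": "health and neural data handled as sensitive personal information",
    "CMIA": "access controls and use limits on medical information, including AI training data",
    "Anti-discrimination": "documented bias testing and mitigation before deployment",
}

status = {law: "pending" for law in CHECKLIST}
status["AB 3030"] = "complete"  # example update as controls are implemented

for law, requirement in CHECKLIST.items():
    print(f"[{status[law]:>8}] {law}: {requirement}")
```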
Strategies for Navigating Conflicting State Requirements
Adopt the “most stringent” standard
When state laws impose different requirements, organisations should generally adhere to the more stringent standard to ensure comprehensive compliance.
Develop a layered compliance approach
Organisations need to implement a compliance program that addresses all state requirements distinctly, recognising where California laws add specific obligations (eg, AB 3030’s disclaimers, SB 1120’s human oversight for utilisation review).
Conclusion
California’s healthcare AI regulatory landscape is rapidly taking shape amid ongoing innovation and debate. That evolutionary process occurs through a legislative and enforcement approach that prioritises patient safety, data privacy, algorithmic fairness, and informed consent. The state’s framework introduces unique and often more stringent requirements, including transparency in AI-generated patient communications, human oversight in health insurance utilisation review, and robust data privacy rights for health information.