AI’s advancement has transformed industries and raised legal challenges. In the USA, most AI-related legal obligations stem from existing frameworks rather than AI-specific statutes. Key legal areas include:
These principles provide a framework for addressing AI issues while comprehensive AI-specific laws are considered. At the federal level, following the change in administration, the authors expect further AI-related policy announcements after the publication of this guide.
AI has permeated various sectors, driving innovation and offering benefits to businesses and consumers – including in the legal, financial services, marketing and manufacturing industries. For further information, see 2.2 Involvement of Governments in AI Innovation, 9.1 AI in the Legal Profession and Ethical Considerations, 14.2 Financial Services, and 14.5 Manufacturing.
Federal and state governments have taken significant steps towards facilitating AI adoption and advancement.
Federal Government
Key achievements include:
The Department of Commerce and the Department of Energy have developed AI testbeds and model evaluation tools, while the National Science Foundation’s (NSF) ExpandAI programme promotes AI research at minority-serving institutions.
Strategic partnerships include a plan announced by the United States and the United Arab Emirates (UAE), which recognised AI’s potential to accelerate economic growth, transform education and healthcare, create new jobs and fund environmental sustainability. In early 2025, the Trump administration removed certain restrictions on the use and development of AI. Further AI policy announcements from the administration are expected.
The National AI Research Resource and the AI Talent Surge work together to improve AI research and development, promoting innovation, equitable access to AI resources and the integration of AI talent across industries.
State Governments
US state governments are promoting AI adoption. For instance, New Jersey’s Next New Jersey Program offers tax breaks to AI-related businesses, while California and Colorado have enacted AI laws to ensure transparency and prevent discrimination regarding the use of AI. These measures guide AI developers and users and attract global talent.
There is no federal law that specifically addresses AI. The federal government has taken a conservative approach regarding new AI-specific laws and is instead relying on existing laws, executive orders and agency rule-making. State governments, however, have been proactive in enacting laws. In 2024, 45 states, Puerto Rico, the Virgin Islands and Washington, DC introduced AI bills. Moreover, 31 states, Puerto Rico and the Virgin Islands have passed resolutions or enacted AI-specific laws. The result is a “patchwork” regulatory framework.
The Trump administration is likely to take a more hands-off approach to regulating AI compared to the past administration, except in critical areas such as defence and healthcare. The Trump administration has previously criticised the Biden administration’s approach as heavy-handed and hindering innovation.
On 10 May 2025, the Trump administration notified Shira Perlmutter, the Register of Copyrights and Director of the US Copyright Office, of the immediate termination of her role. This follows prior concerns from top technology company leaders about the proposed “heavy-handed” regulations stifling AI innovation and the constraints imposed by existing intellectual property laws on AI development. It also follows the publication of Part 3 of the Office’s report on Copyright and Artificial Intelligence, in which it was stated that the US Copyright Office expects that some uses of copyright-protected material for training AI tools would not be considered fair use. The change in leadership may signify a transformation in the US Copyright Office’s position on AI-generated works and related policy, reflecting a shift from caution to pro-AI development.
Further policy announcements are expected later in 2025.
Enacted Federal AI Laws
Various orders, directives and guidelines have been issued, as detailed in 3.3 Jurisdictional Directives.
Enacted State AI Laws
State governments have enacted AI laws with a particular focus on privacy, deepfakes, employment, telemarketing and transparency:
October 2022
The White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, outlining five principles to help guide designers, developers and deployers of AI in the design, use and deployment of automated systems, with the goal of protecting the public’s rights.
January 2023
The National Institute of Standards and Technology developed a framework to better manage risks to individuals, organisations and society associated with AI. This framework aims to enhance the integration of trustworthiness in the design, development, use and evaluation of AI products, services and systems.
September 2024
The US Department of Labor published its AI & Inclusive Hiring Framework website to support employers with the inclusive use of AI in hiring technology. The website is currently not available.
December 2024
On 6 December 2024, President-elect Trump named David Sacks as the Special Advisor for AI and Crypto.
President Trump’s Replacement AI Executive Order
On 23 January 2025, Trump issued the “Removing Barriers to American Leadership in AI” executive order, which was designed to enhance US global AI dominance, including by directing a co-ordinated team to submit an action plan within 180 days.
This topic is not applicable.
This topic is not applicable.
Issues Raised by Existing or Proposed State AI-Specific Laws
Lawmakers in 45 states introduced 635 bills relating to AI in 2024, with 99 of those being enacted. These bills centre on privacy, digital replicas, automated decisions, rights of publicity, telemarketing and transparency. These laws are discussed in detail in 3.2 Jurisdictional Law and 3.7 Proposed AI-Specific Legislation and Regulations.
Key Implications for Business
Because of the vast number of laws being enacted at the state level, it is challenging for businesses to stay informed of the various regulations. City-level AI laws, such as New York City’s Local Law 144, add further complexity. Businesses should adopt a comprehensive risk management programme to comply with such laws and their issues, including:
US State Comprehensive Privacy Laws
Twenty states have passed comprehensive privacy laws, 16 of which are in effect and four of which will come into effect within the next year. Some of these laws, such as the California Consumer Privacy Act (CCPA), regulate the automated processing of data and thus regulate the use of AI to process personal data in these states.
US AI Laws
California, Utah and Colorado have introduced AI-specific state laws, which include provisions that regulate the use of AI in processing personal data. These laws emphasise the need for transparency, accountability and fairness in AI applications, ensuring that AI systems do not infringe on individual privacy rights.
Key Highlights of Proposed Federal AI Laws
The No AI FRAUD Act
This protects individuals’ rights to control the use of their likeness and voice in AI-generated simulations, allowing lawsuits for violations. It was introduced on 10 January 2024.
The R U REAL Act
This requires telemarketers to disclose the use of AI to emulate human speech or text at the start of a call or message. It was introduced on 29 January 2024.
The Preventing Algorithmic Collusion Act
This bans the use or distribution of pricing algorithms that use non-public competitor data to facilitate collusion, and would empower antitrust authorities to audit and monitor such algorithms and enforce against violations. It was introduced on 30 January 2024.
The No Robot Bosses Act
This protects job applicants/employees from discrimination by AI hiring tools. It was introduced on 12 March 2024.
The Stop Spying Bosses Act
This forbids employers from collecting sensitive worker data or using automated decision systems to predict non-work behaviour. It was introduced on 15 March 2024.
The AI CONSENT Act
This requires companies to obtain informed consent from consumers before using their data to train AI systems, and directs the Federal Trade Commission (FTC) to set standards for disclosure, consent and de-identification of such data. It was introduced on 19 March 2024.
Key Highlights of Proposed State AI Laws
Algorithmic discrimination
Five states proposed bills that aim to prevent and remedy the harms of unfair automated decision tools. Requirements include impact assessments, disclosures, audits, notices, and alternative processes and enforcement powers for agencies and individuals.
Automated employment decision tools
Three states proposed bills that aim to prevent and remedy the adverse effects of automated employment decision tools on protected groups. The bills require bias audits, impact assessments, disparate impact analyses, disclosures and human oversight.
AI developers and deployers
Seven states proposed bills to hold developers to a duty of reasonable care to avoid algorithmic discrimination. They:
Synthetic media and disclosure
Eleven states proposed bills that aim to protect consumers, businesses and individuals from fraud, deception, harm and liability arising from synthetic media. Requirements include clear disclosure of the use of AI, as well as warnings of the potential legal consequences of misuse or abuse of AI.
User transparency
Three states have proposed bills that aim to improve transparency by mandating warnings regarding AI-generated output, notifications of AI-powered chatbot interactions, and disclosures regarding training data sources.
Consumer protection
Illinois and Pennsylvania have proposed bills that aim to prevent misleading AI use and to ensure compliance with AI-generated guarantees, respectively.
Insurance
Illinois has proposed a bill that limits AI use in adverse determinations relating to insurance claims and coverage, and mandates human oversight to ensure fairness.
Thaler v Perlmutter (DC Cir, 18 March 2025)
On 18 March 2025, the United States Court of Appeals for the District of Columbia Circuit affirmed decisions by a lower court and the United States Copyright Office that human authorship is required to obtain copyright protection in the United States, thereby foreclosing copyright registration for content solely generated by AI. However, the decision still leaves open the possibility of copyright protection for works generated through a combination of human input and AI (although the level of human input required is yet to be tested in the courts).
Thomson Reuters Enterprise Centre GmbH et al v ROSS Intelligence Inc (D Del 2025)
Thomson Reuters claimed that ROSS committed copyright infringement by copying its Westlaw headnotes and using them to train its AI models without authorisation. The US District Court for the District of Delaware initially denied summary judgment, but, in February 2025, granted Thomson Reuters’ motion, finding the headnotes copyrightable and copied substantially by ROSS. The court rejected ROSS’s fair use defence, noting the commercial use and harm to Thomson Reuters’ business. This case highlights that transforming text into an algorithm does not ensure non-infringement or fair use, and that the expansion of the market for in-licensing training data may strengthen infringement claims.
The key regulatory agencies that play a leading role in AI in the USA are set forth below.
The US Copyright Office (USCO)
The USCO oversees US copyright law, advises Congress, aids courts and executive agencies, and conducts studies on copyright issues. The USCO began a multi-part report in 2023 covering digital replicas and AI-generated outputs, based on research and feedback from over 10,000 stakeholders.
The US Patent and Trademark Office (USPTO)
The USPTO grants patents and trade marks, and advises on IP policy.
FTC
The FTC enforces laws prohibiting unfair competition and deceptive practices, including those related to data privacy and patent matters.
The Federal Communications Commission (FCC)
The FCC regulates interstate and international communications (including AI/robo communications).
State Agencies and State Attorneys General
These oversee data practices and AI applications within the state. State attorneys general or other representatives within the state government enforce state-specific data protection laws, investigate violations, and prosecute cases related to privacy and AI.
The key federal guidance issued in 2023 and 2024 is discussed below.
Binding Directives
FCC
On 8 February 2024, the FCC recognised calls made with AI-generated voices as “artificial” under the Telephone Consumer Protection Act.
USPTO guidance on AI-assisted inventions
Effective 12 February 2024, AI-assisted inventions require significant human contribution for patent protection.
USCO guidance on AI-generated works
This requires significant human authorship for copyright protection and disclosure of AI use in copyright applications, and recommends the adoption of a new federal law for unauthorised digital replicas.
The Department of Commerce’s Bureau of Industry and Security
Starting 15 May 2025, an export licence is required to transfer parameters for closed-weight AI models trained with 10²⁶ (one hundred septillion) or more operations. Violating this requirement risks breaching the Export Administration Regulations. Providing infrastructure as a service to train AI models outside specified countries is a red flag, as it may lead to the diversion or exportation of model weights to the entity’s ultimate parent, thereby violating the rule.
Non-Binding Directives
Multi-agency
In April 2023, the Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ), Equal Employment Opportunity Commission (EEOC) and FTC issued a joint statement addressing bias in AI systems. The statement was removed in January 2025.
DOJ
On 7 March 2024, the department issued a directive to assess AI risk in corporate compliance programmes.
The Cybersecurity and Infrastructure Security Agency
On 26 April 2024, the agency released guidelines for secure AI system development, emphasising “Secure by Design” principles and radical transparency.
The Department of Energy (DOE)
On 26 April 2024, the DOE released its Generative AI Reference Guide on the responsible use of AI.
The National Institute of Standards and Technology (NIST) Framework
See the NIST update in 3.3 Jurisdictional Directives, the “July 2024 Fact Sheet”. On 29 April 2024, NIST released four draft publications on AI risk management, secure software development, synthetic content mitigation and global engagement.
The Department of Housing and Urban Development
On 2 May 2024, the department released guidance addressing the use of AI to screen applicants for rental housing and advertisements for housing, credit and other real estate-related transactions on digital platforms.
The Commodity Futures Trading Commission (CFTC)
On 5 December 2024, the CFTC issued an advisory on the impact of AI integration on CFTC-regulated entities engaged in derivatives trading.
Texas Attorney General Settlement
On 18 September 2024, the Texas Attorney General’s Office settled with Pieces Technologies, Inc, a generative AI healthcare company, over allegedly deceptive claims about product accuracy and safety. The company must pay USD250,000, implement a compliance programme and avoid unsubstantiated claims about its AI capabilities. This sets a precedent for other regulators to scrutinise AI in healthcare, emphasising the need for accurate, transparent and evidence-supported marketing.
FTC Settlement
On 3 December 2024, the FTC settled with IntelliVision Tech Corp over misleading ads about its AI-powered facial recognition software’s accuracy regarding race, gender and ethnicity detection. The consent order prohibits IntelliVision from misrepresenting its software’s accuracy and efficacy and from claiming that it can detect spoofing without reliable testing.
Securities and Exchange Commission (SEC) Consent Order
On 18 March 2024, the SEC announced consent orders with Delphia (USA) Inc and Global Predictions Inc for false statements about the role of AI in managing their investment portfolios. Delphia must pay USD225,000, and Global Predictions must pay USD175,000. The SEC’s actions caution investors against relying on unsubstantiated AI claims and encourage them to conduct due diligence.
Standards include those that are self-imposed by industry leaders, those by industry organisations and those by governmental authorities.
In July 2023, the Biden administration announced that Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI committed to help move towards safe, secure and transparent development of AI technology. These commitments underscored three principles – safety, security and trust – and marked a critical step towards developing responsible AI.
The Artificial Intelligence Standards Committee of the Institute of Electrical and Electronics Engineers (IEEE) is responsible for standards that enable the governance and practice of AI as relates to computational approaches to machine learning, algorithms and data usage.
NIST, a non-regulatory agency within the US Department of Commerce, develops AI standards, including federal AI standards, and collaborates with government and industry to identify strategies and gaps in AI standards development. NIST has prepared publications to help improve the safety, security and trustworthiness of AI systems, including:
In April 2025, NIST issued a Plan for Global Engagement on AI standards, surveying both independent and governmental standards and identifying opportunities for international standardisation and collaboration.
Notably, the Executive Order on Removing Barriers to American Leadership in AI, issued by President Trump on 23 January 2025, did not mention NIST. The extent of NIST’s role regarding AI remains uncertain as further guidance from the Trump administration is awaited.
ISO/IEC 42001:2023
Introduced by the International Organization for Standardization (ISO) in December 2023, this standard outlines AI management system requirements. It guides organisations in developing, providing or using AI products or services, focusing on ethics, transparency and continuous improvement. Applicable to all industries, it promotes sound AI governance through the Plan-Do-Check-Act methodology, and offers a practical approach to managing AI-related risks and opportunities.
UNESCO’s Recommendation on the Ethics of AI
In November 2021, UNESCO released the “Recommendation on the Ethics of AI”, which aims to ensure that AI respects human rights, dignity and environmental sustainability, emphasising transparency, accountability and the rule of law. Adopted by 193 member states, it addresses AI’s rapid evolution and impact, aiming to prevent discrimination and human rights violations. Although not legally binding, it has influenced AI practices in US companies, with major firms such as Microsoft, Salesforce and Mastercard adopting its framework.
Federal and state governments are evaluating or using AI:
Governmental Operations
Authorities leverage AI to enhance efficiency and decision-making. Agencies such as the Department of Health and Human Services and the General Services Administration use AI to streamline operations, reduce costs and improve service delivery.
The Office of Management and Budget (OMB) published two memoranda in April 2025 updating Executive Branch AI acquisition and governance guidance. These guidelines emphasise risk management, transparency and responsibility for federal agencies buying and using AI technologies.
Criminal Justice System
Agencies such as the Department of Homeland Security, US Border Patrol and military organisations use AI to enhance security, protect privacy and gain military advantages.
Compliance With Laws
The SEC and the FTC use AI to detect fraud and financial misconduct, while the Internal Revenue Service (IRS) uses AI for tax audits.
Public Service Purposes
State legislatures fund AI initiatives to enhance public service delivery (eg, Ohio’s Medicaid programme savings, Florida’s AI customer service solution and Hawaii’s wildfire forecast system).
Criminal Investigations and Border Protection
Several government agencies use facial recognition and biometrics to support criminal prosecution and national security. This tech raises concerns about privacy, misidentification and surveillance, especially for marginalised groups. Biased algorithms necessitate stringent oversight to ensure privacy and fairness. See 7.2 Judicial Decisions and 11.2 Facial Recognition and Biometrics for more detail.
Recent judicial decisions and pending cases related to government use of AI have highlighted key issues concerning due process, fairness and transparency. Courts have primarily focused on procedural grounds, emphasising due process infringements. For example, in State v Loomis (Wis 2016), Eric Loomis challenged the use of correctional offender management profiling for alternative sanctions (COMPAS) AI risk assessment software in his sentencing, arguing due process violations. Loomis contended that the closed-source nature of the software prevented him from challenging its scientific validity and accuracy. The Wisconsin Supreme Court upheld the use of COMPAS, and the US Supreme Court declined to hear the case in 2017.
The case of Woodruff v City of Detroit (ED Mich 2023) challenges the Detroit police department’s use of facial recognition technology (FRT). The ACLU filed an amicus brief arguing that the reliance on flawed FRT tainted the investigation and failed to establish probable cause. FRT has higher false match rates for people of colour, women and young adults. The case is ongoing.
Use and Impact of AI in National Security
Presently, government agencies are using AI for surveillance, reconnaissance, cybersecurity, autonomous systems, decision support systems, and logistics and supply chain management. National security shapes laws and policy concerning AI, as emphasised by President Biden’s memorandum (released 24 October 2024) titled “Advancing the United States’ Leadership in AI; Harnessing AI to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of AI”. Therein, the USA stated that it aims to lead in the application of AI for national security functions.
The National Security Commission on AI (NSCAI) Report
Following the identification of AI’s importance in national security, Congress established the NSCAI from 2018 to 2021. The NSCAI’s report, released in March 2021, outlined strategies to defend against AI threats, responsibly use AI and compete in tech:
Generative AI (eg, ChatGPT) presents several IP challenges, including the ownership and protection of AI-generated content, the potential for IP infringement at an unprecedented scale, and the influence of AI tool providers’ terms and conditions.
Ownership and Protection
The USCO requires human authorship for copyright protection, thereby excluding purely AI-generated works. However, works with significant human creativity may be protected. The comic book “Zarya of the Dawn” is an example, with its text and arrangement being protected, but not its AI-generated images.
Patent Protection for AI Inventions
The USPTO has denied patents for AI-created inventions, emphasising the necessity of human inventorship. However, the USPTO is exploring the concept of AI-assisted inventorship and has sought public input on this matter.
Terms and Conditions of AI Tool Providers
OpenAI’s terms state that users own their input and, subject to compliance with such terms, are assigned rights to the output. However, OpenAI does not guarantee that the output is free from third-party IP infringement claims. While customary terms are still developing across the industry, other companies have followed a similar approach in their licensing.
IP Infringement Risks
Generative AI may infringe on existing IP rights if the training data includes copyrighted material that is not licensed or that is used outside the scope of a licence, and there is no applicable non-infringement or fair use defence.
Part 3 of the US Copyright Office’s report on Copyright and Artificial Intelligence, published on 9 May 2025, clarifies that the training of AI models on copyrighted materials generally meets the criteria for a prima facie case of copyright infringement. According to the Office, the multiple steps required to produce a dataset useful for generative AI “clearly implicate” the copyright owners’ right to control the reproduction of their works. These steps include the collection, curation and use of copyrighted works in training, as well as the deployment of the resulting model.
The report goes on to explain that not all uses of copyright-protected material for training will constitute fair use. The report emphasises that the increasing availability and adoption of licensing options for copyrighted works significantly weakens the argument that unlicensed use of copyrighted works for AI training can be justified as fair use. The existence of actual or potential licensing markets is a critical factor in the fair use analysis, as lost revenue from these markets constitutes market harm under the fourth fair use factor (which is the most important factor in the analysis).
Ongoing litigation against various generative AI providers highlights these risks.
US State Comprehensive Privacy Laws
See 3.6 Data, Information or Content Laws.
See 3.6 Data, Information or Content Laws. These state comprehensive privacy laws do not specifically address the rights to rectification and deletion, purpose limitation or data minimisation in the context of AI models.
AI is changing legal services, the rules and regulations promulgated or pending by organisations regulating the practice of law, and relevant judicial decisions. AI is used in litigation, including discovery and automated support services. Transactional attorneys are using AI tools to review and draft documents.
The American Bar Association (ABA)
The ABA’s Formal Opinion 512 guides lawyers using generative AI tools, addressing competence, confidentiality, communication, candour, supervision and fees. Lawyers must maintain technological competence, understand generative AI and protect client interests.
State Bar Guidance
In 2024, state bars in California, New York, Florida and Texas issued guidance on generative AI use, emphasising confidentiality, competence, non-discrimination and ethical compliance. Lawyers must consult experts, verify AI responses, and ensure proper billing and advertising practices.
Currently, no specific statutes address injuries or losses caused by AI. Courts apply the following traditional legal theories to determine fault when AI products cause harm.
Product Liability Legal Theories
Product liability law is based on three main theories: negligence, breach of warranty, and strict liability.
Negligence
The plaintiff must prove that the defendant owed a duty of care, breached this duty, and that the breach caused injury or damage to the plaintiff. Negligence can occur at any stage of the supply chain, including design, manufacturing, testing, assembly, distribution and sale. Claims may also involve failure to warn about product defects.
Breach of warranty
This requires a contractual relationship with the defendant. Warranties can be express (explicit promises about the product) or implied (automatic guarantees by law). Implied warranties include merchantability and fitness for a particular purpose.
Strict liability
Plaintiffs must prove that the product was sold in an unreasonably dangerous condition, reached the consumer without alteration, and caused injury or damage. Strict liability applies to inherently dangerous products such as AI-driven autonomous vehicles. Most states follow Section 402A of the Restatement (Second) of Torts, holding sellers liable for defective products regardless of due care or privity of contract.
Insurance
Insurance might mitigate some risks associated with AI tech for both consumers and manufacturers, though clear guidance is limited due to few disputes. Consumers may use self-driving cars, implicating automobile insurance. Consumers may also use IoT devices, which homeowners’ insurance may address. Insurance coverage levels should be high enough to address these risks.
Manufacturer liability for autonomous AI
Liability for AI tech acting autonomously, assuming AI is not a legal person, raises complex issues. Manufacturers may be held liable if the AI product fails to use reasonable care, as alleged in Nilsson v General Motors LLC (ND Cal 2018). Courts must determine how foreseeability applies to AI actions, assessing whether the AI’s actions were predictable and whether the manufacturer took adequate steps to prevent foreseeable harm.
Several proposed and enacted state laws aim to clarify the imposition and allocation of liability for actions taken by AI systems. These regulations emphasise that AI systems, despite their autonomous capabilities, do not absolve businesses from legal responsibilities.
Clarification of Liability
State laws are increasingly specifying that AI systems are subject to existing legal frameworks. For instance, the 2024 Utah statute provides that businesses cannot rely on AI-generated statements as a defence against violations of consumer protection laws. This means that inaccuracies, hallucinations or unsubstantiated claims made by AI chatbots during consumer transactions can still lead to legal repercussions.
Consumer Protection
See 3.5 US State Law for detail.
Anti-Discrimination
Even in the absence of specific AI anti-discrimination laws, discriminatory practices by AI systems remain illegal. See 3.2 Jurisdictional Law and 3.7 Proposed AI-Specific Legislation and Regulations for more detail.
Exceptions
The Illinois digital replica law provides specific exceptions. For example, data centres are excluded from the definition of “person”, thereby shielding them from liability under this law.
Algorithmic bias in AI systems presents technical, legal and regulatory challenges.
Technical and Legal Characterisations of Bias and Legislative Efforts
Algorithmic bias can arise at various stages of development. Technical sources include association, confirmation, historical, labelling, reporting, selection, sampling, automation, experimenter, and group attribution biases. See 3.2 Jurisdictional Law and 3.7 Proposed AI-Specific Legislation and Regulations for details on enacted and proposed laws.
Consumer Risk Exposure
AI bias poses risks in various sectors. In recruitment, it can lead to discrimination, in insurance to unfair premiums and coverage denials, and in healthcare to misdiagnoses for minorities.
Potential Liability for Companies
Companies face potential liability for AI bias through lawsuits and regulatory actions. In May 2022, the EEOC sued iTutorGroup for age discrimination, alleging that its AI software automatically rejected candidates based on age, violating the Age Discrimination in Employment Act of 1967 (ADEA).
Current Industry Efforts
Industry efforts to address AI bias include using diverse and inclusive data, conducting bias audits, employing transparent algorithms and developing ethical guidelines. Companies are also forming interdisciplinary teams and implementing bias mitigation techniques.
Regulatory Response
The FTC has issued guidance on preventing discrimination and ensuring transparency in AI algorithms. Companies must evaluate datasets, test algorithms for bias and embrace transparency, and should adhere to these guidelines to avoid legal challenges and ensure fair AI practices.
HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) protects patient health information in the USA by requiring healthcare providers and associates to ensure the continued confidentiality, integrity and security of protected health information (PHI). It grants patients access and amendment rights, and imposes safeguards against unauthorised access, with violations leading to fines and legal penalties.
State Health Privacy Laws
Several states have enacted laws that complement HIPAA. California’s Confidentiality of Medical Information Act (CMIA) requires patient consent to disclose medical information. Texas’ Medical Records Privacy Act demands safeguards for records and consent for disclosures. New York’s SHIELD Act mandates comprehensive data security measures for protecting health information.
State-Specific Biometric Data Laws
Illinois’ Biometric Information Privacy Act (BIPA) mandates written consent before collecting biometric data, prohibits its sale, and requires data retention policies. Texas’ Capture or Use of Biometric Identifier Act (CUBI) mandates notice and consent before capturing biometric identifiers, and requires reasonable care in data protection. Washington State’s law mandates notice, consent and data retention policies.
Potential Liability Under Existing Laws
Businesses using facial recognition and biometric data risk liability for privacy violations, data breaches and consumer protection issues. Inadequate security measures and misleading practices can lead to regulatory fines, legal penalties and reputational damage.
Automated decision-making tech is increasingly prevalent across various sectors, raising questions about regulatory compliance, consent and potential liabilities.
Tech Used and Applicable Regulations
Automated decision-making tech, including AI, machine learning (ML) and deep learning (DL), is used in recruitment and background checks. AI systems are designed to reason, learn and act in ways that emulate humans. ML, a subset of AI, extracts knowledge from past data to make predictions. DL, a subset of ML, uses neural networks for complex pattern recognition. Generative AI (GAI) uses DL to generate new content from learned patterns. Regulations vary by jurisdiction. In the USA, New York City’s Local Law 144 mandates bias audits. Federally, proposed bills such as PADRA and NOFAKES would regulate AI. States such as Utah, California, Colorado and Tennessee have enacted specific AI laws.
Risks to Companies
These tools can exacerbate workplace discrimination based on race, sex, disability and other protected characteristics, despite claims of objectivity. AI tools are trained on large datasets, often reflecting existing biases, which poses risks to companies.
The use of AI tech to replace human services introduces regulatory challenges concerning disclosure and consumer protection.
Chatbot and AI Regulatory Schemes
The proposed Algorithmic Accountability Act of 2023 mandates transparency in algorithms and bias assessments. The Bolstering Online Transparency Act (BOTA), enacted in California, prohibits the use of bots to mislead individuals about their artificial identity with the intent to deceive for commercial transactions or to influence votes. Users of bots have disclosure requirements. The California Attorney General enforces BOTA, which does not provide a private right of action, though violations are subject to California’s unfair competition laws, resulting in fines and equitable remedies.
Disclosure Requirements and Safe Harbours
Utah’s AI Consumer Transparency Law requires disclosure when AI interacts with consumers, enhancing transparency and protecting consumers from manipulation.
Tech for Undisclosed Suggestions and Consumer Manipulation
AI tech, including chatbots, can generate misleading or incorrect results, known as “hallucinations”, which can manipulate consumer behaviour.
AI tech introduces new risks that must be addressed in transactional contracts between customers and AI suppliers, particularly in AI-as-a-service models.
IP Risks
AI tech often involves IP issues. Contracts should include representations and warranties, ensuring that the AI does not infringe on third-party IP rights, and should include indemnification provisions to cover any claims.
Privacy Compliance
Given the evolving landscape of privacy laws, contracts must ensure compliance with all applicable privacy regulations to mitigate liabilities associated with data breaches.
System Functionality and Reliability
AI systems must function as intended without defects that could cause harm. Contracts should include warranties addressing the AI’s performance and disclaimers for any malfunctions. Service level agreements (SLAs) are essential for AI offered as part of Software as a Service (SaaS), ensuring service standards.
Limitation of Liability
These clauses should be carefully tailored to exclude certain high-risk areas, such as privacy breaches and IP infringement, to manage financial exposure.
Insurance and Escrow
Contracts should require adequate insurance coverage to protect against potential losses from AI system failures. Software escrow agreements can ensure access to the AI’s source code and underlying datasets if the supplier ceases operations.
Training and Use of Inputs
Contracts should be clear as to how user inputs may be used, and whether they may be used by the provider for training.
AI in Employee Hiring and Termination
AI assists employers in:
State-Specific Laws
See 1.1 General Legal Background and 11.3 Automated Decision-Making for state-specific laws applicable to hiring and termination practices.
Benefits to Employers
AI tools can reduce costs, save time, improve quality and increase diversity in hiring.
Harm to Employees and Employers
AI can amplify discrimination and bias based on protected characteristics, potentially excluding qualified candidates. Employers using AI without safeguards risk increased liability. Anti-discrimination laws apply to actions by both humans and AI.
AI tools are revolutionising how organisations evaluate employee performance and monitor work, including in remote settings.
Evaluating Employee Performance and Monitoring
AI tools help to:
Further, AI tools are being used to analyse employees’ communications, browsing history and email response times. Employers must comply with the Fair Labor Standards Act (FLSA) by accurately capturing all compensable time. AI also aids in task assignments and scheduling, but human oversight is essential to avoid mischaracterising compensable waiting time and travel time, ensuring accurate wage calculations and compliance with labour laws.
Benefits and Harm to Employees: Liability
AI tools streamline administrative tasks, offering quicker access to information and support. However, potential harms include job displacement and exacerbation of existing biases. See 11.1 Algorithmic Bias for legal liabilities from non-compliance. Employers must ensure that AI tools do not disproportionately impact protected classes, and provide equal retraining opportunities. Human oversight is essential to mitigate risks of unlawful retaliation and ensure legal compliance.
Digital platform companies in the car services and food delivery sectors are increasingly leveraging AI to enhance operational efficiency, improve user experience and streamline logistics.
Food Delivery
DoorDash uses Amazon Web Services (AWS) for a generative AI self-service contact centre to reduce response latency, increase testing capacity and improve operational efficiency. Other food delivery platforms use AI-powered predictive analytics and dynamic routing algorithms to optimise delivery routes.
Car Services
Amazon deploys generative AI to optimise warehouse operations and delivery routes, predicting item placement in warehouses to minimise travel distance and enhance delivery speed.
Employment and Regulation
Digital platform companies use AI to boost efficiency and customer experience. This impacts both employment and regulation. AI aims to enhance efficiency without replacing human workers, requiring employee upskilling to manage AI systems. Increased AI reliance demands robust data collection and management, often conflicting with data minimisation principles. Best practices include de-identifying data and regularly auditing data use to ensure compliance. While AI integration offers significant efficiency and job creation benefits, it requires careful management of employment practices and regulatory compliance.
The integration of AI in financial services is transforming the industry, but it also brings regulatory and ethical considerations.
Use of AI and Regulations
See 2.1 Industry Use for details on AI in the financial service industry. Companies leverage AI to enhance underwriting processes, customer interactions and fraud detection. AI analyses vast datasets, including utility payments and social media activity, to assess creditworthiness and personalise loan products. However, the integration of AI in lending is subject to stringent regulations to ensure fairness and transparency.
Key regulations are overseen by the following:
Risks to Such Companies From the Use of AI
The deployment of AI in financial services carries risks, primarily due to AI-induced bias from unseen factors in the data. This bias can make AI models unreliable or harmful, especially in lending and credit scoring. Historical data may replicate existing biases, leading to unfair outcomes such as automatic loan denials for marginalised communities. Additionally, flawed or insufficient data can negatively impact credit scoring, disproportionately affecting individuals with limited credit histories.
Biases in Repurposed Data
Biases in repurposed data pose substantial risks, including discriminatory practices. AI systems trained on biased historical data can perpetuate and even exacerbate existing disparities, such as racial or gender biases. In lending, this can result in marginalised communities facing automatic loan denials. Credit scoring models may inaccurately assess the creditworthiness of lower-income individuals or those with limited credit histories, leading to unfair outcomes.
Regulations Governing AI in Healthcare
AI in healthcare is subject to stringent regulations to ensure patient safety and privacy. The FDA regulates AI as part of Software as a Medical Device (SaMD), requiring pre-market approval, post-market surveillance, and adherence to quality standards. State-specific health privacy laws, such as California’s CMIA and Texas’ Medical Records Privacy Act, impose additional requirements on the handling of health information, including biometric data.
Patient Treatment Risks and Hidden Bias
AI in healthcare can lead to misdiagnosis and inappropriate treatment due to hidden biases in training data, causing disparities in care. Diverse and representative training data is essential to mitigate bias.
AI in SaMD and Robotic Surgery
AI tech, including SaMD and robotic surgery systems, enhances precision and efficiency but poses risks such as software malfunctions and cybersecurity vulnerabilities. Regulatory oversight ensures safety and efficacy.
Data Use and Sharing for Training Algorithms
The use of personal health information for training AI is governed by HIPAA, state health privacy laws and state AI laws, requiring patient consent, data minimisation and security measures.
Key Roles and Risks in Digital Healthcare
AI is crucial in diagnostics, personalised treatment plans and predictive analytics. Centralised electronic health record (EHR) systems improve data accessibility but pose risks of data breaches and cybersecurity attacks.
Natural Language Processing (NLP) and Regulatory Schemes
NLP in healthcare extracts and analyses unstructured data from clinical notes and patient records. Regulations such as HIPAA and FDA guidelines ensure the secure and ethical use of NLP in clinical decision support and patient monitoring.
The deployment of AI in autonomous vehicles presents numerous regulatory, ethical and technical challenges. The following discusses the regulations governing AI in autonomous vehicles, responsibility for accidents, and frameworks for AI algorithms, privacy, cybersecurity and vehicle performance.
Applicable Laws
Regulations are multifaceted, involving federal and state laws. The National Highway Traffic Safety Administration amended the Federal Motor Vehicle Safety Standards to ensure that vehicles with automated driving systems (ADS) provide the same levels of occupant protection as non-ADS vehicles. Oklahoma’s 2022 bill allows fully autonomous vehicles without human drivers, provided they meet conditions such as proof of financial responsibility and a law enforcement interaction plan. New York introduced a similar bill, highlighting the evolving legal landscape.
Allocating Responsibility
Determining responsibility for accidents involving autonomous vehicles presents novel legal challenges. Traditional legal theories are being tested, as seen in Nilsson v Gen Motors LLC (ND Cal 2018), where the plaintiff alleged negligence by the autonomous vehicle itself. The case raised questions about AI products as actors and applicable standards of care. In a related incident, a self-driving Uber vehicle struck and killed a pedestrian in Tempe, Arizona in March 2018. Uber was not criminally charged due to several factors, including the driver’s distraction and the evolving regulatory framework.
Privacy and Security Issues
Privacy and security issues involve compliance with global data protection laws. The FTC and NIST provide guidelines to ensure that AI algorithms are transparent, fair and accountable. Ethical considerations include preventing discriminatory outcomes and ensuring that decisions align with individuals’ reasonable expectations. The GDPR and state laws such as the CCPA and Colorado Privacy Act emphasise transparency, fairness and purpose limitation in data processing, posing challenges for organisations using AI.
The USA is actively developing regulations to govern the use of AI in manufacturing, focusing on product safety and liability, workforce impact, and privacy and security.
Product Safety and Liability
AI systems in manufacturing must comply with stringent product safety standards to prevent defects and ensure consumer protection. Key considerations include the following.
Workforce Impact
The deployment of AI in manufacturing also impacts the workforce, necessitating compliance with labour regulations.
Privacy and Security
There are no specific AI regulations related to manufacturing, though the Colorado and Utah state AI laws discuss the general use of generative AI.
Previous sections in this chapter have discussed the ethical guidance provided by the ABA regarding the use of AI in the legal profession. Here, the importance of confidentiality and client consent as emphasised by the ABA is reiterated.
Legal Profession
The ABA emphasises confidentiality and client consent in AI use. Lawyers must assess the risks of disclosing client information to AI, especially self-learning models. The ABA’s Formal Opinion 512 mandates understanding AI providers’ terms of service and data security to protect client data, and evaluating unauthorised access risks and information sensitivity.
Healthcare Sector
AI use in healthcare also demands strict confidentiality and patient consent, governed by HIPAA. Healthcare providers must comply with HIPAA to protect patient information. AI app developers, as business associates under HIPAA, must implement safeguards to protect PHI, ensuring that AI tools do not inadvertently disclose patient data and have security measures against unauthorised access.
The protection of IP in the AI process is a complex and evolving area, encompassing AI models, training data, input prompts and output.
Protection of Assets in the AI Process
See 8.1 Specific Issues in Generative AI for a detailed discussion of IP protection. Patents cover the practical application of algorithms, while copyrights protect software code. Trade secrets require confidentiality, and contracts enforce usage terms. Training data can be protected by copyright if it includes original human expression, such as in data selection and categorisation. Patents may protect novel processes for developing training sets. Contracts are effective for licensing training data.
Input queries are challenging to protect. Trade secret protection is generally unavailable because user inputs are unlikely to remain private and secret – key requirements for trade secret protection – especially if the AI tool is publicly available. Copyright and patent protection are limited unless a unique method for generating queries is developed. Output results are not protectable under current IP laws if the computer is considered the author or inventor. Contract law and trade secrets are the primary means of protection.
Influence of AI Tool Providers’ Terms and Conditions on Asset Protection
AI tool providers’ terms and conditions influence the protection of input and output data. These terms can define ownership rights, usage limitations and confidentiality obligations, thereby shaping the legal framework for protecting and monetising the input and output of generative AI tools. IP infringement may occur in the following cases:
Legal debate is ongoing regarding the role of AI tech in being recognised as an inventor or author for patents and copyrights.
Authorship for Patent Purposes
Judicial and agency decisions have clarified that AI cannot be recognised as an inventor for patent purposes. The USPTO states that AI-assisted inventions are not categorically unpatentable, but that the patent system is designed to protect human ingenuity. Only inventions with significant human contributions qualify for patent protection. This ensures that the system promotes human creativity and investment while incorporating AI in innovation.
Authorship for Copyright Purposes
Judicial and agency decisions have also explored whether AI tech can be recognised as an author for copyright purposes. The USCO maintains that copyright protection is limited to works produced by human authors. Judicial precedents such as Thaler v Perlmutter (DC Cir 2023) and guidance from the USCO 2025 Report affirm that copyright protection extends only to material originating from human creativity. AI-generated content, without significant human creative input, does not qualify for copyright protection. However, works containing AI-generated material may be eligible for copyright if there is sufficient human authorship, such as creative selection, arrangement or modification of the AI-generated content.
The protection of AI tech and data through trade secrets and similar IP rights involves understanding the legal framework and practical considerations.
Applicability of Trade Secret Protection
Trade secrets offer broad protection for AI, including algorithms, training sets, data compilations and software code. Unlike patents, trade secrets do not require public disclosure or proof of usefulness. Key points include:
Contractual Aspects of Trade Secret Protection
To maintain the confidentiality of trade secrets, companies must implement contractual measures:
Trade secret protection may be appropriate for AI tech but requires diligent management to prevent unauthorised access and disclosure. The assets will lose their trade secret protection if publicly or inappropriately disclosed.
The intersection of AI and IP law has prompted developments. The following addresses the scope of IP protection for AI-generated works, including case law and ongoing litigation.
Scope of IP Protection for AI-Generated Works
The USCO has clarified that copyright protection requires human authorship. AI-generated works without sufficient human contribution are not eligible for copyright. This principle is rooted in the Copyright Clause of the US Constitution and is reinforced by case law.
Case Law and Current Litigation
In Thaler v Perlmutter (DC Cir 2023), the court ruled that AI-generated works without human involvement do not meet the human authorship requirement. Additionally, the USCO has registered works incorporating AI-generated content, provided there is a clear human contribution, such as the selection, co-ordination or arrangement of AI-generated material.
When creating works and products using OpenAI’s services, several IP issues arise concerning the ownership and use of AI-generated content.
Ownership and Use of AI-Generated Content
OpenAI’s terms of service grant users ownership of the input they provide and assign rights to the output generated by the AI. However, this assignment is subject to compliance with OpenAI’s terms, which include restrictions on the use of the output. Users must review these terms carefully to ensure that their intended use of the output is permissible.
Copyright Protection
The USCO does not recognise works produced solely by AI as being eligible for copyright protection. See 5.2 Regulatory Directives and 8.1 Specific Issues in Generative AI for further detail.
Risk of Infringement
AI-generated content may infringe on existing IP rights. Ongoing litigation against generative AI providers such as OpenAI and Stability AI (the developer of Stable Diffusion) highlights the potential for copyright infringement claims based on unauthorised training on copyrighted works and the creation of allegedly infringing derivative works. These cases are currently being litigated.
Practical Considerations
Creators should consider limiting or avoiding the use of AI-generated content if they wish to secure copyright protection. Additionally, they should be aware of the potential for IP infringement and ensure that their use of AI complies with all relevant legal and contractual obligations.
The intersection of AI and antitrust law is raising critical issues for competition regulators, including “acqui-hires”, price fixing, algorithmic collusion, and the abuse of data-driven market power.
Acqui-Hires
Acqui-hires – the practice of acquiring companies primarily for their talent – are under scrutiny by competition regulators. These transactions can stifle competition by consolidating expertise and innovation within dominant firms, reducing opportunities for smaller firms and new entrants.
Price Fixing and Algorithmic Collusion
Price fixing and algorithmic collusion are major antitrust issues in AI markets. Section 1 of the Sherman Act prohibits any agreement that unreasonably restrains trade, including implicit agreements. Algorithms can enable competitors to co-ordinate pricing without direct communication, forming a hub-and-spoke cartel. The DOJ and FTC have suggested that pricing algorithms may be per se illegal price-fixing arrangements.
Abuse of Data-Driven Market Power
The abuse of data-driven market power is another critical issue. Firms controlling key inputs such as data and specialised chips can exploit bottlenecks, reducing innovation and competition. Regulators emphasise fair dealing, interoperability and consumer choice to mitigate these risks and ensure a competitive AI market.
There are currently no existing or proposed cybersecurity laws specific to AI.
The intersection of AI and environmental, social and governance (ESG) dimensions is becoming increasingly significant, as organisations strive to meet ESG reporting requirements and leverage AI for sustainability goals.
ESG Reporting Requirements
Securities regulators and ESG standard-setters have not mandated AI-related disclosures. However, AI impacts existing disclosures such as Form 10-K Risk Factors and ESG goals. Key areas include:
AI’s Impact on the Environment
ESG includes climate change, resource efficiency and energy consumption. AI aids in optimising renewable energy and climate predictions but consumes significant electricity. By 2027, AI servers may use as much power annually as a country such as Argentina, the Netherlands or Sweden, risking US power shortages.
Corporate Challenges and Responsibilities
AI’s high energy consumption challenges companies with zero-emissions pledges. They must enhance data centre efficiency or risk missing net-zero goals and facing accusations of greenwashing. Though costly, improving efficiency is essential given fiduciary duties to consider ESG factors affecting profitability and risk. Energy efficiency is vital as power shortages threaten AI innovation.
Regulatory and Political Landscape
Despite the anti-ESG movement, there is bipartisan support for slowing data centre growth. For instance, Georgia’s top Republican leaders advocate for pausing data centre incentives.
Laws
The Federal AI Environmental Impacts Act of 2024, introduced by Senator Markey, aims to assess and mitigate the environmental impact of, and to promote transparency and accountability in, AI development and use.
Effective AI governance addresses legal, regulatory and user risks, as well as generative AI errors. It should consider legal changes, data protection, litigation, IP issues and antitrust concerns. A robust governance framework is essential for compliance and future-proofing AI strategies. Clear communication about AI’s impacts is crucial to building trust and encouraging adoption.
A coherent AI governance structure can be benchmarked using responsible and ethical AI principles aligned with legal standards and clear usage policies. At the top level, these principles should be publicly shared to inspire confidence. The second level involves actionable rules for end users, detailing permissible inputs and outputs, with tailored policies for specific AI systems. The third level targets system owners and risk control functions, defining use cases, assessing risks and setting requirements based on risk levels.
To implement AI governance effectively, one should leverage existing governance frameworks and integrate them into a centralised AI governance structure. This approach avoids a complete corporate governance overhaul, allowing for the upgrade and incorporation of existing policies on information security, tech use and privacy into the new AI governance framework.
599 Lexington Avenue
New York
NY 10022
USA
+1 212 848 4000
information@aoshearman.com
www.aoshearman.com