Artificial Intelligence 2023 Comparisons

Last Updated May 30, 2023

Contributed By Moses & Singer LLP

Law and Practice

Moses & Singer LLP is a New York firm recognised in the USA and internationally for its experience in assisting companies entering the US market and navigating the constantly evolving requirements of US federal and state laws and the issues arising at the intersection of regulatory compliance, the distinctive features of US intellectual property law and business transactions. The firm handles AI, machine learning and the internet of things. Moses & Singer established a multidisciplinary data law practice focusing on data as a corporate asset: this provides a broad-gauge, cross-functional practice to guide clients in leveraging their data assets, internally and externally, as new technology and analytics platforms create new business opportunities – this includes using machine learning to design US-focused products, and negotiating and structuring contracts in the US style. A number of the firm’s lawyers are highly ranked by international legal directories.

Aside from sector-specific regulatory schemes, the treatment of AI continues to evolve under the distinctive requirements of general areas of US law, including:

  • data law, including monetisation and internal business operations;
  • healthcare law;
  • product liability, tort, strict liability – in combination with machine learning;
  • intellectual property;
  • inconsistent and overlapping state laws for privacy, consent and data processing;
  • predictive analytics for equipment repair before failure;
  • financial services – evolving cryptocurrency practices;
  • employment law and US anti-discrimination laws at federal and state levels, covering decisions made on an algorithmic as well as a human basis;
  • criminal law – facial recognition and surveillance technology;
  • consumer protection law regarding consumer uses of generative AI.

The key industry applications of AI and machine learning are outlined below.

Healthcare

  • COVID-19 accelerated adoption, development and application of AI to diagnose and treat patients – decreasing the time between diagnosis and treatment and speeding adoption and deployment of telehealth.
  • Data analytics, wellness and HIPAA data, preventative healthcare and expansion of revenue.
  • Insurance, reimbursement and payment characteristics of US healthcare system.
  • Digital healthcare technologies.
  • Use of Solid Web 3.0 protocol for controlling data access and use.
  • Robotics – in surgeries, patient care.
  • Analytics technologies – data fabrics for AI on disparate databases.
  • Plan of care adherence.
  • Administrative tasks – claims processing and records management.
  • The common legal issues raised by AI in healthcare are accountability for mistakes, the transparency of the AI being used, and privacy concerns about the data being collected.

Financial Services

  • Financial services are regulated at the federal and state level in the US, and states have different rules that require specific analyses. 
  • Detection of insider trading based on non-obvious factors.
  • Enhancing accuracy and efficiency of credit checks.
  • Analytics of historical data for better risk assessments and increased fraud protection.
  • Developing trading strategies and executing trades.
  • Analysing first-party and third-party data in combination.
  • As with healthcare, AI in financial services raises legal concerns regarding transparency and privacy.

Aerospace and Defence

  • Drone operations.
  • Satellite and image interpretation.
  • Advanced technology development and software modelling.
  • Maintenance scheduling and predictive analytics.
  • Sustainability studies for energy efficiency.
  • Supply chain management.
  • Determining which technology may be exported and which may not.

Emerging Technology

  • Emerging AI and machine learning technologies are used across industries:
    1. chatbots, including virtual assistants;
    2. AI for suggestions in employment decisions about hiring and performance evaluations – workflow tools that “recommend” or “analyse” business engagements and schedules;
    3. facial and image recognition by government entities, law enforcement and private establishments for identity verification, exclusion from property, predictive policing, and identification of child sexual abuse material (see 8 Government Use of AI); and
    4. machine learning for operational and administrative efficiency when implementing services or products within the internet of things.

Overview of the US Federal Landscape of AI Regulation

While Europe’s AI regulatory landscape is beginning to take shape, most notably through the EU Artificial Intelligence (AI) Act, which is expected to pass later in 2023, the US AI regulatory landscape at the federal level remains unclear. To date, the US federal government has not proposed or considered any US equivalent of the EU’s AI Act, nor has it set forth any specific policy rationale for an expected federal AI law or regulation.

The proposed US federal data privacy bill – the American Data Privacy and Protection Act (ADPPA) – sets out some rules for AI and automated decision-making tools. The ADPPA includes risk assessment obligations for covered businesses and several other algorithmic governance obligations that are generally similar to those under existing US state laws that govern AI systems (see 3.3 US State Law).

There is no applicable information in this jurisdiction.

There is no applicable information in this jurisdiction.

US states have been actively proposing and enacting comprehensive state data privacy laws that regulate the use of automated decision-making tools with “legal or other significant effects” for individuals, and/or AI-specific laws that regulate how AI systems can be used in the context of employment. Generally, existing and proposed state comprehensive data privacy laws (see the examples below) grant consumers the right to opt out of “profiling” or processing activities that use automated decision-making (ADM) techniques, which often use algorithms to analyse data, and require covered entities to make certain disclosures to consumers and/or conduct data protection impact assessments (DPIAs). Examples of states taking action to regulate AI applications under their comprehensive data privacy laws include the following.

California

California was the first US state to enact a comprehensive consumer data privacy law – the California Consumer Privacy Act (CCPA) – as amended by the California Privacy Rights Act (CPRA), which took effect on 1 January 2023.

  • CCPA/CPRA gives consumers the right to opt out of a business’s use of “automated decision-making technology.” This CCPA/CPRA right includes the right to opt out of any “profiling”, which would cover certain automated decision-making used to review or evaluate, for example, a consumer’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, or location.
  • CCPA/CPRA also requires businesses to conduct a privacy risk assessment (similar to other state privacy laws’ Data Protection Impact Assessment requirements) for processing of consumers’ personal information that presents a “significant risk” to consumers’ privacy or security.

On 30 January 2023, the California legislature introduced Assembly Bill 331 (AB 331), which aims to regulate the use of automated decision tools (ADTs).

If passed, AB 331 would require a deployer (an entity that uses ADTs to make certain decisions of legal significance) and the ADT developer to perform an impact assessment for any ADTs used. The impact assessment would have to include, among other things, a statement of the purpose of the ADT and its intended benefits, uses and deployment contexts. Both the deployer and the ADT developer would have to provide the impact assessment to the California Civil Rights Department within 60 days of its completion, on or before 1 January 2025, and annually thereafter. AB 331 would also grant California residents the right to opt out of the use of an ADT, create a private right of action for violations of the bill, and prohibit a deployer from using ADTs in a manner that contributes to algorithmic discrimination.

Colorado

On 7 July 2021, Colorado enacted the Colorado Privacy Act (CPA), the state’s comprehensive consumer data privacy law, which takes effect on 1 July 2023.

  • The CPA grants Colorado consumers the right to opt out of the processing of their personal data for the purpose of “profiling” in furtherance of decisions that produce legal or similarly significant effects. Such a decision is one that results in the provision or denial of financial services, housing, insurance, educational opportunity, criminal justice, employment opportunity, health care services, or access to certain essential goods or services. Like other states’ comprehensive consumer privacy laws, the CPA requires a controller to conduct and document a data protection impact assessment (DPIA) if the processing of a consumer’s personal data creates a “heightened risk of harm” to a consumer.
  • Under the CPA, covered businesses that use automated decision-making (ADM) would need to ensure that the design and use of profiling ADM tools do not create a “heightened risk of harm”, and conduct DPIAs to be made available to the state regulator upon request. The CPA and its final regulations, issued on 15 March 2023, clarify that processing activities presenting a “heightened risk of harm” include profiling that presents a reasonably foreseeable risk of:
    1. unfair or deceptive treatment of, or unlawful discriminatory impact on, consumers;
    2. financial or physical injury to consumers;
    3. a physical or other intrusion upon consumers’ privacy that would be offensive to a reasonable person; or
    4. other substantial injury to consumers.

Existing or proposed state laws regulating the use of AI systems in the context of employment require covered employers to conduct audits on their AI systems for any discriminatory impacts that may be “harmful” to job applicants, provide certain statutorily required notices to the applicants about their uses of AI systems in the hiring process, and permit the applicants to exercise certain rights granted under the laws. For example:

Illinois

The Illinois Artificial Intelligence Video Interview Act (the Act), which took effect in January 2020, established the parameters for employers using AI in their hiring process. The Act was amended effective 1 January 2022 to add a reporting requirement for employers who use video-recorded interviews. The Act establishes notice, consent, confidentiality and data destruction responsibilities for employers who use AI technology to evaluate job candidates in Illinois. Specifically, a covered employer must notify each applicant before the interview that an AI system may be used to analyse the interview.

New York

In December 2021, New York City passed the first law in the US (albeit at the municipal level) – Local Law 144 – that mandates employers to conduct bias audits of AI-enabled tools used for employment decisions. The law took effect on 1 January 2023 and imposes notice and reporting obligations on NYC employers. Specifically, Local Law 144 requires employers who use automated employment decision tools (AEDTs) to, among other things, conduct a bias audit (by an independent auditor) within one year of the use of the AEDTs.

Federal

Federal court cases in the US have interpreted the US Patent Act to require a human inventor for an invention to be eligible for patent protection, because the definition of “inventor” is limited to a “natural person.” The United States Supreme Court declined to consider the lower court case that was appealed to it, leaving the ruling below in place and setting the standard for AI inventorship in the United States.

The Supreme Court also issued two decisions that sidestepped the question of whether a technology company’s machine learning algorithm could subject the company to liability for the algorithm’s output notwithstanding Section 230 of the Communications Decency Act. The Court instead decided those cases on alternative grounds without reaching the Section 230 liability issue.

In lower federal courts, there are also pending cases regarding intellectual property and AI, including:

  • copyright eligibility of a two-dimensional artwork created by an AI machine; and
  • AI system liability for copyright infringement, among other claims, when the AI system is trained using work protected by copyright without the copyright holder’s authorisation, and in some cases producing work “in the style of” the copyrighted work.

Cases addressing AI applications have thus far generally not discussed definitions of AI, instead applying existing statutes to reach conclusions before any analysis of the AI itself is needed. This is especially true in the patent and copyright cases, where human inventorship or authorship is required under the definitions of the Patent Act and the Copyright Act.

The United States Department of Commerce, acting pursuant to the National Artificial Intelligence Initiative Act (NAIIA), through the National Institute of Standards and Technology (NIST) and the National Artificial Intelligence Advisory Committee (NAIAC), has been tasked with developing a voluntary risk management framework for trustworthy AI systems and with advising the President and other federal agencies on key issues concerning AI.

The US Federal Trade Commission (FTC), acting pursuant to Section 5 of the FTC Act, as well as the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), seeks to investigate the use of biased algorithms and create compliance standards for companies to follow. In the meantime, the FTC has been issuing AI guidance addressing practices that it regulates as “unfair or deceptive”.

The US Food and Drug Administration (FDA) is responsible for regulating medical devices in the USA. AI companies developing digital health products should recognise how recent regulatory changes may affect them and that the FDA is engaging industry to further refine its oversight approach. The FTC has issued recent guidance around AI and ML, and clarified through its enforcement actions and press releases that AI may pose issues that run afoul of the FTC Act’s prohibition against unfair and deceptive trade practices.

The National Security Commission on Artificial Intelligence (NSCAI) and the Government Accountability Office (GAO) advise the government to take certain actions at the domestic level to protect the privacy and civil rights of US citizens in the government’s deployment of AI.

Each of the US federal agencies that regulate AI set forth different definitions for “artificial intelligence”, “machine learning”, or “automated decision-making” as explored below.

NAIIA

The NAIIA defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” AI systems use machine and human-based inputs to:

  • perceive real and virtual environments;
  • abstract such perceptions into models through analysis in an automated manner; and
  • use model inference to formulate options for information or action.

FTC

The US Federal Trade Commission (FTC) recognises that “artificial intelligence” is an ambiguous term in the context of the US AI regulatory landscape. Nonetheless, the FTC generally uses “artificial intelligence” to refer to a variety of technological tools and techniques that use computation to perform tasks such as predictions, recommendations and decisions.

FDA

The US FDA has broadly defined artificial intelligence as the science and engineering of making intelligent machines, especially intelligent computer programs, and recognises that AI can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on “if-then” statements, and machine learning.

Machine learning is an artificial intelligence technique that can be used to design and train software algorithms to learn from and act on data. Software developers can use machine learning to create an algorithm that is “locked” (so that its function does not change), or “adaptive” (so its behaviour can change over time based on new data). Some real-world examples of artificial intelligence and machine learning technologies include:

  • an imaging system that uses algorithms to give diagnostic information for skin cancer in patients; and
  • a smart sensor device that estimates the probability of a heart attack.
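
The “locked” versus “adaptive” distinction above can be illustrated with a short sketch. The following Python example is purely hypothetical (the class names, toy threshold logic and example readings are the authors’ illustration, not drawn from FDA guidance or any real device); it simply shows why an adaptive algorithm’s behaviour can drift after deployment while a locked algorithm’s cannot.

```python
# Hypothetical illustration only: a toy "risk score" comparing a locked
# algorithm (parameters frozen after initial training) with an adaptive
# algorithm (parameters continue to update on new data).

class LockedModel:
    """Behaviour is fixed at deployment; the same input always yields the same output."""

    def __init__(self, training_values):
        # The threshold is learned once from the initial training data and never changes.
        self.threshold = sum(training_values) / len(training_values)

    def predict(self, value):
        return "high risk" if value > self.threshold else "low risk"


class AdaptiveModel:
    """Behaviour can change over time as new real-world data is observed."""

    def __init__(self, training_values):
        self.values = list(training_values)
        self.threshold = sum(self.values) / len(self.values)

    def observe(self, value):
        # Each new observation shifts the learned threshold, so the model's
        # outputs can drift after deployment.
        self.values.append(value)
        self.threshold = sum(self.values) / len(self.values)

    def predict(self, value):
        return "high risk" if value > self.threshold else "low risk"


if __name__ == "__main__":
    training = [0.2, 0.4, 0.6]
    locked = LockedModel(training)
    adaptive = AdaptiveModel(training)

    # Post-deployment data seen only by the adaptive model.
    for new_reading in [0.9, 0.95, 1.0]:
        adaptive.observe(new_reading)

    print("locked:", locked.predict(0.5))      # "high risk" (threshold stays at 0.4)
    print("adaptive:", adaptive.predict(0.5))  # "low risk" (threshold has drifted to ~0.68)
```

This drift is what distinguishes the two design choices from an oversight perspective: a locked algorithm can be validated once, whereas an adaptive one requires ongoing monitoring.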

NIST

On 26 January 2023, NIST released Version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF 1.0). The AI RMF 1.0 was developed in collaboration with the private and public sectors to incorporate trustworthiness considerations in the design, development, use, and evaluation of AI systems, products and services. It is a guide for managing the risks associated with the use of AI systems that consists of two parts. Part 1 discusses how organisations can frame the risks associated with AI systems and describes the intended audience. Part 2 of AI RMF 1.0 sets forth the “core” of the framework and describes four specific functions to help organisations address the risks of AI:

  • govern;
  • map;
  • measure; and
  • manage.

The AI RMF 1.0 recommends that organisations and boards follow a structured approach to managing AI-related risks, which includes five components:

  • risk governance;
  • risk assessment;
  • risk mitigation;
  • risk communication; and
  • monitoring and review.

NAIAC

The NAIAC is focused on advising the President and the government on topics related to the NAIIA, including the progress of its implementation and the current state of the USA’s competitiveness in AI.

FTC

Over the last three years, the FTC has issued several non-binding AI guidelines to help organisations using AI systems avoid its enforcement scrutiny pursuant to its authority under Section 5 of the FTC Act, including “Keep Your AI claims in check”, issued on 27 February 2023. The FTC’s AI guidelines demonstrate its focus on the use of AI systems and suggest the following “best practices”:

  • Avoid using data sets that are missing information from certain populations.
  • Test algorithms to ensure there are no discriminatory outcomes.
  • Be honest about the use of AI systems and what decisions are made using them.
  • Be honest about what data is used for AI systems.
  • Avoid overstating or understating what an algorithm or AI can deliver.

FDA

The US Food and Drug Administration (FDA) issued the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan from the Center for Devices and Radiological Health’s Digital Health Center of Excellence. Traditionally, the FDA reviews medical devices through an appropriate pre-market pathway, such as pre-market clearance (510(k)), De Novo classification, or pre-market approval. The FDA may also review and clear modifications to medical devices, including software as a medical device, depending on the significance or risk posed to patients of that modification.

US federal agencies have only recently increased their regulatory scrutiny on the use of AI systems. To date, the FTC, often with the US DOJ’s cooperation, has been most active in its AI-related enforcement activities. These enforcement actions, and related settlements, can result in regulatory penalties and/or forced deletion of the personal data collected and used to build algorithms and AI or machine learning models, as well as the destruction of the algorithms and AI models themselves.

For example, on 11 January 2021, the FTC reported that it settled with Everalbum for its “unfair or deceptive” use of facial recognition technology. Everalbum allowed its users to store and organise their photos and videos by uploading them to its cloud-based servers, and used its users’ photos and videos to develop facial recognition technologies that it marketed to certain customers under a differently named platform. Under the settlement, the company was required, among other things, to destroy all algorithms and “models” it developed using its users’ photos and videos.

In 2019, the FTC announced its settlement with Facebook, Inc. (now Meta) in the matter of Cambridge Analytica, LLC. Under the settlement order, a USD5 billion penalty was imposed on Meta.

The proposed and existing legislation and regulations relating to AI seek to set out regulatory frameworks that:

  • ensure honesty and transparency about the use of AI systems, including algorithmic and machine learning models, in an organisation’s products or services;
  • promote organisational safeguards and oversight of AI systems, including the use of personal data for training;
  • prevent unfair or inequitable results (ie, biases or discriminatory effects with “legal or other significant effects”) impacting consumers and classes or characteristics protected in the US, such as race, religion, national origin, sex, gender identity, sexual orientation, familial status, sources of income, or disability; and
  • grant AI-related rights for affected individuals such as consumers and job applicants, such as the right to opt out of automated decision-making or hiring processes using AI systems.

Considering these regulatory focuses under proposed and existing legislation and regulations (state or federal), impacted organisations must review and audit their use of AI systems, and implement internal safeguards and oversight policies and procedures to ensure the transparency and integrity of their AI systems in compliance with the applicable AI laws or regulations. For more details, see 3 Legislation and Directives and 5 AI Regulatory Regimes.

The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have created two new foundational standards for AI:

  • ISO/IEC 22989  – establishes common terminology and describes concepts related to AI systems, covering a wide range of technologies and containing over 100 commonly used terms like explainability, controllability, transparency, bias, test data, dataset, validation data, and trained model; and
  • ISO/IEC 23053 – describes a generic framework for using machine learning technology and explains system components and their functions in the AI ecosystem.

The Institute of Electrical and Electronics Engineers (IEEE) Standards Association, through its Artificial Intelligence Systems Committee, creates standards to prioritise ethical considerations when developing and using AI.

See 5.3 Regulatory Objectives for discussion of NIST’s AI Risk Management Framework.

So far, international standard-setting bodies have not had a great impact on business in the US. While the ISO/IEC’s goal in creating common terminology and concepts may be to help harmonise AI regulation internationally, it remains to be seen whether standards from organisations in other jurisdictions, such as the European Telecommunications Standards Institute (ETSI) and the UK’s AI Standards Hub, will in fact be harmonious, and if and when they can be squared with the currently mismatched pace of AI regulatory development between the US and other jurisdictions.

Governments are using AI applications and technology in many of the same ways as private industry, particularly facial recognition. Government use of AI also extends far beyond the ways in which it is used by private industry, given the government’s role in law enforcement, prosecution and sentencing; the administration of public benefits; regulatory enforcement; and providing information to and interacting with citizens. Examples of government use of AI include:

  • Law enforcement:
    1. facial recognition to locate and identify people like perpetrators, victims and witnesses of crime;
    2. predictive policing (predicting information or trends about crime or criminality in the past or future, including based on the characteristics or profile of any person(s) likely to commit a crime, the identity of any person(s) likely to commit crime, the locations or frequency of crime, or the person(s) impacted by predicted crime);
    3. reduce paperwork and human report writing through automatic data capture;
    4. use of robotics and remote/virtual policing; and
    5. identifying occurrence and location of gunshots;
  • Judicial system:
    1. courts may use AI to assist in sentencing and probation decisions; and
    2. private litigants and litigation funders may use machine learning to predict likelihood of success or failure.
  • Public benefits and housing:
    1. use of AI to determine eligibility for public benefits; and
    2. use of facial recognition and biometric identifiers to verify identity of public benefit recipients or public housing residents.
  • Regulatory enforcement:
    1. US agencies, including the Securities and Exchange Commission, Treasury and IRS use algorithmic inferences to help identify and evaluate potential fraud and enforcement cases and direct limited resources as needed.

While governments should carefully consider and address the potential risks of implementing AI technology before doing so, governments often face obstacles to adopting new technology generally, including in the AI arena, such as a lack of specialised AI skills and management, budget priorities and constraints, legacy cultures and established practices and processes, and the selection of trustworthy AI that meets the high standards placed upon them by the public. The inherent risks of government use of AI are the same as those discussed in 12 General Technology-Driven AI Issues.

There have been judicial decisions that have pushed back on the government’s use of AI, including the use of facial recognition, predictive tools and recidivism algorithms in law enforcement and criminal justice, and the use of AI for public benefits eligibility. But courts have also upheld the government’s use of AI for risk scoring, sentencing and parole. Even in cases where courts have ultimately upheld the government’s use of AI, there have been multiple cases requiring the government to divulge the source code of the AI or machine learning system to the citizens challenging the government’s AI-supported decision.

National security concerns include keeping the USA a leader in the development and use of AI, retaining military superiority and restricting foreign countries from applying AI to misuse the data of US citizens. To address these national security considerations, the NSCAI (see 5.1 Key Regulatory Agencies and 5.3 Regulatory Objectives) proposed that the USA increase export controls on EUV and ArF lithography equipment destined for China and grant the Treasury the authority to mandate CFIUS filings for non-controlling investments in AI from China, Russia and other competitor nations.

Another consideration is protecting against the collection of US citizens’ data by foreign AI. For example, former President Trump banned WeChat and TikTok in 2020, though those bans were not upheld. Several states have since proposed or enacted their own TikTok bans, however, and President Biden signed orders that require the Department of Commerce to launch national security reviews of any apps that have links to foreign adversaries.

In addition:

  • The Department of Defense (DOD) has developed an AI Strategy focused on using AI to advance the USA’s security and prosperity, delivering AI-enabled capabilities for key missions, partnering with leading private sector technology companies and academia, cultivating an AI workforce and leading in military ethics and AI safety. DOD also updated its autonomous weapons policy in 2023 to account for AI’s “dramatic” future role.
  • DARPA is investing more than USD2 billion in new and existing programmes such as automating critical DoD business processes; improving robustness and reliability of AI systems; enhancing security and resiliency of machine learning technologies; and reducing power, data, and performance inefficiencies.
  • AI is used across the intelligence community and the Department of Homeland Security, and it is increasingly playing a role in supply chain management and national security – for predicting and managing costs, compliance and tracking, and for responding to shocks from supply chain disruption.
  • AI is also used to ensure supply chain transparency to reduce risks to national security, such as in the space industry, where there are overlapping civilian and military priorities and in which only about half of the 10,000 companies that participate in the market are based in the US.

Generative AI, which is a subset of AI that generates original content by learning patterns from existing data, raises a variety of issues for lawyers.  These include:

  • Deceptive content: Misuse of generative AI can create fake news and deceptive content that is difficult to differentiate from authentic content. 
  • Bias and fairness: Because generative AI systems learn from existing data, biases present in training data can be magnified, raising concerns of discrimination (see 12.1 Algorithmic Bias).
  • Data privacy and security: Large datasets used in generative AI raise privacy and security concerns.
  • Transparency: Generative AI systems operate in a black box. Even when AI-generated outputs are reliable and accurate, the process for achieving the output is opaque. Such a lack of transparency raises concerns about accountability, trust, due process and the ability to explain AI-generated outputs in legal contexts (see 12.5 Transparency).
  • Intellectual property: Generative AI models rely on vast amounts of data, including intellectual property content. Ownership of generated content derived, in part or in whole, from intellectual property content is debated (see 16 Intellectual Property).

Scientists, academics and policymakers are working on developing ethical guidelines emphasising transparency in the deployment of generative AI. These include the Blueprint for an AI Bill of Rights promulgated by the White House Office of Science and Technology Policy, as well as various state-level directives (see 5 AI Regulatory Regimes and 6 Proposed Legislation and Regulations).

AI has the potential to radically alter the practice of law. If leveraged properly, it can empower lawyers to focus more on high-value tasks and deliver better client service. 

In the litigation context, AI is being used and developed to perform the following tasks:

  • discovery and document review: analyse and organise large volumes of documents, reducing costs and improving accuracy;
  • legal research: identify relevant case law, statutes and secondary sources, enhancing the speed and quality of research;
  • case prediction and analytics: analyse case data to identify patterns and provide insights on potential outcomes, aiding litigation strategy; and
  • e-discovery and data management: automate the identification, classification and review of electronically stored information, reducing time and costs.

AI has the following capabilities in the non-litigation context:

  • contract drafting and analysis: generate initial drafts of contracts and assist in reviewing and analysing contractual terms for accuracy and consistency;
  • due diligence: automate due diligence processes by analysing large amounts of data, identifying potential risks and extracting relevant information;
  • contract management: assist in contract management by organising, tracking and flagging key contract terms, deadlines and obligations;
  • regulatory compliance monitoring: monitor compliance with complex regulatory frameworks by tracking changes in laws and regulations and flagging potential issues; and
  • intellectual property management: perform patent searches, trademark analysis and intellectual property management by automating searches and providing insights into prior art.

The use of AI in the practice of law creates a bevy of novel ethical considerations. These include:

  • Competence and supervision: Lawyers have an ethical duty of competence. Lawyers must understand AI’s limitations and potential biases, if possible, and ensure appropriate supervision of AI systems. Lawyers must further ensure that the use of AI does not lead to the unauthorised practice of law.
  • Confidentiality: Lawyers must take measures to protect client confidentiality and ensure the security of sensitive information handled by AI systems.
  • Bias and fairness: AI algorithms can be susceptible to biases if not properly designed and trained. Lawyers need to be vigilant in identifying and mitigating any biases resulting from AI use.
  • Transparency: AI algorithms operate as black boxes. Lawyers have an ethical duty to be able to explain their use of AI systems to provide legal advice and make judgement calls.
  • Billing practices: Because AI can enhance efficiency and accuracy, lawyers may be inclined to take on more clients than practicable, inflate billable hours, or generate unjustified charges. Lawyers should inform clients about how AI contributes to the services rendered and associated costs.

If a self-taught algorithm makes an error, who bears the responsibility? In cases where multiple individuals contribute to the design of a self-teaching algorithm, when, where, and to whom does liability attach? Can liability eventually detach? AI technology may give rise to new theories of liability throughout the supply chain, from programmers/manufacturers to end users. The following fundamental liability theories are applicable in the AI context:

  • Product liability: Businesses that sell or provide AI products and services may face product liability claims when harm occurs due to defects or failures in the AI system. For example, if the AI of an autonomous vehicle malfunctions, multiple contributing manufacturers could be held liable.
  • Negligence/personal injury/malpractice: Liability can arise if businesses fail to exercise reasonable care in the development or deployment of AI. For instance, if an AI-powered medical device administers an incorrect dosage, doctors, hospitals and manufacturers may all be liable for negligence or malpractice. The same applies to licensed professions that use AI in their practice.
  • Privacy and data breaches: Businesses utilising AI must adhere to privacy laws and protect the data they handle. If an AI system’s data management practices lead to a data breach, legal consequences may arise throughout the supply chain and data custody. Liability can extend to various parties involved.
  • Discrimination: AI systems can exhibit biases (refer to 12.1 Algorithmic Bias). Companies involved in the design and utilisation of AI-powered tools may face discrimination claims if the tools discriminate against individuals, such as in the case of biased job applicant selection.

The determination of when and how liability attaches and detaches in the AI context is still evolving and largely unwritten. Insurance is likely to play a role in mitigating AI-related liabilities. Liability insurance policies specifically covering AI risks can offer financial protection, subject to policy terms and exclusions. Additionally, indemnification clauses may help assign responsibility for AI-related harm or damages within the supply chain based on each party’s involvement and control over the AI system.

The AI regulatory landscape at the federal level remains unclear, including with respect to the imposition and allocation of liability. To date, the US Federal Government has not proposed or considered any US equivalent of the EU’s AI Act, nor has it set forth any specific policy rationale for its expected AI law or regulation (see 2 Legislation and Directives; 5 AI Regulatory Regimes; 6 Proposed Legislation and Regulations; and 7 Standard Setting Bodies).

Use of AI has underlying risks for all potential users, including:

  • Privacy: Are the privacy rights of consumers and citizens protected through adherence to applicable data privacy regulations?
  • Security: Is the AI system and data protected against cybersecurity vulnerabilities and risks? Is the model protected so it cannot be manipulated to produce unintended results?
  • Transparency and explainability: Can the programmer and/or entity selling the AI explain how the AI model works and the methodology it uses?
  • Third-party risks: Is the entity purchasing or using the AI ensuring that third-party vendors and partners are held to the same standards as the entity that will be using the AI? Is there a due diligence and software selection process for private parties and governments?
  • Fairness: Are AI applications fair and unbiased toward all segments of customers or citizens?

Critics have called attention to bias in AI, particularly in facial recognition, which has historically had a harder time distinguishing between people with non-white skin, leading to higher rates of error, mistaken identity, and unlawful arrest and detention of individuals. While this problem can potentially be addressed with better training data that more accurately represents the population on which the AI will be used, that is not always the case. For example, the use of AI by law enforcement is criticised as being trained on past policing data, such that its predictive power will work best on neighbourhoods and segments of society that are already being policed. Not only does this run the risk of missing crime that happens elsewhere, on which the AI has not been trained, but it also amplifies the historical inequities of predicting crime as more likely to happen in over-policed neighbourhoods and segments of society, and can therefore justify police continuing to direct resources there if they discover evidence of crime.

Users of AI must understand that just because its predictions and outcomes are derived from “maths” does not mean that they are fair, equitable, unbiased, or even correct. Governments have begun to think about and pass regulation in this area, with New York City passing regulations governing the use of AI in automated employment decisions, requiring such AI systems to be subject to independent bias audits.

AI processing of the personal data of individuals poses risks of data persistence and unauthorised data repurposing, but more fundamental is the question of consent and whether individuals have consented to their personal data being fed into and analysed by an AI system. For years, Facebook and other social media companies have been able to serve up targeted advertisements based on user preferences and usage; this has also exposed the risk of Facebook revealing information about individuals that it had deduced from “likes”, “follows” and the like, but which the individuals did not consider public, such as sexual orientation or disease status. More recently, TikTok is said to pose the same threats to user privacy, not only by constructing a profile of its users based on their habits but also through the ways in which the application interacts with others on a user’s phone to collect additional data, all of which is held by a foreign company.

As individuals increasingly turn to wearable technology and other applications to assist with health issues, or to self-diagnose or self-manage conditions, privacy risks increase with the sharing of medical data with entities that are not otherwise subject to HIPAA or medical privacy laws. AI can also assist in the re-identification of data that has supposedly been de-identified.

Additional challenges to privacy include the potential inability of individuals to exercise rights granted to them under data privacy laws, including data access, correction and deletion. If someone’s personal information has been co-mingled and made a part of an algorithm, is it possible to correct it or delete it without affecting the algorithm? The FTC found one solution to this problem, which was to force a company to destroy an algorithm it had trained on inappropriately collected data.

Regarding security, traditional cybersecurity issues persist, although AI systems have potentially made the entities that purchase and use them repositories of far more personal data, and far more inferred information about their customers or constituents, than they are otherwise accustomed to. Without strong oversight, policies and procedures, this data could be misused or vulnerable to compromise.

The use and development of AI in healthcare poses unique challenges to companies that have ongoing obligations to safeguard protected health information, personally identifiable information and other sensitive information.

AI processes often require enormous amounts of data. As a result, it is inevitable that using AI may implicate the Health Insurance Portability and Accountability Act (HIPAA) and state-level privacy and security laws and regulations with respect to such data, which may need to be de-identified. AI systems can be used in the context of healthcare operations and administration, predominantly for the reduction of costs. Under HIPAA, healthcare providers may use third-party organisations (known as Business Associates) to analyse their data relating to healthcare operations and administration to increase operational and administrative efficiencies. However, healthcare providers must ensure that their Business Associates comply with HIPAA, often through contractual obligations, in using their data and in applying AI systems to that data.

In the context of healthcare services and research, AI systems may improve the detection, diagnosis and treatment of health conditions. However, organisations using AI systems or third-party AI vendors must comply with HIPAA when the data is used to train the AI system or subjected to AI processes, and must ensure that there are no discriminatory effects. A growing concern regarding the use of AI in the healthcare research setting is ensuring the integrity and diversity of the data fed into AI systems.

Generally, the US federal government does not have the level of regulatory rules that the EU or other jurisdictions such as Canada or the UK have. However, concerns regarding automated decision-making in AI-based facial recognition, employment and profiling applications have grown over the years and have led US state legislatures to ban the use of facial recognition systems, to introduce legislative and regulatory requirements for employers using AI systems, and to impose disclosure and assessment obligations on businesses using AI systems.

Illinois enacted its Biometric Information Privacy Act (BIPA) in 2008, which prohibits the unlawful collection and storing of biometric information. Biometric information includes retina scans, iris scans, fingerprints, palm prints, voice recognition, facial geometry recognition, DNA recognition, gait recognition and even scent recognition. Negligent violations of the BIPA result in a USD1,000 penalty, while wilful violations result in a USD5,000 penalty. Further, in 2019, Illinois enacted the Artificial Intelligence Video Interview Act, which requires employers to disclose to candidates if AI will or may be used to analyse the candidate’s interview, to explain how the AI will be used and to obtain the candidate’s consent.

In 2019, California’s AB 1215 placed a three-year moratorium on any use by law enforcement of biometric information collected by an officer’s body camera. The cities of Berkeley, CA and San Francisco, CA banned all government use of facial recognition technology, although San Francisco established an approval process for any future uses. In July 2021, New York passed a state-level two-year moratorium on the use of facial recognition in schools.

These bans were adopted due to concerns relating to privacy and to the inaccuracy of the automated decision-making of AI-based facial recognition technology.

Illinois has also enacted the Artificial Intelligence Video Interview Act (the Act), which took effect in 2020 and was amended effective 2022, establishing the parameters for employers using AI in their hiring process.

For more examples and discussion of US state laws and regulations governing the use of AI and automated decision-making that may involve the use of biometric data or other sensitive personal data, see 3.3 US State Law and 12.3 Facial Recognition and Biometrics.

The FTC has clarified that it will use its authority under Section 5 of the FTC Act to prevent “unfair or deceptive” business practices pertaining to a business’s use of AI systems that impact consumers. Pursuant to the AI regulatory focus and guidelines of the FTC, businesses that use chatbots or other similar AI technologies to interact with consumers must be transparent about their use of such systems. For instance, if a chatbot or other AI system is designed to recommend to consumers certain services or products with which the business has a commercial relationship, that business must inform the consumers of its commercial relationship with the recommended products or services.

An example of state legislation that similarly reflects the FTC’s regulatory policy on chatbots is California’s SB 1001 – the Bolstering Online Transparency Act (BOT Act). The BOT Act was enacted in 2018 and took effect in July 2019. It prohibits a person or entity from using a “bot” to communicate or interact online with a person in California to incentivise a sale or transaction of goods or services, or to influence a vote in an election, without disclosing that the communication is via a bot. The BOT Act defines a “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.”

Although California is the only state to pass such a law, it may be indicative of the types of regulations that will follow from other states or the federal government.

Antitrust and price-setting issues that may arise out of using AI technology include:

  • Collusion risk: AI-powered systems could facilitate collusion among competitors by enabling (tacitly or actively) the dissemination of sensitive information such as pricing strategies. This issue may be exacerbated by the complexity and opacity of AI systems.
  • Predatory pricing: AI algorithms capable of analysing market conditions and competitor pricing strategies may lead to predatory pricing practices aimed at driving rivals out of the market.
  • Unfair market concentration: If a few dominant firms possess superior AI capabilities, they could exploit their market power and engage in anticompetitive practices, potentially leading to market concentration, reduced competition and antitrust enforcement.

AI offers upsides and downsides when it comes to combating climate change. On the one hand, it can enhance our comprehension of climate change and support in devising effective mitigation strategies. On the other hand, it carries inherent risks of bias and perpetuating social inequality, while the resource-intensive nature of AI systems can contribute to climate change and increase greenhouse gas emissions.

AI helps improve understanding of climate change by improving climate modelling to make better predictions and deploy mitigation sooner. It is also being used to assist in the design of lighter and stronger materials for building larger windmills, to plan the paths of image-capturing satellites that contribute to our understanding of climate change, to power robots that collect data in inhospitable or inaccessible terrain, and to boost adaptation and resiliency by helping design infrastructure with fewer climate hazards or a lower impact on the climate.

The uses of AI in an employment/hiring context include:

  • screening employment applications according to criteria;
  • using big data to confirm or investigate employee regulatory compliance;
  • establishing criteria for success in corporate positions;
  • use in training programmes and self-evaluation;
  • comparison studies against other industry studies; and
  • determining tasks to automate, including for robotic process automation.

AI technology may be utilised by employers to evaluate employee performance. Tools include natural language processing (NLP) to analyse written communication, machine learning algorithms to analyse quantitative data and computer vision to assess visual cues and behaviours.
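
By way of illustration only, the short Python sketch below shows the kind of crude keyword-based scoring that an NLP-style screen of written communication might apply. The word lists and scoring rule are hypothetical and are not drawn from any particular vendor’s product; real tools use far more sophisticated language models, and the same bias, privacy and morale concerns noted below apply.

```python
# Hypothetical sketch of a keyword-based screen applied to written communication.
# All word lists and the scoring rule are illustrative assumptions, not a real tool.

POSITIVE = {"resolved", "delivered", "thanks", "completed", "ahead"}
NEGATIVE = {"delayed", "missed", "blocked", "escalate", "overdue"}

def score_message(message: str) -> int:
    """Return a crude score: +1 per positive keyword, -1 per negative keyword."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

if __name__ == "__main__":
    print(score_message("Thanks, the release was delivered ahead of schedule."))  # 3
    print(score_message("The ticket is blocked and the deadline was missed."))    # -2
```

Even this toy example shows why such tools draw scrutiny: the choice of keywords embeds the designer’s assumptions about what “good performance” looks like.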

The benefits of utilising AI to monitor and evaluate employee performance include objectivity, increased efficiency, real-time feedback and scalability. There are also many drawbacks, including an overemphasis on quantitative metrics (and a relative de-emphasis of creativity and interpersonal skills), privacy concerns, and discrimination and bias. In addition, there may be concerns over employee morale due to a sense of being constantly scrutinised, which may lead to decreased job satisfaction.

The use of AI in digital platforms in the US, such as those utilised by car services and food delivery services, is governed by the existing state comprehensive data privacy laws and may be subject to the FTC’s regulatory scrutiny pursuant to its authority under Section 5 of the FTC Act. Specifically, the collection and processing (or use) of personal data by digital platforms must comply with the requirements under each of the applicable state comprehensive data privacy laws. For instance, if a digital platform is available to US consumers across all states, the digital platform company must evaluate whether it is subject to the applicable state privacy laws and prepare for compliance under each law. Typically, this will require digital platform companies to conduct a dataflow assessment to determine which laws apply to their platform(s). In the context of federal regulations, digital platform companies should also ensure that their privacy notices, practices and procedures, and online statements generally, are not “unfair or deceptive”, in order to avoid the FTC’s scrutiny. See 3.3 US State Law and 5.1 Key Regulatory Agencies.

Companies that use AI systems in their hiring processes must also abide by existing state legislative and regulatory frameworks. Currently, Illinois and New York City have laws that require employers using AI systems in hiring processes to make certain disclosures to job applicants. In addition, other states such as New Jersey have introduced bills regulating the use of automated employment decision tools.

Please see the information in 1.1 General Legal Background Framework.

Please see the information in 1.1 General Legal Background Framework.

On 16 March 2023, the Copyright Office issued a statement on its practices for examining and registering works involving AI-generated material. It affirmed the established policy that copyright protects only human-created content.

Regarding works submitted for registration that combine human authorship with AI-generated material, the Copyright Office stated it will evaluate, case by case, whether the AI contributions result from mechanical reproduction or the author’s original mental conception. The Office compared AI to other technological tools, like cameras or image editing software, emphasising the importance of assessing the degree of human creative control and contribution to traditional elements of authorship.

Similarly, recognising the growing role of AI in innovation, the United States Patent and Trademark Office (USPTO) sought to clarify its stance on AI-enabled inventions. In early 2023, the USPTO solicited public input through a Request for Comments Regarding Artificial Intelligence and Inventorship. The submission period closed on 15 May 2023, and the outcome is pending.

The application of trade secret law and similar intellectual property rights can play a crucial role in the protection of AI technologies and data. The law maintains a very broad definition of a trade secret, and in the context of AI, trade secrets can include algorithms, models, training data, proprietary techniques and any other valuable knowledge related to AI technologies that are kept secret.

Contractual agreements with employees, contractors and third parties involved in the development or usage of AI can help safeguard AI-related trade secrets and maintain their confidentiality. It is essential to include robust intellectual property and non-disclosure provisions in licensing contracts, technology transfer agreements, joint development agreements, or any other agreements entered into by the owner of AI technologies and data.

The scope of intellectual property protection for works of art and authorship generated by AI remains a subject of ongoing discussion. The key considerations revolve around questions of authorship, ownership and the legal framework surrounding AI-generated works and training or other input data. While AI-generated works typically do not qualify for copyright protection, recent efforts by the Copyright Office and USPTO demonstrate notable progress and an active commitment to tackling the legal challenges and adapting existing intellectual property laws to encompass AI-generated works.

The discourse surrounding the creation of works and products utilising OpenAI’s tools and other generative AI systems has been a matter of significant interest and contention.

Microsoft, GitHub and OpenAI are facing a proposed class action lawsuit claiming that their AI-powered coding assistant, GitHub Copilot, copies code from public repositories without crediting the original creators (see 4.1 Judicial Decisions). Similarly, in April 2023, a song believed to have been created by Drake and The Weeknd emerged, but it was subsequently disclosed that the song was AI-generated by inputting the artists’ discographies into an AI system.

There remains an ongoing and unresolved debate regarding potential copyright infringement through machine learning and the input of copyrighted material to train AI systems. Similarly, when the output of AI bears a striking resemblance to one or more copyrighted materials from its training dataset, it raises concerns about the exclusive right of the copyright holder to create derivative works. Meanwhile, the doctrine of fair use could also be a factor of consideration.

To date, there has been no direct legal precedent in the United States concerning the utilisation of copyrighted materials in machine learning and the copyright implications that arise from AI, but multiple suits are pending (see 4.1 Judicial Decisions).

In-house counsel needs to understand:

  • Data privacy and security concerns – be mindful of the data the organisation is collecting and the oversight and data governance structure the organisation has in place.
  • Wrong and fictitious answers or output, whether the by-product of the AI tool itself, or a result of the system being infiltrated or compromised to manipulate data and skew the results.
  • The vendor contract governing the use of the AI application if it was developed outside the organisation to ensure it complies with applicable law and covers relevant issues, including: use of company data by vendor; representations or warranties from the vendor regarding the accuracy of the AI system or its freedom from bias; whether the AI tool is subject to independent audits; the privacy and security of personal and company data input into the AI system; the parties’ ownership rights with respect to input and output data.
  • Protecting company IP and ownership rights in both input data and AI-generated output that incorporates the company’s data, as applicable, including with respect to employee use of AI that may lead to loss of protection.
  • The use of AI in automated employment decision-making and relevant regulations.
  • Remember that existing laws still apply, including anti-discrimination and unfair and deceptive trade practices laws, which may be applied to the company’s use of AI.

Board of directors’ activities include:

  • determining the proper use of AI in the company business (in consultation with management);
  • mandating identification of risks and mitigation measures;
  • mandating that the AI developments are designed to meet a specific goal and create tools to measure, in a non-biased way, the metrics by which the goal is met;
  • mandating that AI achieves economic savings versus traditional tools for approaching a problem – ie, that the cost it adds to develop and utilise the AI application is more than offset by savings from allowing the abandonment of traditional tools;
  • mandating regulatory and legal review of AI compliance with established laws and regulations and ordering a report of the same to be presented to the board; and
  • creating processes to continually assess the above mandates so that the board is regularly apprised of the achievement (or non-achievement) of the above principles.
Moses & Singer LLP

The Chrysler Building
405 Lexington Avenue
New York
New York 10174
USA

+1 212 554 7800

+1 212 554 7700

Wtanenbaum@mosessinger.com www.mosessinger.com
