Contributed By Moses & Singer LLP
Aside from sector-specific regulatory schemes, the treatment of AI continues to evolve under the distinctive requirements of general areas of US law, including:
The key industry applications of AI and machine learning are outlined below.
Healthcare
Financial Services
Aerospace and Defence
Emerging Technology
Overview of the US Federal Landscape of AI Regulations
While Europe’s AI regulatory landscape is beginning to take shape, most notably with the EU Artificial Intelligence (AI) Act, which is expected to pass later in 2023, the US AI regulatory landscape at the federal level remains unclear. To date, the US federal government has not proposed or considered any US equivalent of the EU’s AI Act, nor has it set forth any specific policy rationale for its expected AI law or regulation.
The proposed US federal data privacy bill – the American Data Privacy and Protection Act (ADPPA) – sets out some rules for AI and automated decision-making tools. The ADPPA includes risk assessment obligations for covered businesses and several other algorithmic governance obligations that are generally similar to those under existing US state laws governing AI systems (see 3.3 US State Law).
There is no applicable information in this jurisdiction.
US states have been actively proposing and enacting comprehensive data privacy laws that regulate the use of automated decision-making tools with “legal or other significant effects” for individuals, and/or AI-specific laws that regulate how AI systems can be used in the context of employment. Generally, existing and proposed state comprehensive data privacy laws (see examples below) grant consumers the right to opt out of “profiling” or processing activities that use automated decision-making (ADM) techniques, which often use algorithms to analyse data, and require covered entities to make certain disclosures to consumers and/or conduct data protection impact assessments (DPIAs). Examples of states taking action to regulate AI applications under their comprehensive data privacy laws include the following.
California
California was the first US state to enact a comprehensive consumer data privacy law, the California Consumer Privacy Act (CCPA), which was amended by the California Privacy Rights Act (CPRA) effective 1 January 2023.
On 30 January 2023, the California legislature introduced Assembly Bill 331 (AB 331), which aims to regulate the use of automated decision tools (ADTs).
If passed, AB 331 would require both a deployer – an entity that uses ADTs to make certain decisions of legal significance – and the ADT developer to perform an impact assessment for any ADT used. The impact assessment would have to include, among other things, a statement of the ADT’s purpose and its intended benefits, uses and deployment contexts. Both the deployer and the ADT developer would have to provide the impact assessment to the California Civil Rights Department within 60 days of its completion, on or before 1 January 2025, and annually thereafter. AB 331 would also grant California residents the right to opt out of the use of an ADT and a private right of action for violations of the bill, and would prohibit a deployer from using ADTs in a manner that contributes to algorithmic discrimination.
Colorado
On 7 July 2021, Colorado enacted the Colorado Privacy Act (CPA), the state’s comprehensive consumer data privacy law, which takes effect on 1 July 2023.
Existing or proposed state laws regulating the use of AI systems in the context of employment require covered employers to conduct audits of their AI systems for any discriminatory impacts that may be “harmful” to job applicants, to provide certain statutorily required notices to applicants about their use of AI systems in the hiring process, and to permit applicants to exercise certain rights granted under the laws. For example:
Illinois
Illinois enacted the Artificial Intelligence Video Interview Act (the Act) in 2019, effective January 2020, establishing the parameters for employers using AI in their hiring process. The Act was amended effective 1 January 2022 to add a reporting requirement for employers who use video-recorded interviews. The Act imposes notice, consent, confidentiality and data destruction responsibilities on employers who use AI technology to evaluate job candidates in Illinois. Specifically, a covered employer must notify each applicant before the interview that an AI system may be used to analyse the interview.
New York
In December 2021, New York City passed the first law in the US (albeit at the municipal level) – Local Law 144 – mandating that employers conduct bias audits of AI-enabled tools used for employment decisions. The law took effect on 1 January 2023 and imposes notice and reporting obligations on NYC employers. Specifically, Local Law 144 requires employers who use automated employment decision tools (AEDTs) to, among other things, conduct a bias audit (by an independent auditor) within one year of the use of the AEDT.
Federal
Federal court cases in the US have interpreted the US Patent Act to require a human inventor for an invention to be eligible for patent protection, because the Act defines an “inventor” as a “natural person”. The United States Supreme Court declined to consider the lower court case that was appealed to it, leaving the ruling below in place and setting the standard for AI inventorship in the United States.
The Supreme Court also issued two decisions that sidestepped the question of whether a technology company’s machine learning algorithm could subject the company to liability for the algorithm’s output notwithstanding Section 230 of the Communications Decency Act. The Court instead decided those cases on alternative grounds, without reaching the issue of liability under Section 230.
In lower federal courts, there are also pending cases regarding intellectual property and AI, including:
Cases addressing AI applications have thus far generally not discussed definitions of AI, instead applying existing statutes to reach conclusions before any analysis of the AI is needed – especially in the patent and copyright cases, where human inventorship or authorship is required under the definitions of the Patent Act and Copyright Act.
The United States Department of Commerce, acting pursuant to the NAIIA through NIST and the National Artificial Intelligence Advisory Committee (NAIAC), has been tasked with developing a voluntary risk management framework for trustworthy AI systems and with advising the President and other federal agencies on key issues concerning AI.
The US Federal Trade Commission (FTC), acting pursuant to Section 5 of the FTC Act, as well as the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), seeks to investigate the use of biased algorithms and to create compliance standards for companies to follow. In the meantime, the FTC has been issuing AI guidelines addressing practices it regulates as “unfair or deceptive”.
The US Food and Drug Administration (FDA) is responsible for regulating medical devices in the USA. AI companies developing digital health products should recognise how recent regulatory changes may affect them and that the FDA is engaging with industry to further refine its oversight approach. The FTC has also issued recent guidance on AI and ML, and has clarified through enforcement actions and press releases that AI may pose issues that run afoul of the FTC Act’s prohibition against unfair and deceptive trade practices.
The National Security Commission on Artificial Intelligence (NSCAI) and the Government Accountability Office (GAO) advise the government to take certain actions at the domestic level to protect the privacy and civil rights of US citizens in the government’s deployment of AI.
Each of the US federal agencies that regulate AI sets forth a different definition of “artificial intelligence”, “machine learning” or “automated decision-making”, as explored below.
NAIIA
The NAIIA defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” AI systems use machine and human-based inputs to:
FTC
The US Federal Trade Commission (FTC) recognises that “artificial intelligence” is an ambiguous term in the context of the US AI regulatory landscape. Nonetheless, the FTC generally uses “artificial intelligence” to refer to a variety of technological tools and techniques that use computation to perform tasks such as predictions, recommendations and decisions.
FDA
The US FDA has broadly defined artificial intelligence as the science and engineering of making intelligent machines, especially intelligent computer programs, and recognises that AI can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on “if-then” statements, and machine learning.
Machine learning is an artificial intelligence technique that can be used to design and train software algorithms to learn from and act on data. Software developers can use machine learning to create an algorithm that is “locked” (so that its function does not change), or “adaptive” (so its behaviour can change over time based on new data). Some real-world examples of artificial intelligence and machine learning technologies include:
NIST
On 26 January 2023, NIST released Version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF 1.0). The AI RMF 1.0 was developed in collaboration with the private and public sectors to incorporate trustworthiness considerations into the design, development, use and evaluation of AI systems, products and services. It is a two-part guide for managing the risks associated with the use of AI systems. Part 1 discusses how organisations can frame the risks associated with AI systems and describes the intended audience. Part 2 sets forth the “core” of the framework and describes four specific functions to help organisations address the risks of AI:
The AI RMF 1.0 recommends that organisations and boards follow a structured approach to managing AI-related risks, which includes five components:
NAIAC
The NAIAC is focused on advising the President and the government on topics related to the NAIIA, including the progress of implementing the NAIIA and the current state of the USA’s competitiveness in AI.
FTC
Over the last three years, the FTC has issued several non-binding AI guidelines to help organisations using AI systems avoid enforcement scrutiny pursuant to its authority under Section 5 of the FTC Act, including “Keep Your AI Claims in Check” on 27 February 2023. The FTC’s AI guidelines demonstrate its focus on the use of AI systems and suggest the following “best practices”:
FDA
The US Food and Drug Administration (FDA) issued the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan from the Center for Devices and Radiological Health’s Digital Health Center of Excellence. Traditionally, the FDA reviews medical devices through an appropriate pre-market pathway, such as pre-market clearance (510(k)), De Novo classification, or pre-market approval. The FDA may also review and clear modifications to medical devices, including software as a medical device, depending on the significance of the modification or the risk it poses to patients.
US federal agencies have only recently increased their regulatory scrutiny of the use of AI systems. To date, the FTC, often with the US DOJ’s cooperation, has been the most active in AI-related enforcement activities. These enforcement actions, and related settlements, can result in regulatory penalties and/or the forced deletion of the personal data collected and used to build algorithms and AI or machine learning models, as well as the destruction of the algorithms and AI models themselves.
For example, on 11 January 2021, the FTC reported that it had settled with Everalbum over its “unfair or deceptive” use of facial recognition technology. Everalbum allowed its users to store and organise their photos and videos by uploading them to its cloud-based servers, and used those photos and videos to develop facial recognition technologies that it marketed to certain customers under a differently named platform. Under the settlement, the company was required, among other things, to destroy all algorithms and “models” it developed using its users’ photos and videos.
In 2019, the FTC announced its settlement with Facebook, Inc. (now Meta) in the matter of Cambridge Analytica, LLC. Under the settlement order, a USD5 billion penalty was imposed on Meta.
The proposed and existing legislation and regulations relating to AI seek to set regulatory frameworks that:
Given these regulatory priorities under proposed and existing legislation and regulations (state or federal), impacted organisations must review and audit their use of AI systems, and implement internal safeguards and oversight policies and procedures, to ensure the transparency and integrity of their AI systems in compliance with applicable AI laws and regulations. For more details, see 3 Legislation and Directives and 5 AI Regulatory Regimes.
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have created two new foundational standards for AI:
The Institute of Electrical and Electronics Engineers (IEEE) Standards Association, through its Artificial Intelligence Systems Committee, creates standards that prioritise ethical considerations in the development and use of AI.
See 5.3 Regulatory Objectives for discussion of NIST’s AI Risk Management Framework.
So far, international standard-setting bodies have not had a great impact on business in the US. While the ISO/IEC’s goal in creating common terminology and concepts may be to foster more harmonious AI regulation internationally, it remains to be seen whether and how standards from organisations in other jurisdictions, such as the European Telecommunications Standards Institute (ETSI) and the UK’s AI Standards Hub, will in fact be harmonious, and if and when they can be squared with the currently mismatched pace of AI regulatory development between the US and other jurisdictions.
Governments are using AI applications and technology in many of the same ways as private industry, particularly facial recognition. The government’s use of AI also extends far beyond private industry’s, given the government’s role in law enforcement, prosecution and sentencing; the administration of public benefits; regulatory enforcement; and providing information to, and interacting with, citizens. Examples of government use of AI include:
While governments should carefully consider and address the potential risks of implementing AI technology before doing so, they often face obstacles to adopting new technology generally, including in the AI arena, such as a lack of specialised AI skills and management, budget priorities and constraints, legacy cultures and established practices and processes, and the selection of trustworthy AI that meets the high standards placed upon them by the public. The inherent risks of government use of AI are the same as those discussed in 12 General Technology-Driven AI Issues.
There have been judicial decisions pushing back on the government’s use of AI, including the use of facial recognition, predictive tools and recidivism algorithms in law enforcement and criminal justice, and the use of AI to determine public benefits eligibility. But courts have also upheld the government’s use of AI for risk scoring, sentencing and parole. Even in cases where courts have ultimately upheld the government’s use of AI, multiple decisions have required the government to divulge the source code of the AI or machine learning system to the citizens challenging the government’s AI-supported decision.
National security concerns include keeping the USA a leader in the development and use of AI, retaining military superiority, and restricting foreign countries from applying AI to misuse the data of US citizens. To address these considerations, the NSCAI (see 5.1 Key Regulatory Agencies and 5.3 Regulatory Objectives) proposed that the USA increase export controls on EUV and ArF lithography equipment bound for China, and that the Treasury be granted the authority to mandate CFIUS filings for non-controlling investments in AI from China, Russia and other competitor nations.
Another consideration is protecting against the collection of US citizens’ data by foreign AI. For example, former President Trump banned WeChat and TikTok in 2020, though those bans were not upheld. Several states have since proposed or enacted TikTok bans of their own, however, and President Biden signed orders requiring the Department of Commerce to launch national security reviews of any apps with links to foreign adversaries.
In addition:
Generative AI, which is a subset of AI that generates original content by learning patterns from existing data, raises a variety of issues for lawyers. These include:
Scientists, academics and policymakers are working on developing ethical guidelines emphasising transparency in the deployment of generative AI. These include the Blueprint for an AI Bill of Rights promulgated by the White House Office of Science and Technology Policy, as well as various state-level directives (see 5 AI Regulatory Regimes and 6 Proposed Legislation and Regulations).
AI has the potential to radically alter the practice of law. If leveraged properly, it can empower lawyers to focus more on high-value tasks and deliver better client service.
In the litigation context, AI is being used and developed to perform the following tasks:
AI has the following capabilities in the non-litigation context:
The use of AI in the practice of law creates a bevy of novel ethical considerations. These include:
If a self-taught algorithm makes an error, who bears the responsibility? In cases where multiple individuals contribute to the design of a self-teaching algorithm, when, where, and to whom does liability attach? Can liability eventually detach? AI technology may give rise to new theories of liability throughout the supply chain, from programmers/manufacturers to end users. The following fundamental liability theories are applicable in the AI context:
The determination of when and how liability attaches and detaches in the AI context is still evolving and largely unwritten. Insurance is likely to play a role in mitigating AI-related liabilities. Liability insurance policies specifically covering AI risks can offer financial protection, subject to policy terms and exclusions. Additionally, indemnification clauses may help assign responsibility for AI-related harm or damages within the supply chain based on each party’s involvement and control over the AI system.
The AI regulatory landscape at the federal level remains unclear, including with respect to the imposition and allocation of liability. To date, the US federal government has not proposed or considered any US equivalent of the EU’s AI Act, nor has it set forth any specific policy rationale for its expected AI law or regulation (see 2 Legislation and Directives; 5 AI Regulatory Regimes; 6 Proposed Legislation and Regulations; and 7 Standard Setting Bodies).
Use of AI has underlying risks for all potential users, including:
Critics have called attention to bias in AI, particularly in facial recognition, which has historically had a harder time distinguishing between people with non-white skin, leading to higher rates of error, mistaken identity, and unlawful arrest and detention. While this problem can potentially be addressed with better training data that more accurately represents the population on which the AI will be used, that is not always the case. For example, law enforcement’s use of AI is criticised as being trained on past policing data, such that its predictive power works best in neighbourhoods and segments of society that are already being policed. Not only does this risk missing crime that happens elsewhere, where the AI is not trained, it also amplifies the historical inequities of predicting crime as more likely in over-policed neighbourhoods and segments of society, and can thereby justify police continuing to direct resources there when they discover evidence of crime.
Users of AI must understand that just because its predictions and outcomes are derived from “maths” does not mean they are fair, equitable, unbiased or even correct. Governments have begun to consider and pass regulation in this area, with New York City adopting rules governing the use of AI in automated employment decisions and requiring such AI systems to undergo independent bias audits.
AI processing of individuals’ personal data poses risks of data persistence and unauthorised data repurposing, but more fundamental is the question of consent: have individuals consented to their personal data being fed into and analysed by an AI system? For years, Facebook and other social media companies have been able to serve up targeted advertisements based on user preferences and usage, which has also exposed the risk of Facebook revealing information about individuals that it had deduced – such as sexual orientation or disease status, based upon “likes”, “follows”, etc – but which the individual did not consider public. More recently, TikTok is said to pose the same threats to user privacy, not only by constructing a profile of its users based on their habits, but also through the ways in which the application interacts with others on a user’s phone to collect additional data, all of which is held by a foreign company.
As individuals increasingly turn to wearable technology and other applications to assist with health issues, or to self-diagnose or self-manage conditions, privacy risks increase with the sharing of medical data with entities that are not otherwise subject to HIPAA or medical privacy laws. AI can also assist in the re-identification of data that has supposedly been de-identified.
Additional challenges to privacy include the potential inability of individuals to exercise rights granted to them under data privacy laws, including data access, correction and deletion. If someone’s personal information has been co-mingled into and made part of an algorithm, is it possible to correct or delete it without affecting the algorithm? The FTC found one solution to this problem: forcing a company to destroy an algorithm it had trained on inappropriately collected data.
Regarding security, traditional cybersecurity issues persist, although AI systems have potentially turned the entities that purchase and use them into repositories of far more personal data, and far more inferred information about their customers or constituents, than they are otherwise accustomed to handling. Without strong oversight, policies and procedures, this data could be misused or vulnerable to compromise.
The use and development of AI in healthcare poses unique challenges to companies that have ongoing obligations to safeguard protected health information, personally identifiable information and other sensitive information.
AI processes often require enormous amounts of data. As a result, using AI may inevitably implicate the Health Insurance Portability and Accountability Act (HIPAA) and state-level privacy and security laws and regulations with respect to such data, which may need to be de-identified. AI systems can be used in healthcare operations and administration, predominantly to reduce costs. With respect to HIPAA, healthcare providers may use third-party organisations (known as Business Associates) to analyse data relating to healthcare operations and administration to increase operational and administrative efficiencies. However, healthcare providers must ensure that their Business Associates comply with HIPAA – often through contractual obligations – in using their data and in applying AI systems to that data.
In the context of healthcare services and research, AI systems may improve the detection, diagnosis and treatment of health conditions. However, organisations using AI systems or third-party AI vendors must comply with HIPAA when data is used to train an AI system or subjected to AI processes, and must ensure that there are no discriminatory effects. A growing concern with the use of AI in the healthcare research setting is ensuring the integrity and diversity of the data fed into AI systems.
Generally, the US federal government does not have the level of regulatory rules that the EU or other jurisdictions, such as Canada or the UK, have. However, concerns regarding automated decision-making in AI-based facial recognition, employment and profiling applications have grown over the years, leading US state legislatures to ban the use of facial recognition systems, to introduce legislative and regulatory requirements for employers using AI systems, and to impose disclosure and assessment obligations on businesses using AI systems.
Illinois enacted its Biometric Information Privacy Act (BIPA) in 2008, which regulates the collection and storage of biometric information. Biometric information includes retina scans, iris scans, fingerprints, palm prints, voice recognition, facial geometry recognition, DNA recognition, gait recognition and even scent recognition. Negligent violations of the BIPA carry a USD1,000 penalty, while wilful violations carry a USD5,000 penalty. Further, in 2019, Illinois enacted the Artificial Intelligence Video Interview Act, which requires employers to disclose to candidates whether AI will or may be used to analyse the candidate’s interview, to explain how the AI will be used, and to obtain the candidate’s consent.
In 2019, California’s AB 1215 placed a three-year moratorium on any law enforcement use of biometric information collected by officer cameras. The cities of Berkeley and San Francisco, California banned all government use of facial recognition technology, although San Francisco established an approval process for future uses. In July 2021, New York passed a state-level two-year moratorium on the use of facial recognition in schools.
These bans were adopted due to concerns about privacy and about the inaccuracy of automated decision-making by AI-based facial recognition technology.
In 2022, Illinois amended the Artificial Intelligence Video Interview Act (the Act), which establishes the parameters for employers using AI in their hiring process (see 3.3 US State Law).
For more examples and discussion of US state laws and regulations governing the use of AI and automated decision-making that may involve the use of biometric data or other sensitive personal data, see 3.3 US State Law and 12.3 Facial Recognition and Biometrics.
The FTC has clarified that it will use its authority under Section 5 of the FTC Act to prevent “unfair or deceptive” business practices pertaining to a business’s use of AI systems that impact consumers. Pursuant to the FTC’s AI regulatory focus and guidelines, businesses that use chatbots or other similar AI technologies to interact with consumers must be transparent about their use of such systems. For instance, if a chatbot or other AI system is designed to recommend to consumers certain services or products with which the business has a commercial relationship, the business must inform consumers of its commercial relationship with the recommended products or services.
An example of state legislation that similarly reflects the FTC’s regulatory policy on chatbots is California’s SB 1001 – the Bolstering Online Transparency Act (BOT Act). The BOT Act was introduced in 2018 and took effect in July 2019. It prohibits a person or entity from using a “bot” to communicate or interact online with a person in California to incentivise a sale or transaction of goods or services, or to influence a vote in an election, without disclosing that the communication is via a bot. The BOT Act defines a “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.”
Although California is the only state to pass such a law, it may be indicative of the types of regulations that will follow from other states or the federal government.
Antitrust and price-setting issues that may arise out of using AI technology include:
AI offers both upsides and downsides when it comes to combating climate change. On the one hand, it can enhance our comprehension of climate change and support the devising of effective mitigation strategies. On the other hand, it carries inherent risks of bias and of perpetuating social inequality, while the resource-intensive nature of AI systems can itself contribute to climate change and increase greenhouse gas emissions.
AI helps improve understanding of climate change by improving climate modelling, enabling better predictions and earlier mitigation. It is also being used to assist in the design of lighter and stronger materials for building larger windmills, to plan the paths of image-capturing satellites that contribute to our understanding of climate change, to power robots that collect data in inhospitable or inaccessible terrain, and to boost adaptation and resiliency by helping design infrastructure with fewer climate hazards or lower climate impact.
The uses of AI in an employment/hiring context include:
AI technology may be utilised by employers to evaluate employee performance. Tools include natural language processing (NLP) to analyse written communication, machine learning algorithms to analyse quantitative data and computer vision to assess visual cues and behaviours.
The benefits of utilising AI to monitor and evaluate employee performance include objectivity, increased efficiency, real-time feedback and scalability. There are also many drawbacks, including overemphasis on quantitative metrics (with relative de-emphasis of creativity and interpersonal skills), privacy concerns, and discrimination and bias. In addition, employee morale may suffer from a sense of being constantly scrutinised, which may lead to decreased job satisfaction.
The use of AI in digital platforms in the US, such as those utilised by car services and food delivery services, is governed by the existing state comprehensive data privacy laws and may be subject to the FTC’s regulatory scrutiny pursuant to its authority under Section 5 of the FTC Act. Specifically, the collection and processing (or use) of personal data by digital platforms must comply with the requirements of each applicable state comprehensive data privacy law. For instance, if a digital platform is available to US consumers across all states, the platform company must evaluate which state privacy laws it is subject to and prepare for compliance under each. Typically, this requires a dataflow assessment to determine which laws apply to the platform(s). In the context of federal regulation, digital platform companies should also ensure that their privacy notices, practices and procedures, and online statements generally are not “unfair or deceptive”, to avoid FTC scrutiny. See 3.3 US State Law and 5.1 Key Regulatory Agencies.
Companies that use AI systems in their hiring processes must also abide by existing state legislative and regulatory frameworks. Currently, Illinois and New York City have laws requiring employers that use AI systems in hiring to make certain disclosures to job applicants, and other states, such as New Jersey, have introduced bills regulating the use of automated employment decision tools.
Please see the information in 1.1 General Legal Background Framework.
On 16 March 2023, the Copyright Office issued a statement on its practices for examining and registering works involving AI-generated material. It affirmed the established policy that copyright protects only human-created content.
Regarding works submitted for registration that combine human authorship with AI-generated material, the Copyright Office stated it will evaluate, case by case, whether the AI contributions result from mechanical reproduction or the author’s original mental conception. The Office compared AI to other technological tools, like cameras or image editing software, emphasising the importance of assessing the degree of human creative control and contribution to traditional elements of authorship.
Similarly, recognising the growing role of AI in innovation, the United States Patent and Trademark Office (USPTO) sought to clarify its stance on AI-enabled inventions. In early 2023, the USPTO solicited public input through a Request for Comments Regarding Artificial Intelligence and Inventorship. The submission period closed on 15 May 2023, and the outcome is pending.
The application of trade secret law and similar intellectual property rights can play a crucial role in the protection of AI technologies and data. The law maintains a very broad definition of a trade secret and, in the context of AI, trade secrets can include algorithms, models, training data, proprietary techniques and any other valuable knowledge related to AI technologies that is kept secret.
Contractual agreements with employees, contractors and third parties involved in the development or use of AI can help safeguard AI-related trade secrets and maintain their confidentiality. It is essential to include robust intellectual property and non-disclosure provisions in licensing contracts, technology transfer agreements, joint development agreements and any other agreements entered into by the owner of AI technologies and data.
The scope of intellectual property protection for works of art and authorship generated by AI remains a subject of ongoing discussion. The key considerations revolve around questions of authorship, ownership and the legal framework surrounding AI-generated works and training or other input data. While AI-generated works typically do not qualify for copyright protection, recent efforts by the Copyright Office and USPTO demonstrate notable progress and an active commitment to tackling the legal challenges and adapting existing intellectual property laws to encompass AI-generated works.
The discourse surrounding the creation of works and products utilising OpenAI and other generative AI systems has been a matter of significant interest and contention.
Microsoft, GitHub and OpenAI are facing a proposed class action lawsuit claiming that their AI-powered coding assistant, GitHub Copilot, copies code from public repositories without crediting the original creators (see 4.1 Judicial Decisions). Similarly, in April 2023, a song believed to have been created by Drake and The Weeknd emerged, but it was subsequently disclosed that the song had been AI-generated by inputting the artists’ discographies into an AI system.
There remains an ongoing and unresolved debate regarding potential copyright infringement through machine learning and the input of copyrighted material to train AI systems. Similarly, when the output of AI bears a striking resemblance to one or more copyrighted materials from its training dataset, it raises concerns about the exclusive right of the copyright holder to create derivative works. Meanwhile, the doctrine of fair use could also be a factor of consideration.
To date, there has been no direct legal precedent in the United States concerning the utilisation of copyrighted materials in machine learning and the copyright implications that arise from AI, but multiple suits are pending (see 4.1 Judicial Decisions).
In-house counsel needs to understand:
Board of Director activities include:
The Chrysler Building
405 Lexington Avenue
New York
New York 10174
USA
+1 212 554 7800
+1 212 554 7700
Wtanenbaum@mosessinger.com
www.mosessinger.com