TMT 2024

Last Updated January 15, 2024

USA

Law and Practice

Authors



Kelley Drye & Warren LLP has provided carefully tailored legal counsel to its clients for more than 180 years. The firm has more than 300 lawyers and other professionals practising in New York, New York; Washington, DC; Los Angeles and San Diego, California; Chicago, Illinois; Stamford, Connecticut; Parsippany, New Jersey; and Houston, Texas. The firm’s technology transactions and sourcing practice has handled some of the world’s largest and most complex technology-driven transactions on all sides of the deal spectrum and across a range of industries. The firm’s communications practice assists a wide array of enterprises, ranging from global carriers to start-ups, in meeting their legal and regulatory needs. The firm’s privacy and information security practice works on the leading edge of media, technology and business, helping clients understand and proactively address privacy, information security and compliance issues.

Laws and Regulations

The US legal and regulatory landscape governing the metaverse is rapidly evolving. The laws and regulations implicated by the metaverse are numerous and potentially create a range of legal issues. There are currently no US laws that apply specifically to the metaverse – however, in some situations existing laws apply, while in others new laws and regulations are likely to be developed over time.

Intellectual Property

Intellectual property laws are key to metaverse governance. User-generated digital content and digital assets, such as non-fungible tokens (NFTs), raise novel intellectual property issues including those summarised below.

Copyright

Copyright is a type of intellectual property that protects original works of authorship. Copyright owners have the exclusive rights to reproduce the work, display it, distribute copies of it, and authorise others to exercise those rights. In the metaverse, copyright protects user-generated digital content including avatars, virtual buildings, and digital artwork.

Business and individual users should carefully consider platform-specific terms regarding the use and ownership of copyrighted content. Moreover, content created in the metaverse that is similar to copyrighted content in the physical world may give rise to claims for copyright infringement. For example, an avatar or NFT in the metaverse that is similar to copyrighted work outside the metaverse could trigger a copyright infringement claim.

Trade marks

A trade mark is a word, phrase, logo, design, or slogan that indicates the source of goods and services. Trade mark law protects against unauthorised third-party uses of a mark that would cause a consumer to believe that the trade mark owner was the source of the goods or services, or endorsed or sponsored them, as well as uses that may dilute or tarnish the mark.

Many companies register their brands with the US Patent and Trademark Office (USPTO) for use in connection with virtual offerings as well as those in the physical world. These companies obtain registered trade mark protection for things like virtual goods, retail store services featuring virtual goods, NFTs, and digital tokens.

Patents

A patent for an invention is the grant of a property right to the inventor. Generally, a new patent is valid for 20 years from the date on which the application for the patent is filed in the United States. US patent grants are effective only within the United States, its territories and possessions. A company developing metaverse-related technologies will need to consider whether to seek patent protection and whether its technology might infringe the patents of other parties, in the same way as technology providers outside of the metaverse.

User-Generated Content Litigation

The proliferation of user-generated content creates risks of unauthorised use of third-party trade marks and brand dilution. For example, some metaverse spaces operate as an online economy, allowing users to create their own virtual worlds, to develop intellectual property, to sell branded creations, and/or to build an online business presence to sell their products in the real world. Using another party’s trade marks in these ways can trigger a trade mark infringement claim.

Torts

As with other platforms, tort law governs conduct between users in the metaverse. For example, a business or individual user may be liable under tort law for fraudulent or defamatory statements made on a metaverse platform. Other user-generated content and conduct may also give rise to tort claims.

However, the immersive nature of the metaverse poses novel questions under tort law. For example, US courts have not explored whether and when conduct in the metaverse can constitute tortious assault, conversion, or emotional distress. Most platforms engage in some degree of real-time content moderation, but anecdotal evidence suggests it is insufficient to protect all users.

Tax and Financial Regulations

The purchase and sale of virtual goods trigger tax implications, including sales tax and income tax. NFTs may be subject to US commodities, banking, and securities laws, due to the manner in which these assets are created and exchanged.

Contracts

In the metaverse, contract law applies to agreements between users, such as selling virtual goods or renting virtual property. Businesses entering into agreements in the metaverse need to comply with laws and regulations applicable to contracts in the physical world, including meeting all consumer disclosure requirements.

Data Protection

Data practices relating to the metaverse are subject to generally applicable US privacy and data protection frameworks, primarily the California Consumer Privacy Act and other comprehensive state privacy laws that went into effect in 2023, Section 5 of the Federal Trade Commission Act, and state laws that prohibit unfair or deceptive acts and practices.

As with other platforms, it is important for companies that have a presence in the metaverse to understand the personal data flows involving the company, the platform, and exchanges between them. Key issues to examine include:

  • whether a company collects personal data through its presence in the metaverse;
  • whether the company shares any of this data with the metaverse platform; and
  • whether such collection and sharing are adequately disclosed in privacy notices and covered under existing rights request processes.

Cybersecurity and Data Security

Companies face potential liability for disclosing personal data to vendors or third parties that do not maintain reasonable data security measures. Therefore, to the extent that personal data will be shared with a metaverse platform, it is important to assess the platform’s cybersecurity practices in advance. In addition, if the metaverse supports operational business activities, then the platform’s general cybersecurity measures, including availability guarantees and ability to resist and respond to various forms of cyber-attacks, are important considerations.

Laws and Regulations

Numerous laws and regulations govern the digital economy in the United States, including a variety of laws, regulations and codes of conduct particular to specific industries or to the types of data and users involved. Laws and regulations at the federal, state and local level – and in some instances even laws of foreign jurisdictions – may apply to a participant in the digital economy in the USA. As a general matter, laws and regulations applicable outside of the digital economy will also apply to the establishment and operation of a digital business, in addition to those laws and regulations focused primarily on digital operations and transactions.

Terms & Conditions

A business operating a website or mobile platform will need to carefully consider the terms and policies applicable to the platform and the manner in which they are disclosed. Even if no goods or services are being sold on the platform, the operator will generally reference the terms of use for the platform and link to the applicable policies, including a privacy policy describing the collection, storage, use, and disclosure of personal information and policies related to the provision and use of user-generated content. The legal requirements for these policies can vary significantly depending upon the nature of the platform.

In addition to the terms of use of the platform, the operator will need to require customers to enter into appropriate binding contracts if goods or services are being sold, licensed or otherwise made available on the platform in order to specify the terms associated with the transfer of the goods and services. Courts in the United States have sometimes refused to enforce certain provisions of contracts entered into online (or the entire contract) either because the elements for valid contract formation have not been met or because certain provisions were found to be against public policy. In order to increase the likelihood of a digital contract being enforced, the applicable terms should be prominently displayed. Operators of digital platforms generally should either:

  • require potential customers to affirmatively accept the contract terms (such as by clicking on an “Accept” button clearly referencing the terms and conditions) before proceeding to use the platform; or
  • provide that the contract becomes effective if the user continues to use the platform after the terms and conditions are prominently presented.
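
For illustration, the affirmative-acceptance approach described above is commonly implemented as a “clickwrap” flow: the user cannot proceed until the terms are displayed and an unchecked box or “Accept” button is actively selected, and the acceptance event is recorded. The sketch below is a minimal, hypothetical example of such a flow; the names used (TERMS_URL, AcceptanceRecord, recordAcceptance) are illustrative assumptions, not drawn from any particular platform or statute.

```typescript
// Minimal clickwrap sketch: the transaction cannot proceed until the user
// affirmatively accepts terms that are clearly referenced at the point of action.
// All identifiers here are hypothetical and for illustration only.

const TERMS_URL = "https://example.com/terms"; // hypothetical terms location

interface AcceptanceRecord {
  userId: string;
  termsVersion: string; // ties acceptance to a specific version of the terms
  termsUrl: string;     // where the terms were presented
  acceptedAt: string;   // ISO 8601 timestamp of the acceptance event
  method: "clickwrap";  // affirmative acceptance, not mere continued use
}

function recordAcceptance(userId: string, termsVersion: string, accepted: boolean): AcceptanceRecord {
  if (!accepted) {
    // Block the transaction: no affirmative assent, no contract formation.
    throw new Error("The terms must be accepted before continuing.");
  }
  return {
    userId,
    termsVersion,
    termsUrl: TERMS_URL,
    acceptedAt: new Date().toISOString(),
    method: "clickwrap",
  };
}

// Usage: called when the user clicks "Accept" with the checkbox ticked.
const record = recordAcceptance("user-123", "2024-01-15", true);
console.log(`Accepted terms ${record.termsVersion} at ${record.acceptedAt}`);
```

Retaining a record of which version of the terms was accepted, and when, supports the showing of notice and assent discussed below.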

Requiring the customer to affirmatively accept the contract terms after the terms are presented will decrease the likelihood that a court in the United States finds the contract unenforceable as a result of the customer not having actual or implied notice of the contractual terms or not having agreed to those terms. However, certain applicable statutes or common law principles may still lead a court to deny enforcement of certain provisions, such as arbitration provisions or choice-of-law and forum-selection provisions.

Intellectual Property

The digital economy also implicates intellectual property laws. Companies that offer consumers innovative experiences have to navigate IP issues including branding and trade mark protection, copyright, licences for specific software or technology, patents, trade secrets and know-how for their digital offerings.

Digital commodities such as non-fungible tokens (NFTs) pose unique intellectual property challenges. Recent cases in the Southern District of New York have found defendants liable for minting NFTs that appropriated the likenesses and trade marks of physical goods (Nike, Inc v StockX LLC and Hermès v Mason Rothschild).

Privacy/Data Security/Consumer Protection

Privacy, data security, and consumer protection laws play a key role in regulating commercial practices in the digital economy. The US Federal Trade Commission (FTC) has jurisdiction over consumer protection and competition enforcement across broad areas of the US economy. That jurisdiction is based in part on Section 5 of the FTC Act, which prohibits unfair methods of competition and unfair or deceptive acts and practices. State attorneys general have similar consumer protection authority under their own laws against unfair or deceptive acts and practices.

Over the past few decades, the FTC has used its Section 5 authority to establish standards for the processing of personal data through enforcement actions against specific companies, as well as non-binding guidance and policy documents. Until recently, the FTC limited its rule-making activity to specific industries or practices for which Congress granted clear regulatory authority, such as children’s privacy or the security of personal information that financial institutions handle.

The FTC, however, has indicated that the growing digital economy, coupled with business models based on monetising personal data, may have given rise to prevalent unfair or deceptive data practices. As discussed in 4. Artificial Intelligence, the FTC is now considering developing regulations to govern “commercial surveillance” and data security, which could apply far more broadly than the sector-specific rules mentioned above.

Industry-Specific Laws

Other federal and state regulators play an important role in the legal order surrounding the digital economy. For example, a number of federal laws applicable to entities operating in specific industries apply to the operation of a digital business in those industries, including financial institutions, health care providers and insurers (and their business associates), companies doing business with governmental entities, and educational institutions. See 3. Cloud and Edge Computing for a summary of some of these laws.

Laws and Regulations

Entrusting processes or data to a cloud or other distributed computing environment, such as edge computing, may implicate a variety of laws and regulations in the US depending upon the industry, data, and users involved. Laws and regulations at the federal and state level – as well as laws of foreign jurisdictions – may apply directly to providers of these services operating in the US as well as to their customers. In addition, these offerings often involve providers processing, on behalf of customers (such as controllers of personal data), data that is subject to additional regulation. Those customers' obligations are typically required to be passed through to the providers in the computing contracts.

Sector-Specific Laws and Regulations and Industry Standards

Laws and standards that govern entities operating in specific industries, including financial institutions, healthcare providers and insurers (and their business associates), companies doing business with governmental entities, educational institutions, and telecommunications common carriers, are applicable to cloud and edge computing providers and to the information those providers receive. The following are some of the laws and standards most frequently implicated when such entities move processes and data to the cloud.

Financial institutions

The Gramm-Leach-Bliley Act (GLBA) is a US federal law regulating the treatment of non-public personal information (NPI) by financial institutions, such as banks, financial advisors, and insurance companies.

The GLBA includes provisions on privacy applicable to the collection and disclosure of NPI (the “Privacy Rule”) and security provisions requiring financial institutions to protect NPI (the “Safeguards Rule”). The GLBA applies not only to financial institutions, but may also apply to companies that receive non-public personal information from a financial institution or that perform activities that are financial in nature or incidental to financial activities. Entities subject to the GLBA generally require their providers to agree to contract terms that reflect the applicable obligations under the GLBA.

An entity subject to the GLBA utilising a third-party service provider for processing will need to confirm the selection of a service provider that maintains appropriate policies and safeguards consistent with the GLBA and enter into an appropriate contract.

The Safeguards Rule (and more detailed guidelines for banks, which are not subject to the Safeguards Rule) requires financial institutions to develop and maintain a comprehensive information security programme and to exercise appropriate oversight over service providers, among other requirements. Significant revisions to the Safeguards Rule went fully into effect on 9 June 2023, and a breach notification requirement takes effect on 13 May 2024.

In addition, on 8 February 2023, the US Department of Treasury issued a report on financial institutions’ adoption of cloud services. Generally, the report states that “applicable federal regulatory requirements place responsibility for effective and appropriate management of technology operations and related risks... on financial institutions, regardless of whether any particular activities or operations are outsourced to third parties”. In addition, the report notes that the GLBA requires federal and state regulators to provide guidance, and notes that the US Federal Deposit Insurance Corporation (FDIC), the US Federal Reserve Board (FRB), and the US Office of the Comptroller of the Currency (OCC) issued Interagency Guidelines Establishing Information Security Standards pursuant to their authority under GLBA.

Healthcare

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law addressing the treatment and use of individuals’ protected health information (PHI). HIPAA applies to healthcare providers, health insurance plans and healthcare clearinghouses (“Covered Entities”) and their business associates (and those business associates’ subcontractors) performing certain services involving PHI (“Business Associates”). Under the authority granted by HIPAA, the US Department of Health and Human Services has issued Privacy, Security, and Breach Notification Rules, which together establish requirements for the use, disclosure, and protection of PHI. Generally, Covered Entities’ use of cloud and edge computing services must comply with HIPAA.

Entities participating in federal programmes

Requirements applicable to activities by federal agencies and their contractors include the Federal Information Security Modernization Act (FISMA). FISMA establishes federal agency roles and responsibilities for information technology security. In addition, the Federal Risk and Authorization Management Program (FedRAMP), which operated as a government-wide policy for over a decade, was codified in federal law in early 2023, including several key aspects of the security assessment and authorisation process for cloud service providers.

Auditing standards

The Statement on Standards for Attestation Engagements No 18 (SSAE 18) sets forth standards used by auditors to review certain practices of service providers. Companies offering cloud-based services in the US frequently make available to their customers on an annual basis a type of report based on SSAE 18 known as a Service Organization Control (SOC) 2 report, focusing on the principles of security, privacy, availability, processing integrity, and confidentiality. A provider whose services materially impact its customers’ financial statements will often be asked to also provide a SOC 1 report, which focuses on the service provider’s controls relevant to financial reporting.

Payment card industry standard

The Payment Card Industry Security Standards Council has created the Payment Card Industry Data Security Standard (PCI DSS) to address data security for any company that stores, processes or transmits “Cardholder Data” or “Sensitive Authentication Data” as defined by the PCI DSS, though the requirements are not statutorily mandated. Nevertheless, it is standard industry practice for a service provider receiving Cardholder Data or Sensitive Authentication Data to be required to meet the extensive requirements of PCI DSS compliance.

Surveillance

Several federal laws authorise law enforcement and intelligence agencies to compel cloud and edge computing providers to produce personal data and other information in response to subpoenas, court orders, and other forms of legal process. Key statutes include the Foreign Intelligence Surveillance Act (FISA) and the Electronic Communications Privacy Act (ECPA), as amended by the Clarifying Lawful Overseas Use of Data Act (CLOUD Act). The CLOUD Act permits federal authorities in certain instances to compel technology providers based in the US to provide data stored on the provider’s servers located both inside and outside the US.

Given the potential multi-jurisdictional reach of cloud-based products and services, these US laws may conflict with laws of other countries claiming jurisdiction over data or computer assets. For instance, with regard to the European Union, the scope of US legal authorities’ reach, the strength of judicial and other safeguards, and the rights and protections that Europeans may exercise against government agencies seeking data stored by US-based providers, such as those arising under the EU’s General Data Protection Regulation (GDPR), have become major issues, especially following the Court of Justice of the European Union’s July 2020 Schrems II decision. Subsequently, the US and the EU have implemented frameworks relating to the exchange of personal information, including the US’s October 2022 “Executive Order on Enhancing Safeguards For United States Signals Intelligence Activities” and the EU’s December 2022 “Declaration on Government Access to Personal Data Held by Private Sector Entities”, as well as the EU’s 2023 Data Act relating to access and use of information. In addition to maintaining processes to evaluate and respond to government demands, cloud providers increasingly face demands from their customers to assess the risk of government access to the customers’ data processed by the providers.

Specific Issues for Processing of Personal Data

In addition to the sector-specific laws and standards discussed above, several federal and state laws and regulations govern specific circumstances relating to the type of personal data collected or transmitted. These laws include broadly defined federal and state consumer protection laws, comprehensive state-level statutes, and laws designed to protect either certain categories of data or data collected on specific populations.

Federal laws

The FTC is the main consumer protection enforcement agency in the US and has long applied its authority to prevent “unfair or deceptive acts or practices” to the data protection arena. Although this authority, defined under Section 5 of the FTC Act, is not specific to data protection, the FTC has used it to bring more than 100 privacy and data security enforcement actions over approximately two decades.

State laws

Virtually every state has enacted narrow legislation to protect specific categories of sensitive personal data of its residents, and several have enacted comprehensive frameworks, including California (the California Consumer Privacy Act (CCPA)), Virginia (the Virginia Consumer Data Protection Act (VCDPA)), Colorado, Connecticut, Texas, and Utah. In general, these privacy frameworks require contracts with service providers/processors, such as cloud computing providers, to limit the service provider’s data use, assist with consumer rights requests and data protection impact assessments, and ensure personal data security, among other requirements. Below is a high-level description of some of these state statutory requirements.

California

The CCPA was enacted in 2018 to give Californians more control over the personal information certain businesses collect and use about them. “Personal information” is defined under the CCPA as information that identifies, relates to, or could reasonably be linked with a California consumer or their household, including name, social security number, email address, product purchasing records, online browsing history, geolocation information, and biometric data. Personal information does not include information that is publicly available, de-identified, or aggregated, as defined under the CCPA.

In addition, in November 2020, California voters approved an amendment to the CCPA called the California Privacy Rights Act (CPRA). The CPRA fully went into effect on 1 January 2023. Key amendments under the CPRA include the following.

  • A higher applicability threshold: the CCPA now applies to organisations that buy, sell, or share the personal data of at least 100,000 (previously 50,000) residents or households annually. This modification may exclude some entities from application of the CCPA.
  • New category of “sensitive personal information”, which includes personal information that reveals a consumer’s:
    1. social security, driver’s license, state identification card, or passport number;
    2. financial account and related data;
    3. precise geolocation within a 1,850 foot radius;
    4. racial or ethnic origin, religious or philosophical beliefs, or union membership;
    5. the contents of mail, email, or text messages, unless the business is the intended recipient of such information;
    6. genetic data and/or biometric information for the purpose of unique identification; and
    7. health status or medical conditions and sexuality, including sexual orientation.

In addition, in 2023, California expanded this list to include citizenship and immigration status, starting 1 January 2024.

Under the CCPA and CPRA, consumers have the right to limit or opt out of certain uses of sensitive personal information. In addition, beginning in 2026, consumers will be permitted to request deletion of their information across all data brokers using a single deletion request.

  • In addition to limiting the use of sensitive personal data, consumers can opt out of:
    1. the sale of personal information to a third party for monetary or other valuable consideration; and
    2. sharing of personal information with a third party “for cross-context behavioural advertising, whether or not for monetary or other valuable consideration”.
  • Consumers also have a new right to require businesses to correct inaccurate information “taking into account the nature of the personal information and the purposes of the processing of the personal information”.

Violations of the CCPA are enforced by both the California Attorney General and the California Privacy Protection Agency, both of which have the power to impose penalties/fines of up to USD2,500 per violation or USD7,500 per intentional violation/violation involving consumers under 16 years of age. The law also affords California consumers a private right of action for breaches of sensitive personal information.

Virginia

The Commonwealth of Virginia was the second state to enact a comprehensive data privacy law, the Virginia Consumer Data Protection Act (VCDPA). The VCDPA was passed into law in 2021 and went into effect on 1 January 2023.

Virginians have comparable rights under the VCDPA to those Californians have under the CCPA, with certain material variances, such as an exemption to the right to delete personal data where the information was obtained from a third party. Virginia also affords additional, specific rights to opt out of targeted advertising and profiling.

Finally, there is no private right of action for Virginians to recover damages for a business’s breach of the VCDPA. The Virginia Attorney General is responsible for enforcing the VCDPA and may impose penalties of up to USD7,500 per violation.

Colorado

The Colorado Privacy Act (CPA) was signed into law on 7 July 2021 and went into effect on 1 July 2023. In addition, entities will be required to accept opt-out requests through a universal mechanism starting 1 July 2024. The CPA is generally more closely aligned with the VCDPA than with the CCPA. Violations of the CPA are enforceable exclusively by the Colorado Attorney General and the 22 Colorado District Attorneys, and are subject to penalties of up to USD20,000 per violation under the Colorado Consumer Protection Act. There is no private right of action for Colorado consumers under the law.

Other states

At least nine other states have enacted their own comprehensive privacy laws. Given the differences among current state laws, companies will need to devote careful thought to a compliance strategy that accounts for these differences and the laws’ varying coverage.

Artificial Intelligence

Artificial intelligence (AI) in general terms refers to the use of computers to solve problems or perform activities traditionally requiring the application of human intelligence. The term has been in use for many decades, but an enormous amount of publicity has resulted from the recent release of “generative AI” tools, such as ChatGPT. Generative AI tools use sophisticated algorithms to analyse very large data sets and identify patterns and then, in response to a user’s prompt, create new output closely resembling human-created content. For example, generative AI may be used to create text, images or software code.

There is currently no comprehensive federal statutory framework in the US dedicated to the regulation of AI. As a result, businesses involved in AI-related activities will need to consider a variety of laws at the federal, state and local level that may impact those activities.

Federal AI Approach

The Biden Administration released Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on 30 October 2023. The Order covers more than just generative AI and uses the following definition of artificial intelligence to establish its scope: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments”. The Order seeks to establish responsible AI development and deployment by providing direction to numerous federal agencies across a variety of AI-related areas, including safety and security, innovation and competition, support for workers, AI bias and civil rights, consumer protection, privacy, federal use of AI, and international leadership. While the Order is directed towards specific federal agencies, it is likely also to influence additional legislation as well as best practices.

Entities that hold personal information about consumers and businesses should consider privacy, disclosure, equal opportunity/non-discriminatory uses, and transparency concerns. For example, the Federal Trade Commission (FTC) and Consumer Financial Protection Bureau (CFPB) have indicated that they may use their authority to prevent unfair or deceptive acts and practices in the AI and big data arenas. In November 2023, the FTC adopted a resolution streamlining its ability to issue civil investigative demands for AI products that potentially violate Section 5 of the FTC Act. In September 2023, the CFPB issued guidance on the use of AI in credit denials, explaining that creditors are required to explain the specific reasons for taking adverse actions even if companies use complex algorithms and black-box credit models to make such decisions.

Employment-Related AI Regulation

Regulatory authorities in the United States have also increasingly become focused on the potential discriminatory impact of the use of AI in decision-making, such as when algorithms are used in connection with employment decisions or the decision to offer credit to a consumer.

A number of states and municipalities have adopted laws specifically regulating the use of automated tools in the employment area. For example, New York City adopted a law effective 1 January 2023, with broad application to the use of AI by employers, including prohibiting the use of AI tools to screen candidates unless a bias audit has been conducted, and establishing liability for employers and third-party vendors alike.

AI Privacy Considerations

AI implicates privacy, disclosure, equal opportunity/non-discriminatory use, and transparency considerations, and generative AI raises additional considerations. Generative AI often depends on algorithms that evolve or change over time. To the extent an entity discloses that certain personal information is being collected, retained and used, and the consumer provides consent for that purpose, any changes outside the scope of that disclosure and the consumer’s consent may violate federal and state privacy and consumer protection laws or be considered an “unfair or deceptive act or practice” under Section 5 of the FTC Act.

Entities should ensure that disclosures and consents are sufficiently specific to inform consumers of the nature of the personal information being collected, retained or used, and periodically update these disclosures and consent in parallel with changes to their machine learning and AI algorithmic processes.

In addition, various states have statutory restrictions on the collection, retention and use of personal information generally, or with respect to specific types of personal information.

AI Intellectual Property Issues

In the case of generative AI, a number of intellectual property-related issues arise regarding ownership of intellectual property associated with both the AI itself and the output from the use of AI. Many of these issues are the subject of current litigation. An entity engaged in creating generative AI or using generative AI for the creation of output (whether text, software, images or music) will need to consider the intellectual property issues associated with such use. 

Recent decisions have established that AI cannot be either the inventor of a patent or the author for purposes of copyright protection (see Thaler v Vidal, 43 F.4th 1207 (Fed. Cir. 2022) and Thaler v Perlmutter, Case No 1:22-cv-01564 (D.D.C. 2022)). However, a number of intellectual property-related issues remain outstanding, such as whether an AI-generated work is owned by the AI’s software programmer and the ability to protect intellectual property created jointly by a human and an AI tool.

In addition, a number of cases have been filed alleging that the process used to “train” generative AI tools violated the intellectual property or other rights of the plaintiffs, including with respect to the use of open source software, books and recordings for training generative AI tools and the failure to obtain the consent of consumers (see, for example, J. Doe et al. v GitHub et al. (November 2022), Sarah Silverman v OpenAI (July 2023) and P.M. et al. v OpenAI (June 2023)).

Liability and Risk Allocation

In addition to the legal issues described above, generative AI tools regularly produce incorrect or fabricated answers to prompts, known as “hallucinations”, as well as occasionally offensive output.

An entity utilising AI in its products or services, whether generative AI or otherwise, will need to consider the potential product liability issues. Courts in the United States are in the process of examining the manner in which the use of AI in products and services implicates traditional concepts of product liability for negligence (eg, failure to act as a reasonable person under the circumstances) and strict liability (eg, liability associated with putting a defective product or service into the stream of commerce without regard to the standard of care used, or contractual relationships).

In light of the potential risks, entities considering the use of generative AI should ensure proper risk allocation and precautions are considered. For example, an acquirer of an AI tool from a third party will need to carefully consider (i) the terms and conditions under which the tool is provided, including indemnification protection from third-party claims and disclaimers, and limitations of liability; (ii) the business risk associated with the use of output from an AI tool and the practical ability to mitigate the risk of use of the output; and (iii) the legal and business risks associated with allowing an AI tool to utilise data provided by the end user to conduct additional “training” of the AI tool.

AI Policies and Governance

The use of generative AI offers a variety of potential business benefits and efficiencies and is rapidly becoming more reliable. Many entities in the United States are establishing policies and committees of internal stakeholders in order to analyse and manage the use of these AI tools and the risks described above.

Internet of Things

“Smart” or “connected” devices, also known as internet of things (IoT) devices, are addressed by guidelines of the federal National Institute of Standards and Technology (NIST), published by the US Department of Commerce, as well as by statutory requirements found in certain state laws. Further, pursuant to the federal Internet of Things Cybersecurity Improvement Act of 2020, compliance with the NIST guidelines is required for federal procurements.

The NIST guidelines in NISTIR 8259 provide a summary of cybersecurity and privacy risk considerations, as well as assessment tools, and NISTIR 8259A provides a baseline for determining whether a connected device is “securable”.

Connected devices raise considerations of privacy, disclosure, and cybersecurity concerns relating to information that the connected device uses, receives, stores, or transmits described elsewhere in this article, and in particular, 2. Digital Economy, 3. Cloud and Edge Computing, 4. Artificial Intelligence, and 8. Challenges with Technology Agreements. Two additional frameworks are also significant in this arena: federal and state wiretapping laws and critical infrastructure.

Federal and State Wiretapping Laws

The federal Wiretap Act and similar state laws generally prohibit the interception of electronic communications. Although these laws contain exceptions, including for recipients of communications and for analysis of communications for security purposes, the application of these exceptions requires fact-specific analysis. For example, some state wiretap laws require all parties to a communication to consent to interception. If the consent exception is the basis for intercepting machine-to-machine traffic, it is important to understand whether such multi-party consent is necessary.

Critical Infrastructure

The Cybersecurity and Infrastructure Security Agency of the US Department of Homeland Security is developing cyber-incident and ransom payment reporting regulations pursuant to the Cyber Incident Reporting for Critical Infrastructure Act of 2022. Entities in the communications and healthcare sectors, among others, may be covered by these regulations. The reporting requirements, however, are not effective until the regulations are finalised. In addition, in August 2023, the US Federal Communications Commission (FCC) instituted a proceeding proposing a voluntary cybersecurity labelling programme based on cybersecurity standards for IoT devices and products. The FCC may implement that programme in 2024.

Certain states, such as California and Oregon, have statutes specifically focused on securing connected devices by requiring them to be equipped with cybersecurity safeguards which differ depending on the type of connected device. These state statutory requirements and NIST guidelines for securing a connected device should be considered in addition to privacy, disclosure, and transparency statutes, as well as general consumer protection statutes.

Radio and Broadcast Television

The Communications Act of 1934, as amended, with rules promulgated and enforced by the FCC, governs commercial AM radio, FM radio, and television broadcast authorisations. An authorisation from the FCC is required to operate a commercial AM radio, FM radio, or broadcast television station in the US, as described more fully in the FCC’s application guidance.

An application to the FCC for a commercial AM radio station (with frequencies of 540 kHz to 1700 kHz) requires a demonstration of non-interference on the same or adjacent frequencies with existing US or foreign-based AM stations, as well as harmonic and intermediate frequency analyses. Applications and fees must be submitted during allotment application windows. At this time, the FCC states that it is not accepting new AM broadcast station applications.

An application to the FCC for a commercial FM radio station (with frequencies of 92.1 MHz to 107.9 MHz) requires an application for a construction permit and a concurrent petition for rulemaking to the FCC which, according to the FCC’s application guidance, must show the following.

  • The proposed new channel, class, and community to be served.
  • That the proposed new allotment site meets the spacing requirements of the FCC’s rules with respect to other stations, prior-filed applications, and vacant allotments.
  • That the proposed new allotment site provides at least a 70 dBu signal strength over the entire community of licence.

If the petition is accepted, the FCC will issue a notice permitting public comment on the application. If approved, the new allotment is then placed up for auction, in which the original petitioner must bid on the allotment. At this time, the FCC states that it is not accepting new FM commercial broadcast station applications.

Full-powered television broadcast stations are allocated through the FCC’s Table of Allotment (47 CFR Section 73.622). Applicants seeking a new broadcast television station must petition the FCC and the FCC will then conduct an auction. At this time, the FCC states that it is not accepting new full-powered broadcast television station applications.

Video Programming by Cable and Open Video Services

Historically, video programming has been governed by state and local jurisdictions, called local franchise authorities. However, the Cable Communications Policy Act of 1984, as expanded by the Cable Television Consumer Protection and Competition Act of 1992, added a limited amount of cable television regulation to the FCC’s authority under the Communications Act of 1934 while maintaining the primary regulatory role of the local franchise authorities. Notable exceptions include a federal prohibition on regulating rates for cable operators that are “subject to effective competition”, as defined by the FCC, and a prohibition on exclusive cable franchises. In addition, Section 653 of the Telecommunications Act of 1996, as amended, established an “open video system” (OVS) distribution method for video programming in the absence of a local franchising authority regulatory requirement. The specific local franchise requirements vary widely across jurisdictions.

Online Video Services

The FCC’s video regulations generally do not apply to IP-delivered video programming that is not provided by a multichannel video programming distributor (MVPD). There are no prior regulatory authorisation requirements to post videos online. However, entities should ensure compliance with federal, state, and local rules when making videos available online. For example, the distribution of online videos may implicate the 21st Century Communications and Video Accessibility Act (CVAA), 47 USC §613, Copyright Act protections, and FCC rules where the video was previously published or shown on television. Further, the applicability of some statutes, such as Title III of the federal Americans with Disabilities Act (ADA), 42 USC §12182, is open to interpretation and may change through judicial decisions on whether specific requirements apply to online videos. In addition, the federal Video Privacy Protection Act (VPPA), 18 USC §2710, establishes notice and consent requirements for “video tape service providers”, a term defined with sufficient breadth to include many online streaming services as well as video-on-demand services. The VPPA generally requires a consumer’s opt-in consent to disclose personally identifiable viewing history information. The VPPA provides a private right of action and has led to a significant volume of class action litigation against video service providers and, in some instances, advertising platforms.

With regard to the content of video posted online, pursuant to Section 230 of the Communications Decency Act, providers of an interactive computer service generally are not treated as the publisher or speaker of information provided by another information content provider. As a result, companies offering video-sharing platform services generally will not be liable for civil damages for the content of user-posted videos. Nor will they be liable for actions taken in good faith to restrict access to, or the availability of, material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected, or for taking action to enable or make available the technical means to restrict access to such material, provided the other procedural requirements of the statute are met.

Telecommunications

Title II of the federal Communications Act of 1934, as amended (including by the Telecommunications Act of 1996), generally governs the offer to the public of interstate and international “telecommunications” – transmissions by the aid of wire, cable, radio, or other like connections – through regulations promulgated and enforced by the Federal Communications Commission (FCC) (47 USC §§153(50), (53)). The FCC also asserts jurisdiction over certain aspects of interconnected Voice over Internet Protocol (VoIP) services. Whether a transmission is interstate or international, on the one hand, or intrastate, on the other, is generally determined by the origination and termination points of the transmission.

Generally, providers of telecommunications must possess authorisation from the FCC under Section 214 of the Communications Act of 1934 for interstate and international transmissions, though certain wireless carriers are relieved of the requirement to obtain Section 214 authority and broadband internet access service (“broadband”) is currently subject to distinct regulatory frameworks. However, in October 2023, the FCC proposed to re-classify broadband as a Title II service, with certain forbearances, in its proposed “Net Neutrality Rule”. That proposal may be adopted in 2024. Currently, all Title II telecommunications service providers and interconnected VoIP providers must obtain an FCC Registration Number (FRN) through the FCC’s website, register with the FCC, and designate an agent for service of process by filing a form with the Universal Service Administrative Company (USAC). These obligations apply to both wholesale and resale providers.

Individual state commissions and state and local statutes regulate intrastate transmissions. Providers of intrastate telecommunications must register with or obtain authorisation from each individual state in which the intrastate transmission occurs, except where the state legislature or commission has exempted the requirement. Providers of intrastate telecommunications services are also subject to state statutory requirements, such as state statutes on unfair or deceptive acts and practices and privacy.

In contrast to federal Title II “telecommunications services”, transmissions may be subject to reduced FCC regulation if provided on a “private carriage” basis, or if the interstate or international transmission consists of “information services”. Private carriage is the transmission of telecommunications that are not offered to the public. When interstate or international telecommunications are provided on a private carrier basis, the provider is not required to obtain a Section 214 authorisation from the FCC and fewer federal compliance obligations apply. State regulators, however, generally do not recognise the concept of private carriage as an exception to authorisation or compliance obligations.

In addition, there is no requirement to obtain Section 214 authority from the FCC to provide interstate and international information service transmissions. Information services are statutorily defined as “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilising, or making available information via telecommunications” (47 USC §153(24)). The definition further states that information service “does not include any use of any such capability for the management, control, or operation of a telecommunications system or the management of a telecommunications service”. Generally, if there is a net change in the protocol of a transmission, the transmission is likely to qualify as an information service (47 USC §153(50)). Information services have commonly been identified as email, online gaming, web browsing, videoconferencing, instant messaging, and other similar non-voice IP-enabled services. The FCC has a long-standing policy against the economic regulation of “information services”, and there is a prohibition on states subjecting information services to state economic regulation. However, the FCC’s policy does not prohibit the application of federal and state consumer protection laws to these services.

With respect to VoIP services and VoIP service providers, there is a continuing dispute as to whether federal communications law permits states to require applications for authority; however, one or more states currently assert that authority. For the most part, states either require registrations, typically for purposes of collecting state public fund contributions, or do not impose either a registration or application requirement, although even in such cases, contributions to state public purpose funds may still be required. State economic regulation, such as tax obligations, generally apply to VoIP revenues. In general, state regulation of VoIP services and VoIP services providers is limited, but registration, where it exists, varies considerably across the states with each state uniquely handling the question of VoIP jurisdiction.

Key Challenges

Technology agreements may cover the deployment or sale of a number of different services, products, solutions and platforms, including on-premise software licences, software-as-a-service offerings, software development and maintenance, data-related products and services, artificial intelligence-enabled solutions and many others. Each type of agreement will have its own challenges. However, key challenges often include performance commitments (eg, warranties and service levels), clear upfront pricing and addressing changes to prices over time, compliance-with-laws provisions, intellectual property ownership, data security and privacy, audit rights, indemnification, and limitations and exclusions of liability.

Legal Framework

As a general matter, technology agreements will choose the laws of a particular state to apply to the agreement and a court will enforce the parties’ choice of law in the contract as long as there is a reasonable relationship to the transaction or the parties, subject to certain exceptions. However, in the United States, federal law generally takes precedence over state laws. While there is no over-arching federal contract law, various federal laws will continue to apply to a technology agreement containing the choice of a specific state’s laws depending upon factors such as the subject matter of the agreement, the industry involved and the technology or data involved. For example, software and products that incorporate encryption may be subject to export restrictions under the Export Administration Regulations, a complex licensing and exemption scheme for encryption exports.

Federal, state and local governmental agencies and entities entering into technology agreements are often subject to laws and regulations applicable to the procurement process for technology agreements as well as specific requirements related to provisions contained within the agreements.

Data Protection and Cybersecurity

An increasingly important data protection and security issue in technology agreements is whether the parties are entering into a service provider/processor relationship, or whether personal data transferred pursuant to the agreement passes between parties with independent rights to determine the means and purposes of processing. Comprehensive state privacy laws establish specific requirements for service provider/processor contracts, similar to those under the GDPR. In addition, California requires agreements under which a party sells or shares personal data to include a subset of these provisions, including specifying the purposes of the data transfer, obligating the recipient to comply with applicable privacy laws, and providing the data source the rights to assess the recipient’s compliance and remedy instances of non-compliance.

US laws generally do not require data localisation or restrict storage location (other than in relation to countries that are under sanctions or export controls), nor do they require specific measures for cross-border data transfers. However, the location of personal data storage, including the ability to enforce confidentiality provisions against employees or contractors, is often a factor in assessing a contracting party’s ability to meet contractual obligations and to provide a reasonable level of data security.

Data Protection and Cybersecurity

Trust, digital identity, and similar services process personal data that may be highly sensitive because of its potential to be misused for fraud, identity theft, or account compromise. Personal data used in the course of providing such services may be subject to data breach notification laws, which have been enacted in all 50 states, the District of Columbia, and several US territories. These laws typically provide exemptions for encrypted data, provided that encryption keys are not compromised, but determining whether this exemption applies may require a forensic investigation of the relevant data security incident.

Other data protection and cybersecurity considerations that relate to trust and identity services include the following.

  • Biometric privacy laws. At least three states have enacted laws that establish notice, consent, and retention requirements for biometric information that is used to establish individuals’ identity. One of these laws, the Illinois Biometric Information Privacy Act, provides a private right of action and has given rise to a significant amount of class action litigation.
  • Comprehensive state privacy laws. In addition to requiring heightened consent and security measures for sensitive data (including biometric information and, in California, account credentials), state privacy laws require parties to determine whether the provider of the trust or identity service is acting as a service provider/processor or as a third party with independent rights to use personal data under the relevant agreement.
  • State data security and disposal laws. Most states have enacted data security legislation, including secure disposal requirements, that specifically govern sensitive data such as social security numbers and state identification numbers.

Electronic Signatures

The federal Electronic Signatures in Global and National Commerce Act (the “ESIGN Act”), as supplemented by the Uniform Electronic Transactions Act (UETA) and similar laws adopted at the state level, establishes that electronic records are not invalid solely because of their electronic nature when the parties have chosen to use electronic documents and signatures. The ESIGN Act permits individual states to further address electronic signatures for transactions subject to the individual state’s laws, other than in certain areas where the ESIGN Act pre-empts state law. While most states have adopted an act very similar to the model UETA, some states have modified the model act or not enacted it. In addition, the model UETA contains certain exceptions to the use of electronic signatures, such as their use with wills, codicils and certain trusts.

Generally speaking, a party to an agreement seeking to establish the validity of an electronic signature will need to:

  • show that the counterparty intended to sign the document and consented to conduct business electronically;
  • establish the validity of the process by which the signature was created or indicated; and
  • show that a record of the electronic signature was retained and can be reproduced by all parties to the agreement.
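
As a purely illustrative sketch of how these three elements are often captured in practice, an electronic-signature event might be recorded along the following lines. The field names, the SHA-256 document fingerprint, and the helper function are assumptions chosen for illustration; none of them is mandated by the ESIGN Act or the UETA.

```typescript
import { createHash } from "node:crypto";

// Illustrative e-signature audit record. Capturing the signer's consent to
// transact electronically, the signer's expressed intent, and a reproducible
// record of exactly what was signed maps onto the three elements listed above.
interface SignatureRecord {
  signerEmail: string;
  consentedToElectronicBusiness: boolean; // explicit consent to conduct business electronically
  intentConfirmed: boolean;               // eg, clicked "Sign" after reviewing the document
  documentSha256: string;                 // fingerprint of the exact document text signed
  signedAt: string;                       // ISO 8601 timestamp of the signing event
}

function signDocument(
  signerEmail: string,
  documentText: string,
  consented: boolean,
  intent: boolean
): SignatureRecord {
  if (!consented || !intent) {
    // Both consent and intent must be affirmatively indicated before signing.
    throw new Error("Consent and intent are required for a valid electronic signature.");
  }
  return {
    signerEmail,
    consentedToElectronicBusiness: consented,
    intentConfirmed: intent,
    documentSha256: createHash("sha256").update(documentText, "utf8").digest("hex"),
    signedAt: new Date().toISOString(),
  };
}

// Usage: the record (and the signed document itself) is retained so that every
// party can later reproduce what was signed, when, and by whom.
const record = signDocument("signer@example.com", "Agreement text...", true, true);
console.log(record);
```

Hashing the exact document text allows any party to later verify that a retained copy matches what was signed, which supports the retention and reproduction element.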

Additional requirements apply to transactions involving consumers in some cases.

In addition to meeting the requirements related to electronic signature, an electronic contract will still need to meet the requirements for an enforceable contract under applicable state law (ie, an offer, acceptance of the offer and consideration).

Kelley Drye & Warren LLP

3 World Trade Center
New York
NY 10007
USA

+ 1 212 808 7800

+ 1 212 808 7897

crubsamen@kelleydrye.com
www.kelleydrye.com