Trade Marks & Copyright 2025

Last Updated February 18, 2025

USA – New York

Trends and Developments


Authors



Cowan, DeBaets, Abrahams & Sheppard LLP is a boutique firm specialising in entertainment, media, branding and IP law, based in New York, and Beverly Hills, CA. The firm is a founding legal adviser to the Copyright Alliance and its attorneys hold leadership positions within the MLRC and ABA IP Section. Cowan, DeBaets, Abrahams & Sheppard LLP (CDAS) lawyers have held leadership roles as officers and trustees of the Copyright Society of the USA, spoken on copyright-related issues throughout the world, and advocate on behalf of associations in furtherance of copyright reform. Additionally, CDAS lawyers have appeared in leading copyright cases, including three of the pending AI copyright class actions, and represented amicus parties in cases before the Supreme Court and many of the Courts of Appeals including in Sedlik v Von Drachenberg (Ninth Circuit), Hachette v Internet Archive (Second Circuit), and Sony Music Entertainment v Cox Communications (Supreme Court). The firm also provides copyright clearance review for entertainment and media clients, and advises on best practices and risk management.

The Digital Replica and Associated Copyright and AI Law

Introduction

With 2024 drawn to a close, one thing is clear – the advancement, adoption and transformative capacity of generative artificial intelligence (AI) continued apace this past year. Businesses have continued to incorporate the technology into their operations, AI companies have continued to launch and update their suites of products, and government agencies and courts have continued to grapple with the landscape of legal issues raised in the burgeoning technology’s wake.

Within this landscape, individuals – from politicians and actors to musicians and private citizens – have faced significant challenges, first and foremost the ease with which AI has allowed for the creation and dissemination of unauthorised digital replicas, or “deepfakes” as they are colloquially known. Capitalising on an individual’s unique image and/or voice, these replicas – which may take the form of a video, still image, or audio recording – present an ever-increasing threat, particularly considering the hyper-real depictions AI now makes possible. Everyone from business analysts to recording artists has taken note, as bad actors have used AI to fraudulently persuade corporations to transfer funds by replicating a CEO’s voice purportedly authorising the transfer (see Courtney Adante, Deepfakes in 2024 are Suddenly Deeply Real: An Executive Briefing on the Threat and Trends, Teneo (25 April 2024)), and new tracks have been released “by” high-profile artists, such as a Drake and The Weeknd song that garnered millions of views and streams before being pulled once it came to light that the falsified work had replicated each artist’s voice without authorisation (see Bill Donahue, Fake Drake & The Weeknd Song – Made With AI – Pulled From Streaming After Going Viral, BILLBOARD (17 April 2023)).

Facing the skyrocketing volume and breadth of unauthorised digital replicas, policymakers have increasingly begun to respond this past year. State lawmakers and government agencies have begun to assess – and in some instances implement – policies to curb the threats posed by digital replicas. One thing most agree on, even at this nascent juncture, is that action must be taken to channel uses of generative AI towards enhanced creativity, not towards supplanting individuals’ rights in their creative output, or indeed their actual personas, by allowing the creation, use, and dissemination of unauthorised digital replicas.

Part 1 of the US Copyright Office’s report

The US Copyright Office launched an initiative in 2023 to examine copyright law and related policy issues associated with AI. The Office undertook a series of public listening sessions and webinars on numerous AI-IP topics – including digital replicas, the use of copyrighted works as AI training data, and the copyrightability of AI-generated output – and subsequently published a notice of inquiry in the Federal Register that received over 10,000 comments. In 2024, it began putting forth its guidance and recommendations on these topics. Specifically, on 31 July 2024, it released Part 1 of its Report, addressing digital replicas.

Investigating the state of existing protections coupled with the many concerns raised in the comments it received, the Office has ultimately recommended a new federal law to protect individuals against the appropriation of their personas, owing, particularly, to “[t]he speed, precision, and scale of AI-created digital replicas.” (Copyright and Artificial Intelligence, Part 1: Digital Replicas at iii, US Copyright Office (July 2024).)

Specifically, the Office recommends that this protection extend to all individuals across their lifetimes – not merely celebrities – with limited post-mortem protection. Protection should ward against any hyper-realistic replicas that the public would be unable to distinguish from authentic depictions of an individual, whether AI-generated or created through other means. Per the Office’s recommendation, liability would attach to knowingly distributing or making such replicas available – with knowledge that the replica was unauthorised – not merely to their creation. The Office also recommends adapting existing safe harbour protections for online service providers that might unwittingly host the replicas, such that removal after effective notice would safeguard providers from liability. The recommendations acknowledge, as well, the need for a balancing framework to address free speech concerns, with carve-outs for instances where unauthorised replicas would defame, defraud, or commercially mislead others. Noting the patchwork application of First Amendment principles in existing case law in this arena – often in cases involving rights of publicity (discussed further below), in which courts across the country apply at least five different balancing tests that lead to conflicting outcomes – the Office stressed the need for the uniformity that a federal law would create. That federal law should, however, accommodate the interests of states in tackling this issue head-on by providing a legal floor that leaves states free to enact additional protections, particularly considering that rights of publicity and privacy, which tie inextricably into the issue of digital replicas, are already governed by state law.

Existing legal frameworks and state protection

The Office’s recommendation for a federal framework to protect against the harms posed by digital replicas comes amidst a variety of existing legal frameworks, predominantly at the state level, that may already offer some protection against digital replicas, given their purpose of guarding individuals’ rights of privacy and publicity. For example, various common law and statutory doctrines protect against intrusions into individuals’ privacy, aiming to shield an individual’s right “to be let alone” (see Samuel D. Warren & Louis D. Brandeis, The Right to Privacy, 4 Harv. L. Rev. 193, 195 (1890)) – ie, the right to be left alone, as it would usually be phrased today.

In common law, these doctrines include the torts of false light and appropriation of name and likeness, which protect against falsely and offensively representing individuals before the public, spreading misinformation about them publicly, or appropriating their identity or reputation for benefit. While these frameworks may lend some support to persons faced with unauthorised AI-generated replicas of themselves, particularly those of a sexual nature, there are limits to how far the doctrines can stretch in this novel context. For example, false light claims require that the presentation of an individual in a false light be “highly offensive” to a reasonable person; not all replicas, particularly those that are merely untruthful, would rise to this level. Another downside is that, as discussed above, these doctrines are not uniformly interpreted, with courts throughout the country setting forth inconsistent standards. For example, some courts require that the appropriation of a person’s likeness be for commercial purposes, or limit viable claims to those where a person’s name or likeness has “intrinsic value”, such that essentially only celebrities are guaranteed protection.

Existing state laws on rights of publicity (there is currently no uniform federal law in this arena) aim to prevent the unauthorised use, for profit, of an individual’s persona, encompassing identifiable aspects of their identity. New York, for example, extends such protection to a living individual’s “name, portrait or picture ... [for] advertising purposes, or for the purpose of trade,” and affords civil liability for misappropriation of a person’s “voice” (see N.Y. Civ. Rights Law Sections 50–51). Most states do in fact recognise rights of publicity, whether via common law or statute, and while at first blush these rights may appear to be a viable guardrail against unauthorised digital replicas, the lack of uniformity across state law would likely, here too, present a challenge. Some states, for example, restrict the right of publicity to limited classes of individuals, such as those domiciled in the state, or exclude the deceased.

Still, states have begun to address digital replicas through their right of publicity statutes, acknowledging that these laws are a feasible fit for at least some of the legal issues such replicas raise. In June 2024, for example, New York passed a Digital Replica Contracts Act that expanded the state’s right of publicity law to include digital replicas, though only for deceased performers domiciled in New York at the time of death. Meanwhile, Tennessee’s so-called “ELVIS Act”, or Ensuring Likeness Voice and Image Security Act, took effect the following month, updating the state’s “Protection of Personal Rights” law to cover generative AI models that enable human impersonation and allow users to make unauthorised fake works adopting another’s image or voice. Such legislation faces jurisdictional questions, however, including its potentially limited reach in subjecting out-of-state defendants to liability.

Finally, the US Lanham Act, the federal statute governing trade marks and certain acts of unfair competition, may provide some recourse. In certain instances, a digital replica could conceivably constitute false endorsement, which is prohibited under the Act and arises where a celebrity’s persona is used without permission to suggest false endorsement or association. To prevail on such a claim, a plaintiff must show that the defendant “(1) made a false or misleading representation of fact; (2) in commerce; (3) in connection with goods or services; [and/or] (4) that is likely to cause consumer confusion as to the origin, sponsorship, or approval of the goods or services” (see Beastie Boys v Monster Energy Co., 66 F. Supp. 3d 424, 448–49 (S.D.N.Y. 2014) (“False endorsement occurs when a celebrity’s identity is connected with a product or service in such a way that consumers are likely to be misled about the celebrity’s sponsorship or approval of the product or service.”)). Again, however, commercial use is required, as well as a likelihood of consumer confusion, mistake, or deception as to the person’s connection with the goods or commercial activity at issue. This limited scope would be unlikely to cover whole swathes of uses for which digital replicas are created, such as political commentary or “revenge porn”.

California’s recent legislation

As discussed, various states have begun to take note of the issues posed by digital replicas, and by generative AI generally, enacting new laws or expanding existing ones to cover AI conduct. Of all the states grappling with the risks posed by digital replicas, California has been perhaps the most proactive. Home to Hollywood, a booming tech sector, and – alongside New York – the jurisdiction that addresses most of the country’s IP disputes, California is leading the way with far-reaching legislation regulating the use of AI. The state recently passed 18 new AI-related bills, addressing areas such as media and entertainment and AI transparency. Much of this legislation went into effect in January 2025, including California A.B. 2602, the “Contracts against Public Policy: Personal or Professional Services: Digital Replicas Act”, addressing digital replica contracts, and California A.B. 1836, the “Use of Likeness: Digital Replica Act”, addressing digital replicas of deceased personalities. The former protects performers against the use of AI to misappropriate their names, images, and likenesses by restricting certain unfair contract terms pertaining to digital replicas. The latter allows the beneficiaries of deceased celebrities to recover damages where unauthorised replicas are used in audiovisual works or sound recordings, and requires entities to obtain an estate’s consent before using a deceased celebrity’s likeness.

In turn, various legislation is slated to go into effect in January 2026, such as the “California AI Transparency Act”, which mandates that providers of publicly accessible AI systems with over one million monthly users disclose the presence of AI-generated or modified content, and which establishes licensing practices so that only compliant AI systems may be offered for public use. Under this law, AI providers must also offer a free public tool for users to verify whether content is AI-generated or modified. In a similar push for transparency, the “Generative AI: Training Data and Transparency Act” will go into effect at the same time, requiring AI developers to publish high-level summaries of the datasets used to train their AI systems. These are only some of the pertinent acts California has recently passed, with healthcare, electoral integrity, and consumer privacy also at the forefront of this sweeping legislation.

Overall, these bills represent one of the first major legislative efforts at greater AI regulation in the US. As to digital replicas specifically, though, pre-emptive federal legislation may be on the horizon. In July 2024, a bill was introduced in Congress called the “Nurture Originals, Foster Art, and Keep Entertainment Safe Act”, or the “NO FAKES Act”, addressing digital replicas, which would pre-empt future state laws on the same topic (though not existing legislation, such as California’s, New York’s, and Tennessee’s). The law would apply to individuals generally, not merely celebrities, and include various free speech exemptions for satire, parody, and documentary use that some criticise as missing from current state laws, like California’s. The law would also ensure that a person’s rights could not be fully transferred to a third party during their lifetime. While advocates continue to push for Congress to establish an overall federal right of publicity that would, as an umbrella, address digital replicas, to date the NO FAKES bill is the one to watch as far as digital replicas are concerned in this ever-shifting AI legal landscape.

Conclusion

The year 2024 was clearly yet another fast-paced year for the growth and adoption of generative AI and the issues it raises, notably digital replicas, which have been at the forefront of AI-related legislative efforts thus far. As 2025 gets under way, the authors expect much more discussion and development from US judicial, legislative, and commercial entities on this topic. Taking into account the legal measures now emerging, particularly from California, the authors expect other states to continue following suit this year, considering and passing amendments and bills addressing digital replicas, as well as other pertinent AI-related topics. Notably, Congress’s NO FAKES Act is a bill to watch, as it may bring important overall reform to the existing patchwork, state-by-state approach to the digital replica concerns raised by generative AI.

Cowan, DeBaets, Abrahams & Sheppard LLP

60 Broad Street
30th Floor
New York
NY 10004
USA

+1 212 974 7474

+1 212 974 8474

nwolff@cdas.com

www.cdas.com