Defamation & Reputation Management 2026

Last Updated February 10, 2026

USA – California

Trends and Developments


Authors



Kinsella Holley Iser Kump Steinsapir LLP (KHIKS) is the pre-eminent litigation boutique for entertainment, media, music and IP matters. Formed 20 years ago, its core comprises trial lawyers who attended the nation’s top law schools, clerked for the federal Circuit Courts of Appeals and District Courts, and previously practised at the most well-respected “big law” firms in the country. KHIKS’s attorneys have extensive experience inside and outside the courtroom and are regularly called upon by large corporations and high-profile individuals to handle cutting-edge legal matters for both plaintiffs and defendants, from pre-litigation disputes through trial and appeal. KHIKS’s team has particular experience and expertise in handling a broad range of defamation and reputational issues. Its lawyers are routinely honoured in leading industry publications and are frequently quoted in the media on a variety of legal topics, particularly disputes arising in the entertainment industry.

Introduction

As the home base of the entertainment industry, California has traditionally been a leader in defamation and reputation management litigation. This article explores the following four recent trends and developments in these areas:

  • recent defamation rulings in entertainment cases that underscore the need for careful evaluation before filing, given the now widespread anti-SLAPP (Strategic Lawsuit Against Public Participation) laws;
  • the federal Ninth Circuit Court of Appeals’ reversal of more than 20 years of precedent in ruling that the denial of an anti-SLAPP motion is not immediately appealable, which has significant implications for litigating defamation and related issues in federal court;
  • California’s significant updates and improvements to its laws regarding AI-generated likenesses and deepfake content; and
  • the changes to defamation law practice that will arise from recent rapid improvements in AI video generation technology, embodied by OpenAI’s Sora 2.

Recent Hollywood Defamation Cases Offer Insight into Courts’ Treatment of Abuse Allegations

Recent decisions in defamation litigation in Hollywood provide insight into how courts are approaching claims arising from allegations of abuse or misconduct. While each case turns on its own facts, a number of high-profile rulings have drawn attention to the challenges plaintiffs face in this context. Courts have closely focused on the circumstances in which statements are made, as well as the broader public interest in discussions of alleged wrongdoing. These decisions are increasingly relevant to reputation management strategies for individuals and businesses facing public scrutiny.

One widely reported example is Manson v Wood. In 2022, musician Marilyn Manson filed a defamation lawsuit against actress Evan Rachel Wood after she publicly identified him as her abuser in an Instagram post. Represented by Michael Kump, Shawn Holley and Katherine Kleindienst of Kinsella Holley Iser Kump Steinsapir LLP, Wood responded with a robust anti-SLAPP motion (a procedural tool designed to quickly dismiss lawsuits seen as silencing public participation). The court dismissed substantial portions of Manson’s claims and ordered Manson to pay Wood’s legal fees, finding that Wood’s statements were protected speech and that Manson was unlikely to succeed at trial.

Following this defeat, Manson voluntarily dismissed the remaining claims in late 2024 and agreed to pay Wood the nearly USD327,000 fee award. The outcome illustrates how anti-SLAPP statutes can operate as an early and effective defence where defamation claims arise from public allegations of abuse. It also demonstrates the financial consequences for plaintiffs who pursue claims that courts view as retaliatory.

Another recent dispute attracting industry attention is Baldoni v Lively. In late 2024, Blake Lively filed a lawsuit against Justin Baldoni and others, alleging sexual harassment and a hostile work environment during the filming of It Ends With Us. In response, Baldoni filed defamation and other claims against Lively. Lively moved to dismiss Baldoni’s lawsuit on several grounds, including by invoking California Civil Code § 47.1, a statute enacted in 2023 that bars retaliatory lawsuits based on public disclosures of sexual harassment and related allegations.

In June 2025, the court dismissed Baldoni’s defamation claims, ruling that Lively’s core statements were legally protected and thus could not support a defamation claim. The court allowed Baldoni to amend certain non-defamation allegations, but Baldoni opted not to file an amended complaint by the prescribed deadline.

Taken together, cases such as Manson v Wood and Baldoni v Lively suggest that courts are paying close attention to the context in which allegedly defamatory statements are made. In several instances, judges have emphasised the public interest in allowing open discussion of alleged misconduct, particularly when speakers recount personal experiences. This does not prevent defamation claims altogether, but it can affect how readily they proceed.

These recent decisions provide useful context rather than definitive rules. Defamation law remains fact-specific, and outcomes depend heavily on jurisdiction, procedural tools and timing. What the recent cases demonstrate is the need for careful, experienced evaluation before litigation begins.

Ninth Circuit Reverses Course on Appealability of California Anti-SLAPP Denials

In 1992, California became the first state to enact an anti-SLAPP statute. This groundbreaking legislation made California a pioneer in protecting free speech rights by authorising defendants to obtain early dismissal of meritless lawsuits designed to chill their right of petition or free speech on matters of public interest. Over the years, anti-SLAPP motions have proven to be formidable procedural tools against retaliatory litigation, including defamation, malicious prosecution and other claims aimed at silencing criticism or disapproval.

When evaluating whether to grant an anti-SLAPP motion, California courts apply a two-part test. First, the defendant must make a threshold showing that the challenged cause of action arises from protected activity. If the defendant makes that showing, the burden shifts to the plaintiff to demonstrate a probability of prevailing on the claim.

Historically, most California litigants seeking relief from adverse anti-SLAPP rulings have not been forced to wait until final judgment is entered. In California state court, an order granting or denying a special motion to strike is immediately appealable. Until recently, the same was also true in California federal courts with respect to denials of such motions: in 2003, the Ninth Circuit held in Batzel v Smith that a district court’s denial of an anti-SLAPP motion is immediately appealable under the collateral order doctrine. That doctrine is an exception to the general rule that a party is entitled to only a single appeal, deferred until final judgment has been entered.

However, this previously settled law changed with the Ninth Circuit’s 2025 opinion in Gopher Media LLC v Melone. An en banc panel of 11 judges unanimously ruled that a district court’s order denying a motion under California’s anti-SLAPP law is not appealable under the collateral order doctrine. In doing so, the Ninth Circuit overruled more than 20 years of precedent and brought itself into closer alignment with the Second and Tenth Circuits.

Using the same analytical framework as in Batzel, the Ninth Circuit in Gopher Media LLC recognised that to fall within the “narrow” collateral order doctrine, a district court decision must, among other things:

  • resolve an important issue completely separate from the merits of the action; and
  • be effectively unreviewable on appeal from a final judgment.

In concluding that denial of an anti-SLAPP motion does not satisfy the collateral order doctrine, the Court reasoned that the questions that must be answered to resolve an anti-SLAPP motion are “inextricably intertwined with the merits of the litigation.” In addition, though some important interest may be lost if a defendant must wait to appeal a final judgment in an anti-SLAPP case, this does not render the decision “effectively unreviewable.”

While Gopher Media LLC expressly holds that the denial of an anti-SLAPP motion under California law cannot be immediately appealed under the collateral order doctrine, significantly, the Ninth Circuit did not address the threshold issue of whether California’s anti-SLAPP statute should apply in federal court at all. That question may well be the subject of future rulings from the Court in 2026 and beyond.

California Updates Laws Governing AI-Generated Likenesses and Deepfake Content

California’s 2025 legislative session produced a package of laws targeting deepfakes and AI-generated content. Governor Gavin Newsom signed these measures into law in October 2025. Together, they represent a step towards protecting individuals from AI-driven identity misuse. However, they also reflect the legislature’s caution about regulating digital replicas that may appear in expressive works protected by the First Amendment.

One key change addresses non-consensual deepfake pornography. California amended Civil Code Section 1708.86 to strengthen civil claims, allowing victims to sue not only creators but also anyone who knowingly facilitates or recklessly aids and abets the creation or disclosure of non-consensual deepfake pornography. Absent evidence that the depicted individual has provided express written consent, any person who owns, operates, or controls a deepfake pornography service, such as a website or mobile application, is presumed to have known that the depicted individual did not consent. Statutory damages have also increased to USD250,000 for malicious violations.

The legislature also closed a potential loophole regarding AI accountability. A new provision, Civil Code Section 1714.46, prevents defendants who develop, modify, or use AI from avoiding responsibility by claiming that the AI “autonomously” caused the plaintiff’s harm. This law responds to real-world attempts by companies to insulate themselves from AI errors, such as Air Canada’s 2024 argument that its chatbot was a “separate legal entity” after the bot erroneously promised discounts to consumers.

California Civil Code Section 3344

California also updated its right-of-publicity protections for living persons. Civil Code Section 3344 governs uses of another’s name, image, and likeness in:

  • products;
  • merchandise;
  • advertising; or
  • sales solicitations.

A 2025 amendment added an expedited injunctive relief mechanism, allowing courts to order the removal or recall of infringing content within two business days after service of certain court orders.

However, the legislature did not clarify whether Section 3344 applies to digital replicas of a living person’s likeness or voice. A bill that would have expressly made Section 3344 applicable to digital replicas of a person’s likeness or voice passed the legislature but was ultimately vetoed. By contrast, Tennessee’s 2024 ELVIS Act explicitly extended that state’s right of publicity statute to cover simulations of a person’s voice, whether living or dead.

An amendment to Section 3344 addressing digital replicas would have harmonised with California’s 2024 modernisation of Section 3344.1, the post-mortem right of publicity. That earlier law addressed AI-driven digital replicas of deceased personalities directly, treating highly realistic, computer-generated voice or likeness recreations as actionable, even when embedded in expressive audiovisual works or sound recordings, unless the rightsholder consents or the expressive work falls within specified exclusions.

Had Section 3344, which applies to living persons, been amended to cover digital replicas and expressive works, California would have created a more coherent right of publicity framework applying equally to the living and the dead. Although living individuals now have a faster path to stop ongoing commercial misuse through court-ordered removal, it remains unclear whether they can use this expedited mechanism against AI-generated digital replicas.

Beyond the digital replica question, Section 3344 still does not address unauthorised uses of a living person’s identity in broader expressive works, such as films, scripted entertainment, or music, even though Section 3344.1 now covers such uses for deceased individuals. The legislative history shows that this gap is intentional, not an oversight.

Section 3344 was purposefully crafted as a narrow remedy for commercial misappropriation, targeting unauthorised uses in advertising or merchandise rather than expressive works. When debating whether to amend the law to cover expressive works, legislators expressed concern that expanding the statute’s reach, particularly with expedited takedown orders, could invite unconstitutional prior restraints on First Amendment-protected expression. California’s response was to add faster injunctive tools only and preserve the existing balance between identity protection and freedom of expression.

AI will dictate future changes

Taken together, these California amendments improve the speed and enforceability of remedies for classic commercial misappropriation. However, they leave key questions about AI-generated digital replicas and expressive uses to future legislation and the courts. Businesses should expect continued legal uncertainty at the boundary between identity protection and First Amendment rights, even as enforcement against straightforward commercial misuse becomes easier.

Improvements in AI Video Generation Demand New Approaches to Defamation Cases

It would be remiss to summarise defamation-related developments in 2025 without mentioning the rise of AI video generators, particularly OpenAI’s Sora 2, which was released to media fanfare in late September 2025. With this tool, users have an unprecedented ability to generate photorealistic videos simply by using text prompts.

Sora’s potential impact on malicious operators’ ability to create defamatory content should require little explanation: one of the first Sora-generated videos to circulate widely on social media was a fake video feed showing OpenAI’s co-founder and CEO, Sam Altman, seemingly shoplifting graphics cards from a Target store. A false accusation of criminal conduct is, of course, a textbook example of a meritorious defamation claim.

While existing legal standards can theoretically address the misuse of AI-generated videos to defame, the acceleration of technology represented by Sora 2 and its inevitable progeny marks a sea change in the manner and degree to which malicious actors can deploy fake but photorealistic video footage to harm others’ reputations.

Before Sora 2, deepfake videos were essentially altered versions of existing videos. While the better ones could still deceive undiscerning audiences, more careful viewers could detect telltale signs of alteration: unnatural movements, blurry edges, mismatched lighting and shadows, unusual textures, and other details. These technological limitations constrained the potential of such content to cause real-world reputational harm, an idea highlighted by Megan Thee Stallion’s recent “victory” in a deepfake defamation case that resulted in a relatively meagre USD59,000 damages award.

By contrast, Sora 2 uses AI to generate videos from text prompts from the ground up. The results are far more photorealistic, pose a higher risk of passing for actual video recordings, and are therefore potentially more threatening to their subjects’ reputations. While these videos occasionally still bear indicators of their false provenance (for example, in the AI-generated video of Altman shoplifting, the sign above him says “Gratics Cards” instead of “Graphics Cards”), the technology is only improving, and it would be naïve to assume that the AI origins of a video will always be detectable by the average viewer.

OpenAI appears to recognise the legal minefield posed by Sora’s capabilities in the context of likenesses. It has voluntarily adopted a number of safeguards:

  • a visible watermark on Sora-created videos;
  • embedded metadata in Sora-generated videos; and
  • a robust consent-based opt-in system for the use of real-world likenesses on the platform.

These measures reflect a remarkable deviation from the “seek forgiveness, not permission” model for which the tech industry is often criticised, and they reveal the seriousness with which industry leaders are taking the risk that their tools will be misused.

As AI’s de facto industry leader, OpenAI has strong incentives to shape the legal standards that will ultimately govern the industry through self-regulation. But it would be naïve to assume that all participants in the AI arms race similarly heed the risks of enabling the use of likenesses in AI video generators. Little imagination is needed to recognise the risk that future AI companies may not bother with watermarks, metadata, or an opt-in approach to likenesses.

The law of defamation will need to evolve (or at least consider the need to evolve) in light of the widespread availability of AI video creation technology. While the law governing such videos is still nascent (understandably so, since the technology is currently only months old), it is easy to predict some of the issues that litigators, courts, and legislators will need to confront in the near term, as outlined below.

  • What process will courts use to resolve disputes over whether a video is “real” or AI-generated? Will juries be trusted to evaluate all disputes over the authenticity of videos? Or, as technology progresses, and the ability to determine what is fake or real becomes more a province of expert analysis, will courts need to begin aggressively “gatekeeping” before allowing litigators to present videos to juries?
  • To what extent should companies be liable for giving potential defamers the tools to defame effectively? Should responsibility be placed solely on the creators of defamatory content? Should safeguards (such as OpenAI’s opt-in consent system for likenesses) be legislatively mandated?
  • How will courts balance the abuse of AI videos against the First Amendment? Will creators of defamatory AI videos be able to hide behind the excuse that they intended the content as satire or parody? Will the actual malice requirement of New York Times v Sullivan need to be revisited, given the reputational damage that a well-executed AI video could inflict even in the absence of malice?

In 2024, during discovery in the case Huang v Tesla, the court rejected Tesla’s argument that it should not be required to admit or deny the authenticity of videos of Elon Musk’s public statements because the videos could have been altered. The court warned about the broad implications of accepting Tesla’s argument:

[W]hat Tesla is contending is deeply troubling to the Court. Their position is that … Mr Musk, and others in his position, can simply say whatever they like in the public domain, then hide behind the potential for their recorded statements being a deep fake to avoid taking ownership of what they did actually say and do. The Court is unwilling to set such a precedent by condoning Tesla’s approach here.

What the court in Huang rejected as a potentially dangerous precedent may now better reflect reality: it may soon, in fact, be possible for everyone “to avoid taking ownership of what they did actually say and do.” As technological advancements in AI video generation continue at a rapid pace, courts and litigants will need to come to terms with the fact that tools to generate wholly false but realistic-looking videos are now in the hands of the public at large.

Kinsella Holley Iser Kump Steinsapir LLP

11766 Wilshire Boulevard
Suite 750
Los Angeles, California 90025
USA

+1 (310) 566 9800

+1 (310) 566 9850

klahs@khiks.com
www.khiks.com