Advertising & Marketing 2025

Last Updated October 14, 2025

USA

Law and Practice

Author



Frankfurt Kurnit Klein & Selz was founded nearly 50 years ago as a boutique law firm serving the entertainment and arts communities in New York City and now provides the highest-quality legal services to clients in a wide range of industries and disciplines worldwide. Frankfurt Kurnit’s advertising practice – which is counsel to many of the USA’s leading brands, advertising agencies, and platforms – is universally recognised as one of the leading advertising and marketing practices in the USA.

In the USA, advertising law is governed by a variety of overlapping federal, state and local laws. There are laws and regulations that prohibit false advertising generally, as well as laws and regulations that address specific types of marketing practices.

The primary consumer protection law in the USA is Section 5 of the Federal Trade Commission Act (the “FTC Act”), which prohibits “unfair or deceptive acts or practices”. A “deceptive” practice is a material representation, omission or practice that is likely to mislead a consumer acting reasonably in the circumstances. An “unfair” practice is a practice that causes or is likely to cause substantial injury to consumers, which is not reasonably avoidable by consumers themselves and which is not outweighed by countervailing benefits to consumers or competition.

The primary federal law that gives a right of action to competitors to sue for false advertising is Section 43(a) of the Lanham Act, which generally prohibits false or misleading representations that are likely to cause confusion as to affiliation, connection, association or approval or that misrepresent the characteristics of the advertiser’s or another’s goods or services.

The Federal Trade Commission (FTC) is the primary federal regulator charged with enforcing federal laws governing advertising practices. There are a number of other federal agencies that are charged with enforcing advertising laws aimed at specific industries or types of advertising practices, such as the Federal Communications Commission (FCC), the US Food and Drug Administration, the Consumer Financial Protection Bureau, the US Department of Transportation, and the Department of the Treasury’s Alcohol and Tobacco Tax and Trade Bureau.

Each of the 50 US states (as well as some US territories) has an attorney general who is charged with enforcing state laws governing advertising practices. Although it varies by state, there are additional state agencies that have authority to enforce advertising laws as well. Some local municipalities also have the authority to enforce advertising laws – for example, via county district attorneys in California and the New York City Department of Consumer and Worker Protection.

The remedies that are available to regulators vary, depending on the laws that are being enforced, but can include injunctive relief, restitution and disgorgement, and other types of equitable relief (such as corrective advertising), as well as damages and penalties.

Although the First Amendment to the US Constitution provides broad protections for freedom of speech, advertising and other commercial speech receive less protection, and marketers in the USA may generally be held liable for deceptive advertising. Whether a marketer can be held liable for false advertising for specific advertising practices, however, will depend on the specific law that applies. In certain circumstances, other entities and individuals that participate in the creation and dissemination of false advertising (such as, for example, advertising agencies, media companies and company executives) can be held liable as well.

Drawing the line between commercial speech and non-commercial speech is not always straightforward, particularly when the speech comes from a commercial entity and serves multiple purposes. The US Supreme Court has said that commercial speech is, at its core, “speech proposing a commercial transaction”; however, it has also acknowledged that the precise bounds of the category of commercial speech are “subject to doubt, perhaps”. Although there are undoubtedly certain types of speech that are non-commercial, most types of communications coming from a company’s marketing department – even communications that are not traditional advertising, such as social media posts – are likely to be considered “advertising” that is subject to the specific rules governing advertising.

For most types of advertising, no government pre-approval is required. In some regulated industries (such as in connection with the labelling of alcoholic beverages), certain government pre-approvals are required.

Many television networks, outdoor and transit advertising companies, social media platforms and other media platforms have advertising standards and require advertising to be approved before they will allow the advertising to run.

The right of publicity – in other words, the right of a person to control the use of that person’s name, picture, voice or likeness for purposes of advertising or trade – is governed by state law and the rules vary by state. As a general matter, however, advertisers may not use the name, picture, likeness, voice or identity of an individual for any advertising or other commercial purpose without first obtaining the person’s written consent (subject to some limited exceptions). In many states, consent is also required (from the person’s estate) for a period after death as well.

The primary advertising self-regulatory authority in the USA is the National Advertising Division, which is administered by BBB National Programs. BBB National Programs administers a number of other advertising self-regulatory programmes as well, including the Children’s Advertising Review Unit. The procedures of each of these programmes are different; however, as a general matter, advertisers can challenge advertising that they believe violates the programme’s standards and then the self-regulatory body will review the matter and issue a decision. If the self-regulatory body finds that the advertising has violated the programme’s standards, it will issue a decision recommending that the advertiser modify or discontinue the advertising (subject to certain rights of appeal). Although compliance with these standards is voluntary, if an advertiser fails to comply, the self-regulatory body may refer the matter to a regulatory authority (eg, the FTC) for review.

Various trade associations – for example, the Distilled Spirits Council of the United States, the Beer Institute, and the Wine Institute – have their own advertising standards and dispute resolution programmes as well.

Under state law, consumers have a private right of action (either individually or on behalf of a class of consumers) to challenge advertising practices – although the specific standards vary by state. One of the key aspects of a false advertising claim is typically whether the advertising is likely to mislead a reasonable consumer acting reasonably under the circumstances. Consumers can seek monetary damages, injunctive relief, and other remedies.

Advertising is heavily regulated in the USA and there is a great deal of regulatory enforcement, self-regulatory activity, and other advertising-related litigation. Advertisers should ensure that they have proper substantiation for both their express and implied advertising claims or they are at risk of being challenged by regulators, competitors, consumers and others. Some areas of particular focus right now include:

  • advertising claims that impact consumers’ health and safety;
  • advertising that could lead consumers to suffer significant financial harm;
  • environmental claims;
  • junk fees and deceptive pricing;
  • emerging technology (such as AI);
  • automatic renewal practices; and
  • the use of endorsers and influencers.

The USA is a large and diverse country where there are widely differing, strongly held views about issues involving taste, politics and other cultural concerns. When advertising in the USA, advertisers should ensure that they work with local experts who are sensitive to these issues.

There is increased attention on avoiding harmful stereotypes in advertising and on making advertising itself more inclusive. Nonetheless, some advertisers have experienced significant backlash from some groups in connection with diversity-related advertising efforts.

Recently, the National Advertising Division and the Children’s Advertising Review Unit have amended their procedures relating to issues of stereotyping. This has led to some self-regulatory enforcement in this area as well.

The laws governing advertising and marketing have remained relatively consistent over time and there has been a remarkable continuity in enforcement priorities as well. That being said, when government leadership changes at either the federal or state level, regulatory enforcement priorities ‒ and the ways in which enforcement is conducted – do change.

The FTC is currently engaging in an aggressive enforcement programme (seeking bigger damages and tougher remedies) and has been using a wide range of methods (including engaging in rule-making, issuing notices of penalty offences, and issuing new guidance) to address practices that it is concerned about. One of the main issues that the FTC is particularly concerned about right now is how big tech, emerging technologies and the online ecosystem can harm consumers.

In general, whether an advertising claim is deceptive or misleading is determined from the perspective of the “reasonable consumer”. The FTC defines deception as a material misrepresentation or omission that is likely to mislead a consumer acting reasonably in the circumstances. Some state laws, however, define deception more broadly ‒ considering claims from the perspective of the ignorant, unthinking and credulous consumer.

Advertisers are generally responsible for ensuring that all express and implied claims communicated by their advertising, where material to consumers’ purchasing decisions, are truthful and substantiated. No substantiation is required for “puffery”. Puffery is an exaggerated or hyperbolic claim ‒ expressing an obvious statement of opinion ‒ that is not subject to proof and that consumers would not rely on when making a purchasing decision.

Advertisers are generally responsible for ensuring that they have proof for their advertising claims prior to the dissemination of those claims. As a general matter, advertisers must have a “reasonable basis” for their claims. Where advertisers claim to have a specific type of support for their claims (such as “tests prove” or “studies show”), they must have that support.

What constitutes a “reasonable basis” will depend on a variety of factors, including the type of claim, the product, the consequences of a false claim, the benefits of a truthful claim, the cost of developing substantiation for the claim, and the amount of substantiation that experts in the field believe is reasonable. In some cases, such as claims involving consumers’ health or safety, the FTC expects advertisers to have “competent and reliable scientific evidence” to support the claims.

When the performance of a product is shown in advertising, advertisers are generally responsible for ensuring that the performance shown is real (without any special effects or other modifications) and that the performance shown reflects the performance that consumers can generally expect to achieve when using the product.

The primary guidance on the use of endorsements and testimonials in advertising is set forth in the FTC’s “Guides for the Use of Endorsements and Testimonials in Advertising” (the “Endorsement Guides”). The FTC also recently promulgated a Trade Regulation Rule on the Use of Consumer Reviews and Testimonials (the “FTC Consumer Review Rule”).

Although the FTC and others have issued a great deal of guidance on this topic, when using endorsements in advertising, advertisers should keep in mind three key principles as a starting point. First, the endorsement should reflect the endorser’s honest opinions, findings, beliefs and experiences. Second, endorsers should not make advertising claims that the advertiser could not make itself. In other words, if an endorser makes a claim about the performance of a product, the advertiser must be able to substantiate that this is the generally expected performance of the product. Third, if there is a material connection between the endorser and the advertiser that is not reasonably expected by the audience, then that connection should be clearly and conspicuously disclosed.

The FTC also expects advertisers to monitor their endorsers to ensure that their endorsements comply with the law, and to take appropriate action when they do not.

In addition to general laws prohibiting false advertising, standards for making environmental claims are set forth in the FTC’s “Guides for the Use of Environmental Marketing Claims” (the “Green Guides”) as well as various state laws governing specific environmental marketing practices. The Green Guides provide detailed guidance on the making of many different environmental marketing claims, including claims such as “recyclable”, “biodegradable”, “compostable”, and made from “renewable materials”. The Green Guides also caution against making unqualified general environmental benefit claims (such as “earth-friendly”), given that doing so may communicate a variety of claims that cannot be substantiated.

The FTC is currently undertaking a review of the Green Guides. Revised guidance is forthcoming.

As a general matter, in order for a disclosure in advertising to be effective, it should be “clear and conspicuous”. This means that the disclosure should be easily seen, read and understood by consumers. More recently, the FTC has further articulated the “clear and conspicuous” standard by saying that disclosures should be “difficult to miss” and – when disclosures are made online – they should be “unavoidable”.

There are many other types of advertising claims that are subject to specific federal or state law requirements. Marketers are advised to consult counsel in the USA before launching advertising campaigns here.

One area of particular concern to the FTC is when marketers claim that their products are made in the USA. The FTC’s “Enforcement Policy Statement on US Origin Claims” says that, in order for a marketer to make an advertising claim that a product is made in the USA, the marketer must be able to substantiate that the product is “all or virtually all” made in the USA. This standard was also recently codified in the FTC’s Made in USA Labelling Rule.

Issues related to stereotyping and diversity in advertising are not generally regulated (except to the extent that other laws are violated, such as laws prohibiting discrimination in employment or housing). Television networks and other media platforms impose restrictions on the use of negative stereotypes, and negative stereotyping is generally prohibited by US self-regulatory standards as well. By way of example, the National Advertising Division standards address “national advertising that is misleading or inaccurate due to its portrayal or encouragement of negative harmful social stereotyping, prejudice, or discrimination”.

The primary guidance related to advertising to children is contained in the “Self-Regulatory Guidelines for Children’s Advertising” issued by the Children’s Advertising Review Unit (CARU), which is a division of BBB National Programs.

The CARU guidelines apply to national advertising (in any medium) that is primarily directed at children under the age of 13. The key underlying principle of the guidelines is that advertisers have special responsibilities to children. The guidelines address a variety of issues, including advertising claims, product demonstrations, disclaimers, the use of endorsers and influencers, the blurring of advertising and entertainment content, and unsafe and inappropriate advertising.

In 2022, the FTC issued a report, “Bringing Dark Patterns to Light”, which warns advertisers against engaging in online design practices that trick or manipulate consumers into making choices they would not otherwise have made or that would cause them harm. In the report, the FTC identified four key types of dark patterns:

  • design elements that induce false beliefs (eg, by making false claims or using deceptive formats);
  • design elements that hide or delay disclosure of material information (eg, by hiding key information in terms and conditions or engaging in drip pricing);
  • design elements that lead to unauthorised charges (eg, by charging a consumer after a free trial period without the consumer’s authorisation or by making it difficult for a consumer to cancel a subscription); and
  • design elements that obscure or subvert privacy choices (eg, by not allowing consumers to definitively reject data collection or use).

Some recent enforcement actions by both federal and state regulators have included allegations that marketers have engaged in dark patterns.

The general rule is that consumers have the right to know when they are being advertised to.

Several years ago, the FTC issued an “Enforcement Policy Statement on Deceptively Formatted Advertisements”, which provides detailed guidance about the use of branded content. The FTC and other regulators have also brought enforcement actions when marketers have misled consumers about the source of content or about whether the content they are viewing is advertising.

The FCC also has sponsor identification requirements for broadcast advertising and certain other media subject to FCC jurisdiction.

When engaging in native advertising, marketers are generally expected to clearly and conspicuously identify the content as advertising. The FTC’s “Enforcement Policy Statement on Deceptively Formatted Advertisements” provides detailed guidance about the use of native advertising.

Comparative advertising is generally permitted in the USA. When engaging in comparative advertising, advertisers should nonetheless take care to specifically identify the products being compared, so as to ensure that claims are truthful and not misleading, as well as to clearly and conspicuously disclose any material limitations on the comparisons. As with other advertising claims, advertisers are also generally responsible for ensuring that the claims are truthful for the entire time that they are used.

Advertisers are generally permitted to use the name of a competitor, a competitor’s trade mark, and a competitor’s packaging in truthful comparative advertising, as needed, subject to some limitations.

Under both federal and state law, advertisers can generally challenge false and misleading claims made by competitors, and can seek damages, injunctive relief, and other remedies. By way of example, under Section 43(a)(1)(B) of the Lanham Act, an advertiser can sue a competitor for false advertising where the competitor “misrepresents the nature, characteristics, qualities or geographic origin of his or her or another person’s goods, services or commercial activities”.

“Ambush” marketing generally refers to marketing and promotional activities by parties unaffiliated with a property or event that seek to take advantage of or misappropriate the goodwill and popularity generated by the property or event. The primary basis for challenging ambush marketing in the USA is based on Section 43(a)(1)(A) of the Lanham Act, which provides a right of action where an advertiser engages in marketing that “is likely to cause confusion, or to cause mistake, or to deceive as to the affiliation, connection or association of such person with another person, or as to the origin, sponsorship or approval of his or her goods, services or commercial activities by another person”. There are additional federal and state laws that may be implicated as well, including various state law theories such as breach of contract and/or unfair competition.

For the most part, the general rules that govern advertising in the USA also govern advertising in online and social media. Some laws have been enacted to govern specific online advertising practices as well (for example, email marketing, which is discussed in 6.1 Email Marketing). The FTC has also issued guidance on advertising online and on social media, including specific guidance on the use of disclosures online and on the use of influencers and consumer reviews online.

The Communications Decency Act (CDA) provides certain protections for third-party material that is placed online. The CDA provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Whether a CDA defence is available in the advertising context will often turn on whether the advertiser was in part responsible for the creation or development of the content that is posted. However, the CDA does not provide immunity from certain types of laws, including criminal laws and IP laws.

In addition, the Digital Millennium Copyright Act provides immunity from monetary damages for copyright infringement in certain situations, including for content that is posted at the direction of users ‒ provided that certain criteria are met.

The same rules that apply to advertising disclosures generally also apply to advertising online and in social media.

The FTC has also issued specific guidance on the use of disclosures online, “.com Disclosures: How to Make Effective Disclosures in Digital Advertising” – although this guidance has not been updated for more than a decade. The FTC has also issued specific guidance on the use of disclosures in specific contexts (eg, in connection with the use of influencers).

As mentioned in 2.7 Disclosures, although the general rule for disclosures is that they should be “clear and conspicuous”, the FTC has more recently indicated that – for online disclosures to be effective ‒ they should be “unavoidable”.

There are no general prohibitions under federal law on the use of social media platforms in the USA – although federal law may limit certain uses by federal employees. Some state laws have been enacted that limit some social media use as well; however, legal challenges to these laws are ongoing. Most social media platforms also do not permit use of their platforms by children under the age of 13.

The FTC’s Endorsement Guides provide detailed guidance on the use of influencers. As a general matter, advertisers should ensure that:

  • influencers’ statements reflect their honest opinions, findings, beliefs and experiences;
  • influencers’ claims about product performance reflect the generally expected performance of the product; and
  • influencers clearly and conspicuously disclose material connections they have to the brand that are not reasonably expected by the audience.

In appropriate circumstances, advertisers can be held liable for content posted by their influencers, and advertisers are expected to have reasonable training, monitoring and compliance programmes in place. Therefore, it is prudent for advertisers to have proper procedures in place to help ensure that influencer posts are legally compliant.

In the FTC’s Endorsement Guides, the FTC states that advertisers may be liable for deceptive endorsements by influencers. The FTC also advises advertisers to provide guidance for their endorsers on the need to ensure that their statements are not misleading and to take action sufficient to remedy non-compliance and prevent future non-compliance. As the FTC has explained: “While not a safe harbo[u]r, good faith and effective guidance, monitoring and remedial action should reduce the incidence of deceptive claims and reduce an advertiser’s odds of facing a Commission enforcement action.”

The FTC’s Endorsement Guides provide guidance on the solicitation, hosting and use of consumer reviews. In general, advertisers should solicit reviews in a manner that is intended to obtain consumers’ unbiased opinions. When hosting reviews that have been incentivised, advertisers should ensure that such incentives are properly disclosed. When hosting reviews, advertisers should not organise (or curate) the reviews in a manner that misrepresents consumers’ views.

One particular area of focus by the FTC has been on the proper hosting of consumer reviews on an advertiser’s website. The Endorsement Guides provide: “In procuring, suppressing, boosting, organi[s]ing, publishing, upvoting, downvoting, reporting or editing consumer reviews of their products, advertisers should not take actions that have the effect of distorting or otherwise misrepresenting what consumers think of their products, regardless of whether the reviews are considered endorsements under the Guide.”

When advertisers select individual reviews for use in advertising materials (as opposed to providing a section of a website for the hosting of reviews generally), advertisers should generally treat the review as they would treat any other endorsement being used in advertising.

As mentioned in 2.5 Endorsements and Testimonials, in August 2024, the FTC announced the issuance of the FTC Consumer Review Rule, which formally prohibits certain practices identified as unfair or deceptive in the Endorsement Guides. Key provisions of the rule include prohibitions against:

  • reviews and testimonials that falsely claim to be from a real person (including those generated by AI) or those from individuals who do not have actual experience with the business or its products/services;
  • businesses creating, selling or distributing deceptive reviews or testimonials, as well as against the purchase of such reviews, obtaining them from company insiders, or publishing these testimonials if the business knew or should have known they were fake or false; and
  • businesses giving compensation or incentives in exchange (explicitly or implicitly) for reviews that express a specific sentiment, whether positive or negative.

The Controlling the Assault of Non-Solicited Pornography And Marketing Act of 2003 (the “CAN-SPAM Act”), enforced by the FTC, establishes requirements for commercial email messages, provides email recipients with opt-out rights and spells out penalties for violations. Under the CAN-SPAM Act, key email marketing requirements include the following.

  • The message may not include false or misleading header information.
  • The message may not include a deceptive subject line.
  • The message must be clearly identifiable as marketing, whether by labelling it as such or by making this clear from the content itself.
  • The message must include the sender’s physical postal address.
  • The message must clearly and conspicuously notify recipients how they can opt out of receiving future commercial emails from the sender.
  • Any opt-out mechanism offered must be able to process opt-out requests for at least 30 days after the message is sent and all opt-out requests must be processed within ten business days of receipt. This requires email marketers (and their vendors) to scrub their recipient lists against their do-not-email lists before each deployment (see the sketch after this list).
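
By way of illustration only, the scrubbing step might look something like the following minimal Python sketch. The function and variable names are hypothetical, and a production system would also need to handle sub-addressing, record-keeping and audit requirements:

```python
def normalise(address: str) -> str:
    """Normalise an email address for comparison (trim whitespace, lowercase)."""
    return address.strip().lower()

def scrub_recipients(recipients: list[str], do_not_email: list[str]) -> list[str]:
    """Return only the recipients who are not on the do-not-email list."""
    suppressed = {normalise(a) for a in do_not_email}
    return [r for r in recipients if normalise(r) not in suppressed]

# Example: run before each deployment against the current suppression list.
recipients = ["Alice@example.com", "bob@example.com", "carol@example.com"]
do_not_email = ["alice@example.com"]  # opted out since the last deployment
print(scrub_recipients(recipients, do_not_email))
# ['bob@example.com', 'carol@example.com']
```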

Each separate email in violation of the law is subject to penalties of up to USD53,088. Emails that make misleading claims about products or services may also be subject to laws outlawing deceptive advertising, including Section 5 of the FTC Act.

The Telephone Consumer Protection Act (TCPA), enforced by the FCC, and the Telemarketing Sales Rule (TSR), enforced by the FTC, regulate telemarketing calls and text messages, as do a patchwork of state telemarketing and do-not-call laws.

Broadly speaking, the TCPA restricts callers from making robo-calls and robo-texts unless they have received the appropriate prior express consent of the recipient, subject to several exemptions. It also requires telemarketers to provide an automated, interactive “opt-out” mechanism. The TCPA also contains provisions requiring identification by the caller and prohibits calls to emergency lines or lines serving hospitals, among other provisions. In 2024, the FCC adopted new rules intended to strengthen consumers’ ability to revoke consent to receive robo-calls and robo-texts, in addition to strengthening callers’ and texters’ obligations to honour such requests in a timely manner. The FCC previously revised its TCPA rules to establish the national Do-Not-Call Registry, which generally prohibits telemarketing calls to consumers whose numbers are listed on the registry.

The TSR requires telemarketers to make specific disclosures of material information, prohibits misrepresentations, sets limits on the times telemarketers may call consumers, prohibits calls to a consumer who has asked not to be called again, and sets payment restrictions for the sale of certain goods and services.

The TCPA allows individuals to sue for damages ranging from USD500 to USD1,500 per telephone call or text message sent in violation of the statute. Violations of the TSR are subject to civil penalties of up to USD53,088 per violation. In addition, violators may be subject to nationwide injunctions that prohibit certain conduct, and may be required to pay redress to injured consumers.

Text messages are typically considered to be synonymous with “calls” under both the TCPA and the TSR. Marketers must therefore have the proper level of consent to send a text to a recipient’s mobile phone if they are using regulated technology or contacting a number on the national Do-Not-Call Registry and must comply with the same opt-out requirements.

Several state privacy laws address, in part, consumers’ concerns with marketers’ collection and use of personal data and impose strict obligations on targeted advertising practices. These state laws require marketers to inform consumers about their personal data collection practices, to honour requests to delete and correct consumers’ personal data, and to allow consumers to opt out of advertising that relies on cookies, pixels and related tracking technologies. For instance, several state statutes provide a right to opt out of the sale or processing of one’s personal data for the purposes of targeted advertising or profiling. By the beginning of 2026, 19 states will have comprehensive privacy laws in effect. Although there are nuances to the specific requirements of these laws, they all provide consumers with some version of the right to opt out of the sale or sharing of their personal information for the purpose of targeted advertising.

One state privacy law enacted in Maryland, effective 1 October 2025, imposes novel restrictions concerning the collection and processing of personal data on businesses that may impact advertising. Businesses must limit collection of personal data to what is “reasonably necessary and proportionate to provide or maintain a specific product or service requested by the consumer”. Processing of personal data must be limited to what is necessary to or compatible with the purposes disclosed to the individual, meaning any unnecessary or incompatible secondary uses of personal data require separate, affirmative consent. Businesses can only collect, process or share sensitive data when it is strictly necessary to provide or maintain a requested product or service; selling sensitive data is prohibited. These provisions may drastically reduce marketers’ ability to – without authorisation ‒ use data for purposes related to advertising.

Most state privacy laws require written agreements with service providers or third parties that process personal information for the purpose of targeted advertising. These agreements generally must restrict unauthorised data use and mandate compliance with applicable privacy obligations. Under California law, for example, businesses must ensure that service providers do not use personal information for cross-context behavioural advertising. The Interactive Advertising Bureau’s Multi-State Privacy Agreement (MSPA) offers a standardised framework to help participants in the advertising technology space meet contractual and disclosure requirements across jurisdictions, particularly when honouring opt-outs for targeted advertising.

The FTC continues to oversee online behavioural advertising, using its authority to regulate unfair and deceptive practices. Expanding on its 2009 staff report, the FTC has repeatedly enforced against companies for alleged unlawful targeted advertising practices. It now requires companies engaged in targeted advertising to provide consumers with clear and conspicuous notice of data collection practices, alongside easy-to-use mechanisms for opting out of tracking. Businesses must implement reasonable security safeguards, apply data minimisation principles, and limit retention to what is necessary for legitimate business or legal purposes. The handling of sensitive personal information ‒ in particular, health data, precise location data, and children’s data ‒ requires affirmative express consent.

The FTC has also targeted the use of manipulative “dark patterns” that obscure consumer choice (see 3.3 Dark Patterns) and has pursued enforcement actions against companies that misrepresent their data practices or fail to uphold privacy commitments. Together, these measures demonstrate the FTC’s more active regulatory posture and efforts to embed stronger consumer protections into the evolving ecosystem of targeted advertising.

Enacted in 1998, the Children’s Online Privacy Protection Act (COPPA) empowers the FTC to issue and enforce regulations concerning children’s online privacy in the USA. The primary goal of COPPA is to place parents in control of what information is collected from their young children online and it applies to both:

  • operators of commercial websites and online services (including mobile apps) directed at children under 13 that collect, use or disclose personal information from children or on whose behalf such information is collected or maintained; and
  • operators of general audience websites or online services with actual knowledge that they are collecting, using or disclosing personal information from children under 13.

According to the FTC, covered entities’ responsibilities include the following:

  • posting a clear and comprehensive privacy policy describing their practices for personal information collected online from children;
  • providing direct notice to parents and obtaining verifiable parental consent, with limited exceptions, before collecting personal information online from children;
  • giving parents the option of consenting to the operator’s collection and internal use of a child’s information, but prohibiting the operator from disclosing that information to third parties (unless disclosure is integral to the site or service – in which case, this must be made clear to parents);
  • providing parents access to their child’s personal information to review and/or have deleted;
  • maintaining the confidentiality, security and integrity of information they collect from children, including by taking reasonable steps to release such information only to parties capable of maintaining its confidentiality and security;
  • retaining personal information collected online from a child for only as long as is necessary to fulfil the purpose for which it was collected; and
  • not conditioning a child’s participation in an online activity on the child providing more information than is reasonably necessary to participate in that activity.

A court can order civil penalties of up to USD53,088 per violation of COPPA. The amount may turn on several factors, including the egregiousness of the violation, any previous violations, the number of children involved, the amount and type of personal information collected, and the size of the company.

In January 2025, the FTC finalised updates to COPPA regulations applicable to advertising. Changes include:

  • requiring separate parental consent for the disclosure of children’s personal information to third parties;
  • narrowing exceptions to the parental consent requirement for internal operations, including conversion, measurement, and detecting fraud; and
  • permitting marketing materials as evidence of whether a site is directed at children.

In September 2022, California enacted the Age-Appropriate Design Code Act (AADC), which was scheduled to take effect in July 2024. The California AADC is a landmark privacy bill modelled on the UK’s Age Appropriate Design Code that imposes certain requirements in relation to children’s data privacy. The California AADC applies broadly to businesses that provide online products and services that are “likely to be accessed” by a child. The California AADC was challenged by internet trade association NetChoice on constitutional grounds. In August 2024, the Ninth Circuit struck down the California AADC’s Data Protection Impact Assessment requirement as likely violating the First Amendment and remanded other provisions for further review. In March 2025, a district court broadened the injunction to block enforcement of the entire law, and the California Attorney General appealed the decision the following month. As of September 2025, the AADC remains unenforceable while litigation continues, leaving its ultimate fate uncertain.

Other states, including Maryland, Vermont and Nebraska, have passed their own versions of the AADC ‒ often with narrowed Data Protection Impact Assessment obligations, modified express age estimation mandates, and restrictions on processing personal data not reasonably necessary to provide an online product with which the child is “actively and knowingly engaged”.

Businesses are also prohibited from collecting, selling, sharing or retaining a child’s personal information where such data practices are not necessary for the online service or product. The prohibition on selling and sharing personal information may restrict the use of children’s personal information for the purpose of targeted advertising. Some states, including California, have expanded the definition of “child” in these contexts to include teens up to the age of 17.

Washington State enacted the Washington “My Health, My Data” Act, which took effect in March 2024. The “My Health, My Data” Act imposes significant restrictions on the use of “consumer health data” – defined broadly to include information that identifies a consumer’s past, present or future physical or mental health status – by all entities (including non-profits) that conduct business in Washington or target consumers in the state. Among other requirements, the “My Health, My Data” Act requires that regulated entities maintain consumer health data privacy policies, obtain consent from consumers before collecting consumer health data, and obtain written authorisation from consumers before selling or offering to sell consumer health data (including in the context of targeted advertising). Similar laws have been enacted in Nevada and the District of Columbia. Given the difficulties of scaling the authorisation requirements to consumers of online businesses and the presence of a private right of action for violations of the law, these laws are likely to – in effect – completely or near-completely end the use of consumer health data for the purpose of targeted advertising.

The FTC has further limited the use of consumer health data through regulation. In July 2024, amendments to the Health Breach Notification Rule took effect that expanded the definition of “breach of security” to now include any “unauthorised disclosure” of unsecured personal health records by health apps and similar technologies. These regulations add to the strict limitations on the use of health data in advertising and marketing.

Two new federal rules may affect companies that transfer personal data within the advertising ecosystem, as follows.

  • The Department of Justice’s Bulk Sensitive Data Rule restricts or prohibits transfers of bulk sensitive US data (eg, device identifiers, precise geolocation) to covered persons, even if the data is anonymised or de-identified. It applies to both direct and indirect disclosures, including through data brokerage and ad tech transactions, and may require strict security and contractual safeguards. Covered persons include any entity that is 50% or more owned or controlled, directly or indirectly, by a person or government of a country of concern (eg, China, Russia, Iran).
  • The Protecting Americans’ Data from Foreign Adversaries Act 2024 makes it unlawful for a “data broker” – an entity that sells or licenses personally identifiable sensitive data of US individuals – to transfer such data to a foreign adversary country or any entity controlled by a foreign adversary.

Each of these rules implicates companies that operate internationally or utilise tracking technologies provided by vendors or third parties in specific countries of concern. As a result, companies have begun to scrutinise their data flows more carefully.

At the state level, laws in California, Oregon, Texas and Vermont targeting data brokers have imposed new registration and disclosure obligations on companies that provide personal data for the purpose of targeted advertising. These states have focused their enforcement efforts on these businesses, resulting in investigative sweeps and several settlements.

A patchwork of federal and state laws governs sweepstakes and contests in the USA. What follows are some key general requirements.

  • As an initial matter, sweepstakes and contests must comply with general advertising principles and state unfair and deceptive practices laws. Accordingly, all promotional offers must be conducted in a non-deceptive, non-misleading manner, and the drawing of winners must be fair and unbiased.
  • As lotteries (except for those sponsored by state governments) are illegal in the USA, promotions must not contain all three of the following elements: prize, chance, and consideration.
  • In the USA, it is generally impermissible to require participants to make a purchase in order to compete for prizes in a game in which chance (not skill) is the predominant factor. Therefore, for such chance-based promotions, a non-purchase alternative method of entry is required.
  • Most states require that each sweepstakes or contest be governed by a comprehensive set of official rules that generally serve as the contract between the sponsor and the entrant. Although the details may vary depending on the nature of the promotion, most states generally require such rules to include: eligibility requirements, clear entry instructions, the start and end dates (and times, if applicable) of the entry period, a complete description of the prizes and their approximate retail value, how and when the winners will be determined, an odds statement, and the corporate name and physical address of the sponsor.
  • Most social media platforms have their own set of terms and conditions that govern sweepstakes and contests offered through the platform.

Games of skill are generally those in which the outcome is determined by a participant’s ability or aptitude rather than by chance. If chance dominates the promotion, it is not one of skill, even if some skill is required to participate. Most states follow the “dominant element” test to determine whether a promotion is skill-based, under which:

  • participants must have a distinct opportunity to exercise skill and must have sufficient data upon which to calculate an informed judgement to the extent required by the promotion;
  • the general class of participants must possess the skill;
  • the participants’ skills or efforts must sufficiently govern the result; and
  • the standard of skill must be known to the participants and this standard must govern the result.

A few states require that certain sweepstakes be registered before they can be implemented in the state, as follows.

  • If the aggregate value of all prizes to be awarded exceeds USD5,000, New York and Florida both require that the sponsor register the sweepstakes. The sponsor must also file a surety bond with each state, equalling the aggregate value of all prizes in the sweepstakes.
  • Rhode Island also requires registration of sweepstakes offered at in-state retail establishments and sets a lower registration threshold. Sweepstakes must be registered with the state if the aggregate value of all prizes to be awarded exceeds USD500.

The FTC’s “Guides Against Deceptive Pricing” address various kinds of pricing representations, including representations by marketers that their current price is a discount from their former price (a “sale” or “discount”), comparisons to others’ prices and to manufacturers’ suggested retail prices, and representations about special prices based on the purchase of other products (eg, gifts with purchase, and buy-one-get-one offers).

The FTC also provides guidance on “free” offers in its “Guide Concerning Use of the Word ‘Free’ and Similar Representations”, which states that all such offers of “free” merchandise or services “must be made with extreme care so as to avoid any possibility that consumers will be misled or deceived”. The guide provides rules about the frequency of any such offers and the circumstances in which they can and cannot be made, as well as guidance concerning required disclosures, introductory offers, and negotiated sales.

Many states also have specific laws and regulations on pricing, free claims, and other promotional practices.

Moreover, a new wave of “all-in” price laws has recently emerged, mandating transparent pricing practices to ensure consumers are fully aware of the total cost of goods and services upfront. The FTC’s Rule on Unfair and Deceptive Fees took effect in May 2025 and applies to any business that offers, displays or advertises live-event tickets or short-term lodging. At the heart of the rule is the requirement to disclose the total price upfront – meaning the advertised price must include all mandatory fees and charges the business knows about and can calculate when the offer is made. Taxes, government charges, shipping fees, and charges for optional services do not need to be included in the initial total price, but they must be disclosed before the customer is asked to pay, and such disclosures must clearly explain the nature, purpose and amount of the excluded charges. Additionally, several states (including California, Colorado, Connecticut, Maine, Massachusetts, Minnesota, Oregon, Rhode Island and Virginia) have recently passed laws generally requiring all mandatory fees to be included in an advertised price.
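
For illustration only, the arithmetic behind an “all-in” advertised price can be sketched as follows in Python. The item names and amounts are hypothetical, and this is a simplified model rather than guidance on how any particular rule applies to a given offer:

```python
# Hypothetical short-term lodging offer: mandatory fees go into the advertised
# price; taxes and optional charges are excluded but disclosed before payment.
base_price = 200.00          # nightly room rate
mandatory_fees = {
    "resort fee": 35.00,     # mandatory, known and calculable at offer time
    "cleaning fee": 25.00,   # mandatory, known and calculable at offer time
}
excluded_charges = {
    "taxes": 31.20,          # government charges may be excluded upfront
    "late checkout": 40.00,  # optional service, excluded upfront
}

# The advertised total price must include the base price plus all mandatory
# fees the business knows about and can calculate when the offer is made.
advertised_price = base_price + sum(mandatory_fees.values())
print(f"Advertised price: USD{advertised_price:.2f}")  # USD260.00

# Excluded charges still must be clearly disclosed before checkout.
for name, amount in excluded_charges.items():
    print(f"Disclosed before payment: {name} – USD{amount:.2f}")
```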

Under federal law, the Restore Online Shoppers’ Confidence Act (ROSCA) governs automatic renewal programmes. ROSCA sets forth certain baseline requirements, including that marketers obtain unambiguous consent for the “negative option” feature of their sales.

The FTC’s updated rule addressing recurring subscription programmes (ie, the Negative Option or Click-to-Cancel Rule) was scheduled to take full effect in July 2025; however, the US Court of Appeals for the Eighth Circuit vacated the updated rule, finding that the FTC’s rule-making process contained “fatal” procedural deficiencies (without directly addressing the rule’s substance). The FTC’s next steps remain to be seen – for example, whether it appeals this decision, renews rule-making, or issues an updated enforcement policy. Nonetheless, in the meantime, the FTC can – and does – still enforce under ROSCA and the FTC Act.

At the state level, there is an ever-increasing list of state statutes regulating automatic renewal and continuous service programmes. Although these state laws do not dramatically change the regulatory landscape, many introduce stringent new requirements.

Auto-renew programmes have been the subject of recent regulatory enforcement at the federal, state and local level, in addition to self-regulatory actions at the National Advertising Division and class actions.

The FTC Act’s prohibition on unfair and deceptive conduct applies equally to the use of AI in advertising. One of the FTC’s key concerns is marketers’ use of AI tools in ways that steer consumers unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing and employment. The FTC has warned that advertisers might be tempted to employ AI tools to sell products and services and has reminded advertisers that misleading consumers via doppelgängers (such as fake dating profiles, phony followers, deepfakes or chatbots) could result – and, in fact, have resulted – in FTC enforcement actions. The FTC’s enforcement actions and guidance emphasise that the use of AI tools should be transparent, explainable, fair and empirically sound, while fostering accountability.

In its Endorsement Guides, the FTC revised the definition of an “endorser” to include what “appear[s] to be an individual, group, or institution”, so as to include fabricated endorsers (including those created using AI).

As discussed in 5.5.3 Consumer Reviews, the FTC Consumer Review Rule specifically prohibits reviews and testimonials that falsely claim to be from a real person, including those generated by AI. The FTC has been clear that existing consumer protection principles apply squarely to AI tools and claims. Recent enforcement actions illustrate this trend. In Workado, LLC (formerly Content at Scale AI), the FTC ordered the company to stop making unsupported accuracy or efficacy claims about its AI “content detection” tool – claims that asserted broad applicability despite the tool having been trained almost solely on academic content. The consent order requires Workado to retain evidence for any performance claims, notify affected customers, and submit annual compliance reports over the next four years. Meanwhile, in its case against Air AI Technologies and related entities, the FTC alleged misleading business-opportunity claims – namely, promises of high earnings (up to USD250,000), “guaranteed” refunds (which were often denied), and other representations about what their AI product could deliver.

The use of AI tools raises significant IP concerns as well. Not only are there questions about whether there can be copyright ownership of the material that AI tools create (the US Copyright Office holds the position that there is no copyright protection for works created by non-humans, including AI), the tools themselves may infringe – or create output that infringes – third-party rights. There are currently dozens of lawsuits claiming that the way AI companies gather and utilise data and content from other sources to train their models violates copyright laws.

There are ongoing efforts at both the federal and state levels to establish a legal framework protecting individuals’ rights over the use of their voice and likeness, especially against misuse by AI ‒ examples of which are as follows.

  • Laws such as Tennessee’s Ensuring Likeness Voice and Image Security (ELVIS) Act have been introduced to specifically address the use of AI in creating unauthorised replicas, reflecting a growing trend to protect publicity rights from the implications of advancing technology.
  • California enacted AB 1836 and AB 2602 to curb unauthorised digital replicas. AB 1836 bars AI “digital replicas” of deceased performers without estate consent (effective from 1 January 2026), whereas AB 2602 renders unenforceable vaguely drafted contract clauses that allow a digital replica to replace a performer where the performer lacked professional representation (effective from 1 January 2025).
  • New York’s Digital Replica Contracts Act, which took effect on 1 January 2025, voids contract terms permitting a digital replica to substitute for a performer’s work unless the intended uses are reasonably specific and the performer has legal or union representation.
  • The New York Fashion Workers Act requires separate, explicit written consent from models for the creation or use of their digital replicas, distinct from representation agreements – with state Department of Labor guidance defining “digital replica” and how consent must be obtained.

The FTC has issued some guidance specific to the use of AI-related claims in advertising, including the following.

  • Do not over-promise what an algorithm or AI-based tool can deliver, including by claiming that it can do something beyond the current capability of any AI or automated technology.
  • Do not make baseless claims that a product is AI-enabled.
  • Consider the reasonably foreseeable risks arising from the use of AI tools (including ways in which tools can be misused or cause other harm) and take all reasonable precautions before the AI product hits the market.

At the federal level, the FTC has recently sharpened its focus on chatbots, including launching an inquiry in September 2025 into AI chatbots marketed as “companions”, focusing on whether such tools may mislead consumers, exploit vulnerable populations, or collect and use sensitive data in unfair or deceptive ways. This action reflects the agency’s broader scrutiny of AI-driven products, particularly where emotional manipulation, privacy risks or unsupported performance claims could harm consumers.

At the state level, legislatures are beginning to address chatbot transparency more directly. California’s Bot Disclosure Law (effective since 2019) makes it unlawful to use a bot to communicate with a person online, with the intent to mislead them about its artificial identity, in order to incentivise a purchase – unless the bot is clearly disclosed. Other states, including New York, have introduced pending bills that would impose similar disclosure obligations in both commercial and political contexts – signalling a broader trend towards regulating when and how consumers must be told they are interacting with a chatbot rather than a human.

General advertising principles also apply to the marketing and sale of cryptocurrency and non-fungible tokens (NFTs). Marketers must be careful to avoid misleading claims about NFTs, as the value of such digital assets is subject to extreme volatility and may be adversely impacted by ‒ among other things ‒ a decline in public interest, a change in law, regulation or policy, and technical issues.

The FTC is also particularly concerned that, when consumers buy these digital products, they may not understand what they are buying and it is not always clear what they actually own or control. Therefore, FTC guidance suggests that – when offering digital products – companies should ensure that customers understand the material terms and conditions, including whether they are purchasing an item or simply obtaining a licence to use it.

Section 17(b) of the Securities Act of 1933 – the so-called anti-touting provision – also makes it unlawful for any person to publish, give publicity to or circulate any advertisement, among other communications, describing a security for a consideration received (or to be received) unless the advertisement fully discloses the receipt and amount of consideration. If a digital asset is considered a security, those rules may apply as well.

The SEC continues to follow through on repeated warnings that it will aggressively enforce Section 17(b) of the Securities Act against influencers, including mainstream celebrities, who fail to disclose the nature, scope and amount of compensation received in exchange for their sponsored posts promoting these products.

The recent Brown v Dolce & Gabbana USA Inc case underscores the reputational and legal risks that brands face when experimenting with NFTs. There, investors alleged that Dolce & Gabbana’s DGFamily NFTs failed to provide the high-value perks marketed, ranging from digital outfits and metaverse experiences to exclusive physical goods and events – thus exposing the brand to claims of misrepresentation. Even though the US arm of the company was dismissed from the suit for lack of direct involvement, the litigation highlights how fragmented corporate structures and unclear accountability can still draw brands into costly disputes. For luxury and consumer-facing companies, NFT ventures must be carefully structured (with clear disclosures, achievable benefits and contingency planning) to avoid unmet consumer expectations, long-lasting reputational harm, and allegations of fraud.

General advertising principles should apply to advertising within the metaverse. Some current key issues requiring attention include:

  • the blurring of advertising and other content;
  • the use of virtual influencers;
  • how to properly disclose material connections and other qualifying information;
  • privacy and data collection; and
  • the impact of the metaverse on children.

Although the FTC is primarily responsible for the enforcement of consumer protection laws in the USA, there are a number of other federal agencies that enforce consumer protection laws directed at specific industries, including food and drugs, alcohol and tobacco, banking and securities, and transportation. State laws may impact the marketing of regulated products (eg, cannabis) as well.

Although the FTC has expressed the general view that consumers have the right to know when they are being advertised to, the FTC has also indicated that product placement (at least in adult-directed advertising) often does not require disclosure. The FTC has explained that “merely showing products or brands in third-party entertainment content, as distinguished from sponsored content or disguised commercials”, does not require a disclosure that the advertiser paid for the placement. Disclosures may nonetheless be needed to prevent consumer confusion when objective product claims or endorsements are being made.

The FCC’s sponsorship identification rules, however, require disclosure of product placement in broadcasting and certain other media that is subject to FCC jurisdiction.

Advertising is highly regulated in the USA and this chapter only touches on some of the key areas that are regulated. Marketers are advised to consult with local counsel before engaging in any advertising in the USA.

Frankfurt Kurnit Klein & Selz

28 Liberty Street
New York
NY 10005
USA

+1 212 826 5525

+1 347 438 2104

jgreenbaum@fkks.com www.fkks.com

Trends and Developments

The Use of AI in the Advertising Space Comes Under Legal and Regulatory Scrutiny in the USA

In the USA, advertising is highly regulated at the federal, state and local level. The Federal Trade Commission (FTC) is the country’s primary advertising regulator, setting enforcement priorities in the industry by bringing actions and issuing guidance. State attorneys general are also playing an increasing role in advertising-related enforcement. In addition, the National Advertising Division (NAD) ‒ the primary advertising self-regulatory body in the USA ‒ plays a key role in reviewing advertising claims, resolving disputes, initiating challenges, and issuing industry guidance. The USA is a litigious environment with private plaintiffs also frequently bringing claims against advertisers. For advertisers, this means navigating a regulatory landscape that is highly active and dynamically evolving with new advertising practices.

Recently, AI has moved from hype to a significant driver of change in the advertising industry, helping advertisers to create content faster, optimise campaigns with greater precision, and open new ways of engaging audiences. Given AI’s potential to revolutionise the industry, it is no surprise that it has been an increasing focus of regulators in 2025. Although current FTC leadership has signalled less emphasis on broad new rule-making, agencies and law-makers have continued to be especially concerned with AI-related issues, including:

  • claims that overstate how much a product uses AI or overstate the AI’s capabilities;
  • false or misleading claims generated using AI;
  • transparency and disclosure;
  • virtual influencers;
  • reviews and testimonials;
  • chatbots;
  • bias; and
  • digital replicas.

AI also poses novel questions under IP laws and raises concerns with the Hollywood unions. Both add to the legal risks advertisers must consider when leveraging AI tools.

As AI changes how advertisers think about liability, negotiation and creativity in the advertising space, it is important for advertisers to bear in mind that – at least for now – long-standing advertising rules apply regardless of the technology or platform. Given that AI has been among 2025’s hottest topics, this piece will address some key recent AI-related legal developments in the advertising space.

AI-washing

Advertisers have been quick to capitalise on the hype by promoting AI-related services and schemes. In response, the FTC and the NAD have emphasised that advertisers should not deceive consumers about what AI can do or how it works. Notably, the FTC has cracked down on deceptive practices that involve exaggerating, overstating or misrepresenting how much a product or service uses AI or the results these technologies can achieve (a practice referred to as “AI-washing”).

In September 2024, the FTC announced Operation AI Comply, a law enforcement sweep targeting companies it alleged were using AI claims to “supercharge deceptive or unfair conduct that harms consumers”. Operation AI Comply enforcement efforts included a claim brought against DoNotPay. Marketed as “the world’s first robot lawyer”, the service promised to help consumers file lawsuits or generate legal documents. The FTC, however, alleged that the service provided inaccurate legal advice and lacked the expertise of a human lawyer. The FTC’s final order requires DoNotPay to pay USD193,000 in monetary relief and prohibits the company from advertising that its service performs like a real lawyer unless it has sufficient evidence to back up that claim.

Examples of other FTC enforcement actions include a claim brought against a company for overstating the accuracy and efficacy of its AI content detection products and another claim against a business that used AI to pitch deceptive “get rich quick” opportunities.

This string of enforcement actions signals the FTC’s heightened scrutiny of AI-related marketing claims and, in particular, those that deceive consumers about performance, accuracy or benefits. A key takeaway for advertisers is that AI-washing is increasingly a regulatory and reputational risk. If an ad suggests a product is faster, smarter, or more effective because of AI, the advertiser must be prepared with proper substantiation. Just like any other performance claims, AI claims must be truthful and substantiated, and any necessary qualifications should be clearly and conspicuously disclosed.

False or misleading claims

Under US advertising law, it is a foundational rule that advertisers must have a reasonable basis for their claims and that claims must not be false or misleading in context. This principle remains applicable when claims are generated using AI.

When using AI to generate advertising content or comparative claims, advertisers should bear in mind that AI outputs are only as good as the data they are trained on, and the data sets from which AI content is generated may be unrepresentative or inaccurate. It is therefore important for advertisers to review any AI-generated content to ensure that it does not contain false or misleading claims or misrepresent a product’s performance. AI is not a substitute for substantiation ‒ advertisers remain responsible for supporting their claims with reliable evidence and for disclosing limitations where necessary to prevent consumer deception.

Transparency

In the USA, there is no general requirement to disclose that content was created using AI. Instead, the duty to disclose arises where non-disclosure would make an ad misleading or deceptive under the FTC Act or related state consumer protection statutes. Although different approaches can be seen around the world, the US framework is, by comparison, more situational.

The FTC has long made clear in its Guides Concerning the Use of Endorsements and Testimonials in Advertising (the “Endorsement Guides”) and native advertising guidance ‒ and more recently in its AI-focused posts ‒ that consumers should not be misled about the nature, source or sponsorship of advertising content. Accordingly, if the use of AI is material to consumer perception, disclosure is required. By way of example, if AI-generated or simulated content is used to depict product performance and consumers could reasonably interpret those depictions as real, the advertiser must clearly disclose that the content is simulated.

Influencers

As influencer marketing continues to be an important part of advertisers’ marketing efforts, the FTC is making it a priority to ensure that marketers’ use of influencers is consistent with the basic truth-in-advertising principle that endorsements must be honest and not misleading.

The FTC’s current thinking is set forth in its updated Endorsement Guides. Although the Endorsement Guides provide detailed guidance to marketers on the use of endorsements in advertising, they are grounded in three basic principles. First, endorsements should reflect the endorser’s honest opinions, findings, beliefs and experiences. Second, endorsers should not make claims that an advertiser cannot make itself. In other words, when an endorser makes a claim about product performance, the advertiser should ensure that the claim reflects the generally expected performance of the product. And third, if there is a material connection between an endorser and the advertiser that is not reasonably expected by the audience, then that connection should be clearly and conspicuously disclosed.

In the updated Endorsement Guides, the FTC clarified that these rules apply not only to human endorsers but also to avatars, virtual influencers, and AI-generated personas. The FTC emphasised that endorsements from synthetic characters are subject to the same standards of truthfulness and disclosure as those from real people. For advertisers, this means that if a virtual influencer is used to promote a product, any material connection (such as brand control or payment) must be clearly and conspicuously disclosed, and the virtual persona’s product claims must be truthful and substantiated.

Additionally, a unique challenge with virtual influencers is that ‒ unlike human endorsers ‒ they cannot actually use or experience a product, making it impossible for their statements to reflect a genuine personal opinion or experience. How, then, can a virtual influencer’s statements reflect truthful and honest experiences?

Applied to virtual influencers, the principles of the Endorsement Guides require brands to avoid implying that the character’s “experience” is real. Instead, advertisers should make clear disclosures of the brand relationship and ‒ where material to consumer perception ‒ that the persona is AI-generated. In other words, when using virtual influencers, advertisers are obligated to ensure that consumers are not being misled into thinking that the synthetic character’s endorsement represents an authentic consumer experience.

Fake reviews and testimonials

In 2024, the FTC published its Trade Regulation Rule on the Use of Consumer Reviews and Testimonials, which targets unfair and deceptive practices involving consumer reviews and testimonials. Among other things, the rule prohibits reviews and testimonials that falsely claim to be from a real person, including reviews generated by AI or written by individuals who have no actual experience with the company’s products or services. The rule also enables the FTC to seek civil penalties against knowing violators.

Fake reviews and testimonials have also been targeted as part of Operation AI Comply. In one example, the FTC brought a claim against an AI writing assistant advertised as a way to generate consumer reviews. The FTC barred the company from marketing services intended to fabricate testimonials, citing the risk of misleading consumers who rely on authentic reviews. The takeaway for advertisers is that if an AI tool can be used to create deceptive content at scale, regulators may view the tool itself as problematic.

Chatbots

Marketers offering customer service through an AI-powered chatbot should make clear to consumers that they are communicating with a machine wherever consumers would reasonably believe they are chatting with a person, or wherever non-disclosure would otherwise be misleading. The FTC has emphasised in blog posts that chatbots must not mislead consumers about what they are or what they can do. If consumers are likely to believe they are interacting with a human when in fact they are communicating with AI, the omission of that fact can be deceptive.

Certain state laws also require disclosure when using bots to communicate with consumers. By way of example, California’s Bot Disclosure Law prohibits using bots with the intent to mislead consumers in connection with a commercial transaction. To comply, brands must provide clear and conspicuous disclosures, placed such that a reasonable person interacting with the bot would understand they are not speaking with a human. The law applies to anyone using a bot to communicate or interact online with a person in California.

When using AI-powered chatbots, advertisers are also responsible for the truth and accuracy of any claims made by their chatbots. And if marketers use AI tools to trick or manipulate consumers into making harmful choices that are contrary to their intended goals (so-called “dark patterns”), that conduct may be considered an unfair practice.

Bias

Even apparently “neutral” technology can produce results that are biased or discriminatory. To avoid inadvertently introducing bias or other unfair outcomes, the FTC advises users of AI to scrutinise algorithms and data sets for biases, and to proactively monitor AI outputs and embrace transparency.

Digital replicas

Advertisers are increasingly harnessing the use of digital replicas ‒ computer-generated simulations of a real person’s image, likeness, voice or performance that are virtually indistinguishable from the original.

The rapid rise of AI-generated image, voice and likeness “clones” has pushed US law-makers and regulators to confront the risks of digital replicas and deepfakes. In general, while no federal right of publicity law exists in the USA, state right of publicity laws protect individuals’ identities (name, image, likeness, etc) from unauthorised use ‒ examples of which include the following.

  • Tennessee’s ELVIS (Ensuring Likeness Voice and Image Security) Act addresses AI-driven voice cloning by expressly protecting an individual’s voice (and simulation of voice) as part of their right of publicity. In addition, the law makes it illegal for a person or company to distribute or publicly disseminate replicas that they know are unauthorised or to distribute tools or services whose primary function is to generate such unauthorised replicas.
  • New York and California have also enacted laws addressing contractual provisions related to digital replicas in performance or talent agreements. These laws aim to protect individuals from open-ended or unfair grants of rights to replicate their voice or likeness using AI, providing that clauses in performance contracts granting rights in digital replicas are unenforceable if they allow replicas to substitute for live work, fail to provide a reasonably specific description of intended uses, and were not negotiated with the benefit of legal counsel or union representation.

In parallel, industry agreements such as the SAG‒AFTRA (Screen Actors Guild‒American Federation of Television and Radio Artists) Commercials Contract include new rules around how AI can be used on union productions, including specific requirements around usage, permissions, and costs associated with the use of digital replicas and synthetic performers.

IP

In addition to consumer protection concerns, marketers should consider IP rights issues implicated by the use of AI-generated content.

For advertisers, one of the most pressing copyright issues around AI is whether assets generated with AI can be owned and protected. The US Copyright Office has reaffirmed that human authorship is required for copyright protection and that works created entirely by AI are not registrable. However, advertising content that involves meaningful human creative input (through selection, arrangement, or modification of AI outputs) may qualify for protection ‒ although, to be clear, only the elements of the work resulting from human authorship are protectable. This means brands can potentially claim rights in campaigns built with AI assistance, but only if the human role is documented and substantial. From a risk standpoint, advertisers can bolster their claims to ownership by ensuring that human authorship is embedded throughout the creative process.

Marketers should also bear in mind that AI tools have the potential to generate content that infringes on a third party’s copyright, trade mark, or right of publicity (for example, in the case of AI lookalikes or soundalikes). If the system has been trained on or is prompted with copyrighted, trademarked, or otherwise protected material, there is a risk that resulting output may incorporate or reproduce those protected elements. Using an AI-generated image, video or song that inadvertently reproduces protected expression could expose both the advertiser and its agency to infringement claims.

Advertisers should therefore review vendor terms carefully, seek contractual indemnities and warranties, ensure proper human oversight, and have in place an AI policy that establishes clear internal rules for how AI tools may be used.

