With deep fake harms stretching beyond the purely commercial considerations of current trademark legislation, INTA has laid out how it believes the law should adapt to keep pace with the emerging threat of AI digital replicas.
The International Trademark Association’s (INTA) Board of Directors convened on Tuesday (25 February) and voted to approve a resolution laying out its stance on how legislation should address harms caused by deep fakes – a form of audio-visual content that has been generated or manipulated using AI technology to create deceptive digital replications of an individual or object.
While opinion on the global proliferation of AI remains divided, the matter of deep fakes has drawn broad agreement, with the misleading technology raising alarm bells for its potential to manipulate personal likenesses, spread misinformation and infringe upon individuals’ rights of publicity.
INTA’s resolution, titled Legislation on Deep Fakes (Digital Replicas), responds to the global advances in AI technology which have “lowered the cost and eased the access of tools” utilised for the creation of deep fakes, and the need for flexibility in legislation to respond to the ever-evolving harms caused by misuse of AI technologies.
Jenny Simmons, INTA’s associate senior director for government relations, said: “This Resolution provides legislators with a blueprint that balances providing effective tools to fight unauthorised digital replicas with free speech rights. INTA looks forward to working with legislators to craft laws that draw upon the depth of INTA members’ expertise with intellectual property rights, consumer protection, ecommerce, and freedom of expression on the internet.”
A NEED TO DIVERGE
INTA’s “expedited consideration” of deep fake issues has been pushed forward by the US Congress’ deliberations over various bills relevant to AI misuse and harms. These include the proposed Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act and No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act, as well as the proposed Preventing Abuse of Digital Replicas Act (PADRA), which would amend the US Trademark Act of 1946 (Lanham Act).
While stressing that it “continued to support the minimum requirements” of its 1998 Resolution US Federal Right of Publicity and 2019 Resolution Right of Publicity Minimum Standards, INTA diverged from its prior conclusions, acknowledging that the rapid advancement of AI technologies has made it clear that associated risks are not limited to “lost sales or direct consumer harm”, but now include threats to the dignity and privacy rights of individuals. As such, INTA finds that there “is no compelling reason” that new US federal right of publicity legislation relating to deep fakes should be limited to an amendment to the Lanham Act, or other equivalent international trademark laws, so long as diverging legislation meets the minimum requirements outlined in its 1998 and 2019 resolutions.
INTA further states that, while legal clarity and harmonisation remain its utmost goals, it would back digital replica laws that offer sufficient clarity on how the new rights interact with state and common law protections, even where they do not pre-empt all alternative protections.
The resolution notes that many victims of deep fakes may not have the “resources or economic incentives” to commence legal action to obtain court orders for the removal of harmful replicas, and that many associated harms could not be remedied through monetary penalties. As such, INTA recommends the implementation of a notice and takedown framework that would enable deep fake victims to request the swift removal of digitally replicated content from a platform, protecting against invasions of privacy, dignitary harms or fraudulent schemes, while providing a “safe harbour” for complying social platforms.
EXCESSIVE AMBIGUITY
While the association has put forward its support for a broadened legislative landscape that could adequately encompass the range of harms faced by deep fake victims, it stressed that it would not support legislation that “introduce[d] excessive ambiguity into trademark law, invite litigation over vague or overly expansive provisions, or create new tiers or classes of trademarks based solely on whether those trademarks incorporate the image, voice or likeness of a real individual”.
Specifically, INTA states that it would not back proposals seeking to impose “artificial time limits” for trademarks incorporating replicas of individuals that continue to operate as trademarks, nor a system that pre-empts state statutory or common law protections for only some categories of trademarks. The association further noted that, in line with its commitment to balancing intellectual property rights with protections for commentary such as criticism, satire, parody and legitimate news coverage, it would not support legislation that weakened or failed to adequately protect the use of an individual’s persona in speech or expression.