Deepfakes, Voice Cloning, and Image Rights: How to Protect Your Digital Identity in the Age of AI
Key Takeaways
The recent steps taken by Taylor Swift to protect elements of her digital identity through trademark law raise broader questions about the legal tools available to ensure effective protection.
Generative artificial intelligence is not limited to producing text, images, or music. It also makes it possible to imitate a voice, reproduce a face, create realistic videos, or associate an individual with statements they never made.
Two recent examples illustrate this trend. While deepfakes and AI-generated false content are proliferating, Taylor Swift has filed applications with the U.S. Patent and Trademark Office (USPTO) to protect certain elements of her voice and image. These applications reportedly target audio clips such as “Hey, it’s Taylor Swift” and a stage image associated with her tour. In January 2026, actor Matthew McConaughey similarly registered several elements associated with his public persona, including his well-known phrase “Alright, alright, alright”, as well as audio and visual samples, in order to counter unauthorized uses involving AI. (1)
The issue is not whether a person can “own” their voice or face, but rather which legal tools make it possible to control the digital exploitation of their physical identity. In the era of deepfakes, voice cloning, and AI-generated avatars, effective protection requires combining several legal mechanisms, including image rights, the right to privacy, the AI Act, and contractual safeguards. Establishing a digital identity governance strategy should also be considered.
1. Voice, Image, Appearance: Generative AI Disrupts Traditional Protections
Until recently, infringement of image or voice typically required the reproduction of pre-existing content: a photograph, video, sound recording, or interview.
Generative AI opens up new possibilities and adds a new dimension to our digital identity. It enables the creation of new yet realistic content based on the characteristics of a real person, in which that person appears to speak, sing, endorse a product, or make statements they have never actually made. It is precisely this realism that creates the legal risk: even when it is not based on a direct copy of a photograph, video, or recording, the generated content may infringe on the rights of the person depicted.
Deepfakes are not illegal by nature. However, they may become illegal depending on the circumstances of their creation, the data used, the intended purpose, the consent obtained, and how the content is distributed.
The risk is not limited to public figures. Companies may face situations such as the creation of a fake video of an executive announcing a strategic decision, a voice message imitating an employee to trigger fraud, or an advertising avatar using an influencer’s likeness without authorization.
According to the CNIL, deepfakes may infringe privacy and reputation. Their creation and dissemination may engage the liability of their authors. (2) A deepfake does not fall solely within the scope of intellectual property law. It may also involve personal rights protection, civil liability, criminal law, personal data protection, and, in certain cases, trademark law.
The issue concerns the exploitation of digital identity elements: voice, face, gestures, silhouette, verbal signature, catchphrase, or stage posture. These elements may carry both economic and reputational value. For companies using AI in marketing activities, they should be treated as sensitive assets requiring appropriate legal handling.
2. Trademark Law, Image Rights, GDPR, AI Act: What are the Legal Safeguards?
Several complementary legal mechanisms can be used to protect digital identity, ranging from personality rights to trademark law and, more recently, artificial intelligence law.
2.1 Right to Privacy
The right to privacy is enshrined in Article 9 of the French Civil Code. Courts may order measures to prevent or put an end to an infringement of a person’s private life, including through summary proceedings.
The right to one’s image, derived from this principle, generally requires the prior authorization of the identifiable person before any reproduction or use of their image, subject in particular to the legitimate interest of informing the public. It should be noted that the image rights of public figures are limited by the right to information and the relevance of current events. They may oppose the dissemination of their image if it infringes their private life or dignity, or if it is used for commercial purposes without their consent.
2.2 Personal Data Protection
Individuals’ rights may also be protected through personal data protection law. A photograph or video may fall within the scope of personal data protection where it includes one or more identified or identifiable individuals.
Furthermore, processing whose purpose is to enable the recognition of a speaker based on vocal characteristics constitutes processing of biometric data. Biometric data fall within the category of special categories of personal data under Article 9 of the GDPR.
However, a voice is not necessarily biometric data in all circumstances. It becomes so when its characteristics are processed for the purpose of uniquely identifying a natural person.
Accordingly, a controller must not only ask, “Am I allowed to use this voice?”, but must also ensure that the use of such personal data complies with the GDPR, including the principles relating to processing of personal data and the rights of the data subject. (3)
2.3 Criminal Law
Article 226-8 of the French Criminal Code penalizes the publication of a montage made using a person’s words or image without their consent, where it is not clearly apparent that it is a montage or where this is not expressly indicated. Since the SREN Law of May 21, 2024, this provision also applies to certain content generated through algorithmic processing. (4)
Article 226-8-1 provides for a specific offense relating to sexualized montages created using a person’s words or image without their consent, with aggravated penalties where dissemination occurs via an online public communication service.
2.4 Trademark Law
A trademark may take various forms, including sound marks. A sound mark may consist of a sound or a combination of sounds and may be represented, for example, by a musical score or an MP3 file. The mark must be represented in a clear and precise manner so that third parties can determine the subject matter of protection.
Applied to voice, this means that an identifiable audio clip may, in principle, be registered as a trademark if it is distinctive. A human voice as such, taken in the abstract, will not necessarily be protectable. However, a vocal expression, jingle, phrase identifying a person, or sound associated with a commercial offering may constitute a distinctive sign.
Trademark protection may be effective against the reproduction of a registered sign or imitation creating a likelihood of confusion, but it does not necessarily cover all imitation of vocal tone or personal style. Accordingly, the protection strategies implemented by Taylor Swift and Matthew McConaughey are not intended to secure a global monopoly over their voice or image, but rather to protect specific signs associated with their identity.
For companies leveraging a strong sound or visual identity, such as a recognizable voice in advertising campaigns, a sound signature, a spoken slogan, or a digital character, trademark protection may be useful. Once registered, the trademark allows action against certain unauthorized uses (e.g., infringement actions). (5)
2.5 Artificial Intelligence Regulation (AI Act)
Finally, the European Regulation on artificial intelligence imposes transparency obligations for certain AI systems, particularly with respect to artificially generated or manipulated content.
Article 50 of the AI Act provides for transparency obligations concerning AI-generated content (images, audio, and text, including deepfakes). These obligations, including labeling requirements, will apply from August 2, 2026.
However, transparency does not amount to authorization. Indicating that content is AI-generated may be necessary, but it is not sufficient to make lawful the use of a person’s image or voice in such content. A person or company generating an avatar resembling an employee, artist, or influencer must still verify the rights granted by that individual, the legal basis for processing, and the applicable contractual framework.
3. Practical Recommendations: Building a Digital Identity Governance Framework
For companies wishing to use the image or voice of public figures, or even of their own employees, for marketing or advertising purposes, the protection of digital identity cannot rely on a single legal tool. It requires the implementation of a comprehensive governance strategy that integrates intellectual property, data protection, contracts, and security.
This governance strategy should include the following:
- Mapping the digital identity elements used or intended to be used (voices, images, avatars, sound signatures, characters, spoken slogans, etc.) and identifying the rights holders (artists, influencers, employees, content libraries, or the company itself).
- Securing permissions by entering into contracts for the transfer of rights, licensing, audiovisual services, dubbing, influencer marketing, or production. These contracts must specify whether the image or voice may be reproduced, modified, cloned, synthesized, integrated into an AI model, or reused to generate new content. Otherwise, there is a risk of discovering that the initial authorization does not cover AI-generated content.
- Considering trademark registration where relevant, particularly for companies relying on strong sound or visual identity. This requires careful preparation (choice of sign, distinctiveness, classes of goods and services including digital uses, territorial strategy).
- Ensuring that the use of these digital identity elements complies with the GDPR. When a company processes voices, images, or videos that can identify an individual, whether to create content using an AI system or to train an AI model, such processing must be properly identified and managed in a manner that respects the rights of the individuals concerned.
- Raising awareness among marketing, communications, product, HR, and IT teams regarding the legal risks associated with AI-generated content, particularly when imitating a voice “in the style of,” using a person’s likeness without consent, or creating deepfakes inspired by real individuals.
More generally, it is recommended to implement an internal AI policy covering authorized uses, required legal validations, rules governing content sourcing and reuse, transparency obligations, and pre-publication controls.
Every major technological advance creates a wave of disruption in society at large and in the legal sphere in particular. The advent of generative AI and its use by the general public is no exception to this rule. These systems create a specific area of risk: content may be technically easy to produce but legally difficult to secure. The fact that a tool allows one to clone a voice, alter a face, or generate a realistic video does not mean that its use is authorized.
Generative AI, and deepfakes in particular, mark a new phase in the expansion of digital identity. In this context, the legal protection of voice, image, and personal distinctive signs becomes a critical issue for public figures, artists, and companies leveraging recognizable digital identities.
(1) “Taylor Swift files trademarks for voice and image amid concern over AI misuse”
(2) CNIL, Deepfakes (hypertrucages): how to protect yourself and report unlawful content (in French)
(3) See Chapter II (Articles 5 et seq.) and Chapter III (Articles 12 et seq.) of the GDPR
(4) Article 226-8 of the French Criminal Code (as amended by the SREN Law of May 21, 2024): “The act of bringing to the attention of the public or of a third party, by any means whatsoever, a montage made using the words or the image of a person without that person’s consent shall be punishable by one year’s imprisonment and a fine of €15,000, where it is not clearly apparent that it is a montage or where this is not expressly indicated. The same offense, punishable by the same penalties, shall be deemed to include the act of bringing to the attention of the public or of a third party, by any means whatsoever, visual or audio content generated by algorithmic processing and representing the image or the words of a person, without that person’s consent, where it is not clearly apparent that the content is algorithmically generated or where this is not expressly indicated. (…)”
(5) Article L.713-2 of the French Intellectual Property Code prohibits, in particular, unless authorized, the use of a sign identical to the trademark in relation to identical goods or services. See also Article L.716-4 of the French Intellectual Property Code regarding trademark infringement.
Bénédicte DELEPORTE
Avocat
Deleporte Wentz Avocat
www.dwavocat.com
May 2026