Key takeaways from this interview
- Copyright protection for a work created using AI requires proof of substantial human intervention, as content generated solely by an AI system is not eligible for protection.
- Traceability of the creative process has become a prerequisite for evidencing such human intervention, in particular through the retention of briefs, prompts, artistic choices and post-production work.
- Agencies must implement a structured AI governance framework (internal processes, an AI charter, quality control and training) in order to mitigate legal and litigation risk.
- Evidential tools such as eIDAS qualified timestamping or, on a complementary basis, blockchain-based mechanisms can be used to strengthen proof of prior existence, integrity and the reliability of the creative process.
- The use of AI is reshaping the economic and contractual model of agencies, which must adapt their contractual clauses and, in some cases, place greater emphasis on the provision of advisory and support services rather than on the mere assignment of copyright.
The use of AI by digital agencies, and by companies internally, to produce content as varied as advertising and marketing campaigns, communication materials, articles and reports has become widespread. This profound change in modes of creation has implications for the management of copyright: it exposes organisations to infringement risk, whether they are service providers and their clients or companies whose content is produced by in-house teams. How should this risk be managed?
To what extent are AI tools used to automate certain creative and productive tasks? More specifically, what are the most common use cases?
Léa Puigmal: Digital agencies use AI for creative thinking, mock-ups and the generation of content such as logos, brand assets, articles, images or videos, for example in the production of a film or an advertising campaign for a client. AI is therefore involved at every stage of the process to support the creative professional, whether to explore ideas, generate content, rework it or integrate it into a broader deliverable. It is necessary to distinguish between two situations. On the one hand, AI may be used as a tool acting as an assistant in the service of creation. On the other hand, AI may itself generate content. In both cases, there are legal implications.
What is the impact on copyright?
Léa Puigmal: Copyright protection of a work presupposes human intervention in the creative process, as content generated solely by an AI system does not benefit from protection. It is necessary to return to the requirement of originality under copyright law, which has been somewhat unsettled by the emergence of AI-assisted works. In France, copyright is based on a personalist conception of authorship, which entails the need for human involvement in the creation of a work. There have, however, been adaptations in the case law to take account of technological developments, particularly in relation to software. Nevertheless, ownership of copyright must necessarily vest in a human being. To establish that a work is original, it is necessary to demonstrate the imprint of the author’s personality and therefore all the human interventions that led to the creation of the work.
How can such human intervention be demonstrated?
Léa Puigmal: The central issue for creative agencies will be their ability to demonstrate human intervention in the creative process. The traceability of that process will be a determining factor in recognising the human contribution, both for the purpose of copyright protection and for the assignment of rights to the client, while avoiding any subsequent risk of infringement. In copyright matters, evidence is unrestricted. To characterise human intervention, it is therefore necessary to retain briefs, client instructions, the agency’s responses, and the artistic direction, together with all the choices flowing from it, such as colour schemes and visual identities, which may demonstrate human input. With the use of AI, it is also necessary to show that this human intervention is substantial. To that end, it is advisable to retain not only the elements already mentioned, but also the prompts and all post-production work, namely the arrangement and structuring of the various components of a campaign, as well as the processing of elements within an image generated by AI. Traceability of the creative process makes it possible to evidence the human share in creative work. Certain foreign decisions provide guidance as to how this may be assessed.
Which case law are you referring to?
Léa Puigmal: I am thinking in particular of the US decision “Invoke AI / A Single Piece of American Cheese” issued by the U.S. Copyright Office (USCO), which sets out the factors taken into account when granting copyright registration to a work created through human–AI collaboration. Among the factors used to demonstrate substantial human intervention in the creative process, the USCO took into consideration the prompts, as evidence of work on the image using a technology known as “inpainting” (image retouching). This involves selecting a portion of an AI-generated image and, through multiple prompts, in this case 35 separate requests to the AI, reworking only that part of the image. This illustrates how all elements of the creative process are decisive in demonstrating a substantial level of human intervention, and therefore eligibility for copyright protection, including where that eligibility must be substantiated in the event of a dispute.
How can an agency reduce litigation risk?
Léa Puigmal: An agency must put in place internal processes and a methodology for ensuring the traceability of the creative process, the end result of which will be governed by the contract with the client and, where applicable, by an assignment of rights. This includes adopting a charter governing the use of AI and setting out good practices, such as not incorporating AI-generated content into a client deliverable without prior quality control. It also means carefully selecting AI models in advance, establishing genuine AI governance through an AI committee or a designated AI lead, and training and raising awareness among staff, both as to best practices and as to the risks arising from uncontrolled use of AI.
What changes with the use of AI?
Léa Puigmal: Agencies were already retaining briefs, ideation materials and creative reflections. What AI changes is the level of risk. Depending on the AI models used and the guarantees provided, or not provided, an agency may assign rights in a work it does not in fact own, for example where advertising content is generated on the basis of protected works used without prior authorisation, or where content is generated exclusively by an AI system without human intervention. This exposes the agency to infringement proceedings or to an action seeking to invalidate the assignment, which may require reimbursement of the assignment price and potentially the payment of damages. As for the client, if the assignment is invalid, it will be unable to bring infringement proceedings against a competitor using the same visuals or advertising campaign.
What can the agency guarantee?
Léa Puigmal: An agency cannot provide guarantees exceeding those it holds itself. Transparency towards the client regarding the use of AI is therefore decisive, particularly as to the AI model used, the associated conditions and guarantees, the sources of the training data, the non-reuse of prompts for training purposes, confidentiality arrangements, or any possible commercial reuse of the generated content. By contrast, the agency can give commitments in respect of matters within its control, namely the provision of its own datasets or content, thereby guaranteeing the sources of the training data used for the selected AI model, configuring the AI where possible, implementing AI good practices applicable to all staff through an AI charter, and applying quality control to content generated by a human contributor on the basis of predefined criteria. These may include ensuring that no third-party trade marks are reproduced identically, verifying that the deliverable has not been created “in the style of” a third party, and carefully selecting AI models that provide assurances as to the data on which they have been trained.
How should traceability be organised upstream and in post-production within the creative process?
Léa Puigmal: In copyright law, evidence is unrestricted. The evidential element is therefore particularly significant, as case law is highly fact-specific and it is for the court to interpret and determine which elements are probative. This will depend on the organisation’s level of maturity in its use of AI and the costs it is prepared to incur in managing this risk. Some organisations will rely, at a minimum, on screenshots. Beyond that, two techniques may be implemented: blockchain or qualified timestamping by a trusted third party.

An interesting French decision addressing the evidential value of blockchain was issued by the Marseille Judicial Court on 20 March 2025. While it does not definitively resolve the issue, it provides some guidance. When blockchain is used within the creative process, it can establish that an action was carried out at a given date and time and anchor a work, in the form of a hash rather than the source content, to a certain date and time. In other words, it may provide proof of prior existence, corroborated by other evidential elements. It does not, however, resolve the issue of copyright ownership. According to the court, blockchain can be accepted only as part of a body of evidence, alongside other probative elements. Moreover, the evidential value of blockchain was accepted indirectly, owing to the involvement of a trusted third party, namely the bailiff who drew up the official record. This shows the importance of third-party involvement.
By contrast, eIDAS qualified timestamping can provide a guarantee of the link between date, time and content, excluding any modification and ensuring integrity through the application of an advanced electronic signature, certified under eIDAS, by a trusted third party. Compliance with the requirements of the eIDAS Regulation gives rise to a presumption of reliability of the signature, shifting the burden of proof in the event of a dispute. Compared with blockchain, qualified timestamping adds an additional layer through the advanced electronic signature of a trusted third party, which ensures the reliability of the process.
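To make concrete the idea of anchoring a hash rather than the source content, here is a minimal Python sketch. It is an illustration of the general technique only, not the tooling at issue in the Marseille decision: only a fixed-length SHA-256 fingerprint of the work would be published, and any later change to the work, however small, produces a different fingerprint.

```python
import hashlib

def anchor_digest(content: bytes) -> str:
    """Return the SHA-256 digest that would be anchored on-chain.

    Only this fixed-length fingerprint is published; the source
    content itself never leaves the agency's systems.
    """
    return hashlib.sha256(content).hexdigest()

original = b"Final campaign visual, v3"
digest = anchor_digest(original)

# The digest is deterministic, so the same content can later be
# re-hashed and matched against the anchored value...
assert anchor_digest(original) == digest

# ...while any modification yields a different digest, which is what
# allows the anchored hash to evidence integrity.
assert anchor_digest(b"Final campaign visual, v3!") != digest
```

In practice the digest would then be recorded on-chain or submitted to a qualified timestamping service, and kept alongside the other evidential elements mentioned above.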
How is such timestamping carried out?
Léa Puigmal: Ideally, it can be automated for each step of the process. Some AI tools automatically record prompts, but without any guarantee as to the reliability of that record. Adding qualified timestamping strengthens that reliability. If an organisation does not use blockchain or qualified timestamping tools, it can at least require its creatives to take screenshots at each stage and retain them.
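As a rough illustration of what an automated, tamper-evident prompt record might look like, the sketch below hash-chains each logged prompt to the previous entry, so that altering any earlier record is detectable. The structure and field names are assumptions for illustration, not a reference to any particular tool; a qualified timestamp from a trusted third party would still be needed on top of such a log to obtain the eIDAS presumption of reliability discussed above.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list, prompt: str) -> list:
    """Append a prompt to a hash-chained log.

    Each entry records the prompt, a UTC timestamp and the hash of
    the previous entry, so altering any earlier record invalidates
    every hash that follows it.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "prompt": prompt,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return log

def verify_chain(log: list) -> bool:
    """Recompute every hash; returns False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        payload = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "Generate three logo concepts for a coffee brand")
append_entry(log, "Rework concept 2 with a warmer palette")
assert verify_chain(log)

# Tampering with the first prompt breaks verification of the chain.
log[0]["prompt"] = "something else"
assert not verify_chain(log)
```

The chain alone only proves internal consistency; it is the periodic anchoring or qualified timestamping of the latest hash by a trusted third party that gives the record external evidential weight.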
I had the opportunity to work for a digital agency at a time when creatives were expressing a degree of impatience to start using generative AI. A careful balance had to be struck between the opportunities offered by AI, particularly generative AI, and the risks arising from uncontrolled use of such technologies. To mitigate these risks, the agency needs to undertake a comprehensive review of the AI governance to be deployed and to implement an AI charter setting out permitted and prohibited practices, together with a methodology to be followed, for example involving the use of timestamping tools within the creative process. For this to be effective, staff must be engaged, after being made aware of both the risks and the benefits of the technology. Training on the methodology adopted by the agency, or on the technologies to be used, would also be beneficial.
Have client contracts evolved with the use of AI? Do they now include specific clauses on AI usage, the creative process and human intervention, adapted warranties and an allocation of responsibilities?
Léa Puigmal: Contractual clauses must be adjusted. First, as a matter of transparency, clients must be informed that the deliverable was created with the assistance of AI. It is then necessary to include clauses specific to the use of AI, such as limitations of liability depending on the AI systems selected or the guarantees available, and to revisit standard clauses relating to intellectual property, confidentiality, security and the protection of personal data. A dedicated AI schedule is also strongly recommended in order to provide transparency as to the AI models used, their versions, the associated conditions of use, and the technologies employed to timestamp prompts or the creative process.
Does the transparency obligation under the European Artificial Intelligence Regulation affect copyright issues?
Léa Puigmal: Copyright is not addressed directly in the AI Regulation. However, the obligations imposed depending on the level of risk and the type of AI provide a framework for managing intellectual property risks. The Regulation imposes transparency and traceability obligations regarding sources and, in some cases, the retention of logs, including prompts, for at least six months. Traceability will be a decisive factor in the negotiation and securing of intellectual property rights. Compliance with these obligations may ultimately assist in managing copyright risk: the transparency and information duties align with contractual requirements, while the retention of prompts and the traceability of sources and of the stages of the creative process support rights management.
How are agencies adapting to this transformation?
Léa Puigmal: AI is affecting the business model of creative agencies. That model has traditionally been based on the valuation of creative output, contractually expressed through an assignment of rights and an associated price. If part of the creation disappears or is carried out by a machine, how is the agency’s work to be valued? This is why some recommend that agencies, in order to limit their risk, should use copyright assignment only where it is possible to demonstrate significant human intervention. Otherwise, it is preferable to value the provision of services, such as configuration, the use of specific large language models and datasets, and advisory and support services in connection with AI. This entails a new approach to contract drafting, centred on the valuation of creative advisory services. The sector’s business model is therefore undergoing significant change. As the French Senate recalled in an information report published in early 2025, artificial intelligence should nonetheless retain its role as an assistant and a tool in support of creation.
Disclaimer
The opinions, presentations, figures and estimates set forth on the website including in the blog are for informational purposes only and should not be construed as legal advice. For legal advice you should contact a legal professional in your jurisdiction.
The use of any content on this website, including in this blog, for any commercial purposes, including resale, is prohibited, unless permission is first obtained from Evidency. Requests for permission should state the purpose and the extent of the reproduction. For non-commercial purposes, all material in this publication may be freely quoted or reprinted, but acknowledgement is required, together with a link to this website.