The rise of digital technology and the dematerialisation of exchanges have significantly increased opportunities for fraud, leaving businesses more vulnerable to document manipulation. Fake bank account details, forged invoices, falsified payslips and counterfeit identity documents can result in substantial financial losses, damage to a company’s reputation, and legal complications.
The risks associated with deepfakes have also gone far beyond speculation. They are now a tangible reality, as shown by the growing number of cases reported in the media. For instance, in February 2024, a multinational company lost $25.5 million when an employee believed they were attending a videoconference with colleagues and managers. Convinced by their instructions, the employee authorised a transfer, only to discover later that all participants were deepfakes[1].
This incident illustrates how deepfakes have become powerful tools for document fraud. According to Europol, document fraud is now the world’s third-largest criminal activity, causing an estimated $200 billion in annual losses[2]. With artificial intelligence accelerating these capabilities, this figure is expected to rise. In France alone, it is estimated that 70% of companies were targeted by fraud attempts in 2022[3].
For corporate lawyers, this evolution raises urgent questions: how can the evidential value of digital documents be guaranteed when virtually any file can now be convincingly falsified?

Table of contents
- Deepfakes and AI: digital evidence in a manipulable reality
- The European legal framework for digital evidence
- Evidence strategies for businesses facing deepfake and AI manipulation threats
- Future threats to anticipate in digital evidence
Key takeaways: deepfakes and digital evidence
- The deepfake threat is real: $25.5 million lost through a fake videoconference; 70% of French companies targeted by fraud in 2022.
- The European legal framework is strengthening: eIDAS 2 and the AI Act now regulate digital evidence and AI use.
- Qualified electronic timestamping is essential: it provides certified proof of date and time with a legal presumption of accuracy.
- Qualified electronic signatures and certified archiving: ensure authenticity, integrity, and long-term preservation of evidence.
- Relying on certified trust service providers is decisive: accreditation under the eIDAS framework maximises evidential value in the face of AI-driven manipulation.
Deepfakes and AI: digital evidence in a manipulable reality
The word deepfake is a blend of “deep learning” and “fake”. The technique relies on AI analysing hours of video and photographic content of a person to replicate their appearance, voice, and expressions with striking precision. The resulting model can then be made to say or do virtually anything.
The widespread availability of such tools is disrupting the entire evidence ecosystem. No longer does one need advanced technical expertise or substantial financial resources to produce a realistic forgery. Today, anyone with a personal computer, or even a smartphone app, can create deepfakes in real time.
The sectors most exposed to such fraud include finance, where fraudulent voice instructions can trigger unauthorised transfers; e-commerce, where counterfeit products are promoted through fake endorsement videos; and human resources, where interviews conducted via phone or videoconference can be faked.
The European legal framework for digital evidence
In response to these developments, European legislation is evolving. The eIDAS Regulation[4], in force since 2016, defines electronic trust services. Its revised version, eIDAS 2[5], which took effect in May 2024, strengthens security requirements and introduces new mechanisms for European digital identity.
This legal framework establishes a clear hierarchy of evidential weight among electronic documents. A qualified electronic timestamp issued by a trust service provider accredited by ANSSI enjoys a legal presumption of accuracy, meaning that the burden of proof falls on the opposing party to demonstrate any inaccuracy.
In parallel, the AI Act[6], which entered into force in August 2024, regulates the use of artificial intelligence. It imposes specific obligations on high-risk AI systems and prohibits certain manipulative practices. It also establishes legal mechanisms to distinguish AI-generated content, including obligations for transparency and traceability.
ANSSI plays a central role within this framework by certifying trust service providers and defining technical standards. Such certifications allow companies to rely on legally recognised and technically robust solutions.
Evidence strategies for businesses facing deepfake and AI manipulation threats
To respond to these risks, businesses must rethink their approach to digital evidence. The first step is to map out critical documents according to their potential evidential value. Not all digital documents require the same level of protection: a strategic commercial contract, for example, demands far stronger guarantees than a routine operational email.
The use of qualified electronic timestamping should be the first reflex. By securely associating a date and time with a digital file, timestamping creates proof of precedence that is admissible in court. When applied from the moment a document is created, it greatly simplifies the subsequent evidential process.
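The mechanism behind timestamping can be illustrated with a short sketch. This is a hypothetical, simplified model only: a real qualified timestamp is an RFC 3161 token signed with the private key of an ANSSI-accredited provider, whereas here an HMAC with a demo key stands in for that signature. The principle is the same: the document's hash is bound to a certified time, so any later modification is detectable.

```python
import hashlib
import hmac
from datetime import datetime, timezone

# Demo key standing in for the trust service provider's signing key
# (hypothetical; a real provider uses an asymmetric key pair).
TSA_SECRET = b"demo-only-secret"

def timestamp_document(data: bytes) -> dict:
    """Bind the document's SHA-256 hash to an issuance time."""
    digest = hashlib.sha256(data).hexdigest()
    issued_at = datetime.now(timezone.utc).isoformat()
    token = hmac.new(TSA_SECRET, f"{digest}|{issued_at}".encode(),
                     hashlib.sha256).hexdigest()
    return {"digest": digest, "issued_at": issued_at, "token": token}

def verify_timestamp(data: bytes, ts: dict) -> bool:
    """Recompute the hash and check it still matches the sealed token."""
    digest = hashlib.sha256(data).hexdigest()
    expected = hmac.new(TSA_SECRET, f"{digest}|{ts['issued_at']}".encode(),
                        hashlib.sha256).hexdigest()
    return digest == ts["digest"] and hmac.compare_digest(expected, ts["token"])

contract = b"Commercial contract, version 1"
ts = timestamp_document(contract)
assert verify_timestamp(contract, ts)              # untouched document checks out
assert not verify_timestamp(contract + b"!", ts)   # any later edit is detected
```

Applied at the moment of creation, this is what gives a timestamped document its proof of precedence: the file that exists today demonstrably existed, unmodified, at the certified time.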
To ensure authenticity and integrity, qualified electronic signatures and qualified electronic seals are also essential. Defined under the eIDAS regulation, these mechanisms formally identify a document’s author and detect any later modifications. Their legal weight is equivalent to that of a traditional handwritten signature.
Certified electronic archiving completes this framework by preserving the integrity of evidence over time. An electronic archiving system compliant with the NF Z42-013 standard ensures documents cannot be tampered with, logs every operation, and guarantees format reversibility for long-term accessibility.
Against deepfakes specifically, biometric and behavioural authentication solutions are emerging. These technologies analyse metadata, compare IP addresses, and detect inconsistencies invisible to the human eye. Major technology players such as Microsoft and Google are investing heavily in automated detection tools.
Finally, traceability in the evidence-collection process is becoming critical. Accurately recording who collected the evidence, when, where, and how, helps eliminate doubts of manipulation. Such meticulous documentation, often overlooked, can prove decisive in litigation.
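One common way to make such a collection record tamper-evident is a hash-chained log. The sketch below is a hypothetical illustration, not a certified archiving system: each entry records who collected the evidence, when, where, and how, and embeds the hash of the previous entry, so altering any past record breaks every subsequent link.

```python
import hashlib
import json

def append_entry(log: list, who: str, when: str, where: str, how: str) -> None:
    """Append a custody record chained to the hash of the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"who": who, "when": when, "where": where, "how": how,
              "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)

def verify_chain(log: list) -> bool:
    """Recompute every link; any altered record invalidates the chain."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev"] != prev_hash:
            return False
        if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

# Hypothetical custody events for illustration.
log = []
append_entry(log, "A. Dupont", "2024-02-05T09:00Z", "Paris office", "exported mailbox")
append_entry(log, "B. Martin", "2024-02-05T10:30Z", "Paris office", "sealed on archive server")
assert verify_chain(log)
log[0]["how"] = "edited after the fact"   # tampering with an earlier record
assert not verify_chain(log)
```

The same chaining principle underlies the operation logs required of certified archiving systems: the evidential value comes less from any single record than from the fact that the whole sequence can be re-verified.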
Future threats to anticipate in digital evidence
In the longer term, quantum computing could upend the entire digital evidence ecosystem. These machines, still at an experimental stage, may undermine the cryptographic methods currently used to secure communications and data.
While this threat may seem distant, preparation must begin now. The US National Institute of Standards and Technology (NIST) has been working since 2016 on new “post-quantum” encryption algorithms designed to resist quantum attacks[7]. In France, ANSSI closely monitors these developments and advises companies to assess their vulnerability to this emerging risk.
Conclusion
With the exponential growth of AI-driven fraud, companies must adopt a comprehensive strategy to safeguard their ability to present admissible and secure evidence. This requires combining several elements:
- risk assessment by document type,
- systematic reliance on qualified trust service providers,
- team training on emerging threats,
- and the implementation of reinforced verification processes.
Collaboration with trusted third parties is therefore crucial. Qualified providers, certified by national authorities such as ANSSI under the eIDAS framework, deliver the technical expertise and legal guarantees needed to counter technological threats. Their involvement also places part of the evidential responsibility with certified, specialised actors.
Constant adaptation of practices is equally necessary. As the technological landscape evolves, businesses must remain vigilant to new threats. Ongoing regulatory and technological monitoring, integrated into risk management processes, is the key to building a sustainable evidential strategy.
Sources
[1] Filippone D., “Piégé par un deepfake en visio, un employé transfère 24 M€ à des escrocs”, Le Monde Informatique, 5 February 2024: https://www.lemondeinformatique.fr/actualites/lire-piege-par-un-deepfake-en-visio-un-employe-transfere-24-meteuro-a-des-escrocs-92875.html
[2] Europol, EU Serious and Organised Crime Threat Assessment (SOCTA) 2021, A corrupting influence: the infiltration and undermining of Europe’s economy and society by organised crime: https://www.europol.europa.eu/cms/sites/default/files/documents/socta2021_1.pdf
[3] Observatoire de la Filière de la Confiance Numérique 2025 : https://www.confiance-numerique.fr/wp-content/uploads/2025/06/observatoire-acn-2025-de-la-confiance-numerique.pdf
[4] Regulation (EU) No 910/2014 of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC (eIDAS): https://eur-lex.europa.eu/legal-content/FR/TXT/?uri=celex%3A32014R0910
[5] Regulation (EU) No 2024/1183 of 11 April 2024 amending Regulation (EU) No 910/2014 as regards establishing the European Digital Identity Framework (eIDAS 2): https://eur-lex.europa.eu/legal-content/FR/TXT/?uri=OJ:L_202401183
[6] Regulation (EU) No 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act): https://eur-lex.europa.eu/legal-content/FR/TXT/?uri=OJ:L_202401689
[7] https://csrc.nist.gov/projects/post-quantum-cryptography
Disclaimer
The opinions, presentations, figures and estimates set forth on the website including in the blog are for informational purposes only and should not be construed as legal advice. For legal advice you should contact a legal professional in your jurisdiction.
The use of any content on this website, including in this blog, for any commercial purposes, including resale, is prohibited, unless permission is first obtained from Evidency. Request for permission should state the purpose and the extent of the reproduction. For non-commercial purposes, all material in this publication may be freely quoted or reprinted, but acknowledgement is required, together with a link to this website.