
Legal Tech: what role does AI play in the management of digital evidence?

Reading time: 5 min
Modification date: 27 January 2026

In December 2025, the “Intermarché wolf”[1] controversy highlighted a tension that has become central: the divide between human creation and content generated by artificial intelligence. Intermarché was forced to abandon its project to deploy AI-powered photo booths after the inconsistency with its advertising campaign produced “without AI” triggered an immediate backlash. An apparently anecdotal episode, but one that raises a fundamental issue: how can the authenticity of digital documents be ensured when AI can create, alter and falsify content?

This question is particularly acute for the Legal Tech sector and for in-house legal departments, which are increasingly adopting AI tools to analyse and process their documents while at the same time being required to produce evidence that is admissible in court.

This article examines the role of AI in Legal Tech and in the management of digital evidence: what it makes possible, the risks it poses to documentary integrity, and why it cannot replace trust mechanisms such as qualified timestamping and the electronic seal.


Key takeaways

  • 71% of legal departments already use generative AI for their research
  • AI can analyse and sort evidence, but does not certify either its date or its integrity
  • Deepfakes represent a growing threat to the authenticity of evidence
  • Only a qualified eIDAS timestamp provides a legal presumption of date and integrity

Legal Tech and AI: an updated definition

The digital transformation of legal services has accelerated in recent years. Legal Tech refers to the full range of digital tools that modernise legal practice. With the integration of artificial intelligence, these solutions move into a new phase.

What is meant by AI-driven Legal Tech?

AI-driven Legal Tech covers applications that use artificial intelligence to automate or support legal tasks. This category encompasses distinct realities:

  • on the one hand, “traditional” AI, based on classification rules;
  • on the other hand, generative AI, capable of producing text and analyses using language models.

The distinction is significant. While traditional AI sorts documents according to predefined criteria, generative AI can draft contract summaries or suggest clauses tailored to a specific context. This capability opens up substantial opportunities. It also raises new questions regarding reliability.

An accelerating uptake within legal departments

A study published in February 2025 by PwC, the Cercle Montesquieu and France Digitale confirms this trend[2]:

  • 71% of general counsel use generative AI for their research,
  • 62% consider its integration into Legal Tech tools to be unavoidable.

This uptake reflects growing pressure: increasing regulatory complexity, expanding volumes of documentation, and heightened expectations of responsiveness.

How AI is reshaping legal professions and the management of evidence

Applications of AI in the legal field are multiplying. This transformation is changing professional practices, without resolving all the issues linked to evidential weight.

Automation of repetitive or low-value tasks

AI performs particularly well in handling time-consuming tasks. Prior art searches, clause extraction, preliminary document sorting and optical character recognition can all be automated effectively.

A contract running to several hundred pages can be analysed in a matter of minutes, where a lawyer would previously have spent several hours.
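
As a rough illustration of how such rule-based automation works, the Python sketch below spots common clause headings by keyword. It is a deliberately minimal example: the keyword list and the sample excerpt are invented for the illustration and do not come from any particular Legal Tech product.

```python
# Minimal sketch of rule-based ("traditional AI") clause spotting: scan a
# contract's text for headings that usually introduce sensitive clauses.
# The keyword list and sample excerpt are invented for this example.
import re

CLAUSE_KEYWORDS = [
    "limitation of liability",
    "termination",
    "confidentiality",
    "governing law",
]

def find_clauses(contract_text: str) -> list[tuple[str, int]]:
    """Return (keyword, character offset) pairs for each clause heading found."""
    hits = []
    for keyword in CLAUSE_KEYWORDS:
        for match in re.finditer(keyword, contract_text, flags=re.IGNORECASE):
            hits.append((keyword, match.start()))
    return sorted(hits, key=lambda hit: hit[1])

sample = "12. Confidentiality. Each party shall... 13. Governing Law. This agreement..."
print(find_clauses(sample))
```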

Intelligent analysis and anomaly detection

Beyond document sorting, AI enables semantic analysis of content. AI-driven Legal Tech tools can:

  • detect inconsistencies in logs,
  • identify anomalies in metadata (a simple check of this kind is sketched below),
  • assess risk scores in litigation files.
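
As a minimal illustration of the metadata check mentioned in the list above, the following Python sketch flags exhibits whose file modification time is older than the date they claim to bear. The field names and sample entries are invented for the example; a real e-discovery tool would inspect far richer metadata.

```python
# Minimal sketch: flag exhibits whose file modification time is older than the
# date they claim to bear. The field names and sample entries are invented for
# this example and do not reflect any specific Legal Tech product's schema.
import os
from datetime import datetime, timezone

exhibits = [
    {"path": "contract_v2.pdf", "claimed_date": "2024-03-01"},
    {"path": "annex_a.pdf", "claimed_date": "2025-11-15"},
]

def flag_anomalies(items):
    """Return exhibits whose last modification predates the date they claim."""
    flagged = []
    for item in items:
        if not os.path.exists(item["path"]):
            continue  # the sketch assumes exhibits are available locally
        claimed = datetime.fromisoformat(item["claimed_date"]).replace(tzinfo=timezone.utc)
        modified = datetime.fromtimestamp(os.path.getmtime(item["path"]), tz=timezone.utc)
        if modified < claimed:
            flagged.append((item["path"], claimed.date(), modified.date()))
    return flagged

for path, claimed, modified in flag_anomalies(exhibits):
    print(f"{path}: claims {claimed} but was last modified {modified}")
```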

When AI undermines evidence: risks that cannot be ignored

Artificial intelligence can itself become a threat to the integrity of digital evidence.

Deepfakes and falsification: a challenge to authenticity

In February 2024, an employee at the Hong Kong branch of a multinational transferred USD 25 million to cybercriminals after taking part in a videoconference in which every other participant was a deepfake. The impersonation of the executives' voices and images had been made possible by AI analysis of conference videos of them available online.

French law has adapted to this reality. French Act No. 2024-449 of 21 May 2024 incorporates deepfakes into the French Criminal Code, under Articles 226-8 and 226-8-1. At EU level, the AI Act imposes transparency obligations on AI systems that generate content.

The risks are not limited to falsification. In December 2025, the Périgueux Judicial Court issued a widely noted decision[3]: the judge observed that certain authorities relied upon were “untraceable or incorrect”, explicitly referring to the risk of “hallucinations” produced by AI tools. This case, the first of its kind in France, illustrates the dangers of placing undue reliance on AI-generated content.

In the face of these threats, whether falsification or fabrication, verifying the provenance and integrity of documents becomes a necessity. Qualified eIDAS timestamping makes it possible to freeze a document at a specific point in time and to demonstrate that it has not been altered since its creation. It provides proof of prior existence that may prove decisive if a falsified version later emerges. This is precisely the type of assurance delivered by Evidency’s qualified timestamping, securing the chain of trust in the face of deepfakes.

Confidentiality and GDPR: data exposed without awareness

The use of generative AI tools also raises data protection issues. When a lawyer submits a confidential contract to a language model, sensitive information may be exposed. The risk of inadvertent disclosure through prompts, and the importance of operating within sovereign environments, are therefore key areas of vigilance.

Proof of date, integrity and traceability: the evidential triad

In light of the limitations of AI, organisations must rethink their approach to digital evidence.

What AI does not provide, the chain of trust secures

AI can process and organise documents. However, for evidence to be admissible in court, three elements are required:

  • proof of date,
  • integrity,
  • traceability.

These safeguards fall within the scope of trust services defined by the eIDAS Regulation. Qualified timestamping is the first mechanism to adopt: it cryptographically associates a date and time with a document, with a legal presumption of accuracy recognised throughout the European Union.
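
To make the mechanism concrete, here is a minimal Python sketch of the integrity side of timestamping: only a cryptographic fingerprint of the document (here SHA-256) is bound to a date, and any later change to the file changes that fingerprint. Requesting and verifying a qualified RFC 3161 token from a trust service provider is deliberately left out of this illustration.

```python
# Minimal sketch of the integrity principle behind timestamping: only the
# document's SHA-256 fingerprint is bound to a date, and any later change to
# the file changes that fingerprint. Obtaining a qualified RFC 3161 token from
# a trust service provider is not shown here.
import hashlib

def fingerprint(path: str) -> str:
    """Compute the SHA-256 digest of a file, i.e. the value a timestamp token is bound to."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At timestamping time: archive the digest alongside the qualified timestamp token.
original_digest = fingerprint("contract_v2.pdf")

# At verification time: recompute and compare.
if fingerprint("contract_v2.pdf") != original_digest:
    print("Document has been altered since it was timestamped.")
else:
    print("Digest matches: integrity preserved.")
```

In a qualified eIDAS timestamp, that digest is signed together with a trusted time source by a qualified provider, which is what gives the date its legal presumption of accuracy.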

For strategic documents, the electronic seal, the digital equivalent of a qualified corporate stamp, provides an additional assurance of authenticity.

Complementarity between AI tools and trust services

AI is an effective processing tool:

  • it quickly identifies sensitive or high-risk documents,
  • it detects anomalies,
  • it accelerates file analysis.

However, it provides no guarantees. It may generate fictitious references. It may be misused to create forged material. In all cases, human verification remains necessary.

This is why trust mechanisms continue to play a decisive role. Qualified timestamping and the electronic seal deliver what AI cannot: legally enforceable proof of date, assurance of integrity and documented traceability. Evidency, a Qualified Trust Service Provider accredited by ANSSI (the French National Cybersecurity Agency), offers solutions that integrate with existing Legal Tech tools via API. Organisations already using electronic signatures can therefore reinforce the evidential weight of their systems by adding qualified timestamping.

This compatibility makes it possible to combine the efficiency of AI with the legal certainty of timestamping that complies with eIDAS requirements.
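
Purely for illustration, the sketch below shows how such an API call might be wired into an existing document workflow. The endpoint URL, request fields and response format are hypothetical and do not describe Evidency's actual API; note that only the document's digest, never the document itself, would typically be sent.

```python
# Hypothetical illustration of adding a timestamping step to a document
# workflow. The endpoint URL, request fields and response format are invented
# for this example and are NOT Evidency's actual API.
import hashlib
import requests

def request_timestamp(path: str, api_url: str, api_key: str) -> dict:
    """Send a document digest (never the document itself) to a timestamping service."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    response = requests.post(
        api_url,
        json={"hash": digest, "algorithm": "SHA-256"},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. a timestamp token to archive with the evidence file

# token = request_timestamp("contract_v2.pdf", "https://api.example.com/timestamp", "MY_API_KEY")
```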

Evidency’s technical and legal experts support legal departments in implementing these evidential workflows.

Conclusion

Artificial intelligence is reshaping legal professions and the management of digital evidence. It accelerates processing, facilitates analysis and detects anomalies. It also entails risks: hallucinations, deepfakes and the exposure of confidential data. Above all, it provides no certification.

The validity of evidence before the courts requires guarantees that only trust services can deliver: proof of date, integrity and traceability. This is where qualified timestamping plays its role, by ensuring both the date and the integrity of documents. Whether documents are processed using AI or secured through trust solutions, legal departments now have complementary tools that enhance productivity while preserving the evidential weight of their files.

[1] IMPERATRICE L., “Intermarché renonce finalement à l’IA avec le loup « mal-aimé » : il n’y aura rien dans les Photomatons”, Numerama, 18 December 2025: https://www.numerama.com/tech/2145017-intermarche-renonce-finalement-a-lia-avec-le-loup-mal-aime-il-ny-aura-rien-dans-les-photomatons.html

[2] PwC, in partnership with the Cercle Montesquieu and France Digitale, “Legaltech & IA générative : imaginer la fonction juridique du futur !”, February 2025.

[3] Tribunal Judiciaire de Périgueux, 18 December 2025, No. 23/00452; CHARLOTIN D., “Les hallucinations de l’intelligence artificielle s’invitent devant les juridictions françaises”, 19 December 2025: https://blog.doctrine.fr/les-hallucinations-de-lintelligence-artificielle-sinvitent-devant-les-juridictions-francaises/

Disclaimer

The opinions, presentations, figures and estimates set forth on the website including in the blog are for informational purposes only and should not be construed as legal advice. For legal advice you should contact a legal professional in your jurisdiction.

The use of any content on this website, including in this blog, for any commercial purposes, including resale, is prohibited, unless permission is first obtained from Evidency. Requests for permission should state the purpose and the extent of the reproduction. For non-commercial purposes, all material in this publication may be freely quoted or reprinted, but acknowledgement is required, together with a link to this website.

  • Nicolas Peigner

