Key takeaways
- AI is becoming a major tool to fight social and tax fraud, but it is still underused by public authorities.
- AI technologies (machine learning, deep learning, generative AI) can process large volumes of data and detect increasingly sophisticated fraud patterns.
- Public administrations already rely on advanced systems (data mining, e-invoicing, aerial imagery, scanning tools), but many remain behind the current AI frontier.
- Generative AI could boost efficiency by breaking down data silos, supporting auditors through chatbot-like interfaces, and accelerating software development.
- Strong safeguards are essential: transparency, GDPR compliance, systematic human oversight, and limits to avoid automated and context-blind enforcement decisions.
Retrospective
The Senate report devoted to the role of AI in the fight against fraud first notes a difference in organisational culture between the Directorate General of Public Finances (DGFIP), which is more proactive, and the social security funds, which are more reserved. According to the report, however, AI offers many advantages: detecting fraud stricto sensu, but also preventing tax fraud upstream and, in the social sphere, identifying insured persons who fail to claim social benefits to which they are entitled.
The prevention of tax fraud begins even before the tax return is filed. The tax authorities now use technologies such as mandatory electronic invoicing for VAT and blockchain-based recording of transactions, making it virtually impossible to alter a document once it has passed through the administration’s systems.
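To make the tamper-evidence idea concrete, here is a minimal, purely illustrative Python sketch (not the administration’s actual system) showing how chaining document hashes makes any later alteration detectable:

```python
import hashlib
import json

def record_hash(record: dict, previous_hash: str) -> str:
    """Hash an invoice record together with the hash of the previous entry."""
    payload = json.dumps(record, sort_keys=True) + previous_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical invoices, registered in order of receipt.
invoices = [
    {"id": "INV-001", "amount_eur": 1200.00, "vat_eur": 240.00},
    {"id": "INV-002", "amount_eur": 560.00, "vat_eur": 112.00},
]

# Build the chain: each entry commits to everything recorded before it.
chain, prev = [], "genesis"
for inv in invoices:
    prev = record_hash(inv, prev)
    chain.append(prev)

# Altering any earlier record breaks the chain from that point onward.
invoices[0]["amount_eur"] = 900.00
prev = "genesis"
for inv, expected in zip(invoices, chain):
    prev = record_hash(inv, prev)
    print(inv["id"], "intact" if prev == expected else "tampering detected")
```

Because each entry commits to everything recorded before it, changing a single invoice invalidates every subsequent hash in the chain.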
After returns have been filed, the detection of tax fraud relies largely on data mining, a technique the DGFIP has used since 2014. Data mining, in this context, is a complex method of cross-referencing data on the basis of a statistical model; it involves neither machine learning nor deep learning. The data come from internal sources, public sources and data collected specifically for analytical purposes, and the administration has several means of obtaining them. For example, the “Foncier Innovation” project uses aerial images to detect undeclared constructions, and under the Finance Acts for 2020 (Article 154) and 2024 (Article 112) the administration also has investigative powers over social media in order to collect information.
This tool is also used in the social field. Since 2011, the Family Allowance Fund (CAF) has relied on it through the DMDE (data mining of incoming data), which, according to the report, is used to schedule inspections among households receiving benefits. A statistical algorithm assigns each file a “risk score” (from 0 to 1) on the basis of approximately 40 data points or criteria, out of the 300 contained in a file. The algorithm operates solely on CAF data: no cross-checking is carried out with other bodies such as France Travail (formerly Pôle emploi). According to the report, the DMDE now accounts for 70% of on-site inspections.
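The report does not disclose the model behind the DMDE score, so the following is only a hedged sketch of how such a 0-to-1 risk score could be produced, here with a logistic regression over synthetic data; the number of criteria matches the report, but the features, labels and threshold are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for benefit files: ~40 numeric criteria per file
# (hypothetical examples: declared income, household size, declaration delays).
n_files, n_criteria = 5000, 40
X = rng.normal(size=(n_files, n_criteria))
# Hypothetical labels from past inspections: 1 = irregularity found.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n_files) > 2.5).astype(int)

# A simple scoring model; the CAF's actual model is not public.
model = LogisticRegression(max_iter=1000).fit(X, y)

# The "risk score" is the predicted probability, between 0 and 1; files above
# a chosen threshold are queued for an on-site inspection by a human officer.
new_file = rng.normal(size=(1, n_criteria))
score = model.predict_proba(new_file)[0, 1]
print(f"risk score: {score:.2f}",
      "-> schedule inspection" if score > 0.7 else "-> no action")
```

In practice such a score would only prioritise files for inspection; the decision to investigate, and any consequence for the household, would still rest with a human officer.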
However, these statistical tools remain far removed from the most recent advances in AI, in particular generative AI; the report describes them as “very far from the technological frontier”.
Prospects
The rapporteurs see several tools shaping the future of the AI-assisted fight against social and tax fraud. In the tax field, many projects are under way, such as the “100% Scanning” programme, which uses an algorithm to analyse X-ray scanner images in order to detect narcotics, or the “Foncier Innovation” project, which is to be extended to the whole of France.
But the report goes further: the senators consider that generative AI could play a key role in view of legislative inflation and the exponentially growing complexity of the rules. Because generative AI operates on natural language, and therefore on text, it could analyse legal texts much more quickly and ultimately facilitate their implementation.
In the tax field, a first benefit of generative AI lies in breaking down information silos. It could serve as an ergonomic interface, in the form of a chatbot made available to tax audit officers to cross-check information. The PILAT project, launched in 2018, is based on this idea but is more than two years behind schedule.
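The PILAT architecture is not described in the report; purely as an illustration of the silo-breaking idea, an auditor-facing chatbot might route a question to several internal data sources and hand the assembled facts to a generative model for summarising. All source names and figures below are hypothetical, and the model call is stubbed out:

```python
# All source names, figures and the summarisation step below are hypothetical.

def query_tax_returns(taxpayer_id: str) -> str:
    """Stand-in for a lookup in the tax-return database."""
    return f"Declared turnover for {taxpayer_id}: EUR 410,000 (2023)."

def query_invoicing_platform(taxpayer_id: str) -> str:
    """Stand-in for a lookup in the e-invoicing platform."""
    return f"E-invoices issued by {taxpayer_id}: EUR 515,000 (2023)."

def summarise_with_llm(question: str, facts: list[str]) -> str:
    """Stand-in for a call to a generative model; here we simply lay out the facts."""
    bullet_list = "\n".join(f"- {fact}" for fact in facts)
    return f"Question: {question}\nInformation gathered:\n{bullet_list}"

def auditor_chatbot(question: str, taxpayer_id: str) -> str:
    """Cross-check information held in separate silos and present it as one answer."""
    facts = [query_tax_returns(taxpayer_id), query_invoicing_platform(taxpayer_id)]
    return summarise_with_llm(question, facts)

print(auditor_chatbot("Does declared turnover match the invoicing data?", "FR-12345"))
```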
The report identifies another use case: generative AI could assist developers at Urssaf (which maintains around one hundred applications) by automatically generating code. The expansion of generative AI nevertheless entails certain risks, particularly with regard to privacy and the confidentiality of taxpayers’ and insured persons’ data. Holding personal information outside the traditional audit procedures and the conventional right of communication is a risk acknowledged by both the CNIL and the Constitutional Council.
To protect taxpayers, several safeguards have been put in place. These include the right to transparency, which requires the tax administration to inform taxpayers explicitly about the use of AI, in accordance with the Digital Republic Act and the GDPR (General Data Protection Regulation). At European level, the recently adopted AI Act aims to harmonise AI rules across Europe, classifying AI systems according to their level of risk and imposing stricter requirements on high-risk systems.
In addition, human intervention is always required to verify the coherence of the information. Where AI has contributed both to the detection of the fraud and to the drafting of the observation letter or formal notice (via generative AI), there is a significant risk of automated reassessments that lack nuance and ignore the specific context of the business. The role of the lawyer, as adviser to businesses, will be essential here: mitigating the excesses or errors arising from automation and providing the elements needed to limit the potentially binary, or even arbitrary, nature of AI.