
Artificial intelligence – AI Act: the creation of a principle of chain liability

Reading time: 8 min
Modification date: 28 January 2026

An analysis of the liability of AI operators through an examination of the particularly broad scope of application of the European regulation on artificial intelligence and the bases for implementing that liability.


Key takeaways

  • The AI Act establishes a chain of liability, under which each AI stakeholder is responsible for its specific role, from design through to deployment.
  • The Regulation applies extraterritorially where an AI system has an effect on the EU market or on EU citizens.
  • An actor may incur legal liability as a provider where it modifies an AI system or changes its intended use.
  • The AI Act imposes enhanced transparency between operators, including obligations relating to documentation, logging and notification.
  • Very high financial penalties are provided for in order to regulate the development of AI systems and protect fundamental rights.

Is the AI Act a constraint on innovation? Behind this question lies the issue of the conditions under which AI stakeholders incur liability. At the outset, it is worth recalling that global expenditure on the adoption of AI is estimated at USD 307 billion in 2025, with a cumulative impact of up to USD 19 trillion by 2030, accounting for 3.5% of global GDP. In other words, this is an emerging market that is set to become impossible to ignore in the coming years, all the more so because it is directly linked to the provision of digital services, notably those covered by the DMA and the DSA, regulations that technology service providers are themselves contesting in light of the new US administration.

It should also be recalled that the Draghi Report identifies competitiveness in the digital sector¹ as one of the key determinants of the EU’s future. Artificial intelligence therefore sits at the heart of the European Union’s economic and strategic priorities, just as it does for the United States.

It is against this particularly tense backdrop that the AI Act intervenes. Its purpose is to regulate a technology with two faces, akin to a digital Janus. On the one hand, it enables faster creation, improved aesthetics and greater precision and, for developers, the ability to write functional components in a matter of seconds. On the other hand, the capacity to generate false content, impersonate identities and capture protected data (personal data as well as artistic works) without any compensation creates a major risk for citizens’ rights.

By expressly drawing on the work of the High-Level Expert Group on Artificial Intelligence², the AI Act defines the principles governing any use of AI. This is the purpose of Article 1 of the regulation known as the AI Act³, which provides as follows: “The purpose of this Regulation is to improve the functioning of the internal market and to promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety and fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union, and supporting innovation.”

Everything is encapsulated in this provision, and the liability of AI stakeholders flows entirely from the intention to strike a balance between innovation and protection. This objective is reflected in the AI Act through the definition of an AI content production chain, identifying the responsible parties (provider, deployer, importer, authorised representative), their obligations and the types of content that may be produced.

To achieve this objective, the regulation relies on a legal approach well known in EU law, namely risk self-assessment coupled with dissuasive sanctions. However, it goes further. The AI Act establishes a principle of chain and communication-based liability that is directly linked to sanctions. It (in)directly imposes contractual relationships between operators, which necessarily broadens the scope of their liability.

The liability of operators will therefore be analysed by examining the regulation’s particularly broad scope of application and the foundations on which that liability may be engaged.

A multidimensional scope of application: the creation of a chain of responsibilities

Like its predecessor, the GDPR, the AI Act applies to any person located within the EU and to any service provided within the territory of the EU, irrespective of whether the operator is established inside or outside the Union⁴. Operators established outside the EU must appoint an authorised representative within the EU, who is responsible for liaising with local authorities and the AI Office.

However, the regulation does not stop there. This extraterritorial logic extends into the contractual arrangements between operators. The AI Act is also intended to apply to AI systems that are not “placed on the market, put into service or used in the Union”, where the output they produce is used in the Union. To justify this approach, the regulation gives the example of an operator entrusting the performance of a task to a third party located outside the EU⁵.

This mechanism, inspired by the same logic as the GDPR, makes the nature (or nationality?) of the data processed the guiding thread for determining the liability of those involved in its processing. The clearly stated objective of protecting the data of EU citizens implies that each link in the chain must exercise in-depth control over its partners or subcontractors. This is where the liability mechanism becomes central. Each operator is required to implement measures aimed at actively monitoring third-party subcontractors. This entails not only contractual revisions, including audit rights and incident reporting obligations, but also adjustments to pricing terms, since the provision of AI services will, as a matter of course, become more costly.

In practical terms, the AI Act introduces a new balance of power in commercial relationships, at a time when those arising from the GDPR are only just beginning to stabilise, while certain undertakings will also be required to factor DORA and/or NIS 2 considerations into their negotiations.

That said, this control framework is not confined to relationships between EU-based operators and non-EU operators. The same type of oversight is expected in relationships between operators established within the EU, namely between the provider (or manufacturer?), the authorised representative where applicable, the importer where applicable, the distributor where applicable, and the deployer.

This system of shifting functions raises two distinct issues. First, each participant may be regarded as a provider, even if it did not design the high-risk AI system, in three alternative or cumulative scenarios:

  • marketing a high-risk AI system under its own name,
  • making a substantial modification to the high-risk AI system in question,
  • and modifying the intended purpose of an AI system that was not previously a high-risk system so that it becomes one⁶.

In such cases, the initial provider is no longer treated as the provider, but remains subject to obligations of cooperation with regulatory authorities and to duties of assistance and documentation vis-à-vis the new provider, unless it can demonstrate that its AI system was not intended to be used as a high-risk AI system. In other words, the regulation provides, from the outset, for a mechanism of legal substitution in the event of a commercial and/or technical arrangement.
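For readers who prefer to see the rule laid out schematically, the following minimal sketch (in Python, with hypothetical names chosen purely for readability) restates the three triggers described above; it is an illustration, not an implementation of the Regulation.

# Illustrative sketch only: the three scenarios under which an operator
# is treated as the provider of a high-risk AI system. All names are
# hypothetical and simplified; this is not a compliance tool.
from dataclasses import dataclass

@dataclass
class OperatorAction:
    markets_under_own_name: bool      # markets a high-risk AI system under its own name
    substantially_modifies: bool      # makes a substantial modification to a high-risk AI system
    repurposes_into_high_risk: bool   # changes the intended purpose so the system becomes high-risk

def is_treated_as_provider(action: OperatorAction) -> bool:
    # The three scenarios are alternative: any one of them is enough,
    # and they may of course be combined.
    return (
        action.markets_under_own_name
        or action.substantially_modifies
        or action.repurposes_into_high_risk
    )

# Example: a deployer that rebrands and substantially reworks a third-party system
action = OperatorAction(
    markets_under_own_name=True,
    substantially_modifies=True,
    repurposes_into_high_risk=False,
)
print(is_treated_as_provider(action))  # True: this operator now bears the provider's obligations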

Secondly, the provider/deployer liability relationship permeates the entire chain of responsibility. In addition to complying with its own obligations (accountability⁷: documentation, security, transparency, oversight and logging where applicable), each function must notify the other party and the regulator, namely the AI Office, of any alteration to the AI system and/or any non-compliance with the requirements of the AI Act. As a result, each function in the chain is subject to multiple upstream and downstream notification obligations, operating as successive verification layers designed to ensure the reliability of information flows.

The triggering of each notification has two consequences. The first is full transparency, both vis-à-vis the supervisory authority and the contractual partner, regarding the operation of the AI system, its documentation, logs and algorithms. This level of transparency raises questions as to the protection of confidentiality in relation to know-how, manufacturing secrets and trade secrets. The second concerns the allocation of responsibility, both substantively and procedurally, for notifications between the parties and, above all, in relation to third parties, users and data subjects (within the meaning of the GDPR).

The result is an extensive framework of mutual and collective oversight, designed to operate for the benefit of the consumer. From this perspective, one can understand the position expressed by the Vice President of the United States, J.D. Vance, when he characterises this regulation as excessive or burdensome for technological innovation, going so far as to describe European regulation as a form of censorship⁸. However, to fully grasp the substance of this criticism, it is necessary to examine closely the mechanisms by which the liability of stakeholders is engaged.

The prohibition of practices that threaten social life

As recalled in the introduction, the AI Act seeks to preserve the safety of citizens and to ensure trust in AI tools. Viewed through this lens, it is easier to understand why the chapter relating to the prohibition of certain practices entered into force on 2 February 2025, ahead of all other provisions, and in particular before the chapter dealing with permitted practices.

Among the prohibited practices⁹ are, unsurprisingly, those that pose a threat to trust in human relationships and, one might add, to the digital economy itself, namely:

  • impairing or vitiating the consent of others,
  • carrying out certain forms of classification of individuals,
  • the improper use of biometric instruments,
  • and inferring the emotions of a natural person in the workplace or in educational establishments (except for medical or safety reasons).

It follows that any practice intended to deceive, or which has the effect of classifying and/or monitoring individuals on the basis of their actual, alleged or perceived social behaviour, is strictly prohibited.

In light of the Charter of Fundamental Rights¹⁰, these prohibitions are neither unexpected nor stigmatising. They are designed to preserve a space of freedom and trust between citizens themselves, but also between citizens and their states. The objective is therefore, quite logically, to protect European citizens against the predatory use of their data and against the erosion of the trust they place in the market.

The sanction for engaging in such prohibited practices may take the form of an administrative fine of up to EUR 35 million or 7% of global annual turnover for the preceding financial year, whichever is the higher¹¹. By way of illustration, taking Google’s turnover for 2024, which amounted to USD 348.16 billion¹², the potential fine could reach approximately USD 24.37 billion.

As regards accountability, non-compliance with the governance and control mechanisms is punishable by a fine of up to EUR 15 million or 3% of global annual turnover for the preceding financial year. Finally, the provision of inaccurate information to national authorities in the context of notifications and investigations is punishable by a fine of up to EUR 7.5 million or 1% of global annual turnover for the preceding financial year, in each case whichever amount is the higher.
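To give an order of magnitude, the short sketch below reproduces the arithmetic of these ceilings, assuming the Article 99 reading under which the applicable maximum for an undertaking is the fixed cap or the percentage of worldwide annual turnover, whichever is higher; the tier labels and the reuse of the USD turnover figure quoted above are purely illustrative.

# Illustrative arithmetic only: administrative-fine ceilings under Article 99,
# read as "fixed cap or percentage of worldwide annual turnover, whichever is higher".
# The turnover figure is the one quoted above (in USD, used as an order of
# magnitude even though the caps themselves are expressed in EUR).
TIERS = {
    "prohibited practices": (35_000_000, 0.07),
    "governance and control obligations": (15_000_000, 0.03),
    "inaccurate information to authorities": (7_500_000, 0.01),
}

def max_fine(turnover: float, fixed_cap: float, pct: float) -> float:
    return max(fixed_cap, pct * turnover)

turnover_2024 = 348.16e9  # Google turnover for 2024, as cited above

for label, (cap, pct) in TIERS.items():
    print(f"{label}: up to {max_fine(turnover_2024, cap, pct) / 1e9:.2f} billion")
# prohibited practices: up to 24.37 billion
# governance and control obligations: up to 10.44 billion
# inaccurate information to authorities: up to 3.48 billion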

Accordingly, while it is true that any AI project will have to pass through a particularly demanding legal and technical filter, the American criticism may find its true resonance when viewed in light of the sanctions regime introduced by the AI Act.

And these amounts relate only to the liability that may be engaged by regulatory authorities. There is nothing to prevent skilled legal practitioners from negotiating penalty clauses, early termination mechanisms, and a wide range of audit and ongoing verification arrangements covering the processes implemented by the various operators, in order to manage liability both erga omnes and inter partes.

From this perspective, the AI Act may succeed where the GDPR fell short, namely in compelling dominant yet non-compliant positions in the software sector to align with regulatory expectations.

Does this mean that the regulation is excessive?

Some might respond to this question by invoking a well-known quotation from Rabelais: “Knowledge enters not into a malevolent soul, and science without conscience is but the ruin of the soul”¹³. Others, taking a more pragmatic view, might argue that in a context of limited internal resources relative to demand, the most effective way to control external resources is to compel them to comply with transparency requirements.

References

[1] The Future of European Competitiveness – A Competitiveness Strategy for Europe, published on 9 September 2024

[2] April 2019 Guidelines of the High-Level Expert Group on Artificial Intelligence

[3] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)

[4] Article 2(1) of the AI Act

[5] Recital 22 of the AI Act

[6] Article 25 of the AI Act

[7] The terminology of the GDPR is used here to refer to internal compliance obligations.

[8] https://www.lemonde.fr/economie/article/2025/02/11/au-sommet-de-paris-la-charge-des-etats-unis-contre-la-censure-de-l-ia_6542711_3234.html

[9] Article 5 of the AI Act

[10] Charter of Fundamental Rights of the European Union of 7 December 2000

[11] Article 99 of the AI Act

[12] https://www.statista.com/statistics/267606/quarterly-revenue-of-google/#:~:text=The%20company%20amounted%20to%20an,Google%20sites%20and%20its%20network

[13] François Rabelais, Pantagruel, 1532

Disclaimer

The opinions, presentations, figures and estimates set forth on the website including in the blog are for informational purposes only and should not be construed as legal advice. For legal advice you should contact a legal professional in your jurisdiction.

The use of any content on this website, including in this blog, for any commercial purposes, including resale, is prohibited, unless permission is first obtained from Evidency. Request for permission should state the purpose and the extent of the reproduction. For non-commercial purposes, all material in this publication may be freely quoted or reprinted, but acknowledgement is required, together with a link to this website.

  • Romain Waïss-Moreau

    Romain Waïss-Moreau is a partner lawyer at LWM, specialising in intellectual property and innovative technologies. With more than ten years’ experience advising on complex, technology-driven projects, he analyses the legal, economic and regulatory issues surrounding innovation, particularly in relation to data, cybersecurity and digital assets, providing expert insight into developments in the legal framework and their operational impact.
