
AI Act: how high-risk AI operators can meet EU compliance requirements

Reading time: 14 min
Publication date: 12 June 2025
Modification date: 12 June 2025

The adoption of the Artificial Intelligence Act (AI Act) by the European Union in June 2024 sets a new precedent for the regulation of artificial intelligence across the bloc. Aimed at unifying AI governance, the regulation introduces a tiered risk-based framework that imposes substantial compliance duties on certain categories of AI system users and developers—especially those falling under the “high-risk” label.

With heightened scrutiny on AI deployed in sensitive sectors like healthcare, employment, finance, and public services, the legislation requires extensive documentation, monitoring, and human oversight mechanisms. For organisations involved in the deployment or development of such systems, the challenge now lies in both understanding the precise nature of these obligations and in being able to document compliance in a defensible way—especially in the event of audits, inspections, or legal proceedings.

This is where digital trust technologies such as qualified timestamping and certified electronic archiving become indispensable. While not formally required at this stage, their role is increasingly seen as instrumental in meeting evidentiary and technical standards under the regulation—particularly with upcoming harmonised standards expected to reference them explicitly.

Let’s explore who the regulation applies to, what high-risk classification means in practice, and how organisations can strategically align with compliance expectations under the AI Act.


The AI Act’s structure: a risk-based compliance ecosystem

The AI Act (Regulation (EU) 2024/1689) categorises AI systems based on the level of risk they pose to people’s safety, rights, and freedoms. The regulation defines four classes:

  • Minimal risk: Most systems fall here and are free of compliance duties (e.g. spam detection tools).
  • Limited risk: These systems must disclose AI use to users (e.g. generative AI chatbots).
  • High risk: Subject to stringent operational and governance obligations.
  • Unacceptable risk: Prohibited entirely due to their inherent threats (e.g. real-time biometric surveillance in public spaces).

The regulation zeroes in on high-risk systems that can influence individuals’ rights or livelihood—whether by shaping access to public services, employment opportunities, education pathways, or financial products. It covers a wide range of use cases such as:

  • AI used in hiring, job evaluations, or HR workflows,
  • Automated systems in healthcare diagnostics or treatment,
  • AI-driven tools in law enforcement or border control,
  • Decision-making systems used by courts or public administrations.

The scope is not limited to how AI is technically implemented, but rather how it affects outcomes that bear legal, economic, or social consequences for individuals.

Who is an AI operator under the regulation?

A key feature of the AI Act is its broad applicability across the AI system lifecycle. Compliance responsibilities are not limited to creators of AI solutions; they also extend to those who implement or operate them professionally.

Two principal roles are identified:

Providers: Entities that design, develop, or place AI systems on the EU market. This includes software companies, research institutions, and OEMs integrating AI into their products.

Deployers: Organisations that use AI systems professionally, including enterprises using off-the-shelf AI tools to automate internal workflows or decision-making processes.

Each role comes with its own obligations:

  • Providers must compile a comprehensive technical dossier, ensure the accuracy and safety of their systems, undergo conformity assessments, and keep the technical documentation at the disposal of national authorities for ten years after the system is placed on the market or put into service.
  • Deployers are expected to operate AI tools in accordance with the provider’s instructions, ensure meaningful human oversight, and retain the logs generated by the system for at least six months.

Even if an AI system is externally sourced, the organisation deploying it remains responsible for ensuring its use aligns with the regulatory requirements. This extends to sectors like finance (e.g. credit scoring), energy (e.g. smart grid optimisation), and education (e.g. automated grading tools).

Key compliance requirements for high-risk AI

While the AI Act does not introduce a certification scheme, it establishes clear operational requirements that must be met before a high-risk system can be placed on the market or used legally. Among the most critical obligations:

Technical documentation (articles 11 & 18)

A comprehensive file detailing system design, testing procedures, intended purpose, risk mitigation strategies, and performance metrics must be maintained and made available for review by regulators. It must be kept for at least ten years after the system is placed on the market.
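
Many providers keep the core of this file as structured, machine-readable data so that a current snapshot can be exported for regulators on demand. The sketch below is purely illustrative: the field names are assumptions made for the example, not terminology taken from the regulation, and a real technical dossier is far more extensive.

```python
# Minimal sketch of a machine-readable technical-file manifest for a
# high-risk AI system. Field names are illustrative assumptions, not
# terms mandated by the AI Act.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TechnicalFile:
    system_name: str
    version: str
    intended_purpose: str
    risk_mitigations: list[str] = field(default_factory=list)
    test_procedures: list[str] = field(default_factory=list)
    performance_metrics: dict[str, float] = field(default_factory=dict)

tf = TechnicalFile(
    system_name="cv-screening-assistant",  # hypothetical system
    version="2.3.1",
    intended_purpose="Rank job applications for human review",
    risk_mitigations=["bias testing on protected attributes", "human-in-the-loop review"],
    test_procedures=["hold-out evaluation", "adversarial robustness checks"],
    performance_metrics={"accuracy": 0.91, "false_positive_rate": 0.04},
)

# Export a snapshot that can be archived and produced on request.
print(json.dumps(asdict(tf), indent=2))
```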

Automated logging functionality (article 12)

High-risk AI systems must be equipped to record key operational events—inputs, outputs, system alerts, and decision-making processes—to ensure traceability across the lifecycle.
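As a concrete illustration, the sketch below wraps a hypothetical model call with append-only, structured event records. The field names and the predict() stub are assumptions made for the example; Article 12 sets the goal of traceability of inputs, outputs, and notable events rather than a specific format.

```python
# Minimal sketch of structured, append-only event logging for a
# high-risk AI system. Field names and predict() are illustrative
# assumptions, not requirements spelled out in Article 12.
import json, uuid
from datetime import datetime, timezone

LOG_PATH = "ai_events.jsonl"  # append-only JSON Lines file

def log_event(event_type: str, payload: dict) -> None:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,        # e.g. "input", "output", "alert"
        "model_version": "2.3.1",        # hypothetical version identifier
        **payload,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: wrap a (hypothetical) model call so inputs and outputs are traced.
def predict(features: dict) -> dict:
    return {"score": 0.72, "decision": "refer_to_human"}  # placeholder model

features = {"applicant_id": "A-1042", "experience_years": 6}
log_event("input", {"features": features})
result = predict(features)
log_event("output", {"result": result})
```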

Retention of logs (article 19)

Event logs must be securely stored for a minimum of six months. Data integrity and accessibility must be guaranteed during this period.
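
A common way to make retained logs demonstrably intact is to compute a chained cryptographic digest over the entries and store it separately, so that any later alteration can be detected by recomputation. The chaining scheme below illustrates that idea; it is not a mechanism prescribed by Article 19.

```python
# Sketch of a simple hash chain over log entries so later tampering is
# detectable. Illustrative integrity mechanism, not one mandated by the AI Act.
import hashlib, json

def chain_digests(entries: list[dict]) -> str:
    digest = b"\x00" * 32  # genesis value
    for entry in entries:
        serialized = json.dumps(entry, sort_keys=True).encode("utf-8")
        digest = hashlib.sha256(digest + serialized).digest()
    return digest.hex()

entries = [
    {"event_type": "input", "timestamp": "2025-06-12T09:00:00+00:00"},
    {"event_type": "output", "timestamp": "2025-06-12T09:00:01+00:00"},
]
seal = chain_digests(entries)
print("chained digest:", seal)
# Recomputing the digest over the archived entries and comparing it with the
# stored value reveals any modification of the retained logs.
```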

Effective human oversight

Mechanisms must be in place to allow human intervention, particularly where system outputs significantly impact individuals. Oversight must be real, not symbolic.
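
In practice, meaningful oversight usually means that outputs which could significantly affect a person are held for a human decision rather than applied automatically. The following sketch shows one simple gating pattern; the confidence threshold and review queue are assumptions for the example.

```python
# Minimal sketch of a human-in-the-loop gate: automated outputs that could
# significantly affect a person are held for review instead of being applied
# automatically. Threshold and queue are illustrative assumptions.
REVIEW_THRESHOLD = 0.60  # hypothetical confidence cut-off

review_queue: list[dict] = []

def apply_or_escalate(decision: dict) -> str:
    if decision["confidence"] < REVIEW_THRESHOLD or decision["adverse_effect"]:
        review_queue.append(decision)  # a human must confirm or override
        return "pending_human_review"
    return "applied_automatically"

print(apply_or_escalate({"confidence": 0.45, "adverse_effect": True}))   # escalated
print(apply_or_escalate({"confidence": 0.95, "adverse_effect": False}))  # applied
```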

Conformity assessments & CE marking

Before use or commercialisation, high-risk systems must pass a conformity assessment (self-assessed or by a notified body depending on the context). A CE marking is required to certify compliance.

Together, these obligations reinforce a principle of “evidence-based compliance”: assertions of safety or accuracy are insufficient without traceable, verifiable documentation to back them up.

The compliance role of Qualified Timestamping and Archiving

Although not explicitly named in the current text of the AI Act, qualified digital timestamping and certified electronic archiving are rapidly emerging as best practices for compliance—especially in relation to logging, traceability, and legal defensibility.

Under article 12, AI operators must ensure full lifecycle logging of events and interactions with the system. However, logs only hold legal value if they can be shown to be authentic, tamper-proof, and verifiably dated. That’s where qualified trust services—defined under the eIDAS regulation—become essential.

Qualified Timestamping provides a legally recognised, cryptographically secure way to establish the date and time of digital records. Under eIDAS, these timestamps benefit from a presumption of reliability across the EU.

Probative Electronic Archiving ensures that data such as logs, technical files, and audit trails are preserved in a manner that maintains their evidentiary value over time, in accordance with recognised standards like NF Z42-013 or ISO 14641.
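
To give a sense of how this fits together: the operator computes a digest of the archived log file and has a qualified Time Stamping Authority (TSA) countersign it under RFC 3161. The sketch below stops at the digest step, because request formats and endpoints vary by trust service provider; the TSA submission is described only in comments.

```python
# Sketch: compute a SHA-256 digest of an archived log file. In practice this
# digest would be wrapped in an RFC 3161 TimeStampReq and submitted to a
# qualified Time Stamping Authority; the returned token proves the file
# existed, unmodified, at that point in time. TSA details are provider-specific
# and therefore omitted here.
import hashlib

def file_digest(path: str) -> bytes:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

digest = file_digest("ai_events.jsonl")  # hypothetical archived log file
print("sha256:", digest.hex())
# Next step (outside this sketch): send the digest to a qualified TSA and
# store the returned timestamp token alongside the archive for later verification.
```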

With a harmonised European standard on AI logging expected soon, it is increasingly likely that these mechanisms will become not just recommended—but expected—as part of future technical specifications.

Organisations that integrate these solutions now can pre-emptively meet compliance thresholds and avoid costly retrofitting later.

Understanding the penalties: why early action matters

Failure to comply with the AI Act’s provisions carries significant financial and reputational consequences.

The regulation outlines a tiered penalty structure:

  • Up to €15 million or 3% of annual global turnover (whichever is higher) for breaches related to high-risk systems.
  • Up to €7.5 million or 1% of turnover for misrepresentation or failure to provide accurate documentation.
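
To illustrate the “whichever is higher” rule: a group with €2 billion in annual worldwide turnover would face a ceiling of €60 million (3% of turnover) for a high-risk breach, whereas a provider with €100 million in turnover would be capped at the fixed €15 million figure.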

The timeline to enforcement is as follows:

  • August 2024: Regulation enters into force.
  • August 2025: Member States designate supervisory authorities and begin applying penalties.
  • August 2026: Full enforcement for high-risk systems begins.
  • 2030: Transitional period ends for AI used in public-sector services.

In France, entities like CNIL, ANSSI, and DGCCRF are expected to oversee enforcement. Their roles will be defined in national law.

The AI Act does not apply within the UK itself, but UK organisations that place AI systems on the EU market, or whose systems’ outputs are used in the EU, still fall within its scope. Domestically, bodies such as the Information Commissioner’s Office (ICO), the National Cyber Security Centre (NCSC), and the Competition and Markets Authority (CMA) are expected to play key roles in overseeing AI under the UK’s own regulatory approach.

Conclusion: compliance as a strategic imperative

The AI Act reflects a broader shift in Europe’s digital regulation—one where trust, transparency, and accountability are no longer optional, but foundational. For organisations working with high-risk AI systems, this means embracing a proactive, evidence-based approach to compliance.

Technologies like qualified timestamping and certified archiving should not be viewed as ancillary. They are fast becoming core components of a compliance strategy that spans legal, technical, and operational domains.

By embedding digital trust mechanisms from the outset, organisations can protect themselves from regulatory exposure while fostering a culture of responsible AI development—one that reinforces public confidence and ethical use of technology.

Disclaimer

The opinions, presentations, figures and estimates set forth on the website including in the blog are for informational purposes only and should not be construed as legal advice. For legal advice you should contact a legal professional in your jurisdiction.

The use of any content on this website, including in this blog, for any commercial purposes, including resale, is prohibited, unless permission is first obtained from Evidency. Request for permission should state the purpose and the extent of the reproduction. For non-commercial purposes, all material in this publication may be freely quoted or reprinted, but acknowledgement is required, together with a link to this website.

About the author

Stéphane Père
Stéphane is the Managing Director of Evidency. Formerly the Chief Data Officer at The Economist Group, he has over 20 years of international experience in the technology and media sectors.
