{"id":5411,"date":"2025-06-12T11:12:52","date_gmt":"2025-06-12T09:12:52","guid":{"rendered":"https:\/\/evidency.io\/ia-act-operateur-agree-sanctions\/"},"modified":"2026-02-05T14:23:12","modified_gmt":"2026-02-05T13:23:12","slug":"ia-act-eu-compliance-risks","status":"publish","type":"post","link":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/","title":{"rendered":"AI Act: how high-risk AI operators can meet EU compliance requirements"},"content":{"rendered":"<p><strong>Key takeaways<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The AI Act establishes a risk-based approach, with enhanced obligations for so-called \u201chigh-risk\u201d AI systems used in sensitive sectors.<\/li>\n\n\n\n<li>The organisations concerned are not limited to developers, but also include AI deployers, who are responsible for use, human oversight and the retention of evidence.<\/li>\n\n\n\n<li>Compliance depends on the ability to demonstrate system traceability throughout its entire lifecycle (documentation, audit trails, logging and human oversight).<\/li>\n\n\n\n<li>Qualified timestamping and evidential electronic archiving emerge as key mechanisms for ensuring the integrity, a legally certain date, and the enforceability of compliance evidence.<\/li>\n\n\n\n<li>Anticipating these requirements at an early stage is necessary to limit exposure to significant financial penalties and to secure the future operation of AI systems.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"titre1\">The AI Act\u2019s structure: a risk-based compliance ecosystem<\/h2>\n\n\n\n<p>The AI Act (Regulation EU 2024\/1689) categorises AI systems based on the level of risk they pose to people\u2019s safety, rights, and freedoms. The regulation defines four classes:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Minimal risk:<\/strong> Most systems fall here and are free of compliance duties (e.g. 
spam detection tools).<\/li>\n\n\n\n<li><strong>Limited risk: <\/strong>These systems must disclose AI use to users (e.g. generative AI chatbots).<\/li>\n\n\n\n<li><strong>High risk: <\/strong>Subject to stringent operational and governance obligations.<\/li>\n\n\n\n<li><strong>Unacceptable risk:<\/strong> Prohibited entirely due to their inherent threats (e.g. real-time biometric surveillance in public spaces).<\/li>\n<\/ul>\n\n\n\n<p>The regulation zeroes in on high-risk systems that can influence individuals\u2019 rights or livelihood\u2014whether by shaping access to public services, employment opportunities, education pathways, or financial products. It covers a wide range of use cases such as:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AI used in hiring, job evaluations, or HR workflows,<\/li>\n\n\n\n<li>Automated systems in healthcare diagnostics or treatment,<\/li>\n\n\n\n<li>AI-driven tools in law enforcement or border control,<\/li>\n\n\n\n<li>Decision-making systems used by courts or public administrations.<\/li>\n<\/ul>\n\n\n\n<p>The scope is not limited to how AI is technically implemented, but rather how it affects outcomes that bear <strong>legal, economic, or social consequences for individuals<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"titre2\">Who is an AI operator under the regulation?<\/h2>\n\n\n\n<p>A key feature of the AI Act is its broad applicability across the AI system lifecycle. Compliance responsibilities are not limited to creators of AI solutions; they also extend to those who implement or operate them professionally.<\/p>\n\n\n\n<p>Two principal roles are identified:<\/p>\n\n\n\n<p><strong>Providers: <\/strong>Entities that <strong>design, develop, or place AI systems on the EU market<\/strong>. 
This includes software companies, research institutions, and OEMs integrating AI into their products.<\/p>\n\n\n\n<p><strong>Deployers: <\/strong>Organisations that use AI systems professionally, including enterprises using off-the-shelf AI tools to <strong>automate internal workflows or decision-making processes<\/strong>.<\/p>\n\n\n\n<p>Each role comes with its own obligations:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Providers <\/strong>must compile a comprehensive technical dossier, ensure accuracy and safety of their systems, undergo conformity assessments, and maintain records for at least ten years.<\/li>\n\n\n\n<li><strong>Deployers <\/strong>are expected to operate AI tools as per the provider\u2019s instructions, ensure meaningful human oversight, and maintain event logs for at least six months post-deployment.<\/li>\n<\/ul>\n\n\n\n<p>Even if an AI system is externally sourced, the organisation deploying it remains <strong>responsible for ensuring its use aligns with the regulatory requirements<\/strong>. This extends to sectors like finance (e.g. credit scoring), energy (e.g. smart grid optimisation), and education (e.g. automated grading tools).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"titre3\">Key compliance requirements for high-risk AI<\/h2>\n\n\n\n<p>While the AI Act does not introduce a certification scheme, it establishes <strong>clear operational requirements<\/strong> that must be met before a high-risk system can be placed on the market or used legally. Among the most critical obligations:<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"titre2\">Technical documentation (articles 11 &amp; 18)<\/h3>\n\n\n\n<p>A <strong>comprehensive file detailing system design, testing procedures, intended purpose, risk mitigation strategies, and performance metrics <\/strong>must be maintained and made available for review by regulators. It must be available for <strong>at least 10 years<\/strong>. 
<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Automated logging functionality (article 12)<\/h3>\n\n\n\n<p>High-risk AI systems must be equipped to record key operational events\u2014inputs, outputs, system alerts, and decision-making processes\u2014to ensure <strong>traceability across the lifecycle<\/strong>.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Retention of logs (article 19)<\/h3>\n\n\n\n<p>Event logs must be securely stored for a <strong>minimum of six months<\/strong>. Data integrity and accessibility must be guaranteed during this period.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Effective human oversight<\/h3>\n\n\n\n<p>Mechanisms must be in place to <strong>allow human intervention<\/strong>, particularly where system outputs significantly impact individuals. Oversight must be real, not symbolic.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Conformity assessments &amp; CE marking<\/h3>\n\n\n\n<p>Before use or commercialisation, high-risk systems must pass a conformity assessment (self-assessed or by a notified body depending on the context). 
A <strong>CE marking<\/strong> is required to certify compliance.<\/p>\n\n\n\n<p>Together, these obligations reinforce a principle of \u201cevidence-based compliance\u201d: assertions of safety or accuracy are insufficient without traceable, verifiable documentation to back them up.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"titre4\">The compliance role of Qualified Timestamping and Archiving<\/h2>\n\n\n\n<p>Although not explicitly named in the current text of the AI Act, <a href=\"https:\/\/evidency.io\/en\/qualified-timestamping\/\">qualified digital timestamping<\/a> and <a href=\"https:\/\/evidency.io\/en\/electronic-archiving\/\">certified electronic archiving<\/a> are rapidly emerging as best practices for compliance\u2014especially in relation to logging, traceability, and legal defensibility.<\/p>\n\n\n\n<p>Under article 12, AI operators must ensure full <strong>lifecycle logging of events and interactions with the system<\/strong>. However, logs only hold legal value if they can be shown to be <strong>authentic, tamper-proof, and verifiably dated<\/strong>. That\u2019s where qualified trust services\u2014defined under the <a href=\"https:\/\/evidency.io\/en\/eidas-2-0-and-european-digital-identity\/\">eIDAS regulation<\/a>\u2014become essential.<\/p>\n\n\n\n<p>Qualified Timestamping provides a legally recognised, cryptographically secure way to establish the date and time of digital records. 
Under eIDAS, these timestamps benefit from a <strong>presumption of reliability across the EU<\/strong>.<\/p>\n\n\n\n<p>Probative Electronic Archiving ensures that data such as logs, technical files, and audit trails are preserved in a manner that <strong>maintains their evidentiary value over time<\/strong>, in accordance with recognised standards like NF Z42-013 or ISO 14641.<\/p>\n\n\n\n<p>With a <strong>harmonised European standard on AI logging<\/strong> expected soon, it is increasingly likely that these mechanisms will become not just recommended\u2014but expected\u2014as part of future technical specifications.<\/p>\n\n\n\n<p>Organisations that integrate these solutions now can pre-emptively meet compliance thresholds and <strong>avoid costly retrofitting later<\/strong>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"titre5\">Understanding the penalties: why early action matters<\/h2>\n\n\n\n<p>Failure to comply with the AI Act\u2019s provisions carries significant financial and reputational consequences. 
<\/p>\n\n\n\n<p>The regulation outlines a tiered penalty structure:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Up to <strong>\u20ac35 million or 7% of annual global turnover<\/strong> (whichever is higher) for the use of prohibited AI practices.<\/li>\n\n\n\n<li>Up to <strong>\u20ac15 million or 3% of annual global turnover<\/strong> (whichever is higher) for breaches related to high-risk systems.<\/li>\n\n\n\n<li>Up to <strong>\u20ac7.5 million or 1% of turnover<\/strong> for misrepresentation or failure to provide accurate documentation.<\/li>\n<\/ul>\n\n\n\n<p>The timeline to enforcement is as follows:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>August 2024:<\/strong> Regulation enters into force.<\/li>\n\n\n\n<li><strong>August 2025:<\/strong> Member States designate supervisory authorities and begin applying penalties.<\/li>\n\n\n\n<li><strong>August 2026:<\/strong> Full enforcement for high-risk systems begins.<\/li>\n\n\n\n<li><strong>2030:<\/strong> Transitional period ends for AI used in public-sector services.<\/li>\n<\/ul>\n\n\n\n<p>In France, entities like CNIL, ANSSI, and DGCCRF are expected to oversee enforcement. Their roles will be defined in national law.<\/p>\n\n\n\n<p>The AI Act does not apply directly in the UK; however, UK organisations that place AI systems on the EU market, or whose systems\u2019 outputs are used in the EU, fall within its extraterritorial scope. Domestically, bodies such as the Information Commissioner\u2019s Office (ICO), the National Cyber Security Centre (NCSC), and the Competition and Markets Authority (CMA) are expected to oversee AI within their existing remits.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion: compliance as a strategic imperative<\/h2>\n\n\n\n<p>The AI Act reflects a <strong>broader shift in Europe\u2019s digital regulation<\/strong>\u2014one where <strong>trust, transparency, and accountability<\/strong> are no longer optional, but foundational. For organisations working with high-risk AI systems, this means embracing a proactive, evidence-based approach to compliance.<\/p>\n\n\n\n<p>Technologies like qualified timestamping and certified archiving should not be viewed as ancillary. 
They are fast becoming <strong>core components of a compliance strategy <\/strong>that spans legal, technical, and operational domains.<\/p>\n\n\n\n<p>By embedding <a href=\"https:\/\/evidency.io\/en\/adapting-evidence-strategies-in-the-digital-era\/\">digital trust<\/a> mechanisms from the outset, organisations can protect themselves from regulatory exposure while fostering a culture of responsible AI development\u2014one that reinforces <strong>public confidence and ethical use of technology<\/strong>.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-buttons is-content-justification-center is-layout-flex wp-container-core-buttons-is-layout-16018d1d wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link has-background wp-element-button\" href=\"https:\/\/evidency.io\/en\/contact-evidency\/\" style=\"background-color:#0c0171\">Contact our experts to discuss the impacts of the AI Act on your business<\/a><\/div>\n<\/div>\n\n\n\n<p><\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-group is-layout-flow wp-block-group-is-layout-flow\">\n<p><strong><em>Disclaimer<\/em><\/strong><\/p>\n\n\n\n<p><em>The opinions, presentations, figures and estimates set forth on the website including in the blog are for informational purposes only and should not be construed as legal advice. For legal advice you should contact a legal professional in your jurisdiction.<\/em><\/p>\n\n\n\n<p><em>The use of any content on this website, including in this blog, for any commercial purposes, including resale, is prohibited, unless permission is first obtained from Evidency. Request for permission should state the purpose and the extent of the reproduction. 
For non-commercial purposes, all material in this publication may be freely quoted or reprinted, but acknowledgement is required, together with a link to this website.<\/em><\/p>\n\n\n\n<p><\/p>\n<\/div>\n\n\n","protected":false},"excerpt":{"rendered":"<p>Key takeaways The AI Act\u2019s structure: a risk-based compliance ecosystem The AI Act (Regulation EU 2024\/1689) categorises AI systems based on the level of risk they pose to people\u2019s safety, rights, and freedoms. The regulation defines four classes: The regulation zeroes in on high-risk systems that can influence individuals\u2019 rights or livelihood\u2014whether by shaping access [&hellip;]<\/p>\n","protected":false},"author":246879326,"featured_media":10450,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[6423,6481,6345],"tags":[],"ppma_author":[6443],"class_list":["post-5411","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-une","category-reglementations","category-horodatage","author-stephane-pere"],"acf":{"profil":"","bio":""},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>AI Act: ensuring compliance as a high-risk AI system operator<\/title>\n<meta name=\"description\" content=\"The AI Act introduces strict obligations for high-risk AI systems. 
Learn how trust-enabling technologies like qualified timestamping can support compliance and reduce the risk of sanctions.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI Act: ensuring compliance as a high-risk AI system operator\" \/>\n<meta property=\"og:description\" content=\"The AI Act introduces strict obligations for high-risk AI systems. Learn how trust-enabling technologies like qualified timestamping can support compliance and reduce the risk of sanctions.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/\" \/>\n<meta property=\"og:site_name\" content=\"Evidency\" \/>\n<meta property=\"article:published_time\" content=\"2025-06-12T09:12:52+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-02-05T13:23:12+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"950\" \/>\n\t<meta property=\"og:image:height\" content=\"500\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"St\u00e9phane P\u00e8re\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"St\u00e9phane P\u00e8re\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"NewsArticle\",\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/\"},\"author\":{\"name\":\"St\u00e9phane P\u00e8re\",\"@id\":\"https:\/\/evidency.io\/en\/#\/schema\/person\/1888314d58ec64690ef29f8594d44cbf\"},\"headline\":\"AI Act: how high-risk AI operators can meet EU compliance requirements\",\"datePublished\":\"2025-06-12T09:12:52+00:00\",\"dateModified\":\"2026-02-05T13:23:12+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/\"},\"wordCount\":1322,\"publisher\":{\"@id\":\"https:\/\/evidency.io\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp\",\"articleSection\":[\"Une\",\"R\u00e9glementations\",\"Horodatage \u00e9lectronique\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/\",\"url\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/\",\"name\":\"AI Act: ensuring compliance as a high-risk AI system operator\",\"isPartOf\":{\"@id\":\"https:\/\/evidency.io\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp\",\"datePublished\":\"2025-06-12T09:12:52+00:00\",\"dateModified\":\"2026-02-05T13:23:12+00:00\",\"description\":\"The AI Act introduces strict obligations for high-risk AI systems. 
Learn how trust-enabling technologies like qualified timestamping can support compliance and reduce the risk of sanctions.\",\"breadcrumb\":{\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#primaryimage\",\"url\":\"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp\",\"contentUrl\":\"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp\",\"width\":950,\"height\":500,\"caption\":\"IA Act\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Evidency\",\"item\":\"https:\/\/evidency.io\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI Act: how high-risk AI operators can meet EU compliance requirements\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/evidency.io\/en\/#website\",\"url\":\"https:\/\/evidency.io\/en\/\",\"name\":\"Evidency\",\"description\":\"Sp\u00e9cialiste de la preuve 
num\u00e9rique\",\"publisher\":{\"@id\":\"https:\/\/evidency.io\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/evidency.io\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/evidency.io\/en\/#organization\",\"name\":\"Evidency\",\"url\":\"https:\/\/evidency.io\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/evidency.io\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/evidency.io\/wp-content\/uploads\/2024\/09\/header-logo.svg\",\"contentUrl\":\"https:\/\/evidency.io\/wp-content\/uploads\/2024\/09\/header-logo.svg\",\"width\":275,\"height\":58,\"caption\":\"Evidency\"},\"image\":{\"@id\":\"https:\/\/evidency.io\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/evidency-io\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/evidency.io\/en\/#\/schema\/person\/1888314d58ec64690ef29f8594d44cbf\",\"name\":\"St\u00e9phane P\u00e8re\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/evidency.io\/en\/#\/schema\/person\/image\/d13b2274f4c8cd6020ef82e1cf02bac2\",\"url\":\"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/Stephane.webp\",\"contentUrl\":\"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/Stephane.webp\",\"caption\":\"St\u00e9phane P\u00e8re\"},\"description\":\"St\u00e9phane is the Managing Director of Evidency. Formerly the Chief Data Officer at The Economist Group, he has over 20 years of international experience in the technology and media sectors.\",\"sameAs\":[\"https:\/\/www.linkedin.com\/in\/stephanepere\/\"],\"url\":\"https:\/\/evidency.io\/author\/stephane-pere\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"AI Act: ensuring compliance as a high-risk AI system operator","description":"The AI Act introduces strict obligations for high-risk AI systems. Learn how trust-enabling technologies like qualified timestamping can support compliance and reduce the risk of sanctions.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/","og_locale":"en_US","og_type":"article","og_title":"AI Act: ensuring compliance as a high-risk AI system operator","og_description":"The AI Act introduces strict obligations for high-risk AI systems. Learn how trust-enabling technologies like qualified timestamping can support compliance and reduce the risk of sanctions.","og_url":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/","og_site_name":"Evidency","article_published_time":"2025-06-12T09:12:52+00:00","article_modified_time":"2026-02-05T13:23:12+00:00","og_image":[{"width":950,"height":500,"url":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp","type":"image\/webp"}],"author":"St\u00e9phane P\u00e8re","twitter_card":"summary_large_image","twitter_misc":{"Written by":"St\u00e9phane P\u00e8re","Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"NewsArticle","@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#article","isPartOf":{"@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/"},"author":{"name":"St\u00e9phane P\u00e8re","@id":"https:\/\/evidency.io\/en\/#\/schema\/person\/1888314d58ec64690ef29f8594d44cbf"},"headline":"AI Act: how high-risk AI operators can meet EU compliance requirements","datePublished":"2025-06-12T09:12:52+00:00","dateModified":"2026-02-05T13:23:12+00:00","mainEntityOfPage":{"@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/"},"wordCount":1322,"publisher":{"@id":"https:\/\/evidency.io\/en\/#organization"},"image":{"@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#primaryimage"},"thumbnailUrl":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp","articleSection":["Une","R\u00e9glementations","Horodatage \u00e9lectronique"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/","url":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/","name":"AI Act: ensuring compliance as a high-risk AI system operator","isPartOf":{"@id":"https:\/\/evidency.io\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#primaryimage"},"image":{"@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#primaryimage"},"thumbnailUrl":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp","datePublished":"2025-06-12T09:12:52+00:00","dateModified":"2026-02-05T13:23:12+00:00","description":"The AI Act introduces strict obligations for high-risk AI systems. 
Learn how trust-enabling technologies like qualified timestamping can support compliance and reduce the risk of sanctions.","breadcrumb":{"@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#primaryimage","url":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp","contentUrl":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/IA-Act.webp","width":950,"height":500,"caption":"IA Act"},{"@type":"BreadcrumbList","@id":"https:\/\/evidency.io\/en\/ia-act-eu-compliance-risks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Evidency","item":"https:\/\/evidency.io\/en\/"},{"@type":"ListItem","position":2,"name":"AI Act: how high-risk AI operators can meet EU compliance requirements"}]},{"@type":"WebSite","@id":"https:\/\/evidency.io\/en\/#website","url":"https:\/\/evidency.io\/en\/","name":"Evidency","description":"Sp\u00e9cialiste de la preuve 
num\u00e9rique","publisher":{"@id":"https:\/\/evidency.io\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/evidency.io\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/evidency.io\/en\/#organization","name":"Evidency","url":"https:\/\/evidency.io\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/evidency.io\/en\/#\/schema\/logo\/image\/","url":"https:\/\/evidency.io\/wp-content\/uploads\/2024\/09\/header-logo.svg","contentUrl":"https:\/\/evidency.io\/wp-content\/uploads\/2024\/09\/header-logo.svg","width":275,"height":58,"caption":"Evidency"},"image":{"@id":"https:\/\/evidency.io\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/evidency-io\/"]},{"@type":"Person","@id":"https:\/\/evidency.io\/en\/#\/schema\/person\/1888314d58ec64690ef29f8594d44cbf","name":"St\u00e9phane P\u00e8re","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/evidency.io\/en\/#\/schema\/person\/image\/d13b2274f4c8cd6020ef82e1cf02bac2","url":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/Stephane.webp","contentUrl":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/Stephane.webp","caption":"St\u00e9phane P\u00e8re"},"description":"St\u00e9phane is the Managing Director of Evidency. 
Formerly the Chief Data Officer at The Economist Group, he has over 20 years of international experience in the technology and media sectors.","sameAs":["https:\/\/www.linkedin.com\/in\/stephanepere\/"],"url":"https:\/\/evidency.io\/author\/stephane-pere\/"}]}},"modified_by":"Cl\u00e9a Guinaudeau","authors":[{"term_id":6457,"user_id":246879326,"is_guest":0,"slug":"stephane-pere","display_name":"St\u00e9phane P\u00e8re","avatar_url":{"url":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/Stephane-1.webp","url2x":"https:\/\/evidency.io\/wp-content\/uploads\/2026\/01\/Stephane-1.webp"},"author_category":"1","first_name":"St\u00e9phane","description_complete":"St\u00e9phane has double Master Degrees in Business &amp; Law, and a 20 year experience in technology, media and market intelligence. Before stepping into the role of Managing Director at Evidency, he served as Chief Data Officer at The Economist Group, where he oversaw and shaped data strategy, analytics, and digital transformation efforts across international operations.\r\nHaving worked across markets and cultures, St\u00e9phane is skilled at aligning diverse teams, product, engineering, legal, compliance, operations, around shared goals.","domaine_dexpertise":"<ul>\r\n \t<li>Digital transformation<\/li>\r\n \t<li>Regulation and compliance<\/li>\r\n \t<li>Data strategy<\/li>\r\n<\/ul>","last_name":"P\u00e8re","user_url":"","job_title":"Managing Director","linkedin":"https:\/\/www.linkedin.com\/in\/stephanepere\/","description":"St\u00e9phane is the Managing Director of Evidency. 
Formerly the Chief Data Officer at The Economist Group, he has over 20 years of international experience in the technology and media sectors."}],"_links":{"self":[{"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/posts\/5411","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/users\/246879326"}],"replies":[{"embeddable":true,"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/comments?post=5411"}],"version-history":[{"count":3,"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/posts\/5411\/revisions"}],"predecessor-version":[{"id":11311,"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/posts\/5411\/revisions\/11311"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/media\/10450"}],"wp:attachment":[{"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/media?parent=5411"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/categories?post=5411"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/tags?post=5411"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/evidency.io\/en\/wp-json\/wp\/v2\/ppma_author?post=5411"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}