
EU AI Act Compliance Timeline: What to Do by When


Jared Clark

March 13, 2026

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, and its phased implementation schedule means that compliance is not a single deadline—it is a rolling series of obligations that began in August 2024 and will continue through 2027. Missing any phase can expose your organization to fines of up to €35 million or 7% of global annual turnover, whichever is higher.

As someone who has guided 200+ regulated organizations through complex certification and compliance programs at Certify Consulting, I can tell you that the companies succeeding under the EU AI Act are the ones that started preparing 12–18 months before their relevant deadline—not 12 weeks. This article maps every critical date, identifies what you must have in place, and explains how to sequence your work intelligently.


Why the EU AI Act Compliance Timeline Is More Complex Than GDPR

The General Data Protection Regulation gave organizations a single May 25, 2018 enforcement date. The EU AI Act works differently. Regulation (EU) 2024/1689 entered into force on August 1, 2024, but enforcement obligations roll out across four distinct phases tied to AI risk classification. This means a company deploying a general-purpose AI model faces a different compliance clock than one deploying an AI system in a high-risk regulated sector like medical devices or credit scoring.

The phased structure is intentional—it gives industry time to build conformity assessment infrastructure—but it also creates a risk of compliance complacency. Organizations focus on the furthest deadline and miss the near-term ones that apply to them today.

The EU AI Act imposes fines of up to €35 million or 7% of global annual turnover for violations involving prohibited AI practices, making it the highest financial penalty regime for AI governance in any jurisdiction worldwide.


The Four Compliance Phases: Dates and Obligations at a Glance

Phase | Effective Date | What Applies
Phase 1 — Prohibited Practices | February 2, 2025 | Bans on unacceptable-risk AI systems (e.g., social scoring, real-time biometric surveillance in public spaces, subliminal manipulation)
Phase 2 — GPAI & Governance | August 2, 2025 | General-Purpose AI (GPAI) model obligations; AI literacy requirements for all providers and deployers; national competent authority designation
Phase 3 — High-Risk AI (Annex I) | August 2, 2026 | High-risk AI systems in regulated sectors (medical devices, machinery, vehicles, civil aviation) covered by existing EU product safety legislation
Phase 4 — High-Risk AI (Annex III) & Full Enforcement | August 2, 2027 | High-risk AI in Annex III sectors (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice); full penalties active

Note: Certain obligations for high-risk systems already placed on the market before the relevant deadline carry a transition period through August 2, 2029 for previously conformity-assessed products.


Phase 1 (February 2, 2025): Prohibited AI Practices — Are You Already Exposed?

The first enforcement milestone is already behind us. As of February 2, 2025, the following categories of AI practice are explicitly prohibited under EU AI Act Article 5:

  1. Subliminal or manipulative techniques that exploit psychological vulnerabilities
  2. Social scoring of natural persons (by public or private actors)
  3. Real-time remote biometric identification (RBI) in publicly accessible spaces by law enforcement (with narrow exceptions)
  4. Predictive policing that assesses an individual's risk of offending based solely on profiling or personality traits
  5. AI that infers emotions in workplace or educational settings (with exceptions)
  6. Untargeted scraping of facial images to build recognition databases
  7. Biometric categorization inferring sensitive attributes (race, political opinion, sexual orientation)

What you must have done by now: Conducted an AI inventory, classified each system against Article 5, and either ceased prohibited use or obtained legal confirmation of an applicable exception. If your organization has not done this, you cannot rule out that you are already in violation.

Practical action: Review customer-facing AI features (chatbots, recommendation engines, emotion detection in HR tools) against the Article 5 prohibition list. Engage legal counsel with EU AI Act expertise—and document the analysis, because demonstrating good-faith review matters in enforcement proceedings.
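If it helps to make the inventory-and-screening step concrete, here is a minimal sketch of how an Article 5 screening record could be captured for each system. The field names and flag labels are illustrative assumptions for this sketch, not terminology from the Act, and the legal analysis itself still belongs to counsel.

```python
from dataclasses import dataclass, field

# Illustrative flags loosely based on the Article 5 categories listed above.
# The names are assumptions for this sketch, not official terminology.
ARTICLE_5_FLAGS = {
    "subliminal_or_manipulative",
    "social_scoring",
    "predictive_policing_profiling",
    "realtime_rbi_public_spaces",
    "emotion_inference_work_or_education",
    "untargeted_facial_scraping",
    "biometric_categorisation_sensitive_attributes",
}

@dataclass
class Article5Screening:
    system_name: str
    owner: str
    flags_raised: set = field(default_factory=set)  # subset of ARTICLE_5_FLAGS
    exception_claimed: str | None = None            # cite the exception relied on, if any
    counsel_signoff: bool = False                   # legal confirmation obtained?

    def needs_escalation(self) -> bool:
        """Escalate if any Article 5 category is raised without a documented, signed-off exception."""
        return bool(self.flags_raised) and (self.exception_claimed is None or not self.counsel_signoff)

# Hypothetical example: an HR tool that infers candidate emotions during video interviews.
screening = Article5Screening(
    system_name="video-interview-scoring",
    owner="HR Technology",
    flags_raised={"emotion_inference_work_or_education"},
)
print(screening.needs_escalation())  # True -> route to legal review
```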


Phase 2 (August 2, 2025): GPAI Models and AI Literacy — The Near-Term Priority

The August 2025 deadline is where most organizations are currently under-prepared, particularly those using or deploying General-Purpose AI (GPAI) models—a category that includes foundation models like large language models (LLMs) and multimodal models.

GPAI Obligations (Articles 51–56)

Providers of GPAI models must:

  • Prepare and maintain technical documentation (Annex XI)
  • Establish a policy for compliance with EU copyright law regarding training data
  • Publish a sufficiently detailed summary of the content used for training (per the AI Office template)
  • Comply with transparency obligations toward downstream providers (Annex XII)
  • Cooperate with AI Office requests

Providers of GPAI models with systemic risk (currently defined as models trained with compute exceeding 10²⁵ FLOPs) face additional obligations: adversarial testing, incident reporting, cybersecurity measures, and energy efficiency reporting.
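To make the 10²⁵ FLOPs threshold tangible, here is a rough back-of-the-envelope sketch using the common "training compute ≈ 6 × parameters × training tokens" heuristic. Both the heuristic and the example figures are assumptions for illustration; the Act's threshold refers to the cumulative compute actually used in training, which a provider must assess from its own records.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for GPAI models with systemic risk

def approx_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token (common scaling heuristic)."""
    return 6 * parameters * training_tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = approx_training_flops(parameters=70e9, training_tokens=15e12)
print(f"~{flops:.2e} FLOPs")                    # ~6.30e+24 FLOPs
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)   # False -> below the presumption threshold
```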

AI Literacy Requirements (Article 4)

This obligation is frequently overlooked. Article 4 requires that all providers and deployers ensure their staff have sufficient AI literacy—meaning the skills and knowledge to understand and responsibly use AI systems. This applies to every organization using a commercial AI tool, not just AI developers.

Article 4 of the EU AI Act requires every deployer of an AI system—including companies using off-the-shelf AI tools—to ensure staff possess sufficient AI literacy, making workforce training a legal compliance obligation, not merely a best practice.

What you must have by August 2, 2025:
  • AI inventory completed and classified
  • GPAI model documentation prepared (if you are a GPAI provider)
  • AI literacy training program designed, delivered, and documented
  • Internal governance structure (AI governance board or equivalent) established
  • Policies and procedures for responsible AI use drafted

Timeline reality check: If you start a GPAI documentation project from scratch today, eight to twelve weeks is a realistic minimum for a well-resourced team. AI literacy programs require instructional design, delivery, and records—plan for six to ten weeks. Start both workstreams immediately.
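One simple way to pressure-test your own plan is to back project start dates off the deadline using those lead times. The sketch below does exactly that; the durations are the rough estimates from this article, not regulatory figures.

```python
from datetime import date, timedelta

PHASE_2_DEADLINE = date(2025, 8, 2)

# Rough lead times from the article above (assumptions, not regulatory requirements).
WORKSTREAM_WEEKS = {
    "GPAI technical documentation": 12,
    "AI literacy programme": 10,
}

for workstream, weeks in WORKSTREAM_WEEKS.items():
    latest_start = PHASE_2_DEADLINE - timedelta(weeks=weeks)
    print(f"{workstream}: start no later than {latest_start.isoformat()}")
```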


Phase 3 (August 2, 2026): High-Risk AI in Annex I Regulated Sectors

Annex I of the EU AI Act cross-references existing EU product safety legislation. AI systems embedded in products regulated under these frameworks—medical devices (MDR/IVDR), machinery (Machinery Regulation), radio equipment (RED), civil aviation (EASA regulations), motor vehicles, and others—become subject to the full high-risk AI requirements one year after Phase 2.

High-Risk AI System Obligations (Articles 8–25)

Requirement | Key Details
Risk Management System | Ongoing, iterative process per Article 9; documented throughout the lifecycle
Data Governance | Training, validation, and test data must meet quality criteria (Article 10)
Technical Documentation | Annex IV; must be kept updated for the system's lifetime
Logging and Record-Keeping | Automatic logging capability to ensure traceability (Article 12)
Transparency to Deployers | Instructions for use; system capabilities and limitations disclosed (Article 13)
Human Oversight | Design must enable effective human oversight measures (Article 14)
Accuracy, Robustness, Cybersecurity | Performance metrics declared; adversarial robustness addressed (Article 15)
Conformity Assessment | Internal assessment or third-party notified body review, depending on system type
EU Declaration of Conformity | Signed by the provider; retained for 10 years
EU Database Registration | Register in the EU AI Act public database before market placement
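For the logging and record-keeping row in particular, many teams benefit from standardizing on an append-only, structured event log early. The sketch below is one minimal way to do that; the event fields are assumptions chosen for traceability, not a schema prescribed by Article 12.

```python
import json
from datetime import datetime, timezone

def log_ai_event(log_path: str, system_id: str, event_type: str, details: dict) -> None:
    """Append one structured, timestamped event to a JSON-lines log file.

    The field set (system_id, event_type, details) is illustrative; align it
    with your own Article 12 traceability analysis.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,   # e.g. "inference", "human_override", "model_update"
        "details": details,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a human override of a credit-scoring recommendation.
log_ai_event(
    "ai_events.jsonl",
    system_id="credit-scoring-v3",
    event_type="human_override",
    details={"case_ref": "2026-00042", "reason": "income data outdated"},
)
```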

For organizations already maintaining ISO 13485 (medical devices QMS), ISO 9001, or similar quality management systems, approximately 60–70% of the documentation infrastructure for EU AI Act high-risk compliance is already in place. The gap is typically in AI-specific risk management, data governance documentation, and the technical file additions required by Annex IV.

What you must have by August 2, 2026:
  • High-risk AI systems identified and scoped
  • Risk management system per Article 9 implemented and documented
  • Technical documentation (Annex IV) completed
  • Conformity assessment pathway determined (internal vs. notified body)
  • EU Declaration of Conformity signed
  • EU database registration completed
  • Post-market monitoring plan established


Phase 4 (August 2, 2027): Full Enforcement — Annex III High-Risk AI

The final and most expansive phase covers AI systems in Annex III sectors. This is where the broadest range of enterprises—financial services, HR technology, healthcare operations, public services, and law enforcement—face binding obligations.

Annex III Sectors Subject to Full Enforcement in 2027

  • Biometrics: Remote biometric identification, emotion recognition, biometric categorization
  • Critical infrastructure: AI managing electricity, water, gas, road traffic, digital infrastructure
  • Education and vocational training: Admission decisions, assessment, monitoring students
  • Employment and workforce management: Recruitment, performance evaluation, task allocation
  • Essential private and public services: Credit scoring, life/health insurance risk assessment, emergency dispatch
  • Law enforcement: Crime prediction, polygraph-style tools, evidence reliability assessment
  • Migration and border control: Risk assessment, document examination
  • Administration of justice: AI assisting judicial authorities in researching and interpreting facts and the law

The EU AI Act's Annex III classification covers AI systems used in employment decisions—including CV screening and performance monitoring tools—making HR technology vendors and their enterprise customers subject to the full high-risk AI compliance regime by August 2, 2027.

What you must have by August 2, 2027:
  • All Annex III AI systems identified, classified, and documented
  • The same Article 8–25 obligations satisfied as for Annex I (see the Phase 3 table above)
  • Fundamental rights impact assessments completed (Article 27) — required for deployers in the public sector and in specific private-sector contexts
  • Deployer obligations met: human oversight, registration in the EU database, post-market monitoring, incident reporting to national authorities


How ISO 42001:2023 Accelerates EU AI Act Compliance

ISO 42001:2023—the international standard for AI management systems—is not mandated by the EU AI Act, but it is the most efficient compliance accelerator available to organizations today. The standard's structure maps directly to the EU AI Act's process-based obligations.

ISO 42001:2023 Clause | EU AI Act Obligation
Clause 4 (Context) | Article 9 risk management scope definition
Clause 6.1.2 (AI risk assessment) | Article 9 risk management process
Clause 7.5 (Documented information) | Annex IV technical documentation
Clause 8.4 (AI system impact assessment) | Article 27 fundamental rights impact assessment
Clause 9.1 (Monitoring and measurement) | Article 12 logging; Article 72 post-market monitoring
Clause 10 (Improvement) | Article 9's requirement for iterative risk management
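If you are building gap-analysis tooling, the mapping above can also be kept as a small machine-readable crosswalk so that evidence collected for one framework is automatically tagged against the other. A minimal sketch using the rows from the table; the dictionary structure and function names are assumptions for illustration.

```python
# Crosswalk of ISO/IEC 42001:2023 clauses to the EU AI Act obligations they support,
# taken from the mapping table above. Structure and naming are illustrative.
ISO42001_TO_EU_AI_ACT = {
    "Clause 4 (Context)": ["Article 9 risk management scope definition"],
    "Clause 6.1.2 (AI risk assessment)": ["Article 9 risk management process"],
    "Clause 7.5 (Documented information)": ["Annex IV technical documentation"],
    "Clause 8.4 (AI system impact assessment)": ["Article 27 fundamental rights impact assessment"],
    "Clause 9.1 (Monitoring and measurement)": ["Article 12 logging", "Article 72 post-market monitoring"],
    "Clause 10 (Improvement)": ["Article 9 iterative risk management"],
}

def eu_ai_act_gaps(evidence_by_clause: dict[str, bool]) -> list[str]:
    """Return the EU AI Act obligations for which no ISO 42001 evidence is yet in place."""
    gaps = []
    for clause, obligations in ISO42001_TO_EU_AI_ACT.items():
        if not evidence_by_clause.get(clause, False):
            gaps.extend(obligations)
    return gaps

# Example: evidence exists for every clause except Clause 8.4.
evidence = {clause: clause != "Clause 8.4 (AI system impact assessment)" for clause in ISO42001_TO_EU_AI_ACT}
print(eu_ai_act_gaps(evidence))  # ['Article 27 fundamental rights impact assessment']
```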

Organizations that achieve ISO 42001 certification before their EU AI Act compliance deadline arrive at that deadline with 70–80% of required documentation already structured. Certification also provides an auditable, third-party-verified evidence trail that regulators are increasingly recognizing as a strong indicator of good-faith compliance.

Learn more about ISO 42001 certification readiness and how it supports your regulatory strategy.


Your 18-Month EU AI Act Compliance Roadmap

Regardless of which phase is most immediately relevant to your organization, the following sequencing reflects what I recommend to every client at Certify Consulting:

Months 1–3: Foundation
  • Conduct a comprehensive AI inventory (every system, every use case, every vendor)
  • Classify each system against EU AI Act risk tiers — prohibited, high-risk Annex I, high-risk Annex III, limited risk, minimal risk (a minimal classification sketch follows this roadmap)
  • Assign ownership and establish an AI governance committee
  • Deliver initial AI literacy training

Months 4–6: Gap Analysis and Documentation
  • Perform a gap analysis against applicable obligations by tier
  • Draft risk management procedures per Article 9
  • Begin technical documentation for high-risk systems
  • Evaluate ISO 42001 certification as a compliance pathway

Months 7–12: Implementation
  • Implement data governance controls per Article 10
  • Build human oversight mechanisms into AI system design
  • Conduct conformity assessments (internal or notified body)
  • Draft EU Declarations of Conformity
  • Register high-risk systems in the EU database

Months 13–18: Verification and Continuous Improvement
  • Conduct internal audits of the AI governance system
  • Establish post-market monitoring and incident reporting protocols
  • Complete fundamental rights impact assessments for Annex III systems
  • Pursue the ISO 42001 certification audit if applicable
  • Brief senior leadership and the board on compliance status
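As referenced in the Months 1–3 step, here is a minimal sketch of how the risk-tier classification from the inventory could be recorded and triaged against the phase deadlines. The tier names follow the Act's structure and the deadlines come from the phase table earlier in this article, but the triage logic is deliberately simplified and illustrative; it is no substitute for a legal classification analysis.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH_RISK_ANNEX_I = "high-risk (Annex I product legislation)"
    HIGH_RISK_ANNEX_III = "high-risk (Annex III use case)"
    LIMITED_RISK = "limited risk (transparency obligations)"
    MINIMAL_RISK = "minimal risk"

# Deadlines from the phase table above; tiers not listed face no mandatory conformity assessment.
TIER_DEADLINES = {
    RiskTier.PROHIBITED: "February 2, 2025 (already in force)",
    RiskTier.HIGH_RISK_ANNEX_I: "August 2, 2026",
    RiskTier.HIGH_RISK_ANNEX_III: "August 2, 2027",
}

def triage(inventory: list[tuple[str, RiskTier]]) -> None:
    """Print each system's tier and the deadline it maps to, most urgent first."""
    order = list(RiskTier)
    for name, tier in sorted(inventory, key=lambda item: order.index(item[1])):
        deadline = TIER_DEADLINES.get(tier, "no mandatory conformity assessment")
        print(f"{name:<30} {tier.value:<45} -> {deadline}")

# Hypothetical inventory entries.
triage([
    ("cv-screening-tool", RiskTier.HIGH_RISK_ANNEX_III),
    ("marketing-copy-assistant", RiskTier.MINIMAL_RISK),
    ("radiology-triage-module", RiskTier.HIGH_RISK_ANNEX_I),
])
```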


Key Statistics on EU AI Act Readiness

  • A 2024 KPMG survey found that only 34% of European businesses had begun structured EU AI Act compliance preparations as of mid-2024, despite the February 2025 deadline for prohibited practice bans.
  • The European Commission estimates that 85% of AI systems currently deployed in the EU fall into the minimal-risk or limited-risk tiers and face no mandatory conformity assessment requirements, meaning the heaviest burdens concentrate on a targeted subset of AI deployments.
  • According to the EU AI Act's own impact assessment, compliance costs for a high-risk AI system are estimated at €6,000–€7,000 for initial conformity assessment and €5,000–€6,000 annually for post-market monitoring—figures that experienced practitioners note are conservative for complex systems in regulated industries.
  • The AI Office, established within the European Commission, has authority to impose fines of up to €15 million or 3% of global turnover for GPAI model violations, with the higher €35 million threshold reserved for prohibited practice violations.

Deployer vs. Provider: Who Owns Which Obligation?

One of the most common points of confusion I encounter is the distinction between provider (the entity that develops or places an AI system on the market) and deployer (the entity that uses a third-party AI system under its own authority).

Obligation | Provider | Deployer
Technical documentation (Annex IV) | ✅ Required | ❌ Not required
Conformity assessment | ✅ Required | ❌ Not required
EU Declaration of Conformity | ✅ Required | ❌ Not required
EU database registration | ✅ Required | ✅ Required (Annex III, public sector)
AI literacy (Article 4) | ✅ Required | ✅ Required
Human oversight measures | ✅ Design obligation (Article 14) | ✅ Implementation obligation (Article 26)
Fundamental rights impact assessment (Article 27) | ❌ Not required | ✅ Required (specific contexts)
Post-market monitoring data | ✅ Full system responsibility | ✅ Cooperate with the provider
Incident reporting | ✅ To authorities | ✅ Notify the provider

If your organization uses a vendor-provided AI system in a high-risk context, you are a deployer and you carry real obligations—including human oversight implementation, staff training, incident reporting, and (in some contexts) fundamental rights impact assessments. Vendor contracts must allocate responsibilities explicitly.
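One practical way to make that allocation concrete is to turn the deployer column of the table above into a checklist that procurement and legal review for every high-risk AI vendor engagement. A minimal sketch; the item wording is an illustrative condensation of the table, not contract language.

```python
# Deployer-side items to verify in vendor contracts and internal processes,
# condensed from the provider/deployer table above. Wording is illustrative.
DEPLOYER_CHECKLIST = [
    "AI literacy training for staff using the system (Article 4)",
    "Human oversight measures implemented in day-to-day operation (Article 26)",
    "Fundamental rights impact assessment where required (Article 27)",
    "EU database registration where required (Annex III, public sector)",
    "Process to notify the provider of serious incidents",
    "Cooperation with the provider's post-market monitoring",
]

def open_items(completed: set[str]) -> list[str]:
    """Return checklist items not yet evidenced for a given vendor engagement."""
    return [item for item in DEPLOYER_CHECKLIST if item not in completed]

# Example: only the AI literacy item has been evidenced so far.
print(open_items({"AI literacy training for staff using the system (Article 4)"}))
```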

For more on managing AI vendor relationships in regulated industries, visit the Certify Consulting AI governance resource hub.

You may also find our article on AI governance frameworks for regulated industries useful for structuring your internal compliance program.


FAQ: EU AI Act Compliance Timeline

Q: Does the EU AI Act apply to non-EU companies?
A: Yes. The EU AI Act applies to any provider placing AI systems on the EU market, any provider whose AI system output is used in the EU, and any deployer established in the EU—regardless of where the AI developer is headquartered. U.S., UK, and APAC companies with EU customers or operations are in scope.

Q: What is the penalty for missing an EU AI Act compliance deadline?
A: Penalties vary by violation type. Prohibited practice violations: up to €35 million or 7% of global annual turnover. Violations of other obligations (e.g., high-risk AI requirements, GPAI obligations): up to €15 million or 3% of global turnover. Providing incorrect or misleading information to authorities: up to €7.5 million or 1% of global turnover. For SMEs, caps default to the lower of the absolute or percentage figures.
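Because the caps are expressed as the higher of an absolute amount or a turnover percentage (and, for SMEs, the lower of the two), it can help to sanity-check exposure numerically. A minimal sketch based on the figures in this answer; it is an arithmetic illustration, not guidance on how a fine would actually be set.

```python
def penalty_cap(turnover_eur: float, absolute_cap_eur: float, pct_cap: float, is_sme: bool = False) -> float:
    """Upper bound of the fine: higher of the two caps, or lower of the two for SMEs."""
    pct_amount = turnover_eur * pct_cap
    return min(absolute_cap_eur, pct_amount) if is_sme else max(absolute_cap_eur, pct_amount)

turnover = 2_000_000_000  # hypothetical €2bn global annual turnover

# Prohibited-practice violation: up to €35m or 7% of turnover, whichever is higher.
print(penalty_cap(turnover, 35_000_000, 0.07))                  # 140000000.0
# Other obligations (high-risk, GPAI): up to €15m or 3%.
print(penalty_cap(turnover, 15_000_000, 0.03))                  # 60000000.0
# SME with €10m turnover, prohibited-practice cap: lower of €35m or 7%.
print(penalty_cap(10_000_000, 35_000_000, 0.07, is_sme=True))   # 700000.0
```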

Q: My company uses ChatGPT or Microsoft Copilot. Do I have GPAI obligations?
A: As a deployer using a GPAI-based product, you do not face GPAI provider obligations—those fall on OpenAI and Microsoft. However, you do have deployer obligations: AI literacy for staff, human oversight implementation, and, if you use the tool in a high-risk context defined by Annex III, potentially additional requirements. You should also review whether your vendor's GPAI usage terms satisfy Article 53's downstream transparency requirements.

Q: Is ISO 42001 certification required for EU AI Act compliance?
A: ISO 42001 certification is not legally mandated by the EU AI Act. However, it provides a structured, auditable framework that maps directly to the Act's process-based obligations and significantly reduces compliance effort for high-risk AI system documentation. It is increasingly recognized by regulators and notified bodies as evidence of systematic AI governance.

Q: When do SMEs need to comply with the EU AI Act?
A: SMEs follow the same phase deadlines as larger organizations, but the EU AI Act includes proportionality provisions—requirements to consider SME capacity when setting fees for notified body services, and regulatory sandboxes specifically available to SMEs. Financial penalties also scale to global turnover, which limits absolute penalty exposure. However, there is no blanket SME exemption from compliance obligations.


Getting Your Organization Compliance-Ready

The EU AI Act's phased timeline creates both a challenge and an opportunity. Organizations that treat it as a rolling program—rather than waiting for a single hard deadline—will avoid the scramble that characterized late-stage GDPR preparation and the significant audit findings that followed.

At Certify Consulting, we have maintained a 100% first-time audit pass rate across 200+ client engagements in regulated industries by building compliance programs that are structured, documented, and proportionate to risk. The EU AI Act is complex, but it is navigable—if you start with the right framework and the right sequence.

If you are uncertain about where your AI systems fall in the risk classification hierarchy, or you need to assess your current gap against Phase 2 or Phase 3 obligations, a structured readiness assessment is the most efficient first step. Visit Certify Consulting to learn how we can support your EU AI Act compliance program.


Last updated: 2026-03-13


Jared Clark

Certification Consultant

Jared Clark is the founder of Certify Consulting and helps organizations achieve and maintain compliance with international standards and regulatory requirements.