
EU AI Act Deadlines — What's Required by August 2026

regulatory-update 7 min read Updated 2026-03-23

The EU AI Act is rolling out in phases. Some parts are already in force. The big one — high-risk AI system compliance — arrives August 2, 2026. That’s five months away. But there’s uncertainty: the European Commission’s Digital Omnibus proposal could push the deadline to December 2027, conditional on harmonised standards being published by then.

This article clarifies what’s actually required by August 2026, where the uncertainty lies, and how to think about timeline planning when you don’t know whether you’re working toward a five-month or a twenty-one-month deadline.

What’s Already in Force

February 2, 2025: Prohibited Practices Banned

Social scoring systems, AI that manipulates vulnerable groups, and real-time biometric mass surveillance were prohibited from February 2025. If you’re not doing any of these, this deadline has already passed. If you are, you need to stop immediately.

August 1, 2025: General-Purpose AI Transparency

General-purpose AI model providers (OpenAI, Anthropic, Google, Meta) must provide transparency documentation about their models: capabilities, known risks, how they were trained, testing results, and guardrails. This applies to the provider, not the deployer. If you’re using ChatGPT or Claude, OpenAI and Anthropic are responsible for this layer of compliance.

August 2, 2026 (or December 2, 2027 under Omnibus): High-Risk Systems Full Compliance

This is the big deadline. All high-risk AI systems must be fully compliant with EU AI Act requirements:

  • Conformity assessment completed and documented
  • Quality management system established and operating
  • Human oversight designated and procedures documented
  • Risk management system in place
  • Technical documentation complete
  • Training data governance documented
  • Incident reporting procedures established and operational
  • System registered in the EU AI database

This applies to AI used in hiring, credit, insurance, education, law enforcement, immigration, and critical infrastructure. It also applies to UK businesses deploying these systems if they affect EU individuals.

August 2, 2026: Limited-Risk Transparency Fully In Force

Chatbots, AI-generated content, and emotion recognition systems must have transparent disclosure mechanisms in place. If you’re running a chatbot that serves EU visitors, the disclosure that it’s AI must be implemented and working.
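As a concrete illustration, the disclosure can be as simple as a fixed message shown before the first bot reply. This is a minimal sketch under our own assumptions: the Act requires disclosure, but the function names and message text here are illustrative, not prescribed wording.

```python
# Minimal sketch of a limited-risk transparency wrapper: every chatbot
# session opens with an explicit AI disclosure before any model output.
# The message text and function names are illustrative; the Act requires
# disclosure but does not prescribe specific wording.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_session(first_bot_message: str) -> list[str]:
    """Return the opening messages for a chatbot session,
    with the AI disclosure shown before the first reply."""
    return [AI_DISCLOSURE, first_bot_message]

messages = start_session("Hi! How can I help today?")
assert messages[0] == AI_DISCLOSURE  # disclosure always comes first
```

The point of wrapping session start rather than individual replies is that the disclosure is guaranteed to appear before any AI output, which is the behaviour you need to be able to demonstrate.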

The August 2026 Deadline in Detail

What Triggers High-Risk Compliance

Any AI system used in these contexts must be compliant:

  • Recruitment and HR: CV screening, candidate ranking, interview analysis, performance monitoring, promotion decisions, dismissals
  • Credit and financial: Credit scoring, insurance underwriting
  • Education: Exam grading, course placement, predictive academic assessment
  • Law enforcement: Risk scoring, biometric identification
  • Immigration: Asylum assessment, eligibility determination
  • Critical infrastructure: Grid operation, traffic control, water system management

What “Compliant” Means

A high-risk system must meet all of these by August 2026:

  1. Conformity assessment: You’ve documented that the system meets the Act’s requirements
  2. Quality management: You have procedures for developing, deploying, monitoring, and improving the system
  3. Human oversight: Someone is designated to review outputs and has authority to override
  4. Risk assessment: You’ve identified potential harms and mitigation strategies
  5. Data governance: Training data is documented, curation decisions are recorded, bias testing is done
  6. Technical documentation: Complete description of the system, its purpose, specs, limitations, accuracy, testing
  7. Incident reporting: Serious incidents are identified and reported to authorities
  8. EU AI database registration: The system is registered before it goes live
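The eight requirements can be tracked per system as a simple readiness checklist. This sketch is illustrative: the requirement names mirror the list above, but the tracking structure is our own assumption, not anything the Act mandates.

```python
# Sketch of a per-system readiness tracker for the eight high-risk
# requirements listed above. Requirement names follow the article's
# list; the data structure itself is illustrative.

REQUIREMENTS = [
    "conformity_assessment",
    "quality_management",
    "human_oversight",
    "risk_assessment",
    "data_governance",
    "technical_documentation",
    "incident_reporting",
    "eu_database_registration",
]

def outstanding(status: dict[str, bool]) -> list[str]:
    """Return the requirements not yet complete for one system."""
    return [r for r in REQUIREMENTS if not status.get(r, False)]

# Hypothetical example: a CV-screening tool with two items done
cv_screener = {"risk_assessment": True, "human_oversight": True}
todo = outstanding(cv_screener)
print(f"{len(todo)} of {len(REQUIREMENTS)} requirements outstanding")
```

A list like this makes the "all of these" nature of the deadline concrete: a system with seven of eight items done is still non-compliant.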

Practical Impact by Sector

High impact — Recruitment: Any business using AI for CV screening, candidate ranking, or interview analysis must be compliant by August 2026. This includes recruitment agencies, in-house recruitment teams, and HR consulting firms. For most SMEs, this is unfinished work.

High impact — Financial services: Credit scoring and insurance underwriting systems must be compliant. Fintechs, banks, insurance brokers, any business making automated credit or insurance decisions using AI.

Moderate impact — Other sectors: If you’re using AI in education (student assessment), critical infrastructure (utilities), or a law enforcement context (predictive risk-scoring tools), compliance is required. For most SMEs, this applies to fewer systems.

Low impact — General business: If you use AI in customer service, marketing, or internal analytics (minimal-risk systems), August 2026 only affects transparency (which should already be in place). Compliance is simpler.

The Digital Omnibus Uncertainty

The European Commission’s Digital Omnibus proposal (published February 2026) includes an extension of the high-risk deadline from August 2026 to December 2027, “conditional on the availability of harmonised standards.”

Here’s what this means:

The reasoning: The EU AI Act requires compliance with detailed technical standards — how to conduct conformity assessments, what documentation is adequate, how to test for accuracy and bias, etc. These standards are being developed by European standardisation bodies (CEN/CENELEC). As of March 2026, the standards are still in draft. The Commission’s argument: it’s unfair to enforce full compliance by August 2026 if the technical standards that define “compliance” aren’t finished.

The condition: The extension only applies if harmonised standards are available by December 2027. If standards are delayed past December 2027, the extension doesn’t help you — you’d be non-compliant from August 2026 to December 2027 with no clear technical guidance on how to become compliant.

The uncertainty: The proposal requires approval by the European Parliament and Council. As of March 2026, it has not been approved. It could be approved, rejected, or modified. The deadline extension is not yet law.

The practical implications:

  • If you plan for August 2026 and the extension is approved, you finish early.
  • If you plan for December 2027 and the extension is rejected, you’re non-compliant from August 2026 to whenever you actually finish.
  • The second scenario carries more regulatory risk.

Limited-Risk Transparency: No Conditional Extension

The transparency requirements for limited-risk systems (chatbots, AI-generated content) have no extension in the Omnibus proposal. They apply in full from August 2, 2026 regardless of what happens to the high-risk deadline, so they should be implemented now.

What to Do Now

Planning assumptions: Plan for August 2026. Treat it as the real deadline. The Omnibus extension might not happen, and even if it does, you don’t lose anything by preparing early.

Timeline: Five months from March 2026 to August 2026 is tight for high-risk systems that have no compliance work in place. Start immediately if you have high-risk AI.

Priority order:

  1. Inventory high-risk systems. Do you have any AI used in hiring, credit, insurance, education, law enforcement, immigration, or critical infrastructure?
  2. For each high-risk system, start the compliance work:
    • Risk assessment (identify potential harms)
    • Technical documentation (describe the system)
    • Data governance review (how was training data curated?)
    • Human oversight procedures (designate oversight, document process)
    • Conformity assessment (you’ve done the above, now confirm it all meets requirements)
  3. For limited-risk systems, confirm transparency is in place. Chatbots disclosing they’re AI? AI-generated content labelled?
  4. Register high-risk systems in the EU AI database. This must happen before August 2, 2026.
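Step 1, the inventory, can be sketched as a simple triage over your list of systems. The context labels mirror the high-risk categories above; the helper itself is hypothetical and no substitute for legal classification.

```python
# Sketch of the inventory step: flag which systems fall into the
# high-risk contexts the article lists. The context labels and example
# systems are illustrative; real classification needs legal review.

HIGH_RISK_CONTEXTS = {
    "recruitment", "credit", "insurance", "education",
    "law_enforcement", "immigration", "critical_infrastructure",
}

def triage(systems: dict[str, str]) -> list[str]:
    """Return the names of systems used in a high-risk context,
    i.e. those needing the full compliance work described above."""
    return sorted(
        name for name, context in systems.items()
        if context in HIGH_RISK_CONTEXTS
    )

# Hypothetical two-system inventory
inventory = {
    "cv-screening-tool": "recruitment",     # high-risk: full compliance
    "support-chatbot": "customer_service",  # limited-risk: transparency only
}
print(triage(inventory))  # the systems to start compliance work on
```

Even a spreadsheet version of this triage is enough; what matters is that every system is assigned a context before the compliance work is scoped.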

What to Watch

Digital Omnibus vote: Track whether the European Parliament and Council approve the conditional extension. This affects your August vs. December timeline.

Harmonised standards publication: Even if the extension is approved, it’s conditional on standards being available. Monitor CEN/CENELEC publications for final AI Act technical standards.

Enforcement activity: The EU AI Office and member state competent authorities are ramping up. Early guidance and enforcement signals (fines, warnings) will emerge between now and August. Pay attention.

UK regulatory updates: The ICO, FCA, CMA, and other UK bodies are developing sector-specific AI guidance. These aren’t the EU AI Act, but they overlap. Monitor UK regulator updates.

The Practical Reality

For most SMEs using high-risk AI, August 2026 is uncomfortably soon. Five months is enough time to do the work if you start immediately and have the right expertise. It’s not enough time if you’re starting from scratch and doing everything without help.

Consider:

  • Do you have in-house expertise to document your AI systems, conduct risk assessments, and write technical documentation? If not, external support (consultants, legal review) is valuable.
  • Is your high-risk AI from a single vendor, or do you have multiple systems? Single systems are easier to document; multiple systems require more coordination.
  • How much work has already been done? If you’ve already thought about how the system is used and potential risks, documentation is faster. If the system was adopted with no governance framework, starting from zero is slower.

Waiting to see if the Omnibus extension passes is a gamble. Planning for August is prudent.

What’s Next

If you want to understand which of your AI systems are high-risk and what compliance requires, start with risk classification. If you want a detailed breakdown of what each compliance requirement means, see the checklist.

For a full assessment of your AI compliance exposure and a timeline that accounts for the August 2026 deadline (with contingency for Omnibus delay), Bartram AI screens your systems and delivers a prioritised roadmap aligned to the deadline.


Cross-References

For full EU AI Act overview and enforcement details, see EU AI Act explained. For how to classify your systems and understand what’s high-risk, see risk classification.

