EU AI Act — What UK Businesses Need to Know (Even If the UK Hasn’t Adopted It)
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. It applies to any organisation that places an AI system on the EU market, provides an AI-powered service to EU customers, or whose AI system outputs are used in the EU. The UK government has not enacted its own AI Act, and there’s no indication one is coming in 2026. But the EU Act reaches across borders — and if your business has any EU presence, it probably applies to you.
The high-risk deadline is August 2026 — or December 2027 if the European Commission’s Digital Omnibus proposal is adopted. Either way, the window for compliance is now. Most UK businesses using AI have no idea they’re in scope.
What the EU AI Act Is
The EU AI Act classifies AI systems by risk level and imposes obligations based on risk:
Unacceptable Risk (Banned): Social scoring systems, AI that manipulates vulnerable groups, real-time mass biometric surveillance in public spaces (with limited law enforcement exceptions). These are prohibited outright. If your business uses AI for any of these purposes with EU customers, you’re breaking EU law.
High Risk: AI in areas where failures significantly harm people. This includes recruitment and HR (CV screening, candidate ranking, interview analysis, performance monitoring, promotion decisions), credit scoring, insurance underwriting, education assessment, law enforcement, and critical infrastructure. High-risk systems face the full compliance burden: documented risk management, quality management systems, human oversight protocols, technical documentation, data governance, incident reporting, and registration in the EU’s AI database. This is substantial work.
Limited Risk: AI that interacts with people (chatbots, virtual assistants) or generates content (deepfakes, AI-generated text/images) or performs emotion recognition. These face transparency obligations only — users must be told they’re interacting with AI or viewing AI-generated content. This is manageable: a clear disclaimer or notification.
Minimal Risk: Everything else. No specific obligations under the Act.
The UK government’s approach is fundamentally different. Rather than a single AI Act, the UK applies existing regulators’ frameworks to AI within their sectors. The ICO regulates automated decision-making under UK GDPR. The FCA regulates AI in financial services. The EHRC can investigate AI-driven discrimination. This is less comprehensive and less prescriptive than the EU Act, but it’s already in force.
Who’s in Scope
You’re in scope if you:
- Place an AI system on the EU market (sell an AI product to EU customers)
- Provide an AI-powered service to EU customers (a SaaS tool used by EU users)
- Deploy an AI system whose output is used in the EU (you use AI that affects EU individuals)
Practical test: Does your AI affect anyone in the EU?
- You have a website with a chatbot that serves EU visitors — you’re a deployer of limited-risk AI
- You use AI to screen job applications from EU-based candidates — you’re a deployer of high-risk AI
- You have a SaaS product with built-in AI features used by EU customers — you’re in scope, likely as a provider rather than a deployer, which carries heavier obligations
- You use ChatGPT for internal analysis only, no EU exposure — you’re probably not in scope (UK domestic GDPR rules apply instead)
Common AI Uses and Their Risk Classification
High Risk:
- AI for CV screening and candidate ranking
- AI for hiring decisions or performance assessment
- Automated credit or insurance decisions
- AI for loan underwriting
- AI used in education assessment
Limited Risk:
- Chatbots and virtual assistants
- AI-generated content (text, images, deepfakes)
- Emotion recognition systems
- Dynamic pricing or recommendation engines (context-dependent; some may be high-risk)
Minimal Risk:
- AI for internal data analysis
- AI for marketing content generation
- Predictive analytics with no automated decision
- AI for automation or process optimisation
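As a rough triage aid, the tiers above can be sketched as a lookup. This is an illustrative sketch built from the categories in this article, not a substitute for proper legal classification — the use-case names are assumptions:

```python
# Hypothetical triage helper based on the tiers described above.
# Real classification under the EU AI Act is context-dependent and
# needs legal review; this only mirrors the article's examples.

RISK_TIERS = {
    "high": {"cv_screening", "candidate_ranking", "hiring_decision",
             "performance_assessment", "credit_decision",
             "insurance_underwriting", "loan_underwriting",
             "education_assessment"},
    "limited": {"chatbot", "virtual_assistant", "content_generation",
                "deepfake", "emotion_recognition"},
}

def classify(use_case: str) -> str:
    """Return the indicative risk tier for a named AI use case."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"  # everything else: no specific obligations

print(classify("cv_screening"))   # high
print(classify("chatbot"))        # limited
print(classify("data_analysis"))  # minimal
```

Edge cases like dynamic pricing or recommendation engines do not fit a static lookup — those need case-by-case assessment.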
What the Act Requires
For High-Risk Systems:
- Risk Management: Document risks the system poses, how you mitigate them, how you’ll monitor ongoing performance, and how you’ll handle failures.
- Data Governance: Document the data used to train and deploy the system. Explain quality standards, bias testing, and how you handle data limitations.
- Technical Documentation: System design, intended purpose, training data, known limitations, accuracy metrics, and failure modes.
- Human Oversight: Designate someone responsible for monitoring the AI’s outputs, with clear authority and procedures to override or stop the system.
- Accuracy and Performance: Demonstrate the system meets accuracy and performance standards appropriate for its use case. For hiring AI, what’s the false-positive rate? Can qualified candidates be missed?
- Incident Reporting: Log and report significant incidents — cases where the system caused harm, made seriously wrong decisions, or failed.
- Registration: Register your high-risk AI system in the EU’s AI database. This is a public register.
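To give the incident-reporting obligation a concrete shape, here is a minimal sketch of an incident record. The field names are illustrative assumptions — the Act does not prescribe this schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncident:
    # Illustrative fields only; not a format prescribed by the Act.
    system_name: str           # which AI system was involved
    description: str           # what happened
    harm_caused: bool          # did the incident cause harm?
    decision_overridden: bool  # did a human override the output?
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

log: list[AIIncident] = []

def report_incident(incident: AIIncident) -> str:
    """Append to the internal log and return a JSON payload for reporting."""
    log.append(incident)
    return json.dumps(asdict(incident))

payload = report_incident(AIIncident(
    system_name="cv-screener",
    description="Qualified candidate auto-rejected due to a parsing error",
    harm_caused=True,
    decision_overridden=True,
))
```

Even a log this simple forces the questions regulators will ask: what failed, who was affected, and who intervened.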
For Limited-Risk Systems:
Transparency only. Users must be informed: “This is AI-generated text” or “You’re chatting with an AI.” Simple disclaimers satisfy the requirement.
The Deadline Question
August 2026 (if Digital Omnibus not adopted): Full compliance required for high-risk systems. This is a tight deadline. If you’re behind, you have 4–5 months to get documentation, risk management, and human oversight procedures in place.
December 2027 (if Digital Omnibus adopted): Extended deadline, conditional on harmonised standards being available. The European Commission’s Digital Omnibus proposal, published in February 2026, proposes extending this deadline to give more time for standards and guidance to be published. As of March 2026, this requires approval from the European Parliament and Council — it’s not yet law.
Prudent approach: Plan for August 2026. If the Omnibus passes and the deadline extends, you’ve gained time. If it doesn’t pass, you’re ready.
What UK Businesses Face That Others Don’t
The UK hasn’t adopted an AI Act. But the trend across UK regulators is clear:
- The ICO enforces automated decision-making rights under UK GDPR Article 22. If you use AI for decisions that have legal or similarly significant effects on individuals, they have the right to human review. The ICO can fine up to £17.5M or 4% of global annual turnover, whichever is higher.
- The Equality and Human Rights Commission can investigate AI-driven discrimination under the Equality Act. Discrimination claims are uncapped.
- Sector regulators (FCA in finance, Ofcom in digital services, CMA in competition) issue AI guidance and can enforce within their remits.
This is less prescriptive than the EU AI Act but covers most of the same ground. A business compliant with the EU Act is almost certainly compliant with UK requirements.
What to Do Now
1. Inventory Your AI Systems
List every AI tool your business uses:
- Obvious AI: ChatGPT, Claude, chatbots, content generators
- Embedded AI: Applicant tracking systems with AI screening, CRM with AI features, email with AI categorisation, video conferencing with background blur
- API-based AI: OpenAI, Anthropic, Google, Microsoft integrations
- Third-party AI: Tools you use that include AI functionality
Many SMEs have more AI deployed than they realise. The ATS that “just screens CVs” may have AI built in. The CRM that “makes suggestions” may include AI.
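One lightweight way to run this inventory is a structured list you can sort and filter. The columns and tool names below are an assumed minimal set, not an official template:

```python
# Hypothetical inventory rows; tool names and exposure flags are examples.
inventory = [
    {"tool": "ChatGPT",          "category": "obvious",     "eu_exposure": False},
    {"tool": "ATS CV screening", "category": "embedded",    "eu_exposure": True},
    {"tool": "OpenAI API",       "category": "api",         "eu_exposure": True},
    {"tool": "CRM suggestions",  "category": "third-party", "eu_exposure": False},
]

# Systems with EU exposure are the ones to classify first.
in_scope = [row["tool"] for row in inventory if row["eu_exposure"]]
print(in_scope)  # ['ATS CV screening', 'OpenAI API']
```

A spreadsheet does the same job; what matters is that every tool, including embedded AI, gets a row and an exposure flag.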
2. Classify by Risk Tier
For each AI system, determine: is it high-risk, limited-risk, or minimal-risk? Use the classification criteria above.
High-risk is the priority. If you have high-risk AI and EU customers, you must start documentation now.
3. For High-Risk Systems: Begin Documentation
Start creating:
- System description and intended purpose
- Risk assessment (what could go wrong?)
- Data governance documentation
- Accuracy and performance metrics
- Bias testing results (if available)
- Human oversight procedures
- Incident response process
This is time-consuming but essential.
4. For Limited-Risk Systems: Implement Transparency
If you have chatbots or AI-generated content with EU exposure, add clear disclosures: “This chat is powered by AI” or “This image was generated by AI.”
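In practice the disclosure can be as simple as prefixing the bot’s first reply. A minimal sketch, assuming a hypothetical `generate_reply` backend:

```python
AI_DISCLOSURE = "This chat is powered by AI."

def generate_reply(user_message: str) -> str:
    # Placeholder for your actual chatbot backend (OpenAI, Anthropic, etc.).
    return f"Echo: {user_message}"

def disclosed_reply(user_message: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n{reply}" if first_turn else reply

print(disclosed_reply("Hello", first_turn=True))
```

The same pattern applies to generated content: attach the label at the point of output, not buried in terms and conditions.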
5. Review Your AI Provider Contracts
Check your terms with OpenAI, Anthropic, Google, Microsoft, or other AI providers. Do they support your deployer obligations? Are you using the service within its intended purpose? Does the provider commit to supporting compliance?
6. Consider EU AI Exposure
Are your EU customers significant? Is AI core to your business, or peripheral? This affects urgency. A UK SaaS business with 30% of customers in the EU using AI for core functionality faces higher urgency than a logistics company using ChatGPT for occasional customer service.
Timeline and Next Steps
Now (March 2026): Start the inventory and classification.
By May 2026: Complete documentation for any high-risk systems (if August deadline applies).
August 2026: High-risk deadline (unless extended).
Monitor the Digital Omnibus: Watch for EU Parliament and Council votes. If the Omnibus passes, the deadline extends to December 2027 — a significant extension.
Key Takeaways
The EU AI Act applies to UK businesses with EU customers. High-risk AI — especially AI used in hiring, credit decisions, or automated employee decisions — faces substantial compliance requirements. The deadline is August 2026, though it may extend to December 2027 if the Digital Omnibus is adopted.
Most UK SMEs using AI have not assessed themselves against the Act. This is a gap. The compliance work is manageable if you start now but becomes chaotic if you wait.
The good news: you don’t need to stop using AI. You just need to document why you’re using it, what risks it poses, how you’re managing those risks, and what human oversight exists.
For a structured assessment of your AI systems against the EU AI Act, Bartram AI screens your AI usage and delivers a gap analysis with a clear action plan and timeline.
The AI Act is coming. Prepare now.