EU AI Act — What UK Businesses Need to Know (Even If the UK Hasn’t Adopted It)
The EU AI Act applies to UK businesses. Not in some distant future if the UK aligns with EU law, but now, if your AI system’s outputs affect people in the EU. And most UK SMEs don’t know they’re in scope.
This isn’t a UK law. The UK has not enacted its own AI Act and none is expected before 2026. But the EU AI Act is extraterritorial: if you offer an AI-powered service to EU customers, deploy an AI system whose outputs are used by people in the EU, or place an AI product on the EU market, the Act applies to you, whether as a “provider” or a “deployer” (more on that distinction below). There are no exemptions for UK companies. The trigger is market effect, not company registration.
For UK businesses, this creates a peculiar but real compliance obligation. The EU Act sets standards you must meet. Your UK regulator — the ICO, FCA, CMA, or EHRC, depending on your sector — won’t directly enforce the EU Act, but will enforce overlapping UK frameworks (UK GDPR, sector-specific guidance) that point to similar outcomes. The result: you face dual pressure. Plan for the EU Act; remain compliant with UK law.
What the EU AI Act Is
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. It entered into force in August 2024 and applies in stages: prohibited practices were banned from February 2025. Transparency obligations for general-purpose AI models apply from August 2025. High-risk system requirements — the provisions with the broadest business impact — are targeted for August 2026, though the European Commission’s Digital Omnibus proposal could extend this to December 2027.
The Act takes a risk-based approach. It doesn’t prohibit AI. It prohibits specific uses (social scoring systems, manipulative AI that exploits vulnerable groups, real-time mass biometric surveillance). It creates oversight obligations for high-risk uses (hiring, credit, insurance, education, law enforcement). It requires transparency for limited-risk systems (chatbots, content generators). Everything else faces minimal restrictions.
The Four Risk Tiers
Unacceptable risk (banned). Social scoring systems that rank people based on behaviour or personal characteristics. AI that manipulates vulnerable groups. Real-time biometric identification in public spaces (with narrow law enforcement exceptions). These are prohibited outright, and the ban has applied since February 2025: if you’re using AI in any of these ways, you’re already breaking the law, with no phase-in period left.
High risk. AI deployed in areas where failures could significantly harm people. This includes:
- Hiring and HR: CV screening, candidate ranking, interview analysis, performance monitoring, promotion decisions, dismissals
- Credit and financial decisions: credit scoring, insurance underwriting
- Education and training: essay grading, course placement, exam proctoring
- Law enforcement and justice: predicting offender risk, facial identification
- Migration and asylum management
- Critical infrastructure operation
High-risk systems face the full compliance burden. You must conduct conformity assessments, maintain quality management systems, ensure human oversight, manage risks, govern data, document everything, report incidents, and register in the EU’s AI database. This is intensive. Most SMEs using AI aren’t compliant with high-risk requirements.
Limited risk. AI systems that interact with people or generate content:
- Chatbots and virtual assistants
- AI-generated content (deepfakes, AI-written text, synthetic images)
- Emotion recognition systems
- Biometric categorisation (age, gender estimation)
Limited-risk systems face transparency obligations. If someone is interacting with a chatbot, they must be told it’s AI. If they’re viewing AI-generated content, they must know it’s synthetic. No technical documentation required, no conformity assessments — just be transparent.
Minimal risk. Everything else. Spam detection, recommendation engines, internal tools, AI that supports human decision-making without making decisions itself. No specific Act obligations, though general GDPR and sector-specific rules still apply.
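For readers who keep an internal systems register, the tiering above can be expressed as a simple triage script. Here is a minimal sketch in Python; the category lists and the risk_tier function are our own simplification for triage purposes, not the Act’s legal definitions:

```python
# Illustrative triage sketch. The category sets below are a rough
# simplification of the Act's annexes, not a legal classification tool.

PROHIBITED_USES = {"social scoring", "manipulation of vulnerable groups",
                   "real-time public biometric identification"}
HIGH_RISK_USES = {"hiring", "credit", "insurance", "education",
                  "law enforcement", "migration", "critical infrastructure"}
LIMITED_RISK_TRAITS = {"interacts with people", "generates content",
                       "emotion recognition", "biometric categorisation"}

def risk_tier(use_case: str, traits: set[str]) -> str:
    """Return a rough EU AI Act risk tier for an AI system."""
    if use_case in PROHIBITED_USES:
        return "unacceptable (banned)"
    if use_case in HIGH_RISK_USES:
        return "high"
    if traits & LIMITED_RISK_TRAITS:
        return "limited (transparency obligations)"
    return "minimal"

print(risk_tier("hiring", {"generates content"}))         # high
print(risk_tier("marketing", {"interacts with people"}))  # limited (transparency obligations)
```

Note the ordering: a hiring tool that also chats with candidates is high-risk, not limited-risk. The most demanding applicable tier wins.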
Who’s in Scope
You’re in scope if you “place an AI system on the EU market” or “provide a service based on an AI system” to EU individuals. This covers:
- A UK e-commerce company with a website accessible to EU customers
- A UK recruitment agency screening candidates in the EU
- A UK marketing agency using AI to personalise content for EU visitors
- A UK SaaS company serving EU business customers
- A UK consultancy using AI to advise EU clients
You don’t need to be targeting the EU intentionally. If EU customers can buy from you, or your AI system’s outputs reach people in the EU, the Act can apply, even if your only “presence” is an English-language website accessible from Europe.
You’re a “deployer” if you use AI systems in your business. You’re a “provider” if you build or supply AI systems. Most SMEs are deployers. Deployers and providers have different obligations, and your AI provider’s compliance doesn’t relieve you of your own obligations.
High-Risk: The Compliance-Intensive Tier
If your AI system is classified as high-risk, the following obligations apply. Several fall formally on the provider rather than the deployer, but as noted above, your provider’s compliance doesn’t relieve you of your own duties:
- Conduct a conformity assessment. Document that the system complies with the Act’s requirements. This is not certification by a third party (though third-party assessment is available); it’s your responsibility to demonstrate compliance.
- Maintain a quality management system. Processes for managing the system’s development, deployment, monitoring, and incident response. Documentation of your governance.
- Ensure human oversight. Designate a person responsible for monitoring the system’s outputs, with the authority to override or stop it. Human-in-the-loop governance, not fire-and-forget automation.
- Conduct a risk assessment. Identify what could go wrong, who could be harmed, and how you’ll prevent or mitigate those harms.
- Govern your training data. Document data provenance, curation, testing, and validation. Ensure accuracy and prevent bias where relevant. For hiring systems, this is particularly stringent.
- Create technical documentation. System description, intended purpose, technical specifications, data used, known limitations, accuracy metrics, human oversight mechanisms, testing protocols. Detailed, thorough documentation that could be reviewed by regulators.
- Report incidents. If the system causes serious harm, you must report it to relevant authorities.
- Register the system in the EU AI database. High-risk systems must be registered before deployment.
For a 10-person recruitment agency using an AI ATS (Applicant Tracking System), these requirements are not trivial. Most SMEs in this position have none of these controls in place. The system was adopted because it was useful, not because a governance framework was planned.
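A structured per-system record is the cheapest first step towards the documentation obligations above. Here is a minimal sketch, assuming a Python-based register; the field names are our own invention mirroring the items listed, not an official template:

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemRecord:
    """Minimal per-system compliance record. Fields loosely mirror the
    documentation items listed above; this is not an official template."""
    system_name: str
    intended_purpose: str
    provider: str
    oversight_owner: str  # named person with authority to override or stop the system
    known_limitations: list[str] = field(default_factory=list)
    accuracy_metrics: dict[str, float] = field(default_factory=dict)
    data_provenance_notes: str = ""
    registered_in_eu_database: bool = False
    incident_log: list[str] = field(default_factory=list)

# Example entry for the recruitment agency scenario above:
ats = HighRiskSystemRecord(
    system_name="ATS CV screening",
    intended_purpose="Rank inbound applications for human review",
    provider="Example ATS vendor",
    oversight_owner="Head of Talent",
)
```

Even this skeleton forces the two questions regulators will ask first: what is the system for, and who is accountable for overseeing it.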
Limited-Risk: The Transparency Requirement
If your AI system is limited-risk (chatbots, content generation), you must disclose that it’s AI. That’s it. No technical documentation, no conformity assessments, no human oversight requirement. Just transparency.
This is much simpler than high-risk but easy to get wrong. A chatbot that doesn’t tell visitors “you’re talking to an AI” is non-compliant. An article generated by Claude and published to inform the public without disclosing the AI involvement can be non-compliant too. The disclosure must be clear and easily understood: “powered by AI” or “AI-generated content” in plain language, not buried in fine print.
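For a chatbot, the simplest robust approach is to build the notice into the conversation itself. A minimal sketch; the wording and the open_chat_session helper are illustrative, not language prescribed by the Act:

```python
# Plain-language disclosure shown before any bot message in every session.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def open_chat_session(first_bot_message: str) -> list[str]:
    """Prepend the AI disclosure to the start of every chat session."""
    return [AI_DISCLOSURE, first_bot_message]

for line in open_chat_session("Hi! How can I help today?"):
    print(line)
```

Making the disclosure the first message of every session, rather than a footer link, means it cannot be missed and cannot be silently dropped by a redesign.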
Enforcement and Penalties
Penalties are steep. Up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations: the prohibited practices. Up to €15 million or 3% for most other breaches, including non-compliance with high-risk requirements. Up to €7.5 million or 1% for supplying false or misleading information to authorities.
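Because each cap is “whichever is higher”, the fixed euro figure dominates for smaller firms. A quick illustration of the arithmetic (the penalty_cap helper is ours, and it shows the standard rule only; SME caps differ, as noted below):

```python
def penalty_cap(turnover_eur: float, fixed_eur: float, pct_percent: float) -> float:
    """Standard EU AI Act fine cap: the higher of a fixed amount or a
    percentage of global annual turnover. Illustrative helper only."""
    return max(float(fixed_eur), turnover_eur * pct_percent / 100)

# Prohibited-practice tier (€35m or 7%) for a firm with €10m global turnover:
print(penalty_cap(10_000_000, 35_000_000, 7))     # 35000000.0 (fixed figure dominates)
# The same tier for a firm with €1bn global turnover:
print(penalty_cap(1_000_000_000, 35_000_000, 7))  # 70000000.0 (7% dominates)
```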
SMEs and startups get reduced penalty caps (for them, the lower of the fixed amount and the percentage applies), but the exposure remains significant. And enforcement is only just starting. The EU AI Office oversees general-purpose models. Each EU member state is designating a national competent authority. First enforcement actions are expected from late 2026 or 2027.
For UK businesses, the immediate risk isn’t a €35 million fine. It’s (a) a customer complaint that triggers an investigation by an EU member state’s competent authority, (b) loss of EU market access if your AI systems don’t meet the Act’s requirements, or (c) reputational damage from being identified as non-compliant, particularly in hiring or consumer-facing contexts.
UK Regulators: The Domestic Pressure
The UK has not adopted the EU AI Act. But UK regulators are applying existing frameworks to AI:
- The ICO enforces UK GDPR Article 22, which gives individuals the right not to be subject to solely automated decisions with legal or significant effects. Automated hiring, credit decisions, and insurance underwriting all trigger this.
- The EHRC investigates AI-driven discrimination under the Equality Act. An AI hiring system that screens out candidates on the basis of protected characteristics is discrimination, regardless of the EU Act.
- The FCA regulates AI in financial services.
- The CMA is developing AI guidance for consumer protection and competition law.
There is no single UK AI enforcement body. But regulatory pressure is happening. An AI system that’s non-compliant with the EU Act is often also non-compliant with UK law on automated decision-making or discrimination. The two frameworks point to similar outcomes.
The Compliance Gap
The awareness gap is enormous. A 2025 government survey found 68% of UK businesses are using at least one AI tool. Fewer than 10% have assessed their usage against any regulatory framework. The gap between adoption and governance is the widest of any compliance area.
Common blindspots:
- “We just use ChatGPT” — doesn’t feel like an AI system you’re deploying, but it is. You’re the deployer of a system built on a general-purpose AI model, and transparency obligations can attach to how you use its outputs.
- “Our ATS includes AI screening” — many applicant tracking systems now include AI screening by default. Recruiters often don’t realise the system they’ve been using for two years is now making automated HR decisions classified as high-risk.
- “We integrated an API, so we’re not really using AI” — you’re a deployer. You may not have built the model, but you’re using it in your business, and you’re responsible for deployer obligations.
- “Our provider handles all this” — providers and deployers have different obligations. Your provider (OpenAI, Google, Anthropic) handles provider-side compliance. You handle deployment-side compliance: human oversight, transparency, monitoring, use within intended purpose.
What’s Required by August 2026
August 2026 is the target deadline for high-risk compliance (subject to the Digital Omnibus extension possibility). By then:
- All high-risk AI systems must be compliant with quality management, conformity assessment, documentation, and human oversight requirements
- Limited-risk systems must have transparency mechanisms in place
- The EU AI database must be populated with registered high-risk systems
There’s uncertainty here. The European Commission’s Digital Omnibus proposal includes a conditional extension to December 2027, linked to the availability of harmonised standards. But this is not yet law — it requires European Parliament and Council approval. Prudent businesses plan for August 2026 and monitor the Omnibus progress.
What to Do Now
- Inventory your AI systems. Not just obvious ones like chatbots. Include embedded AI in your SaaS tools, ATS screening, CRM recommendation engines, marketing automation, dynamic pricing, any API integration with an AI provider. (A minimal register sketch follows this list.)
- Classify each system by risk tier. Use the framework in this article. Is it used in hiring, credit, insurance, education, law enforcement, or critical infrastructure (high risk)? Does it interact with people or generate content (limited risk)? Or is it minimal risk?
- Assess EU market exposure. For each system, does it affect EU individuals? Website chatbots serving EU visitors? Hiring tools screening EU candidates? Recommendation engines for EU consumers?
- For high-risk systems, start documentation. This is time-intensive. System description, intended purpose, risk assessment, data governance, human oversight protocol, accuracy metrics, limitations. Begin now.
- For limited-risk systems, ensure transparency. Implement clear disclosures. Test that they work.
- Check your AI provider agreements. Review contracts for compliance commitments, support for your deployer obligations, and whether you’re using the AI within its intended purpose.
- Monitor the Digital Omnibus. Track whether the high-risk deadline extension is adopted. Plan for August 2026 but be prepared for flexibility.
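Pulling the first three steps together, even a spreadsheet-grade register is enough to start. Here is a minimal sketch that writes one out; the columns and example rows are illustrative, not prescribed by the Act:

```python
import csv

# Illustrative AI-system register: one row per system, covering the
# inventory, risk-tier, and EU-exposure steps above. Columns are our own.
systems = [
    {"system": "Website chatbot", "embedded_in": "Marketing site",
     "tier": "limited", "eu_exposure": True, "action": "Add AI disclosure"},
    {"system": "ATS CV screening", "embedded_in": "Recruitment SaaS",
     "tier": "high", "eu_exposure": True, "action": "Start documentation"},
    {"system": "Spam filter", "embedded_in": "Email platform",
     "tier": "minimal", "eu_exposure": True, "action": "None (monitor)"},
]

with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=systems[0].keys())
    writer.writeheader()
    writer.writerows(systems)
```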
The window between now and August 2026 is when getting ahead creates breathing room. Waiting means rushing.
Cross-References
For automated decision-making under UK GDPR, see GDPR compliance guidance. For AI in hiring, see AI compliance for HR and risk classification. For broader digital compliance, see the Digital overview hub.