Myths · 7 min read · Updated 2026-03-23

“We’re a UK Company, So the EU AI Act Doesn’t Apply” — And 5 Other AI Compliance Myths

Myths about AI compliance are rampant because awareness is so low. Most SMEs haven’t thought deeply about whether their AI usage triggers any regulatory obligations. Into that gap step half-truths and convenient assumptions. The six myths below are the ones we hear most often from UK businesses. Each one is wrong in ways that matter.

Myth 1: “We’re a UK Company, So the EU AI Act Doesn’t Apply to Us”

Reality: If your AI system affects anyone in the EU — customers, employees, job candidates — the EU AI Act applies. The trigger is market effect, not company registration.

The EU AI Act is extraterritorial. It applies to organisations outside the EU that place an AI system on the EU market or put one into service there, and to providers and deployers whose AI system’s output is used in the EU. For a UK business, the practical test is straightforward: if your website serves EU customers through an AI system, the Act applies. If your hiring process includes EU-based candidates, it applies. If your chatbot interacts with EU users, it applies.

The UK has not adopted the EU AI Act, and it will not have an equivalent Act of its own in 2026. But that doesn’t leave UK businesses unregulated. It leaves them facing EU regulation on exported services and UK regulation on domestic operations. A UK recruitment agency screening EU job candidates faces the EU AI Act for that activity. The same agency screening UK candidates faces UK GDPR automated decision-making rules. Both apply.

What to do instead: Assess whether any of your AI systems affect EU individuals. If yes, the EU AI Act applies. Check what tier they’re classified in. Plan for compliance.


Myth 2: “We Just Use ChatGPT — We’re Not Really Using AI”

Reality: Using an AI tool in your business operations makes you a “deployer” under the EU AI Act. You don’t need to build AI to face compliance obligations.

This misconception is widespread because “using ChatGPT” feels passive — you’re a customer of a vendor, not a business deploying an AI system. But under the Act’s definitions, you’re a deployer. You’re taking an AI model (ChatGPT, built by OpenAI) and using it in your business context. That’s deployment.

The Act distinguishes between “providers” (who build or supply AI systems) and “deployers” (who use them). OpenAI is the provider; you are the deployer. Each has different obligations. OpenAI meets provider requirements (technical documentation, transparency about GPAI capabilities, safety measures). You meet deployer requirements (using the AI within its intended purpose, monitoring its outputs, and ensuring your use complies with EU and UK law).

If you’re using ChatGPT to generate marketing copy, you’re a deployer of a limited-risk system (content generation), and transparency rules may require you to disclose that published content is AI-generated. If you’re using ChatGPT to help screen job applications, you’re a deployer of a high-risk system: you must ensure human oversight and keep documented processes.

What to do instead: Inventory every AI tool you use, including ChatGPT, Claude, Perplexity, Midjourney, and the rest. Record what you use each one for. Identify the risk tier of each use. Ensure appropriate controls are in place.
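
To make the inventory concrete, here is a minimal sketch in Python of what it might look like. The tools, use cases, and tier assignments are illustrative assumptions on our part (the fictional “Acme ATS” included), not classifications prescribed by the Act.

```python
# A minimal, illustrative AI inventory. The tools, use cases, and tier
# assignments below are examples we chose, not classifications from the Act.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    HIGH = "high-risk"        # e.g. hiring, credit scoring
    LIMITED = "limited-risk"  # e.g. chatbots, content generation
    MINIMAL = "minimal-risk"  # e.g. spam filtering, spellcheck


@dataclass
class AISystem:
    tool: str          # the product you use (you are the deployer)
    provider: str      # who supplies it (they carry separate provider duties)
    use_case: str      # what your business uses it for; this drives the tier
    affects_eu: bool   # touches EU customers, employees, or candidates?
    tier: RiskTier


inventory = [
    AISystem("ChatGPT", "OpenAI", "drafting marketing copy", True, RiskTier.LIMITED),
    AISystem("ChatGPT", "OpenAI", "screening job applications", True, RiskTier.HIGH),
    AISystem("Acme ATS", "Acme Corp", "ranking candidates", True, RiskTier.HIGH),
]

# Note the same tool appears twice: the risk tier follows the use case,
# not the product.
for system in inventory:
    if system.affects_eu:
        print(f"{system.tool} / {system.use_case}: {system.tier.value} (EU AI Act in scope)")
```

A spreadsheet with the same columns works just as well. The structure matters more than the tooling: one row per use case, because the risk tier follows the use case, not the product.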


Myth 3: “AI Compliance Is Only for Big Tech Companies”

Reality: The EU AI Act applies based on risk level, not company size. A 15-person recruitment agency using AI screening faces the same high-risk obligations as a multinational tech company.

SMEs do benefit from reduced penalty caps (fines scale with turnover rather than fixed minimums) and from some procedural simplifications. But substantive compliance requirements — conformity assessment, documentation, human oversight, risk management — apply regardless of size.

In fact, SMEs using high-risk AI are often more exposed than large companies. A large company is more likely to have compliance infrastructure, legal resources, and governance experience. An SME that adopted AI hiring tools because they were useful, without any governance framework, faces the same regulatory requirements with fewer resources to meet them.

What to do instead: If you’re an SME using high-risk AI, take compliance seriously. Penalties scale with your turnover, but the operational burden is the same. Starting now, before enforcement action becomes common, is a strategic advantage.


Myth 4: “The Deadline Is August 2026 and That’s Final”

Reality: The European Commission’s Digital Omnibus proposal could extend the high-risk compliance deadline to December 2027. But this is a conditional extension, not yet law, and comes with substantial uncertainty.

The Digital Omnibus proposal (published by the Commission in November 2025) includes a conditional extension of the high-risk deadline from August 2026 to December 2027, linked to the availability of harmonised standards. The logic: if the EU standards bodies haven’t yet published the detailed harmonised standards needed for compliance, it’s unfair to enforce full compliance from August 2026. So push the deadline to December 2027, conditional on the standards being in place.

This sounds sensible in theory. In practice, it creates uncertainty. The proposal requires approval by the European Parliament and Council — it’s not yet law. A business that plans for December 2027 and then sees enforcement begin in August 2026 has made a strategic mistake. A business that plans for August 2026 and then gets the extension has simply finished early.

The extension also doesn’t apply to limited-risk systems (chatbots, content generation). Those transparency obligations are already in force.

What to do instead: Plan for August 2026. That carries the least slippage risk. Monitor whether the Digital Omnibus is adopted by Parliament and Council. If it is, you’ve planned conservatively and gained breathing room. If it isn’t, you’re ahead.


Myth 5: “Our AI Doesn’t Make Decisions — It Just Makes Recommendations”

Reality: The EU AI Act covers AI that “assists” human decision-making, not only fully automated decisions. If your AI ranks, scores, or recommends and a human typically follows the AI’s output, the system is likely in scope.

This distinction matters because many SMEs draw a line between “our AI decides” (which sounds risky, so they avoid saying it) and “our AI recommends” (which sounds safer because a human is involved). Under the Act, that line doesn’t hold.

Hiring AI is classified as high-risk whether it makes the decision or recommends a shortlist. A recruitment AI that narrows 500 applicants to a shortlist of 20 is making a consequential decision about the 480 who didn’t make it, even if a human ultimately decides who to interview. Similarly, a credit scoring system that assigns a “risk score” a loan officer typically follows is subject to high-risk requirements, not because the AI makes the final decision (a human does) but because the AI’s recommendation heavily influences it.

What to do instead: Don’t assume that because a human is involved, your AI is low-risk. Assess the actual impact the AI has on the decision. If it narrows options, ranks candidates, scores risk, or recommends actions that humans typically follow, treat it as a decision-support system and classify accordingly.


Myth 6: “Our AI Provider Handles Compliance — We Don’t Need to Worry”

Reality: Providers and deployers have separate obligations. Your AI provider’s compliance does not equal your compliance. You must handle deployment-side compliance yourself.

OpenAI publishes a usage policy. Anthropic publishes an acceptable use policy. Google publishes responsible AI principles. These are provider-side commitments: statements about how the model was built, its capabilities, known risks, and which uses the provider does or doesn’t support.

You, as a deployer, must comply with different obligations: using the AI within the provider’s intended purpose, monitoring its outputs, ensuring it complies with European and UK law, documenting your use, and managing human oversight. The provider’s compliance doesn’t relieve you of yours.

A concrete example: OpenAI’s usage policy restricts using ChatGPT for high-stakes automated decisions, such as hiring, without human review. If you use it to automatically screen CVs and reject candidates with no human review, you’ve violated OpenAI’s usage policy, and you’ve likely violated the EU AI Act’s high-risk requirements too. OpenAI’s compliance with provider rules doesn’t protect you.

What to do instead: Review your AI provider agreements. Understand what the provider commits to and what’s left to you. Ensure you’re using the AI within its intended purpose. Document your deployer-side compliance — human oversight, monitoring, risk management, incident response.


What to Do Instead

The path forward is the same for all these myths: assess, classify, understand your obligations, and prepare.

  1. List your AI systems. Include ChatGPT, Claude, your ATS, recommendation engines, everything.
  2. Classify each by risk tier. Is it high-risk? Limited-risk? Minimal-risk?
  3. Understand what compliance means for your tier. High-risk is intensive. Limited-risk requires transparency. Minimal-risk requires standard GDPR and sector-specific compliance.
  4. Audit your current state. Are you compliant? What’s missing? (A rough way to structure this check is sketched after this list.)
  5. Create an action plan. What needs to be done, by when, and by whom?
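
For step 4, one simple way to structure the gap check is to pair each risk tier with the controls you expect to see and flag whatever is missing. The sketch below does this under our own simplified control lists; they illustrate the Act’s themes (human oversight, risk management, documentation, monitoring, transparency), not an authoritative checklist.

```python
# A rough gap-audit sketch. The control lists are simplified illustrations
# of the Act's themes, not an exhaustive or authoritative checklist.

EXPECTED_CONTROLS = {
    "high-risk": {"human oversight", "risk management", "documentation", "monitoring"},
    "limited-risk": {"AI disclosure to users"},
    "minimal-risk": {"GDPR baseline"},
}

# What is actually in place today, per system (illustrative values).
current_state = {
    "Acme ATS / ranking candidates": {
        "tier": "high-risk",
        "controls": {"monitoring"},
    },
    "ChatGPT / drafting marketing copy": {
        "tier": "limited-risk",
        "controls": set(),
    },
}

for name, state in current_state.items():
    missing = EXPECTED_CONTROLS[state["tier"]] - state["controls"]
    verdict = "looks complete" if not missing else "missing: " + ", ".join(sorted(missing))
    print(f"{name} [{state['tier']}]: {verdict}")
```

Every “missing” item a check like this surfaces becomes a line in your step-5 action plan: a task, an owner, and a date.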

If you want guidance on all of this — classification, compliance assessment, and an action plan — Bartram AI screens your AI usage against regulatory requirements and delivers a prioritised roadmap to August 2026.


Cross-References

For risk classification details, see how to classify your AI systems. For specific compliance requirements by tier, see the checklist. For concrete examples of limited-risk systems in practice, see chatbot compliance.


Want to check your compliance?

Find out where you stand — and get a prioritised action plan.

Screen your AI compliance →